[pymvpa] What is the value of using errorfx when using Cross validation?

Nick Oosterhof n.n.oosterhof at googlemail.com
Fri Feb 27 09:46:03 UTC 2015

On 25 Feb 2015, at 16:55, gal star <gal.star3051 at gmail.com> wrote:

>> < Python Code for selecting only '0' chunk for train and '1' for test >
>> Can you provide this code please?
> int_train = numpy.array([l in [0] for l in fds.sa.chunks])
> int_test = numpy.array([l in [1] for l in fds.sa.chunks])
> train = fds[int_train]
> test = fds[int_test]
>> clf.train(train)
>> print clf.predict(test.samples)
>> How exactly do you determine the standard deviation among classification accuracies?
> I am running this script k times (each time, a different part of the data is the '1' test chunk).

If I understand correctly, you could have just one script and have a for-loop over the folds, something like:

for fold in xrange(nfolds):

assuming the folds are in the range 0..(nfolds-1).
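Such a loop could be sketched in plain NumPy, independent of PyMVPA. This is a minimal stand-in: the chunk values, target labels, and predictions below are made up for illustration, and a real script would train the classifier on the non-held-out chunks inside the loop instead of using precomputed predictions:

```python
import numpy as np

chunks = np.array([0, 0, 1, 1, 2, 2])    # one chunk value per sample
targets = np.array([0, 1, 0, 1, 0, 1])   # true labels
predicted = np.array([0, 1, 0, 0, 0, 1]) # stand-in for classifier output

nfolds = len(np.unique(chunks))
accuracies = []
for fold in range(nfolds):
    test_mask = chunks == fold           # hold out one chunk per fold
    # a real script would do: clf.train(fds[~test_mask]) and then
    # score clf.predict(fds[test_mask].samples) against the targets
    acc = np.mean(predicted[test_mask] == targets[test_mask])
    accuracies.append(acc)

mean_acc = np.mean(accuracies)           # average accuracy over folds
std_acc = np.std(accuracies)             # spread across folds
```

This collects one accuracy per fold in a single run, so the mean and standard deviation come out of one script rather than k separate runs.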

> Each time I get an accuracy result; then I average those k results and calculate the standard deviation.

With the above idea, do you get identical results?
>> Also, the topic of your email is about errorfx, but I didn’t see you using that function anywhere. Could you clarify?
> Yes, I'm performing manual cross-validation, but I've seen that
> if I use the CrossValidation class there is an errorfx parameter.
> I'm trying to understand what it contributes and how I can use it manually.

errorfx is the function used to score each fold: it takes the predicted and target labels and returns, for example, the fraction of samples whose predicted label matches the target label.
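To use the same idea manually, note that an errorfx is just a function of (predicted, targets) returning a scalar. The two functions below are simplified stand-ins written here for illustration; they mimic the behaviour of PyMVPA's mean_mismatch_error and mean_match_accuracy rather than being the library's own implementations:

```python
import numpy as np

def mean_mismatch_error(predicted, targets):
    # fraction of samples whose predicted label differs from the target
    return np.mean(np.asarray(predicted) != np.asarray(targets))

def mean_match_accuracy(predicted, targets):
    # fraction of samples whose predicted label matches the target
    return np.mean(np.asarray(predicted) == np.asarray(targets))
```

In a CrossValidation-based script you would pass such a function as the errorfx argument, e.g. something like CrossValidation(clf, NFoldPartitioner(), errorfx=mean_match_accuracy), so each fold is scored as an accuracy rather than an error rate. In a manual loop you can simply call it on your predictions and targets per fold.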

More information about the Pkg-ExpPsy-PyMVPA mailing list