[pymvpa] What is the value of using errorfx when using Cross validation?
Nick Oosterhof
n.n.oosterhof at googlemail.com
Wed Feb 25 15:43:55 UTC 2015
On 25 Feb 2015, at 16:36, gal star <gal.star3051 at gmail.com> wrote:
> Here is the minimal running example:
>
> fds=fmri_dataset(samples='4D_scans.nii.gz')
> zscore(fds, param_est=('targets', ['control']))
> mask = numpy.array([l in ['class A', 'class B'] for l in fds.sa.targets])
> fds = fds[mask]
>
> clf = FeatureSelectionClassifier(
>     LinearCSVMC(),
>     SensitivityBasedFeatureSelection(
>         OneWayAnova(),
>         FixedNElementTailSelector(1000, tail='upper', mode='select')))
>
> nfold = NFoldPartitioner(attr='chunks')
>
> < Python Code for selecting only '0' chunk for train and '1' for test>
Can you provide this code please?
> clf.train(train)
> print clf.predict(test.samples)
How exactly do you determine the standard deviation among classification accuracies?
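For context, a standard deviation over classification accuracies only makes sense if you have several accuracy estimates, e.g. one per cross-validation fold. A minimal plain-Python sketch (the accuracy values below are made up for illustration, not taken from your data):

```python
from statistics import mean, stdev

# Hypothetical per-fold accuracies from an n-fold cross-validation;
# these numbers are invented for illustration only.
fold_accuracies = [0.75, 0.80, 0.70, 0.85]

# Mean and (sample) standard deviation across folds.
print("mean accuracy:", mean(fold_accuracies))       # prints 0.775
print("std deviation:", round(stdev(fold_accuracies), 4))
```

Note that with a single fixed split (train on chunk '0', test on chunk '1') you obtain only one accuracy, so a standard deviation across folds is undefined.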
Also, the subject of your email asks about errorfx, but I don't see that function used anywhere in your example. Could you clarify?
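Conceptually, an errorfx in PyMVPA's CrossValidation maps a fold's predictions and true targets to a single error value. A plain-Python sketch of that idea (this mimics mean_mismatch_error-style behavior; it is an illustration, not PyMVPA's actual implementation):

```python
def mean_mismatch_error(predictions, targets):
    """Fraction of samples where the prediction differs from the target.

    Illustrative stand-in for an errorfx: it reduces a fold's
    (predictions, targets) pair to one scalar error value.
    """
    mismatches = sum(p != t for p, t in zip(predictions, targets))
    return mismatches / len(targets)

predicted = ['class A', 'class B', 'class A', 'class B']
actual    = ['class A', 'class A', 'class A', 'class B']
print(mean_mismatch_error(predicted, actual))  # prints 0.25
```

When such a function is passed as errorfx to CrossValidation, the resulting dataset holds one such error per fold, which is also what you would aggregate to get a mean and standard deviation.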
More information about the Pkg-ExpPsy-PyMVPA mailing list