[pymvpa] FW: What does a classification accuracy that is significantly lower than chance level mean?
meng.liang at hotmail.co.uk
Sat Nov 10 19:19:19 UTC 2012
Thanks very much for your reply! Please see below for details.
> > I'm running MVPA on some fMRI data (four different stimuli, say A, B, C
> > and D; six runs in each subject) to see whether the BOLD signals from a
> > given ROI can successfully predict the type of the stimulus. The MVPA
> > (leave-one-run-out cross-validation) was performed on each subject for
> > each two-way classification task. In a particular classification task (say
> > classification A vs. B), in some subjects, the classification accuracy was
> > (almost) significantly LOWER than the chance level (somewhere between 0.2
> > and 0.4).
> depending on the number of trials and the cross-validation scheme, even
> values of 0 could come up by chance ;-) but indeed should not be 'significant'
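As an aside, the point that extreme accuracies (even 0) can occur by chance with few test trials is easy to simulate. This is a hypothetical sketch, not part of the original exchange: it assumes a chance-level binary classifier, so the number of correct test trials is binomial(n, 0.5).

```python
import numpy as np

rng = np.random.default_rng(1)

# For a chance-level binary classifier, the number of correct test trials
# is binomial(n, 0.5); with few trials, extreme accuracies (even 0) occur.
mins = {}
for n in (4, 16, 96):
    accs = rng.binomial(n, 0.5, 100_000) / n
    mins[n] = accs.min()
    print(f"{n:3d} test trials: min accuracy over 100k simulations = {mins[n]:.3f}")
```

With only 4 test trials an accuracy of 0 shows up routinely; with 96 trials (as in the design below) it essentially never does, though the spread around 50% is still substantial.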
> > What could be the reason for a significantly-lower-than-chance-level
> > accuracy?
> and how significant is this 'significantly LOWER'?
The significance level was assessed by a P value obtained from 10,000 permutations. Permutation was done within each subject, by randomly reassigning stimulus labels to the trials (the number of trials under each label was kept balanced; there were 8 trials per condition in each run, and six runs in total). The P value was calculated as the proportion of random permutations in which the resulting classification accuracy was higher than the actual classification accuracy obtained with the correct labels (for example, if none of the 10,000 random permutations led to a classification accuracy higher than the actual one, the P value would be 0). In this way, in 5 out of 14 subjects, the P values were greater than 0.95. In other words, in these 5 subjects the actual classification accuracy fell near the end of the left tail of the null distribution (the null distribution is bell-shaped, centred around 50%). In the other 9 subjects, the actual classification accuracies were near or above chance level.
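The P value computation described above can be sketched as follows. This is a minimal numpy sketch, not the original analysis code: the observed accuracy and the simulated bell-shaped null distribution are made-up stand-ins; in the real analysis each null accuracy would come from re-running the full leave-one-run-out cross-validation with shuffled (but balanced) labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins (assumed values for illustration only): in the real analysis
# each null accuracy comes from re-running the cross-validation with
# shuffled, balanced stimulus labels.
actual_acc = 0.30                          # observed below-chance accuracy
null_accs = rng.normal(0.5, 0.06, 10_000)  # bell-shaped null around 50%

# P value exactly as described: fraction of permutations whose accuracy
# is higher than the actual accuracy.  p > 0.95 means the actual accuracy
# lies in the left tail of the null distribution.
p = np.mean(null_accs > actual_acc)
print(f"p = {p:.4f}")
```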
> details of # trials/cross-validation?
There were 8 trials per condition in each run, and six runs in total. Leave-one-run-out cross-validation was performed, that is, the classifier (a linear SVM) was trained on the data from five runs and tested on the remaining run (the same procedure was repeated six times, each time using a different run as the test set).
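In scikit-learn terms (a hypothetical sketch with random data standing in for the real fMRI features, not the original PyMVPA code), the scheme looks like this; with random features the mean accuracy should hover around chance (0.5):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)

# Hypothetical stand-in data: 6 runs x 16 trials (8 per condition),
# 50 voxels of random noise, so true accuracy is chance level.
n_runs, trials_per_run, n_voxels = 6, 16, 50
X = rng.normal(size=(n_runs * trials_per_run, n_voxels))
y = np.tile(np.repeat([0, 1], 8), n_runs)            # 8 trials per condition
runs = np.repeat(np.arange(n_runs), trials_per_run)  # run index per trial

# Leave-one-run-out: train on 5 runs, test on the held-out run, 6 folds.
scores = cross_val_score(LinearSVC(), X, y, groups=runs, cv=LeaveOneGroupOut())
print(scores.mean())
```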
> > The P value was obtained from 10,000 permutations.
> is that permutations within the subject which in the end showed
> significant below-chance accuracy? how were the permutations done?
I hope the reply above provides enough detail on how the permutation was done. Please let me know if anything is unclear.
> > But the
> > accuracies of all other classifications look fine in all subjects.
> fine means all above chance or still distributed around chance?
By 'fine' I mean the classification accuracy was around chance level (i.e. not far from it, whether slightly lower or higher) or above it. To me, an accuracy around or above chance level makes more sense than one significantly lower than chance level.