[pymvpa] Interpreting mis-classification

Matthew Cieslak mattcieslak at gmail.com
Fri Sep 11 13:08:31 UTC 2009

Hi fellow PyMVPA users,

I have a non-software-related question for you all. Imagine a scenario where
there are N runs in a scanning session and a searchlight is used to compute
transfer error from an N-fold cross-validation over all voxels. If there are
only two categories you are trying to classify, how would you interpret large,
spatially contiguous clusters of voxels performing significantly
below chance (around 20% correct) in the results? Could it be that
there is something changing in the data in relation to the number of times
the subject has seen examples of a category? If the mis-classification could be
caused by an across-run repetition-suppression-type effect, would re-running
the searchlights with an odd-even split, and checking whether these voxels
return to chance, be a legitimate way to show there is meaning in the
mis-classification of my SVMs?
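As a side note, one quick way to check whether a cluster really is "significantly below chance" (rather than just a low-accuracy fluke) is a binomial test against the 50% null for a two-class problem. This is a minimal pure-Python sketch, not PyMVPA API; the trial counts are hypothetical example numbers:

```python
# Minimal sketch (hypothetical numbers, not PyMVPA code): is an observed
# accuracy significantly BELOW the 50% chance level of a two-class problem?
from math import comb

def binom_cdf(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p): chance of k or fewer correct trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n_trials = 100    # total test trials pooled across the N CV folds (assumed)
n_correct = 20    # e.g. the ~20%-correct voxels described above

# Left-tail p-value: probability of doing this badly by chance alone.
p_low = binom_cdf(n_correct, n_trials)
print(p_low)  # tiny value -> performance is significantly below chance
```

If the left-tail p-value survives correction over all searchlight centers, the below-chance cluster is carrying real (if inverted) information, which would support looking for an explanation like the repetition-suppression effect.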

I haven't been able to find any neuroimaging papers that address or report
below-chance performance; does anyone know of one? Or would it be better
to search the machine-learning literature?

I hope to hear your thoughts on this.

