[pymvpa] Peculiar case of cross-validation performance

Vadim Axelrod axel.vadim at gmail.com
Wed Feb 1 13:12:19 UTC 2017

Hi all,

I have encountered a rather peculiar scenario. My experiment consists of
two conditions and I have ~40 subjects. A standard group-level
random-effects t-contrast yielded a very significant cluster
(p < 0.001, cluster-size correction p < 0.05). For this cluster, using the
same data, I then ran an SVM classification between the two conditions for
each subject (leave-one-session-out cross-validation). Classification uses
a single feature (the average response of the ROI). So for each subject I
obtain a hit rate, which I submit to a group t-test against 0.5. The
surprising thing is that, even despite clear double-dipping, decoding in
this cluster does not exceed chance at the group level (hit rates are only
~0.51-0.52). How can this be? Since classification does not care about the
direction of a difference, I had assumed it should always be more sensitive
than a directional comparison of activations. Thinking and simulating led
me to conclude that this is probably not a bug. Consider an extreme case in
which, in every one of my subjects, condition_1 is only slightly above
condition_2. A group t-test will show a highly significant difference, but
in the individual classifications my predictions will fluctuate around 0.5
with only a slight upward bias, which is not enough to reach significance
above 0.5 at the group level. Indeed, in my ROI the difference went in the
same direction for 90% of the subjects. Does all this make sense to you?
If so, what does it tell us about the reliability of a standard
random-effects group-level analysis?
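The simulation I mention can be sketched roughly as follows (a minimal illustration with made-up parameters, using numpy/scipy/sklearn rather than PyMVPA itself; the effect size, noise level, and session/trial counts are assumptions, not my actual data). Each simulated subject has a small but consistently positive condition difference relative to trial noise; the group t-test on the mean differences picks this up easily, while the per-subject leave-one-session-out SVM on the single ROI-average feature stays close to chance:

```python
import numpy as np
from scipy import stats
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_sessions, n_trials = 40, 6, 20  # trials per condition per session

accuracies, mean_diffs = [], []
for _ in range(n_subjects):
    # Small positive effect (condition 1 > condition 0): consistent in
    # direction across subjects, but tiny relative to trial-to-trial noise.
    effect = rng.normal(0.1, 0.05)
    X, y, groups = [], [], []
    for sess in range(n_sessions):
        for cond in (0, 1):
            X.extend(rng.normal(cond * effect, 1.0, n_trials))
            y.extend([cond] * n_trials)
            groups.extend([sess] * n_trials)
    X = np.asarray(X)[:, None]   # one feature: the ROI-average response
    y = np.asarray(y)
    groups = np.asarray(groups)

    # Leave-one-session-out SVM classification for this subject
    scores = cross_val_score(SVC(kernel="linear"), X, y,
                             groups=groups, cv=LeaveOneGroupOut())
    accuracies.append(scores.mean())
    mean_diffs.append(X[y == 1].mean() - X[y == 0].mean())

t_act, p_act = stats.ttest_1samp(mean_diffs, 0.0)  # "activation" group test
t_dec, p_dec = stats.ttest_1samp(accuracies, 0.5)  # "decoding" group test
print(f"mean accuracy {np.mean(accuracies):.3f}, "
      f"activation p={p_act:.2g}, decoding p={p_dec:.2g}")
```

With parameters like these, the directional group test typically comes out far stronger than the test of decoding accuracy against 0.5, which is exactly the dissociation I describe above.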

Many thanks,
