[pymvpa] below chance/anti-learning

basile pinsard basile.pinsard at gmail.com
Wed Mar 16 19:05:13 UTC 2016


Hello pymvpa users,

I have a question about the anti-learning phenomenon, which has been
discussed here previously.

My design is the following:
2 scans; each scan has 8 samples per condition x 4 conditions = 32 blocks.
The pseudo-random ordering was chosen as 2 repetitions of a De Bruijn
cycle (length 16, e.g. 0302113312232010), which implies that successive
pairs of conditions are balanced.

I aim to measure cross-validation accuracy on 2 pairings of the 4
conditions, and I believe the ordering remains balanced for such subsets.

When computing cross-validated accuracy (notably with searchlight), some
subjects' accuracy-map distributions are skewed, most often toward the
below-chance side (one subject is skewed above chance), and some
nodes/ROIs are significantly below chance.
Looking at the maps, the significantly below- and above-chance regions
make complete sense with respect to the expected network, so this looks
like clearly localized anti-learning rather than a confound.
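(Roughly what I do to inspect the per-subject distribution; a sketch using
the standard PyMVPA searchlight pattern, where `ds` is a placeholder for
the preloaded dataset and the radius is arbitrary:

import numpy as np
from mvpa2.suite import (LinearCSVMC, CrossValidation, NFoldPartitioner,
                         sphere_searchlight, mean_sample)

# leave-one-scan-out CV on accuracy (chunks code the scans)
cv = CrossValidation(LinearCSVMC(),
                     NFoldPartitioner(attr='chunks'),
                     errorfx=lambda p, t: np.mean(p == t))
# average the per-fold accuracies into a single map per sphere
sl = sphere_searchlight(cv, radius=3, postproc=mean_sample())
res = sl(ds)
acc = res.samples.ravel()
print(np.median(acc))   # the skew shows up as a median below 0.5
)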

My cross-validation excludes samples neighboring the test set (60 s
margin) and balances the classes in the training set. I used both
leave-one-out (within one scan or across all scans) and
leave-one-scan-out cross-validation; the anti-learning is more pronounced
in the former.
For an idea of the cross-validation scheme, see here:
http://i.imgur.com/s4rlZZd.png
(blue = train, green = test, red = excluded)
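(The exclusion step, in a pure-numpy sketch independent of PyMVPA;
`onsets` holds the block onset times in seconds, and the function name is
just for illustration:

import numpy as np

def split_with_margin(onsets, test_idx, margin=60.0):
    """Boolean train/test masks: training drops the test samples and any
    sample whose onset is within `margin` seconds of a test onset."""
    onsets = np.asarray(onsets, dtype=float)
    test = np.zeros(len(onsets), dtype=bool)
    test[test_idx] = True
    near = np.zeros(len(onsets), dtype=bool)
    for t in onsets[test]:
        near |= np.abs(onsets - t) <= margin
    train = ~near   # excludes test samples and the +/- margin zone
    return train, test

The class balancing is then applied to the samples selected by `train`
before training the classifier.)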

Do you have any idea what the cause could be: the design, the
randomization, the data, the preprocessing, the cross-validation scheme?
Have you encountered similar problems with your data?

Thanks.

basile