[pymvpa] significance

Yaroslav Halchenko debian at onerussian.com
Fri May 8 03:19:19 UTC 2009


Thank you Jonas for raising a very important concern, and thanks Scott
for sharing... although I am not yet clear on how to use McNemar even if
I pair the samples, whether it somehow scales to more than 2
classes, and what it actually means besides possibly revealing a
preference for one of the classes. I would really appreciate it if you
could elaborate a bit or maybe provide some link (pardon my ignorance if
it is an obvious thing)

Let me start my reply to Jonas from the end:

MC permutation testing -- see the permutation_test.py example shipped
along with PyMVPA, or described at
http://www.pymvpa.org/examples/permutation_test.html

In the case of sensitivity estimates, look at
http://www.pymvpa.org/modref/mvpa.measures.base.html#mvpa.measures.base.DatasetMeasure
and the analogous null_dist argument (sorry that we have no explanatory
example for it, but test_stats.py has some basic usage tested ;))
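
If it helps, here is a rough sketch of the general idea in plain NumPy
(NOT the PyMVPA API -- train, predict, and the data arrays here are
hypothetical placeholders): permute the training labels, refit, and
collect the resulting chance-level accuracies into a null distribution.

    import numpy as np

    def mc_permutation_pvalue(train, predict, X_tr, y_tr, X_te, y_te,
                              n_perms=100):
        """p-value of the observed accuracy under label permutations."""
        observed = np.mean(predict(train(X_tr, y_tr), X_te) == y_te)
        null = np.empty(n_perms)
        for i in range(n_perms):
            y_shuffled = np.random.permutation(y_tr)  # break label/data link
            null[i] = np.mean(predict(train(X_tr, y_shuffled), X_te) == y_te)
        # fraction of permutations doing at least as well as the true labels
        return (np.sum(null >= observed) + 1.0) / (n_perms + 1.0)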

Binomial test -- that is a touchy point ;) The problem is that its main
assumption (independence of the trials) is violated in 99% of the
cases -- most of the time the samples in the 'testing' set are not
independent of each other, and I do not know any reliable way to
assess their independence to be truly unbiased. So, in your case,
since the runs are independent, to stay on the conservative side (which
imho is better than naively optimistic), if I were to go with the
binomial I would take n=8 for assessing the significance of the overall
accuracy ;)
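
Just to make the difference concrete, a tiny hedged example using
SciPy's (older) binom_test -- scipy.stats.binomtest in recent SciPy --
with made-up numbers: the same overall accuracy judged with n = total
trials vs n = 8 independent runs gives rather different p-values.

    from scipy.stats import binom_test

    accuracy = 0.625   # hypothetical overall accuracy
    n_trials = 96      # naive: pretend every trial is independent
    n_runs = 8         # conservative: only the runs are independent
    p_naive = binom_test(int(round(accuracy * n_trials)), n_trials, p=0.5)
    p_conservative = binom_test(int(round(accuracy * n_runs)), n_runs, p=0.5)
    print(p_naive, p_conservative)   # the conservative p is much larger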

Another somewhat ad-hoc way to appreciate the inappropriateness of
binomial testing is to run the searchlight with some "reasonable"
radius (5mm?) and then look at the complete distribution of performances
;) Theoretically it consists of 2 distributions -- the null distribution
(with its tail in the 'below chance' performances) and some distribution
of somewhat meaningful models (where the searchlight hits more or less
appropriate spots). It is most fun to look at the 2-class case: assume
that the null distribution is symmetric around .5 and that all
below-chance performances are just one half of it... then you can simply
'recreate' the full distribution by mirroring the below-chance part
around .5, and compare it to the binomial distribution which many people
would use to judge 'significance'.
It might be a really fun experience ;)
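
In case you want to try that mirroring trick, a rough sketch with
NumPy/Matplotlib/SciPy (the file name and the trial count are
hypothetical):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import binom

    # hypothetical 2-class searchlight accuracy map, one value per voxel
    accs = np.load('searchlight_accs.npy')
    below = accs[accs < 0.5]
    # assume the null is symmetric around .5: mirror its below-chance half
    null = np.concatenate((below, 1.0 - below))

    n_trials = 96   # hypothetical size of the testing set
    ks = np.arange(n_trials + 1)
    plt.hist(null, bins=50, density=True, label='empirical null (mirrored)')
    # binomial pmf rescaled to a density over accuracy (spacing 1/n_trials)
    plt.plot(ks / float(n_trials), binom.pmf(ks, n_trials, 0.5) * n_trials,
             label='binomial null, n=%d trials' % n_trials)
    plt.xlabel('accuracy')
    plt.legend()
    plt.show()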

For a more involved discussion / links to papers / etc I would simply
refer you to a not-that-old discussion on the mvpa-toolbox mailing list,
and Francisco's email in particular:

http://groups.google.com/group/mvpa-toolbox/browse_thread/thread/6efb07a1300d2075?pli=1

I hope something written here is of some help ;)

-- 
                                  .-.
=------------------------------   /v\  ----------------------------=
Keep in touch                    // \\     (yoh@|www.)onerussian.com
Yaroslav Halchenko              /(   )\               ICQ#: 60653192
                   Linux User    ^^-^^    [175555]