[pymvpa] high prediction rate in a permutation test

Vadim Axel axel.vadim at gmail.com
Thu May 19 06:35:40 UTC 2011


Yes, I agree with you. However, I somehow feel that reporting significance
based on permutation values is more cumbersome than t-tests. Consider the
case where, out of 10 subjects, 8 have a significant result (based on
permutation) and the remaining two do not. What should I say in my results?
Does the ROI discriminate between the two classes or not? When I use a group
t-test everything is simple - the result is true or false for the whole
group. Now suppose that I have more than one ROI and I want to compare their
results. Though I can show the average prediction rate across subjects, I am
afraid that if I start reporting, for each ROI, for how many subjects it was
significant and for how many it was not, everybody (including myself) would
be confused...

BTW, how do you recommend correcting for multiple comparisons? For example,
I run 100 searchlights. Applying a Bonferroni correction (0.05/100 = 0.0005)
results in a very stringent threshold. Consider my case with the mean values,
which is based on only 1000 tests. With a 0.0005 threshold I would need a
classification accuracy of 0.75+ (!). My data are not that good :( What do
people do for whole-brain analyses, where the number of searchlights is in
the tens of thousands...
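
Just to make the arithmetic concrete, here is a toy sketch of what I mean
(illustrative numbers and a simulated null distribution, not my actual data):

import numpy as np

n_tests = 100                        # e.g. 100 searchlights
alpha_bonferroni = 0.05 / n_tests    # = 0.0005

# toy null: 1000 permutation accuracies for one searchlight, simulated here
# as chance-level performance over 100 trials
np.random.seed(0)
null_acc = np.random.binomial(100, 0.5, size=1000) / 100.0

# permutation p-value of an observed accuracy; the +1 terms mean the smallest
# achievable p is 1/(B+1) ~ 0.001 for B = 1000 permutations, which can never
# pass the 0.0005 Bonferroni cutoff without running more permutations
observed = 0.62
p = (np.sum(null_acc >= observed) + 1.0) / (len(null_acc) + 1.0)
print(p, p < alpha_bonferroni)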


On Thu, May 19, 2011 at 1:54 AM, Yaroslav Halchenko
<debian at onerussian.com> wrote:

>
> On Wed, 18 May 2011, J.A. Etzel wrote:
> > A t-test is possible, assuming you're doing a within-subjects
> > analysis (classifying every person separately). But it's not what I
> > prefer. One reason is that we're often on the edge of what
> > parametric tests can handle (number of subjects, distributions,
> > dependencies, etc.). Another is that a t-test isn't quite focused on
> > what I want to know: I want to know if the average accuracy is
> > greater than what we'd get with random images, which is what's
> > tested with a well-designed permutation test. For example, imagine
> > your subjects had very similar accuracies just above chance (0.52,
> > 0.53, etc.). Under the right conditions this could turn out as
> > significant with a t-test, but probably shouldn't be considered
> > important.
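
One way such a group-level permutation test could look (a rough sketch,
assuming each subject already has a null distribution of accuracies from
label permutations; this is only one of several possible schemes):

import numpy as np

np.random.seed(1)
n_subj, n_perm = 5, 1000

# toy data: observed per-subject accuracies plus per-subject permutation nulls
observed_acc = np.array([0.52, 0.53, 0.54, 0.52, 0.55])
null_acc = np.random.binomial(40, 0.5, size=(n_subj, n_perm)) / 40.0

# null distribution of the group mean: draw one null accuracy per subject
# and average, many times
draws = np.random.randint(0, n_perm, size=(n_subj, 10000))
null_means = null_acc[np.arange(n_subj)[:, None], draws].mean(axis=0)

p = (np.sum(null_means >= observed_acc.mean()) + 1.0) / (len(null_means) + 1.0)
print("group mean = %.3f, permutation p = %.4f" % (observed_acc.mean(), p))
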
>
> exactly!  An additional example to help appreciate the issue:
>
> which of the two cases of binary classification results would you prefer
> to see as the "significant" or trustworthy result? ;)
>
>   0.60000   0.70000   0.80000   0.90000   1.00000
>
> or
>
>   0.51000   0.52000   0.53000   0.54000   0.55000
>
> which, if I didn't get it wrong, should have the same t-score against the
> chance level of 0.5 ;-)
>
> in other words: who said that raw accuracies are normally
> distributed? ;)
>
> But since it is common practice, Vadim, please do not take the words above
> as a "stop sign".  Just keep the "effect size" in mind ;)
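
Indeed, a quick check with scipy.stats (just a minimal sketch) confirms that
both sets of accuracies give the same t-score against the 0.5 chance level:

from scipy import stats

high_spread = [0.60, 0.70, 0.80, 0.90, 1.00]
low_spread = [0.51, 0.52, 0.53, 0.54, 0.55]

# both one-sample t-tests against the 0.5 chance level give t ~= 4.24
print(stats.ttest_1samp(high_spread, 0.5))
print(stats.ttest_1samp(low_spread, 0.5))
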
>
> > As a practical matter, I sometimes calculate t-test p-values in the
> > early stages of analysis because they're so fast, then calculate
> > permutation tests for the final p-values. In some datasets the
> > p-values from the two methods are close, in others they've been far
> > apart, sometimes with the t-test p-values much less significant.
>
> It is the evening, and we have already celebrated the successful launch of
> our neuroscience software survey (I bet all of you have participated
> already, didn't you?) --- Jo, could you please elaborate a bit more on the
> above "fast t-test -> permutations -> p-values" workflow? I might be
> missing something obvious ;)
>
> Thanks in advance!
>
> > ps: always good to plot data to eyeball normality, not just run tests. :)
>
> Good advice, but data is scary -- blobs in the stat plots are eye candy! ;)
>
> --
> =------------------------------------------------------------------=
> Keep in touch                                     www.onerussian.com
> Yaroslav Halchenko                 www.ohloh.net/accounts/yarikoptic
>