[pymvpa] high prediction rate in a permutation test
J.A. Etzel
jetzel at artsci.wustl.edu
Wed May 18 21:25:22 UTC 2011
A t-test is possible, assuming you're doing a within-subjects analysis
(classifying every person separately). But it's not what I prefer. One
reason is that we're often on the edge of what parametric tests can
handle (number of subjects, distributions, dependencies, etc.). Another
is that a t-test isn't quite focused on what I want to know: I want to
know if the average accuracy is greater than what we'd get with random
images, which is what's tested with a well-designed permutation test.
For example, imagine your subjects had very similar accuracies just
above chance (0.52, 0.53, etc.). Under the right conditions this could
come out significant on a t-test, but it probably shouldn't be
considered important.
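To make that concrete, here's a quick illustration (the accuracy values are made up for the example; it assumes scipy and numpy are available, and is not PyMVPA-specific): a set of barely-above-chance accuracies with very little spread gives a tiny t-test p-value against 0.5.

```python
import numpy as np
from scipy import stats

# hypothetical per-subject accuracies, all just above chance (0.5)
acc = np.array([0.52, 0.53, 0.52, 0.53, 0.52, 0.53, 0.52, 0.53])

# one-sample t-test against chance level
t, p_two = stats.ttest_1samp(acc, popmean=0.5)

# one-tailed p for "accuracy greater than chance"
p_one = p_two / 2 if t > 0 else 1 - p_two / 2
```

The low variance across subjects drives the p-value down even though no one classifies much above chance, which is exactly the situation where "statistically significant" and "important" come apart.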
As a practical matter, I sometimes calculate t-test p-values in the
early stages of analysis because they're so fast, then calculate
permutation tests for the final p-values. In some datasets the p-values
from the two methods are close, in others they've been far apart,
sometimes with the t-test p-values much less significant.
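A minimal sketch of the comparison, under a big assumption: in a real analysis each column of the null matrix would come from re-running the whole classification with permuted labels, but here the null accuracies are simulated placeholders just to show the arithmetic of the group-level permutation p-value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# hypothetical true per-subject accuracies
true_acc = np.array([0.60, 0.52, 0.55, 0.58, 0.54])

# PLACEHOLDER null accuracies: real ones must come from rerunning the
# classifier with permuted labels, once per subject per permutation
n_perm = 1000
perm_acc = rng.normal(loc=0.5, scale=0.03, size=(len(true_acc), n_perm))

# fast parametric check: one-sample t-test against chance
t, p_ttest = stats.ttest_1samp(true_acc, popmean=0.5)

# permutation p-value: how often does the null group mean match or
# beat the observed group mean? (+1 correction counts the observed
# labeling as one of the permutations)
observed = true_acc.mean()
null_means = perm_acc.mean(axis=0)
p_perm = (np.sum(null_means >= observed) + 1) / (n_perm + 1)
```

With well-matched assumptions the two p-values land close together; the permutation p-value is the one that directly answers "is the average accuracy better than what relabeled data would give?"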
Jo
ps: always good to plot data to eyeball normality, not just run tests. :)
On 5/18/2011 4:03 PM, Vadim Axel wrote:
> Thank you a lot for your advice!
> I have one more related question:
> How reliable, in your opinion, would it be to test the significance of the
> classification using a t-test vs. 0.5, where my vector of classification
> results contains per-subject results? In other words, subject A's prediction
> was 0.6, B's 0.52, C's 0.55, etc. I take all those values as input to a
> t-test. The values are independent and the normality condition is also
> fulfilled (I can check it with a Lilliefors test).
>
More information about the Pkg-ExpPsy-PyMVPA mailing list