[pymvpa] high prediction rate in a permutation test

Jonas Kaplan jtkaplan at usc.edu
Thu May 19 17:44:58 UTC 2011


> BTW, how do you recommend correcting for multiple comparisons? For example, I run 100 searchlights. Making a Bonferroni correction (0.05/100 = 0.0005) results in a very high threshold. Consider my case with the mean values, which is based on only 1000 tests. At a 0.0005 threshold I would need a classification accuracy of 0.75+ (!). My data are not that good :( What are people doing for the whole brain, when the number of searchlights is in the tens of thousands...


We've been dealing with this issue as well.  Part of our problem is that permutation tests give only limited precision in estimating p values... finding a p value small enough to satisfy a full Bonferroni correction would require a ridiculous number of iterations.  For example, with 10,000 iterations we can only resolve values down to p < .0001.  If we do a searchlight on a whole brain with 60K voxels...
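
To put rough numbers on the precision problem, here is a quick back-of-the-envelope sketch in plain Python (the 60K voxel count is just the example above, not a real dataset):

    # Smallest p value a permutation test can report, and how many
    # permutations a full Bonferroni correction would demand.

    def min_p(n_perm):
        """Smallest p value attainable with n_perm permutations."""
        return 1.0 / (n_perm + 1)

    def perms_needed(alpha, n_tests):
        """Permutations needed before min_p drops below alpha / n_tests."""
        corrected = alpha / n_tests
        return int(round(1.0 / corrected)) - 1

    print(min_p(10000))               # ~0.0001
    print(perms_needed(0.05, 60000))  # ~1.2 million permutations per sphere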

But a full Bonferroni correction is not necessary anyway, given that spheres in a searchlight overlap with each other, and adjacent fMRI voxels are not independent of each other even without smoothing.   So the number of independent tests is not equal to the number of spheres in the searchlight.   One approach is to estimate the number of resolution elements and divide your alpha by that number instead of by the number of voxels.  For example, you could divide the number of voxels by the size of the sphere to estimate the number of independent spheres in the data.
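
Something like this, with made-up numbers (60,000 searchlight centers and a radius-3 sphere of ~123 voxels) standing in for your own counts:

    # Crude "resolution elements" adjustment: correct against the estimated
    # number of independent spheres rather than the raw voxel count.
    n_voxels = 60000            # whole-brain searchlight centers
    sphere_nvox = 123           # voxels in one sphere (radius ~3 voxels)
    alpha = 0.05

    n_independent = float(n_voxels) / sphere_nvox   # ~488 independent spheres
    full_bonferroni = alpha / n_voxels               # ~8.3e-07
    adjusted_bonferroni = alpha / n_independent      # ~1.0e-04

    print(full_bonferroni, adjusted_bonferroni)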

I suppose something like FDR is also an option. 
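
For anyone who goes that route, here is a minimal Benjamini-Hochberg sketch over a hypothetical array of per-sphere permutation p values (statsmodels' multipletests with method='fdr_bh' implements the same procedure):

    import numpy as np

    def fdr_bh(pvals, q=0.05):
        """Boolean mask of p values surviving Benjamini-Hochberg FDR at level q."""
        pvals = np.asarray(pvals)
        m = pvals.size
        order = np.argsort(pvals)
        ranked = pvals[order]
        # largest k such that p_(k) <= (k/m) * q
        below = ranked <= (np.arange(1, m + 1) / float(m)) * q
        passed = np.zeros(m, dtype=bool)
        if below.any():
            k = np.nonzero(below)[0].max()
            passed[order[:k + 1]] = True
        return passed

    # e.g. mask = fdr_bh(sphere_pvals, q=0.05), where sphere_pvals holds
    # one permutation-test p value per searchlight sphere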

Curious to hear what other people are doing. 

-Jonas

----
Jonas Kaplan, Ph.D.
Research Assistant Professor
Brain & Creativity Institute
University of Southern California



