[pymvpa] Searchlight and permutation tests.

Christopher J Markiewicz effigies at bu.edu
Tue Apr 26 14:06:34 UTC 2016


On 04/26/2016 09:41 AM, Roberto Guidotti wrote:
> Hi all,
> 
> I'm writing the second part of my story, begun with the thread
> "Balancing with searchlight and statistical issue"!
> 
> I have a problem with searchlight results. I ran a whole-brain
> searchlight to decode betas of an fMRI dataset from a memory task
> (this time the dataset is balanced).
> I divided the dataset into 5 parts (11 betas per condition in each
> fold) and ran a searchlight using a linear SVM (C=1) with
> leave-one-out cross-validation.
> The across-subject average map has some suspicious results,

For group results, I'd consider using something like the more
conservative approach in Lee et al. (2012)
(https://www.ncbi.nlm.nih.gov/pubmed/22423114). They subtracted the
spatial mean from each subject's map, treating the center of the
histogram as an empirically discovered chance accuracy, and then used
t-tests to find clusters that are consistently informative across
subjects.
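
In plain NumPy/SciPy, the demean-then-test step might look roughly like
this (a minimal sketch of the idea, not their exact pipeline; acc_maps
is a hypothetical subjects-by-voxels array of searchlight accuracies):

import numpy as np
from scipy import stats

def demeaned_group_ttest(acc_maps):
    # Treat each subject's spatial mean as that subject's empirical
    # chance level and subtract it out.
    demeaned = acc_maps - acc_maps.mean(axis=1, keepdims=True)
    # One-sample t-test across subjects at every voxel, against 0.
    t, p = stats.ttest_1samp(demeaned, popmean=0.0, axis=0)
    return demeaned, t, p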

This can be pretty easily extended to a permutation test to get a null
distribution of cluster sizes.
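
Something along these lines, assuming you have each subject's demeaned
accuracy map as a 3D volume (demeaned_vols, mask, and t_thresh are
hypothetical inputs here): since the demeaned maps should be symmetric
about zero under the null, flipping the signs of whole subjects gives a
null distribution of the maximum suprathreshold cluster size.

import numpy as np
from scipy import ndimage, stats

def max_cluster_size(tmap, mask, t_thresh):
    # Size (in voxels) of the largest suprathreshold cluster.
    labeled, n = ndimage.label((tmap > t_thresh) & mask)
    return np.bincount(labeled.ravel())[1:].max() if n else 0

def cluster_size_null(demeaned_vols, mask, t_thresh, n_perm=1000, seed=0):
    # demeaned_vols: (n_subjects, x, y, z) array of demeaned accuracy
    # volumes; mask: boolean in-brain mask of shape (x, y, z).
    rng = np.random.RandomState(seed)
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Under the null the demeaned maps are symmetric around zero,
        # so flip the sign of whole subjects and recompute the t-map.
        signs = rng.choice([-1.0, 1.0], size=len(demeaned_vols))
        t, _ = stats.ttest_1samp(
            signs[:, None, None, None] * demeaned_vols, 0.0, axis=0)
        null[i] = max_cluster_size(t, mask, t_thresh)
    return null

Clusters in the observed t-map larger than, say, the 95th percentile of
that null distribution would then survive correction.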

> the accuracy
> histogram is not peaked at chance level (0.5) but at 0.55-0.56, so
> most voxels have values in that range. Do you think that is
> reasonable? Or does it depend on the cross-validation scheme, a beta
> issue, or who knows what?

I've used an analysis strategy similar to yours. While most of my
analyses tend to clump around chance, some are shifted a little to the
right (I don't think I've seen any shifted left). On a two-class
problem, 0.55-0.56 doesn't seem terribly unreasonable to me, though it
might not be a bad idea to check your GLM to make sure that you're not
accidentally introducing trial information into your betas.

> To validate that result I ran an exploratory permutation test (n=100)
> on a single subject to look at the accuracy distribution. In that
> subject, the histogram after the permutation test is correctly peaked
> at chance level (the map with correct labeling is peaked at 0.57). I
> don't know if I'm correct, but this should validate the hypothesis
> that the map histogram peaked at 0.55-0.56 is reasonable!

It would be even more surprising if permutations were peaked at anything
other than chance. Even if you introduced trial information into the
betas, shuffling the trial labels would destroy the usefulness of that
information. So I wouldn't consider this a validation. Sorry.

-- 
Christopher J Markiewicz
Ph.D. Candidate, Quantitative Neuroscience Laboratory
Boston University


