[pymvpa] mysterious whole-brain decoding
Floris van Vugt
floris.vanvugt at unipv.it
Tue Jan 21 19:40:21 GMT 2020
Hello PyMVPA developers,
First of all I wanted to say thank you for your work on making this
fantastic package available and documenting it.
I have a question. Let me formulate it at different levels of detail, so
you can read as much or as little as you want to know.
**Shortest version**
What might I be doing wrong if my group-level significance maps for
searchlight decoding cover the entire brain?
**Slightly longer version**
I ran a movement study in fMRI and, at the subject level, I decode which
of a set of 4 movements was performed on each trial. I then combine the
subject maps into a group-level map using permutation testing and
clustering. What comes out is one huge cluster that covers each and
every voxel in the brain. I find it hard to believe that every voxel is
informative as to which movement subjects perform, so I suspect I am
doing something wrong, but I can't figure out what that would be.
**Yet slightly longer version**
I ran an fMRI study in which people make movements while we collect fMRI
data. I do the usual de-trending and motion-correction preprocessing and
then run MVPA to decode which movement they made. I use off-the-shelf
cross-validated spherical searchlight decoding with an SVM at the
subject level, yielding maps that look pretty decent to me, with
decoding accuracies close to the theoretical chance level.
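In outline, the subject-level step looks roughly like this (a minimal
sketch rather than my actual script; the file names and the attributes
file are placeholders):

    import numpy as np
    from mvpa2.suite import (fmri_dataset, SampleAttributes, LinearCSVMC,
                             CrossValidation, NFoldPartitioner,
                             sphere_searchlight, mean_sample)

    # one detrended, motion-corrected dataset per subject; the attributes
    # file (hypothetical name) holds the 4 movement labels and the run number
    attrs = SampleAttributes('subj01_attributes.txt')
    ds = fmri_dataset('subj01_bold.nii.gz', targets=attrs.targets,
                      chunks=attrs.chunks, mask='subj01_brainmask.nii.gz')

    clf = LinearCSVMC()
    cv = CrossValidation(clf, NFoldPartitioner(),
                         errorfx=lambda p, t: np.mean(p == t))  # accuracy, not error
    sl = sphere_searchlight(cv, radius=3, postproc=mean_sample())
    accuracy_map = sl(ds)  # one mean cross-validated accuracy per voxel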
Then I generate group-level significance levels using the Stelzer method
(GroupClusterThreshold): I re-run the decoding on subject-level datasets
with permuted labels, and sample from those maps to create a group-level
null distribution. The true average decoding map is then clustered and
thresholded using a cluster-forming threshold of p = .001, and clusters
with a cluster-level p value below .05 are retained.
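The group-level step looks roughly like this (again a sketch rather than
my actual code: the list names are placeholders, the permutation count
is approximate, and the GroupClusterThreshold keyword and result
attribute names are written from memory):

    from mvpa2.suite import AttributePermutator, vstack, mean_sample
    from mvpa2.algorithms.group_clusterthr import GroupClusterThreshold

    # subject_datasets / subject_accuracy_maps are placeholder lists holding
    # each subject's dataset and real (unpermuted) searchlight map;
    # `sl` is the same searchlight measure as in the sketch above
    perm_maps = []
    for subj_id, ds in enumerate(subject_datasets):
        # ~100 label permutations per subject, shuffling within runs
        permutator = AttributePermutator('targets', limit='chunks', count=100)
        for ds_perm in permutator.generate(ds):
            null_map = sl(ds_perm)
            null_map.sa['chunks'] = [subj_id]   # tag each null map with its subject
            perm_maps.append(null_map)
    perm_ds = vstack(perm_maps)

    clthr = GroupClusterThreshold(n_bootstrap=100000,
                                  feature_thresh_prob=0.001,  # cluster-forming threshold
                                  fwe_rate=0.05,              # cluster-level alpha
                                  chunk_attr='chunks')
    clthr.train(perm_ds)   # bootstrap the group-level null distribution

    # the "real" group map: mean of the unpermuted subject accuracy maps
    mean_map = mean_sample()(vstack(subject_accuracy_maps))
    res = clthr(mean_map)
    # cluster sizes and p-values end up in res.a / res.fa
    # (if I recall the result attribute names correctly)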
What I expected was to see some clusters covering parts of the brain.
What I actually find instead is one significant cluster that covers the
entire brain. That is, each and every voxel is deemed to significantly
decode the movement that subjects make.
See the linked glass brain image, in which the decoding accuracy is
shown for each voxel:
https://drive.google.com/file/d/1Lb6e1qZlKDcmu5WDAV84McRKDi1ITL4P/view?usp=sharing
(As you can see, the darker areas, which correspond to higher decoding
accuracy, are indeed where we expect them, i.e. in the motor areas of
the brain; what I don't expect is that everywhere else in the brain,
where accuracy is close to the theoretical chance level, decoding is
significant as well.)
I can also submit my scripts, but they are a bit complex because they
are part of a larger pipeline.
I thought some of you might be able to indicate off the top of your head
what might be going wrong here, or perhaps somebody has experienced
something similar.
If you want an actual reproducible pipeline, I can generate one; I just
wanted to see if I could spare myself that effort :)
So at this point, feel free to respond: "need more details, send code"
and I will do that.
Any help much appreciated!
Best wishes,
Floris van Vugt
--
Floris van Vugt, PhD
https://www.florisvanvugt.com
University of Pavia / McGill University / Haskins Laboratories