[pymvpa] detecting changes between fMRI sessions: a new mapper?

Yaroslav Halchenko debian at onerussian.com
Thu Sep 18 14:03:27 UTC 2008


> Good point. In the experiment example that I was briefly describing,
> each session is indeed made of just one run. But differences across runs in
> a multi-run experiment are expected to be smaller than (or rather, somewhat
> different from) differences across sessions that are weeks apart.
right... so one of the things to 'look at' is not pure activation, but
rather some patterns (of sensitivities).
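
For illustration, a toy numpy-only sketch of what such a "sensitivity
pattern" could be (all data made up; a least-squares weight vector just
stands in here for a real classifier's sensitivities):

    import numpy as np

    # Made-up data: 40 samples x 500 voxels, two conditions coded +-1.
    rng = np.random.RandomState(0)
    X = rng.randn(40, 500)
    y = np.repeat([-1.0, 1.0], 20)

    # The weight vector of a least-squares linear classifier is one
    # crude notion of a per-voxel "sensitivity" pattern, as opposed
    # to a raw mean-activation difference.
    w, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

    # Voxels with the largest |w| drive the decision most.
    print("most sensitive voxels:", np.argsort(np.abs(w))[::-1][:10])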

Also, in the data I looked at, some "sensitive" voxels were in the brain
stem, which was 'tilted' differently between sessions due to head
position. So one of the things to do prior to analysis is a good brain
extraction that would remove the brain stem.
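
Something like this, assuming nibabel and hypothetical filenames (the
mask would come from brain extraction plus, say, an atlas-based
exclusion of the brain stem):

    import nibabel as nib

    # Hypothetical inputs: a 4D BOLD image and a binary mask built so
    # that it excludes the brain stem.
    bold = nib.load("bold.nii.gz").get_fdata()                # x, y, z, t
    mask = nib.load("mask_no_brainstem.nii.gz").get_fdata() > 0

    # Boolean indexing with a 3D mask on a 4D array yields
    # n_voxels x n_timepoints; transpose to samples x features.
    samples = bold[mask].T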

> Unfortunately, for the specific experiment I was mentioning,
> each picture is shown no more than once per session. So
> you can't label samples with "dog" or "dog1,dog2", since there is
I was just grouping all samples of dogs into dog-nottrained (for dogs
from the 1st session) and dog-trained (for dogs from the 2nd session,
i.e. after training), so it is easier ;-)
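
Something like this toy sketch of the relabelling (names illustrative):

    # Fold category and session into a single label.
    categories = ["dog", "house", "cat", "dog", "house", "cat"]
    sessions = [1, 1, 1, 2, 2, 2]      # session 2 is after training

    suffix = {1: "nottrained", 2: "trained"}
    labels = ["%s-%s" % (c, suffix[s])
              for c, s in zip(categories, sessions)]
    print(labels)   # ['dog-nottrained', ..., 'dog-trained', ...]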


> So the only useful labelling you can use here seems to be "trained"
> and "not trained", without taking into account the information
> of which picture was actually shown in each sample.
right!

> A naive (crazy?) but maybe interesting idea to prepare a dataset for a
> classifier could be to define ROIs using a known atlas (say the
> Harvard-Oxford cortical atlas ;) ). Then average all voxels in a ROI
> to get a "per ROI" average BOLD signal.
hm... I think detrending per-voxel within each session would
accomplish a similar thing ;-)
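
For instance, a minimal sketch with scipy (made-up data; X is
timepoints x voxels, and chunks marks the session of each volume):

    import numpy as np
    from scipy.signal import detrend

    rng = np.random.RandomState(0)
    X = rng.randn(200, 1000)
    chunks = np.repeat([0, 1], 100)   # two sessions, 100 volumes each

    # Linear detrend per voxel, separately within each session.
    for s in np.unique(chunks):
        X[chunks == s] = detrend(X[chunks == s], axis=0)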

> So you reduce 40k voxels to ~50 features/ROIs. Then
> subtract the average BOLD of the same ROI between the two sessions,
> to get the differences across sessions (maybe z-scored or similar).
> Since you mapped the data onto a standard atlas you can even
> numpy.vstack() different subjects together and get something like this:
ah -- we are doing multi-subject classification now? :-) cool ;-)
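
A quick numpy sketch of that whole reduction (shapes made up to match
the figures quoted above):

    import numpy as np

    # roi_labels assigns each of 40k voxels to one of ~50 atlas ROIs;
    # sess1/sess2 hold one pattern per stimulus (samples x voxels).
    n_voxels, n_rois, n_stimuli = 40000, 50, 60
    rng = np.random.RandomState(0)
    roi_labels = rng.randint(0, n_rois, n_voxels)
    sess1 = rng.randn(n_stimuli, n_voxels)
    sess2 = rng.randn(n_stimuli, n_voxels)

    def roi_average(samples):
        # Mean BOLD per ROI -> samples x n_rois
        return np.column_stack([samples[:, roi_labels == r].mean(axis=1)
                                for r in range(n_rois)])

    # Per-ROI session difference for one subject...
    delta = roi_average(sess2) - roi_average(sess1)

    # ...and, since everything lives in atlas space, subjects can be
    # stacked into a single dataset:
    dataset = np.vstack([delta, delta.copy()])  # stand-in for N subjects
    print(dataset.shape)                        # (N*60, 50)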

> sample | deltaROI1 | deltaROI2 | ... | deltaROIN | label
> --------------------------------------------------------------
> S1dog  |           |           | ... |           | trained
> --------------------------------------------------------------
> S1house|           |           | ... |           | trained
> --------------------------------------------------------------
> S1cat  |           |           | ... |           | not trained
> --------------------------------------------------------------
> S1tree |           |           | ... |           | not trained
> --------------------------------------------------------------
> S1...  |           |           | ... |           | trained
> --------------------------------------------------------------
> S2dog  |           |           | ... |           | not trained
> --------------------------------------------------------------
> S2...  |           |           | ... |           | trained
> --------------------------------------------------------------
> SNdog  |           |           | ... |           | trained
> --------------------------------------------------------------
> SN...  |           |           | ... |           | not trained
> --------------------------------------------------------------
> (Sn means subject 'n')
hm... I would make dog-trained, dog-nottrained labels as I mentioned
before. If you just do trained/not-trained, you would have no validity
check of any kind... if there is a category (e.g. cat) which wasn't
trained for, then I would expect no contrast when comparing
cat-nottrained to cat-trained. Sure, the inability to disprove the
null hypothesis is not the same as proving it, but ... ;-)
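
To make that check concrete, a toy sketch (scikit-learn used here just
for brevity; a "training" effect is faked in for dogs only, so cats
should stay at chance):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.RandomState(0)
    X_dog = rng.randn(40, 50)
    X_dog[20:] += 0.8              # fake training effect for dogs
    X_cat = rng.randn(40, 50)      # no effect for cats
    y = np.repeat([0, 1], 20)      # nottrained vs trained

    for name, X in [("dog", X_dog), ("cat", X_cat)]:
        acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
        print("%s-nottrained vs %s-trained: %.2f" % (name, name, acc))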

> which, given the figures above, is a manageable dataset with 60x10
> samples (assuming N=10 subjects) and just ~50 features, one per ROI.
> A classifier trained on this dataset (assuming that averages don't
> destroy all relevant information :D ) would, in case of success,
> tell whether "trained/untrained" are predictable class labels, which
> ROIs are relevant for the prediction, and (maybe naively :) ) which
> ROIs are related to neuroplasticity for this task.
well -- we also have plans to do such a per-ROI searchlight... never
formalized it though. Michael has something on the way to such an
analysis in the fMRI analysis we have just published -- it could serve
as a base for per-ROI training, I guess.
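
A rough sketch of what per-ROI training could look like (again
scikit-learn for brevity; made-up data, 4 features per ROI to keep it
tiny):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.RandomState(0)
    X = rng.randn(60, 50 * 4)
    y = np.repeat([0, 1], 30)
    roi_of_feature = np.repeat(np.arange(50), 4)

    # Cross-validate a small classifier on each ROI's features
    # separately and keep one accuracy per ROI.
    acc = {roi: cross_val_score(LinearSVC(),
                                X[:, roi_of_feature == roi],
                                y, cv=5).mean()
           for roi in range(50)}
    best = max(acc, key=acc.get)
    print("best ROI: %d at %.2f accuracy" % (best, acc[best]))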

> Ciao,

> Emanuele





-- 
Yaroslav Halchenko
Research Assistant, Psychology Department, Rutgers-Newark
Student  Ph.D. @ CS Dept. NJIT
Office: (973) 353-5440x263 | FWD: 82823 | Fax: (973) 353-1171
        101 Warren Str, Smith Hall, Rm 4-105, Newark NJ 07102
WWW:     http://www.linkedin.com/in/yarik        


