[pymvpa] detecting changes between fMRI sessions: a new mapper?

Emanuele Olivetti emanuele at relativita.com
Wed Sep 17 16:07:12 UTC 2008

[forwarded from an unknown super-secret list ;) , with minor editing]

Dear All,

Sorry for the long email. These are just some initial random
thoughts about a new question. When you have some spare time,
have a look ;)

Recently I came across this kind of
experiment, which is not new at all to neuroscientists
but new to me. Super-brief description:
- Stimulate the subject while recording fMRI data (e.g.,
showing pictures and requiring the subject to name them)
- After scanning, the subject starts training alone at
home on the same task (picture->name), but on a
_subset_ of the pictures (let's say half of them), spending
something like 15 minutes each day, for some weeks
- Then, after long boring training, a second fMRI session
is done, showing _all_ pictures as the first time.

The goal of this experiment is to find if there are differences
between the two fMRI sessions that depend on the training
the subject did.
A super-simplified example:
* Session one (in scanner): Dog, Cat, House and Tree are shown
to the subject and she names each upon its appearance.
* Training at home: the subject trains herself _just_ on Dog
and House in the same way as she did in session one.
* Session two (in scanner): House, Dog, Tree and Cat are shown
again to the subject and she names them as usual.

As you can easily imagine, the goal is to find differences
in the voxels' BOLD signal between the two runs for Dog and House
but not for Cat and Tree, in order to get evidence of neural
plasticity due to the boring training at home.

Besides the many details a real experiment could have, I observe that:
- In order to apply ML techniques here and try to answer the
main question (if and where plasticity occurs), a slightly new
approach is needed w.r.t. what I've seen before.
- The class labels to predict (with a classifier) are just:
"trained at home" (class 1) and "not trained at home" (class 0),
for all pictures shown during the fMRI sessions.
- The samples that can be labelled as defined before are of
this kind:

Stimulus | v1S1 | v2S1 | ... | vNS1 | v1S2 | ... | vNS2 | label
Dog      |      |      |     |      |      |     |      |  1
House    |      |      |     |      |      |     |      |  1
Cat      |      |      |     |      |      |     |      |  0
Tree     |      |      |     |      |      |     |      |  0

where viSj means the value of voxel 'i' (or values, if using a
boxcar mapper covering the interval between stimulus onset and
final word production) during session 'j'. We assume that
'N' voxels are recorded/studied.
This dataset shows that voxel values from different runs should be
considered jointly.
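The joint dataset above can be sketched in plain NumPy. The shapes and random values below are made-up stand-ins for the real per-stimulus BOLD estimates; only the hstack-and-label structure is the point:

```python
import numpy as np

# Toy numbers (hypothetical): 4 stimuli, N = 10 voxels per session.
n_stimuli, n_voxels = 4, 10

# Random values stand in for the per-stimulus BOLD estimates of each
# session (rows: Dog, House, Cat, Tree; columns: voxels).
rng = np.random.default_rng(0)
session1 = rng.standard_normal((n_stimuli, n_voxels))
session2 = rng.standard_normal((n_stimuli, n_voxels))

# Joint dataset: each sample carries the voxel values of BOTH sessions
# side by side, so a classifier can consider them together.
joint = np.hstack((session1, session2))   # shape: (n_stimuli, 2 * N)

# Labels: 1 = trained at home (Dog, House), 0 = not trained (Cat, Tree)
labels = np.array([1, 1, 0, 0])
```

With 2*N features and only a handful of samples this matrix is very wide, which is exactly the dimensionality problem discussed below.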
- If perfect alignment between voxels of the 2 runs is available
(remember my previous email) then you can reduce the set of
features of the previous dataset in some way, e.g. :

Stimulus | v1S1-v1S2 | v2S1-v2S2 | ... | vNS1-vNS2 | label
Dog      |           |           |     |           |  1
House    |           |           |     |           |  1
Cat      |           |           |     |           |  0
Tree     |           |           |     |           |  0

where "v1S1-v1S2" means the difference (maybe absolute) between
the z-scored BOLD values (or vectors of values, if a boxcar mapper
is also applied) of voxel 1 between the two sessions.
- Other assumptions (like the previous one about perfect alignment)
can be used to reduce the set of features, whose number is
ridiculously high in the first dataset :) . I have some ideas on that.
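Under the perfect-alignment assumption, the reduced dataset can be sketched as per-voxel differences of z-scored session matrices. Again, the arrays below are made-up placeholders; the z-scoring helper is my own, not a PyMVPA call:

```python
import numpy as np

n_stimuli, n_voxels = 4, 10
rng = np.random.default_rng(1)
session1 = rng.standard_normal((n_stimuli, n_voxels))
session2 = rng.standard_normal((n_stimuli, n_voxels))

def zscore(a):
    # z-score each voxel (column) across the samples of one session
    return (a - a.mean(axis=0)) / a.std(axis=0)

# Absolute per-voxel difference between sessions: N features
# instead of 2 * N in the joint dataset.
diff_features = np.abs(zscore(session1) - zscore(session2))
```

Halving the feature count this way buys little by itself, but the same pattern extends to any pairwise session comparison that assumes aligned voxels.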

So the question: is it correct that this kind of experiment
("find differences between different sessions") requires at least
something like a new Mapper in order to be analyzed with PyMVPA?
The mapper should numpy.hstack() data from different sessions
in some way.
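Such a mapper's forward step might look like the following sketch. To be clear, this class is NOT an actual PyMVPA mapper and does not follow its Mapper interface; it is just the core idea (stack per-session matrices whose rows are aligned by stimulus) in plain NumPy:

```python
import numpy as np

class SessionStackMapper:
    """Hypothetical sketch of a session-stacking mapper: puts the
    per-session feature matrices of matching stimuli side by side."""

    def forward(self, sessions):
        # sessions: list of (stimuli x voxels) arrays whose rows are
        # aligned by stimulus; result has n_sessions * n_voxels features
        return np.hstack(sessions)

# Toy usage with made-up shapes (4 stimuli, 10 voxels per session)
rng = np.random.default_rng(42)
s1 = rng.standard_normal((4, 10))
s2 = rng.standard_normal((4, 10))
joint = SessionStackMapper().forward([s1, s2])
```

A reverse step (splitting the stacked features back into per-session blocks) would presumably also be needed to map sensitivities back onto voxels of each session.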

Any comments on this?



More information about the Pkg-ExpPsy-PyMVPA mailing list