[pymvpa] Preprocessing in FSL, question on specifics

Michael Hanke mih at debian.org
Tue Jul 12 20:07:13 UTC 2011


On Tue, Jul 12, 2011 at 02:20:32PM -0400, Mike E. Klein wrote:
> So essentially I'm wondering the best order of events: (a) when to
> concatenate the shorter 4D files into one large 4D file, (b) whether I
> should run motion correction 1 time (after the concatenation) or whether it
> should be run separately for each session's nifti file, before being run a
> second time on the concatenated file. While there is very little head motion
> within each session, there looks to be considerably more between sessions,
> which probably comes as no surprise. A test of running motion correction a
> single time (after concatenation) looks like it does not perform very well:
> there is still a large amount of motion visible to the naked eye.

For the purpose of analysis with PyMVPA, most of these aspects are up to
you. What you want as an optimal starting point is as much
feature/voxel correspondence as you can get, i.e. a voxel in one part of
the dataset should represent the same brain location as in any other
part. We all know that this is difficult to achieve, and the impact of
losing feature correspondence depends on the spatial
scale/extent/smoothness of the signal you are looking for. Hence:

If double motion correction (in-session + across-session) works best for
you: do it. Often we find that concatenating all volumes into a single
4D timeseries and running a single-pass motion correction works well.
However, that obviously depends on the actual amount of motion in the
data (also between sessions). In general, the less motion there is, the
better. You will never be able to fully recover the signal by motion
correction. You can regress out residual motion from the data, but that
is post-mortem problem handling ;-)
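
For that last bit, a rough and untested sketch of what I mean (mvpa2
namespace, placeholder file names, mcflirt-style .par output, and an
attributes.txt with one targets/chunks entry per volume -- adjust to
your own setup):

  import numpy as np

  from mvpa2.datasets.mri import fmri_dataset
  from mvpa2.mappers.detrend import poly_detrend
  from mvpa2.misc.io import SampleAttributes

  # one label and one chunk (session) id per volume of the concatenated file
  attr = SampleAttributes('attributes.txt')
  ds = fmri_dataset('bold_all_mcf.nii.gz', targets=attr.targets,
                    chunks=attr.chunks, mask='brain_mask.nii.gz')

  # mcflirt's .par file: one row per volume, 3 rotations + 3 translations
  mc = np.loadtxt('bold_all_mcf.par')
  for i, name in enumerate(['rot1', 'rot2', 'rot3',
                            'trans1', 'trans2', 'trans3']):
      ds.sa[name] = mc[:, i]

  # linear detrending per session, with the motion estimates as
  # additional nuisance regressors
  poly_detrend(ds, polyord=1, chunks_attr='chunks',
               opt_regs=['rot1', 'rot2', 'rot3',
                         'trans1', 'trans2', 'trans3'])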

> As an aside, if I plan to discard some specific volumes, I also would value
> any input as to whether it makes more sense to delete them from the 4D time
> series (using fslsplit and fslmerge), or to leave them in and give them
> their own label ("discard") in the attributes.txt file.

That is also up to you. I personally keep the data intact and discard
samples within the actual analysis pipeline in PyMVPA. The sole reason
is that this way the decision is documented in the analysis script
instead of being lost in my shell history. But it would work either way.
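
In practice that boils down to something like this (again just a sketch
with placeholder file names; assumes you gave the unwanted volumes a
'discard' label in attributes.txt):

  from mvpa2.datasets.mri import fmri_dataset
  from mvpa2.misc.io import SampleAttributes

  # attributes.txt carries one entry per volume, with 'discard' as the
  # label for volumes that should not enter the analysis
  attr = SampleAttributes('attributes.txt')
  ds = fmri_dataset('bold_all.nii.gz', targets=attr.targets,
                    chunks=attr.chunks, mask='brain_mask.nii.gz')

  # drop the unwanted volumes inside the script -- the NIfTI file
  # on disk stays untouched
  ds = ds[ds.sa.targets != 'discard']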

Michael


-- 
Michael Hanke
http://mih.voxindeserto.de


