[pymvpa] question about cross-subject analysis
J.A. Etzel
jetzel at artsci.wustl.edu
Wed Jan 18 17:04:07 UTC 2012
For multi-subject analyses I've usually converted everyone's
functional images to a standard space first (MNI or whatever), then
subsetted to keep only the voxels with non-zero variance in all
subjects. This sometimes works surprisingly well and is fairly
straightforward.
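Something along these lines (just a rough, untested sketch with
hypothetical file names, assuming every subject's bold has already been
warped to the same MNI grid so the feature indices line up):

# sketch only: all images assumed to share one MNI grid
import numpy as np
from mvpa2.datasets.mri import fmri_dataset

bolds = ['subj1_bold_mni.nii.gz', 'subj2_bold_mni.nii.gz']  # hypothetical paths
dss = [fmri_dataset(samples=b) for b in bolds]   # same grid -> same feature order

# keep only voxels with non-zero variance in every subject
keep = np.ones(dss[0].nfeatures, dtype=bool)
for ds in dss:
    keep &= ds.samples.var(axis=0) > 0

dss = [ds[:, keep] for ds in dss]                # subset everyone to the common voxels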
Jo
On 1/18/2012 9:30 AM, John Magnotti wrote:
> Hi All,
>
> I'm trying to build a cross-subject analysis using the Haxby et
> al data (http://data.pymvpa.org/datasets/haxby2001/). The problem is
> that the masks for each subject don't necessarily cover the same
> voxels. Poldrack et al. [1] mention using an intersection mask to
> ensure they were looking at the same voxels across subjects. Is there
> a way to do this in PyMVPA, and should I do something like convert to
> standard space beforehand? I could also just use the whole timeseries,
> but I think there is still the issue of ensuring that the voxels
> "match" across subjects, right?
>
> Any hints or tips would be much appreciated.
>
>
> Thanks,
>
> John
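(And on the intersection-mask idea mentioned above: a minimal, untested
sketch with hypothetical paths, assuming each subject's brain mask has
already been resampled to the same standard-space grid. The resulting
image could then be passed as the mask= argument to fmri_dataset for
every subject.)

import numpy as np
import nibabel as nib

# hypothetical per-subject masks, all on the same standard-space grid
masks = [nib.load(f) for f in ['subj1_mask_mni.nii.gz', 'subj2_mask_mni.nii.gz']]
inter = np.all([m.get_fdata() > 0 for m in masks], axis=0)   # voxels present in all subjects
nib.save(nib.Nifti1Image(inter.astype(np.uint8), masks[0].affine),
         'intersection_mask.nii.gz')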