[pymvpa] question about cross-subject analysis
rawi707 at yahoo.com
Wed Jan 18 17:17:05 UTC 2012
> Poldrack et al. mention using an intersection mask to
> ensure they were looking at the same voxels across subjects.

As a simple alternative approach, maybe you can try the intersection of slightly dilated masks (e.g., each subject's mask dilated by one voxel).
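A minimal sketch of that dilate-then-intersect idea, assuming the per-subject binary masks are already resampled to a common space (NumPy/SciPy only; this is not a PyMVPA API call, and the function name is made up for illustration):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def intersection_of_dilated(masks, iterations=1):
    """Dilate each binary mask by `iterations` voxels, then intersect.

    masks : sequence of same-shape boolean (or 0/1) arrays, one per subject,
            already aligned to a common space.
    Returns a boolean array marking voxels inside every dilated mask.
    """
    dilated = [binary_dilation(np.asarray(m, dtype=bool),
                               iterations=iterations)
               for m in masks]
    return np.logical_and.reduce(dilated)

# Toy example: two masks that miss each other by one voxel.
a = np.zeros((5, 5, 5), dtype=bool); a[2, 2, 2] = True
b = np.zeros((5, 5, 5), dtype=bool); b[2, 2, 3] = True
plain = np.logical_and(a, b)                  # empty intersection
relaxed = intersection_of_dilated([a, b])     # non-empty after dilation
```

Dilation trades a little spatial precision for coverage, so it may be worth comparing the resulting mask size against the plain intersection before committing to it.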
> From: J.A. Etzel <jetzel at artsci.wustl.edu>
> To: pkg-exppsy-pymvpa at lists.alioth.debian.org
> Sent: Wednesday, January 18, 2012 5:04 PM
> Subject: Re: [pymvpa] question about cross-subject analysis
>To run multiple-subjects tests I've usually converted everyone's
>functional images to a standard space first (MNI or whatever), then
>subsetted to only have voxels with non-zero variance in all subjects.
>This sometimes works surprisingly well and is fairly straightforward.
>On 1/18/2012 9:30 AM, John Magnotti wrote:
>> Hi All,
>> I'm trying to build a cross-subject analysis using the Haxby et
>> al data (http://data.pymvpa.org/datasets/haxby2001/). The problem is
>> that the masks for each subject don't necessarily cover the same
>> voxels. Poldrack et al. mention using an intersection mask to
>> ensure they were looking at the same voxels across subjects. Is there
>> a way to do this in PyMVPA, and should I do something like convert to
>> standard space beforehand? I could also just use the whole timeseries,
>> but I think there is still the issue of ensuring that the voxels
>> "match" across subjects, right?
>> Any hints or tips would be much appreciated.
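The non-zero-variance subsetting Etzel describes above could be sketched as follows, assuming each subject's data has already been converted to a standard space and flattened to a (timepoints x voxels) array (plain NumPy; the function name and shapes are assumptions, not PyMVPA API):

```python
import numpy as np

def common_voxel_mask(subject_data):
    """Keep only voxels whose timeseries has non-zero variance in
    every subject.

    subject_data : sequence of arrays, each shaped
                   (n_timepoints, n_voxels), with the same voxel
                   ordering across subjects (i.e. already aligned
                   to a common space).
    Returns a boolean mask over the voxel axis.
    """
    keep = np.ones(subject_data[0].shape[1], dtype=bool)
    for data in subject_data:
        keep &= data.var(axis=0) > 0
    return keep

# Toy example: voxel 2 is constant in subject 1, voxel 0 in subject 2.
s1 = np.array([[1.0, 2.0, 5.0],
               [3.0, 4.0, 5.0]])
s2 = np.array([[7.0, 1.0, 2.0],
               [7.0, 3.0, 9.0]])
mask = common_voxel_mask([s1, s2])  # only voxel 1 survives
```

The resulting boolean mask can then be used to subset every subject's dataset to the shared voxels before stacking them for a cross-subject analysis.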