[pymvpa] question about cross-subject analysis
Yaroslav Halchenko
debian at onerussian.com
Wed Jan 18 15:37:21 UTC 2012
If you only care about classification performance, I bet you might soon
hear from Raj...
> ... the masks for each subject don't necessarily cover the same
> voxels.
like snowflakes -- no two voxels are "the same" ;) they might be alike
though... Therefore you could also have a look at the recent paper below
and see whether its approach is closer to your goals:
Haxby, J. V., Guntupalli, J. S., Connolly, A. C., Halchenko, Y. O.,
Conroy, B. R., Gobbini, M. I., Hanke, M. and Ramadge, P. J. (2011). A Common,
High-Dimensional Model of the Representational Space in Human Ventral Temporal
Cortex. Neuron, 72, 404–416. DOI: 10.1016/j.neuron.2011.08.026
http://haxbylab.dartmouth.edu/publications/HGC+11.pdf
http://haxbylab.dartmouth.edu/publications/HGC+11_Supplementals.pdf
Hyperalignment is available as part of mvpa2.
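As a rough illustration, here is a minimal sketch of how it could be applied,
assuming the per-subject datasets are already loaded with matching numbers of
features (e.g. after per-subject feature selection) and carry 'chunks' sample
attributes; the helper name and variables are illustrative, not a fixed recipe:

    # Minimal sketch -- variable names and the feature-selection step are
    # assumptions; Hyperalignment/zscore/vstack are the mvpa2 pieces used.
    from mvpa2.suite import Hyperalignment, zscore, vstack

    def hyperalign(datasets):
        """Project per-subject datasets into a common feature space."""
        for ds in datasets:
            zscore(ds, chunks_attr='chunks')  # normalize each feature per run
        mappers = Hyperalignment()(datasets)  # one mapper per subject
        return [m.forward(ds) for m, ds in zip(mappers, datasets)]

    # aligned = hyperalign(per_subject_datasets)
    # ds_all = vstack(aligned)  # stack for between-subject classification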
On Wed, 18 Jan 2012, John Magnotti wrote:
> Hi All,
> I'm trying to build a cross-subject analysis using the Haxby et
> al data (http://data.pymvpa.org/datasets/haxby2001/). The problem is
> that the masks for each subject don't necessarily cover the same
> voxels. Poldrack et al. [1] mention using an intersection mask to
> ensure they were looking at the same voxels across subjects. Is there
> a way to do this in PyMVPA, and should I do something like convert to
> standard space beforehand? I could also just use the whole timeseries,
> but I think there is still the issue of ensuring that the voxels
> "match" across subjects, right?
> Any hints or tips would be much appreciated.
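For the intersection-mask route mentioned above, a minimal sketch could look
like the following, assuming the per-subject masks have already been
resampled to one common grid (e.g. MNI); file names are just placeholders:

    # Minimal sketch -- masks are assumed to be in a shared voxel grid.
    import numpy as np
    import nibabel as nib

    mask_files = ['subj%d/mask_std.nii.gz' % i for i in range(1, 7)]
    masks = [nib.load(f).get_fdata() > 0 for f in mask_files]
    intersection = np.logical_and.reduce(masks)  # voxels common to all

    ref = nib.load(mask_files[0])
    nib.save(nib.Nifti1Image(intersection.astype(np.uint8),
                             ref.affine, ref.header),
             'intersection_mask.nii.gz')

The saved image could then be given as the mask argument when loading each
subject, e.g. fmri_dataset(bold_fname, mask='intersection_mask.nii.gz'), so
every dataset ends up with the same set of voxels/features.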
--
=------------------------------------------------------------------=
Keep in touch www.onerussian.com
Yaroslav Halchenko www.ohloh.net/accounts/yarikoptic