[pymvpa] Help with map2nifti()
Taku Ito
taku.ito1 at gmail.com
Mon Nov 4 20:08:52 UTC 2013
Great! Works like a charm. Thanks a lot.
On Mon, Nov 4, 2013 at 1:03 PM, Yaroslav Halchenko <debian at onerussian.com> wrote:
>
> On Mon, 04 Nov 2013, Taku Ito wrote:
>
> > You're right, thank you. I was looking through the incorrect data
> > set. My original data set (the one I wrote out right after
> > fmri_dataset) has the following mappers:
> > In [104]: data[1].a.mapper
> > Out[104]: ChainMapper(nodes=[FlattenMapper(shape=(59, 69, 59),
> > auto_train=True, space='voxel_indices'),
> > StaticFeatureSelection(dshape=(240189,), slicearg=array([False, False,
> > False, ..., False, False, False], dtype=bool))])
> > However, from this data set I then constructed a sub-data set by
> > selecting just the samples with the target values I wanted, splitting
> > the data sets accordingly, and then re-concatenating them into a new
> > data set.
>
> that is probably where the mapper got "lost" ;-)
>
> > (I then ran ZScoreMapper() on this new data set, since I only wanted
> > to z-score the samples I was going to use in the searchlight
> > analysis.)
> > In [105]: data_stack[1].a.mapper
> > Out[105]: ZScoreMapper()
> > Finally, I ran the searchlight analysis, which resulted in:
> > In [106]: res[1].a.mapper
> > Out[106]: ZScoreMapper()
> > Would it then be accurate for me to simply use the ChainMapper from
> > the original data set (excluding the ZScoreMapper from the map2nifti
> > call), like so:
> > map2nifti(data[1], res[1].samples)
>
> yes -- should work
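
For reference, a minimal sketch of that suggestion, assuming data[1] is the
dataset loaded with fmri_dataset and res[1] is the searchlight output shown
above (the output filename is just an example):

    from mvpa2.datasets.mri import map2nifti

    # Reverse-map the searchlight results through the original dataset's
    # ChainMapper (FlattenMapper + StaticFeatureSelection); the stacked,
    # z-scored dataset only carries a ZScoreMapper and cannot do this.
    img = map2nifti(data[1], res[1].samples)
    img.to_filename('searchlight_acc.nii.gz')  # example output path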
>
> > Or would it be more accurate to construct a new ChainMapper that
> > includes the ZScoreMapper, append it to the ChainMapper of the
> > original data set, and then attach it as an attribute of the
> > searchlight output data set?
>
> nah -- probably not needed/not worth the hassle. If you did nothing
> that would change the features' origin, taking the original dataset
> would be the easiest and most straightforward approach.
>
> > Thanks again! This has been a huge hassle, and you've been a great
> > help (though I realize now that much of the trouble was my own
> > carelessness... Sorry!).
>
> well -- we should bear our portion of the blame here too ;) maybe we
> should issue a warning whenever hstacking/etc. datasets with mappers,
> so they do not get lost without notice (actually, current master might
> keep them... I would need to check).
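
In the meantime, a possible workaround (a sketch only; the subsets variable
is hypothetical, and whether the attribute actually gets dropped depends on
the PyMVPA version) is to re-attach the original spatial mapper to the
stacked dataset by hand:

    from mvpa2.base.dataset import vstack

    # Re-stack the selected sample subsets; dataset-level attributes
    # (including a.mapper) may be dropped by vstack in some versions.
    data_stack = vstack(subsets)

    # Re-attach the original ChainMapper so the stacked dataset can still
    # be reverse-mapped into voxel/NIfTI space later on.
    if 'mapper' not in data_stack.a:
        data_stack.a['mapper'] = data[1].a.mapper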
>
> Cheers,
> --
> Yaroslav O. Halchenko, Ph.D.
> http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
> Senior Research Associate, Psychological and Brain Sciences Dept.
> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
> Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
> WWW: http://www.linkedin.com/in/yarik
>
--
Takuya Ito
Cognitive Control & Psychopathology Laboratory
Washington University in St. Louis
Cole Neuroscience Laboratory (http://www.mwcole.net/)
Center for Molecular and Behavioral Neuroscience (CMBN)
Rutgers-Newark University