[pymvpa] Obtaining max responsive masks via MVPA
Yaroslav Halchenko
debian at onerussian.com
Wed Oct 6 16:48:53 UTC 2010
On Wed, 06 Oct 2010, MS Al-Rawi wrote:
> First, I have a question about the data
> in [1]http://data.pymvpa.org/datasets/haxby2001/
> Was it preprocessed in any way: motion correction, slice timing,
> normalization to MNI, smoothing, or any other preprocessing?
AFAIK nothing was done to it. Note that subj1 was taken for the
tutorial_data, where it was slightly preprocessed:
http://data.pymvpa.org/datasets/tutorial_data/
> The max-responsive masks it contains were obtained per subject via
> GLM contrast based localizer maps, and this goes for mask4_vt.nii
> too. Why isn't there one mask (per category, or for VT) for all
> subjects, obtained via group analysis, to use in MVPA analyses?
Because no one has created one, I guess ;-) Also, each subject's VT is
different (if you want to be more or less precise).
> Also, is it possible to
> obtain such masks using MVPA via searchlight or any other alternative?
Sure ;) But be warned: to avoid violating independence (i.e.
circularity) in the analysis, if you derive functional localizer masks
(e.g. FFA, etc.) you had better use a separate dataset (run), or
derive them within the cross-validation folds of your MVPA analysis
(thus possibly ending up with different masks across folds).
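To make the per-fold idea concrete, here is a minimal plain-NumPy
sketch (not actual PyMVPA API; the synthetic data, the ANOVA scoring,
and the top-10 voxel threshold are all made-up illustration) of
selecting "localizer" voxels using only the training portion of each
leave-one-run-out fold, so the test data never influences the mask:

```python
import numpy as np

rng = np.random.RandomState(0)
# Hypothetical dataset: 60 samples x 200 voxels, 6 runs (chunks),
# 2 conditions; a few voxels carry real signal.
n_samples, n_voxels = 60, 200
X = rng.randn(n_samples, n_voxels)
y = np.tile([0, 1], n_samples // 2)
chunks = np.repeat(np.arange(6), n_samples // 6)
X[y == 1, :5] += 1.0  # inject signal into the first 5 voxels

def anova_f(X, y):
    """One-way ANOVA F-score per voxel for two classes."""
    g0, g1 = X[y == 0], X[y == 1]
    m0, m1 = g0.mean(axis=0), g1.mean(axis=0)
    gm = X.mean(axis=0)
    ss_between = len(g0) * (m0 - gm) ** 2 + len(g1) * (m1 - gm) ** 2
    ss_within = ((g0 - m0) ** 2).sum(axis=0) + ((g1 - m1) ** 2).sum(axis=0)
    return (ss_between / 1.0) / (ss_within / (len(X) - 2))

masks = []
for test_chunk in np.unique(chunks):
    train = chunks != test_chunk
    # Score voxels on TRAINING runs only -- no circularity.
    f = anova_f(X[train], y[train])
    mask = f >= np.sort(f)[-10]  # keep the top-10 voxels for this fold
    masks.append(mask)
    # ... train a classifier on X[train][:, mask],
    #     test it on X[~train][:, mask] ...
```

Note that the selected voxels can (and usually do) differ from fold to
fold; that is expected, and is the price of keeping the localizer
independent of the test data.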
--
.-.
=------------------------------ /v\ ----------------------------=
Keep in touch // \\ (yoh@|www.)onerussian.com
Yaroslav Halchenko /( )\ ICQ#: 60653192
Linux User ^^-^^ [175555]