[pymvpa] hyperalignment inquiry

Swaroop Guntupalli swaroopgj at gmail.com
Wed Jul 27 21:57:33 UTC 2016

Hi David,

Beta maps should work fine for hyperalignment. The more maps (or TRs) there
are, the better the estimate.
We used within-subject hyperalignment in Haxby et al. 2011, which used maps
from 6 categories (3 successive betas per condition, I think).

vstack() merges multiple datasets into a single dataset, and if there is
any voxel count (nfeatures) mismatch across subjects, it won't work (as
evidenced by the error).
Hyperalignment instead takes a list of datasets, one per subject.
So you can build that list as
ds_all = [ds1, ds2, ...., ds16]
and pass it to Hyperalignment().
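To make the distinction concrete, here is a minimal numpy sketch. The arrays
stand in for real fmri_dataset objects, and the 56x1000 / 56x1023 shapes are
made up for illustration; the mvpa2 call at the end is shown only in comments.

```python
import numpy as np

# Simulated per-subject data: same number of samples (betas), but a
# different voxel count (nfeatures) per subject, as in native space.
ds1 = np.random.randn(56, 1000)  # subject 1: 56 betas x 1000 voxels
ds2 = np.random.randn(56, 1023)  # subject 2: 56 betas x 1023 voxels

# vstack() ultimately calls np.concatenate, which requires equal
# nfeatures across inputs -- the same ValueError as in the traceback:
stacking_failed = False
try:
    np.concatenate([ds1, ds2], axis=0)
except ValueError:
    stacking_failed = True

# Hyperalignment expects a plain Python list, one dataset per subject:
ds_all = [ds1, ds2]  # ..., up to ds16
# With real PyMVPA datasets this would be roughly:
#   from mvpa2.algorithms.hyperalignment import Hyperalignment
#   mappers = Hyperalignment()(ds_all)
```

No stacking is needed at all: the list keeps each subject's native voxel
space intact, which is exactly what Hyperalignment operates on.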


On Wed, Jul 27, 2016 at 2:28 PM, David Soto <d.soto.b at gmail.com> wrote:

> hi,
> in my experiment I have 28 parameter estimate (beta) images in condition A
> and 28 in condition B for each subject (N=16 in total).
> I have performed across-subjects SVM-based searchlight classification
> using MNI-registered individual beta images, and I would like to repeat and
> confirm my results using a searchlight based on hyperaligned data.
> I am not aware of any paper using hyperalignment on beta images, but I
> think this should be possible; any advice would be appreciated.
> I've created individual datasets by concatenating the 28 betas in condition
> A and the 28 in condition B (in the actual experiment conditions A and B can
> appear randomly on each trial). I have 16 nifti datasets, one per subject,
> each in individual native anatomical space. To get a dataset in the same
> format as in the hyperalignment tutorial, I use fmri_dataset on each
> individual whole-brain set of 48 betas and then try to merge them all, i.e.
> ds_merged = vstack((d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, d12, d13,
> d14, d15, d16)), but this gives the error pasted at the end, which I think
> is because the number of voxels differs across subjects. This is one issue.
> Another is that vstack does not appear to produce the list of individual
> datasets that is in the hyperalignment tutorial dataset, but a list of
> individual betas. I would be grateful to receive some tips.
> thanks!
> david
> ---------------------------------------------------------------------------
> ValueError                                Traceback (most recent call last)
> <ipython-input-64-2fef46542bfc> in <module>()
>      19 h5save('/home/dsoto/dsoto/fmri/wmlearning/h5.hdf5', [d1,d2])
>      20 #ds_merged = vstack((d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, d12, d13, d14, d15, d16))
> ---> 21 ds_merged = vstack((d1, d2))
> /usr/local/lib/python2.7/site-packages/mvpa2/base/dataset.pyc in vstack(datasets, a)
>     687                              "datasets have varying attributes.")
>     688     # will puke if not equal number of features
> --> 689     stacked_samp = np.concatenate([ds.samples for ds in datasets], axis=0)
>     690
>     691     stacked_sa = {}
> ValueError: all the input array dimensions except for the concatenation
> axis must match exactly
> _______________________________________________
> Pkg-ExpPsy-PyMVPA mailing list
> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa
