[pymvpa] hyperalignment inquiry

David Soto d.soto.b at gmail.com
Wed Jul 27 22:17:17 UTC 2016


sounds great, thanks. A further thing: I have seen that, in order to
preclude circularity issues, hyperalignment is trained on a subset of the
training chunks and the resulting transformation is then applied to the full
datasets prior to the classification analyses. Given that I have no proper
chunks/runs here, only 56 betas across trials, would it be okay to train
hyperalignment on just half of the 56 betas, e.g. by artificially splitting
the dataset into 2 chunks, each containing 14 betas of class A and 14 of
class B? Or would it be OK to simply train hyperalignment on all 56 betas in
the first instance?
thanks!
david
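[A sketch of the artificial split David describes, in plain NumPy (this is not the mvpa2 API; the array shapes, class layout, and voxel count are made-up placeholders): 56 betas are divided into two balanced pseudo-chunks, so hyperalignment could be trained on one chunk while the other is held out.]

```python
import numpy as np

# Hypothetical data for one subject: 56 beta maps (rows), 28 of class A
# followed by 28 of class B; the 1000-voxel count is invented.
rng = np.random.RandomState(0)
betas = rng.randn(56, 1000)
labels = np.array(['A'] * 28 + ['B'] * 28)

# Assign artificial chunks: split each class in half so that every
# pseudo-chunk ends up with 14 betas of class A and 14 of class B.
chunks = np.zeros(56, dtype=int)
for cls in ('A', 'B'):
    idx = np.where(labels == cls)[0]
    chunks[idx[len(idx) // 2:]] = 1   # second half of each class -> chunk 1

# Train hyperalignment only on chunk 0; hold out chunk 1 for testing.
train_betas = betas[chunks == 0]
print(train_betas.shape)                    # (28, 1000): 14 A + 14 B
print(int(np.sum(labels[chunks == 0] == 'A')))   # 14
```

[Note that splitting by acquisition order like this keeps the two pseudo-chunks temporally separated; whether that is appropriate depends on the design.]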

On 28 July 2016 at 00:00, Swaroop Guntupalli <swaroopgj at gmail.com> wrote:

> The hyperalignment example on PyMVPA uses one beta map for each category
> per run.
>
> On Wed, Jul 27, 2016 at 2:57 PM, Swaroop Guntupalli <swaroopgj at gmail.com>
> wrote:
>
>> Hi David,
>>
>> Beta maps should work fine for hyperalignment. The more maps (or TRs)
>> there are, the better the estimate.
>> We used within-subject hyperalignment in Haxby et al. 2011, which uses
>> maps from 6 categories (I think we used 3 successive betas per condition).
>>
>> vstack() merges multiple datasets into a single dataset, and if there is
>> any mismatch in voxel count (nfeatures) across subjects it won't work (as
>> evidenced by the error).
>> Hyperalignment takes in a list of datasets, one per subject.
>> So you can make a list such as
>> ds_all = [ds1, ds2, ..., ds16]
>> and use that for Hyperalignment()
>>
>> Best,
>> Swaroop
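[A conceptual sketch of the list-of-datasets interface Swaroop describes, in plain NumPy rather than the mvpa2 API: each entry of `ds_all` is one subject's samples-by-features matrix, and each subject gets its own mapper (here a simple orthogonal Procrustes rotation to a fixed reference; PyMVPA's actual Hyperalignment iteratively builds the reference space). All names, shapes, and seeds below are invented for illustration.]

```python
import numpy as np

def procrustes_to_ref(ds, ref):
    """Orthogonal mapper aligning one subject's data to a reference.

    ds, ref: (n_samples, n_features) arrays of matching shape.
    Returns the rotation R minimizing ||ds @ R - ref||_F.
    """
    u, _, vt = np.linalg.svd(ds.T @ ref)
    return u @ vt

rng = np.random.RandomState(1)
ref = rng.randn(56, 20)                     # toy "reference" subject
# A list of per-subject datasets, one entry per subject --
# here each is a noisy random rotation of the reference.
ds_all = []
for _ in range(3):
    q, _ = np.linalg.qr(rng.randn(20, 20))  # random orthogonal transform
    ds_all.append(ref @ q + 0.01 * rng.randn(56, 20))

# One mapper per subject, then project every subject into common space.
mappers = [procrustes_to_ref(ds, ref) for ds in ds_all]
aligned = [ds @ m for ds, m in zip(ds_all, mappers)]
# After alignment, each subject agrees far more closely with the reference.
print(np.linalg.norm(aligned[0] - ref) < np.linalg.norm(ds_all[0] - ref))
```

[The key point carried over from the thread: the input is a Python list with one dataset per subject, and the output is one transformation per subject that is then applied to that subject's full data.]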
>>
>>
>> On Wed, Jul 27, 2016 at 2:28 PM, David Soto <d.soto.b at gmail.com> wrote:
>>
>>> hi,
>>>
>>> in my experiment I have 28 parameter estimate (beta) images in condition
>>> A and 28 in condition B for each subject (N=16 in total).
>>>
>>> i have performed across-subjects SVM-based searchlight classification
>>> using MNI-registered individual beta images, and I would like to repeat
>>> and confirm my results using a searchlight based on hyperaligned data.
>>>
>>> i am not aware of any paper using hyperalignment on beta images, but I
>>> think this should be possible; any advice would be appreciated
>>>
>>> i've created individual datasets concatenating the 28 betas in condition
>>> A and the 28 in condition B (in the actual experiment conditions A and B
>>> appear randomly on each trial). I have 16 nifti datasets, one per subject,
>>> each in individual native anatomical space. In trying to get a dataset in
>>> the same format as in the hyperalignment tutorial, I use fmri_dataset on
>>> each individual wholebrain set of 56 betas and then try to merge them all,
>>> i.e. ds_merged = vstack((d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11,
>>> d12, d13, d14, d15, d16)), but this gives the error pasted at the end,
>>> which I think is because the number of voxels differs across subjects.
>>> This is one issue.
>>>
>>> Another is that the function vstack does not appear to produce the list
>>> of individual datasets that is in the hyperalignment tutorial, but rather
>>> a single stack of individual betas. I would be grateful for some tips.
>>>
>>> thanks!
>>> david
>>> ------------------------------------------------------------
>>> ---------------
>>> ValueError                                Traceback (most recent call
>>> last)
>>> <ipython-input-64-2fef46542bfc> in <module>()
>>>      19 h5save('/home/dsoto/dsoto/fmri/wmlearning/h5.hdf5', [d1,d2])
>>>      20 #ds_merged = vstack((d1, d2, d3, d4, d5, d6, d7,d8,d9, d10, d11,
>>> d12, d13, d14, d15, d16))
>>> ---> 21 ds_merged = vstack((d1, d2))
>>>
>>> /usr/local/lib/python2.7/site-packages/mvpa2/base/dataset.pyc in
>>> vstack(datasets, a)
>>>     687                              "datasets have varying attributes.")
>>>     688     # will puke if not equal number of features
>>> --> 689     stacked_samp = np.concatenate([ds.samples for ds in
>>> datasets], axis=0)
>>>     690
>>>     691     stacked_sa = {}
>>>
>>> ValueError: all the input array dimensions except for the concatenation
>>> axis must match exactly
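[The traceback can be reproduced with plain NumPy (the voxel counts below are invented): `np.concatenate`, which vstack calls internally, requires every subject to have the same number of features, whereas a plain Python list imposes no such constraint, which is why a list is the right container here.]

```python
import numpy as np

# Two toy "subjects" with different voxel counts (nfeatures),
# as happens with datasets left in native anatomical space.
d1 = np.zeros((56, 1000))   # 56 betas x 1000 voxels
d2 = np.zeros((56, 1017))   # 56 betas x 1017 voxels

try:
    np.concatenate([d1, d2], axis=0)   # what vstack does internally
except ValueError as e:
    print('concatenate failed:', e)

# A plain Python list keeps each subject separate -- no shape
# constraint across subjects, which is the input hyperalignment needs.
ds_all = [d1, d2]
print([d.shape for d in ds_all])
```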
>>>
>>> _______________________________________________
>>> Pkg-ExpPsy-PyMVPA mailing list
>>> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
>>> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa
>>>
>>
>>
>
>

