[pymvpa] hyperalignment inquiry

David Soto d.soto.b at gmail.com
Fri Jul 29 15:24:53 UTC 2016


Hi, thanks, I will try that. I understand, therefore, that the number of
features per subject need not be equal across subjects for searchlight
hyperalignment - but please correct me if I am wrong.
best
david

On 29 July 2016 at 16:17, Swaroop Guntupalli <swaroopgj at gmail.com> wrote:

> Hi David,
>
> If you are using searchlight hyperalignment, it is advisable to align the
> data across subjects using anatomy first. The simplest approach would be to
> align them to an MNI template and then run the searchlight hyperalignment.
> Our tutorial dataset is affine aligned to the MNI template.
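As a quick check of that prerequisite: after affine alignment to a common template, every subject should expose the same number of features. A minimal sketch (editor's illustration; synthetic NumPy arrays stand in for the per-subject PyMVPA datasets, and the sizes are arbitrary):

```python
import numpy as np

# Synthetic stand-ins for per-subject data (samples x voxels) after
# affine alignment to MNI; in PyMVPA these would come from fmri_dataset().
rng = np.random.RandomState(0)
ds_all = [rng.randn(56, 43506) for _ in range(3)]

# Searchlight hyperalignment assumes a common voxel grid across subjects.
nfeature_counts = {ds.shape[1] for ds in ds_all}
print(len(nfeature_counts) == 1)  # True when all subjects share one grid
```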
>
> Best,
> Swaroop
>
> On Thu, Jul 28, 2016 at 10:51 AM, David Soto <d.soto.b at gmail.com> wrote:
>
>> Thanks Swaroop, I managed to get the dataset into the right format as per
>> the searchlight hyperalignment tutorial.
>> However, when I run the hyperalignment I get the following error (IndexError:
>> index 46268 is out of bounds for axis 1 with size 43506; see further
>> below). To recap: the dataset is a concatenation of each subject's data, each
>> in individual native space, so the number of features differs across
>> subjects.
>> The code I use is the same as in the tutorial, namely the following. Any
>> feedback would be great, thanks, david
>> cv = CrossValidation(clf, NFoldPartitioner(attr='subject'),
>>                      errorfx=mean_match_accuracy)
>>
>> bsc_slhyper_results = []  # collect per-fold results
>> for test_run in range(nruns):
>>     ds_train = [sd[sd.sa.chunks != test_run, :] for sd in ds_all]
>>     ds_test = [sd[sd.sa.chunks == test_run, :] for sd in ds_all]
>>
>>     slhyper = SearchlightHyperalignment(radius=3, featsel=0.4,
>>                                         sparse_radius=3)
>>     slhypmaps = slhyper(ds_train)
>>     ds_hyper = [h.forward(sd) for h, sd in zip(slhypmaps, ds_test)]
>>
>>     ds_hyper = vstack(ds_hyper)
>>     zscore(ds_hyper, chunks_attr='subject')
>>     res_cv = cv(ds_hyper)
>>     bsc_slhyper_results.append(res_cv)
>>
>> OUTPUT MESSAGE.........
>> Performing classification analyses...
>>   between-subject (searchlight hyperaligned)...
>>
>> ---------------------------------------------------------------------------
>> IndexError                                Traceback (most recent call last)
>> <ipython-input-191-85bdb873d4f1> in <module>()
>>      24     # Searchlight Hyperalignment returns a list of mappers corresponding to
>>      25     # subjects in the same order as the list of datasets we passed in.
>> ---> 26     slhypmaps = slhyper(ds_train)
>>      27
>>      28     # Applying hyperalignment parameters is similar to applying any mapper in
>>
>> /usr/local/lib/python2.7/site-packages/mvpa2/algorithms/searchlight_hyperalignment.pyc in __call__(self, datasets)
>>     626             node_blocks = np.array_split(roi_ids, params.nblocks)
>>     627             p_results = [self._proc_block(block, datasets, hmeasure, queryengines)
>> --> 628                          for block in node_blocks]
>>     629         results_ds = self.__handle_all_results(p_results)
>>     630         # Dummy iterator for, you know, iteration
>>
>> /usr/local/lib/python2.7/site-packages/mvpa2/algorithms/searchlight_hyperalignment.pyc in _proc_block(self, block, datasets, featselhyper, queryengines, seed, iblock)
>>     387                 continue
>>     388             # selecting neighborhood for all subject for hyperalignment
>> --> 389             ds_temp = [sd[:, ids] for sd, ids in zip(datasets, roi_feature_ids_all)]
>>     390             if self.force_roi_seed:
>>     391                 roi_seed = np.array(roi_feature_ids_all[self.params.ref_ds]) == node_id
>>
>> /usr/local/lib/python2.7/site-packages/mvpa2/datasets/base.pyc in __getitem__(self, args)
>>     139
>>     140         # let the base do the work
>> --> 141         ds = super(Dataset, self).__getitem__(args)
>>     142
>>     143         # and adjusting the mapper (if any)
>>
>> /usr/local/lib/python2.7/site-packages/mvpa2/base/dataset.pyc in __getitem__(self, args)
>>     445         if isinstance(self.samples, np.ndarray):
>>     446             if np.any([isinstance(a, slice) for a in args]):
>> --> 447                 samples = self.samples[args[0], args[1]]
>>     448             else:
>>     449                 # works even with bool masks (although without
>>
>> IndexError: index 46268 is out of bounds for axis 1 with size 43506
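The error above is what happens when feature indices derived from one subject's voxel grid are applied to a subject with fewer voxels. A minimal reproduction (editor's sketch; plain NumPy arrays, with the sizes taken from the traceback):

```python
import numpy as np

ref = np.zeros((56, 46269))    # subject whose grid contains feature id 46268
other = np.zeros((56, 43506))  # subject with fewer native-space voxels

ids = np.array([46268])
_ = ref[:, ids]                # fine on the larger grid
try:
    _ = other[:, ids]          # same ids, smaller grid
except IndexError as e:
    print(e)                   # index 46268 is out of bounds for axis 1 ...
```

Aligning all subjects to one template (or masking them to a common grid) removes the mismatch.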
>>
>> On 28 July 2016 at 00:25, Swaroop Guntupalli <swaroopgj at gmail.com> wrote:
>>
>>> Hi David,
>>>
>>> If you have limited data, you can use a part of it (however you split
>>> the data for training and testing) to train hyperalignment, and also use
>>> the same part to train the classifier; then apply hyperalignment and test
>>> the classifier on the left-out part. Yes, you can artificially create 2
>>> chunks (or more if you prefer).
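Creating such artificial chunks amounts to assigning a balanced chunk label per beta. A sketch (editor's illustration in plain NumPy; in PyMVPA the resulting array would be stored as the dataset's `chunks` sample attribute):

```python
import numpy as np

targets = np.array(['A', 'B'] * 28)  # 28 betas per condition, 56 in total
chunks = np.zeros(56, dtype=int)
for cond in ('A', 'B'):
    idx = np.where(targets == cond)[0]
    chunks[idx[len(idx) // 2:]] = 1  # second half of each condition -> chunk 1

# Each chunk now holds 14 betas of A and 14 of B.
for c in (0, 1):
    print(c, (targets[chunks == c] == 'A').sum(),
             (targets[chunks == c] == 'B').sum())
```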
>>>
>>>
>>> On Wed, Jul 27, 2016 at 3:17 PM, David Soto <d.soto.b at gmail.com> wrote:
>>>
>>>> sounds great, thanks. A further thing: I have seen that, in order
>>>> to preclude circularity issues, hyperalignment is implemented on a subset
>>>> of training chunks and then the transformation is applied to the full
>>>> datasets prior to classification analyses. Given that I have no proper
>>>> chunks/runs here, but only 56 betas across trials, would it be okay to
>>>> train hyperalignment on just half of the 56 betas, e.g. artificially split
>>>> the dataset into 2 chunks, each containing 14 betas of class A and 14 of
>>>> class B? Or would it be OK to train hyperalignment on all 56 betas in the
>>>> first instance?
>>>> thanks!
>>>> david
>>>>
>>>> On 28 July 2016 at 00:00, Swaroop Guntupalli <swaroopgj at gmail.com>
>>>> wrote:
>>>>
>>>>> The hyperalignment example on PyMVPA uses one beta map for each
>>>>> category per run.
>>>>>
>>>>> On Wed, Jul 27, 2016 at 2:57 PM, Swaroop Guntupalli <
>>>>> swaroopgj at gmail.com> wrote:
>>>>>
>>>>>> Hi David,
>>>>>>
>>>>>> Beta maps should work fine for hyperalignment. The more maps (or TRs)
>>>>>> there are, the better the estimate.
>>>>>> We used within-subject hyperalignment in Haxby et al. 2011, which
>>>>>> uses maps from 6 categories (we used 3 successive betas per condition, I
>>>>>> think).
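At its core, each per-subject estimate is an orthogonal (Procrustes) transformation fit to the stack of maps, which is why more maps give a more stable fit. A bare-bones sketch of that single step (editor's illustration with NumPy; PyMVPA's actual ProcrusteanMapper additionally handles demeaning and scaling, and full hyperalignment iterates this across subjects):

```python
import numpy as np

rng = np.random.RandomState(1)
n_maps, n_voxels = 56, 20  # more maps than voxels -> a unique solution
ref_train = rng.randn(n_maps, n_voxels)                  # reference subject
R_true = np.linalg.qr(rng.randn(n_voxels, n_voxels))[0]  # hidden rotation
subj_train = ref_train @ R_true                          # second subject

# Orthogonal Procrustes: rotation R minimizing ||subj_train @ R - ref_train||
U, _, Vt = np.linalg.svd(subj_train.T @ ref_train)
R = U @ Vt

# Apply the learned mapping to held-out maps from the same subject.
ref_test = rng.randn(28, n_voxels)
subj_test = ref_test @ R_true
print(np.allclose(subj_test @ R, ref_test))  # True
```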
>>>>>>
>>>>>> vstack() merges multiple datasets into a single dataset, and if there
>>>>>> is any voxel count (nfeatures) mismatch across subjects, it won't work (as
>>>>>> evidenced by the error).
>>>>>> Hyperalignment takes in a list of datasets, one per subject.
>>>>>> So, you can make that a list, as in
>>>>>> ds_all = [ds1, ds2, ...., ds16]
>>>>>> and use it for Hyperalignment()
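The distinction is easy to demonstrate with the underlying NumPy call (editor's sketch; vstack() ultimately concatenates the samples arrays, which is exactly what fails when voxel counts differ, whereas a plain Python list imposes no such constraint):

```python
import numpy as np

d1 = np.zeros((56, 43506))  # subject 1: 56 betas x 43506 native-space voxels
d2 = np.zeros((56, 46269))  # subject 2: a different voxel count

try:
    np.concatenate([d1, d2], axis=0)  # what vstack does to the samples arrays
except ValueError:
    print("vstack-style stacking fails on mismatched nfeatures")

ds_all = [d1, d2]   # a plain list, one entry per subject, is what
print(len(ds_all))  # Hyperalignment expects
```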
>>>>>>
>>>>>> Best,
>>>>>> Swaroop
>>>>>>
>>>>>>
>>>>>> On Wed, Jul 27, 2016 at 2:28 PM, David Soto <d.soto.b at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> hi,
>>>>>>>
>>>>>>> in my experiment I have 28 beta (parameter estimate) images in
>>>>>>> condition A and 28 in condition B for each subject (N=16 in total).
>>>>>>>
>>>>>>> I have performed across-subjects SVM-based searchlight
>>>>>>> classification using MNI-registered individual beta images, and I would
>>>>>>> like to repeat and confirm my results using a searchlight based on
>>>>>>> hyperaligned data.
>>>>>>>
>>>>>>> I am not aware of any paper using hyperalignment on beta images, but
>>>>>>> I think this should be possible; any advice would be welcome.
>>>>>>>
>>>>>>> I've created individual datasets concatenating the 28 betas in
>>>>>>> condition A and the 28 in condition B (in the actual experiment
>>>>>>> conditions A and B can appear randomly on each trial). I have 16 nifti
>>>>>>> datasets, one per subject, each in individual native anatomical space.
>>>>>>> In trying to get a dataset in the same format as in the hyperalignment
>>>>>>> tutorial, I use fmri_dataset on each individual wholebrain set of 56
>>>>>>> betas and then try to merge them all, i.e. ds_merged = vstack((d1, d2,
>>>>>>> d3, d4, d5, d6, d7, d8, d9, d10, d11, d12, d13, d14, d15, d16)), but
>>>>>>> this gives the error pasted at the end, which I think is because the
>>>>>>> number of voxels differs across subjects. This is one issue.
>>>>>>>
>>>>>>> Another is that vstack does not appear to produce the list of
>>>>>>> individual datasets that is in the hyperalignment tutorial dataset, but
>>>>>>> rather a single dataset stacking the individual betas. I would be
>>>>>>> grateful for some tips.
>>>>>>>
>>>>>>> thanks!
>>>>>>> david
>>>>>>> ---------------------------------------------------------------------------
>>>>>>> ValueError                                Traceback (most recent call last)
>>>>>>> <ipython-input-64-2fef46542bfc> in <module>()
>>>>>>>      19 h5save('/home/dsoto/dsoto/fmri/wmlearning/h5.hdf5', [d1,d2])
>>>>>>>      20 #ds_merged = vstack((d1, d2, d3, d4, d5, d6, d7,d8,d9, d10, d11, d12, d13, d14, d15, d16))
>>>>>>> ---> 21 ds_merged = vstack((d1, d2))
>>>>>>>
>>>>>>> /usr/local/lib/python2.7/site-packages/mvpa2/base/dataset.pyc in vstack(datasets, a)
>>>>>>>     687                              "datasets have varying attributes.")
>>>>>>>     688     # will puke if not equal number of features
>>>>>>> --> 689     stacked_samp = np.concatenate([ds.samples for ds in datasets], axis=0)
>>>>>>>     690
>>>>>>>     691     stacked_sa = {}
>>>>>>>
>>>>>>> ValueError: all the input array dimensions except for the concatenation axis must match exactly
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Pkg-ExpPsy-PyMVPA mailing list
>>>>>>> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
>>>>>>>
>>>>>>> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>

