[pymvpa] Hyperalignment: "For now do not handle invariant in time datasets"

Müller, K. (Katja) K.Muller at psych.ru.nl
Thu Nov 23 10:41:49 UTC 2017


Hi Swaroop, 

Thanks for your response, but I had already checked for zero variance in several ways and could not find any voxels with zero variance. I also checked the variables used in that call (the means and the sums of squares, ssqs) and could not find anything suspicious. 
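
For reference, this is roughly the check I ran (just a sketch; roi0_train_data is the same list of per-subject samples-by-voxels numpy arrays as in my original message):

    import numpy as np

    # Per-voxel variance across time for each subject's (samples x voxels) array;
    # a voxel with zero variance is constant over time ("invariant in time").
    for i, data in enumerate(roi0_train_data):
        variances = np.var(np.asarray(data), axis=0)
        n_invariant = int(np.sum(variances == 0))
        print("subject %d: %d invariant voxels, min variance %g"
              % (i, n_invariant, variances.min()))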

This turned out to be an issue with the numpy SVD, which the ProcrusteanMapper uses by default. For large matrices it can apparently fail with this error: 

>> init_dgesdd failed init


Initializing Hyperalignment() as 

Hyperalignment(alignment=ProcrusteanMapper(svd='dgesvd'))

(i.e. using the LAPACK dgesvd) solves it. There might be documentation about this that I'm not aware of. 
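
For completeness, the working call now looks roughly like this (a sketch based on my earlier snippet; roi0_train_data is again the list of per-subject arrays):

    import mvpa2.datasets
    from mvpa2.algorithms.hyperalignment import Hyperalignment
    from mvpa2.mappers.procrustean import ProcrusteanMapper

    # Wrap each subject's (samples x voxels) array in a PyMVPA Dataset, as before.
    train_pymv_datasets = [mvpa2.datasets.Dataset(data) for data in roi0_train_data]

    # Ask ProcrusteanMapper for the LAPACK dgesvd routine instead of numpy's
    # default dgesdd-based SVD, which failed ("init_dgesdd failed init") on the
    # large ~28k-voxel ROI.
    hyper = Hyperalignment(alignment=ProcrusteanMapper(svd='dgesvd'))
    mappers = hyper(train_pymv_datasets)  # one trained mapper per subject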


Best regards, 
Katja

> On 17.11.2017 at 20:11, Swaroop Guntupalli <swaroopgj at gmail.com> wrote:
> 
> Hi Katja,
> 
> It looks like the code is checking whether the variance/sum of squares
> of the time series in your data is practically zero, and raising that
> error. Check whether all your input datasets have non-zero variance in
> all voxels.
> 
> Best,
> Swaroop
> 
> On Mon, Nov 13, 2017 at 4:41 AM, Müller, K. (Katja)
> <K.Muller at psych.ru.nl> wrote:
>> Dear all,
>> 
>> It would be great if somebody had any information about what causes this issue and how to solve it.
>> 
>> I am running hyperalignment ROI-by-ROI on 3 subjects. It works on every ROI except the largest one with ~28k voxels (the dataset has 53k voxels in total), where it fails with the error message:
>> "For now do not handle invariant in time datasets"
>> 
>> It was suggested earlier (similar mailing list question from 2013) to remove invariant voxels to solve this issue. I tried both remove_invariant_features() and my own code for this, but the behaviour and error message did not change.
>> 
>> My datasets are inside a standard Python list.
>> 
>>       train_pymv_datasets = [mvpa2.datasets.Dataset(dataset) for dataset in roi0_train_data]
>>       hyperalign_fit = mvpa2.algorithms.hyperalignment.Hyperalignment()(train_pymv_datasets)
>> 
>> This is what I get when calling the second line:
>> 
>> init_dgesdd failed init
>> init_dgesdd failed init
>> init_dgesdd failed init
>> init_dgesdd failed init
>> Traceback (most recent call last):
>> File "save_hyperaligned_subject.py", line 77, in <module>
>>   hyperalign_fit = mvpa2.algorithms.hyperalignment.Hyperalignment()(train_pymv_datasets)
>> File ".../anaconda2/lib/python2.7/site-packages/mvpa2/algorithms/hyperalignment.py", line 339, in __call__
>>   self.train(datasets)
>> File ".../anaconda2/lib/python2.7/site-packages/mvpa2/algorithms/hyperalignment.py", line 319, in train
>>   residuals)
>> File ".../anaconda2/lib/python2.7/site-packages/mvpa2/algorithms/hyperalignment.py", line 483, in _level2
>>   m.train(ds_new)
>> File ".../anaconda2/lib/python2.7/site-packages/mvpa2/base/learner.py", line 137, in train
>>   self._train(ds)
>> File ".../anaconda2/lib/python2.7/site-packages/mvpa2/mappers/procrustean.py", line 123, in _train
>>   raise ValueError, "For now do not handle invariant in time datasets"
>> ValueError: For now do not handle invariant in time datasets
>> 
>> 
>> I looked up the respective code section in procrustean.py, but am not sure what the code is checking for there.
>> 
>> 
>> Best regards from the Netherlands,
>> Katja Müller
>> _______________________________________________
>> Pkg-ExpPsy-PyMVPA mailing list
>> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
>> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa


