[pymvpa] Hyperalignment: SVD did not converge

Kiefer Katovich kieferk at stanford.edu
Fri May 18 23:27:23 UTC 2012


Hi again,

I will try those two ways of outputting the feature selection; they both
seem straightforward.

I spoke too soon on the SVD non-convergence issue. As it turns out, it
really does seem to depend on how I set the targets for the datasets. SVD
seems to converge better when the targets define more "classes". For
example, if I set the targets to contain 8 different categories, SVD can
converge on the entire dataset, but if I make the targets binary it has a
lot of trouble converging.

In the binary case, I set the first two TRs of every trial to 1 and all
other TRs to 0 in the targets file. If I use this for feature selection
and then hyperalignment, SVD fails to converge for most combinations of
subjects.

However, if I designate trialtype and specific TRs within the targets
file, it is able to converge with all of the subjects.

I'm not too clear on why this would be the case. I assume that the ANOVA
feature selection picks out the voxels that best match the time series of
the targets assigned to each TR? I know that in a univariate analysis with
the binary targets there are definitely voxels that correlate
significantly with the 1s (the first two TRs of every trial), so I figured
the feature selector would probably pull those out.

I hope that wasn't too confusing. I'm just wondering if there is some
criterion I am missing when assigning the targets file that is necessary
for hyperalignment to run correctly.
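
For reference, here is roughly what my target coding and feature selection
look like in code (a simplified sketch; 'ds' and 'binary_labels' are just
placeholder names):

    from mvpa2.suite import *
    import numpy as np

    # one subject's dataset (432 TRs x ~58000 voxels); 'binary_labels' marks
    # the first two TRs of each trial with 1 and everything else with 0
    ds.sa['targets'] = binary_labels

    # one F-score per voxel, computed against whatever target coding is set
    fscores = OneWayAnova()(ds)

    # keep the 3000 voxels with the highest F-scores
    fselector = FixedNElementTailSelector(3000, tail='upper', mode='select')
    selected = fselector(np.asarray(fscores.samples[0]))
    ds_fs = ds[:, selected]   # this reduced dataset goes into hyperalignment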

Thank you,
Kiefer



On Fri, May 18, 2012 at 12:29 PM, Swaroop Guntupalli <swaroopgj at gmail.com> wrote:

> Hi Kiefer,
>
> Glad that it's working.
> Whatever mapper you are using (StaticFeatureSelection?) should have a
> slicearg argument that contains the list of voxel indices.
> Another way is to create an array of ones of the same size as the number
> of features selected and pass it backward through the mappers that the
> data went through before hyperalignment (mapper_name.reverse(new_data)),
> which should put the data back in the original space. You can then use
> map2nifti to map those selected voxels (as ones) into a nifti file.
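>
> Roughly something like this (an untested sketch; 'featsel' would be your
> StaticFeatureSelection mapper, 'ds' the original full-brain dataset, and
> 'ds_fs' the reduced dataset):
>
>     import numpy as np
>     # mark every selected feature with a 1 ...
>     ones = np.ones((1, ds_fs.nfeatures))
>     # ... and map it back into the pre-selection feature space
>     back = featsel.reverse(ones)      # unselected voxels come back as 0
>     # the original dataset provides the mapping back into the volume
>     img = map2nifti(ds, back)
>     img.to_filename('selected_voxels.nii.gz')
>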
> Does that make sense?
>
> Best,
> Swaroop
>
>
>
>
> On Fri, May 18, 2012 at 3:02 PM, Kiefer Katovich <kieferk at stanford.edu> wrote:
>
>> Hey Swaroop,
>>
>> I actually managed to fix the SVD non-convergence issue. It turns out
>> that I had foolishly not been lagging my data for the hemodynamic
>> response. Once I lagged the targets appropriately, I was able to
>> hyperalign all of the brains without encountering any SVD problems.
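>>
>> (Concretely, I just shifted the target vector forward by a couple of TRs
>> to account for the hemodynamic delay, something like the following; the
>> actual lag value is just a placeholder and depends on the TR length:)
>>
>>     import numpy as np
>>     lag = 2                        # in TRs; placeholder value
>>     lagged = np.roll(targets, lag)
>>     lagged[:lag] = 0               # the first TRs have no preceding trial
>>     ds.sa['targets'] = lagged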
>>
>> I would like to visualize the features that are selected and used by
>> hyperalignment for the transformation. What should I do after performing
>> the OneWayAnova and the StaticFeatureSelector to save those selected
>> features into a nifti file that I can overlay on the subjects' brains?
>> It would be really nice to know which areas of the brain end up being
>> selected for alignment (I am allowing it to choose the top 5% of voxels
>> from anywhere in the brain).
>>
>> Thanks for your help!
>> Kiefer
>>
>>
>>
>> On Fri, May 18, 2012 at 6:33 AM, Swaroop Guntupalli <swaroopgj at gmail.com> wrote:
>>
>>> Hi Kiefer,
>>>
>>> Sorry for the late response (I blame abstract submission deadlines).
>>>
>>> I sometimes (though rarely) encounter this SVD non-convergence problem.
>>> One workaround that works for me is to try a different SVD
>>> implementation: dgesvd instead of numpy (an option in ProcrusteanMapper).
>>> If it doesn't converge with any SVD implementation, the matrix is
>>> probably bad for some reason, which might mean one or more of the data
>>> matrices is messed up (SVD is run on the product of two data matrices),
>>> so make sure you exclude all invariant voxels from the data (you can do
>>> that using "remove_invariant_features").
>>> HTH.
>>> Keep us posted on your progress.
>>>
>>> Thanks,
>>> Swaroop
>>>
>>>
>>>
>>> On Tue, May 1, 2012 at 2:38 PM, Kiefer Katovich <kieferk at stanford.edu> wrote:
>>>
>>>> Hi again,
>>>>
>>>> Sorry my messages keep starting new threads; I've been receiving email
>>>> in digest mode but changed my settings to single mail, so I should be
>>>> able to reply properly soon.
>>>>
>>>> First off, I re-ran the iterative test of hyperalignment, starting
>>>> with a different set of subjects. This time 8 of the 21 subjects
>>>> could be hyperaligned to each other, and most of the successful
>>>> subjects were different from those in the last batch. I did this to
>>>> confirm that the success of hyperalignment is contingent on the
>>>> particular set of datasets that you put into it, and not just on some
>>>> subjects being bad and others good.
>>>>
>>>> Now, on to your comments:
>>>>
>>>> Thanks for the clarification on hyperalignment and SVD. I should
>>>> probably read the source code to get a better idea of exactly what
>>>> hyperalignment and the procrustean transformation are attempting to do
>>>> with the datasets I give them.
>>>>
>>>> By "classification error" I only meant the way in which I had coded
>>>> the time points of the dataset into separate classes, not actually
>>>> running a classification algorithm. Sorry for the misconception, that
>>>> was poor phrasing on my part.
>>>>
>>>> A related question: how much of an impact does the coding of time
>>>> points have on hyperalignment? I assume that the feature selector,
>>>> such as OneWayAnova, chooses features according to the "targets" that
>>>> you assign to each time point, and that this is then fed into
>>>> hyperalignment and the procrustean transformation?
>>>>
>>>> Here are some details on my data:
>>>>
>>>> 432 time points
>>>> ~58000 voxels per time point (whole brain, masked)
>>>> 3000 features selected using FixedNElementTailSelector
>>>>
>>>> I assume that it is only the 3000 features from the tail selector
>>>> that hyperalignment and the procrustean transformation use to compute
>>>> the alignment?
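>>>>
>>>> That is, I assume the flow is roughly the following (placeholder
>>>> names):
>>>>
>>>>     # one reduced dataset per subject, each with the same 3000 voxels
>>>>     ds_fs = [sd[:, ids] for sd, ids in zip(subject_datasets, selected_ids)]
>>>>     mappers = Hyperalignment()(ds_fs)   # alignment uses only these features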
>>>>
>>>> Ideally I would not have to restrict the mask to a specific area of
>>>> the brain prior to the feature selection. For this data, I prefer not
>>>> to make an initial assumption about which brain areas contain the best
>>>> features for alignment.
>>>>
>>>> Thank you,
>>>>
>>>> Kiefer
>>>>
