[pymvpa] Recovering original image dimensions after remove_invariant_features
Nick Oosterhof
n.n.oosterhof at googlemail.com
Tue Oct 21 15:29:19 UTC 2014
On 21 Oct 2014, at 17:20, Shane Hoversten <shanusmagnus at gmail.com> wrote:
> When I run the searchlight using an SVM, I get complaints about C not being able to be normalized. This seems to be because the group mask includes voxels that are not live for all subjects. I can make these warnings go away with more aggressive masking for each subject, but what I'd really like to do is tell the classifier to just ignore constant features.
>
> I found the remove_invariant_features function, which seems just the thing. However, when writing the searchlight results produced on a dataset filtered through this function, the original image dimensions are lost. Is there a way to have the invariant features removed in the same way as when you specify a mask with the dataset, which preserves the original image dimensions in the mapper chain, so that searchlight results written to disk Just Work?
I guess that should be possible by attaching or updating a mapper in the dataset.
We could consider adding a function that does just that, or replacing the current function that removes invariant features. Michael / Yarik, any preference for either option?
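For illustration, the mapper-based idea can be sketched in plain NumPy, independent of PyMVPA internals (the helper names below are hypothetical, not PyMVPA API): remember which feature columns were kept, so that per-feature results can be scattered back into the full image dimensions on reverse-mapping.

```python
import numpy as np

def remove_invariant(data):
    """Drop constant columns of a samples-by-features array.
    Returns (filtered_data, keep_mask); keep_mask plays the role
    of the mapper that remembers original feature positions.
    (Hypothetical helper, not PyMVPA API.)"""
    keep = data.std(axis=0) != 0
    return data[:, keep], keep

def reverse_map(results, keep, volume_shape, fill=0.0):
    """Scatter per-feature results back into the original volume,
    filling removed (invariant) positions with `fill`."""
    full = np.full(np.prod(volume_shape), fill)
    full[np.flatnonzero(keep)] = results
    return full.reshape(volume_shape)

# Toy example: 4 samples, a 2x3 "volume" flattened to 6 features,
# with three constant columns (columns 1, 2, and 4).
data = np.array([[1., 5., 0., 2., 0., 3.],
                 [2., 5., 0., 4., 0., 1.],
                 [3., 5., 0., 6., 0., 2.],
                 [4., 5., 0., 8., 0., 5.]])
filtered, keep = remove_invariant(data)   # filtered has 3 columns left
scores = filtered.mean(axis=0)            # stand-in for searchlight results
vol = reverse_map(scores, keep, (2, 3))   # back to the original 2x3 shape
```

In PyMVPA terms, the `keep` mask would live in a feature-selection mapper appended to the dataset's mapper chain, so that writing results to disk reverses through it automatically.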
More information about the Pkg-ExpPsy-PyMVPA mailing list