[pymvpa] chunked searchlight hyperalignment
Schoffelen, J.M. (Jan Mathijs)
janmathijs.schoffelen at donders.ru.nl
Fri Jul 23 07:32:46 BST 2021
My name is Jan-Mathijs Schoffelen, and I am a staff scientist at the Donders Institute in the Netherlands. I mainly use MATLAB for my quantitative work, and I have only recently started using PyMVPA, with the intention of running a searchlight hyperalignment analysis.
Thanks to the excellent online resources I am more or less up and running, but when I try to run the hyperalignment I run into prohibitive memory issues that even our largest cluster node - with 256 GB of RAM - cannot handle.
I could not easily find an answer to my questions in this list’s archive or online, and I apologize if some of my questions have already been asked (and answered) before.
The context is a dataset of 112 participants, sampled at 2 mm isotropic resolution (~120,000 grey-matter voxels), with about 2000 time points per participant.
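To give a sense of the memory pressure, here is a back-of-envelope calculation (purely illustrative, and assuming a worst case in which a per-subject mapper were ever held densely over the full grey-matter mask - which may well not be what PyMVPA does internally):

```python
# worst-case storage for ONE dense per-subject mapper over the full mask
n_voxels = 120_000                     # ~grey-matter voxels at 2 mm isotropic
bytes_per_value = 8                    # float64
dense_mapper_gb = n_voxels ** 2 * bytes_per_value / 1e9
print(f"one dense per-subject mapper: {dense_mapper_gb:.0f} GB")  # ~115 GB
```

So even a single subject's mapper, stored densely, would nearly fill the 256 GB node - which is why I started thinking about chunking in the first place.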
From my (perhaps somewhat naive) understanding of the heuristics of the searchlight hyperalignment approach, the mappers are estimated by aggregating searchlight-based mappers, each of which is estimated from a subset of the voxel space (a set of center voxels together with their spherical neighborhoods). Based on this, I thought it should in theory be possible to chunk the brain volume outside the hyperalignment step (e.g. into manageable 50x50x50 chunks, with some overlap at the edges), do the hyperalignment per chunk, and then combine the mappers afterwards (or, alternatively, the mapped results).
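To make the chunking concrete, this is roughly the tiling I have in mind (a plain NumPy sketch; the chunk size, overlap, and toy volume shape are placeholders of my own, not tuned values):

```python
import numpy as np

def chunk_bounds(vol_shape, chunk=50, overlap=6):
    """Yield (start, stop) index pairs per axis, tiling the volume with
    overlapping cubes so every voxel falls in at least one chunk."""
    step = chunk - overlap
    grids = []
    for dim in vol_shape:
        starts = range(0, max(dim - overlap, 1), step)
        grids.append([(s, min(s + chunk, dim)) for s in starts])
    for bx in grids[0]:
        for by in grids[1]:
            for bz in grids[2]:
                yield bx, by, bz

# sanity check on a toy volume: every voxel is covered by some chunk
shape = (112, 100, 90)
covered = np.zeros(shape, dtype=bool)
for (x0, x1), (y0, y1), (z0, z1) in chunk_bounds(shape):
    covered[x0:x1, y0:y1, z0:z1] = True
assert covered.all()
```

Each chunk would then be fed to the hyperalignment step as its own mask, with the overlap region available for stitching the results back together.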
Playing around a bit with this idea myself, I took a very small subset of the data (only a few participants, with two overlapping ~20-voxel cubic masks) and inspected the numeric values in the mappers for voxels that were used in both masks and that lay sufficiently far (> sphere radius) from the boundary of each mask (using sparse_radius=None and combine_neighbormappers=None). The underlying idea was that, for the chunking to be 'allowed' as an alternative to a full-brain-at-once analysis, the mappers for identical voxels should not differ too much as a function of the (sufficiently well-chosen) mask used.
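Schematically, the comparison I did looks like the following toy NumPy sketch (random numbers stand in for the actual mapper coefficients; the 1-D masks, the radius, and the row-wise correlation are simplifications of my own, not anything from PyMVPA):

```python
import numpy as np

rng = np.random.default_rng(0)
radius = 3                                # stand-in for the sphere radius

# two overlapping "masks", flattened to 1-D voxel indices for simplicity
mask_a = np.arange(0, 20)                 # voxels 0..19
mask_b = np.arange(10, 30)                # voxels 10..29

# hypothetical per-mask mapper coefficients: one row per voxel in that mask
map_a = rng.standard_normal((mask_a.size, 8))
map_b = rng.standard_normal((mask_b.size, 8))

# shared voxels that are > radius away from the boundary of BOTH masks
shared = np.intersect1d(mask_a, mask_b)
interior = shared[(shared - mask_a.min() > radius)
                  & (mask_a.max() - shared > radius)
                  & (shared - mask_b.min() > radius)
                  & (mask_b.max() - shared > radius)]

# row-wise correlation between the two mappers for those interior voxels
corrs = [np.corrcoef(map_a[v - mask_a.min()],
                     map_b[v - mask_b.min()])[0, 1]
         for v in interior]
```

If the chunking were harmless, I would have expected the real-data analogues of these correlations to be close to 1 for interior voxels.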
Numerically, however, the mappers for shared voxels are quite different. I haven't yet checked the overlap in the mapped time courses, so the perceived discrepancy may not be as bad as it looks, but I was wondering about the following:
1) Am I wrong in assuming that the 'local shape' of the features in the output space is only affected by local features, i.e. that the outcome of the estimation for a given output voxel should not depend on whether a sufficiently distant set of input voxels is included in the estimation mask?
2) If I am not wrong in 1), is it then theoretically allowed to pre-chunk the brain for the hyperalignment estimation?
3) If 2) is OK, is there a principled way to combine the mappers that are close to the edges of the chunks?
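Regarding 3), the only thing I could come up with myself is naive distance-weighted blending of the overlapping mapper rows, sketched below (purely illustrative; blend_rows and the boundary-distance weights are my own invention, not anything from PyMVPA):

```python
import numpy as np

def blend_rows(row_a, row_b, d_a, d_b):
    """Blend one voxel's mapper row from two overlapping chunks, weighting
    each chunk by the voxel's distance to that chunk's boundary, so the
    chunk in which the voxel is more 'interior' dominates."""
    w_a = d_a / (d_a + d_b)
    return w_a * row_a + (1.0 - w_a) * row_b

row_a = np.array([1.0, 0.0, 2.0])         # mapper row from chunk A
row_b = np.array([0.0, 2.0, 2.0])         # same voxel's row from chunk B
blended = blend_rows(row_a, row_b, d_a=3.0, d_b=1.0)  # voxel deeper in A
```

I suspect this is too crude - it ignores whatever orthogonality or scaling constraints the Procrustes solutions satisfy - hence my question about a principled alternative.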
I would be grateful for any insights, or for pointers to online documentation or literature that I have missed so far.