[pymvpa] Memory Error when Loading fMRI dataset
Nick Oosterhof
nikolaas.oosterhof at unitn.it
Tue Jan 14 16:15:48 UTC 2014
On Jan 14, 2014, at 4:53 PM, Jason Ozubko wrote:
> The dataset I'm working with has dimensions of 79x95x68 for each functional volume, after being pre-processed from an original scan of 96x96x37. There are 2151 functional volumes in total across all 5 of my runs. I am, however, masking my data, and so I only end up including about 950 voxels from each volume.
>
> To me the error message seemed to indicate that I was running out of memory while loading the full dataset, before any masking had occurred. I wonder: if I load it piecemeal (one run at a time) with the mask, the masking should cut down the number of voxels (and so the overall memory footprint) significantly, so perhaps I won't end up hitting a memory error once the data is loaded?
That sounds plausible: indeed, the full image is loaded first and the mask is applied afterwards, which reduces the required memory significantly once the unmasked data has been garbage-collected.
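As a rough back-of-the-envelope check (assuming the data end up as 8-byte floats; the exact figures depend on the on-disk dtype and scaling):

    unmasked: 79 * 95 * 68 voxels * 2151 volumes * 8 bytes ≈ 8.8 GB
    masked:             ~950 voxels * 2151 volumes * 8 bytes ≈ 16 MB

So the masked dataset itself is small; the peak cost comes from holding the full, unmasked images during loading.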
It may therefore indeed be better in your case to load the images run-wise, and then manually vstack them. Just be careful that the volume dimensions/positions match across the different runs: with the manual stacking approach there is by default no check for this, unless you use {h,v}stack with 'unique' as the second argument.
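If it helps, here is a minimal sketch of that run-wise approach (untested; the filenames bold_run*.nii.gz and mask.nii.gz, and the use of chunks to label runs, are placeholders rather than anything from your setup):

    from mvpa2.datasets.mri import fmri_dataset
    from mvpa2.base.dataset import vstack

    # Placeholder filenames for the five pre-processed runs and the mask.
    run_files = ['bold_run%d.nii.gz' % (i + 1) for i in range(5)]

    run_datasets = []
    for i, fn in enumerate(run_files):
        # The mask is applied while each run is loaded, so only the
        # ~950 in-mask features are retained; the full 79x95x68 volumes
        # can be garbage-collected once the call returns.
        ds = fmri_dataset(fn, mask='mask.nii.gz', chunks=i)
        run_datasets.append(ds)

    # With 'unique' as the second argument, vstack raises an error if
    # dataset attributes (e.g. the voxel mapping) differ across runs,
    # instead of silently keeping one of them.
    ds_all = vstack(run_datasets, 'unique')

Loading this way, the peak memory use is roughly one run's full image (about a fifth of the whole session) plus the accumulated masked data.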