[pymvpa] Memory Error when Loading fMRI dataset

Jason Ozubko jozubko at research.baycrest.org
Tue Jan 14 15:53:20 UTC 2014


I am using the NeuroDebian virtual machine and I have given it 4GB of
memory to work with (the maximum it can handle, as I understand it, since
it's a 32-bit OS).

Each functional volume in my dataset has dimensions of 79x95x68 after
pre-processing (the original scans are 96x96x37), and there are 2151
volumes in total across all 5 of my runs.  I am, however, masking my data,
so I only end up including about 950 voxels from each volume.
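
For what it's worth, here is the quick back-of-the-envelope arithmetic I
did (just a sketch, assuming the samples end up stored as 64-bit floats;
if they stay float32 the numbers would be about half of this):

    import numpy as np

    # volumes are 79x95x68 after preprocessing; 2151 volumes across 5 runs
    n_voxels  = 79 * 95 * 68                        # = 510,340 voxels per volume
    n_volumes = 2151
    bytes_per_value = np.dtype('float64').itemsize  # 8 bytes, assuming float64

    full_gb   = n_voxels * n_volumes * bytes_per_value / 1024.0**3   # ~8.2 GB
    masked_gb = 950 * n_volumes * bytes_per_value / 1024.0**3        # ~0.02 GB
    print('unmasked: ~%.1f GB, masked: ~%.2f GB' % (full_gb, masked_gb))

So the full, unmasked dataset alone would need roughly 8 GB, well over the
4GB the VM can see, while the masked data would be tiny.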

To me the error message seemed to indicate that I was running out of
memory while loading the full dataset, before any masking had occurred.  I
wonder: if I load the data piecemeal (one run at a time) with the mask
applied, the masking should cut the number of voxels (and so the overall
memory footprint) down significantly, so perhaps I won't end up hitting
the memory error once the data is loaded?
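
In case it helps to see it concretely, here is roughly what I have in mind
(just a sketch; the file names are placeholders for my real data, and I'm
assuming fmri_dataset's mask argument plus vstack are the right tools for
this):

    from mvpa2.suite import fmri_dataset, vstack

    run_files = ['run%d.nii' % r for r in range(1, 6)]   # placeholder names
    mask_file = 'mask.nii'                               # placeholder mask

    run_datasets = []
    for chunk, nii in enumerate(run_files):
        # load one run at a time; applying the mask here should mean only
        # the ~950 in-mask voxels of each volume are kept in memory
        ds = fmri_dataset(samples=nii, chunks=chunk, mask=mask_file)
        # ds.sa['targets'] would be filled in here from this run's design
        run_datasets.append(ds)

    # stack the masked per-run datasets into one dataset for the analysis
    fds = vstack(run_datasets)

As far as I understand, since every run goes through the same mask, the
voxel features should line up across runs and the stacked dataset should
behave the same as one loaded from a single big .nii.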

Thanks in advance for your thoughts.

Cheers,
Jason






On Mon, Jan 13, 2014 at 7:40 PM, Yaroslav Halchenko
<debian at onerussian.com> wrote:

>
> On Mon, 13 Jan 2014, Jason Ozubko wrote:
>
> >    Hello,
> >    I'm currently trying to set up my first MVPA analysis after working
> >    through a few tutorials and I'm hitting a snag at the first stage,
> >    loading in the data!  I've got a rather long experiment (~80 minutes,
> >    2.2 s TR, broken into about 5 sessions/runs).  I do some minimal
> >    preprocessing in SPM before outputting 4D nii files that contain the
> >    functional data that I want to use in my MVPA analyses.
> >    My initial idea was to concatenate all the runs into 1 big .nii and
> >    then load that in as a single dataset using fmri_dataset, however
> >    when I try this, I get a memory error (see the attached txt file for
> >    a full output).
> >    I have since learned that python does not seem to crash if I try
> >    loading in the runs individually (i.e., if I have 5 separate .nii
> >    files, each containing functional data from a different run).  I am
> >    wondering, if I loaded in the runs individually, then concatenated
> >    the loaded datasets into one big dataset and added the sa.targets and
> >    sa.chunks properties appropriately, could I then proceed to the next
> >    steps of my analysis and carry on normally, or will there be issues
> >    with the fact that I have concatenated the datasets together?
>
> no -- concatenated datasets are just as good.  But I have a concern about
> the memory limit you have experienced: most probably you would keep
> hitting it again.  How much memory do you have available, and how big is
> the actual dataset? (we still do not know the dimensions to do the
> algebra ourselves here).  Also interesting: that big .nii -- did it have
> the same original datatype?
>
> --
> Yaroslav O. Halchenko, Ph.D.
> http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
> Senior Research Associate,     Psychological and Brain Sciences Dept.
> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
> Phone: +1 (603) 646-9834                       Fax: +1 (603) 646-1419
> WWW:   http://www.linkedin.com/in/yarik
>

