[pymvpa] optimal way of loading the whole-brain data
Dmitry Smirnov
dmi.smirnov07 at gmail.com
Tue May 6 11:19:57 UTC 2014
Dear all,
I was wondering what would be the best way to load massive data in PyMVPA.
Here is my case:
import time
import nibabel as nib
from mvpa2.suite import *
runs = 5
# Trim a number of volumes from the end of a 4D file
def trimImage(filename, cutoff):
    tmp = nib.load(filename)
    return nib.Nifti1Image(tmp.get_data()[:, :, :, :cutoff], tmp.get_affine())
# Size of the files I'm working with
datatemp = nib.load('run1/epi.nii')
datatemp.shape
# (91, 109, 91, 350)
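# For scale, a back-of-the-envelope estimate (assuming float32 voxels;
# the on-disk dtype may differ):
# 91 * 109 * 91 voxels * 350 volumes * 4 bytes ≈ 1.26 GB per run,
# so roughly 6.3 GB for all 5 runs before masking.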
# Start timer
start = time.time()
# Load each of the 5 runs into the dataset
# targets and selector are defined earlier in my script
fds = fmri_dataset(samples=[trimImage('run%i/epi.nii' % (r + 1), 346)
                            for r in range(runs)],
                   targets=targets,
                   chunks=selector,
                   mask='/triton/becs/scratch/braindata/DSmirnov/HarvardOxford/MNI152_T1_2mm_brain_mask.nii')
# End timer
end = time.time()
print end - start
# 14704.9735432 : ~4 hours! Why?
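(For what it's worth, once the dataset is finally assembled I can at least cache it with PyMVPA's h5save/h5load, so this cost is only paid once; the filename below is just an example:)
# Save the assembled dataset to HDF5 for fast reloading in later sessions
h5save('all_runs_fds.hdf5', fds, compression=9)
# ...and in a later session:
fds = h5load('all_runs_fds.hdf5')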
If I want to load all of my data into a single dataset at once, and I am not
constrained by RAM, what would be the best way to do so?
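For example, would loading each run separately and stacking afterwards be
expected to behave any better? A rough sketch (targets_by_run and
chunks_by_run are hypothetical per-run slices of the full targets/selector
lists used above):
# Load one run at a time, then stack into a single dataset
run_datasets = []
for r in range(runs):
    ds = fmri_dataset(samples=trimImage('run%i/epi.nii' % (r + 1), 346),
                      targets=targets_by_run[r],  # hypothetical per-run labels
                      chunks=chunks_by_run[r],    # hypothetical per-run chunks
                      mask='/triton/becs/scratch/braindata/DSmirnov/HarvardOxford/MNI152_T1_2mm_brain_mask.nii')
    run_datasets.append(ds)
fds = vstack(run_datasets)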
Thank you in advance!
BR, Dima
--
Dmitry Smirnov (MSc.)
PhD Candidate, Brain & Mind Laboratory <http://becs.aalto.fi/bml/>
BECS, Aalto University School of Science
00076 AALTO, FINLAND
mobile: +358 50 3015072
email: dmitry.smirnov at aalto.fi