[pymvpa] Multiple datasets in one hdf5 file

Nick Oosterhof n.n.oosterhof at googlemail.com
Tue Jul 21 09:56:55 UTC 2015


> On 21 Jul 2015, at 08:32, 孔令军 <201421210014 at mail.bnu.edu.cn> wrote:
> 
> The attachments are my experiment data and the file of attributes (the label of each timepoint, TR=2s).
> Can you help me make a single file like the ''hyperalignment_tutorial_data.hdf5.gz''?
> I attempted to use dcm2niigui.exe to transform each subject's 3D data into a single 4D file

In what format is the data you are trying to use: NIFTI or DICOM? If DICOM, you would first have to convert it to a neuroimaging format, preferably NIFTI (dcm2niigui.exe may be able to do that). You can then read NIFTI files in PyMVPA using "fmri_dataset", and "vstack" can be used to join the volumes from several datasets into a single large dataset.

> , then use h5save to integrate all subjects' data.

h5save will not integrate the subjects' data; it will just save the data to a file, so that you can load it from that file later (using h5load).

> 
> >>> print ds_all[0]
> <Dataset: 384x131072 at float64, <sa: chunks,subject,targets,time_coords,time_indices>, <fa: voxel_indices>, <a: imghdr,imgtype,mapper,voxel_dim,voxel_eldim>>
> But when I input :
> >>>nruns = len(ds_all[0].UC)
> >>> nruns
> 1
> so I can't do the cross validation

Indeed, if all chunks have the same value, you cannot do cross-validation. With fMRI data, chunks are typically assigned based on the acquisition run, so that data from run K has its chunks value set to K.
Thus, in order to use cross-validation, you would have to set .sa.chunks appropriately.
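For example, with a hypothetical acquisition of 4 runs of 96 volumes each (4 × 96 = 384 samples, matching the dataset printed above), the chunks array could be built like this:

```python
import numpy as np

# Hypothetical run structure: 4 runs x 96 volumes = 384 samples
nruns, vols_per_run = 4, 96
chunks = np.repeat(np.arange(nruns), vols_per_run)

print(chunks.shape)       # (384,)
print(np.unique(chunks))  # [0 1 2 3]
# In PyMVPA this would be assigned as: ds.sa['chunks'] = chunks
# After that, len(ds.UC) == 4, and a leave-one-run-out
# cross-validation (e.g. with NFoldPartitioner) becomes possible.
```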


More information about the Pkg-ExpPsy-PyMVPA mailing list