[pymvpa] Dataset with multidimensional feature vector per voxel
Ulrike Kuhl
kuhl at cbs.mpg.de
Tue Nov 24 17:57:37 UTC 2015
Dear all,
thanks so much for your help so far! Unfortunately I've run into another issue in my analysis (sorry to be such a nuisance...):
When I started to run my script on real data I got an 'index out of bounds' exception during searchlighting.
I've figured out that it has to do with the use of a mask - in my toy runs I did not include a mask and everything was fine. On real data, however, I definitely want to use a mask to restrict the analysis to within-brain voxels or specific ROIs.
It seems that the searchlight visits voxels not included in the mask - hence yielding an 'IndexError' when it runs out of the mask. When I use a whole-brain mask with a volume of 1399939 voxels I get:
Traceback (most recent call last):
File "AGLmvpa_small.py", line 154, in <module>
res = sl(DS)
File "/usr/lib/python2.7/dist-packages/mvpa2/base/learner.py", line 259, in __call__
return super(Learner, self).__call__(ds)
File "/usr/lib/python2.7/dist-packages/mvpa2/base/node.py", line 121, in __call__
result = self._call(ds)
File "/usr/lib/python2.7/dist-packages/mvpa2/measures/searchlight.py", line 143, in _call
results = self._sl_call(dataset, roi_ids, nproc)
File "/usr/lib/python2.7/dist-packages/mvpa2/measures/searchlight.py", line 371, in _sl_call
results=self.__handle_all_results(p_results))
File "/usr/lib/python2.7/dist-packages/mvpa2/measures/searchlight.py", line 207, in _concat_results
results = sum(results, [])
File "/usr/lib/python2.7/dist-packages/mvpa2/measures/searchlight.py", line 527, in __handle_all_results
for r in results:
File "/usr/lib/pymodules/python2.7/pprocess.py", line 764, in next
self.store()
File "/usr/lib/pymodules/python2.7/pprocess.py", line 400, in store
self.store_data(channel)
File "/usr/lib/pymodules/python2.7/pprocess.py", line 747, in store_data
data = channel.receive()
File "/usr/lib/pymodules/python2.7/pprocess.py", line 135, in receive
obj = self._receive()
File "/usr/lib/pymodules/python2.7/pprocess.py", line 121, in _receive
raise obj
IndexError: index 2231037 out of bounds 0<=index<1399939
I was able to reproduce the error when using a mask (1736 voxels) on my toy data:
Traceback (most recent call last):
File "DummyMvpa_noisy.py", line 322, in <module>
res_noisy = sl(DS_noisy)
File "/usr/lib/python2.7/dist-packages/mvpa2/base/learner.py", line 259, in __call__
return super(Learner, self).__call__(ds)
File "/usr/lib/python2.7/dist-packages/mvpa2/base/node.py", line 121, in __call__
result = self._call(ds)
File "/usr/lib/python2.7/dist-packages/mvpa2/measures/searchlight.py", line 143, in _call
results = self._sl_call(dataset, roi_ids, nproc)
File "/usr/lib/python2.7/dist-packages/mvpa2/measures/searchlight.py", line 371, in _sl_call
results=self.__handle_all_results(p_results))
File "/usr/lib/python2.7/dist-packages/mvpa2/measures/searchlight.py", line 207, in _concat_results
results = sum(results, [])
File "/usr/lib/python2.7/dist-packages/mvpa2/measures/searchlight.py", line 527, in __handle_all_results
for r in results:
File "/usr/lib/pymodules/python2.7/pprocess.py", line 764, in next
self.store()
File "/usr/lib/pymodules/python2.7/pprocess.py", line 400, in store
self.store_data(channel)
File "/usr/lib/pymodules/python2.7/pprocess.py", line 747, in store_data
data = channel.receive()
File "/usr/lib/pymodules/python2.7/pprocess.py", line 135, in receive
obj = self._receive()
File "/usr/lib/pymodules/python2.7/pprocess.py", line 121, in _receive
raise obj
IndexError: index 1736 out of bounds 0<=index<1736
Why does the searchlight run out of the mask?
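[Editor's note: a minimal, self-contained numpy sketch of what such a mismatch can look like - not PyMVPA's actual internals. The assumption here is that neighborhood voxel ids are defined in the coordinate space of the full volume while the dataset's feature axis only contains the masked voxels; indexing the masked array with volume-space ids then overruns the bounds, exactly like the `0<=index<1736` message above. All names (`vol2feat`, `roi_volume_ids`, etc.) are illustrative.]

```python
import numpy as np

# Toy volume: 10 voxels, a mask keeping only 4 of them.
n_voxels = 10
mask = np.zeros(n_voxels, dtype=bool)
mask[[2, 3, 5, 7]] = True

data = np.random.randn(6, n_voxels)          # 6 samples x 10 voxels
masked_data = data[:, mask]                  # shape (6, 4): features renumbered 0..3

# A searchlight "sphere" built from *volume* voxel ids, not masked feature ids:
roi_volume_ids = np.array([5, 6, 7])

try:
    sphere = masked_data[:, roi_volume_ids]  # 5, 6, 7 >= 4 -> IndexError
except IndexError as exc:
    print("IndexError:", exc)

# What has to happen instead: translate volume ids into masked-feature ids
# and drop voxels that fall outside the mask.
vol2feat = -np.ones(n_voxels, dtype=int)
vol2feat[mask] = np.arange(mask.sum())
feat_ids = vol2feat[roi_volume_ids]
feat_ids = feat_ids[feat_ids >= 0]
sphere = masked_data[:, feat_ids]
print(sphere.shape)                          # (6, 2)
```

If something in the pipeline (e.g. the dataset vs. the neighborhood definition) ends up using different feature numberings, the symptom is precisely an index equal to or above the masked feature count.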
Thanks again for your help!
Ulrike
----- Original Message -----
From: "Bill Broderick" <billbrod at gmail.com>
To: "pkg-exppsy-pymvpa" <pkg-exppsy-pymvpa at lists.alioth.debian.org>
Sent: Thursday, 19 November, 2015 16:15:05
Subject: Re: [pymvpa] Dataset with multidimensional feature vector per voxel
No problem! I've only used the random majority under-sampling with replacement so far, but it has a whole lot of options.
On Thu, Nov 19, 2015 at 10:01 AM, Yaroslav Halchenko < debian at onerussian.com > wrote:
On Thu, 19 Nov 2015, Bill Broderick wrote:
> I ran into a similar issue with unbalanced classification and wanted
> to look at the individual partitions as well. I couldn't figure
> out how to do so just in PyMVPA, so I ended up using a separate Python module,
> UnbalancedDataset: https://github.com/fmfn/UnbalancedDataset . With
> that, I sub-sampled the more common group to balance the two groups,
> which created a new dataset. I was then able to investigate what was
> going on in that dataset and what each of the partitions looks like as
> if it were a regular dataset.
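[Editor's note: the sub-sampling idea described above - keep all minority-class samples and a random equal-sized subset of the majority class - can be sketched with plain numpy, independent of the UnbalancedDataset package's own API. All names here are illustrative.]

```python
import numpy as np

rng = np.random.RandomState(0)

# Toy labels: class 'a' is over-represented (8 vs. 3).
targets = np.array(['a'] * 8 + ['b'] * 3)

# Random under-sampling: for every class, draw as many samples
# (without replacement) as the smallest class has.
classes, counts = np.unique(targets, return_counts=True)
minority_n = counts.min()

keep = []
for cls in classes:
    idx = np.where(targets == cls)[0]
    keep.extend(rng.choice(idx, size=minority_n, replace=False))
keep = np.sort(keep)

balanced = targets[keep]
print(np.unique(balanced, return_counts=True))  # 3 samples of each class
```

The index array `keep` can then be used to slice the samples of the original dataset, yielding the balanced dataset whose partitions one can inspect like any other.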
That is a sweet little toolbox -- thanks for sharing!
--
Yaroslav O. Halchenko
Center for Open Neuroscience http://centerforopenneuroscience.org
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik
_______________________________________________
Pkg-ExpPsy-PyMVPA mailing list
Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa
--
Max Planck Institute for Human Cognitive and Brain Sciences
Department of Neuropsychology (A219)
Stephanstraße 1a
04103 Leipzig
Phone: +49 (0) 341 9940 2625
Mail: kuhl at cbs.mpg.de
Internet: http://www.cbs.mpg.de/staff/kuhl-12160