[pymvpa] searchlight analysis
Pegah Kassraian Fard
pegahkf at gmail.com
Tue Sep 12 13:38:07 UTC 2017
Many thanks for all the feedback. I get the highest accuracy at voxel
coordinate [15, 45, 65], though I am not sure everything in the code is
logically correct now. For the moment I am still classifying the whole
brain, just as a check.
Also, the plotting seems completely off, and the output is either
non-existent or really strange. Is there any chance someone could quickly
run the code? Or would you have any other suggestions?
Best regards,
Pegah
- Newest code below
- Data is here: https://www.dropbox.com/sh/6qnrt2l2othc83g/AADBpc-eJSK5893Vrz_A8BS1a?dl=0
Code:
#
from glob import glob
import os
import numpy as np
from mvpa2.suite import *
%matplotlib inline
# enable debug output for searchlight call
if __debug__:
    debug.active += ["SLC"]
# change working directory HERE (to the subject folder 'S1')
os.chdir('mypath/S1')
# use glob to get the filenames of .nii data into a list
nii_fns = sorted(glob('beta*.nii'))  # sort: glob's order is not guaranteed
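As pointed out further down the thread, glob's order is not guaranteed, so the file list should be sorted. One caveat: plain sorted() is lexicographic, so unpadded numbers sort as 'beta10' before 'beta2'. If memory serves, SPM zero-pads its beta images (beta_0001.nii), where lexicographic order is fine; otherwise a natural-sort key helps. A small illustrative sketch (the filenames are made up):

```python
import re

def natural_key(fn):
    """Split a filename into text and integer runs so that 'beta2' < 'beta10'."""
    return [int(tok) if tok.isdigit() else tok for tok in re.split(r'(\d+)', fn)]

fns = ['beta10.nii', 'beta2.nii', 'beta1.nii']
lex = sorted(fns)                    # lexicographic: 'beta10' sorts before 'beta2'
nat = sorted(fns, key=natural_key)   # natural order: beta1, beta2, beta10
```

Either way, do verify that the sorted filenames line up with the label list below.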
# read data
labels = [
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
7, 7, 7, 7, 7, 7,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
7, 7, 7, 7, 7, 7, 7, 7
]
grps = np.repeat([0, 1], 37, axis=0) # used for `chunks`
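The long hand-written label list is easy to mistype; it can be rebuilt programmatically and sanity-checked against the chunk vector. A numpy sketch using the run counts implied by the list above (15/15/6 and 15/15/8):

```python
import numpy as np

# condition counts per run, copied from the hand-written list above
run1 = np.repeat([1, 2, 7], [15, 15, 6])   # 36 volumes: 15x cond 1, 15x cond 2, 6x cond 7
run2 = np.repeat([1, 2, 7], [15, 15, 8])   # 38 volumes: 15x, 15x, 8x
labels = np.concatenate([run1, run2])

# chunk assignment as in the code above: first 37 vs. last 37 volumes
grps = np.repeat([0, 1], 37)

# sanity check: one label and one chunk per beta image
assert len(labels) == len(grps) == 74
# note: the chunk boundary (37/37) does not coincide with the run
# boundary (36/38) implied by the label list -- worth double-checking
```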
sprefix_ = 'vxl'
sprefix_indices_key = '_'.join([sprefix_, 'indices'])
tprefix_ = 'tpref'
db = fmri_dataset(  # imported by `from mvpa2.suite import *`
    nii_fns, targets=labels, chunks=grps, mask=None, sprefix=sprefix_,
    tprefix=tprefix_, add_fa=None
)
# use only the samples of which labels are 1 or 2
db12 = db.select(sadict={'targets': [1,2]})
# in-place z-score normalization
zscore(db12)
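zscore(db12) standardizes each feature (voxel) across samples, in place. What that amounts to, in a plain-numpy sketch (illustrative only, not PyMVPA's implementation):

```python
import numpy as np

def zscore_features(X):
    """Standardize each column (feature/voxel) to zero mean, unit variance."""
    mu = X.mean(axis=0)
    sd = X.std(axis=0)
    sd = np.where(sd == 0, 1.0, sd)  # guard constant features against division by zero
    return (X - mu) / sd

# toy data: 3 samples x 2 features; the second feature is constant
X = np.array([[1., 4.], [3., 4.], [5., 4.]])
Z = zscore_features(X)
```

If memory serves, PyMVPA's zscore also accepts a chunks_attr argument to normalize within each chunk separately, which can be preferable when runs differ in signal scale.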
# choose classifier
clf = LinearNuSVMC()
# setup measure to be computed by Searchlight
# cross-validated mean transfer using an N-fold dataset splitter
cv = CrossValidation(clf, NFoldPartitioner())
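With two chunks, NFoldPartitioner amounts to leave-one-chunk-out cross-validation: train on one chunk, test on the other, and average the two error rates. A toy numpy sketch of that scheme, using a nearest-class-mean classifier purely for illustration (the actual analysis uses LinearNuSVMC; all names here are made up, not PyMVPA API):

```python
import numpy as np

def nearest_mean_predict(Xtr, ytr, Xte):
    """Assign each test sample to the class with the closest training mean."""
    classes = np.unique(ytr)
    means = np.stack([Xtr[ytr == c].mean(axis=0) for c in classes])
    d = ((Xte[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

def leave_one_chunk_out_error(X, y, chunks):
    """Hold out each chunk in turn; return the mean test error rate."""
    errs = []
    for c in np.unique(chunks):
        test = chunks == c
        pred = nearest_mean_predict(X[~test], y[~test], X[test])
        errs.append(np.mean(pred != y[test]))
    return float(np.mean(errs))

# tiny separable example: two chunks of four samples, two classes
X = np.array([[0., 0.], [0.1, 0.], [5., 5.], [5.1, 5.],
              [0., 0.1], [0.2, 0.], [5., 5.1], [4.9, 5.]])
y = np.array([1, 1, 2, 2, 1, 1, 2, 2])
chunks = np.repeat([0, 1], 4)
err = leave_one_chunk_out_error(X, y, chunks)
```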
# define searchlight methods
radius_ = 3
sl = sphere_searchlight(
    cv, radius=radius_, space=sprefix_indices_key,
    postproc=mean_sample()
)
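The radius is in voxel units, and it is easy to underestimate how many voxels a sphere covers. A sketch that counts lattice offsets within the radius (I believe this inclusive Euclidean criterion matches PyMVPA's default Sphere, but treat that as an assumption):

```python
import numpy as np

def sphere_size(radius):
    """Count integer voxel offsets within Euclidean distance `radius` of the center."""
    r = int(np.floor(radius))
    ax = np.arange(-r, r + 1)
    dx, dy, dz = np.meshgrid(ax, ax, ax, indexing='ij')
    return int(np.sum(dx ** 2 + dy ** 2 + dz ** 2 <= radius ** 2))

# radius 1 -> 7 voxels, radius 2 -> 33, radius 3 -> 123
```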
# strip all attributes from the dataset that are not required for the
# searchlight analysis
db12s = db12.copy(
    deep=False,
    sa=['targets', 'chunks'],
    fa=[sprefix_indices_key],
    a=['mapper']
)
# run searchlight
sl_map = sl(db12s)
# toggle between error rate (searchlight output) and accuracy (for plotting)
sl_map.samples *= -1
sl_map.samples += 1
# The result dataset is fully aware of the original dataspace. Using this
# information we can map the 1D accuracy maps back into "brain-space"
# (using NIfTI image header information from the original input timeseries).
niftiresults = map2nifti(sl_map, imghdr=db12.a.imghdr)
# find the 3-d coordinates of the accuracy extrema
# (keep max and min in separate variables, or the first is silently lost)
max_i = np.argmax(sl_map.samples[0])  # highest accuracy
min_i = np.argmin(sl_map.samples[0])  # lowest accuracy
coord = db12s.fa[sprefix_indices_key][max_i]
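As noted in the reply below, assigning both np.argmax and np.argmin to the same variable overwrites the max index before it is used. Finding the coordinate of the best accuracy is just indexing the coordinate table with argmax; a plain-numpy sketch with a made-up accuracy vector and coordinate array:

```python
import numpy as np

# hypothetical accuracy map (one value per searchlight center) and the
# matching table of 3-d voxel indices (as stored in fa['vxl_indices'])
acc = np.array([0.50, 0.92, 0.61, 0.47])
vxl_indices = np.array([[10, 20, 30],
                        [15, 45, 65],
                        [12, 22, 32],
                        [ 8, 18, 28]])

best_i = np.argmax(acc)    # index of the highest accuracy
worst_i = np.argmin(acc)   # kept in a separate variable
best_coord = vxl_indices[best_i]
```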
# plotting
plot_args = {
    'background': 'Struct.nii',
    'background_mask': 'brain.nii',
    'overlay_mask': 'S1_mask.nii',
    'do_stretch_colors': False,
    'cmap_bg': 'gray',
    'cmap_overlay': 'autumn',  # YlOrRd_r # pl.cm.autumn
    'interactive': cfg.getboolean('examples', 'interactive', True)
}
fig = pl.figure(figsize=(12, 4), facecolor='white')
subfig = plot_lightbox(
    overlay=niftiresults,
    vlim=(0.5, None), slices=range(23, 31),
    fig=fig,
    **plot_args  # background and masks come from plot_args; passing them
                 # again as keywords would raise a duplicate-argument error
)
pl.title('Accuracy distribution for radius %i' % radius_)
#
> On Fri, Sep 8, 2017 at 6:26 PM, Yaroslav Halchenko <debian at onerussian.com>
> wrote:
>
>>
>> On Fri, 08 Sep 2017, Pegah Kassraian Fard wrote:
>>
>>
>>
>>
>> > from glob import glob
>> > import os
>> > import numpy as np
>>
>> > from mvpa2.suite import *
>>
>> > %matplotlib inline
>>
>>
>> > # enable debug output for searchlight call
>> > if __debug__:
>> > debug.active += ["SLC"]
>>
>>
>> > # change working directory to 'WB'
>> > os.chdir('mypath/WB')
>>
>> > # use glob to get the filenames of .nii data into a list
>> > nii_fns = glob('beta*.nii')
>>
>> glob order iirc might not be guaranteed, so I would sort it to make sure
>>
>> nii_fns = sorted(glob('beta*.nii'))
>>
>> > # read data
>>
>> > labels = [
>> > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
>> > 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
>> > 7, 7, 7, 7, 7, 7, 7,
>> > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
>> > 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
>> > 7, 7, 7, 7, 7, 7, 7
>> > ]
>>
>> and make sure that above order of the volumes correspond to those labels
>>
>> > grps = np.repeat([0, 1], 37, axis=0) # used for `chuncks`
>>
>> > db = mvpa2.datasets.mri.fmri_dataset(
>> > nii_fns, targets=labels, chunks=grps, mask=None, sprefix='vxl',
>> tprefix='tpref', add_fa=None
>> > )
>>
>> > # use only the samples of which labels are 1 or 2
>> > db12 = db[np.array([label in [1, 2] for label in labels], dtype='bool')]
>>
>> more concise way
>>
>> db12 = db.select(sadict={'targets': [1,2]})
>>
>> > # in-place z-score normalization
>> > zscore(db12)
>>
>> > # choose classifier
>> > clf = LinearNuSVMC()
>>
>> > # setup measure to be computed by Searchlight
>> > # cross-validated mean transfer using an N-fold dataset splitter
>> > cv = CrossValidation(clf, NFoldPartitioner())
>>
>> > # define searchlight methods
>> > radius_ = 1
>> > sl = sphere_searchlight(
>> > cv, radius=radius_, space='vxl_indices',
>> > postproc=mean_sample()
>> > )
>>
>> > # stripping all attributes from the dataset that are not required for
>> the searchlight analysis
>> > db12s = db12.copy(
>> > deep=False,
>> > sa=['targets', 'chunks'],
>> > fa=['vxl_indices'],
>> > a=['mapper']
>> > )
>>
>> > # run searchlight
>> > sl_map = sl(db12s)
>>
>> > # toggle between error rate (searchlight output) and accuracy (for
>> plotting)
>> > sl_map.samples *= -1
>> > sl_map.samples += 1
>>
>> > # The result dataset is fully aware of the original dataspace.
>> > # Using this information we can map the 1D accuracy maps back into
>> “brain-space” (using NIfTI image header information from the original input
>> timeseries.
>> > niftiresults = map2nifti(sl_map, imghdr=db12.a.imghdr)
>>
>> > # find the 3-d coordinates of extrema of error rate or accuracy
>> > extremum_i = np.argmax(sl_map.samples[0]) # max
>> > extremum_i = np.argmin(sl_map.samples[0]) # min
>>
>> note that above you override the same extremum_i with 'min' (minimal
>> accuracy ;)) !!!
>>
>> > coord = db12s.fa[list(db.fa.keys())[0]][extremum_i]
>>
>> .keys() order is also not guaranteed... why not to specify
>> 'vxl_indices'?
>>
>> > # plotting
>> > plot_args = {
>> > # 'background' : 'Subject2_Struct.nii',
>> > # 'background_mask' : 'brain.nii.gz',
>> > # 'overlay_mask' : 'S1_mask.nii',
>> > 'do_stretch_colors' : False,
>> > 'cmap_bg' : 'gray',
>> > 'cmap_overlay' : 'autumn', # YlOrRd_r # pl.cm.autumn
>> > 'interactive' : cfg.getboolean('examples', 'interactive', True)
>> > }
>> > fig = pl.figure(figsize=(12, 4), facecolor='white')
>> > subfig = plot_lightbox(
>> > overlay=niftiresults,
>> > vlim=(0.5, None), slices=range(23,31),
>> > fig=fig,
>> > background='Subject2_Struct.nii',
>> > # background_mask='brain.nii.gz',
>> > # overlay_mask='S1_mask.nii',
>> > **plot_args
>> > )
>> > pl.title('Accuracy distribution for radius %i' % radius_)
>>
>>
>>
>>
>>
>>
>>
>> --
>> Yaroslav O. Halchenko
>> Center for Open Neuroscience http://centerforopenneuroscience.org
>> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
>> Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
>> WWW: http://www.linkedin.com/in/yarik
>>
>> _______________________________________________
>> Pkg-ExpPsy-PyMVPA mailing list
>> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
>> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa
>>
>
>