[pymvpa] searchlight analysis
Nick Oosterhof
n.n.oosterhof at googlemail.com
Fri Sep 8 16:07:01 UTC 2017
Some minor comments inserted:
> On 8 Sep 2017, at 17:52, Pegah Kassraian Fard <pegahkf at gmail.com> wrote:
>
>
> from glob import glob
> import os
> import numpy as np
>
> from mvpa2.suite import *
>
> %matplotlib inline
>
>
> # enable debug output for searchlight call
> if __debug__:
>     debug.active += ["SLC"]
>
>
> # change working directory to 'WB'
> os.chdir('mypath/WB')
>
> # use glob to get the filenames of .nii data into a list
> nii_fns = glob('beta*.nii')
>
> # read data
>
> labels = [
> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
> 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
> 7, 7, 7, 7, 7, 7, 7,
> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
> 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
> 7, 7, 7, 7, 7, 7, 7
> ]
> grps = np.repeat([0, 1], 37, axis=0) # used for `chunks`
>
> db = mvpa2.datasets.mri.fmri_dataset(
> nii_fns, targets=labels, chunks=grps, mask=None, sprefix='vxl', tprefix='tpref', add_fa=None
> )
Is there a reason not to use a mask? At least a brain mask to avoid stuff like skull and air.
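For example, with a hypothetical brain mask image 'brain_mask.nii' (just a placeholder filename), a minimal sketch of the same call would be:

db = mvpa2.datasets.mri.fmri_dataset(
    nii_fns, targets=labels, chunks=grps,
    mask='brain_mask.nii',  # placeholder: restricts features to in-mask voxels
    sprefix='vxl', tprefix='tpref', add_fa=None
)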
>
> # use only the samples of which labels are 1 or 2
> db12 = db[np.array([label in [1, 2] for label in labels], dtype='bool')]
>
> # in-place z-score normalization
> zscore(db12)
>
> # choose classifier
> clf = LinearNuSVMC()
Have you tried a different classifier, for example Naive Bayes? That one is simpler (though usually a bit less sensitive than SVM / LDA in my experience).
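With your existing `from mvpa2.suite import *`, swapping in PyMVPA's Gaussian Naive Bayes classifier is a one-line change, e.g.:

# Gaussian Naive Bayes instead of the nu-SVM
clf = GNB()
cv = CrossValidation(clf, NFoldPartitioner())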
>
> # setup measure to be computed by Searchlight
> # cross-validated mean transfer using an N-fold dataset splitter
> cv = CrossValidation(clf, NFoldPartitioner())
>
> # define searchlight methods
> radius_ = 1
That's a tiny radius - why not use something like 3?
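For reference, a sketch of how the searchlight itself could then be set up and run with a larger radius, following the usual sphere_searchlight pattern (assumes the cv and db12 from your script above; averaging folds per sphere and converting error to accuracy is just one common choice):

# searchlight with a 3-voxel radius, averaging CV error across folds per sphere
sl = sphere_searchlight(cv, radius=3, postproc=mean_sample())
sl_map = sl(db12)

# convert mean error into accuracy for easier interpretation
sl_map.samples *= -1
sl_map.samples += 1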