[pymvpa] Matrices per Searchlight

Yaroslav Halchenko debian at onerussian.com
Mon Nov 25 20:15:16 UTC 2013


On Mon, 25 Nov 2013, Hanson, Gavin Keith wrote:

> Hi:
> I am reasonably new to PyMVPA, but have not had any trouble with it so far.
> However, I am interested in running an analysis similar to that performed in
> Connolly, 2012, where dissimilarity matrices are computed per searchlight.
> I would like to know if anyone knows of a straightforward way to do this
> within PyMVPA - that is, to get a simple confusion matrix per searchlight.
> Even better, I would like to be able to look at the results of the
> BayesConfusionHypothesis node at a per-searchlight level, as our hypothesis
> has to do with how the specificity of information encoding changes across
> the brain, and that tool seems perfect for that. Results with ROIs have been
> promising, but I'd like to see how this plays out across the entire brain,
> and the ability to use the searchlight tool in conjunction with the Bayes
> Confusion node would be very helpful. If anyone could help me with this,
> I'd much appreciate it!

From your description it sounds like you are looking for this recipe,
present in our unit tests:

mvpa2/tests/test_transerror.py

    def test_confusion_as_node():
        # relies on the test module's top-level imports (CrossValidation,
        # NFoldPartitioner, numpy as np, and the assert_* helpers)
        from mvpa2.misc.data_generators import normal_feature_dataset
        from mvpa2.clfs.gnb import GNB
        from mvpa2.clfs.transerror import Confusion
        ds = normal_feature_dataset(snr=2.0, perlabel=42, nchunks=3,
                                    nonbogus_features=[0, 1], nfeatures=2)
        clf = GNB()
        cv = CrossValidation(
            clf, NFoldPartitioner(),
            errorfx=None,
            postproc=Confusion(labels=ds.UT),
            enable_ca=['stats'])
        res = cv(ds)
        # needs to be identical to CA
        assert_array_equal(res.samples, cv.ca.stats.matrix)
        assert_array_equal(res.sa.predictions, ds.UT)
        assert_array_equal(res.fa.targets, ds.UT)

        skip_if_no_external('scipy')

        from mvpa2.clfs.transerror import BayesConfusionHypothesis
        from mvpa2.base.node import ChainNode
        # same again, but this time with Bayesian hypothesis testing at the end
        cv = CrossValidation(
            clf, NFoldPartitioner(),
            errorfx=None,
            postproc=ChainNode((Confusion(labels=ds.UT),
                                BayesConfusionHypothesis())))
        res = cv(ds)
        # only two possible hypotheses with two classes
        assert_equal(len(res), 2)
        # the first hypothesis is "cannot discriminate anything"
        assert_equal(len(res.sa.hypothesis[0]), 1)
        assert_equal(len(res.sa.hypothesis[0][0]), 2)
        # and that hypothesis is actually less likely than the other one
        # (that both classes can be distinguished)
        assert(np.e**res.samples[0,0] < np.e**res.samples[1,0])
    ....

So using such a CrossValidation construct as the input measure for your
Searchlight should achieve what you are after, or am I wrong?
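
For illustration, here is a minimal, untested sketch of that wiring
(ds stands for your own fMRI dataset, and GNB and radius=3 are just
placeholder choices, not anything prescribed by the recipe above):

    from mvpa2.suite import (CrossValidation, NFoldPartitioner, GNB,
                             sphere_searchlight)
    from mvpa2.clfs.transerror import Confusion, BayesConfusionHypothesis
    from mvpa2.base.node import ChainNode

    # ds: your fMRI dataset, loaded elsewhere (placeholder in this sketch)
    # the same cross-validation measure as in the unit test above
    cv = CrossValidation(
        GNB(), NFoldPartitioner(),
        errorfx=None,
        postproc=ChainNode((Confusion(labels=ds.UT),
                            BayesConfusionHypothesis())))

    # run that measure in every searchlight sphere; each sphere should
    # then yield the log evidence for each confusion hypothesis
    sl = sphere_searchlight(cv, radius=3)
    sl_res = sl(ds)

Keep in mind that whatever the postproc returns per sphere ends up stacked
along the feature axis of the searchlight result, so you may want to reduce
the per-sphere output to a single value per hypothesis before mapping the
result back onto the brain.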

-- 
Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Senior Research Associate,     Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834                       Fax: +1 (603) 646-1419
WWW:   http://www.linkedin.com/in/yarik        
