[pymvpa] Matrices per Searchlight
Hanson, Gavin Keith
ghanson0 at ku.edu
Mon Nov 25 21:47:18 UTC 2013
@ Dr. Halchenko
I already tried what you suggest: that is, I set up a CrossValidation with BayesConfusionHypothesis, as you show below, and used that as the data measure within a sphere_searchlight. However, I do not get the results at a per-feature level. It's possible that they're there and I simply can't figure out how to access them, but a look at res.sa.hypothesis doesn't give me results per feature, which is what I'm after.
@ Susanne
I'm working in 2.2.0, inside neurodebian.
Our experiment focuses on object properties. We have participants judge how similar two concepts are along a given dimension. We have broken those dimensions into abstract features (thematic context, function) and perceptual/concrete features (color, shape). So far I have looked at regions that we expect to support a 4-way classification, and we've been quite successful. However, we're also interested in finding regions that might encode abstract versus concrete without being able to separate the two abstract or the two concrete features - that is, [[0, 1], [2, 3]] versus [[0], [1], [2], [3]]. We have already identified some such regions within some a priori ROIs, but since we have the data, we'd like a better idea of how this plays out throughout the brain, which is where this searchlight/Bayes combination comes in.
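To make the [[0, 1], [2, 3]] hypothesis concrete, here is a small pure-NumPy illustration (not PyMVPA code; the 4x4 confusion matrix below is entirely made up) of what that partition means: the four classes are only separable between the abstract and concrete groups, so the 4-way confusion matrix collapses cleanly into a 2x2 one.

```python
import numpy as np

# Hypothetical 4-way confusion matrix (rows: true class, cols: predicted).
# Classes 0, 1 would be "abstract" (context, function); 2, 3 "concrete"
# (color, shape). The counts are invented purely for illustration.
conf = np.array([[10,  8,  1,  1],
                 [ 7, 11,  1,  1],
                 [ 2,  1,  9,  8],
                 [ 1,  2,  8,  9]])

# The hypothesis [[0, 1], [2, 3]]: classes are only distinguishable
# *between* groups, not within them. Collapse the matrix over the groups.
groups = [[0, 1], [2, 3]]
collapsed = np.array([[conf[np.ix_(g1, g2)].sum() for g2 in groups]
                      for g1 in groups])
print(collapsed)
# -> [[36  4]
#     [ 6 34]]
```

Under this hypothesis, nearly all the mass sits on the 2x2 diagonal, while within-group confusions (e.g. the 8s between classes 0 and 1) vanish into the collapsed cells.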
As I mentioned above, when I plug a cross-validation with the BayesConfusionHypothesis node properly set up into a searchlight, it doesn't return that nice Bayesian hypothesis-test result for each feature, which is what I'm after.
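For clarity, this is a pure-NumPy mock (no PyMVPA involved; all values are random placeholders) of the result shape I'm hoping for: one log-probability per hypothesis per searchlight center, so the winning hypothesis can be mapped back onto the brain feature by feature.

```python
import numpy as np

rng = np.random.RandomState(0)

n_hypotheses = 2   # e.g. "abstract vs. concrete" and "all four separable"
n_centers = 1000   # number of searchlight center voxels (made up)

# Mock log-probabilities: one row per hypothesis, one column per center.
# This is the shape I would hope the searchlight returns; the values
# here are just random noise standing in for real results.
logp = rng.randn(n_hypotheses, n_centers)

# Per-feature winner: the most probable hypothesis at each center,
# which could then be projected back into brain space.
winner = logp.argmax(axis=0)
print(winner.shape)
```

With output of that shape, mapping `winner` back through the dataset's mapper would give a brain map of where each hypothesis dominates.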
Thank you both for getting back to me so quickly!
- Gavin
On Nov 25, 2013, at 2:15 PM, Yaroslav Halchenko wrote:
>
> On Mon, 25 Nov 2013, Hanson, Gavin Keith wrote:
>
>> Hi:
>> I am reasonably new to PyMVPA, but have not had any trouble with it so far.
>> However, I am interested in running an analysis similar to that performed in Connolly, 2012, where dissimilarity matrices are computed per searchlight. I would like to know if anyone knows of a straightforward way to do with within PyMVPA - that is, get the simple confusion matrix per searchlight. Even better, I would like to be able to look at the results of the BayesConfusionHypothesis node at a per-searchlight level, as our hypothesis has to do with how the specificity of information encoding changes across the brain, and that tool seems perfect for that. Results with ROIs have been promising, but I'd like to see how this plays out across the entire brain, and the ability to use the searchlight tool in conjunction with the Bayes Confusion node would be very helpful. If anyone could help me with this, I'd much appreciate it!
>
> From your description it sounds you are looking for this recipe present
> in our unittests:
>
> mvpa2/tests/test_transerror.py
>
> def test_confusion_as_node():
>     from mvpa2.misc.data_generators import normal_feature_dataset
>     from mvpa2.clfs.gnb import GNB
>     from mvpa2.clfs.transerror import Confusion
>     ds = normal_feature_dataset(snr=2.0, perlabel=42, nchunks=3,
>                                 nonbogus_features=[0, 1], nfeatures=2)
>     clf = GNB()
>     cv = CrossValidation(
>         clf, NFoldPartitioner(),
>         errorfx=None,
>         postproc=Confusion(labels=ds.UT),
>         enable_ca=['stats'])
>     res = cv(ds)
>     # needs to be identical to CA
>     assert_array_equal(res.samples, cv.ca.stats.matrix)
>     assert_array_equal(res.sa.predictions, ds.UT)
>     assert_array_equal(res.fa.targets, ds.UT)
>
>     skip_if_no_external('scipy')
>
>     from mvpa2.clfs.transerror import BayesConfusionHypothesis
>     from mvpa2.base.node import ChainNode
>     # same again, but this time with Bayesian hypothesis testing at the end
>     cv = CrossValidation(
>         clf, NFoldPartitioner(),
>         errorfx=None,
>         postproc=ChainNode((Confusion(labels=ds.UT),
>                             BayesConfusionHypothesis())))
>     res = cv(ds)
>     # only two possible hypotheses with two classes
>     assert_equal(len(res), 2)
>     # the first hypothesis is "can't discriminate anything"
>     assert_equal(len(res.sa.hypothesis[0]), 1)
>     assert_equal(len(res.sa.hypothesis[0][0]), 2)
>     # and that hypothesis is actually less likely than the other one
>     # (both classes can be distinguished)
>     assert(np.e**res.samples[0, 0] < np.e**res.samples[1, 0])
> ....
>
> so using such a CrossValidation construct as the input measure for your
> Searchlight construct should achieve what you aim for, or am I wrong?
>
> --
> Yaroslav O. Halchenko, Ph.D.
> http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
> Senior Research Associate, Psychological and Brain Sciences Dept.
> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
> Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
> WWW: http://www.linkedin.com/in/yarik
>
> _______________________________________________
> Pkg-ExpPsy-PyMVPA mailing list
> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa
>