[pymvpa] Searchlight statistical inference
Roni Maimon
ronimaimon at gmail.com
Wed Aug 12 14:36:44 UTC 2015
Yaroslav, thank you very much for the input.
Richard, in the code you referred to it is stated: "The values mapped onto
each voxel represent the mean accuracy across all classifications (spheres) a
voxel was included in."
How is this achieved? I scanned the code and nothing popped out, but I must
be missing something.
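In case someone can correct me, here is my naive guess at how that mapping
could be done (just a sketch reusing the cv and ds from my code further down
this thread; that roi_feature_ids lines up with the result columns is an
assumption on my part, and I have not tested this):

import numpy as np

# Run the searchlight while recording which voxels each sphere contained.
sl = sphere_searchlight(cv, radius=3, space='voxel_indices',
                        enable_ca=['roi_feature_ids'])
res = sl(ds)  # one value per sphere center (error or accuracy, per the errorfx)

# Add each sphere's value to every voxel that sphere included, then divide
# by the number of spheres covering each voxel.
value_sum = np.zeros(ds.nfeatures)
n_spheres = np.zeros(ds.nfeatures)
for center, fids in enumerate(sl.ca.roi_feature_ids):
    value_sum[fids] += res.samples[0, center]
    n_spheres[fids] += 1
voxel_mean = value_sum / np.maximum(n_spheres, 1)

Is that roughly the idea, or is there a built-in way to do it?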
Thanks!
On Wed, Aug 12, 2015 at 3:05 AM, Roni Maimon <ronimaimon at gmail.com> wrote:
> So the full design is: I have 4 conditions in 8 runs, with 5 blocks of each
> condition in each run.
> All runs contain all the conditions, but I'm interested in only two
> classifications and the differences between them.
> The order of trials is different across runs.
> Some recommend that I permute the labels only within runs; is this what you're
> referring to? Is there a quick way to do that in pyMVPA?
>
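(Replying to myself on the within-run permutation question above: it looks
like AttributePermutator's limit argument can also name a sample attribute,
so my guess, untested, would be something along the lines of

permutator = AttributePermutator('targets', limit='chunks', count=1)

to shuffle targets only within each run. I am not sure whether or how this
combines with the limit={'partitions': 1} setup in my code below, so
corrections are very welcome.)
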
> On Wed, Aug 12, 2015 at 2:14 AM, Roni Maimon <ronimaimon at gmail.com> wrote:
>
>> Hi,
>>
>> Yaroslav and Richard, thank you so much for the quick and very helpful
>> reply!
>>
>> I only received it through the daily summary, though, so I am sure this is
>> the wrong way to reply.
>>
>> Yaroslav, regarding the permutator "dance", is it necessary in cases
>> where I have several betas in each run?
>>
>> Thanks again for all the help.
>>
>> On Tue, Aug 11, 2015 at 8:18 PM, Roni Maimon <ronimaimon at gmail.com>
>> wrote:
>>
>>> Hi all,
>>> I'm rather new to pyMVPA and I would love to get your help and feedback.
>>> I'm trying to understand the different procedures of statistical
>>> inference I can apply to a whole-brain searchlight analysis using pyMVPA.
>>>
>>> I started by implementing the inference at the subject level (the code is
>>> below). Is this how I'm supposed to evaluate the p-values of the
>>> classifications for a single subject? What is the difference between
>>> attaching the null_dist at the searchlight (sl) level versus at the
>>> cross-validation level?
>>> My code:
>>> from mvpa2.suite import *
>>>
>>> clf = LinearCSVMC()
>>> splt = NFoldPartitioner(attr='chunks')
>>>
>>> # Null distribution: permute targets within the training partition(s) and
>>> # re-run the searchlight; the Repeater drives 100 such permutations.
>>> repeater = Repeater(count=100)
>>> permutator = AttributePermutator('targets', limit={'partitions': 1},
>>>                                  count=1)
>>> null_cv = CrossValidation(clf,
>>>                           ChainNode([splt, permutator],
>>>                                     space=splt.get_space()),
>>>                           postproc=mean_sample())
>>> null_sl = sphere_searchlight(null_cv, radius=3, space='voxel_indices',
>>>                              enable_ca=['roi_sizes'])
>>> distr_est = MCNullDist(repeater, tail='left', measure=null_sl,
>>>                        enable_ca=['dist_samples'])
>>>
>>> # The actual searchlight, with the Monte Carlo null distribution attached.
>>> cv = CrossValidation(clf, splt, enable_ca=['stats'], postproc=mean_sample())
>>> sl = sphere_searchlight(cv, radius=3, space='voxel_indices',
>>>                         null_dist=distr_est,
>>>                         enable_ca=['roi_sizes'])
>>> ds = glm_dataset.copy(deep=False,
>>>                       sa=['targets', 'chunks'],
>>>                       fa=['voxel_indices'],
>>>                       a=['mapper'])
>>> sl_map = sl(ds)
>>> p_values = distr_est.cdf(sl_map.samples)  # IS THIS THE RIGHT WAY??
>>>
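(A side note to myself: since the null_dist is attached to the searchlight,
I suspect the per-sphere p-values are also exposed afterwards as a
conditional attribute, something like

p_values = sl.ca.null_prob  # my guess; not sure if this is preferred over cdf()

so I would appreciate a pointer on which of the two is the intended
interface.)
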
>>> Is there a way to make sure the permutations are exhaustive?
>>> To make an inference at the group level, I understand I can use
>>> GroupClusterThreshold.
>>> Does anyone have a code sample for that? Do I use the MCNullDists
>>> created at the subject level?
>>>
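(Partially answering myself after skimming mvpa2.algorithms.group_clusterthr:
my current, quite possibly wrong, understanding is that the per-subject
permutation maps, e.g. the ones kept in distr_est.ca.dist_samples, are what
GroupClusterThreshold gets trained on. A rough, untested sketch of what I
have in mind; perm_maps_per_subject and real_maps are hypothetical lists of
each subject's permutation maps and observed accuracy maps, and calling the
trained measure on the group-mean map is an assumption of mine:

from mvpa2.suite import vstack, mean_sample
from mvpa2.algorithms.group_clusterthr import GroupClusterThreshold

# perm_maps_per_subject / real_maps: hypothetical per-subject datasets.
# Stack every subject's permutation maps, labelling samples by subject.
for subj, perm_ds in enumerate(perm_maps_per_subject):
    perm_ds.sa['chunks'] = [subj] * len(perm_ds)
null_ds = vstack(perm_maps_per_subject)

clthr = GroupClusterThreshold(n_bootstrap=10000,
                              feature_thresh_prob=0.001,
                              chunk_attr='chunks')
clthr.train(null_ds)

# Threshold the across-subject mean of the observed accuracy maps.
group_mean = mean_sample()(vstack(real_maps))
res = clthr(group_mean)

Does that come anywhere close to the intended workflow?)
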
>>> Thanks,
>>> Roni.
>>>
>>
>>
>