[pymvpa] classification based on individual parameter estimates from FSL

David Soto d.soto.b at gmail.com
Mon Aug 4 10:44:10 UTC 2014


Thanks Nick and Gavin for the helpful feedback!

For some reason map2nifti works fine with the output of the
searchlight, e.g. img = map2nifti(sl_res) gives the same result as
img = map2nifti(dataset=ds, data=sl_res).
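(A quick way to check that the two really are identical -- an untested
sketch, run in the same session where ds and sl_res already exist:)

import numpy as np

img1 = map2nifti(sl_res)
img2 = map2nifti(dataset=ds, data=sl_res)
# both calls should produce the same volume
assert np.allclose(img1.get_data(), img2.get_data())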

I think this may be because the header info was incorporated in the
fmri_dataset call by passing the mask and the add_fa arguments:
ds = fmri_dataset(samples=os.path.join(datapath1, 'predbothsi.nii.gz'),
                  targets=attr.targets, chunks=attr.chunks,
                  # mask based on the FEAT analyses
                  mask=os.path.join(datapath1, 'mask.nii.gz'),
                  add_fa={'unmbral_glm': os.path.join(datapath1, 'mask.nii.gz')})

What is intriguing is that the output of the FEAT GLM gives a robust
univariate signal in visual cortex for the contrast a vs. b in task 1 and
for a vs. b in task 2. Yet I have tried different searchlight radii and
only get near-chance classification from task 1 to task 2. I guess this
could simply mean that the patterns of responses in visual cortex are very
different across task contexts, while the signal associated with overall
activation level is still picked up by the GLM?
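
(For the cross-task test I am doing essentially the following -- an
untested sketch; ds_task1 and ds_task2 stand for my datasets split by
task, and the classifier is just an example:)

import numpy as np
from mvpa2.suite import LinearCSVMC

# train on task 1 patterns, test on task 2
clf = LinearCSVMC()
clf.train(ds_task1)
pred = clf.predict(ds_task2.samples)
print('task1 -> task2 accuracy: %.3f'
      % np.mean(pred == ds_task2.sa.targets))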

If so, would it be reasonable to investigate this further, for instance by
deriving similarity matrices across the individual
parameter estimates for task 1 and task 2?
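
(Something along these lines is what I have in mind -- an untested sketch
using scipy; ds_task1 and ds_task2 are again placeholders for the
parameter-estimate datasets, one sample per condition:)

from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# one dissimilarity vector per task, computed across the
# condition-wise parameter-estimate patterns
dsm1 = pdist(ds_task1.samples, metric='correlation')
dsm2 = pdist(ds_task2.samples, metric='correlation')

# compare the two representational geometries
rho, p = spearmanr(dsm1, dsm2)
print('task1 vs task2 RDM correlation: rho=%.3f, p=%.3f' % (rho, p))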

cheers

ds




On Fri, Aug 1, 2014 at 10:35 PM, Hanson, Gavin Keith <ghanson0 at ku.edu>
wrote:

>  Hi David,
>
>  When you use map2nifti to get an image of your searchlight, it requires
> two inputs:
> img = map2nifti(dataset=ds, data=sl_res)
> img.to_filename('foo.nii.gz')
> So once you run your searchlight as you have it set up (though I'm not
> sure that what you're doing with those center_ids is necessary), just pass
> your original dataset to the map2nifti function, along with your res
> dataset as the 'data=' argument.
>
>  Only the original dataset contains the mapper info that allows you to
> recapitulate the 3D image, while the second argument, data, will take a
> result from a searchlight (or any vector/matrix of the right shape).
> Hopefully that’ll help you get your searchlight into a form you can
> visualize.
> - Gavin
>
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Gavin Hanson, B.S.
> Research Assistant
> Department of Psychology
> University of Kansas
> 1415 Jayhawk Blvd., 534 Fraser Hall
> Lawrence, KS 66045
>
>  On Aug 1, 2014, at 4:29 AM, David Soto <d.soto.b at gmail.com> wrote:
>
>   Thanks for the response. I have not managed to extract the whole-brain
> classification map. Following the 1st example code below, the output from
> the cross-validation is:
> Dataset(array([[ 0.35526316],
>        [ 0.35855263]]),
> sa=SampleAttributesCollection(items=[ArrayCollectable(name='cvfolds',
> doc=None, value=array([0, 1]), length=2)]),
> fa=FeatureAttributesCollection(items=[]),
> a=DatasetAttributesCollection(items=[]))
>
>  How can I extract the whole-brain classification map? The following does
> not work either:
> niftires = map2nifti(res)
> niftires.to_filename('/home/dsoto/Documents/fmri/wholebrainsearchlight_results.nii.gz')
>
>  Cheers
> ds
>
>
>
>
> On Fri, Aug 1, 2014 at 9:41 AM, Nick Oosterhof <
> nikolaas.oosterhof at unitn.it> wrote:
>
>>
>> On Jul 31, 2014, at 10:49 PM, David Soto <d.soto.b at gmail.com> wrote:
>>
>> > Hi, I keep plugging away with this pretty basic classification
>> > [...]
>> > I get a whole-brain classification accuracy of around 68%
>> > (though I did not assess significance).
>> > Then I ran a searchlight analysis and, looking at the classification
>> > accuracy maps, it appears like a chance distribution, with a mean of 50%
>> > and a maximum classification accuracy around 56%. I wonder how it can be
>> > that none of the searchlights reaches the level of the whole-brain
>> > classification? And if that is the case, is the whole-brain
>> > classification meaningful at all?
>>
>>  That is quite possible, because the whole-brain classification uses many
>> more features than each searchlight.
>>
>> Assuming there is sufficient signal in the data (which there seems to be
>> in your case) that is not limited to a small subset of features (voxels),
>> one generally sees better classification with more features. This was
>> already reported by Cox et al. 2003, and later by e.g. [disclaimer:
>> shameless self-promotion] Oosterhof et al. 2011. (There are some cases
>> where this might not be true.)
>>
>> There is often a tradeoff between spatial selectivity and classification
>> accuracy. At one extreme you use all features for a single classification
>> analysis (i.e., your whole-brain classification); at the other extreme you
>> use one feature at a time (i.e., univariate analysis). A searchlight
>> analysis sits somewhere in between, finding a compromise between high
>> classification accuracy and good spatial selectivity. But for a
>> searchlight, too, the neighborhood (sphere or disc) size can affect both
>> classification accuracy and spatial selectivity.
>>
>>
>
>
>
> --
> http://www1.imperial.ac.uk/medicine/people/d.soto/



-- 
http://www1.imperial.ac.uk/medicine/people/d.soto/