[pymvpa] Visualization of the sensitivity map

Maria Hakonen maria.hakonen at gmail.com
Mon Jan 25 16:25:25 UTC 2016


Hi,

Many thanks again! I will read this material and try the analysis. I also
computed a mean accuracy map. The "blobs" were not very clear, which may be
because I currently have only four subjects. However, I think the areas
where the classification accuracy is highest are reasonably plausible. I
also noticed that you have code for a surface-based searchlight; that would
probably work better in my case. As far as I understand, the
GroupClusterThreshold algorithm works with it as well.
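
If I understand the documentation correctly, the group-level thresholding
would be set up roughly as follows. This is only an untested sketch: the
parameter values and the layout of the permutation dataset (chance accuracy
maps with sa.chunks identifying the subject) are my assumptions.

# Untested sketch: cluster-threshold group accuracy maps with
# GroupClusterThreshold. 'perm_maps' is assumed to hold the permutation
# (chance) accuracy maps with sa.chunks marking the subject; 'acc_maps'
# holds the real per-subject accuracy maps.
from mvpa2.algorithms.group_clusterthr import GroupClusterThreshold

clthr = GroupClusterThreshold(n_bootstrap=100000,        # bootstrap samples for the null
                              feature_thresh_prob=0.001, # cluster-forming threshold
                              chunk_attr='chunks',       # subject identifier
                              fwe_rate=0.05)             # cluster-level FWE rate
clthr.train(perm_maps)   # estimate the null distribution of cluster sizes
res = clthr(acc_maps)    # thresholded group map plus cluster statistics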

I have done the preprocessing as follows:

from mvpa2.suite import poly_detrend, zscore

poly_detrend(ds, polyord=1, chunks_attr='chunks')
# Remove the conditions that I don't want to classify (targets 0, 4 and 2).
ds2 = ds[ds.sa.targets != 0]
ds2 = ds2[ds2.sa.targets != 4]
ds2 = ds2[ds2.sa.targets != 2]
zscore(ds2)
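
As a side note, if I read the zscore() docstring right, it can also
normalize each run separately and estimate the mean and standard deviation
from a baseline condition only. This variant is just my assumption from the
documentation, not something I have run (it assumes the rest condition is
coded as target 0 and is applied before that condition is removed):

# Assumed variant (untested): z-score each chunk/run separately, with the
# normalization parameters estimated from the baseline samples (target 0).
zscore(ds, chunks_attr='chunks', param_est=('targets', [0]))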

So I did do the z-scoring but still got the warning about the scaling of C.
Could the warning be related to the fact that I use a grey matter mask?
In any case, if I apply remove_invariant_features() I don't get the warning
anymore.
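
Concretely, I would now insert that cleanup step right after masking and
before detrending/z-scoring, along these lines (untested sketch; the
filenames and attribute variables are just placeholders):

# Untested sketch: drop constant (e.g. all-zero) voxels that survive the
# grey matter mask; such voxels break z-scoring and the scaling of C in
# LinearCSVMC.
from mvpa2.suite import fmri_dataset, remove_invariant_features

ds = fmri_dataset(samples='bold.nii.gz',           # placeholder filename
                  targets=targets, chunks=chunks,  # placeholder attributes
                  mask='grey_matter_mask.nii.gz')  # placeholder mask
ds = remove_invariant_features(ds)
# ... then poly_detrend(), target selection and zscore() as above.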

Regards,
Maria

2016-01-24 20:30 GMT+02:00 Richard Dinga <dinga92 at gmail.com>:

> > Many thanks! I removed the invariant features and now the script gives
> no warnings.
> Great. BTW, invariant features are a problem during z-scoring; since you
> got that error during classification, I assume you didn't z-score.
> Depending on the classifier, this can make a big difference, so you should
> consider doing it.
>
> > There seems to be an algorithm "GroupClusterThreshold" for evaluating
> > the group-level accuracy maps. Is there an example script for using
> > that algorithm?
> We used it in this data paper, http://f1000research.com/articles/4-174/v1,
> and published the whole analysis pipeline here:
> https://github.com/psychoinformatics-de/paper-f1000_pandora_data. You
> should also check out this whole thread for a more verbose explanation of
> the code:
> http://lists.alioth.debian.org/pipermail/pkg-exppsy-pymvpa/2015q3/003200.html
>
> > Is there any way to evaluate whether the results look reasonable for
> > individual subjects? I have now just thresholded the accuracy maps with
> > different thresholds (e.g. 80%, 85% and 90%) and viewed the results.
> As a quick quality check, that is exactly what I would do. You can also use
> a lower threshold and make a mean accuracy map across all your subjects. As
> a rule of thumb, you should have "blobs" at least in those areas where you
> have them with the GLM. As a sanity check you can also try to predict
> something easy like rest vs. condition, left vs. right button press, etc.
>
> Best wishes,
> Richard
>
> On Sun, Jan 24, 2016 at 5:43 PM, Maria Hakonen <maria.hakonen at gmail.com>
> wrote:
>
>> Many thanks! I removed the invariant features and now the script gives no
>> warnings.
>> I have calculated sensitivity maps, mapped them back to the original
>> space and saved them as NIfTI files. There seems to be an algorithm
>> "GroupClusterThreshold" for evaluating the group-level accuracy maps. Is
>> there an example script for using that algorithm? Is there any way to
>> evaluate whether the results look reasonable for individual subjects? I
>> have now just thresholded the accuracy maps with different thresholds
>> (e.g. 80%, 85% and 90%) and viewed the results.
>>
>> -Maria
>>
>> 2016-01-23 20:31 GMT+02:00 Richard Dinga <dinga92 at gmail.com>:
>>
>>> I might be wrong, but it sounds like you have invariant features in your
>>> data. You can get a better mask or just remove them with
>>> remove_invariant_features().
>>>
>>>
>>> On Sat, Jan 23, 2016 at 5:37 PM, Maria Hakonen <maria.hakonen at gmail.com>
>>> wrote:
>>> >
>>> > Hi,
>>> >
>>> > Many thanks for your answers!
>>> > I would like to identify brain regions sensitive to speech
>>> > intelligibility. I have already done this with a GLM by comparing
>>> > responses to blocks of intelligible and unintelligible sentences.
>>> > However, I would also like to see whether MVPA finds additional
>>> > regions, since I understand it is more sensitive. Perhaps this could be
>>> > done by running a searchlight analysis on the whole brain and then
>>> > analyzing the clusters as introduced in Etzel et al. (2013), i.e. the
>>> > link in the previous message.
>>> >
>>> > I tried the searchlight, but it gives me the following warning:
>>> >
>>> > WARNING: Obtained degenerate data with zero norm for training of
>>> > <LinearCSVMC>.  Scaling of C cannot be done.
>>> >
>>> > I wonder if you have any advice on how to solve this problem?
>>> >
>>> > Regards,
>>> > Maria
>>> >
>>> > 2016-01-21 17:02 GMT+02:00 Jo Etzel <jetzel at wustl.edu>:
>>> >>
>>> >> I quite agree with Nick's "quite tricky": about the only way in which
>>> >> averaging the weights over the 18 cross-validation folds will give you
>>> >> a correct impression of the "important" voxels is if most of the
>>> >> voxels in your ROI carry no information at all, and the remaining ones
>>> >> are uniquely informative (each distinguishes the classes but is not
>>> >> correlated with the others). Needless to say, this scenario is not
>>> >> exactly common for fMRI datasets. (And it is even more complicated if
>>> >> multiple people are being analyzed.)
>>> >>
>>> >> Searchlights can give a decent reflection of where *local*
>>> >> information occurs, though there are many caveats (to cite myself, see
>>> >> http://www.ncbi.nlm.nih.gov/pubmed/23558106).
>>> >>
>>> >> I generally suggest tailoring the analysis to the hypothesis. If
>>> >> you're really interested in the activity in individual voxels, some
>>> >> sort of mass-univariate analysis is probably best. If you're
>>> >> interested in ROIs, ROI-based MVPA can work very well. But trying to
>>> >> interpret *voxels* from an *ROI-based* analysis is problematic at
>>> >> best.
>>> >>
>>> >> Jo
>>> >>
>>> >>
>>> >>
>>> >> On 1/21/2016 8:27 AM, Nick Oosterhof wrote:
>>> >>>
>>> >>>
>>> >>>> On 21 Jan 2016, at 15:18, Maria Hakonen <maria.hakonen at gmail.com>
>>> >>>> wrote:
>>> >>>>
>>> >>>> I am working on my first fMRI dataset and would like to try MVPA.
>>> >>>> I have two classes that I have classified with a linear SVM. I
>>> >>>> would like to determine which voxels contribute most to the
>>> >>>> classifier’s successful discrimination of the classes. As far as I
>>> >>>> understand, the absolute value of the SVM weights directly reflects
>>> >>>> the importance of a feature (voxel) in discriminating the two
>>> >>>> classes.
>>> >>>
>>> >>>
>>> >>> Interpretation of SVM weights is quite tricky; see for example Haufe
>>> >>> et al. (2014), NeuroImage, doi:10.1016/j.neuroimage.2013.10.067.
>>> >>>
>>> >>> If you want to make inferences about the spatial location of
>>> >>> multivariate discrimination, you may want to consider using a
>>> >>> searchlight analysis instead.
>>> >>>
>>> >>>> I would like to average the SVM weights across all 18
>>> >>>> cross-validation folds for each voxel and warp the resulting map
>>> >>>> into standard space in order to display a map of the resulting
>>> >>>> overlap.
>>> >>>
>>> >>>
>>> >>> Even if one were confident that SVM weights were interpretable,
>>> >>> why take the absolute value? It would seem that this makes it much
>>> >>> more difficult to do any statistics or interpret the results. In
>>> >>> particular, a lack of signal combined with differences in the
>>> >>> variance of the weights across regions may then yield differences in
>>> >>> average absolute values.
>>> >>
>>> >> --
>>> >> Joset A. Etzel, Ph.D.
>>> >> Research Analyst
>>> >> Cognitive Control & Psychopathology Lab
>>> >> Washington University in St. Louis
>>> >> http://mvpa.blogspot.com/
>>> >>
>>> >>
>>> >
>>> >
>>> >
>>>
>>>
>>>
>>
>>
>>
>
>
>