[pymvpa] Training and testing on only 1 run (no cross validation)

Nick Oosterhof n.n.oosterhof at googlemail.com
Sun Nov 12 17:50:43 UTC 2017


On 12 November 2017 at 02:33, Lynda Lin <llin90 at illinois.edu> wrote:

> Thanks so much for responding to my question - I used the
> CustomPartitioner option and it worked! (I'm so sorry it took so long to
> reply, wanted to wait til I ran the searchlight analyses, which I just
> finished, to have a more coherent answer). I had 2 follow up questions
> regarding interpretation of results...
>
> 1) if my searchlight script doesn't include the "sl_map.samples - 1" line,
> does that mean that all my values when I visualize them on FSL for example
> will be error rates? (instead of accuracy values?) Here's my script:
>
> results = sl(dataset)
> niftiresults = map2nifti(results, imghdr=dataset.a.imghdr)
>
> Do I need to add the following lines to my script to get accuracy rates?
> sphere_errors = results.samples[0]
> map2nifti(1-sphere_errors)
>

It really depends on which measure and arguments you use. For example,
consider the searchlight script here:

http://www.pymvpa.org/examples/searchlight.html

By default the searchlight returns error rates; the script converts these
to accuracies (accuracy = 1 - error) with

     sl_map = sl(ds)
     sl_map.samples *= -1
     sl_map.samples += 1
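The same in-place transform can be verified on a small NumPy array (the
`sphere_errors` values below are made up for illustration):

```python
import numpy as np

# Hypothetical per-sphere error rates, each in [0, 1].
sphere_errors = np.array([0.25, 0.5, 0.1, 0.4])

# The in-place conversion used above: accuracy = 1 - error.
accuracies = sphere_errors.copy()
accuracies *= -1
accuracies += 1

print(accuracies)  # [0.75 0.5  0.9  0.6 ]
```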


>
> 2) I ran a one-sample t-test on SPM using the .nii images I got for each
> participant from the searchlight analyses (after converting them to
> standard space) but I'm not sure how to threshold the image to correct for
> multiple comparisons? Or is this something that the program does?
>

No, these maps are not corrected for multiple comparisons. Consider:
http://www.pymvpa.org/generated/mvpa2.algorithms.group_clusterthr.GroupClusterThreshold.html
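GroupClusterThreshold implements a permutation-based cluster-level
correction; for intuition, here is a minimal self-contained NumPy sketch of
the underlying idea (sign-flip permutations and a max-statistic null
distribution for family-wise error control). This is not the PyMVPA API,
and all data and names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: accuracy-minus-chance maps for 20 subjects x 500
# voxels, with a genuine effect planted in the first 50 voxels.
n_subj, n_vox = 20, 500
maps = rng.normal(0.0, 0.1, size=(n_subj, n_vox))
maps[:, :50] += 0.12

def t_map(data):
    """One-sample t statistic against 0, computed per voxel."""
    m = data.mean(axis=0)
    se = data.std(axis=0, ddof=1) / np.sqrt(data.shape[0])
    return m / se

observed_t = t_map(maps)

# Under H0 each subject's map is symmetric around 0, so randomly flipping
# the sign of whole subject maps yields a null distribution of the
# *maximum* t value across voxels.
n_perm = 1000
max_null = np.empty(n_perm)
for i in range(n_perm):
    flips = rng.choice([-1.0, 1.0], size=(n_subj, 1))
    max_null[i] = t_map(maps * flips).max()

# Voxels exceeding the 95th percentile of the max-null distribution
# survive FWE correction at alpha = 0.05.
thresh = np.quantile(max_null, 0.95)
surviving = observed_t > thresh
print(f"threshold t = {thresh:.2f}, {surviving.sum()} voxels survive")
```

The actual PyMVPA class works on clusters rather than single voxels and is
trained on permutation results, but the logic of building a null
distribution from permuted group maps is the same.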


> For example, when I open the group contrast (or beta) image after running
> the one-sample t-test on FSL and specify min=0.5, max=1
>

What do these values represent? Intensity?

Did you subtract chance level from the maps before running the one-sample
t-test? (Hint: you should.) Generally it is valid [*] to run such a test
for the group analysis, assuming your design is balanced (an equal number
of samples per class across folds).

[*] There is some work in the literature arguing that a t-test is not the
best option.
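For a single voxel, the chance-subtraction step looks like the sketch
below (the accuracy values are simulated; for a two-class problem chance
level is 0.5):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative per-subject searchlight accuracies at one voxel.
accuracies = rng.normal(0.58, 0.08, size=16)
chance = 0.5  # two balanced classes

# Subtract chance first, then test the differences against zero.
t, p = stats.ttest_1samp(accuracies - chance, 0.0)
print(f"t = {t:.2f}, p = {p:.4f}")
```

Equivalently, `stats.ttest_1samp(accuracies, popmean=chance)` tests
directly against the chance level; in either form, applying this per voxel
leaves you with an uncorrected map, hence the cluster-thresholding advice
above.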

