[pymvpa] effect size (in lieu of zscore)
J.A. Etzel
jetzel at artsci.wustl.edu
Wed Jan 4 22:46:23 UTC 2012
On 1/4/2012 3:20 PM, Mike E. Klein wrote:
> I have toyed with a bit of ROI MVPA: found some accuracies that were
> above-chance, though I'm not sure if they were convincingly so. You're
> suggesting that I should run an analysis with permuted labels on, for
> example A1 and another area, and then look at the distribution of the
> accuracies?
No, that's not what I meant. What I suggest you try is aimed at
understanding the real data using a ROI-based approach before doing a
searchlight analysis. The logic here is that it's a lot easier to design
and interpret "sanity checks" with a ROI-based method. This can let you
check your analysis steps (e.g. averaging the trials, normalizing to MNI
space or not, partitioning method) when you know what the results should
look like.
Basically, you would need to define at least two anatomical regions: one
that you think should classify some aspect of the stimuli very well, and
one that should not (ideally the regions should have a roughly similar
number of voxels). Then you run the ROI-based analysis on these two ROIs
and see whether the one you think should classify well actually does,
while the other stays near chance. If so, you have some reason to think that your
initial steps are at least roughly appropriate.
For example, say the subjects had to push response buttons with
different fingers during the task. You could classify which button was
pressed using a motor/somatosensory ROI (which should classify very
well) and a frontal/visual ROI (which might do much worse, depending on
your situation). This gives you something easy to interpret while
troubleshooting and optimizing, separate from your target analysis.
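In PyMVPA the two-ROI check might look roughly like the sketch below. I
don't use pymvpa much, so treat the file names (bold.nii.gz,
attributes.txt, and the two mask files) and the exact calls as assumptions
to adapt to your own data:

# Rough sketch of the two-ROI sanity check (hypothetical file names).
from mvpa2.suite import (fmri_dataset, SampleAttributes, zscore,
                         LinearCSVMC, CrossValidation, NFoldPartitioner)
import numpy as np

attrs = SampleAttributes('attributes.txt')  # one target and run label per volume

for mask in ('motor_roi.nii.gz', 'control_roi.nii.gz'):
    ds = fmri_dataset('bold.nii.gz', targets=attrs.targets,
                      chunks=attrs.chunks, mask=mask)
    zscore(ds, chunks_attr='chunks')        # z-score within each run
    cv = CrossValidation(LinearCSVMC(), NFoldPartitioner(),
                         errorfx=lambda p, t: np.mean(p == t))  # report accuracy
    res = cv(ds)
    # the "should classify" ROI ought to be well above chance, the other near it
    print('%s: %.3f' % (mask, res.samples.mean()))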
I described some of these ideas in Etzel, J.A., V. Gazzola, and C.
Keysers. (2009) An introduction to anatomical ROI-based fMRI
classification analysis. Brain Research, 1282: p. 114-125.
>
> Also, to update on my weird results:
>
> - This doesn't seem to be due to averaging... I just reran with some
> non-averaged data and still get the slightly-negative shift.
I'm not sure that's the best way to evaluate the effect of
averaging. If you're partitioning on the runs, you should be fine, but
when setting up future analyses you need to ensure that averages from
the same run are not split between the training and testing sets.
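For what it's worth, the usual PyMVPA idiom for this (as far as I
understand it; again, I don't use pymvpa much, so the exact calls below
are an assumption) is to average per condition within each run and then
partition on the runs, e.g. assuming a dataset ds with 'targets' and
'chunks' sample attributes like the one sketched above:

# Rough sketch: average per condition within each run, then leave-one-run-out.
from mvpa2.suite import (mean_group_sample, NFoldPartitioner,
                         CrossValidation, LinearCSVMC)

# one averaged sample per condition per run
ds_avg = ds.get_mapped(mean_group_sample(['targets', 'chunks']))

# leave-one-run-out: all averages from a run are held out together, so
# nothing from the same run appears in both training and testing sets
cv = CrossValidation(LinearCSVMC(), NFoldPartitioner(attr='chunks'))
err = cv(ds_avg)
print('mean cross-validation error: %.3f' % err.samples.mean())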
> - I'm as confident as I can be that this isn't a balancing issue... I'm
> running these analyses after stripping away all but 2 conditions. Each
> of these conditions has an identical N in each of the experimental runs.
> - I'm currently running without averaging /and/ without any z-scoring,
> although I think I've seen this shift sans zscore, so I expect it to
> still be there.
> - I'm wondering now if poly_detrend could be the culprit somehow. I
> used: poly_detrend(dataset, polyord=1, chunks_attr='chunks').
> - My pre-pymvpa preprocessing is pretty normal, I think. I ran mnc2nii
> on the data, used fslsplit to get the files into 3d, motion-corrected in
> SPM, and used fslmerge to get the data back in 4d for python.
I don't use fsl or pymvpa enough to give any advice on these particulars.
Jo