[pymvpa] effect size (in lieu of zscore)
J.A. Etzel
jetzel at artsci.wustl.edu
Wed Jan 4 23:04:30 UTC 2012
> I had a similar feeling -- performance distributions should be pretty
> much a mixture of two: a chance distribution (centered at the chance
> level for that task) and some "interesting" one in the right tail,
> e.g. as we have shown in a toy example in
> http://www.pymvpa.org/examples/curvefitting.html#searchlight-accuracy-distributions
That is a pretty figure! But such a clean mixture is certainly not
guaranteed with searchlight fMRI MVPA.
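
As a rough sketch of that mixture idea (this is not the code behind the
linked example, and all of the parameters below are made up), something
like this reproduces the shape:

# Toy simulation: most searchlights perform at chance, a minority
# carry signal; their accuracies mix into one histogram.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_trials = 100                              # test trials per searchlight
n_sl = 20000                                # number of searchlight centers
informative = rng.random(n_sl) < 0.1        # assume 10% carry signal
p_true = np.where(informative, 0.65, 0.5)   # assumed true accuracies
acc = rng.binomial(n_trials, p_true) / n_trials

plt.hist(acc, bins=50)
plt.axvline(0.5, linestyle='--', color='k') # chance level
plt.xlabel('searchlight accuracy')
plt.ylabel('count')
plt.show()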
> indeed that is most often the case, BUT as you have mentioned --
> not always. Sometimes "negative preference" becomes too prominent,
> giving the histogram a peak below chance. As you have discussed, the
> reasons could be various, but I think that it might also be due to the
> same fact -- samples are not independent!
Non-independence is definitely a big issue (not the only one,
unfortunately).
> So in turn it might also amplify those confounds you were talking
> about, leading to anti-learner effects.
I have datasets (high-level cognitive tasks) in which most people show
excellent classification but a few classify well below chance, both at
the ROI level and in searchlight analyses. This can cause spurious
results, particularly in the searchlight analysis (because of the
amplification). It doesn't seem unusual for a few subjects to classify
below chance in large numbers of voxels; I don't think the current
methods deal with this (or even explain it) very well.
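
To see why a few anti-learners matter at the group level, here is a toy
illustration (the accuracy values are invented): two strongly
below-chance subjects inflate the across-subject variance enough to
distort a simple one-sample t-test against chance, and in a searchlight
map this plays out voxel by voxel.

# Toy group-level illustration; all accuracy values are invented.
import numpy as np
from scipy import stats

accs = np.array([0.62, 0.65, 0.60, 0.68, 0.63, 0.61, 0.66])
anti = np.array([0.35, 0.38])               # two below-chance subjects

t1, p1 = stats.ttest_1samp(accs, 0.5)
t2, p2 = stats.ttest_1samp(np.concatenate([accs, anti]), 0.5)
print("without anti-learners: t=%.2f p=%.4f" % (t1, p1))
print("with anti-learners:    t=%.2f p=%.4f" % (t2, p2))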
Jo