[pymvpa] effect size (in lieu of zscore)

J.A. Etzel jetzel at artsci.wustl.edu
Tue Jan 3 18:05:16 UTC 2012

On 1/2/2012 12:38 PM, Jonas Kaplan wrote:
>>     1. For this one particular subject, I'm still seeing the strange
>>     negative peak to the chance distribution, even without any
>>     z-scoring. The shape looks remarkably similar with or without
>>     zscoring (whether I use the raw values or the effect sizes as
>>     input). I think my confusion here is, even if I did several things
>>     wrong in my code, I'd expect no worse than a regular-old looking
>>     chance distribution (centered on 0). There are about 40,000 3.5 mm
>>     isotropic voxels in that subject's brain mask, so plenty of
>>     observations. Just eyeballing, the peak is centered at about -8%,
>>     and the bulk (95%) of observations fall between about -32% and
>>     +22% … so it's a notable shift.
> I don't know how exactly these permutations are done nowadays, but
> couldn't something like this happen if the permutations resulted in
> unequal numbers of trials of each type in the new chunks? I got
> distributions like this before I enabled the perchunk option in the old
> v.4 permuteLabels function to ensure trials were balanced.
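The idea behind that perchunk-style balancing can be sketched in plain numpy (this is not the actual PyMVPA permuteLabels code, just an illustration of the property it enforced): shuffling labels only *within* each chunk reorders them without changing any chunk's class counts.

```python
import numpy as np

def permute_within_chunks(labels, chunks, rng):
    """Shuffle labels independently within each chunk.

    Because labels are only reordered inside a chunk, the per-chunk
    class counts (and so the overall balance) are preserved -- the
    property the old v.4 permuteLabels perchunk option guaranteed.
    """
    permuted = np.array(labels, copy=True)
    for c in np.unique(chunks):
        idx = np.where(chunks == c)[0]
        permuted[idx] = rng.permutation(permuted[idx])
    return permuted

# Toy example: 3 chunks of 4 trials each, balanced 2-class design.
rng = np.random.default_rng(0)
labels = np.array(["A", "B", "A", "B"] * 3)
chunks = np.repeat([0, 1, 2], 4)
shuffled = permute_within_chunks(labels, chunks, rng)
```

A global shuffle, by contrast, can leave some chunks with 3-vs-1 (or worse) class splits, which is one way a permutation distribution can end up shifted off chance.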

Something is definitely odd to get such a bias in the permutation 
distribution. Are you only seeing this in some subjects, or everyone?

If everyone, I'd be tempted to test the entire processing stream with 
data guaranteed to be random. Basically, make a set of fake .nii files 
full of random numbers, with the same size, number of images, etc. as 
your real data. 
If these random images also result in skewed distributions, you'll know 
something in the processing is introducing the bias (e.g. scaling the 
classes separately).
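As a stand-in for the fake-.nii test (in practice you'd write the random volumes out with something like nibabel and run them through your actual pipeline), here is a minimal numpy-only sketch of what the sanity check should show: pure-noise data fed through a simple classifier and cross-validation should give accuracies centered on chance, with no systematic shift. The nearest-mean classifier here is just an illustrative choice, not your pipeline's classifier.

```python
import numpy as np

def nearest_mean_accuracy(X_tr, y_tr, X_te, y_te):
    """Classify test samples by distance to the nearest class mean."""
    m0 = X_tr[y_tr == 0].mean(axis=0)
    m1 = X_tr[y_tr == 1].mean(axis=0)
    d0 = np.linalg.norm(X_te - m0, axis=1)
    d1 = np.linalg.norm(X_te - m1, axis=1)
    pred = (d1 < d0).astype(int)
    return (pred == y_te).mean()

rng = np.random.default_rng(42)
accuracies = []
for _ in range(200):                   # 200 independent fake datasets
    X = rng.standard_normal((40, 50))  # 40 trials x 50 "voxels" of pure noise
    y = np.tile([0, 1], 20)            # balanced 2-class labels
    chunks = np.repeat(np.arange(5), 8)
    fold_accs = []
    for c in range(5):                 # leave-one-chunk-out cross-validation
        te = chunks == c
        fold_accs.append(nearest_mean_accuracy(X[~te], y[~te], X[te], y[te]))
    accuracies.append(np.mean(fold_accs))

print(np.mean(accuracies))  # should sit close to 0.5 for random data
```

If the equivalent run on random .nii files through your real pipeline comes out skewed, some step (e.g. scaling the classes separately, as above) is leaking structure into the data.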

I usually make histograms based on accuracy, so the range is 0 to 1 
(centered on 0.5 for 2-class). What exactly did you plot here to get -50 
to 50?
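My guess (and it is only a guess) at how a -50 to +50 range could arise is that the accuracies were rescaled to percentage points above chance, i.e. (accuracy - 0.5) * 100 for a 2-class problem. The two scales relate like this:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated 2-class chance distribution of accuracies, centered on 0.5.
acc = rng.normal(loc=0.5, scale=0.08, size=40_000).clip(0, 1)

# Same values as percentage points above chance: range -50..+50,
# centered on 0 when the permutation distribution is unbiased.
pct_above_chance = (acc - 0.5) * 100

hist, edges = np.histogram(pct_above_chance, bins=50, range=(-50, 50))
print(round(pct_above_chance.mean(), 2))  # near 0 if nothing is biased
```

On that scale a peak at about -8 would mean the permutation accuracies are centered around 42% rather than 50%, which is the bias worth tracking down.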


More information about the Pkg-ExpPsy-PyMVPA mailing list