[pymvpa] Justification for trial averaging?

J.A. Etzel jetzel at artsci.wustl.edu
Thu Jan 23 20:06:13 UTC 2014


I also agree, and will "toss in" a few more ideas:

>> But forming decision boundaries over features is exactly what a
>> classifier is meant to do, so why not just throw all these
>> different exemplars into the mix, and let the classifier figure out
>> its own notion of prototypicality?
I think because of power, particularly the lack of it. Our datasets are
usually badly underdetermined (far more dimensions than examples),
which makes learning quite difficult. There often just aren't enough
trials to subdivide the data further and still let the classifier learn
the individual exemplars as well; averaging trades many noisy examples
for a few more stable ones, as sketched below.
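
For a concrete sense of what that looks like in PyMVPA itself, here is
a minimal sketch (on a synthetic dataset, so the numbers are arbitrary)
that collapses all trials sharing a condition within each run into one
mean pattern:

    from mvpa2.misc.data_generators import normal_feature_dataset
    from mvpa2.mappers.fx import mean_group_sample

    # Synthetic stand-in for a real dataset: 40 trials (20 per
    # condition), 100 voxels, 5 runs ('chunks').
    ds = normal_feature_dataset(perlabel=20, nlabels=2,
                                nfeatures=100, nchunks=5)

    # Collapse all trials sharing the same condition within each run
    # into a single mean sample: 2 conditions x 5 runs -> 10 examples.
    averager = mean_group_sample(['targets', 'chunks'])
    ds_avg = ds.get_mapped(averager)
    print(ds.shape, '->', ds_avg.shape)   # (40, 100) -> (10, 100)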

>> And if you’re going to pre-classify, why pick the average
>> response? Why not take some kind of lower-dimensional input; the
>> first several eigenvectors or something, or something else?
Probably the most common technique other than averaging is creating
"parameter estimate images": fitting a linear model whose regressors
have been convolved with a hemodynamic response function, then taking
the resulting beta weights as the examples. This can be done in
programs such as SPM, and is a bit closer to the first-level analyses
done for mass-univariate analysis.
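
As a rough sketch of the idea (plain numpy, a crude gamma HRF, and
made-up condition names and onsets; nothing like SPM's actual
machinery):

    import numpy as np

    TR, n_vols = 2.0, 200
    t = np.arange(0, 30, TR)
    hrf = t ** 5 * np.exp(-t) / 120.0   # crude gamma HRF, peak ~5 s

    # Hypothetical conditions and onsets (in volumes).
    onsets = {'faces': [10, 60, 110], 'houses': [35, 85, 135]}
    X = np.zeros((n_vols, len(onsets)))
    for j, cond in enumerate(sorted(onsets)):
        events = np.zeros(n_vols)
        events[onsets[cond]] = 1.0
        X[:, j] = np.convolve(events, hrf)[:n_vols]
    X = np.column_stack([X, np.ones(n_vols)])   # add an intercept

    Y = np.random.randn(n_vols, 500)   # fake data: volumes x voxels
    betas = np.linalg.pinv(X).dot(Y)   # least-squares fit
    # betas[0] and betas[1] are one pattern per condition; in a real
    # analysis such per-run patterns become the classifier's examples.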

> It seems weird to average the regressor weights, but maybe it
> shouldn't. Is that something that's done, or is the averaging process
> only used with raw voxel activity?
It does seem a bit weird, but I've done it. It feels "cleaner" to
generate a single set of parameter estimates than to generate one per
example (or run) and then average, but I don't know whether the two are
actually mathematically different.
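
For what it's worth, a quick toy check suggests they coincide in the
simplest case: if every run shares the same design matrix, ordinary
least squares on the concatenated runs returns exactly the average of
the per-run betas. With run-specific designs or nuisance regressors
the two can differ.

    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.randn(100, 3)                        # same design every run
    Ys = [rng.randn(100, 1) for _ in range(4)]   # four runs, fake data

    pinv = np.linalg.pinv(X)
    mean_of_betas = np.mean([pinv.dot(Y) for Y in Ys], axis=0)
    pooled_beta = np.linalg.pinv(np.vstack([X] * 4)).dot(np.vstack(Ys))
    print(np.allclose(mean_of_betas, pooled_beta))   # True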

Jo


-- 
Joset A. Etzel, Ph.D.
Research Analyst
Cognitive Control & Psychopathology Lab
Washington University in St. Louis
http://mvpa.blogspot.com/


