[pymvpa] Justification for trial averaging?

MS Al-Rawi rawi707 at yahoo.com
Fri Jan 24 10:39:40 UTC 2014


I apologize for the multiple postings; there might be some problem with the email server!
-Rawi




On Friday, January 24, 2014 2:48 AM, MS Al-Rawi <rawi707 at yahoo.com> wrote:


>
>I think a correlation classifier/method was used in Haxby et al.'s 2001 work, and it gave high classification accuracy using the averages.
>One might argue, although I am not sure about this, that assigning a single volume/exemplar to a single label/condition is problematic, and thus that averaging is a good option.
>
>
>Using a correlation-based classifier on individual exemplars/volumes would give lower accuracy than using the averages, but other, more powerful classifiers, e.g. SVMs, LR, ANN, ridge-LR, will do well.
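>
>As a rough sketch of what I mean (synthetic data and made-up numbers, so only an illustration, not the Haxby analysis itself): average the exemplars of each condition within each half of the data, then label each averaged pattern by whichever training average it correlates with most strongly.
>
>import numpy as np
>
>rng = np.random.RandomState(0)
>n_conditions, n_trials, n_voxels = 4, 20, 50
>
># synthetic "patterns": one prototype per condition plus trial-level noise
>prototypes = rng.randn(n_conditions, n_voxels)
>
>def simulate_half():
>    data = np.vstack([prototypes[c] + rng.randn(n_trials, n_voxels)
>                      for c in range(n_conditions)])
>    labels = np.repeat(np.arange(n_conditions), n_trials)
>    return data, labels
>
>train, train_labels = simulate_half()   # e.g. odd runs
>test, test_labels = simulate_half()     # e.g. even runs
>
># trial averaging: one mean pattern per condition in each half
>train_avg = np.vstack([train[train_labels == c].mean(axis=0)
>                       for c in range(n_conditions)])
>test_avg = np.vstack([test[test_labels == c].mean(axis=0)
>                      for c in range(n_conditions)])
>
># correlation "classifier": each averaged test pattern gets the label of
># the training average it correlates with most strongly
>r = np.corrcoef(test_avg, train_avg)[:n_conditions, n_conditions:]
>pred = r.argmax(axis=1)
>print("accuracy on the averages:", np.mean(pred == np.arange(n_conditions)))
>
>Within PyMVPA itself, if I remember correctly, the mean_group_sample mapper does the averaging step, e.g. ds.get_mapped(mean_group_sample(['targets', 'chunks'])) to get one sample per condition per run before classification.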
>
>-Rawi
>
>
>
>> On Thursday, January 23, 2014 8:06 PM, J.A. Etzel <jetzel at artsci.wustl.edu> wrote:
>> I also agree, and will "toss in" a few more ideas:
>> 
>>>>  But forming decision boundaries over features is exactly what a
>>>>  classifier is meant to do, so why not just throw all these
>>>>  different exemplars into the mix, and let the classifier figure out
>>>>  its own notion of prototypicality?
>> I think because of power, particularly the lack of it. Our datasets are
>> usually massively out of balance (way more dimensions than examples),
>> making learning quite difficult. It often just isn't possible to further
>> subdivide the data to let the classifier learn the exemplars as well.
>> 
>>>>  And if you’re going to pre-classify, why pick the average
>>>>  response? Why not take some kind of lower-dimensional input; the
>>>>  first several eigenvectors, or something else?
>> Probably the most common technique other than averaging is creating
>> "parameter estimate images": taking the beta weights that result from
>> fitting a linear model, with regressors convolved with a hemodynamic
>> response function, to the voxel time series. This can be done in
>> programs such as SPM, and is a bit closer to the first-level analyses
>> done for mass-univariate analysis.
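>> 
>> A minimal sketch of that idea (toy onsets, a simplified gamma HRF, and
>> plain least squares -- nothing SPM-specific): build a boxcar regressor
>> per condition, convolve it with the HRF, and keep the fitted beta
>> weights as the per-condition "parameter estimate image" for every voxel.
>> 
>> import numpy as np
>> from scipy.stats import gamma
>> 
>> TR, n_scans, n_voxels = 2.0, 120, 50
>> t = np.arange(n_scans) * TR
>> 
>> # simplified canonical HRF: difference of two gamma densities
>> hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)
>> hrf /= hrf.sum()
>> 
>> # boxcar regressors (10 s blocks) convolved with the HRF, plus intercept
>> onsets = {"cond_A": [10, 70, 130], "cond_B": [40, 100, 160]}  # seconds
>> X = []
>> for cond, times in sorted(onsets.items()):
>>     box = np.zeros(n_scans)
>>     for on in times:
>>         box[int(on / TR):int(on / TR) + 5] = 1.0
>>     X.append(np.convolve(box, hrf)[:n_scans])
>> X = np.column_stack(X + [np.ones(n_scans)])
>> 
>> # fake voxel time series, then ordinary least-squares betas:
>> # one row per regressor, one column per voxel
>> rng = np.random.RandomState(1)
>> true_betas = np.vstack([2.0 * rng.rand(1, n_voxels),
>>                         0.5 * rng.rand(1, n_voxels),
>>                         np.zeros((1, n_voxels))])
>> Y = X.dot(true_betas) + rng.randn(n_scans, n_voxels)
>> betas = np.linalg.pinv(X).dot(Y)
>> print(betas.shape)   # (3, n_voxels): cond_A, cond_B, intercept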
>> 
>>>  It seems weird to average the regressor weights, but maybe it
>>>  shouldn't. Is that something that's done, or is the averaging process
>>>  only used with raw voxel activity?
>> It does seem a bit weird, but I've done it. It feels "cleaner" to 
>> generate a single set of parameter estimates than to generate one per 
>> example (or run) then average, but I don't know if it is actually 
>> mathematically different.
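>> 
>> A toy sketch of why the two are not identical in general (one regressor
>> per run, ordinary least squares, no nuisance terms -- deliberately
>> oversimplified): a single fit to the concatenated runs is an
>> X'X-weighted average of the per-run betas, whereas averaging per-run
>> betas weights every run equally.
>> 
>> import numpy as np
>> 
>> rng = np.random.RandomState(2)
>> beta_true = 1.0
>> 
>> # two runs with the same true effect but differently scaled regressors
>> X1 = rng.randn(100, 1)
>> X2 = 3.0 * rng.randn(100, 1)
>> y1 = X1[:, 0] * beta_true + rng.randn(100)
>> y2 = X2[:, 0] * beta_true + rng.randn(100)
>> 
>> ols = lambda X, y: np.linalg.pinv(X).dot(y)[0]
>> 
>> # option 1: fit each run separately, then average the betas
>> b_avg = 0.5 * (ols(X1, y1) + ols(X2, y2))
>> 
>> # option 2: one model over the concatenated data with a shared beta
>> b_cat = ols(np.vstack([X1, X2]), np.concatenate([y1, y2]))
>> 
>> print(b_avg, b_cat)   # close here, but not equal in general
>> 
>> With well-matched designs across runs the weights are nearly equal, so
>> the two come out very close.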
>> 
>> Jo
>> 
>> 
>> -- 
>> Joset A. Etzel, Ph.D.
>> Research Analyst
>> Cognitive Control & Psychopathology Lab
>> Washington University in St. Louis
>> http://mvpa.blogspot.com/
>> 


