[pymvpa] Cross-participant MVPA and controlling for traits of no interest
john.clithero at gmail.com
Tue Jul 12 16:26:49 UTC 2011
Hi Jo -
Thanks for your response. The features are structural data, so yes, they are
imaging data. It just so happens that a behavioral trait is also predictive of whether or
not participants are in Group A or Group B. Since the behavioral trait is
not of the same "type" as the features, it seems incorrect to simply add it
to the feature space. Still, though, I would like to "control" for the
predictability of that trait in the MVPA.
Does that make more sense?
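To make the residual idea concrete, here is a minimal sketch of the two-step approach described in the quoted message below: first regress group membership on trait X with a logistic regression, then run SVM *regression* on the residuals under leave-one-out CV. This uses scikit-learn and synthetic data purely for illustration (not PyMVPA's API, and not the poster's actual data); every variable name here is a hypothetical placeholder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVR
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n = 40
trait_x = rng.normal(size=(n, 1))                          # trait of no interest
y = (trait_x[:, 0] + rng.normal(size=n) > 0).astype(int)   # group labels (0/1)
features = rng.normal(size=(n, 100))                       # synthetic "structural" features

# Step 1: univariate logistic regression of group on trait X.
logit = LogisticRegression().fit(trait_x, y)
# Residuals = what trait X leaves unexplained in the binary labels.
residuals = y - logit.predict_proba(trait_x)[:, 1]

# Step 2: SVM regression on those continuous residuals, leave-one-out CV.
pred = cross_val_predict(SVR(kernel='linear'), features, residuals,
                         cv=LeaveOneOut())

# Correlate out-of-sample predictions with the residuals to gauge whether
# the multivariate pattern carries information beyond trait X.
r = np.corrcoef(pred, residuals)[0, 1]
```

One caveat worth flagging: the logistic regression here is fit on the full sample, so for a strict test it would be cleaner to refit it inside each CV fold.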
On Tue, Jul 12, 2011 at 11:25 AM, J.A. Etzel <jetzel at artsci.wustl.edu> wrote:
> To clarify a bit: what are your features for the classification? I take it
> they're not voxel values/imaging data but rather some sort of behavioral
> measure?
> As a general strategy I'd work hard to make sure the feature you don't want
> driving the classification is not present in the training data, rather than
> trying to adjust for it afterwards.
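One common way to act on that strategy is to residualize the features themselves against trait X before any classifier sees them: regress each feature column on trait X and keep only the residuals. The sketch below does this with ordinary least squares in NumPy on synthetic data; the variable names are illustrative assumptions, not anything from the thread, and in a real analysis the regression should be estimated on training data only to avoid leakage into the test fold.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
trait_x = rng.normal(size=n)
# Synthetic features deliberately contaminated with trait X.
features = rng.normal(size=(n, 100)) + np.outer(trait_x, rng.normal(size=100))

# Least-squares fit of every feature column on [intercept, trait_x].
design = np.column_stack([np.ones(n), trait_x])
beta, *_ = np.linalg.lstsq(design, features, rcond=None)

# Subtract the fitted values: the residuals are orthogonal to trait X.
cleaned = features - design @ beta
```

The `cleaned` matrix can then be fed to any classifier; whatever it decodes can no longer be a linear reflection of trait X.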
> On 7/11/2011 3:39 PM, John Clithero wrote:
>> Hi PyMVPAers -
>> I have a bit of a thought problem (but also hopefully an implementation
>> question).
>> I am performing cross-participant classification (do they belong to
>> group A or group B?) and that classification works quite well (I've
>> tried several different algorithms and the leave-one-out CVs are all
>> significant). However, there is a trait X that we wish to control for:
>> trait X is something we are not interested in and would prefer to have
>> no effect on prediction, yet as a univariate predictor in a simple
>> logistic regression it also predicts group A vs. group B significantly
>> well. I am hoping for some help in determining the best option.
>> One option that I've thought of involves running SVM regression on the
>> residuals from the logistic regression (so, instead of SVM on 0s and 1s,
>> give it the continuous variable of the residuals and run SVM
>> regression). This would (I think) effectively ask if a multivariate
>> analysis can predict the variance that remains in individual binary
>> classification after we have accounted for trait X. Does this sound
>> reasonable, or is there another option that adjusts the CV post hoc to
>> take trait X into account?
>> And, if that does sound reasonable, is there a straightforward way to
>> implement this test in PyMVPA?
>> Thanks for humoring me and my thought problem.
>> Pkg-ExpPsy-PyMVPA mailing list
>> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org