[pymvpa] Pattern localization
Matthias Ekman
matthias.ekman at googlemail.com
Fri Apr 24 17:12:22 UTC 2009
>>> so -- either those raw (mean) sensitivities, or their stability assessment
>>> across splits (just make sure that you normalize your features prior to doing
>>> that, with zscore or something else). If there is no feature selection, then
>>> permutation testing of F- or converted z-scores might also be a nice one.
>>> In both cases though you need to decide on the thresholding scheme ;)
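To make sure I follow, here is how I picture that split-wise stability assessment in plain numpy (the mean class difference below is just a stand-in for a real classifier's sensitivity map; all names and numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: 40 samples x 30 features, two classes, 4 chunks (splits)
X = rng.standard_normal((40, 30))
y = np.tile(np.repeat([1, -1], 5), 4)      # 5 per class per chunk
chunks = np.repeat(np.arange(4), 10)
X[y == 1, :3] += 2.0                       # 3 informative features

# z-score features within each chunk before assessing stability
for c in np.unique(chunks):
    m = chunks == c
    X[m] = (X[m] - X[m].mean(axis=0)) / X[m].std(axis=0)

# One "sensitivity" map per split: mean class difference per feature
sens = np.array([X[(chunks == c) & (y == 1)].mean(axis=0)
                 - X[(chunks == c) & (y == -1)].mean(axis=0)
                 for c in np.unique(chunks)])

# Stability: mean across splits relative to its variability
stability = sens.mean(axis=0) / sens.std(axis=0)
```

Features with a large |stability| are those whose sensitivity keeps the same sign and magnitude across splits.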
>> I looked through some papers and it seems to me that this point is
>> somehow arbitrary. Some guys are taking the "best" 20% of features, some
>> others discard those features with SD > 2, others show those features
>> which overlap with other subjects' features... or those where noise
>> perturbation shows a minimum effect of 30% ... and so on.
>
> yeah... there is also yet another way, which is related to
> null-hypothesis permutation testing and false discovery rate... it is not published
> anywhere and we have not had a chance/time to elaborate on it, but you might
> look into
> mvpa/misc/transformers.py : DistPValue
Cool! Thanks again for the hint.
from the source code:
"WARNING: Highly experimental/slow/etc: no theoretical grounds have been
presented in any paper, nor proven"
Sounds like: if you use it (right now), you must be a total idiot ;-)
But it would be very nice to have some p-values and map them with a
threshold using a multiple-comparison correction criterion. I'm really
looking forward to this transformation.
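For my own understanding, here is a plain numpy sketch of the permutation idea: shuffle the labels to build a per-feature null distribution, then threshold by p-value (the simple mean-difference "sensitivity" and all the numbers are made up for illustration; this is not DistPValue itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 samples x 50 features, two balanced classes
X = rng.standard_normal((20, 50))
y = np.repeat([1, -1], 10)
X[y == 1, :5] += 3.0          # make the first 5 features informative

def sensitivity(X, y):
    """Mean class difference per feature -- a stand-in for a real
    classifier's sensitivity map."""
    return X[y == 1].mean(axis=0) - X[y == -1].mean(axis=0)

observed = sensitivity(X, y)

# Null distribution: recompute the map under permuted labels
n_perm = 1000
null = np.empty((n_perm, X.shape[1]))
for i in range(n_perm):
    null[i] = sensitivity(X, rng.permutation(y))

# Per-feature two-sided p-values, then an (uncorrected) threshold --
# this is exactly the spot where an FDR or similar correction would go
p = (np.abs(null) >= np.abs(observed)).mean(axis=0)
selected = p < 0.05
```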
>> In your paper (Hanson & Halchenko, 2008) you used a recursive feature
>> elimination algorithm to sequentially eliminate features with the
>> smallest squared values of the separating plane's normal coefficients.
>> Then you derived weights for the FACE class by using only FACE class SVs
>> (and for the HOUSE class in the same way).
> correct... important to mention that in preprocessing the data was standardized
> (zscored) against the baseline condition. Otherwise, taking per-class means
> would make little to no sense
OK! I assume that by "baseline condition" you mean the explicit baseline
("rest") and not the implicit baseline, which would be everything else
except FACE and HOUSE (as e.g. FSL does), right?
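Just to be explicit about what I mean, a small numpy sketch of zscoring every voxel against the explicit "rest" baseline only (toy numbers, not PyMVPA's own implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dataset: rows are volumes, columns are voxels; 'labels' marks the
# condition of each volume ('rest' is the explicit baseline here)
data = rng.standard_normal((30, 100)) * 5 + 100   # raw BOLD-like values
labels = np.array(['rest'] * 10 + ['face'] * 10 + ['house'] * 10)

# Estimate mean/std from the baseline volumes only, then apply
# the transform to everything
base = data[labels == 'rest']
mu, sd = base.mean(axis=0), base.std(axis=0)
zscored = (data - mu) / sd
```

After this, FACE and HOUSE values are expressed in baseline standard deviations, so per-class means become interpretable.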
>> Is there also some supplemental (maybe I forgot about it) where it is
>> shown how to extract only the SVs of a specific class? Or would it be
>> possible to explain this procedure?
> all that analysis was done before we started the massive PyMVPA movement and
> the advocacy of code as supplementals ;) I could dig out some of that code,
> which was crafted around a stand-alone patched lightsvm, but I guess it
> would be easier to just explain and actually patch LinearSVMWeights
> to do this procedure, which is quite a simple one: for each of the 2 classes
> the SVM assigns a weight to each support vector -- positive for the +1
> class, negative for the -1 class. So it is just a matter of summing up the
> corresponding SVs accordingly... let me simply patch libsvm's
> LinearSVMWeights for now... I will let you know whenever it is done
Oh great! Thank you very much for putting so much work into this!
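If I understand the procedure correctly, it can be sketched in a few lines of numpy (the support vectors and dual weights below are invented for illustration; in practice they would come from a trained linear SVM):

```python
import numpy as np

# Suppose a trained linear SVM kept 4 support vectors (rows), with
# dual weights alpha_i * y_i: positive for the +1 (e.g. FACE) class,
# negative for the -1 (e.g. HOUSE) class
svs = np.array([[ 1.0,  2.0, 0.5],
                [ 0.5,  1.5, 1.0],
                [-1.0, -0.5, 0.0],
                [-0.5, -1.0, 0.5]])
coef = np.array([0.7, 0.3, -0.6, -0.4])   # alpha_i * y_i

# Full hyperplane normal: weighted sum over all SVs
w_full = coef @ svs

# Per-class maps: restrict the sum to each class's own SVs
w_pos = coef[coef > 0] @ svs[coef > 0]    # e.g. FACE map
w_neg = coef[coef < 0] @ svs[coef < 0]    # e.g. HOUSE map

# The two per-class maps sum back to the full weight vector
assert np.allclose(w_full, w_pos + w_neg)
```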
Best Regards,
Matthias
More information about the Pkg-ExpPsy-PyMVPA mailing list