[pymvpa] Pattern localization

Yaroslav Halchenko debian at onerussian.com
Fri Apr 24 15:23:21 UTC 2009


>> so -- either those raw (mean) sensitivities, or their stability assessment
>> across splits (just make sure that you normalize your features prior to
>> doing that, with zscore or something else).  If there is no feature
>> selection, then permutation testing of F- or converted z-scores might also
>> be a nice option.  In both cases though you need to decide on the
>> thresholding scheme ;)
> I looked through some papers and it seems to me that this point is
> somewhat arbitrary. Some folks take the "best" 20% of features, some
> others discard those features with SD > 2, others show those features
> which overlap with other subjects' features... or those where noise
> perturbation shows a minimum effect of 30% ... and so on.

yeah... there is also yet another way, which is related to
null-hypothesis permutation testing and false discovery rate... it is
not published anywhere and we have not had a chance/time to elaborate on
it, but you might look into
mvpa/misc/transformers.py : DistPValue

;)
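Since the DistPValue approach itself is unpublished, here is only a generic
sketch of the surrounding idea -- per-feature permutation p-values thresholded
with Benjamini-Hochberg FDR -- in plain NumPy on simulated data. The null
distribution, signal features, and the q level are all made up for
illustration; this is not PyMVPA's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sensitivity map: most features are noise, a few carry signal.
n_features, n_perms = 200, 1000
sens = rng.normal(size=n_features)
sens[:10] += 6.0  # hypothetical "informative" features

# Null distribution per feature, e.g. sensitivities recomputed under
# permuted labels (here just simulated noise for illustration).
null = rng.normal(size=(n_perms, n_features))

# Two-sided permutation p-value with the usual +1 correction.
p = (1 + (np.abs(null) >= np.abs(sens)).sum(axis=0)) / (1 + n_perms)

# Benjamini-Hochberg FDR thresholding at q = 0.05.
q = 0.05
order = np.argsort(p)
ranked = p[order]
crit = q * np.arange(1, n_features + 1) / n_features
below = ranked <= crit
k = below.nonzero()[0].max() + 1 if below.any() else 0
survivors = order[:k]  # features surviving the FDR threshold
```

The attraction over the ad-hoc schemes above is that q has a direct
interpretation: the expected fraction of false positives among the
surviving features.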

> In your paper (Hanson & Halchenko, 2008) you used a recursive feature
> elimination algorithm to sequentially eliminate features with the
> smallest squared values of the separating plane normal coefficients.
> Then you derived weights for the FACE class by using only FACE class SVs
> (and for the HOUSE class in the same way).
correct... it is important to mention that in preprocessing the data was
standardized (z-scored) against the baseline condition. Otherwise,
taking per-class means would make little to no sense.
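PyMVPA has its own zscore helper for this, but the idea is simple enough to
sketch in plain NumPy (toy data and label names are made up): estimate mean
and standard deviation from the baseline samples only, z-score everything
against those estimates, and only then take per-class means:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: samples x features, labeled 'baseline', 'face', 'house'.
X = rng.normal(loc=5.0, scale=2.0, size=(60, 8))
labels = np.array(['baseline', 'face', 'house'] * 20)

# Estimate mean/std from the baseline condition only...
base = X[labels == 'baseline']
mu, sd = base.mean(axis=0), base.std(axis=0)

# ...and z-score all samples against that baseline estimate.
Xz = (X - mu) / sd

# Per-class mean maps are now expressed in baseline standard deviations,
# so they are comparable across features.
face_map = Xz[labels == 'face'].mean(axis=0)
house_map = Xz[labels == 'house'].mean(axis=0)
```

Without this step, per-class means would mostly reflect each feature's raw
offset and scale rather than condition-related signal.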

> Is there also some supplemental (may be I forgot about) where it is  
> shown how to extract only special class SVs? Or would it be possible to  
> explain this procedure?
all that analysis was done before we started the massive PyMVPA movement
and the advocacy of code as supplementals ;)  I could dig out some of
that code, which was crafted around a stand-alone patched lightsvm, but
I guess it would be easier just to explain and actually patch
LinearSVMWeights to do this procedure, which is quite a simple one: for
each of the 2 classes, SVM assigns a weight to each support vector --
positive for the +1 class, negative for -1.  So it is just a matter of
summing up the corresponding SVs accordingly... let me simply go ahead
and patch libsvm's LinearSVMWeights for now... I will let you know
whenever it is done
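Until that patch lands, the decomposition can be sketched in plain NumPy
(the support vectors and dual coefficients below are toy values, not output
of any real SVM training): the hyperplane normal is w = sum_i (alpha_i y_i) x_i,
so restricting the sum to one class's support vectors gives that class's map.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "support vectors" (rows) and their dual coefficients alpha_i * y_i:
# positive entries come from the +1 (say, FACE) class, negative from -1.
sv = rng.normal(size=(6, 10))
dual = np.array([0.8, 0.3, 0.5, -0.7, -0.4, -0.5])  # sums to 0, as in SVM

# Full hyperplane normal: w = sum_i (alpha_i y_i) * x_i
w = dual @ sv

# Per-class contributions: restrict the sum to one class's SVs.
pos = dual > 0
w_face = dual[pos] @ sv[pos]     # hypothetical FACE-class weights
w_house = dual[~pos] @ sv[~pos]  # hypothetical HOUSE-class weights

# The two class maps add back up to the full weight vector.
assert np.allclose(w_face + w_house, w)
```

Note that the two class maps sum exactly to the full weight vector, so this
is a decomposition of w rather than two independent analyses.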


-- 
Yaroslav Halchenko
Research Assistant, Psychology Department, Rutgers-Newark
Student  Ph.D. @ CS Dept. NJIT
Office: (973) 353-1412 | FWD: 82823 | Fax: (973) 353-1171
        101 Warren Str, Smith Hall, Rm 4-105, Newark NJ 07102
WWW:     http://www.linkedin.com/in/yarik        



More information about the Pkg-ExpPsy-PyMVPA mailing list