[pymvpa] Combinatorial MVPA
Richard Dinga
dinga92 at gmail.com
Thu Dec 10 00:10:43 UTC 2015
Bill Broderick wrote:
> However, to determine which timecourse is contributing the most to the
> classifier's performance, see which timecourses or which combination
> of timecourses caused the greatest drop in performance when removed.
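The ablation idea quoted above (drop each feature, measure the hit to cross-validated accuracy) could be sketched roughly like this. The data, classifier choice, and names here are all made up for illustration, not from the thread:

```python
# Leave-one-feature-out ablation sketch: remove each "timecourse"
# (column) in turn and compare cross-validated accuracy to baseline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 5))          # 60 samples, 5 toy timecourses
# only feature 0 carries the label here
y = (X[:, 0] + 0.1 * rng.standard_normal(60) > 0).astype(int)

clf = LinearSVC()
baseline = cross_val_score(clf, X, y, cv=5).mean()

drops = {}
for j in range(X.shape[1]):
    X_reduced = np.delete(X, j, axis=1)   # ablate feature j
    score = cross_val_score(clf, X_reduced, y, cv=5).mean()
    drops[j] = baseline - score           # large drop => important feature

most_important = max(drops, key=drops.get)
```

Note the caveat discussed later in the thread: with redundant features, ablating one of a correlated pair may show almost no drop even when the pair is jointly important.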
I wrote:
> You might take a look at the Relief algorithm (also implemented in
> PyMVPA); it is a less hacky approach to your feature-weighting problem.
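For reference, the basic Relief idea can be sketched in a few lines of numpy. This is only an illustration of the weighting scheme, not PyMVPA's actual implementation (PyMVPA ships a fuller iterative variant):

```python
import numpy as np

def relief_weights(X, y, n_iter=100, rng=None):
    # Minimal sketch of basic Relief for binary labels: sample an
    # instance, find its nearest hit (same class) and nearest miss
    # (other class); features that differ more toward the miss than
    # toward the hit gain weight.
    rng = np.random.default_rng(rng)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)   # L1 distances to X[i]
        dist[i] = np.inf                      # exclude the instance itself
        same = (y == y[i])
        hit = int(np.argmin(np.where(same, dist, np.inf)))
        miss = int(np.argmin(np.where(~same, dist, np.inf)))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter
```

Unlike per-feature ablation, Relief scores every feature without refitting a classifier, and because the weight update looks at nearest neighbors it can credit features that only matter in combination.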
Yaroslav Halchenko wrote:
> there is yet another black hole of methods to assess the contribution of
> each feature to the performance of the classifier. The irelief, which was
> mentioned, is one of them...
> So what is your classification performance if you just do
> classification on all features? Which one could you obtain if you do
> feature selection, e.g. with SplitRFE (which would eliminate features to
> attain the best performance within each CV fold in nested CV)?
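The nested-CV comparison Yaroslav suggests might look roughly like this, with scikit-learn's RFECV standing in for PyMVPA's SplitRFE (the dataset and parameters below are invented for illustration):

```python
# Compare: all features vs. RFE-selected features, where the selection
# itself happens inside each outer CV fold (nested cross-validation).
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=20,
                           n_informative=3, random_state=0)

# Baseline: plain cross-validation on all features.
all_feats = cross_val_score(SVC(kernel='linear'), X, y, cv=5).mean()

# Nested CV: RFECV eliminates features within each outer training fold,
# so the feature selection never sees the outer test data.
pipe = make_pipeline(RFECV(SVC(kernel='linear'), cv=3),
                     SVC(kernel='linear'))
selected = cross_val_score(pipe, X, y, cv=5).mean()
```

Putting the selector inside the pipeline is the key point: selecting features on the full dataset first and cross-validating afterwards would leak information into the test folds.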
I think there are (at least) two separate problems:
1. How to evaluate the predictive power of every feature, in order to
interpret the data.
2. How to evaluate the importance of features for a classifier, in order to
understand the model and possibly select a set of features that gives the
best performance.
Feature selection methods like Lasso or RFE would (as far as I know) omit
most redundant/highly correlated features, therefore making 1.
impossible. It still might be a good idea for other reasons.
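A toy illustration of that point (made-up data, not from the thread): given two nearly identical features that both carry the signal, Lasso typically shrinks one of the two coefficients to (near) zero, so the per-feature coefficients no longer reflect each feature's predictive power:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x1 = rng.standard_normal(200)
x2 = x1 + 0.01 * rng.standard_normal(200)   # nearly a copy of x1
X = np.column_stack([x1, x2])
y = x1 + 0.1 * rng.standard_normal(200)     # both columns predict y equally well

coefs = Lasso(alpha=0.1).fit(X, y).coef_
# One of the two correlated coefficients ends up (near) zero, even
# though both features are individually predictive.
```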