[pymvpa] Cross validation model selection.

Roberto Guidotti robbenson18 at gmail.com
Fri Jul 4 14:28:24 UTC 2014


thank you for the quick response!

> It will be the classifier trained on the last fold.

> I don't think it would make much sense. "Best" means higher accuracy when
> predicting a particular test dataset compared to all others. It doesn't
> necessarily mean that that particular model fit is really better; it
> could also be a "better/cleaner" test set.
> You could explore this with more complicated data-folding schemes and
> evaluate each trained classifier on multiple different test sets. But
> I am not sure what you would gain from doing so...

Ok, I was also thinking about that; it always depends on the train/test data! I
mean, if you always change the training set you're changing the initial
conditions, so you cannot state that one classifier is better than the others!

Is there an easy pymvpa-based ;) way to store each cross-validation
classifier, or is it better to do it manually?
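For the manual route, one possible approach is sketched below. It uses
scikit-learn rather than PyMVPA (the dataset, classifier, and fold count are
all arbitrary placeholders), but the idea carries over: clone the untrained
classifier before each fold, fit the clone, and keep it, so you end up with
one trained classifier per fold instead of only the one from the last fold.

```python
# Sketch (assumes scikit-learn): retain every fold's trained classifier
# during cross-validation so each fit can be inspected afterwards.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
X = rng.randn(60, 10)            # toy data: 60 samples, 10 features
y = rng.randint(0, 2, 60)        # binary labels

clf = LinearSVC()
cv = StratifiedKFold(n_splits=5)

fold_classifiers = []            # one trained classifier per fold
fold_accuracies = []

for train_idx, test_idx in cv.split(X, y):
    fold_clf = clone(clf)        # fresh, untrained copy for this fold
    fold_clf.fit(X[train_idx], y[train_idx])
    fold_classifiers.append(fold_clf)
    fold_accuracies.append(fold_clf.score(X[test_idx], y[test_idx]))

print(len(fold_classifiers), fold_accuracies)
```

Cloning before each fit is the important part: reusing the same classifier
object would leave you with only the parameters from the final fold, which is
exactly the behaviour described above.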

Thank you,
