[pymvpa] Train and test on different classes from a dataset
Yaroslav Halchenko
debian at onerussian.com
Tue Feb 5 13:51:13 UTC 2013
Hi Francisco,
Great that you followed up -- could you please clarify for me whether
in one of your publications you indeed did a power/ROC analysis of such a
permutation scheme (keeping the testing set label assignment) against the
"classical" one (permuting all assignments independently)? I have a vague
memory that you did, but I could be wrong.
NB I will argue with Michael in reply to his post ;)
On Tue, 05 Feb 2013, Francisco Pereira wrote:
> I'm catching up with this long thread and all I can say is I fully
> concur with Michael, in particular:
> On Tue, Feb 5, 2013 at 3:11 AM, Michael Hanke <mih at debian.org> wrote:
> > Why are we doing permutation analysis? Because we want to know how
> > likely it is to observe a specific prediction performance on a
> > particular dataset under the null hypothesis H0, i.e. how good a
> > classifier can get at predicting our empirical data when the training
> > did not contain the signal of interest -- aka chance performance.
> Permuting the test set might make sense, perhaps, if you wanted to
> make a statement about the result variability over all possible test
> sets of that size if H0 was true.
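For concreteness, here is a minimal sketch of the "classical" H0 scheme Michael
describes: permute only the training labels, refit, and score on the untouched
test set to build a null distribution of chance performance. It uses
numpy/scikit-learn as a stand-in rather than PyMVPA's own permutation
machinery, and the toy data are made up:

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Toy two-class data with a weak signal in the first few features.
y = np.tile([0, 1], 50)
X = rng.standard_normal((100, 20))
X[y == 1, :5] += 1.0
train, test = np.arange(80), np.arange(80, 100)

# Observed performance with the true training labels.
clf = SVC(kernel="linear").fit(X[train], y[train])
observed = accuracy_score(y[test], clf.predict(X[test]))

# Null distribution: shuffle only the training labels so the classifier
# cannot learn the signal of interest; the test labels stay untouched.
null = []
for _ in range(1000):
    y_perm = rng.permutation(y[train])
    clf = SVC(kernel="linear").fit(X[train], y_perm)
    null.append(accuracy_score(y[test], clf.predict(X[test])))

# One-sided p-value: how often does chance do at least as well as observed?
p = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print("observed=%.3f  p=%.3f" % (observed, p))

Permuting the test labels instead would speak to Francisco's different
question -- variability over possible test sets under H0 -- rather than how
well a classifier can do when the training carries no signal.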
--
Yaroslav O. Halchenko
Postdoctoral Fellow, Department of Psychological and Brain Sciences
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik