[pymvpa] On below-chance classification (Anti-learning, encore)
Yaroslav Halchenko
debian at onerussian.com
Thu Jan 31 14:18:32 UTC 2013
On Thu, 31 Jan 2013, Jacob Itzhacki wrote:
> Dear Rawi and fellow PyMVPAers,
> Thanks for your prompt response. Apologies once again for the
> difficulties, which I ascribe to finding this counterintuitive.
> That said, I have considered your suggestion and I have a couple of
> questions regarding it:
> - First off, what to do about significant (p<0.01) classifications
> that hover around chance level?
be skeptical/cautious about them (see the sketch below)
> In the case of 4-way classification
> (25% chance) there is a (seemingly) much improved chance that the
> significance threshold is reached even when classification hovers
> around, or sits exactly at, chance level.
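To illustrate why I would be cautious: a minimal sketch (hypothetical
numbers, plain scipy, not your data) of a simple binomial test.  With
enough test trials pooled across folds, an accuracy only slightly above
25% already clears p<0.01 -- and note that such a test also assumes
independent trials, which fMRI trials rarely are:

from scipy.stats import binom

n_trials = 400     # hypothetical: test trials pooled across folds
chance = 0.25      # 4-way classification
n_correct = 125    # 31.25% accuracy -- "hovering" just above chance

# P(X >= n_correct) under H0 that the classifier guesses at chance
p = binom.sf(n_correct - 1, n_trials, chance)
print("accuracy = %.3f  p = %.4g" % (n_correct / float(n_trials), p))
# -> p well below 0.01 despite only a ~6% excess over chance

So a significant p-value tells you "above chance", not "far above
chance" -- look at effect sizes and permutation-based null distributions
(e.g. MCNullDist in PyMVPA) rather than the p-value alone.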
> - Would we be able to treat the differing significance spectrum as
> individual datapoints or would it have to be a dichotomous statistic
> (e.g. p<0.01, yes or no?)?
not exactly clear on where you are aiming... but let me paraphrase it --
is your scientific question dichotomous (yes/no) or a "spectrum"? ;)
> Moreover, going back to the original question, is it safe to say that
> with below-chance classification performance, even though the
> classifier is seemingly doing the opposite of what we are expecting,
> it is actually "learning" and hence there was information to learn
> from?
My fear is that this indeed might be the case in some situations, but
(a) not necessarily in yours (as MS Al-Rawi pointed out, you can get
below-chance just by chance, and you said that only 1/3 of your results
are below chance, which is "reasonable"), and (b) I do not know of any
paper demonstrating the presence of such effects in fMRI.
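FWIW, on the "below-chance just by chance" point, a quick numpy sketch
(hypothetical numbers; a "classifier" that only guesses) shows how often
chance-level results dip below chance from sampling noise alone:

import numpy as np

rng = np.random.RandomState(0)
n_subjects = 30
n_trials = 40            # hypothetical test trials per subject
chance = 0.25            # 4-way classification

# per-subject accuracies of a classifier that only guesses (H0 draws)
accuracies = rng.binomial(n_trials, chance, n_subjects) / float(n_trials)
print("fraction below chance: %.2f" % np.mean(accuracies < chance))
# -> typically close to half of purely chance-level results land
#    strictly below 25%, so ~1/3 below chance is unremarkable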
Cheers,
--
Yaroslav O. Halchenko
Postdoctoral Fellow, Department of Psychological and Brain Sciences
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik