[pymvpa] Consistently bad accuracy?
kaustubh.patil at gmail.com
Mon Nov 26 12:34:27 GMT 2018
I suspect that there might be something wrong in the code or in how the data is
handled. If you get 30% accuracy, that means you would get 70% by applying a
simple rule that predicts the "other" class on top of your classifier. This is a
sign that something is not right in the data handling/evaluation.
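A quick way to see this point, sketched in plain numpy (the labels and the 30%-correct classifier below are simulated for illustration only, not taken from the original data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, balanced binary labels for 100 trials.
y_true = np.repeat([0, 1], 50)

# Simulate a classifier that is *consistently* wrong:
# correct on only 30 of 100 trials.
wrong = rng.choice(100, size=70, replace=False)
y_pred = y_true.copy()
y_pred[wrong] = 1 - y_pred[wrong]

acc = np.mean(y_pred == y_true)                 # 0.30, below chance
flipped_acc = np.mean((1 - y_pred) == y_true)   # 0.70, above chance

print(acc, flipped_acc)
```

Flipping every prediction turns the 30% classifier into a 70% one, which is why systematically below-chance accuracy usually points at mislabeled runs or an evaluation bug rather than an uninformative ROI.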
On Mon, Nov 26, 2018 at 1:27 PM Raúl Hernández <raul at lafuentelab.org> wrote:
> No, it is balanced. It has the same number of observations for each class.
> On Mon, Nov 26, 2018 at 12:52 PM Kaustubh Patil <kaustubh.patil at gmail.com>
>> Just for clarification.
>> Is the data imbalanced, i.e., are there many more observations from one class?
>> On Mon, Nov 26, 2018 at 12:50 PM Raúl Hernández <raul at lafuentelab.org>
>>> Dear PyMVPA community,
>>> I'm doing classification within ROIs: a simple two-way classification
>>> using a linear SVM, with leave-one-run-out cross-validation over 4
>>> acquisitions. In some ROIs I get good accuracy for the number of
>>> participants (60%), but in others I get consistently bad accuracy (30%).
>>> To test whether the performance is above chance, I use a one-sample
>>> t-test (I know it is not the best test for this type of data; I just use
>>> it for a quick overview). When I test the bad accuracies, those are also
>>> significant. What does a consistently bad accuracy mean?
>>> Pkg-ExpPsy-PyMVPA mailing list
>>> Pkg-ExpPsy-PyMVPA at alioth-lists.debian.net