[pymvpa] Question about classifiers
Serin Atiani, Dr
serin.atiani at mail.mcgill.ca
Sun Mar 29 07:50:50 UTC 2015
Hello,
I am doing a first-pass analysis of my data using PyMVPA. When I use an SVM classifier, which I think theoretically makes more sense for my data, I get a strange cross-validation confusion matrix: one row has high numbers, and the rest is mostly zeros or ones. I train the classifier on 17 different classes, and this is an example of the cross-validation confusion matrix I get:
[[ 2 0 0 0 0 0 0 0 1 0 1 0 1 0 0 0 0]
[16 17 17 16 16 16 17 17 17 17 17 15 16 16 16 16 16]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0]
[ 0 0 0 2 0 0 1 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 2 0 1 0 1 0 0 0 1 0 1 0 1 1 0 0]
[ 0 0 0 0 2 1 2 0 0 0 0 0 0 0 0 0 1]
[ 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0]
[ 0 0 0 1 0 0 0 0 2 0 0 1 0 0 1 0 1]
[ 0 1 1 0 1 0 0 1 0 2 0 0 0 0 0 0 0]
[ 1 0 1 0 0 0 0 0 0 0 1 0 1 0 0 1 0]
[ 0 0 0 0 0 0 0 0 0 0 1 2 0 0 0 2 0]
[ 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0]
[ 0 0 0 0 0 1 0 0 0 0 0 1 1 3 0 0 0]
[ 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 1]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0]
[ 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]]
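(For anyone trying to interpret the matrix above: the single dominant row is the signature of a classifier that predicts one class for nearly every sample, which puts overall accuracy near chance, 1/17 for balanced classes. A toy numpy sketch, not the original data, assuming rows are predictions and columns are targets:)

```python
import numpy as np

# Toy illustration: a degenerate classifier that always predicts class 0
# yields a confusion matrix with one dominant row and chance-level accuracy.
n_classes = 3
targets = np.repeat(np.arange(n_classes), 10)   # 10 samples per class
predictions = np.zeros_like(targets)            # degenerate: always class 0

cm = np.zeros((n_classes, n_classes), dtype=int)
for p, t in zip(predictions, targets):
    cm[p, t] += 1                               # rows: predictions, cols: targets

accuracy = cm.trace() / float(cm.sum())
print(cm)        # one dominant row, the rest zeros
print(accuracy)  # 1/3, i.e. chance for 3 balanced classes
```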
Reducing the number of features makes things a bit better, but I still get one row with large numbers. I also tried grouping my classes and training the SVM classifier on the two most distinguishable ones: nearest neighbour gives 80% accuracy, while the SVM is only slightly above chance, with a confusion matrix that again looks like this:
[[ 5  0]
 [15 20]]
It doesn't look right. Does anybody have any thoughts about this?
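(A common culprit when an SVM collapses onto a single class like this is unscaled features: a few large-magnitude dimensions can dominate the margin. The usual first check is per-feature z-scoring, which PyMVPA provides via its `zscore()` helper for datasets; here is a minimal plain-numpy sketch of the same operation, with hypothetical data:)

```python
import numpy as np

# Hypothetical data: 40 samples, 5 features on wildly different scales.
rng = np.random.RandomState(0)
X = rng.randn(40, 5) * np.array([1.0, 10.0, 100.0, 0.1, 1000.0])

# Per-feature z-scoring: zero mean, unit variance in every column.
X_z = (X - X.mean(axis=0)) / X.std(axis=0)

print(np.allclose(X_z.mean(axis=0), 0.0))  # True
print(np.allclose(X_z.std(axis=0), 1.0))   # True
```

In PyMVPA this would typically be done on the dataset before classification (and per run/chunk for fMRI data), but the arithmetic is the same.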
Serin
More information about the Pkg-ExpPsy-PyMVPA mailing list