[pymvpa] Number of data points per condition: what are your guidelines?

Vadim Axel axel.vadim at gmail.com
Tue May 1 16:27:43 UTC 2012


Hi experts,

I am talking about basic pattern classification (e.g. no feature selection,
etc.) with an SVM classifier (which has built-in regularization).

1. A small number of data points with a large dimensionality (ROI size) can cause
overfitting, i.e. high prediction accuracy on the training set but poor accuracy
on the test set. Now suppose I have above-chance classification on the test set,
validated with a within-subject permutation test and an across-subjects t-test
against chance. Can my results still be unreliable? If so, how can I test for
that? (See the first sketch below.)

2. Practically speaking, are 10 independent data points (averaged block values or
beta values) with an ROI of 100 voxels safe enough? (See the second sketch below.)

3. Do you know of any imaging papers that tested / discussed this issue?
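
To make question 1 concrete, below is a rough sketch of the kind of within-subject
permutation test I mean. It is written with scikit-learn rather than PyMVPA's own
API, purely for illustration, and the data are random placeholders with the
dimensions from question 2:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, permutation_test_score

# Placeholder single-subject data: 20 block-averaged patterns
# (10 per condition) over a 100-voxel ROI; in reality X would be
# the extracted fMRI patterns and y the condition labels.
rng = np.random.RandomState(0)
X = rng.randn(20, 100)
y = np.repeat([0, 1], 10)

# Cross-validated accuracy plus a null distribution from relabelled data.
score, perm_scores, pvalue = permutation_test_score(
    SVC(kernel='linear'), X, y,
    cv=StratifiedKFold(n_splits=5),
    n_permutations=1000, random_state=0)

print("observed accuracy: %.2f, permutation p-value: %.3f" % (score, pvalue))

For question 2, one empirical sanity check is to run the same classifier on pure
noise of the same shape many times and look at the spread of cross-validated
accuracies: if noise alone regularly gives accuracies well above 50%, a single
above-chance result with that design is hard to trust. A rough sketch, assuming
for concreteness 10 points per condition and a 100-voxel ROI (n_per_class,
n_voxels and n_sims are placeholders to adjust):

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.RandomState(0)
n_per_class, n_voxels, n_sims = 10, 100, 500  # assumed design; change to match yours

null_accs = []
for _ in range(n_sims):
    X = rng.randn(2 * n_per_class, n_voxels)  # pure noise, no signal at all
    y = np.repeat([0, 1], n_per_class)
    acc = cross_val_score(SVC(kernel='linear'), X, y, cv=LeaveOneOut()).mean()
    null_accs.append(acc)

null_accs = np.array(null_accs)
print("mean accuracy on pure noise: %.3f" % null_accs.mean())
print("95th percentile of null accuracies: %.3f" % np.percentile(null_accs, 95))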

Thanks for any ideas,
Vadim