[pymvpa] Alternatives to Cross-validation
jitzhacki at gmail.com
Wed Oct 24 13:38:38 UTC 2012
Thank you for your prompt response.
"not sure why cross-validation doesn't fit your needs here. Could you
elaborate a bit more on what you are trying to achieve."
To put it bluntly: I am trying to script a way for the algorithm to
learn a comparison pattern from one set of training stimuli, then set
that stimulus set aside and compare the learned pattern against a
different set of test stimuli. The test stimuli will be of a different
kind, but should produce similar activation patterns in the ROIs under
scrutiny.
"Do you mean that you want to train with all examples from one dataset,
then test with all examples from a different dataset?"
This is exactly it.
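Since the goal is simply train-on-one-set, test-on-the-other, this can also be done without a cross-validation wrapper at all, by calling the classifier's train and predict steps directly on the two datasets. A minimal sketch, using scikit-learn in place of PyMVPA's classifiers; the arrays, shapes, and labels below are hypothetical stand-ins for the real ROI datasets:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.RandomState(0)

# Hypothetical stand-ins for the two stimulus sets (samples x voxels).
# In PyMVPA these would be two separate Dataset objects for the ROI.
X_train = rng.randn(40, 100)
y_train = np.repeat([0, 1], 20)      # condition labels for the training stimuli
X_test = rng.randn(20, 100)
y_test = np.repeat([0, 1], 10)       # labels for the different test stimuli

clf = LinearSVC()
clf.fit(X_train, y_train)            # learn the pattern from set A only
pred = clf.predict(X_test)           # apply it, unchanged, to set B
acc = accuracy_score(y_test, pred)
print(acc)
```

The key point is that the classifier never sees the test set during training, which is the "disregard this set of stimuli" step described above.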
"You could always combine the two datasets as the two cross-validation
folds and get per-fold results, no?"
This is an option we have considered (and will take up if we find no
better solution): merging all the stimuli into a single dataset so that
cross-validation becomes viable. However, we believe the predictions in
the resulting confusion tables would not be as clean as if the
comparison were performed as described above.
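For reference, the combine-as-two-folds route can be expressed with a predefined split, in which case one of the per-fold scores corresponds exactly to training on the first dataset and testing on the second. A scikit-learn sketch, again with hypothetical data in place of the real ROI datasets:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import PredefinedSplit, cross_val_score

rng = np.random.RandomState(1)

# Hypothetical stand-ins: dataset A (training stimuli), dataset B (test stimuli)
X_a, y_a = rng.randn(40, 100), np.repeat([0, 1], 20)
X_b, y_b = rng.randn(20, 100), np.repeat([0, 1], 10)

# Stack them and record which original dataset each sample came from
X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
fold = np.concatenate([np.zeros(len(y_a), dtype=int),
                       np.ones(len(y_b), dtype=int)])

cv = PredefinedSplit(fold)           # exactly two folds: dataset A, dataset B
scores = cross_val_score(LinearSVC(), X, y, cv=cv)
# scores[0]: trained on B, tested on A; scores[1]: trained on A, tested on B
print(scores)
```

In PyMVPA the analogous construction would tag each sample with a chunk attribute marking its dataset of origin and let the partitioner split on it; the per-fold results can then be reported separately rather than averaged.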
Thanks again for any help you can offer us.
On Wed, Oct 24, 2012 at 3:16 PM, Francisco Pereira <
francisco.pereira at gmail.com> wrote:
> You could always combine the two datasets as the two cross-validation
> folds and get per-fold results, no?