[pymvpa] significance

Scott Gorlin gorlins at MIT.EDU
Fri May 8 05:04:10 UTC 2009


>
> That's right, they are just 8 separate scans, each a repetition of the 
> same thing.  Regarding independence, what I was thinking is that the 
> results of one cross-validation step are not entirely independent of 
> another, since the training occurs on 6/7ths of the same data points.
>
The idea is that the errors on this fold will be independent of the 
errors on every other fold (i.e., the test sets are independent of each 
other, even if the training sets are not).
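A minimal sketch of that point, using plain NumPy rather than PyMVPA's splitter API: with 8 scans and leave-one-scan-out folding, the test sets are pairwise disjoint even though any two training sets share most of their scans.

```python
import numpy as np

# Hypothetical illustration: 8 "scans", leave-one-scan-out folds.
n_scans = 8
scans = np.arange(n_scans)

folds = []
for test_scan in scans:
    train = scans[scans != test_scan]       # the other 7 scans
    folds.append((train, np.array([test_scan])))

# Test sets never overlap -- each scan is tested exactly once ...
test_sets = [set(test) for _, test in folds]
for i in range(len(test_sets)):
    for j in range(i + 1, len(test_sets)):
        assert not (test_sets[i] & test_sets[j])

# ... but any two training sets share n_scans - 2 scans.
overlap = set(folds[0][0]) & set(folds[1][0])
print(len(overlap))  # 6
```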

In the limit of infinite data the decision boundaries will be the same, 
but the CV accuracy will still accurately reflect the empirical loss on 
new data, assuming it is drawn i.i.d. from the same distribution as the 
training set.  That is, all that happens is that the empirical loss 
approaches the true loss of the decision function.
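To make that concrete, here is a small simulation (not PyMVPA code; the nearest-class-mean classifier and the Gaussian data are stand-ins for any fixed learning rule): the cross-validated accuracy on a modest sample tracks the accuracy of the trained rule on a large fresh i.i.d. test set.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    # Two 1-D Gaussian classes with means -1 and +1, unit variance.
    X = np.concatenate([rng.normal(-1, 1, n), rng.normal(1, 1, n)])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def fit(X, y):
    # Nearest-class-mean decision rule.
    m0, m1 = X[y == 0].mean(), X[y == 1].mean()
    return lambda x: (np.abs(x - m1) < np.abs(x - m0)).astype(float)

# 8-fold CV on 400 points (folds interleaved so each has both classes) ...
X, y = sample(200)
folds = np.arange(len(X)) % 8
cv_acc = np.mean([
    (fit(X[folds != k], y[folds != k])(X[folds == k]) == y[folds == k]).mean()
    for k in range(8)
])

# ... estimates the true loss of the rule on fresh i.i.d. data.
Xnew, ynew = sample(100000)
true_acc = (fit(X, y)(Xnew) == ynew).mean()

print(round(cv_acc, 2), round(true_acc, 2))  # both close to ~0.84
```

With more training data both numbers converge on the Bayes accuracy of this toy problem; the CV estimate is noisier but unbiased for the rule's generalization performance.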



More information about the Pkg-ExpPsy-PyMVPA mailing list