[pymvpa] Papers discussing relationship between scanning parameters and MVPA performance?
rawi707 at yahoo.com
Wed Mar 27 13:45:19 UTC 2013
I am not sure if this helps, but see:
'Ten ironic rules for non-statistical reviewers' by Karl Friston
> From: Gilles de Hollander <gilles.de.hollander at gmail.com>
>To: pkg-exppsy-pymvpa <pkg-exppsy-pymvpa at lists.alioth.debian.org>
>Sent: Wednesday, March 27, 2013 1:35 PM
>Subject: [pymvpa] Papers discussing relationship between scanning parameters and MVPA performance?
>I have been reading quite a bit about MVPA over the past year, but have found few papers that focus on the very practical side of MVPA. More specifically, I'm looking for any literature that discusses the influence of different scanning parameters, number of trials, number of participants, and signal-to-noise ratio on experimental power / classifier performance. Does anyone know of such a paper?
>I have a paper in review right now, and my reviewer says we should have looked into this. That sounds nice, but also a bit naive: not that many such studies have been done, and researchers rarely publish results that don't show significant effects. Also, I suspect the influence of all these parameters covaries heavily with the task and ROIs at hand. My hunch is that it is not really possible to say you need n subjects with m trials, depending on the SNR, by some factor x. Or am I missing something here? I'd be glad to hear your opinions on this.
>Gilles de Hollander
>PhD candidate at the Cognitive Science Center Amsterdam
>Pkg-ExpPsy-PyMVPA mailing list
>Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
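The hunch in the quoted message can be illustrated with a quick toy simulation. This is not PyMVPA code and none of it comes from the thread: the nearest-centroid classifier, the Gaussian signal model, and every parameter value are illustrative assumptions. It sketches how classifier accuracy might depend jointly on SNR and trial count, making clear why no single (n subjects, m trials) prescription generalizes across tasks:

```python
# Toy simulation (illustrative only, not PyMVPA): classifier accuracy
# as a joint function of per-voxel SNR and number of trials per class.
import numpy as np

rng = np.random.default_rng(0)

def simulated_accuracy(n_trials, snr, n_voxels=50, n_reps=100):
    """Mean test accuracy of a nearest-centroid classifier on two
    Gaussian classes whose per-voxel mean separation is `snr`
    (noise standard deviation fixed at 1)."""
    accs = []
    for _ in range(n_reps):
        # Class 0 centered at 0, class 1 offset by `snr` in every voxel.
        X0 = rng.standard_normal((n_trials, n_voxels))
        X1 = snr + rng.standard_normal((n_trials, n_voxels))
        half = n_trials // 2
        # Train centroids on the first half of each class's trials.
        c0, c1 = X0[:half].mean(0), X1[:half].mean(0)
        # Test on the remaining trials.
        test = np.vstack([X0[half:], X1[half:]])
        labels = np.r_[np.zeros(n_trials - half), np.ones(n_trials - half)]
        d0 = ((test - c0) ** 2).sum(1)
        d1 = ((test - c1) ** 2).sum(1)
        accs.append(np.mean((d1 < d0) == labels))
    return float(np.mean(accs))

for snr in (0.1, 0.3, 0.5):
    for n in (20, 40, 80):
        print(f"SNR={snr:.1f}  trials/class={n}:  acc={simulated_accuracy(n, snr):.2f}")
```

Even in this idealized setting, accuracy depends on SNR, trial count, and dimensionality together, and any of these trade off against the others; in real data the mapping additionally shifts with task, ROI, and noise structure, so a fixed rule of thumb is unlikely to transfer between studies.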