[pymvpa] Papers discussing relationship between scanning parameters and MVPA performance?

MS Al-Rawi rawi707 at yahoo.com
Wed Mar 27 13:45:19 UTC 2013


I am not sure if this could help a bit,

http://dx.doi.org/10.1016/j.neuroimage.2012.04.018, 

'Ten ironic rules for non-statistical reviewers' by Karl Friston

Cheers,
-Rawi


>________________________________
> From: Gilles de Hollander <gilles.de.hollander at gmail.com>
>To: pkg-exppsy-pymvpa <pkg-exppsy-pymvpa at lists.alioth.debian.org> 
>Sent: Wednesday, March 27, 2013 1:35 PM
>Subject: [pymvpa] Papers discussing relationship between scanning parameters and MVPA performance?
> 
>
>Hi there,
>
>
>I have been reading quite a bit about MVPA over the past year, but have found few papers that focus on the very practical side of MVPA. More specifically, I'm looking for any literature that discusses the influence of different scanning parameters, number of trials, number of participants, and signal-to-noise ratio on experimental power/classifier performance. Does anyone know of such a paper?
>
>
>I have a paper in review right now, and a reviewer says that we should have looked into this. I think this sounds nice, but a bit naive: not that many studies have been done, and researchers don't publish results that fail to show significant effects. Also, I would guess that the influence of all these parameters covaries heavily with the task and ROIs at hand. My hunch is that it is simply not possible to say 'you need n subjects with m trials, scaled by an SNR factor x' (a toy simulation of this point is sketched after the quoted message). Or am I missing something here? I would be glad to hear your opinions on this.
>
>
>Thanks,
>Gilles de Hollander
>PhD candidate at the Cognitive Science Center Amsterdam
>_______________________________________________
>Pkg-ExpPsy-PyMVPA mailing list
>Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
>http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa
>
>
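Regarding whether a general sample-size rule can be stated: below is a minimal, illustrative sketch (not from the original thread) of how the joint dependence of classifier accuracy on trial count and SNR could be probed with a toy simulation. It uses scikit-learn on synthetic two-class data purely to keep the example self-contained; the same subsampling loop would wrap PyMVPA's CrossValidation measure instead. All variable names and parameter values here are assumptions for illustration only.

# Toy simulation: cross-validated linear SVM accuracy as a function of
# trials per class and SNR on synthetic two-class "voxel" data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
n_voxels = 100  # arbitrary feature count for illustration

def simulated_accuracy(n_trials_per_class, snr):
    """Mean 5-fold CV accuracy for synthetic data with a given SNR."""
    # class-specific signal pattern scaled by SNR, plus unit-variance noise
    signal = rng.randn(n_voxels)
    X, y = [], []
    for label in (0, 1):
        pattern = signal * snr * (1 if label else -1)
        X.append(pattern + rng.randn(n_trials_per_class, n_voxels))
        y.extend([label] * n_trials_per_class)
    X = np.vstack(X)
    return cross_val_score(LinearSVC(), X, np.array(y), cv=5).mean()

for n_trials in (20, 40, 80):
    for snr in (0.05, 0.10, 0.20):
        acc = simulated_accuracy(n_trials, snr)
        print("trials/class=%3d  snr=%.2f  accuracy=%.2f" % (n_trials, snr, acc))

On such synthetic data, accuracy rises with both trial count and SNR, and the trade-off between them depends on the (here arbitrary) signal pattern and noise model; that is roughly the point made in the quoted message, namely that any concrete rule of thumb is tied to the task, ROI, and data at hand.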


