[pymvpa] classification based on individual parameter estimates from FSL
David Soto
d.soto.b at gmail.com
Mon Jun 30 23:25:40 UTC 2014
Hi Michael, indeed... well done for Germany today! :)
Thanks for the reply and the suggestion on kNN.
I should have been clearer that for each subject I have the following block
sequences:
ababbaabbaabbaba in TASK 1
ababbaabbaabbaba in TASK 2
This explains why I have 8 a-betas and 8 b-betas for each task AND for each
subject. So if I concatenate and normalise all the beta data across subjects,
I will have 8 x 19 (subjects) = 152 beta images for class a, and the same for
class b.
Could I then use an SVM searchlight trained to discriminate a from b on the
task 1 betas and tested on the task 2 betas?
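To make this concrete, here is a rough PyMVPA sketch of what I have in mind
(one beta image per block; the file names, the mask name and the numeric
'task' attribute are placeholders I made up, not my actual FSL outputs):

import numpy as np
from mvpa2.suite import *

# build parallel lists of beta images and their sample attributes
betas, targets, tasks, subjects = [], [], [], []
for subj in range(1, 20):                        # 19 subjects
    for task in (1, 2):
        for cond in ('a', 'b'):
            for block in range(1, 9):            # 8 blocks per condition
                betas.append('sub%02d_task%d_%s_block%d_beta.nii.gz'
                             % (subj, task, cond, block))
                targets.append(cond)
                tasks.append(task)
                subjects.append(subj)

# fmri_dataset stacks the list of MNI-space volumes into one dataset
ds = fmri_dataset(betas, targets=targets, chunks=subjects,
                  mask='MNI152_brainmask.nii.gz')
ds.sa['task'] = tasks
zscore(ds, chunks_attr='chunks')                 # normalise per subject

# train on the task-1 betas, test on the task-2 betas
clf = LinearCSVMC()
clf.train(ds[ds.sa.task == 1])
pred = clf.predict(ds[ds.sa.task == 2].samples)
print(np.mean(pred == ds[ds.sa.task == 2].sa.targets))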
cheers
ds
Hey,
Sorry for the delay... Aren't you watching the World Cup? ;-)
On Thu, Jun 26, 2014 at 02:11:00PM +0100, David Soto wrote:
> The design is simple, basically I have 2 tasks, S and I, and each task has 2
> conditions: a and b.
>
> Each task occurs on a separate fMRI run and the conditions a & b are
> blocked such as 'ababbaabbaabbaba' (each block is 4 trials).
>
> Data has been preprocessed in FSL (as part of univariate-based analyses),
> including a 5 mm smoothing. I have derived parameter estimates for each
> task condition a & b... so I have 8 betas per subject per condition.
I don't fully understand how two conditions times two tasks make 8
betas...
> Basically I would like to train an SVM classifier to discriminate
> conditions a & b in task S and then test it on the independent dataset
> from the different task I.
>
> For this I thought to normalise to MNI and concatenate all the parameter
> estimates for a & b for task S across all subjects and, in principle, use
> whole-brain classification, with the intention of trying searchlight
> analyses later on...
>
> Does this make sense? Or would it be better to do it differently? Any
> advice or pointers would be much appreciated!
The general approach is sane. However, I don't know whether an SVM can be
trained properly with just 8 training samples. Doing it in a searchlight
brings the number of features closer to the number of samples. You could
also consider a simple k-nearest-neighbour approach (prediction determined
by the closest training sample in terms of Euclidean or correlation
distance). However, the latter is not really applicable in the full-brain
case, as the distance measure will be dominated/contaminated by thousands
of noise voxels...
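A rough sketch of the kNN-in-a-searchlight variant (assuming a dataset ds
stacked across subjects with sample attributes 'targets' (a/b) and 'task'
(1/2), along the lines sketched above; the radius and distance function are
just examples):

import numpy as np
from mvpa2.suite import *

# 1-nearest-neighbour with correlation distance and majority voting
clf = kNN(k=1, dfx=one_minus_correlation, voting='majority')

# use the two tasks as the two cross-validation chunks:
# train on one task, test on the other (both directions)
ds.sa['chunks'] = ds.sa.task
cv = CrossValidation(clf, NFoldPartitioner(),
                     errorfx=lambda p, t: np.mean(p == t))  # report accuracy

# searchlight with a 3-voxel radius, averaging accuracy across the two folds
sl = sphere_searchlight(cv, radius=3, postproc=mean_sample())
res = sl(ds)   # one accuracy value per searchlight center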
HTH,
Michael
--
J.-Prof. Dr. Michael Hanke
Psychoinformatik Labor, Institut für Psychologie II
Otto-von-Guericke-Universität Magdeburg, Universitätsplatz 2, Geb.24
Tel.: +49(0)391-67-18481 Fax: +49(0)391-67-11947 GPG: 4096R/7FFB9E9B