[pymvpa] SVD did not converge
andrea bertana
andrea.bertana1 at gmail.com
Wed Apr 2 15:44:55 UTC 2014
Dear PyMVPA experts,
I'm trying to use the Hyperalignment() procedure to align different subjects'
brains. I am mainly following the example described on this page:
http://dev.pymvpa.org/examples/hyperalignment.html
However, when I try to compute the common space on the training set (10
participants; each participant's matrix is 301 time-points x 1000 voxels), I
get the following error:
LinAlgError: 'SVD did not converge'
To the best of my knowledge, singular value decomposition can fail for
several reasons; here is what I checked:
- The matrices contain no NaNs.
- The standard deviation of each voxel over time is > 0, i.e. there are no
zero or constant voxels.
- By aligning each pair of participants independently, I could not identify
any specifically 'compromised' participant, as no clear pattern emerged
(depending on the pair, the SVD did or did not converge).
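For reference, here is a minimal sketch (plain NumPy, outside of PyMVPA; `check_dataset` is a hypothetical helper, not part of my pipeline) of the sanity checks listed above, plus the condition number, which can hint at numerical trouble even when all values are finite:

```python
import numpy as np

def check_dataset(samples):
    """Basic sanity checks on one subject's samples array
    (time-points x voxels) before hyperalignment."""
    report = {}
    # 1. no NaNs or infs anywhere in the matrix
    report['finite'] = bool(np.isfinite(samples).all())
    # 2. every voxel varies over time (no zero or constant columns)
    report['nonconstant'] = bool((samples.std(axis=0) > 0).all())
    # 3. condition number of the matrix (only meaningful if finite)
    report['condition'] = (float(np.linalg.cond(samples))
                           if report['finite'] else float('inf'))
    return report

# Example on synthetic data shaped like one subject (301 x 1000)
rng = np.random.RandomState(0)
good = rng.randn(301, 1000)
print(check_dataset(good))
```

In my case all subjects pass the first two checks, which is why the non-convergence is puzzling.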
Further information:
- The data come from the Human Connectome Project (HCP) working memory task.
- We also tried another HCP task (motor) with the same subjects, but we get
the same error.
- The ds_all container with the participants' data (samples and features):
http://nilab.cimec.unitn.it/people/vittorioiacovella/ds_all.pickle
Please find below the code I am using for this task.
import numpy as np
from mvpa2.suite import *
import pickle as pk
import sys

# Load ds_all (list of per-subject datasets)
path = sys.argv[1]
ds_all = pk.load(open(path, 'rb'))
nsubjs = len(ds_all)

### DEFINE THE CLASSIFIER
## use the same linear support vector machine
clf = LinearCSVMC()

# STARTING THE HYPERALIGNMENT
verbose(2, "between-subject (hyperaligned)...", cr=False, lf=False)
hyper_start_time = time.time()
bsc_hyper_results = []
# cross-validation over subjects
cv = CrossValidation(clf, NFoldPartitioner(attr='subject'),
                     errorfx=mean_match_accuracy)

# HYPERALIGNMENT
# - Leave-one-run-out for hyperalignment training
nruns = [1, 2]
for test_run in nruns:
    # Split into training and testing sets (leave one run out):
    ds_train = [sd[sd.sa.chunks != test_run, :] for sd in ds_all]
    ds_test = [sd[sd.sa.chunks == test_run, :] for sd in ds_all]
    # Define the hyperalignment mapper and compute its parameters.
    hyper = Hyperalignment()
    hypmaps = hyper(ds_train)
    # Apply the hyperalignment parameters to the test set (the run left out).
    ds_hyper = [hypmaps[i].forward(sd) for i, sd in enumerate(ds_test)]
    # zscore each subject individually after transformation for optimal
    # performance
    for sd in ds_hyper:
        zscore(sd, chunks_attr=None)
    # Encode by simply averaging each block after all subjects' data are in
    # the common space.
    averager = mean_group_sample(['targets', 'block', 'chunks'])
    ds_hyper_encoded = [sd.get_mapped(averager) for sd in ds_hyper]
    ds_hyper = vstack(ds_hyper_encoded)
    # Store classification results for the different partitions.
    res_cv = cv(ds_hyper)
    # Append results to a list (one element per run).
    bsc_hyper_results.append(res_cv)

# Final results
bsc_hyper_results = hstack(bsc_hyper_results)
# Show final results.
verbose(2, "done in %.1f seconds" % (time.time() - hyper_start_time))
verbose(2, "between-subject (hyperaligned): %.2f +/-%.3f"
        % (np.mean(bsc_hyper_results),
           np.std(np.mean(bsc_hyper_results, axis=1)) / np.sqrt(nsubjs - 1)))
Thank you.
Best,
More information about the Pkg-ExpPsy-PyMVPA mailing list