[pymvpa] Beta images and attributes

Jeffrey Bloch jeffreybloch1 at gmail.com
Thu Apr 16 15:27:30 UTC 2015


Hello!  I was hoping to get a little push in the right direction.  Thanks
in advance for all of your help!

I'm now starting a basic analysis where, instead of looking at an entire
time series, I will be using beta images for each condition (per run).
There are 6 runs, so for each condition (e.g., "monkey") there is one beta
image per run, and I just want to begin with an odd/even comparison
(runs 0, 2, 4 vs. runs 1, 3, 5):

beta_0001.nii
beta_0002.nii
beta_0003.nii
beta_0004.nii
beta_0005.nii
beta_0006.nii

When building this dataset, should I just concatenate these beta images
(using vstack)?  I ask because a quick sanity check has me concerned: I'm
getting perfect classification accuracy (1.0) and zero error on the
cross-validation (0.0000).  Here is the shape of my dataset after
vstacking:

>>> detrended_orig_mds.shape
(6, 510340)
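
For reference, here is roughly how I am assembling the stacked dataset.
The file names and the 'animal' target below are placeholders for my
actual values; each beta is loaded as its own one-sample dataset and then
vstacked:

>>> from mvpa2.suite import fmri_dataset, vstack
>>> beta_fnames = ['beta_%04d.nii' % (i + 1) for i in range(6)]
>>> run_dsets = []
>>> for run, fname in enumerate(beta_fnames):
...     # each 3D beta image becomes a single sample (row); targets and
...     # chunks are attached as per-sample attributes of that row
...     ds = fmri_dataset(samples=fname, targets='animal', chunks=run)
...     run_dsets.append(ds)
>>> mds = vstack(run_dsets)
>>> mds.shape
(6, 510340)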

I assume this is the 6 beta images as rows, with voxels as columns in this
matrix.  I've set up an attributes file for each beta image that has
targets, chunks, etc. (a 1x4 array for each beta).  My question: is the
classifier (cross-validated over 'chunks') actually getting access to the
individual voxel data within the beta images in this scenario?  Or is it
classifying simply on the target category (which is the same for every
beta)?  It seems that maybe I should be attaching my attributes to each
voxel, but I'm not quite sure how to do this (each voxel in a beta would
be assigned to a particular chunk).  A small sketch of what I mean is
below, followed by the relevant code and output.
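
Assuming the stacked dataset from above, this is how I currently
understand the attributes to line up (the values here are placeholders for
what my attributes file contains):

>>> # per-sample attributes: one value per row, i.e. per beta image
>>> mds.sa['targets'] = ['animal'] * 6
>>> mds.sa['chunks'] = [0, 1, 2, 3, 4, 5]
>>> # the voxels are the features (columns); as far as I can tell they only
>>> # carry feature attributes such as voxel_indices, not targets/chunks of
>>> # their own -- and a per-voxel chunk is what I don't know how to express
>>> print mds.fa.voxel_indices.shape
(510340, 3)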


>>> print mds.summary()
Dataset: 6x510340@float64, <sa: chunks,targets,time_coords,time_indices>, <fa: voxel_indices>, <a: imghdr,imgtype,mapper,voxel_dim,voxel_eldim>
stats: mean=-9.18523e-19 std=1.57984e-16 var=2.49591e-32 min=-1.11022e-15 max=1.11022e-15

Counts of targets in each chunk:
  chunks\targets animal
                   ---
        0           1
        1           1
        2           1
        3           1
        4           1
        5           1

Summary for targets across chunks
  targets mean std min max #chunks
  animal    1   0   1   1     6

Summary for chunks across targets
  chunks mean std min max #targets
    0      1   0   1   1      1
    1      1   0   1   1      1
    2      1   0   1   1      1
    3      1   0   1   1      1
    4      1   0   1   1      1
    5      1   0   1   1      1
Sequence statistics for 6 entries from set ['animal']
Counter-balance table for orders up to 2:
Targets/Order O1  |  O2  |
   animal:     5  |   4  |
Correlations: min=nan max=nan mean=nan sum(abs)=nan


The detrending, z-scoring, and cross-validation:

>>> detrender = PolyDetrendMapper(polyord=1, chunks_attr='chunks')
>>> detrended_orig_mds = mds.get_mapped(detrender)
>>> zscore(detrended_orig_mds, chunks_attr=None)
>>> clf = kNN(k=1, dfx=one_minus_correlation, voting='majority')
>>> cv = CrossValidation(clf, NFoldPartitioner(attr='chunks'))
>>> cv_glm = cv(detrended_orig_mds)
>>> print '%.2f' % np.mean(cv_glm)
0.00
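
To convince myself about what each fold is actually trained and tested on,
I also tried dumping the partitions that the partitioner generates (this
is just my own sanity check, so I may well be misusing it):

>>> partitioner = NFoldPartitioner(attr='chunks')
>>> for fold in partitioner.generate(detrended_orig_mds):
...     # partition value 1 should mark the training samples of the fold
...     # and 2 the testing samples
...     print fold.sa.partitions, fold.sa.targets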


Next, a manual even/odd split, using an 'int' sample attribute that I
assign further below:

>>> print detrended_orig_mds.sa.int
['even' 'odd' 'even' 'odd' 'even' 'odd']
>>> detrended_orig_mds_split1 = detrended_orig_mds[detrended_orig_mds.sa.int == 'even']
>>> len(detrended_orig_mds_split1)
3
>>> detrended_orig_mds_split2 = detrended_orig_mds[detrended_orig_mds.sa.int == 'odd']
>>> len(detrended_orig_mds_split2)
3
>>> clf.train(detrended_orig_mds_split1)
>>> predictions = clf.predict(detrended_orig_mds_split2.samples)
>>> clf.set_postproc(BinaryFxNode(mean_mismatch_error, 'targets'))
>>> clf.train(detrended_orig_mds_split2)
>>> err = clf(detrended_orig_mds_split1)
>>> print np.asscalar(err)
0.0


Here is another variant I tried, using SMLR with an OddEvenPartitioner:

>>> mds = fmri_dataset(samples=bold_fname, targets=attr.cat, chunks=attr.chunk)
>>> poly_detrend(mds, polyord=1, chunks_attr='chunks')
>>> mds = mds[np.array([l in ['animal'] for l in mds.sa.targets], dtype='bool')]
>>> cv = CrossValidation(SMLR(), OddEvenPartitioner(), errorfx=mean_mismatch_error)
>>> error = cv(mds)
mvpa2/clfs/smlr.py:375: RuntimeWarning: divide by zero encountered in divide
  lambda_over_2_auto_corr = (self.params.lm/2.)/auto_corr
mvpa2/clfs/smlr.py:217: RuntimeWarning: invalid value encountered in double_scalars
  w_new = w_old + grad/auto_corr[basis]
WARNING: SMLR: detected ties in categories ['animal'].  Small amount of noise will be injected into result estimates upon prediction to break the ties
>>> print "Error for %i-fold cross-validation on %i-class problem: %f" % (len(mds.UC), len(mds.UT), np.mean(error))
Error for 6-fold cross-validation on 1-class problem: 0.000000
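
One thing I do notice in that last output is that there is only a single
unique target in the dataset, which I suspect is related to the perfect
score (UT and UC being the unique targets and chunks, if I am reading the
docs correctly):

>>> print mds.UT
['animal']
>>> print len(mds.UC)
6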



For completeness, this is where I assign the 'even'/'odd' attribute and
then train and test the kNN on the same dataset:

>>> detrended_orig_mds.sa['int'] = ['even', 'odd', 'even', 'odd', 'even', 'odd']
>>> clf = kNN(k=1, dfx=one_minus_correlation, voting='majority')
>>> clf.train(detrended_orig_mds)
>>> predictions = clf.predict(detrended_orig_mds.samples)
>>> np.mean(predictions == detrended_orig_mds.sa.targets)
1.0