[pymvpa] Across-subjects surface query engine?
Beau R. Sievers
Beau.R.Sievers.GR at dartmouth.edu
Sat May 27 20:15:19 UTC 2017
Following up:
I worked out a possible way to perform my noise ceiling analysis using a surface-based searchlight: I am creating a separate query engine for each subject, then iterating over the list of subject-specific query engines and datasets to build a group dataset (code below).
However, I am still having issues. The noise ceiling is much lower with the surface searchlight: a maximum of ~0.10 when using a surface-based searchlight, vs. ~0.45 when using a volumetric searchlight (the same measure is used in both cases).
Can anybody think of a reason why across-subjects surface-based searchlight analyses would return such dramatically different results?
I wonder if this is because the voxel indices returned by the query engines are not anatomically comparable across subjects?
I am also a bit worried that the query engines return only _approximately_ the number of voxels requested (e.g., asking for 30 voxels might yield anywhere from 29–31, and the count may differ across subjects for a single node ID). My current approach is to truncate each query result so they all have the same number of voxels, but I wonder if this could somehow mess up the analysis.
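For what it's worth, the truncation step can be illustrated with plain NumPy (the shapes here are made up; a query for 30 voxels is imagined to return 30, 29, and 31 voxels for three subjects):

```python
import numpy as np

# Hypothetical per-subject data for one searchlight center: same number of
# samples (8), but slightly different voxel counts per subject.
rng = np.random.default_rng(0)
datasets = [rng.standard_normal((8, n)) for n in (30, 29, 31)]

# Truncate every subject's feature axis to the smallest count so the
# per-subject arrays are shape-compatible for stacking.
n_min = min(ds.shape[1] for ds in datasets)
truncated = [ds[:, :n_min] for ds in datasets]
group = np.vstack(truncated)  # shape (24, 29)
```

Note that this discards voxels by their position in the query result, not by anatomy, so which voxel gets dropped for a given subject is essentially arbitrary.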
If anybody has any insight into this issue, it would be greatly appreciated.
Thank you.
Bests,
Beau Sievers
—
import numpy as np
import mvpa2.suite as mv


def multi_qe_searchlight(dss, qes, measure,
                         chunks_attr='chunks', truncate=True):
    """Run a multi-subject searchlight analysis.

    Given a list of datasets and a list of query engines, create a multi-
    subject dataset at each node index and evaluate it with a measure. The
    measure should perform an across-subjects measurement using chunks_attr
    to identify subjects. Return a single aggregated dataset.
    """
    # all query engines should expose the same list of node IDs
    id_lists = [list(qe.ids) for qe in qes]
    assert all(id_lists[0] == id_list for id_list in id_lists)
    res = []
    for node_id in qes[0].ids:
        # pull each subject's features for this searchlight center
        datasets = [ds[:, qe.query_byid(node_id)] for ds, qe in zip(dss, qes)]
        if truncate:
            # equalize feature counts by truncating to the minimum across subjects
            n_features = [ds.shape[1] for ds in datasets]
            datasets = [ds[:, :np.min(n_features)] for ds in datasets]
        group_ds = mv.vstack(datasets)
        # label each subject's samples with a distinct chunk value
        n_samples = dss[0].shape[0]
        group_ds.sa[chunks_attr] = np.repeat(range(len(qes)), n_samples)
        res.append(measure(group_ds))
    res = mv.hstack(res)
    res.a['roi_center_ids'] = qes[0].ids
    return res
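To make the chunks-based splitting concrete, here is a toy, pure-NumPy stand-in for the measure (this is not the actual noise-ceiling measure from Nili et al., 2014; the leave-one-subject-out correlation below is just an illustration of how a chunks attribute identifies subjects within the stacked group data):

```python
import numpy as np

def loso_correlation(group_data, chunks):
    """Toy across-subjects measure: correlate each subject's mean pattern
    with the mean pattern of the remaining subjects, then average.
    A stand-in for a real noise-ceiling measure."""
    chunks = np.asarray(chunks)
    rs = []
    for s in np.unique(chunks):
        own = group_data[chunks == s].mean(axis=0)
        others = group_data[chunks != s].mean(axis=0)
        rs.append(np.corrcoef(own, others)[0, 1])
    return np.mean(rs)

# Three "subjects", 8 samples each, 29 voxels, sharing a common signal.
rng = np.random.default_rng(1)
signal = rng.standard_normal((8, 29))
data = np.vstack([signal + 0.5 * rng.standard_normal((8, 29))
                  for _ in range(3)])
chunks = np.repeat(range(3), 8)
r = loso_correlation(data, chunks)  # high, since subjects share signal
```

In the real analysis, `measure(group_ds)` would read the subject labels from `group_ds.sa[chunks_attr]` rather than taking them as a second argument.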
> On May 22, 2017, at 12:51 PM, Beau R. Sievers <Beau.R.Sievers.GR at dartmouth.edu> wrote:
>
> Hi there,
>
> How does one perform across-subjects analysis on the surface using PyMVPA?
>
> I am currently estimating the noise ceiling on my data (as described by Nili et al., 2014). This requires anatomically aligned data from all of the subjects in the experiment. I'm using the following approach: create a single large PyMVPA dataset including data from all template-aligned subjects, assign a chunks sample attribute corresponding to subject ID, then use a searchlight with a custom noise ceiling estimation measure (which uses the chunks sample attribute to split the samples appropriately).
>
> This works great, but I would like to move my analysis to the surface. So for each searchlight center _on the surface template_ I would need to get surrounding voxels _from all individual subjects._
>
> It is difficult for me to tell whether this is possible based on reading the query engine documentation and the surface searchlight example.
>
> Thanks!
>
> Bests,
> Beau Sievers
>
> —
>
> Refs:
>
> Nili, H., Wingfield, C., Walther, A., Su, L., Marslen-Wilson, W., & Kriegeskorte, N. (2014). A toolbox for representational similarity analysis. PLoS Comput Biol, 10(4), e1003553.
More information about the Pkg-ExpPsy-PyMVPA mailing list