jetzel at wustl.edu
Fri May 15 14:20:14 UTC 2015
On 5/15/2015 8:25 AM, basile pinsard wrote:
> What is the most sensible feature selection strategy between:
> - a fixed radius with a variable number of features included, which
> trains the different classifiers on different numbers of dimensions;
> - a fixed number of closest voxels/surface_nodes, which would cover a
> different surface/volume/spatial extent depending on location.
I don't think there's a theoretically-best answer to this right now, but
one is really needed.
Given that fMRI data is inherently volumetric (collected as volumes),
analyses based on a maximum spatial extent strike me as most logical:
the acquired fMRI resolution isn't higher in areas with a lot of cortical
folding (and thus very closely spaced surface vertices). But I can't
demonstrate (as of now, anyway!) that using a maximum spatial extent
(i.e., defined in mm in the volume) is actually the best way to go.
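For what it's worth, the trade-off between the two strategies is easy to see
with a quick neighbor count. Here's a minimal sketch using scipy, with random
points standing in for voxel/vertex coordinates in mm (a real check would load
the actual subject-space coordinates instead):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
# Hypothetical stand-in coordinates (mm); substitute real voxel or
# surface-vertex coordinates in practice.
coords = rng.uniform(0, 50, size=(2000, 3))
tree = cKDTree(coords)

# Strategy 1: fixed radius -> variable number of features per searchlight.
radius_mm = 5.0
neighbors = tree.query_ball_point(coords, r=radius_mm)
counts = np.array([len(n) for n in neighbors])
print("fixed radius: features per searchlight range",
      counts.min(), "-", counts.max())

# Strategy 2: fixed k nearest neighbors -> constant feature count, but
# variable spatial extent (the distance to the k-th neighbor).
k = 30
dists, _ = tree.query(coords, k=k)
effective_radius = dists[:, -1]
print("fixed k: effective radius range %.2f - %.2f mm"
      % (effective_radius.min(), effective_radius.max()))
```

With a fixed radius the feature count varies from searchlight to searchlight;
with a fixed k the count is constant but the effective radius varies. That is
exactly the choice in question, and neither side of it is obviously "right".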
> I ran the examples with surfaces, for which I used a spherical template
> (similar to the 32k surfaces in the HCP dataset) transformed into subject
> space. I computed the number of neighbors for each node with a fixed
> radius and found a differential sampling resolution across the brain,
> which somewhat overlaps with my network of interest (motor), hence my
> concern.
Interesting! And wise to check, since the accuracy (or whatever
statistic) can be strongly affected by the number of features with some
classifiers, such as the linear SVM. But as far as I know there's no
robust way to correct for these feature-number differences.
You mention the HCP ... are you trying surface-based searchlight
analyses with HCP data? If so, I'd be really curious to find out how you
coded it up.
Sorry that I don't have a list of simple, unambiguous answers; hopefully
others working with these issues will share their experiences and thoughts.