[pymvpa] Surface searchlight taking 6 to 8 hours
John Baublitz
jbaub at bu.edu
Thu Jul 23 14:47:41 UTC 2015
Thank you for the quick response. I tried outputting a surface file
before, using both niml.write() and surf.write(), as my lab would prefer
to visualize the results on the surface. I mentioned this in a previous
email and was told that I should be using niml.write() and visualizing
the results in SUMA. I decided against this because not only does the
output fail to open in our version of SUMA (I can include the error if
that would be helpful), but I have also found no evidence that .dset
files are compatible with FreeSurfer. My lab has a hard requirement that
whatever we output from the analysis must be viewable in FreeSurfer. Is
there any way to output a FreeSurfer-compatible surface file using
PyMVPA? If not, does PyMVPA include a utility to convert SUMA surface
files to FreeSurfer surface files?
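
In case it helps frame the question, below is a minimal sketch of the
kind of export I have in mind. The array is synthetic, and the use of
nibabel.freesurfer.io.write_morph_data as a FreeSurfer-compatible output
path is an assumption on my part, not something PyMVPA documents:

    import numpy as np
    import nibabel.freesurfer.io as fsio

    # Synthetic stand-in for the per-node searchlight result
    # (in practice this would be something like sl_result.samples[0]).
    n_nodes = 10242                    # e.g. an ico5 hemisphere
    values = np.random.rand(n_nodes).astype(np.float32)

    # Write a FreeSurfer morphometry ("curv"-style) overlay that can be
    # loaded onto lh.white or lh.inflated in Freeview/tksurfer.
    fsio.write_morph_data('lh.sl_accuracy', values)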
On Thu, Jul 23, 2015 at 6:45 AM, Nick Oosterhof <n.n.oosterhof at googlemail.com> wrote:
>
> > On 22 Jul 2015, at 20:11, John Baublitz <jbaub at bu.edu> wrote:
> >
> > I have been battling with a surface searchlight that has been taking
> > 6 to 8 hours for a small dataset. It outputs a usable analysis, but
> > the time it takes is concerning given that our lab is looking to use
> > even higher-resolution fMRI datasets in the future. I profiled the
> > searchlight call, and it looks like approximately 90% of that time is
> > spent in the function that maps feature IDs to linear voxel IDs
> > (feature_id2linear_voxel_ids).
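
For reference, the profiling itself was nothing elaborate; a
self-contained sketch of the approach, with a dummy stand-in for the
actual searchlight call, looks roughly like this:

    import cProfile
    import pstats

    def run_searchlight():
        # Placeholder for the real call, i.e. running the surface
        # searchlight measure on the dataset.
        return sum(i * i for i in range(10 ** 6))

    cProfile.run('run_searchlight()', 'sl_profile')
    pstats.Stats('sl_profile').sort_stats('cumulative').print_stats(20)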
>
> From mvpa2.misc.surfing.queryengine, you are using the
> SurfaceVoxelsQueryEngine, not the SurfaceVerticesQueryEngine? Only the
> former should be using the feature_id2linear_voxel_ids function.
>
> (When instantiating a query engine through disc_surface_queryengine,
> the Vertices variant is the default; the Voxels variant is used when
> output_modality='volume'.)
>
> For the typical surface-based analysis, the output is a surface-based
> dataset, and the SurfaceVerticesQueryEngine is used for that. When using
> the SurfaceVoxelsQueryEngine, the output is a volumetric dataset.
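
To make the distinction concrete, here is a rough sketch of how the two
variants are selected; the argument names and file names are
illustrative guesses and may differ between PyMVPA versions:

    from mvpa2.misc.surfing.queryengine import disc_surface_queryengine

    # Default: surface-based output, backed by SurfaceVerticesQueryEngine.
    qe_surf = disc_surface_queryengine(10.0, 'bold.nii.gz',
                                       'lh.white.asc', 'lh.pial.asc')

    # Volumetric output, backed by SurfaceVoxelsQueryEngine (the variant
    # that goes through feature_id2linear_voxel_ids).
    qe_vol = disc_surface_queryengine(10.0, 'bold.nii.gz',
                                      'lh.white.asc', 'lh.pial.asc',
                                      output_modality='volume')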
>
> > I looked into the source code, and it appears to use Python's "in"
> > operator on a list, which has to scan every element of the list on
> > each iteration of the list comprehension; that function is then
> > called for each feature. This might account for the slowdown. I'm
> > wondering if there is a way to work around this or speed it up.
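
To make the cost concrete, here is a small self-contained timing
comparison (unrelated to PyMVPA internals) of membership tests against a
list versus a set; converting the lookup structure to a set is the usual
workaround:

    import random
    import timeit

    ids = list(range(200000))
    targets = random.sample(ids, 1000)
    id_set = set(ids)

    # Each 'in' test against a list scans the list: O(n) per lookup.
    t_list = timeit.timeit(lambda: [t in ids for t in targets], number=1)
    # A set lookup is a hash probe: O(1) on average.
    t_set = timeit.timeit(lambda: [t in id_set for t in targets], number=1)

    print('list: %.3f s   set: %.6f s' % (t_list, t_set))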
>
> When using the SurfaceVoxelsQueryEngine, the Euclidean distance between
> each node (on the surface) and each voxel (in the volume) is computed.
> My guess is that this is responsible for the slow-down. It could
> probably be made faster by dividing the 3D space into blocks, assigning
> nodes and voxels to each block, and then computing distances between
> nodes and voxels only within each block and across neighbouring ones.
> (A somewhat similar approach is taken in
> mvpa2.support.nibabel.Surface.map_to_high_resolution_surf.) But that
> would take some time to implement and test. How important is this
> feature for you? Is there a particular reason why you would want the
> output to be a volumetric rather than a surface-based dataset?
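
A rough, untested sketch of that blocking idea, in plain NumPy rather
than PyMVPA code; the function name and the random coordinates are made
up for illustration:

    import numpy as np
    from collections import defaultdict

    def node_voxel_pairs_within(nodes, voxels, radius):
        """Return (node, voxel) index pairs closer than `radius`.

        Both point sets are bucketed into cubic blocks with edge length
        `radius`, so distances only need to be computed against voxels
        in a node's own block and its 26 neighbouring blocks.
        """
        block = lambda xyz: tuple(np.floor(xyz / radius).astype(int))

        voxels_by_block = defaultdict(list)
        for j, vox in enumerate(voxels):
            voxels_by_block[block(vox)].append(j)

        offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
                                for dy in (-1, 0, 1)
                                for dz in (-1, 0, 1)]
        pairs = []
        for i, node in enumerate(nodes):
            bx, by, bz = block(node)
            for dx, dy, dz in offsets:
                key = (bx + dx, by + dy, bz + dz)
                for j in voxels_by_block.get(key, ()):
                    if np.linalg.norm(node - voxels[j]) <= radius:
                        pairs.append((i, j))
        return pairs

    # Toy usage with random coordinates in mm.
    rng = np.random.RandomState(0)
    nodes = rng.rand(1000, 3) * 100
    voxels = rng.rand(2000, 3) * 100
    print(len(node_voxel_pairs_within(nodes, voxels, radius=5.0)))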
> _______________________________________________
> Pkg-ExpPsy-PyMVPA mailing list
> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa