[pymvpa] running time

Michael Hanke michael.hanke at gmail.com
Wed Oct 15 14:19:33 UTC 2008


Hi Andrew,

On Wed, Oct 15, 2008 at 09:58:34AM -0400, Andrew Connolly wrote:
> hi,
> 
> I am running a searchlight classifier (radius=5) on a whole-brain volume
> (41,742 voxels) using SMLR.  I have 16 conditions, 672 time points, and the
> data are divided into 6 chunks.  Cross-validation method is leave-one-out.
> This is running on a fast computer (3GHz dual-quad core with 16G ram),
> nonetheless it has been running for almost 24 hours.  Is there a way to
> estimate the running time for jobs like this?  Or better yet, can someone
> give me some advice about what I can do that would be faster.
> 
> I did not use the -O optimization flag when invoking python -- but still.
> Maybe SMLR is not the best choice for this problem...???
Might be. The duration of a searchlight analysis depends heavily on the
time the classifier needs to fit the data. With 16 conditions, the
classifier has to fit a whole set of decision planes for each sphere
dataset.
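
To put a number on it: with 41,742 voxels and a 6-fold
cross-validation, the searchlight performs 41742 * 6 = 250,452
classifier trainings, each on a 16-class problem. Even at, say, 0.3 s
per training (an illustrative figure, not a measurement) that would
already be over 20 hours.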

If you want to speed it up, you could do an odd-even split of the data
(instead of N-fold). Going from six folds to two will not give an exact
3x speed-up, but it should still be a lot faster.
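
A minimal sketch of that change, following the current 0.x API as used
in the searchlight_2d.py example (dataset construction is omitted here;
a masked-dataset sketch follows below):

  from mvpa.suite import *

  # 2-fold odd-even cross-validation instead of leave-one-chunk-out
  cv = CrossValidatedTransferError(TransferError(SMLR()),
                                   OddEvenSplitter())
  sl = Searchlight(cv, radius=5)
  results = sl(dataset)  # 'dataset' is your (masked) dataset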

I guess you are using a mask to exclude non-brain voxels, right?
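
If not, that alone would cut the number of spheres considerably. For
reference, a hedged sketch with made-up file names:

  from mvpa.suite import *

  # the mask restricts the analysis to brain voxels only
  dataset = NiftiDataset(samples='bold.nii.gz',
                         labels=labels, chunks=chunks,
                         mask='brain_mask.nii.gz')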

If you want to estimate the running time, simply enable the
corresponding debug modes. See the searchlight_2d.py example in
doc/examples or here:

http://www.pymvpa.org/examples.html#easy-searchlight

basically:

  # 'debug' ships with PyMVPA (also exported by 'from mvpa.suite import *')
  from mvpa.base import debug

  # enable debug output for the searchlight call
  if __debug__:
    debug.active += ["SLC"]

should enable the progress output.

Finally, running Python with -O should also speed things up, but first
check how long each sphere takes to be processed. Keep in mind that -O
sets __debug__ to False, so the progress output above is only available
when running without it.

It might be the case that some other classifier is quicker than SMLR,
but usually SMLR is quite fast.
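
If you want to try an alternative nevertheless, a linear SVM is a
common choice; in the sketch above it is just a matter of swapping the
classifier (LinearCSVMC is the libsvm-backed linear C-SVM in PyMVPA):

  # same cross-validation setup, different classifier
  cv = CrossValidatedTransferError(TransferError(LinearCSVMC()),
                                   OddEvenSplitter())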


HTH,

Michael

-- 
GPG key:  1024D/3144BE0F Michael Hanke
http://apsy.gse.uni-magdeburg.de/hanke
ICQ: 48230050


