[Neurodebian-users] AFNI upgrades

Yaroslav Halchenko debian at onerussian.com
Thu Jan 9 14:33:29 UTC 2014


On Thu, 09 Jan 2014, Andreas Berger wrote:
> i'm setting up neurodebian for our lab, and i'm wondering whether to install 
> AFNI from upstream or via apt. as i understand it, upstream provide daily 
> builds, with no versions marked as 'stable' or otherwise recommended. 

isn't it "nice"? ;)

> previously it was up to the users in our lab to upgrade AFNI, and to not 
> upgrade it during a project. my question is: how frequently is AFNI updated in 
> neurodebian? 

not that often, because it is not exactly trivial -- we maintain an
alternative build system, and it takes a bit of work to adjust it for
every snapshot release to reflect changes in the upstream (stock) build
system.  Usually we update whenever there is demand for a newly
introduced feature, or when we find out that an important issue was
addressed.  Because there is also no public version control system, it
would be very difficult to pick out specific fixes, which is why we do
a 'full update' in those cases.

> how is the version chosen? 

whenever the need for an upgrade is "triggered", we look through the
changes, adjust the build system, and run basic tests (we are very
slowly establishing some testing for the package), then try it out.  If
no issues are detected, we push it out.  If problems are found, we
report/fix them and go back to the beginning, until we obtain a version
which "works" ;)

> do you recommend apt-pinning AFNI for 
> the duration of a project to avoid inconsistencies? 

if you can "afford" that, then indeed pinning might be the best
option.
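
For instance, a pin could be dropped into /etc/apt/preferences.d/ --
the version below is only a made-up placeholder, so substitute whatever
'apt-cache policy afni' reports as currently installed on your systems:

    # /etc/apt/preferences.d/afni -- hypothetical example, adjust the version
    Package: afni
    Pin: version 0.20131204~dfsg.1-1~nd70+1
    Pin-Priority: 1001

A priority above 1000 keeps apt on that exact version even when a newer
one appears in the archive; alternatively, 'apt-mark hold afni' simply
freezes whatever version is installed at the moment.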

> is there some sort of 
> documented policy where i could look up these questions?

unfortunately no.  But after the recent infamous FreeSurfer paper
(http://dx.plos.org/10.1371/journal.pone.0038234) people are even more
aware of the dangers of changing horses mid-project, or of using a very
heterogeneous collection of systems for running the analysis: not even
because of obvious "bugs", but simply because of the inherent
intricacies of the methods and software used, and the often tiny effect
sizes we are dealing with.    If you are analyzing in stages (e.g.
preprocessing first on all the data at once), that should provide some
additional level of assurance that, at least within those steps, the
"results" are not simply due to a change of software versions
mid-study.

In my case, to the degree possible, I am trying to automate the
processing using e.g. nipype.  So whenever a software upgrade happens I
can easily rerun the analysis and see whether the results remain the
same.
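
Just to illustrate (a minimal sketch, not our actual pipeline: the
interface names come from nipype's AFNI wrappers, and the input file
name is made up), a tiny workflow like this can be kept under version
control and simply rerun after an upgrade to compare outputs:

    # minimal nipype sketch: two AFNI preprocessing steps chained together,
    # rerunnable as-is after an upgrade to check whether results change
    from nipype.pipeline.engine import Node, Workflow
    from nipype.interfaces import afni

    despike = Node(afni.Despike(outputtype='NIFTI_GZ'), name='despike')
    despike.inputs.in_file = 'sub01_bold.nii.gz'   # hypothetical input file

    volreg = Node(afni.Volreg(outputtype='NIFTI_GZ'), name='volreg')

    wf = Workflow(name='afni_preproc', base_dir='work')
    wf.connect(despike, 'out_file', volreg, 'in_file')
    wf.run()

After an upgrade you just rerun the script and compare the outputs
against the previous run.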

Such automation is at the core of 'reproducibility' anyway: it is nice
to explore different options/approaches interactively, but to give
yourself peace of mind that the results you obtain are not simply a
side effect of a human error made somewhere along the course of the
study, the analysis had better be automated sooner or later, so that it
can be redone quickly.  How feasible that is for general adoption is a
separate question, so I will stop preaching here, but I'd advise also
having a look at the https://testkraut.readthedocs.org project Michael
has started to address the very problem of validating the
software/results.

> curious to see how others handle this

thanks for bringing this issue up -- hopefully it will be a lively
discussion.  Maybe it would even be worth posting this question on the
freshly born (thus still rough around the edges):
http://neurostars.org/
-- 
Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Senior Research Associate,     Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834                       Fax: +1 (603) 646-1419
WWW:   http://www.linkedin.com/in/yarik        


