[med-svn] [r-cran-bbmle] 08/11: New upstream version 1.0.19

Andreas Tille tille at debian.org
Fri Sep 29 18:15:35 UTC 2017


This is an automated email from the git hooks/post-receive script.

tille pushed a commit to branch master
in repository r-cran-bbmle.

commit a23e98cd88d005aff26e373c28412b7ea0058839
Author: Andreas Tille <tille at debian.org>
Date:   Fri Sep 29 20:14:02 2017 +0200

    New upstream version 1.0.19
---
 DESCRIPTION                             |    6 +-
 MD5                                     |   62 +-
 NAMESPACE                               |    7 +-
 NEWS                                    |  203 ----
 R/IC.R                                  |    8 +-
 R/mle.R                                 |   49 +-
 R/mle2-methods.R                        |    6 +-
 R/profile.R                             |  568 ++++++-----
 build/vignette.rds                      |  Bin 285 -> 291 bytes
 inst/NEWS.Rd                            |   16 +
 inst/doc/mle2.R                         |   13 +-
 inst/doc/mle2.Rnw                       |   19 +-
 inst/doc/mle2.pdf                       |  Bin 249971 -> 249791 bytes
 inst/doc/quasi.Rnw                      |   17 +-
 inst/doc/quasi.pdf                      |  Bin 134912 -> 130748 bytes
 inst/unitTests/Makefile                 |   15 -
 man/as.data.frame.profile.mle2.Rd       |    7 +-
 man/mle-class.Rd                        |    7 +-
 man/mle2.Rd                             |    4 +-
 man/profile-methods.Rd                  |   16 +-
 man/profile.mle-class.Rd                |    2 +-
 tests/BIC.R                             |    0
 tests/BIC.Rout.save                     |    0
 tests/ICtab.R                           |    6 +
 tests/ICtab.Rout.save                   |   16 +-
 tests/Makefile                          |   18 -
 tests/RUnit-tests.R                     |    0
 tests/binomtest1.R                      |    0
 tests/binomtest1.Rout                   |    0
 tests/binomtest1.Rout.save              |    0
 {inst/unitTests => tests}/boundstest.R  |    0
 tests/controleval.R                     |    0
 tests/controleval.Rout.save             |    0
 tests/doRUnit.R                         |   63 --
 tests/eval.R                            |    0
 tests/eval.Rout.save                    |    0
 tests/formulatest.R                     |    0
 tests/formulatest.Rout.save             |    0
 tests/glmcomp.R                         |    0
 tests/glmcomp.Rout.save                 |    0
 tests/gradient_vecpar_profile.R         |    0
 tests/gradient_vecpar_profile.Rout.save |    0
 tests/grtest1.R                         |   11 +-
 tests/grtest1.Rout.save                 |   44 +-
 tests/makesavefiles                     |    0
 tests/methods.R                         |    0
 tests/methods.Rout.save                 |    0
 tests/mkout                             |    0
 tests/mortanal.R                        |    0
 tests/mortanal.Rout.save                |    0
 tests/optimize.R                        |    0
 tests/optimize.Rout.save                |    0
 tests/optimizers.R                      |    0
 tests/optimizers.Rout.save              |    0
 tests/optimx.R                          |    0
 tests/optimx.Rout.save                  |    0
 tests/order.R                           |    0
 tests/order.Rout.save                   |    0
 tests/parscale.R                        |    0
 tests/parscale.Rout                     |    0
 tests/parscale.Rout.save                |    0
 tests/predict.R                         |    0
 tests/predict.Rout.save                 |    0
 tests/prof_newmin.R                     |   10 +
 tests/prof_spec.R                       |   29 +
 tests/profbound.R                       |    0
 tests/profbound.Rout.save               |    0
 tests/richards.R                        |    0
 tests/richards.Rout.save                |    0
 tests/startvals.R                       |    0
 tests/startvals.Rout.save               |    0
 tests/startvals2.R                      |    0
 tests/startvals2.Rout.save              |    0
 tests/test-relist1.R                    |    0
 tests/test-relist1.Rout.save            |    0
 tests/testbounds.R                      |    0
 tests/testbounds.Rout                   |    0
 tests/testbounds.Rout.save              |    0
 tests/testderiv.R                       |    0
 tests/testderiv.Rout.save               |    0
 tests/testenv.R                         |   16 +-
 tests/testenv.Rout.save                 |   32 +-
 tests/testparpred.R                     |    0
 tests/testparpred.Rout.save             |    0
 tests/tmptest.R                         |    0
 tests/tmptest.Rout.save                 |    0
 tests/update.R                          |    0
 tests/update.Rout.save                  |    0
 vignettes/chicago.bst                   | 1654 -------------------------------
 vignettes/clean                         |    1 -
 vignettes/mle2.Rnw                      |   19 +-
 vignettes/quasi.Rnw                     |   17 +-
 92 files changed, 578 insertions(+), 2383 deletions(-)

diff --git a/DESCRIPTION b/DESCRIPTION
old mode 100755
new mode 100644
index 7befc11..ed3b4eb
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: bbmle
 Title: Tools for General Maximum Likelihood Estimation
-Version: 1.0.18
+Version: 1.0.19
 Author: Ben Bolker <bolker@mcmaster.ca>, R Development Core Team
 Maintainer: Ben Bolker <bolker@mcmaster.ca>
 Depends: R (>= 3.0.0), stats4
@@ -15,6 +15,6 @@ License: GPL
 Collate: 'mle2-class.R' 'mle2-methods.R' 'mle.R' 'confint.R'
         'predict.R' 'profile.R' 'update.R' 'dists.R' 'IC.R' 'slice.R'
 NeedsCompilation: no
-Packaged: 2016-02-11 15:22:35 UTC; bolker
+Packaged: 2017-04-18 01:11:10 UTC; bolker
 Repository: CRAN
-Date/Publication: 2016-02-11 16:57:55
+Date/Publication: 2017-04-18 13:04:25 UTC
diff --git a/MD5 b/MD5
index 7beb224..d8bb5e4 100644
--- a/MD5
+++ b/MD5
@@ -1,41 +1,38 @@
-3cb95ce6744cb07ceefd92cde7c7e544 *DESCRIPTION
-43132b4d01f40ed3509e8d74e5f0a00a *NAMESPACE
-254b14447eca543cd4b48657a57d1dad *NEWS
-1fb14e5d13c8fe35972e7184ecc3b04f *R/IC.R
+88b59c83d28e7e844851b8c2d87d4b95 *DESCRIPTION
+c34f63daf88f9e7118cd29176270a1bb *NAMESPACE
+6df6a983c57b15df0e0132129a336fd1 *R/IC.R
 2625b509110e94cceed414dc1ca9101d *R/confint.R
 0a63c5da03a0e73ae29f6c1c4f2fd5cd *R/dists.R
-ac81141b449f31d6d2929bf2c4f559e8 *R/mle.R
+56ddd6f4b62df8ca7a1b58ad4775aa49 *R/mle.R
 e695b7c949576d0927c0c2bf5a8e203c *R/mle2-class.R
-e6ca14e61d3632f6f91ec7a84d41f5f4 *R/mle2-methods.R
+2a2bfa30b9feb9251dabf7d17f1d1246 *R/mle2-methods.R
 2d3da6aaa0a07bd64c320a718d87d31e *R/predict.R
-be29fa8693c23b1a08ce4ddbe16e7307 *R/profile.R
+fcd9d1e3cbb09f66135890a2d3b2bc38 *R/profile.R
 2c39b590a0c236bff22bb00252544082 *R/slice.R
 28b4b4a714c1beebcc7516b888e1b641 *R/update.R
 3d7347481a3e76bb6d57990dd789c3a4 *TODO
-d373722780980723c518f40de4333748 *build/vignette.rds
-31551754937d4aba61a3096e17c64978 *inst/NEWS.Rd
-53afe4d9bcf215a652c07cacdd316976 *inst/doc/mle2.R
-cd0d040f1d726e0d550dd694b84ce196 *inst/doc/mle2.Rnw
-261bb4cec6fec1afd0b51c388f04629d *inst/doc/mle2.pdf
+492f2ea4bf11ccc85e60de3f41c5bf70 *build/vignette.rds
+d66acccbe4e9bb072d14896195791e29 *inst/NEWS.Rd
+b5555aa6d41773a0e4df0590aecc7e14 *inst/doc/mle2.R
+e8c369ae9771830ce616d69a1983031b *inst/doc/mle2.Rnw
+966cf0ffc322ff4d63fdfd146ff62933 *inst/doc/mle2.pdf
 e4073ae8723f00fe0f3b728db2b31a16 *inst/doc/quasi.R
-c5afe01abc61f4b419917852593b9e4b *inst/doc/quasi.Rnw
-605fa1a94e9b61f0e8ac05ae8d94122a *inst/doc/quasi.pdf
-334f0de6ed55dc79f59addf091097353 *inst/unitTests/Makefile
-bf9cb0badb64c11e22d1b7d15c060a73 *inst/unitTests/boundstest.R
+d4a4dc5fd5bd327c231ca1715eb74986 *inst/doc/quasi.Rnw
+e7d03ee71c8bb1f48080fab56da47ac5 *inst/doc/quasi.pdf
 a399ce19c47219ea3474978d2f4ecac6 *inst/vignetteData/orob1.rda
 fad7f0284df8a44372565c480f8e4dfb *man/BIC-methods.Rd
 7a309d55019db340dc2b1fa5e662ab32 *man/ICtab.Rd
-2973cf3eb548b5ab9cd3c3eec676ddff *man/as.data.frame.profile.mle2.Rd
+2688e18ad5fc01c2698e6557c357d8af *man/as.data.frame.profile.mle2.Rd
 b6ce8a230403e4049aeb543dcdf7f889 *man/call.to.char.Rd
 8f4ce3f14c61679b0583aada2d2c6493 *man/get.mnames.Rd
-a6b15977ffaf14454629d969f979e9c4 *man/mle-class.Rd
-e3c46635b577a2f0245a30f5be310f01 *man/mle2.Rd
+c7c79c910f6210de4528dc3b43484b05 *man/mle-class.Rd
+95c2381929b8879caad229163c2ac90c *man/mle2.Rd
 c8bdc07658fc20e685b36d709d1ced51 *man/mle2.options.Rd
 5402485d416bd59d04bbf4c4ea34c999 *man/namedrop.Rd
 7a0bc1dbcb08bc40ee81b7870967c1ef *man/parnames.Rd
 efce19f271b87e19638cb0683f7f6bd8 *man/predict-methods.Rd
-382f3a6f1bf091225064e1f293b72774 *man/profile-methods.Rd
-33e8d338881d137e634c09eb64baecc1 *man/profile.mle-class.Rd
+2ac866f204de3b921b70591b0177e3fd *man/profile-methods.Rd
+0aa7332e039cf89bca966c77dad5cbbf *man/profile.mle-class.Rd
 ea4640bf21b60e594d437304c3910e85 *man/relist.Rd
 56db6345ce930b55ae4addbfa6afc6e3 *man/sbinom.Rd
 eba67df829390e8cd96db70c52ed6fdd *man/slice.Rd
@@ -44,16 +41,15 @@ bc2aec35cda556cb0977380afebd4ca9 *man/strwrapx.Rd
 1c94867c2e5c5b7239f62290d254da0a *man/summary.mle-class.Rd
 677bab474659dbf8e1f16061a32e5f03 *tests/BIC.R
 c5f6c880e3fc121e0d6f16193014469c *tests/BIC.Rout.save
-a7a3544c028cf6e1c858559f15053914 *tests/ICtab.R
-42ab8c78d826e7b82a8eedaaabfcc7b0 *tests/ICtab.Rout.save
-3ab6379e35c758d75dbe2dd204e4766d *tests/Makefile
+741c879f7b2fa896eb8614d80823bcd5 *tests/ICtab.R
+f8ec8fb8c28cc2029e1c4d3323bd8915 *tests/ICtab.Rout.save
 7e791632cd72a0dab72b6b1059b85273 *tests/RUnit-tests.R
 202d16aa2bf77be5df020bda2240703e *tests/binomtest1.R
 138465684c603e66d87035baabc03f65 *tests/binomtest1.Rout
 de4898499c070e21485ddaed01e73c09 *tests/binomtest1.Rout.save
+bf9cb0badb64c11e22d1b7d15c060a73 *tests/boundstest.R
 055f3f858af92dac796c775fdb6cffe5 *tests/controleval.R
 54d3a16476aff59b8947a9b218733be5 *tests/controleval.Rout.save
-614b9446be434240549669abbf8cd88e *tests/doRUnit.R
 4421d42f41892221c6604df13514fab4 *tests/eval.R
 3d411aa0bc3cdad597b17fde4b539732 *tests/eval.Rout.save
 98e85875b0f557344a830f8957c600f1 *tests/formulatest.R
@@ -62,8 +58,8 @@ aa886a9c7ab1b518abd247d7d20e1ef6 *tests/glmcomp.R
 631d9de06283af92a3d567576f994553 *tests/glmcomp.Rout.save
 c5eaa3b836964c3c3a3588b13730e098 *tests/gradient_vecpar_profile.R
 332514b495edf939128e0b68c79b59c8 *tests/gradient_vecpar_profile.Rout.save
-8cb5d68e069f52992e480a7066e054ee *tests/grtest1.R
-097d2e49e2448a0bf7cb8566e39cc93d *tests/grtest1.Rout.save
+8e586c21bd27eb96bd0d381d64a216d0 *tests/grtest1.R
+4ed1220040a3742acb260754e1656f33 *tests/grtest1.Rout.save
 763a796aaa2bfa27693b4a8cb57783e2 *tests/makesavefiles
 6cf3e83f5491806bf7f8a75faafe2695 *tests/methods.R
 bd137b0505a83b54357345cdceb59dcb *tests/methods.Rout.save
@@ -83,6 +79,8 @@ b0cb07cae3015f7e56eef6708a47236e *tests/optimizers.Rout.save
 30b0b9c51cec72ecde06be963c9d3b6f *tests/parscale.Rout.save
 adf07c6ff92b4ae6f8ece745a93b1522 *tests/predict.R
 df6f12096d996324b2d19467b9905892 *tests/predict.Rout.save
+a714b957cfd9a8f6148160ae18c56472 *tests/prof_newmin.R
+0b52fc583dc02c9c422cb878ba3d6128 *tests/prof_spec.R
 68edb941f246a47564617d7aea9647bd *tests/profbound.R
 ee5f86f38e1dfc8a69958e5d5b07df08 *tests/profbound.Rout.save
 3c624de2efa1f848c87d71e5e7cb5641 *tests/richards.R
@@ -98,8 +96,8 @@ c118f8284b641d973e449de5afd584f9 *tests/test-relist1.Rout.save
 375be792dbfd82d6d56aeb19006488af *tests/testbounds.Rout.save
 ba254da51e09a22e84f803832382fc11 *tests/testderiv.R
 318b6a073389d6638ba88b2892421af9 *tests/testderiv.Rout.save
-0244d9b234c38b94b099365398dad281 *tests/testenv.R
-a16a851cc68fabac462594986166e36e *tests/testenv.Rout.save
+8f40025fa6fd7986d5dcb818fce9e100 *tests/testenv.R
+1e954bdb02ce9e9d3814cb94ca002bd1 *tests/testenv.Rout.save
 6a8dd303587eaf35a465b2e062264b50 *tests/testparpred.R
 01059ad5c653ce771ecbd81d4946026f *tests/testparpred.Rout.save
 4a76e0b4daec5dc81b0378e7bdb67826 *tests/tmptest.R
@@ -107,8 +105,6 @@ dd885bf956855f37df24d0dbe37ba7bd *tests/tmptest.Rout.save
 2d49b0803524b896e48d6879d18f8190 *tests/update.R
 53661890c555a4f7e5c21accbe775fed *tests/update.Rout.save
 0a27805bbe6b6d67ef37f760dc991917 *vignettes/cc-attrib-nc.png
-cd2df3f6f14e5d0af434d1aa53b7a0ed *vignettes/chicago.bst
-bc870713cfebc6c5b5fa52731fd3162b *vignettes/clean
-cd0d040f1d726e0d550dd694b84ce196 *vignettes/mle2.Rnw
+e8c369ae9771830ce616d69a1983031b *vignettes/mle2.Rnw
 ae21998f0dafa40e30841d4abc02ceed *vignettes/mle2.bib
-c5afe01abc61f4b419917852593b9e4b *vignettes/quasi.Rnw
+d4a4dc5fd5bd327c231ca1715eb74986 *vignettes/quasi.Rnw
diff --git a/NAMESPACE b/NAMESPACE
index b2b57e8..49586c0 100755
--- a/NAMESPACE
+++ b/NAMESPACE
@@ -3,6 +3,7 @@ export(sbinom,snorm,sbeta,snbinom,spois,sbetabinom)
 export(ICtab,AICtab,BICtab,AICctab)
 export(stdEr,vcov)
 export(slice,sliceOld,slice1D,slice2D)
+export(proffun)
 exportClasses(mle2)
 exportMethods(AIC, AICc, qAICc, qAIC, 
                profile, coef, confint, logLik,
@@ -12,10 +13,10 @@ importClassesFrom(stats4,mle)
 importFrom(stats4,coef,confint,logLik,BIC,summary,profile,vcov,AIC, update, plot)
 importFrom(stats,
            anova,deviance,residuals,
-           simulate,predict,formula)
-importFrom(methods,setMethod)
+           simulate,predict,formula,napredict,na.omit,na.exclude)
+importFrom(methods,setMethod,is)
 importFrom(lattice,xyplot,splom,diag.panel.splom,panel.abline,panel.number,panel.points,panel.xyplot)  ## for slice methods
-importFrom(numDeriv,hessian,grad)
+importFrom(numDeriv,hessian,grad,jacobian)
 importFrom("grDevices", "dev.interactive")
 importFrom("graphics", "abline", "lines", "par", "points", "text")
 importFrom("methods", "new")
diff --git a/NEWS b/NEWS
deleted file mode 100755
index 4e9eba4..0000000
--- a/NEWS
+++ /dev/null
@@ -1,203 +0,0 @@
-0.9.8
- * gradient functions work better with fixed parameters, hence with profiling
- * profile plot reverts to linear for non-monotonic profile
- * added warning in confint for non-monotonic profile; revert
-    from spline+linear to linear approximation in this case
- * documentation improvements
- * optimx improvements
- * require data= argument when using formula interface
- * turn off hessian computation in profile
- * allow use of MASS::ginv
-
-0.9.7
- * bug fix in calc_mle2_function for no-intercept models
-  (thanks to Colin Kremer)
- * fixed optimx, added 'user' option
-
-0.9.6
- * changed hessian calculation to use numDeriv code (causes tiny
-changes to hessian results that could matter in edge cases)
-  too lazy to provide a backward compatibility mode ...
- * documented optimizer= choices in ?mle2
-
-0.9.5.1
- * fixed bug in AICc (David Harris)
-
-
-0.9.5 
-
-  too many changes!
- * added NAMESPACE, various fixes to go with that
- * beginnings of an RUnit testing framework
- * tweaked vignette
- * added prof.lower, prof.upper to profile()
- * added "optimize" to list of allowed optimizers, some
-   bug fixes
-
-0.9.4.1
-
- * tweaked par() resetting in profile plots
-
-0.9.4
-
- * more qAICc fixing
-
-0.9.3 (version bump from previous) 18/09/2009
-
-* tweaked handling of bounds: profile now succeeds
-  on some 1D problems where it didn't before
-
-15/08/2009
-
-* added deviance, residuals methods
-* added newparams argument to predict, simulate;
-   newdata argument to simulate
-* added vignette (stub)
-* added explicit params argument, to help sort out
-  full parameter specifications when parameters is
-  non-NULL
-
-0.9.2 10/08/2009
-* fixed predict() for case with parameters
-* added snorm
-* changed ICtab defaults to weight=TRUE, base=FALSE, sort=TRUE
-
-0.9.1 
-  added simulate method (formula interface only)
-  fix AICctab bug
-  remove spurious cat/print in profile
-  fix qAIC bug
-
-0.9.0 26/08/2008
-*  fix Tom Hobbs bug: named lower/upper/parscale/ndeps
-get rearranged properly, otherwise rearrange in order
-of "start" and issue a warning
-* documentation tweak for S4 as.data.frame
-* added sbeta to list of known distributions
-* removed nlme requirement & auto-loading
-
-0.8.9 04/08/2008
-* version bump, submit to CRAN
-* added predict method
-
-0.8.8  10/07/2008
-* added flexibility for profile plotting (main, x labels etc.);
-    added examples
-* added an instance of "namedrop" to fix naming problem
-* added tol.newmin to slice etc.
-* added check for numeric return from profile within confint
-* fixed bugs in profile plotting when profile is restricted
-   to a subset of variables
-* added tests for par() to reset to original on exit
-* improved profile documentation
-* replicate std.err if specified in profile
-* add as.data.frame 
-* tweak tol.newmin (better fit found during profile) code
-
-0.8.7  12/05/2008
-* version bump, moved to R-forge.
-* reordered NEWS file (most recent first)
-     
-0.8.6.1 22/03/2008:
-   tweaked stop-on-better-fit code
-   fixed (?) qAIC(c) methods
-
-   26/03/2008:
-   tweak/fix to ICtab documentation (thanks to Tom Hobbs)
-
-0.8.6 added qAIC(c) methods (not working yet!)
-
-0.8.5.1 oops. Fixed infelicity (bug?) in new 
-  environment manipulation
-
-0.8.5 tweaked environment/data assignment to preserve
-    original minuslogl environment better
-
-0.8.4 changed  plot.profile.mle2 options (added onepage etc.,
-   made plot.confstr=TRUE by default)
-
-0.8.3 added warning about too-short lower/upper 
-      added documentation
-
-0.8.2 fixed bug in AICctab
-      cosmetic change to printing -- save call.orig
-      moved ChangeLog to NEWS
-
-0.8.1 fixed (?) environment bug
-      tried to use built-in relist, but failed: renamed relist
-	  to "relist2" (try again later)
-      documented get.mnames (auxiliary function for ICtabs)
-      started to add gr (gradient) capability -- NOT TESTED
-
-0.8  changed ICtab to allow either ICtab(x,y,z) or ICtab(list(x,y,z))
-	  (L <- list(...); if is.list(L[[1]]) && length(L)==1)
-
-0.7.7 fix bug in profiling: all optim() methods EXCEPT L-BFGS-B
-     return the value of the objective function if given a function
-     with no arguments/zero-length starting parameter vector
-     (this is the situation with "profiling" a 1-D function).
-     L-BFGS-B gives funky answers.  added a check for this case.
-     (may need to check behavior for alternate optimizers (nlm etc))
-    [this behavior triggered a "found better fit" error when profiling
-     1D functions with L-BFGS-B]
-
-     changed behavior when finding better fit during profiling
-     to return new parameters
-
-
-0.7.6 tweak vignette
-    fixed second major AICc bug (was fixed in mle2 method,
-	 but not in logLik method)
-
-0.7.5 change "ll" to "LL" in examples for clarity
-    tweaked anova reporting of models (wrap instead of truncating)
-    added (undocumented) show.points option to profile plot
-	to display actual locations of profile evaluation
-    tweaked profile to behave better when profiling variables
-	with constraints (upper, lower)
-    moved vignette to inst/doc where it belongs
-    ICtab hack to protect against package:aod definition of AIC(logLik)
-    added submit stub
-    tweaked slice.mle2-class docs for consistency
-    fiddled with vignette
-    preliminary code to allow non-monotonic profiles
-    preliminary add nlm to list of optimizers (untested)
-    add aod, Hmisc, emdbook to VignetteDepends and Suggests:
-
-0.7 better df extraction in ICtab
-    minor bug fix for AICc (allows AICc of nls objects)
-    handle models with -1 in formula better:
-	starting values set "all equal"
-    made ANOVA formula line-length accessible
-    added skip.hessian and trace arguments to mle2
-    messed around with BIC definition -- attempt at consistency with nlme
-    added rudimentary support for nlminb, constrOptim
-    nlme now required for fdHess (which is required for
-	nlminb since it doesn't compute a finite-diff
-	Hessian)
-
-0.6 add experimental formula interface
-    change all names from mle to mle2 to avoid confusion/conflicts
-       with stats4 version of mle
-    change internal structure of data evaluation
-    worked on vignette
-    added optimizer slot (stub)
-
-0.5 fix AICc bug!  (was deviance+2*k*(k+1)/(n-k-1), not AIC+2*k*(k+1)/(n-k-1)
-
-0.4 change AIC to AICc for corrections
-    add AICtab for weights, delta, sort ... options
-
-    expose error messages occuring within profile()
-    uniroot tries harder to find a valid endpoint
-    truncate terms in anova.mle at 80 characters
-
-0.3: enhanced anova method, works with print.anova
-tweaked namedrop() code -- ??
-
-0.2: added parnames, parnames<-
-minor fix to allow "profiles" of 1-parameter models
-  (skip fdHess call)
-minor change to print method for mle results
-tweaking "vecpar" (to allow parameter vectors in objective function)
-removed fdHess/nlme dependency
diff --git a/R/IC.R b/R/IC.R
index b226833..3f1cfd0 100755
--- a/R/IC.R
+++ b/R/IC.R
@@ -38,8 +38,8 @@ ICtab <- function(...,type=c("AIC","BIC","AICc","qAIC","qAICc"),
         if (!is.null(df <- attr(x,"df"))) return(df)
         else if (!is.null(df <- attr(logLik(x),"df"))) return(df)
     }
-    dIC <- ICs-min(ICs)
-    dlogLiks <- logLiks-min(logLiks)
+    dIC <- ICs-min(ICs,na.rm=TRUE)
+    dlogLiks <- logLiks-min(logLiks,na.rm=TRUE)
     df <- sapply(L,getdf)
     tab <- data.frame(df=df)
     if (delta) {
@@ -57,7 +57,9 @@ ICtab <- function(...,type=c("AIC","BIC","AICc","qAIC","qAICc"),
     }
     if (!delta && !base) stop("either 'base' or 'delta' must be TRUE")
     if (weights) {
-        wts <- exp(-dIC/2)/sum(exp(-dIC/2))
+        dIC_noNA <- na.exclude(dIC)
+        wts <- napredict(attr(dIC_noNA,"na.action"),
+                                    exp(-dIC_noNA/2)/sum(exp(-dIC_noNA/2)))
         tab <- data.frame(tab,weight=wts)
     }
     if (missing(mnames)) {
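The IC.R change above makes the Akaike-weight column NA-tolerant: NA IC values are dropped with na.exclude() before normalizing, and napredict() pads the weights back out to the full length. A minimal sketch of that idiom on a hypothetical delta-IC vector (illustration only, not package code):

```r
## NA-safe Akaike weights: drop NAs, normalize, then re-pad to full length
dIC <- c(0, 2.3, NA, 6.1)            # hypothetical delta-IC values
dIC_noNA <- na.exclude(dIC)          # drops the NA, remembers its position
wts <- napredict(attr(dIC_noNA, "na.action"),
                 exp(-dIC_noNA/2)/sum(exp(-dIC_noNA/2)))
## wts has the original length, with NA restored in the third slot
## and the non-NA weights summing to 1
```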
diff --git a/R/mle.R b/R/mle.R
index 320d607..5803638 100755
--- a/R/mle.R
+++ b/R/mle.R
@@ -166,7 +166,7 @@ mle2 <- function(minuslogl,
                  use.ginv=TRUE,
                  trace=FALSE,
                  browse_obj=FALSE,
-                 gr,
+                 gr=NULL,
                  optimfun,
                  ...) {
 
@@ -223,6 +223,7 @@ mle2 <- function(minuslogl,
   ## call$control$ndeps <- eval.parent(call$control$ndeps)
   ## call$control$maxit <- eval.parent(call$control$maxit)
   call$control <- eval.parent(call$control)
+  call$method <- eval.parent(call$method)
   if(!missing(start))
     if (!is.list(start)) {
       if (is.null(names(start)) || !is.vector(start))
@@ -327,7 +328,7 @@ mle2 <- function(minuslogl,
       do.call("minuslogl",namedrop(args))
   } ## end of objective function
   objectivefunctiongr <-
-    if (missing(gr)) NULL else
+    if (!is.null(gr))
         function(p) {
           if (browse_obj) browser()
           l <- relist2(p,template) ## redo list structure
@@ -367,10 +368,11 @@ mle2 <- function(minuslogl,
       mapply(assign,names(d),d,
              MoreArgs=list(envir=newenv))
       environment(minuslogl) <- newenv
-      if (!missing(gr)) {
+      if (!is.null(gr)) {
           newenvgr <- new.env(hash=TRUE,parent=environment(minuslogl))
           mapply(assign,names(d),d,
                  MoreArgs=list(envir=newenvgr))
+          environment(gr) <- newenvgr
       }
   }
   if (length(start)==0 || eval.only) {
@@ -378,7 +380,8 @@ mle2 <- function(minuslogl,
     optimizer <- "none"
     skip.hessian <- TRUE
     oout <- list(par=start, value=objectivefunction(start),
-                 hessian = matrix(NA,nrow=length(start),ncol=length(start)))
+                 hessian = matrix(NA,nrow=length(start),ncol=length(start)),
+                 convergence=0)
   } else {
     oout <- switch(optimizer,
                    optim = {
@@ -475,9 +478,7 @@ mle2 <- function(minuslogl,
     names(oout$par) <- names(start)
   }
 
-  ## FIXME: worry about boundary violations?
-  ## (if we're on the boundary then the Hessian may not be useful anyway)
-  ##
+  ## compute Hessian
   if (length(oout$par)==0) skip.hessian <- TRUE
   if (!skip.hessian) {
     if ((!is.null(call$upper) || !is.null(call$lower)) &&
@@ -487,13 +488,28 @@ mle2 <- function(minuslogl,
   namatrix <- matrix(NA,nrow=length(start),ncol=length(start))
   if (!skip.hessian) {
     psc <- call$control$parscale
-    if (is.null(psc)) {
-      oout$hessian <- try(hessian(objectivefunction,oout$par,method.args=hessian.opts))
-    } else {
-      tmpf <- function(x) {
-        objectivefunction(x*psc)
-      }
-      oout$hessian <- try(hessian(tmpf,oout$par/psc,method.args=hessian.opts))/outer(psc,psc)
+    if (is.null(gr)) {
+        if (is.null(psc)) {
+            oout$hessian <- try(hessian(objectivefunction,oout$par,
+                                        method.args=hessian.opts))
+        } else {
+            tmpf <- function(x) {
+                objectivefunction(x*psc)
+            }
+            oout$hessian <- try(hessian(tmpf,oout$par/psc,
+                                 method.args=hessian.opts))/outer(psc,psc)
+        }
+    } else { ## gradient provided
+        if (is.null(psc)) {
+            oout$hessian <- try(jacobian(objectivefunctiongr,oout$par,
+                                        method.args=hessian.opts))
+        } else {
+            tmpf <- function(x) {
+                objectivefunctiongr(x*psc)
+            }
+            oout$hessian <- try(jacobian(tmpf,oout$par/psc,
+                                 method.args=hessian.opts))/outer(psc,psc)
+        }
     }
   }
   if (skip.hessian || inherits(oout$hessian,"try-error"))
@@ -525,7 +541,7 @@ mle2 <- function(minuslogl,
   ## compute termination info
   ## FIXME: should we worry about parscale here??
   if (length(coef)) {
-    gradvec <- if (!missing(gr)) {
+    gradvec <- if (!is.null(gr)) {
         objectivefunctiongr(coef)
     } else {
         if (inherits(tt <- try(grad(objectivefunction,coef),silent=TRUE),
@@ -535,7 +551,7 @@ mle2 <- function(minuslogl,
     if (!skip.hessian) {
         if (inherits(ev <- try(eigen(oout$hessian)$value,silent=TRUE),
                      "try-error")) ev <- NA
-        oout$eratio <- min(ev)/max(ev)
+        oout$eratio <- min(Re(ev))/max(Re(ev))
     }
   }
   if (!is.null(conv <- oout$conv) &&
@@ -560,6 +576,7 @@ mle2 <- function(minuslogl,
            optimizer=optimizer,data=as.list(data),formula=formula)
 attr(m,"df") = length(m@coef)
   if (!missing(data)) attr(m,"nobs") = length(data[[1]])
+  environment(m) <- parent.frame()
   ## to work with BIC as well
   m
 }
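In the mle.R hunk above, when a user supplies a gradient (gr), the Hessian is now computed as numDeriv::jacobian() of that gradient rather than numDeriv::hessian() of the objective, exploiting the analytic gradient so only one level of numerical differencing is needed. A toy sketch of the equivalence (hypothetical functions, not from the package):

```r
library(numDeriv)
f  <- function(p) p[1]^2 + p[2]^2 + p[1]*p[2]        # toy objective
gr <- function(p) c(2*p[1] + p[2], 2*p[2] + p[1])    # its analytic gradient
H1 <- hessian(f, c(1, 2))    # second differences of the objective
H2 <- jacobian(gr, c(1, 2))  # first differences of the exact gradient
## both approximate the true Hessian, matrix(c(2, 1, 1, 2), 2, 2),
## but H2 avoids one level of numerical error
```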
diff --git a/R/mle2-methods.R b/R/mle2-methods.R
index 2f4c285..b4890a0 100755
--- a/R/mle2-methods.R
+++ b/R/mle2-methods.R
@@ -40,7 +40,7 @@ setMethod("show", "mle2", function(object){
     if (object@optimizer=="optimx" && length(object@method)>1) {
       cat("Best method:",object@details$method.used,"\n")
     }
-    if (object@details$convergence>0)
+    if (object@details$conv>0)
       cat("\nWarning: optimization did not converge (code ",
           object@details$convergence,": ",object@details$message,")\n",sep="")
   })
@@ -167,5 +167,5 @@ setAs("profile.mle2","data.frame",
           as.data.frame.profile.mle2(from)
           })
 
-
-BIC.mle2 <- stats4:::BIC
+## causes infinite loop, and unnecessary anyway??
+## BIC.mle2 <- stats4:::BIC
diff --git a/R/profile.R b/R/profile.R
index 4dba45d..0f4343f 100755
--- a/R/profile.R
+++ b/R/profile.R
@@ -1,277 +1,303 @@
 ## FIXME: abstract to general-purpose code?  (i.e. replace 'fitted' by
-#    objective function, parameter vector, optimizer, method, control settings,
+                                        #    objective function, parameter vector, optimizer, method, control settings,
 ##   min val, standard error/Hessian, ...
 ##
 ## allow starting values to be set by "mle" (always use mle), "prevfit"
 ##  (default?), and "extrap" (linear extrapolation from previous two fits)
 ## 
 
-setMethod("profile", "mle2",
-          function (fitted, which = 1:p, maxsteps = 100,
-                    alpha = 0.01, zmax = sqrt(qchisq(1 - alpha/2, p)),
-                    del = zmax/5, trace = FALSE, skiperrs=TRUE,
+proffun <- function (fitted, which = 1:p, maxsteps = 100,
+                     alpha = 0.01, zmax = sqrt(qchisq(1 - alpha/2, p)),
+                     del = zmax/5, trace = FALSE, skiperrs=TRUE,
                     std.err, tol.newmin = 0.001, debug=FALSE,
                     prof.lower, prof.upper, skip.hessian=TRUE,
+                    continuation = c("none","naive","linear"),
                     try_harder=FALSE, ...) {
-              ## fitted: mle2 object
-              ## which: which parameters to profile (numeric or char)
-              ## maxsteps: steps to take looking for zmax
-              ## alpha: max alpha level
-              ## zmax: max log-likelihood difference to search to
-              ## del: stepsize
-              ## trace:
-              ## skiperrs:
-            if (fitted@optimizer=="optimx") {
-              fitted@call$method <- fitted@details$method.used
+    ## fitted: mle2 object
+    ## which: which parameters to profile (numeric or char)
+    ## maxsteps: steps to take looking for zmax
+    ## alpha: max alpha level
+    ## zmax: max log-likelihood difference to search to
+    ## del: stepsize
+    ## trace:
+    ## skiperrs:
+    continuation <- match.arg(continuation)
+    if (fitted@optimizer=="optimx") {
+        fitted@call$method <- fitted@details$method.used
+    }
+    if (fitted@optimizer=="constrOptim")
+        stop("profiling not yet working for constrOptim -- sorry")
+    Pnames <- names(fitted@coef)
+    p <- length(Pnames)
+    if (is.character(which)) which <- match(which,Pnames)
+    if (any(is.na(which)))
+        stop("parameters not found in model coefficients")
+    ## global flag for better fit found inside profile fit
+    newpars_found <- FALSE
+    if (debug) cat("i","bi","B0[i]","sgn","step","del","std.err[i]","\n")
+    pfit <- NULL ## save pfit to implement continuation methods
+                 ## for subsequent calls to onestep
+    onestep <- function(step,bi) {
+        if (missing(bi)) {
+            bi <- B0[i] + sgn * step * del * std.err[i]
+            if (debug) cat(i,bi,B0[i],sgn,step,del,std.err[i],"\n")
+        } else if (debug) cat(bi,"\n")
+        fix <- list(bi)
+        names(fix) <- p.i
+        if (is.null(call$fixed)) call$fixed <- fix
+        else call$fixed <- c(eval(call$fixed),fix)
+        ## 
+        if (continuation!="none") {
+            if (continuation != "naive")
+                stop("only 'naive' continuation implemented")
+            if (!is.null(pfit)) {
+                for (nm in setdiff(names(call$start),names(call$fixed))) {
+                    call$start[[nm]] <- coef(pfit)[nm]
+                }
             }
-            if (fitted@optimizer=="constrOptim")
-              stop("profiling not yet working for constrOptim -- sorry")
-            Pnames <- names(fitted@coef)
-            p <- length(Pnames)
-            if (is.character(which)) which <- match(which,Pnames)
-            if (any(is.na(which)))
-              stop("parameters not found in model coefficients")
-            ## global flag for better fit found inside profile fit
-            newpars_found <- FALSE
-            if (debug) cat("i","bi","B0[i]","sgn","step","del","std.err[i]","\n")
-            onestep <- function(step,bi) {
-                if (missing(bi)) {
-                    bi <- B0[i] + sgn * step * del * std.err[i]
-                    if (debug) cat(i,bi,B0[i],sgn,step,del,std.err[i],"\n")
-                } else if (debug) cat(bi,"\n")
-                fix <- list(bi)
-                names(fix) <- p.i
-                if (is.null(call$fixed)) call$fixed <- fix
-                else call$fixed <- c(eval(call$fixed),fix)
-                if (skiperrs) {
-                    pfit <- try(eval.parent(call, 2L), silent=TRUE)
+        }
+        ## now try to fit ...
+        if (skiperrs) {
+            pfit <<- try(eval.parent(call, 2L), silent=TRUE)
+            pfit <<- try(eval(call, environment(fitted)), silent=TRUE)
+        } else {
+            pfit <<- eval(call, environment(fitted))
+        }
+        ok <- ! inherits(pfit,"try-error")
+        if (debug && ok) cat(coef(pfit),-logLik(pfit),"\n")
+        if(skiperrs && !ok) {
+            warning(paste("Error encountered in profile:",pfit))
+            return(NA)
+        }
+        else {
+            ## pfit is current (profile) fit,
+            ##   fitted is original fit
+            ## pfit@min _should_ be > fitted@min
+            ## thus zz below should be >0
+            zz <- 2*(pfit@min - fitted@min)
+            ri <- pv0
+            ri[, names(pfit@coef)] <- pfit@coef
+            ri[, p.i] <- bi
+            ##cat(2*pfit at min,2*fitted at min,zz,
+            ##   tol.newmin,zz<(-tol.newmin),"\n")
+            if (!is.na(zz) && zz<0) {
+                if (zz > (-tol.newmin)) {
+                    z <- 0
+                    ## HACK for non-monotonic profiles? z <- -sgn*sqrt(abs(zz))
                 } else {
-                    pfit <- eval.parent(call, 2L)
+                    ## cat() instead of warning(); FIXME use message() instead???
+                    message("Profiling has found a better solution,",
+                            "so original fit had not converged:\n")
+                    message(sprintf("(new deviance=%1.4g, old deviance=%1.4g, diff=%1.4g)",
+                                    2*pfit@min,2*fitted@min,2*(pfit@min-fitted@min)),"\n")
+                    message("Returning better fit ...\n")
+                    ## need to return parameters all the way up
+                    ##   to top level
+                    newpars_found <<- TRUE
+                    ## return(pfit@fullcoef)
+                    return(pfit) ## return full fit
                 }
-                ok <- ! inherits(pfit,"try-error")
-                if (debug && ok) cat(coef(pfit),-logLik(pfit),"\n")
-                if(skiperrs && !ok) {
-                    warning(paste("Error encountered in profile:",pfit))
-                    return(NA)
-                }
-                else {
-                    ## pfit is current (profile) fit,
-                    ##   fitted is original fit
-                    ## pfit@min _should_ be > fitted@min
-                    ## thus zz below should be >0
-                    zz <- 2*(pfit@min - fitted@min)
-                    ri <- pv0
-                    ri[, names(pfit@coef)] <- pfit@coef
-                    ri[, p.i] <- bi
-                    ##cat(2*pfit at min,2*fitted at min,zz,
-                    ##   tol.newmin,zz<(-tol.newmin),"\n")
-                    if (!is.na(zz) && zz<0) {
-                        if (zz > (-tol.newmin)) {
-                          z <- 0
-                          ## HACK for non-monotonic profiles? z <- -sgn*sqrt(abs(zz))
-                        } else {
-                          ## cat() instead of warning(); FIXME use message() instead???
-                          message("Profiling has found a better solution,",
-                                "so original fit had not converged:\n")
-                          message(sprintf("(new deviance=%1.4g, old deviance=%1.4g, diff=%1.4g)",
-                                        2*pfit@min,2*fitted@min,2*(pfit@min-fitted@min)),"\n")
-                          message("Returning better fit ...\n")
-                          ## need to return parameters all the way up
-                          ##   to top level
-                          newpars_found <<- TRUE
-                          ## return(pfit@fullcoef)
-                          return(pfit) ## return full fit
-                        }
-                    } else {
-                      z <- sgn * sqrt(zz)
-                    }
-                    pvi <<- rbind(pvi, ri)
-                    zi <<- c(zi, z) ## nb GLOBAL set
-                }
-                if (trace) cat(bi, z, "\n")
-                z
-            } ## end onestep
-            ## Profile the likelihood around its maximum
-            ## Based on profile.glm in MASS
-            summ <- summary(fitted)
-            if (missing(std.err)) {
-                std.err <- summ@coef[, "Std. Error"]
             } else {
-                n <- length(summ@coef)
-                if (length(std.err)<n)
-                  std.err <- rep(std.err,length.out=length(summ@coef))
-                if (any(is.na(std.err)))
-                  std.err[is.na(std.err)] <- summ@coef[is.na(std.err)]
+                z <- sgn * sqrt(zz)
+            }
+            pvi <<- rbind(pvi, ri)
+            zi <<- c(zi, z) ## nb GLOBAL set
+        }
+        if (trace) cat(bi, z, "\n")
+        return(z)
+    } ## end onestep
+    ## Profile the likelihood around its maximum
+    ## Based on profile.glm in MASS
+    ## suppressWarnings (don't want to know e.g. about bad vcov)
+    summ <- suppressWarnings(summary(fitted))
+    if (missing(std.err)) {
+        std.err <- summ@coef[, "Std. Error"]
+    } else {
+        n <- length(summ@coef)
+        if (length(std.err)<n)
+            std.err <- rep(std.err,length.out=length(summ@coef))
+        if (any(is.na(std.err)))
+            std.err[is.na(std.err)] <- summ@coef[is.na(std.err)]
+    }
+    if (any(is.na(std.err))) {
+        std.err[is.na(std.err)] <- sqrt(1/diag(fitted@details$hessian))[is.na(std.err)]
+        if (any(is.na(std.err))) {  ## still bad
+            stop("Hessian is ill-behaved or missing, ",
+                 "can't find an initial estimate of std. error ",
+                 "(consider specifying std.err in profile call)")
+        }
+        ## warn anyway ...
+        warning("Non-positive-definite Hessian, ",
+                "attempting initial std err estimate from diagonals")
+    }
+    Pnames <- names(B0 <- fitted@coef)
+    pv0 <- t(as.matrix(B0))
+    p <- length(Pnames)
+    prof <- vector("list", length = length(which))
+    names(prof) <- Pnames[which]
+    call <- fitted@call
+    call$skip.hessian <- skip.hessian ## BMB: experimental
+    call$minuslogl <- fitted@minuslogl
+    ndeps <- eval.parent(call$control$ndeps)
+    parscale <- eval.parent(call$control$parscale)
+    nc <- length(fitted@coef)
+    xf <- function(x) rep(x,length.out=nc) ## expand to length
+    upper <- xf(unlist(eval.parent(call$upper)))
+    lower <- xf(unlist(eval.parent(call$lower)))
+    if (all(upper==Inf & lower==-Inf)) {
+        lower <- upper <- NULL
+        ## kluge: lower/upper may have been set to +/- Inf
+        ##   in previous rounds, 
+        ##  but we don't want them in that case
+    }
+    if (!missing(prof.lower)) prof.lower <- xf(prof.lower)
+    if (!missing(prof.upper)) prof.upper <- xf(prof.upper)
+    ## cat("upper\n")
+    ## print(upper)
+    stop_msg <- list()
+    for (i in which) {
+        zi <- 0
+        pvi <- pv0
+        p.i <- Pnames[i]
+        wfun <- function(txt) paste(txt," (",p.i,")",sep="")
+        ## omit values from control vectors:
+        ##   is this necessary/correct?
+        if (!is.null(ndeps)) call$control$ndeps <- ndeps[-i]
+        if (!is.null(parscale)) call$control$parscale <- parscale[-i]
+        if (!is.null(upper) && length(upper)>1) call$upper <- upper[-i]
+        if (!is.null(lower) && length(lower)>1) call$lower <- lower[-i]
+        stop_msg[[i]] <- list(down="",up="")
+        for (sgn in c(-1, 1)) {
+            pfit <- NULL ## reset for continuation method
+            dir_ind <- (sgn+1)/2+1 ## (-1,1) -> (1,2)
+            if (trace) {
+                cat("\nParameter:", p.i, c("down", "up")[dir_ind], "\n")
+                cat("par val","sqrt(dev diff)\n")
             }
-            if (any(is.na(std.err))) {
-              std.err[is.na(std.err)] <- sqrt(1/diag(fitted@details$hessian))[is.na(std.err)]
-              if (any(is.na(std.err))) {  ## still bad
-                stop("Hessian is ill-behaved or missing, ",
-                     "can't find an initial estimate of std. error ",
-                     "(consider specifying std.err in profile call)")
-              }
-              ## warn anyway ...
-              warning("Non-positive-definite Hessian, ",
-                      "attempting initial std err estimate from diagonals")
+            step <- 0
+            z <- 0
+            ## This logic was a bit frail in some cases with
+            ## high parameter curvature. We should probably at least
+            ## do something about cases where the mle2 call fails
+            ## because the parameter gets stepped outside the domain.
+            ## (We now have.)
+            call$start <- as.list(B0)
+            lastz <- 0
+            valf <- function(b) {
+                (!is.null(b) && length(b)>1) ||
+                    (length(b)==1 && i==1 && is.finite(b))
             }
-            Pnames <- names(B0 <- fitted@coef)
-            pv0 <- t(as.matrix(B0))
-            p <- length(Pnames)
-            prof <- vector("list", length = length(which))
-            names(prof) <- Pnames[which]
-            call <- fitted@call
-            call$skip.hessian <- skip.hessian ## BMB: experimental
-            call$minuslogl <- fitted@minuslogl
-            ndeps <- eval.parent(call$control$ndeps)
-            parscale <- eval.parent(call$control$parscale)
-            nc <- length(fitted@coef)
-            xf <- function(x) rep(x,length.out=nc) ## expand to length
-            upper <- xf(unlist(eval.parent(call$upper)))
-            lower <- xf(unlist(eval.parent(call$lower)))
-            if (all(upper==Inf & lower==-Inf)) {
-                lower <- upper <- NULL
-                ## kluge: lower/upper may have been set to +/- Inf
-                ##   in previous rounds, 
-                ##  but we don't want them in that case
+            lbound <- if (!missing(prof.lower)) {
+                          prof.lower[i]
+                      } else if (valf(lower))
+                      { lower[i]
+                      } else -Inf
+            ubound <- if (!missing(prof.upper)) prof.upper[i] else if (valf(upper)) upper[i] else Inf
+            stop_bound <- stop_na <- stop_cutoff <- stop_flat <- FALSE
+            while ((step <- step + 1) < maxsteps &&
+                   ## added is.na() test for try_harder case
+                   ## FIXME: add unit test!
+                   (is.na(z) || abs(z) < zmax)) {
+                       curval <- B0[i] + sgn * step * del * std.err[i]
+                       if ((sgn==-1 & curval<lbound) ||
+                           (sgn==1 && curval>ubound)) {
+                           stop_bound <- TRUE;
+                           stop_msg[[i]][[dir_ind]] <- paste(stop_msg[[i]][[dir_ind]],wfun("hit bound"))
+                           break
+                       }
+                       z <- onestep(step)
+                       ## stop on flat spot, unless try_harder
+                       if (step>1 && (identical(oldcurval,curval) || identical(oldz,z))) {
+                           stop_flat <- TRUE
+                           stop_msg[[i]][[dir_ind]] <- paste(stop_msg[[i]][[dir_ind]],wfun("hit flat spot"),
+                                                             sep=";")
+                           if (!try_harder) break
+                       }
+                       oldcurval <- curval
+                       oldz <- z
+                       if (newpars_found) return(z)
+                       if(is.na(z)) {
+                           stop_na <- TRUE
+                           stop_msg[[i]][[dir_ind]] <- paste(stop_msg[[i]][[dir_ind]],wfun("hit NA"),sep=";")
+                           if (!try_harder) break
+                       }
+                       lastz <- z
+                       if (newpars_found) return(z)
+                   }
+            stop_cutoff <- (!is.na(z) && abs(z)>=zmax)
+            stop_maxstep <- (step==maxsteps)
+            if (stop_maxstep) stop_msg[[i]][[dir_ind]] <- paste(stop_msg[[i]][[dir_ind]],wfun("max steps"),sep=";")
+            if (debug) {
+                if (stop_na) message(wfun("encountered NA"),"\n")
+                if (stop_cutoff) message(wfun("above cutoff"),"\n")
             }
-            if (!missing(prof.lower)) prof.lower <- xf(prof.lower)
-            if (!missing(prof.upper)) prof.upper <- xf(prof.upper)
-            ## cat("upper\n")
-            ## print(upper)
-            stop_msg <- list()
-            for (i in which) {
-              zi <- 0
-              pvi <- pv0
-              p.i <- Pnames[i]
-              wfun <- function(txt) paste(txt," (",p.i,")",sep="")
-              ## omit values from control vectors:
-              ##   is this necessary/correct?
-               if (!is.null(ndeps)) call$control$ndeps <- ndeps[-i]
-               if (!is.null(parscale)) call$control$parscale <- parscale[-i]
-               if (!is.null(upper) && length(upper)>1) call$upper <- upper[-i]
-               if (!is.null(lower) && length(lower)>1) call$lower <- lower[-i]
-              stop_msg[[i]] <- list(down="",up="")
-              for (sgn in c(-1, 1)) {
-                dir_ind <- (sgn+1)/2+1 ## (-1,1) -> (1,2)
-                if (trace) {
-                    cat("\nParameter:", p.i, c("down", "up")[dir_ind], "\n")
-                    cat("par val","sqrt(dev diff)\n")
-                }
-                step <- 0
-                z <- 0
-                ## This logic was a bit frail in some cases with
-                ## high parameter curvature. We should probably at least
-                ## do something about cases where the mle2 call fails
-                ## because the parameter gets stepped outside the domain.
-                ## (We now have.)
-                call$start <- as.list(B0)
-                lastz <- 0
-                valf <- function(b) {
-                  (!is.null(b) && length(b)>1) ||
-                  (length(b)==1 && i==1 && is.finite(b))
-                }
-                lbound <- if (!missing(prof.lower)) {
-                  prof.lower[i]
-                } else if (valf(lower))
-                  { lower[i]
-                  } else -Inf
-                ubound <- if (!missing(prof.upper)) prof.upper[i] else if (valf(upper)) upper[i] else Inf
-                stop_bound <- stop_na <- stop_cutoff <- stop_flat <- FALSE
-                while ((step <- step + 1) < maxsteps &&
-                       ## added is.na() test for try_harder case
-                       ## FIXME: add unit test!
-                       (is.na(z) || abs(z) < zmax)) {
-                  curval <- B0[i] + sgn * step * del * std.err[i]
-                  if ((sgn==-1 & curval<lbound) ||
-                      (sgn==1 && curval>ubound)) {
-                    stop_bound <- TRUE;
-                    stop_msg[[i]][[dir_ind]] <- paste(stop_msg[[i]][[dir_ind]],wfun("hit bound"))
-                    break
-                  }
-                  z <- onestep(step)
-                  ## stop on flat spot, unless try_harder
-                  if (step>1 && (identical(oldcurval,curval) || identical(oldz,z))) {
-                    stop_flat <- TRUE
-                    stop_msg[[i]][[dir_ind]] <- paste(stop_msg[[i]][[dir_ind]],wfun("hit flat spot"),
-                                                      sep=";")
-                    if (!try_harder) break
-                  }
-                  oldcurval <- curval
-                  oldz <- z
-                  if (newpars_found) return(z)
-                  if(is.na(z)) {
-                    stop_na <- TRUE
-                    stop_msg[[i]][[dir_ind]] <- paste(stop_msg[[i]][[dir_ind]],wfun("hit NA"),sep=";")
-                    if (!try_harder) break
-                  }
-                  lastz <- z
-                  if (newpars_found) return(z)
-                }
-                stop_cutoff <- (!is.na(z) && abs(z)>=zmax)
-                stop_maxstep <- (step==maxsteps)
-                if (stop_maxstep) stop_msg[[i]][[dir_ind]] <- paste(stop_msg[[i]][[dir_ind]],wfun("max steps"),sep=";")
-                if (debug) {
-                  if (stop_na) message(wfun("encountered NA"),"\n")
-                  if (stop_cutoff) message(wfun("above cutoff"),"\n")
-                }
-                if (stop_flat) {
-                  warning(wfun("stepsize effectively zero/flat profile"))
-                } else {
-                  if (stop_maxstep) warning(wfun("hit maximum number of steps"))
-                  if(!stop_cutoff) {
+            if (stop_flat) {
+                warning(wfun("stepsize effectively zero/flat profile"))
+            } else {
+                if (stop_maxstep) warning(wfun("hit maximum number of steps"))
+                if(!stop_cutoff) {
                     if (debug) cat(wfun("haven't got to zmax yet, trying harder"),"\n")
                     stop_msg[[i]][[dir_ind]] <- paste(stop_msg[[i]][[dir_ind]],wfun("past cutoff"),sep=";")
                     ## now let's try a bit harder if we came up short
                     for(dstep in c(0.2, 0.4, 0.6, 0.8, 0.9)) {
-                      curval <- B0[i] + sgn * (step-1+dstep) * del * std.err[i]
-                      if ((sgn==-1 & curval<lbound) ||
-                          (sgn==1 && curval>ubound)) break
-                      z <- onestep(step - 1 + dstep)
-                      if (newpars_found) return(z)
-                      if(is.na(z) || abs(z) > zmax) break
-                      lastz <- z
-                      if (newpars_found) return(z)
+                        curval <- B0[i] + sgn * (step-1+dstep) * del * std.err[i]
+                        if ((sgn==-1 & curval<lbound) ||
+                            (sgn==1 && curval>ubound)) break
+                        z <- onestep(step - 1 + dstep)
+                        if (newpars_found) return(z)
+                        if(is.na(z) || abs(z) > zmax) break
+                        lastz <- z
+                        if (newpars_found) return(z)
                     }
                     if (!stop_cutoff && stop_bound) {
-                      if (debug) cat(wfun("bounded and didn't make it, try at boundary"),"\n")
-                      ## bounded and didn't make it, try at boundary
-                      if (sgn==-1 && B0[i]>lbound) z <- onestep(bi=lbound)
-                      if (sgn==1  && B0[i]<ubound) z <- onestep(bi=ubound)
-                      if (newpars_found) return(z)
+                        if (debug) cat(wfun("bounded and didn't make it, try at boundary"),"\n")
+                        ## bounded and didn't make it, try at boundary
+                        if (sgn==-1 && B0[i]>lbound) z <- onestep(bi=lbound)
+                        if (sgn==1  && B0[i]<ubound) z <- onestep(bi=ubound)
+                        if (newpars_found) return(z)
                     }
-                  } else if (length(zi) < 5) { # try smaller steps
+                } else if (length(zi) < 5) { # try smaller steps
                     if (debug) cat(wfun("try smaller steps"),"\n")
                     stop_msg[[i]][[dir_ind]] <- paste(stop_msg[[i]][[dir_ind]],wfun("took more steps"),sep=";")
                     mxstep <- step - 1
                     step <- 0.5
                     while ((step <- step + 1) < mxstep) {
-                      z <- onestep(step)
+                        z <- onestep(step)
                     }
-                  } ## smaller steps
-                } ## !zero stepsize
-              } ## step in both directions
-              si <- order(pvi[, i])
-              prof[[p.i]] <- data.frame(z = zi[si])
-              prof[[p.i]]$par.vals <- pvi[si,, drop=FALSE]
-            } ## for i in which
-            newprof <- new("profile.mle2", profile = prof, summary = summ)
-            attr(newprof,"stop_msg") <- stop_msg
-            newprof
-          })
+                } ## smaller steps
+            } ## !zero stepsize
+        } ## step in both directions
+        si <- order(pvi[, i])
+        prof[[p.i]] <- data.frame(z = zi[si])
+        prof[[p.i]]$par.vals <- pvi[si,, drop=FALSE]
+    } ## for i in which
+    return(list(prof=prof,summ=summ,stop_msg=stop_msg))
+}
+setMethod("profile", "mle2",
+function(fitted,...) {
+    ## cc <- match.call()
+    ## cc[[1]] <- quote(proffun)
+    ## prof <- eval.parent(cc)
+    prof <- proffun(fitted,...)
+    if (is(prof,"mle2")) return(prof)
+    newprof <- new("profile.mle2", profile = prof$prof, summary = prof$summ)
+    attr(newprof,"stop_msg") <- prof$stop_msg
+    newprof
+})
 
 
 setMethod("plot", signature(x="profile.mle2", y="missing"),
           function (x, levels, which=1:p, conf = c(99, 95, 90, 80, 50)/100,
-          plot.confstr = TRUE, confstr = NULL, absVal = TRUE, add = FALSE,
-          col.minval="green", lty.minval=2,
-          col.conf="magenta", lty.conf=2,
-          col.prof="blue", lty.prof=1,
-          xlabs=nm, ylab="z",
-          onepage=TRUE,
-          ask=((prod(par("mfcol")) < length(which)) && dev.interactive() &&
-               !onepage),
-          show.points=FALSE,
-          main, xlim, ylim, ...)
+                    plot.confstr = TRUE, confstr = NULL, absVal = TRUE, add = FALSE,
+                    col.minval="green", lty.minval=2,
+                    col.conf="magenta", lty.conf=2,
+                    col.prof="blue", lty.prof=1,
+                    xlabs=nm, ylab="z",
+                    onepage=TRUE,
+                    ask=((prod(par("mfcol")) < length(which)) && dev.interactive() &&
+                         !onepage),
+                    show.points=FALSE,
+                    main, xlim, ylim, ...)
 {
     ## Plot profiled likelihood
     ## Based on profile.nls (package stats)
@@ -292,8 +318,8 @@ setMethod("plot", signature(x="profile.mle2", y="missing"),
             columns <- ceiling(nplots/rows)
             mfrow_orig <- par(mfrow=c(rows,columns))
             op <- c(op,mfrow_orig)
-          }
-      }
+        }
+    }
     on.exit(par(op))
     confstr <- NULL
     if (missing(levels)) {
@@ -310,12 +336,12 @@ setMethod("plot", signature(x="profile.mle2", y="missing"),
     mlev <- max(levels) * 1.05
     ##    opar <- par(mar = c(5, 4, 1, 1) + 0.1)
     if (!missing(xlabs) && length(which)<length(nm)) {
-      xl2 = nm
-      xl2[which] <- xlabs
-      xlabs <- xl2
+        xl2 = nm
+        xl2[which] <- xlabs
+        xlabs <- xl2
     }
     if (missing(main)) 
-      main <- paste("Likelihood profile:",nm)
+        main <- paste("Likelihood profile:",nm)
     main <- rep(main,length=length(nm))
     for (i in seq(along = nm)[which]) {
         ## <FIXME> This does not need to be monotonic
@@ -324,39 +350,39 @@ setMethod("plot", signature(x="profile.mle2", y="missing"),
         yvals <- obj[[i]]$par.vals[,nm[i],drop=FALSE]
         avals <- data.frame(x=unname(yvals), y=obj[[i]]$z)
         if (!all(diff(obj[[i]]$z)>0)) {
-          warning("non-monotonic profile: reverting to linear interpolation.  Consider setting std.err manually")
-          predback <- approxfun(obj[[i]]$z,yvals)
+            warning("non-monotonic profile: reverting to linear interpolation.  Consider setting std.err manually")
+            predback <- approxfun(obj[[i]]$z,yvals)
         } else {
-          sp <- splines::interpSpline(yvals, obj[[i]]$z,
-                                      na.action=na.omit)
-          avals <- rbind(avals,as.data.frame(predict(sp)))
-          avals <- avals[order(avals$x),]
-          bsp <- try(splines::backSpline(sp),silent=TRUE)
-          bsp.OK <- (class(bsp)[1]!="try-error")
-          if (bsp.OK) {
-            predback <- function(y) { predict(bsp,y)$y }
-          } else { ## backspline failed
-            warning("backspline failed: using uniroot(), confidence limits may be unreliable")
-            ## what do we do?
-            ## attempt to use uniroot
-            predback <- function(y) {
-              pfun0 <- function(z1) {
-                t1 <- try(uniroot(function(z) {
-                  predict(sp,z)$y-z1
-                }, range(obj[[i]]$par.vals[,nm[i]])),silent=TRUE)
-                if (class(t1)[1]=="try-error") NA else t1$root
+            sp <- splines::interpSpline(yvals, obj[[i]]$z,
+                                        na.action=na.omit)
+            avals <- rbind(avals,as.data.frame(predict(sp)))
+            avals <- avals[order(avals$x),]
+            bsp <- try(splines::backSpline(sp),silent=TRUE)
+            bsp.OK <- (class(bsp)[1]!="try-error")
+            if (bsp.OK) {
+                predback <- function(y) { predict(bsp,y)$y }
+            } else { ## backspline failed
+                warning("backspline failed: using uniroot(), confidence limits may be unreliable")
+                ## what do we do?
+                ## attempt to use uniroot
+                predback <- function(y) {
+                    pfun0 <- function(z1) {
+                        t1 <- try(uniroot(function(z) {
+                            predict(sp,z)$y-z1
+                        }, range(obj[[i]]$par.vals[,nm[i]])),silent=TRUE)
+                        if (class(t1)[1]=="try-error") NA else t1$root
+                    }
+                    sapply(y,pfun0)
                 }
-              sapply(y,pfun0)
             }
-          }
         }
         ## </FIXME>
         if (no.xlim) xlim <- predback(c(-mlev, mlev))
         xvals <- obj[[i]]$par.vals[,nm[i]]
         if (is.na(xlim[1]))
-          xlim[1] <- min(xvals)
+            xlim[1] <- min(xvals)
         if (is.na(xlim[2]))
-          xlim[2] <- max(xvals)
+            xlim[2] <- max(xvals)
         if (absVal) {
             if (!add) {
                 if (no.ylim) ylim <- c(0,mlev)
@@ -407,4 +433,4 @@ setMethod("plot", signature(x="profile.mle2", y="missing"),
         } ## loop over levels
     } ## loop over variables
     ## par(opar)
-  })
+})
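The rewritten profile.R above splits the old monolithic S4 method into a plain worker function (`proffun`) plus a thin `profile` method that wraps its result. A minimal sketch of the resulting user-facing behaviour, assuming bbmle >= 1.0.19 is installed; the Poisson fit and data below are illustrative, not taken from the package:

```r
## sketch only, assuming bbmle >= 1.0.19; data/start values are made up
library(bbmle)
set.seed(1002)
d <- data.frame(x = rpois(50, lambda = 4))
fit <- mle2(x ~ dpois(lambda), start = list(lambda = mean(d$x)), data = d)
p <- profile(fit)
## profile() normally returns a "profile.mle2" object, but (as the code
## above shows) it returns an improved "mle2" fit instead if profiling
## stumbles on a better optimum, so check the class before using it:
if (is(p, "profile.mle2")) print(confint(p))
```

Checking the class of the returned object is the price of the "return better fit all the way up to top level" behaviour implemented in `onestep`.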
diff --git a/build/vignette.rds b/build/vignette.rds
index c9587b4..c7961cd 100644
Binary files a/build/vignette.rds and b/build/vignette.rds differ
diff --git a/inst/NEWS.Rd b/inst/NEWS.Rd
index f3826a7..6bc1278 100755
--- a/inst/NEWS.Rd
+++ b/inst/NEWS.Rd
@@ -4,6 +4,22 @@
 \title{bbmle News}
 \encoding{UTF-8}
 
+\section{Changes in version 1.0.19 (2017-04-08)}{
+  \itemize{
+    \item fixed bug: evaluate \code{call$method} so that
+    profiling/updating works within a function environment
+    \item make AICtab smarter about NA values
+    \item fix BIC bug (infinite recursion)
+    \item hessian computation uses gradient function if provided
+    \item basic continuation method implemented for profiling (with
+    stubs for smarter methods)
+    \item mle2 stores its calling environment for more flexibility
+    when re-evaluating, e.g. in profiling (could lead to occasional
+    surprises, e.g. if saving a fitted mle2 object with large objects
+    in its calling environment)
+  }
+}
+
 \section{Changes in version 1.0.18 (2016-02-11)}{
   \itemize{
     \item update slice functionality; allow for explicit ranges
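The "basic continuation method" noted in the NEWS entry corresponds to the `continuation` argument checked in the profiling code above; only `"naive"` is implemented, which restarts each profile step from the previous step's fitted coefficients. A hedged sketch, with an illustrative fit:

```r
## sketch of the new continuation option (only "naive" is implemented
## in 1.0.19); the negative-binomial fit here is illustrative
library(bbmle)
set.seed(1001)
d <- data.frame(y = rnbinom(100, mu = 5, size = 2))
fit <- mle2(y ~ dnbinom(mu = mu, size = k),
            start = list(mu = 5, k = 1), data = d)
## start each profile step from the previous step's solution:
p <- profile(fit, continuation = "naive")
```

This can stabilize profiles whose intermediate fits would otherwise fail when started from the global MLE.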
diff --git a/inst/doc/mle2.R b/inst/doc/mle2.R
index 25192e3..99a3564 100644
--- a/inst/doc/mle2.R
+++ b/inst/doc/mle2.R
@@ -12,7 +12,7 @@ set.seed(1001)
 x1 <- rbetabinom(n=1000,prob=0.1,size=50,theta=10)
 
 ## ----bbmle,message=FALSE-------------------------------------------------
-library("bbmle")
+library(bbmle)
 
 ## ----likfun1-------------------------------------------------------------
 mtmp <- function(prob,size,theta) {
@@ -133,9 +133,9 @@ m3f <- update(m0f,
 anova(m0f,m2f,m3f)
 
 ## ----ICtabfit------------------------------------------------------------
-AICtab(m0f,m2f,m3f,weights=TRUE,delta=TRUE,sort=TRUE)
-BICtab(m0f,m2f,m3f,delta=TRUE,nobs=nrow(orob1),sort=TRUE,weights=TRUE)
-AICctab(m0f,m2f,m3f,delta=TRUE,nobs=nrow(orob1),sort=TRUE,weights=TRUE)
+AICtab(m0f,m2f,m3f,weights=TRUE)
+BICtab(m0f,m2f,m3f,nobs=nrow(orob1),weights=TRUE)
+AICctab(m0f,m2f,m3f,nobs=nrow(orob1),weights=TRUE)
 
 ## ----reWarn,echo=FALSE---------------------------------------------------
 opts_chunk$set(warning=FALSE)
@@ -151,8 +151,9 @@ library(ggplot2)
 
 ## ----gg1-----------------------------------------------------------------
 gg1 <- ggplot(frogdat,aes(x=size,y=killed))+geom_point()+
-      stat_sum(aes(size=factor(..n..)))+
-      labs(size="#")+scale_x_continuous(limits=c(0,40))
+      stat_sum(aes(size=..n..))+
+      labs(size="#")+scale_x_continuous(limits=c(0,40))+
+scale_size(breaks=1:3)
 
 ## ----gg1plot-------------------------------------------------------------
 gg1 + geom_line(data=pdat1,colour="red")+
diff --git a/inst/doc/mle2.Rnw b/inst/doc/mle2.Rnw
index 25e6994..fe59d5b 100755
--- a/inst/doc/mle2.Rnw
+++ b/inst/doc/mle2.Rnw
@@ -110,7 +110,7 @@ x1 <- rbetabinom(n=1000,prob=0.1,size=50,theta=10)
 
 Load the package:
 <<bbmle,message=FALSE>>=
-library("bbmle")
+library(bbmle)
 @ 
 
 Construct a simple negative log-likelihood function:
@@ -410,12 +410,13 @@ anova(m0f,m2f,m3f)
 @ 
 
 The various \code{ICtab} commands produce tables of
-information criteria, optionally sorted and
-with model weights.
+information criteria; by default the results are sorted and
+presented as $\Delta$IC; there are various options, including
+printing model weights.
 <<ICtabfit>>=
-AICtab(m0f,m2f,m3f,weights=TRUE,delta=TRUE,sort=TRUE)
-BICtab(m0f,m2f,m3f,delta=TRUE,nobs=nrow(orob1),sort=TRUE,weights=TRUE)
-AICctab(m0f,m2f,m3f,delta=TRUE,nobs=nrow(orob1),sort=TRUE,weights=TRUE)
+AICtab(m0f,m2f,m3f,weights=TRUE)
+BICtab(m0f,m2f,m3f,nobs=nrow(orob1),weights=TRUE)
+AICctab(m0f,m2f,m3f,nobs=nrow(orob1),weights=TRUE)
 @ 
 <<reWarn,echo=FALSE>>=
 opts_chunk$set(warning=FALSE)
@@ -437,10 +438,10 @@ library(ggplot2)
 
 <<gg1>>=
 gg1 <- ggplot(frogdat,aes(x=size,y=killed))+geom_point()+
-      stat_sum(aes(size=factor(..n..)))+
-      labs(size="#")+scale_x_continuous(limits=c(0,40))
+      stat_sum(aes(size=..n..))+
+      labs(size="#")+scale_x_continuous(limits=c(0,40))+
+scale_size(breaks=1:3)
 @ 
-
 <<frogfit1,cache=TRUE,warning=FALSE>>=
 m3 <- mle2(killed~dbinom(prob=c*(size/d)^g*exp(1-size/d),
   size=initial),data=frogdat,start=list(c=0.5,d=5,g=1))
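The vignette change above reflects the new `ICtab` defaults: results are sorted and reported as delta-IC without needing `sort=TRUE` or `delta=TRUE`. A standalone sketch with illustrative models (not the vignette's `m0f`/`m2f`/`m3f` fits):

```r
## standalone sketch of the new ICtab defaults; models are illustrative
library(bbmle)
set.seed(101)
d <- data.frame(y = rnbinom(100, mu = 5, size = 2))
m1 <- mle2(y ~ dpois(lambda), start = list(lambda = 5), data = d)
m2 <- mle2(y ~ dnbinom(mu = mu, size = k),
           start = list(mu = 5, k = 1), data = d)
AICtab(m1, m2, weights = TRUE)  # sorted, dAIC and df shown by default
```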
diff --git a/inst/doc/mle2.pdf b/inst/doc/mle2.pdf
index c01fde0..933447c 100644
Binary files a/inst/doc/mle2.pdf and b/inst/doc/mle2.pdf differ
diff --git a/inst/doc/quasi.Rnw b/inst/doc/quasi.Rnw
index 3ce84e8..31a448c 100755
--- a/inst/doc/quasi.Rnw
+++ b/inst/doc/quasi.Rnw
@@ -1,16 +1,18 @@
 \documentclass{article}
 %\VignettePackage{mle2}
 %\VignetteIndexEntry{quasi: notes on quasi-likelihood/qAIC analysis in R}
-%\VignetteDepends{MuMIn,AICcmodavg}
+%\VignetteDepends{MuMIn,AICcmodavg,bbmle}
 %\VignetteEngine{knitr::knitr}
 
 \usepackage{graphicx}
+\usepackage{hyperref}
 \usepackage{url}
 \newcommand{\code}[1]{{\tt #1}}
 \title{Dealing with \code{quasi-}  models in R}
 \date{\today}
 \author{Ben Bolker}
 \begin{document}
+\newcommand{\rpkg}[1]{\href{https://CRAN.R-project.org/package=#1}{{\tt #1}}}
 \maketitle
 
 \includegraphics[width=2.64cm,height=0.93cm]{cc-attrib-nc.png}
@@ -30,8 +32,9 @@ pain, because  the R Core team (or at least the ones who wrote \code{glm},
 \code{glmmPQL}, etc.) 
 are purists and don't believe that quasi- models should report a likelihood.
 As far as I know, there are three R packages that compute/handle
-QAIC: \code{bbmle}, \code{AICcmodavg} (both on CRAN) and \code{MuMIn}
-(formerly known as \code{dRedging}, on r-forge).
+QAIC: 
+\rpkg{bbmle}, \rpkg{AICcmodavg} and \rpkg{MuMIn}.
+
 
 The basic problem is that quasi- model fits with \code{glm} return
 an \code{NA} for the log-likelihood, while the dispersion parameter
@@ -64,7 +67,7 @@ The whole problem is worse for \code{MASS::glmmPQL}, where (1) the
 authors have gone to greater efforts to make sure that the (quasi-)deviance
 is no longer preserved anywhere in the fitted model, and (2) they
 may have done it for good reason --- it is not clear whether the
-number that would get left in the 'deviance' slot at the end of
+number that would get left in the `deviance' slot at the end of
 \code{glmmPQL}'s alternating \code{lme} and \code{glm} fits is
 even meaningful to the extent that regular QAICs are.  (For 
 discussion of a similar situation, see the \code{WARNING}
@@ -128,7 +131,7 @@ on a quasi- fit:
 
 \section*{Examples}
 
-\subsection*{\code{bbmle} package (Ben Bolker), CRAN/R-forge}
+\subsection*{\code{bbmle}}
 
 <<bbmle>>=
 library(bbmle)
@@ -142,7 +145,7 @@ ICtab(glmOT.D93,glmT.D93,glmO.D93,glmX.D93,
 detach("package:bbmle")
 @ 
 
-\subsection*{\code{AICcmodavg} package (Marc Mazerolle), CRAN}
+\subsection*{\code{AICcmodavg}}
 
 <<AICcmodavg>>=
 library(AICcmodavg)
@@ -152,7 +155,7 @@ aictab(list(glmOT.D93,glmT.D93,glmO.D93,glmX.D93),
 detach("package:AICcmodavg")
 @ 
 
-\subsection*{\code{MuMIn} package (Kamil Barto{\'n})}
+\subsection*{\code{MuMIn}}
 
 <<MuMin>>=
 library(MuMIn); packageVersion("MuMIn")
diff --git a/inst/doc/quasi.pdf b/inst/doc/quasi.pdf
index 211de0a..256b247 100644
Binary files a/inst/doc/quasi.pdf and b/inst/doc/quasi.pdf differ
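The quasi.Rnw changes above concern the vignette's central point: quasi- fits in R return an `NA` log-likelihood, so QAIC has to be assembled from two fits. As a hedged illustration (base R only, not code taken from the vignette; variable names are my own), the standard recipe — take the log-likelihood from the corresponding non-quasi fit and the dispersion estimate (c-hat) from the quasi- fit, counting c-hat as one extra parameter — can be sketched as:

```r
## Minimal QAIC-by-hand sketch (assumed recipe, see the quasi vignette):
## log-likelihood from the poisson fit, c-hat from the quasipoisson fit.
set.seed(101)
x <- rnbinom(50, mu = 4, size = 2)          # overdispersed count data
fit  <- glm(x ~ 1, family = poisson)        # provides a log-likelihood
qfit <- glm(x ~ 1, family = quasipoisson)   # provides a dispersion estimate
chat <- summary(qfit)$dispersion            # c-hat
k    <- attr(logLik(fit), "df") + 1         # count c-hat as an extra parameter
QAIC <- -2 * as.numeric(logLik(fit)) / chat + 2 * k
QAIC
```

The `bbmle::qAIC`/`qAICc` functions in the diffs below automate this bookkeeping.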
diff --git a/inst/unitTests/Makefile b/inst/unitTests/Makefile
deleted file mode 100755
index 8d13225..0000000
--- a/inst/unitTests/Makefile
+++ /dev/null
@@ -1,15 +0,0 @@
-TOP=../..
-PKG=${shell cd ${TOP};pwd}
-SUITE=doRUnit.R
-R=R
-
-all: inst test
-
-inst: # Install package
-	cd ${TOP}/..;\
-	${R} CMD INSTALL ${PKG}
-
-test: # Run unit tests
-	export RCMDCHECK=FALSE;\
-	cd ${TOP}/tests;\
-	${R} --vanilla --slave < ${SUITE}
diff --git a/man/as.data.frame.profile.mle2.Rd b/man/as.data.frame.profile.mle2.Rd
index f8edd02..62a5b2b 100755
--- a/man/as.data.frame.profile.mle2.Rd
+++ b/man/as.data.frame.profile.mle2.Rd
@@ -31,12 +31,13 @@ optional=FALSE, \dots)
   x <- 0:10
   y <- c(26, 17, 13, 12, 20, 5, 9, 8, 5, 4, 8)
   library(bbmle)
-  LL <- function(ymax=15, xhalf=6)
-  -sum(stats::dpois(y, lambda=ymax/(1+x/xhalf), log=TRUE))
+  LL <- function(ymax=15, xhalf=6) {
+        -sum(stats::dpois(y, lambda=ymax/(1+x/xhalf), log=TRUE))
+    }
   ## uses default parameters of LL
   fit1 <- mle2(LL)
   p1 <- profile(fit1)
-  d1 = as.data.frame(p1)
+  d1 <- as.data.frame(p1)
   library(lattice)
   xyplot(abs(z)~focal|param,data=d1,
     subset=abs(z)<3,
diff --git a/man/mle-class.Rd b/man/mle-class.Rd
index 5683006..2447179 100755
--- a/man/mle-class.Rd
+++ b/man/mle-class.Rd
@@ -39,13 +39,13 @@
     \item{\code{method}:}{(character) The optimization method used.}
     \item{\code{formula}:}{(character) If a formula was specified, a
       character vector giving the formula and parameter specifications.}
-}
+  }
 }
 \section{Methods}{
   \describe{
     \item{coef}{\code{signature(object = "mle2")}: Extract coefficients.
       If \code{exclude.fixed=TRUE} (it is \code{FALSE} by default),
-      only the non-fixed parameter values are returned.}t
+      only the non-fixed parameter values are returned.}
     \item{confint}{\code{signature(object = "mle2")}: Confidence
       intervals from likelihood profiles, or quadratic approximations,
       or root-finding.}
@@ -61,7 +61,7 @@
       profile. }
   }
 }
-\details{
+\section{Details on the confint method}{
   When the parameters in the original fit are constrained using
   \code{lower} or \code{upper}, or when \code{prof.lower} or
   \code{prof.upper} are set, and the confidence intervals lie
@@ -73,7 +73,6 @@
   (If you have a strong opinion about the need for a new
   option to \code{confint} that sets the bounds to the limits
   automatically, please contact the package maintainer.)
-  
 }
 \examples{
 x <- 0:10
diff --git a/man/mle2.Rd b/man/mle2.Rd
index a714fdc..8682bd7 100755
--- a/man/mle2.Rd
+++ b/man/mle2.Rd
@@ -18,7 +18,7 @@ hessian.opts=NULL,
 use.ginv=TRUE,
 trace=FALSE,
 browse_obj=FALSE,
-gr,
+gr=NULL,
 optimfun,\dots)
 calc_mle2_function(formula,parameters, links, start,
    parnames, use.deriv=FALSE, data=NULL,trace=FALSE)
@@ -212,6 +212,7 @@ mle2(y~dpois(lambda=exp(lymax)/(1+x/exp(lhalf))),
    data=d,
    parameters=list(lymax~1,lhalf~1))
 
+\dontrun{
 ## try bounded optimization with nlminb and constrOptim
 (fit1B <- mle2(LL, optimizer="nlminb", lower=c(lymax=1e-7, lhalf=1e-7)))
 p1B <- profile(fit1B)
@@ -231,5 +232,6 @@ fit3 <- mle2(y~dnbinom(mu=exp(lymax)/(1+x/exp(lhalf)),size=exp(logk)),
     parameters=list(lymax~g),data=d2,
     start=list(lymax=0,lhalf=0,logk=0))
 }
+}
 \keyword{models}
 
diff --git a/man/profile-methods.Rd b/man/profile-methods.Rd
index 42c78e1..cbd2297 100755
--- a/man/profile-methods.Rd
+++ b/man/profile-methods.Rd
@@ -1,5 +1,6 @@
 \name{profile-methods}
 \docType{methods}
+\alias{proffun}
 \alias{profile-methods}
 \alias{profile,mle2-method}
 \alias{profile.mle2}
@@ -9,13 +10,16 @@
 }
 
 \usage{
-\S4method{profile}{mle2}(fitted, which = 1:p, maxsteps = 100,
+proffun(fitted, which = 1:p, maxsteps = 100,
                     alpha = 0.01, zmax = sqrt(qchisq(1 - alpha/2, p)),
                     del = zmax/5, trace = FALSE, skiperrs=TRUE,
                     std.err, 
                     tol.newmin = 0.001, debug=FALSE,
                     prof.lower, prof.upper,
-skip.hessian = TRUE, try_harder=FALSE, \dots)
+                    skip.hessian = TRUE,
+                    continuation = c("none","naive","linear"),
+                    try_harder=FALSE, \dots)
+\S4method{profile}{mle2}(fitted, \dots)
 }
 \arguments{
   \item{fitted}{A fitted maximum likelihood model of class
@@ -41,12 +45,20 @@ value of the negative log-likelihood}
   \item{debug}{(logical) debugging output?}
   \item{prof.lower}{optional vector of lower bounds for profiles}
   \item{prof.upper}{optional vector of upper bounds for profiles}
+  \item{continuation}{use continuation method to set starting values?
+    \code{"none"} sets starting values to best fit; \code{"naive"}
+    sets starting values to those of previous profiling fit;
+    \code{"linear"} (not yet implemented) would use linear extrapolation
+    from the previous two profiling fits}
   \item{skip.hessian}{skip hessian (defunct?)}
   \item{try_harder}{(logical) ignore \code{NA} and flat spots in the
 profile, try to continue anyway?}
   \item{\dots}{additional arguments (not used)}
 }
 \details{
+  \code{proffun} is the guts of the profile method, exposed
+  so that other packages can use it directly.
+  
   See the vignette (\code{vignette("mle2",package="bbmle")})
   for more technical details of how profiling is done.
  }
diff --git a/man/profile.mle-class.Rd b/man/profile.mle-class.Rd
index 9477f5c..b7dbd19 100755
--- a/man/profile.mle-class.Rd
+++ b/man/profile.mle-class.Rd
@@ -131,6 +131,7 @@ suppressWarnings(confint(fit1,method="uniroot"))
 ## alternatively, we can use box constraints to keep ourselves
 ##  to positive parameter values ...
 fit2 <- update(fit1,method="L-BFGS-B",lower=c(ymax=0.001,xhalf=0.001))
+\dontrun{
 p2 <- profile(fit2)
 plot(p2,show.points=TRUE)
 ## but the fit for ymax is just bad enough that the spline gets wonky
@@ -139,7 +140,6 @@ confint(fit2,method="uniroot")
 ## bobyqa is a better-behaved bounded optimizer ...
 ##  BUT recent (development, 2012.5.24) versions of
 ##    optimx no longer allow single-parameter fits!
-\dontrun{
 if (require(optimx)) {
   fit3 <- update(fit1,
       optimizer="optimx",
diff --git a/tests/BIC.R b/tests/BIC.R
old mode 100755
new mode 100644
diff --git a/tests/BIC.Rout.save b/tests/BIC.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/ICtab.R b/tests/ICtab.R
old mode 100755
new mode 100644
index 3e74bf9..c8a3717
--- a/tests/ICtab.R
+++ b/tests/ICtab.R
@@ -9,3 +9,9 @@ ICtab(m1,type="qAICc",dispersion=1.2,nobs=100)
 
 m2 = glm(z~1,family=poisson)
 qAICc(m2,nobs=100,dispersion=2)
+
+## test that dAIC ignores
+m3 <- glm(z~1,family=quasipoisson)
+aa <- AICtab(m1,m2,m3,weights=TRUE)
+stopifnot(any(!is.na(aa$dAIC)),
+          any(!is.na(aa$weight)))
diff --git a/tests/ICtab.Rout.save b/tests/ICtab.Rout.save
old mode 100755
new mode 100644
index 61e3f7d..0dda0c5
--- a/tests/ICtab.Rout.save
+++ b/tests/ICtab.Rout.save
@@ -1,8 +1,7 @@
 
-R Under development (unstable) (2012-07-27 r60013) -- "Unsuffered Consequences"
-Copyright (C) 2012 The R Foundation for Statistical Computing
-ISBN 3-900051-07-0
-Platform: i686-pc-linux-gnu (32-bit)
+R Under development (unstable) (2016-12-05 r71733) -- "Unsuffered Consequences"
+Copyright (C) 2016 The R Foundation for Statistical Computing
+Platform: x86_64-pc-linux-gnu (64-bit)
 
 R is free software and comes with ABSOLUTELY NO WARRANTY.
 You are welcome to redistribute it under certain conditions.
@@ -19,6 +18,7 @@ Type 'demo()' for some demos, 'help()' for on-line help, or
 Type 'q()' to quit R.
 
 > library(bbmle)
+Loading required package: stats4
 > 
 > set.seed(101)
 > z = rpois(100,lambda=5)
@@ -33,6 +33,12 @@ m1 0      1
 > qAICc(m2,nobs=100,dispersion=2)
 [1] 226.1823
 > 
+> ## test that dAIC ignores
+> m3 <- glm(z~1,family=quasipoisson)
+> aa <- AICtab(m1,m2,m3,weights=TRUE)
+> stopifnot(any(!is.na(aa$dAIC)),
++           any(!is.na(aa$weight)))
+> 
 > proc.time()
    user  system elapsed 
-  0.704   1.028   1.583 
+  1.140   0.212   1.648 
diff --git a/tests/Makefile b/tests/Makefile
deleted file mode 100755
index af3f3eb..0000000
--- a/tests/Makefile
+++ /dev/null
@@ -1,18 +0,0 @@
-TOP=../..
-PKG=${shell cd ${TOP};pwd}
-SUITE=doRUnit.R
-R=R
-
-all: inst test
-
-inst: # Install package
-	cd ${TOP}/..;\
-	${R} CMD INSTALL ${PKG}
-
-test: # Run unit tests
-	export RCMDCHECK=FALSE;\
-	cd ${TOP}/tests;\
-	${R} --vanilla --slave < ${SUITE}
-
-savefiles:
-	./makesavefiles
diff --git a/tests/RUnit-tests.R b/tests/RUnit-tests.R
old mode 100755
new mode 100644
diff --git a/tests/binomtest1.R b/tests/binomtest1.R
old mode 100755
new mode 100644
diff --git a/tests/binomtest1.Rout b/tests/binomtest1.Rout
old mode 100755
new mode 100644
diff --git a/tests/binomtest1.Rout.save b/tests/binomtest1.Rout.save
old mode 100755
new mode 100644
diff --git a/inst/unitTests/boundstest.R b/tests/boundstest.R
old mode 100755
new mode 100644
similarity index 100%
rename from inst/unitTests/boundstest.R
rename to tests/boundstest.R
diff --git a/tests/controleval.R b/tests/controleval.R
old mode 100755
new mode 100644
diff --git a/tests/controleval.Rout.save b/tests/controleval.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/doRUnit.R b/tests/doRUnit.R
deleted file mode 100755
index e80bc1f..0000000
--- a/tests/doRUnit.R
+++ /dev/null
@@ -1,63 +0,0 @@
-## RUnit script obtained from:
-##   http://wiki.r-project.org/rwiki/doku.php?id=developers:runit
-##  copied from phylobase package
-
-## unit tests will not be done if RUnit is not available
-if(require("RUnit", quietly=TRUE)) {
- 
-  ## --- Setup ---
- 
-  pkg <- "bbmle"
-  if(Sys.getenv("RCMDCHECK") == "FALSE") {
-    ## Path to unit tests for standalone running under Makefile (not R CMD check)
-    ## PKG/tests/../inst/unitTests
-    path <- file.path(getwd(), "..", "inst", "unitTests")
-  } else {
-    ## Path to unit tests for R CMD check
-    ## PKG.Rcheck/tests/../PKG/unitTests
-    path <- system.file(package=pkg, "unitTests")
-  }
-  cat("\nRunning unit tests\n")
-  print(list(pkg=pkg, getwd=getwd(), pathToUnitTests=path))
- 
-  library(package=pkg, character.only=TRUE)
- 
-  ## If desired, load the name space to allow testing of private functions
-  ## if (is.element(pkg, loadedNamespaces()))
-  ##     attach(loadNamespace(pkg), name=paste("namespace", pkg, sep=":"), pos=3)
-  ##
-  ## or simply call PKG:::myPrivateFunction() in tests
- 
-  ## --- Testing ---
- 
-  ## Define tests
-  testSuite <- defineTestSuite(name=paste(pkg, "unit testing"),
-                                          dirs=path)
-  ## Run
-  tests <- runTestSuite(testSuite)
- 
-  ## Default report name
-  pathReport <- file.path(path, "report")
- 
-  ## Report to stdout and text files
-  cat("------------------- UNIT TEST SUMMARY ---------------------\n\n")
-  printTextProtocol(tests, showDetails=FALSE)
-  printTextProtocol(tests, showDetails=FALSE,
-                    fileName=paste(pathReport, "Summary.txt", sep=""))
-  printTextProtocol(tests, showDetails=TRUE,
-                    fileName=paste(pathReport, ".txt", sep=""))
- 
-  ## Report to HTML file
-  printHTMLProtocol(tests, fileName=paste(pathReport, ".html", sep=""))
- 
-  ## Return stop() to cause R CMD check stop in case of
-  ##  - failures i.e. FALSE to unit tests or
-  ##  - errors i.e. R errors
-  tmp <- getErrors(tests)
-  if(tmp$nFail > 0 | tmp$nErr > 0) {
-    stop(paste("\n\nunit testing failed (#test failures: ", tmp$nFail,
-               ", #R errors: ",  tmp$nErr, ")\n\n", sep=""))
-  }
-} else {
-  warning("cannot run unit tests -- package RUnit is not available")
-}
diff --git a/tests/eval.R b/tests/eval.R
old mode 100755
new mode 100644
diff --git a/tests/eval.Rout.save b/tests/eval.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/formulatest.R b/tests/formulatest.R
old mode 100755
new mode 100644
diff --git a/tests/formulatest.Rout.save b/tests/formulatest.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/glmcomp.R b/tests/glmcomp.R
old mode 100755
new mode 100644
diff --git a/tests/glmcomp.Rout.save b/tests/glmcomp.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/gradient_vecpar_profile.R b/tests/gradient_vecpar_profile.R
old mode 100755
new mode 100644
diff --git a/tests/gradient_vecpar_profile.Rout.save b/tests/gradient_vecpar_profile.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/grtest1.R b/tests/grtest1.R
old mode 100755
new mode 100644
index 7f8835b..d3ac10b
--- a/tests/grtest1.R
+++ b/tests/grtest1.R
@@ -4,6 +4,11 @@ f <- function(x=2,a=1) x^2 - a
 f.g <- function(x,a) 2*x
 f.g2 <- function(x,a) c(2*x,0)
 options(digits=3)
-mle2(f,fixed=list(a=1))
-mle2(f,gr=f.g,fixed=list(a=1))
-mle2(f,gr=f.g2,fixed=list(a=1))
+m1 <- mle2(f,fixed=list(a=1))
+m2 <- mle2(f,gr=f.g,fixed=list(a=1))
+m3 <- mle2(f,gr=f.g2,fixed=list(a=1))
+stopifnot(all.equal(coef(m1),coef(m2)))
+stopifnot(all.equal(coef(m1),coef(m3)))
+tt <- function(x) x@details$hessian
+stopifnot(all.equal(tt(m1),tt(m2),tolerance=1e-6))
+stopifnot(all.equal(tt(m1),tt(m3),tolerance=1e-6))
diff --git a/tests/grtest1.Rout.save b/tests/grtest1.Rout.save
index 0dfc1e1..e564d6c 100644
--- a/tests/grtest1.Rout.save
+++ b/tests/grtest1.Rout.save
@@ -1,6 +1,6 @@
 
-R Under development (unstable) (2016-02-09 r70138) -- "Unsuffered Consequences"
-Copyright (C) 2016 The R Foundation for Statistical Computing
+R Under development (unstable) (2017-02-13 r72168) -- "Unsuffered Consequences"
+Copyright (C) 2017 The R Foundation for Statistical Computing
 Platform: x86_64-pc-linux-gnu (64-bit)
 
 R is free software and comes with ABSOLUTELY NO WARRANTY.
@@ -24,37 +24,15 @@ Loading required package: stats4
 > f.g <- function(x,a) 2*x
 > f.g2 <- function(x,a) c(2*x,0)
 > options(digits=3)
-> mle2(f,fixed=list(a=1))
-
-Call:
-mle2(minuslogl = f, fixed = list(a = 1))
-
-Coefficients:
-       x        a 
-1.09e-13 1.00e+00 
-
-Log-likelihood: 1 
-> mle2(f,gr=f.g,fixed=list(a=1))
-
-Call:
-mle2(minuslogl = f, fixed = list(a = 1), gr = f.g)
-
-Coefficients:
-x a 
-0 1 
-
-Log-likelihood: 1 
-> mle2(f,gr=f.g2,fixed=list(a=1))
-
-Call:
-mle2(minuslogl = f, fixed = list(a = 1), gr = f.g2)
-
-Coefficients:
-x a 
-0 1 
-
-Log-likelihood: 1 
+> m1 <- mle2(f,fixed=list(a=1))
+> m2 <- mle2(f,gr=f.g,fixed=list(a=1))
+> m3 <- mle2(f,gr=f.g2,fixed=list(a=1))
+> stopifnot(all.equal(coef(m1),coef(m2)))
+> stopifnot(all.equal(coef(m1),coef(m3)))
+> tt <- function(x) x@details$hessian
+> stopifnot(all.equal(tt(m1),tt(m2),tolerance=1e-6))
+> stopifnot(all.equal(tt(m1),tt(m3),tolerance=1e-6))
 > 
 > proc.time()
    user  system elapsed 
-  0.436   0.044   0.488 
+  1.992   0.128   2.122 
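The rewritten grtest1.R above replaces printed output with `stopifnot` checks that fits with and without an analytic gradient agree in coefficients and Hessian. The same idea in base R with `optim()` (a hypothetical analogue of the package test, not the test itself):

```r
## Check that supplying an analytic gradient to optim() leaves the
## optimum and the numerically estimated Hessian essentially unchanged.
f   <- function(x) x^2 - 1   # minimum at x = 0
f.g <- function(x) 2 * x     # analytic gradient
m1 <- optim(2, f,            method = "BFGS", hessian = TRUE)
m2 <- optim(2, f, gr = f.g,  method = "BFGS", hessian = TRUE)
stopifnot(abs(m1$par - m2$par) < 1e-4)
stopifnot(abs(m1$hessian - m2$hessian) < 1e-2)
```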
diff --git a/tests/makesavefiles b/tests/makesavefiles
old mode 100755
new mode 100644
diff --git a/tests/methods.R b/tests/methods.R
old mode 100755
new mode 100644
diff --git a/tests/methods.Rout.save b/tests/methods.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/mkout b/tests/mkout
old mode 100755
new mode 100644
diff --git a/tests/mortanal.R b/tests/mortanal.R
old mode 100755
new mode 100644
diff --git a/tests/mortanal.Rout.save b/tests/mortanal.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/optimize.R b/tests/optimize.R
old mode 100755
new mode 100644
diff --git a/tests/optimize.Rout.save b/tests/optimize.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/optimizers.R b/tests/optimizers.R
old mode 100755
new mode 100644
diff --git a/tests/optimizers.Rout.save b/tests/optimizers.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/optimx.R b/tests/optimx.R
old mode 100755
new mode 100644
diff --git a/tests/optimx.Rout.save b/tests/optimx.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/order.R b/tests/order.R
old mode 100755
new mode 100644
diff --git a/tests/order.Rout.save b/tests/order.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/parscale.R b/tests/parscale.R
old mode 100755
new mode 100644
diff --git a/tests/parscale.Rout b/tests/parscale.Rout
old mode 100755
new mode 100644
diff --git a/tests/parscale.Rout.save b/tests/parscale.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/predict.R b/tests/predict.R
old mode 100755
new mode 100644
diff --git a/tests/predict.Rout.save b/tests/predict.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/prof_newmin.R b/tests/prof_newmin.R
new file mode 100644
index 0000000..e454717
--- /dev/null
+++ b/tests/prof_newmin.R
@@ -0,0 +1,10 @@
+library(bbmle)
+x <- 0:10
+y <- c(26, 17, 13, 12, 20, 5, 9, 8, 5, 4, 8)
+d <- data.frame(x,y)
+
+## uses default parameters of LL
+fit <- mle2(y~dpois(exp(loglam)),
+            data=d,
+           start=list(loglam=0),control=list(maxit=2))
+pp <- profile(fit)
diff --git a/tests/prof_spec.R b/tests/prof_spec.R
new file mode 100644
index 0000000..dd6401d
--- /dev/null
+++ b/tests/prof_spec.R
@@ -0,0 +1,29 @@
+## test whether profiling works when custom optimizer is defined
+##  inside a function (GH #7)
+
+library(bbmle)
+test <- function(t, X) {
+  likfun <- function(p) {
+    mu <- with(as.list(p), {
+      exp(a+b*t)
+    })
+    -sum(dpois(X, mu, log=TRUE))
+  }
+  parnames(likfun) <- c("a", "b")
+  
+  optimfun <- function(par, fn, gr = NULL, ...,
+                       method = NULL, lower = -Inf, upper = Inf,
+                       control = NULL, hessian = FALSE) {
+    ## cat("using custom optimfun!\n")
+    optim(par, fn=fn, gr=gr, ...,
+          method="BFGS", control=control, hessian=TRUE)
+  }
+  
+  mle2(likfun, start=c(a=1,b=1), optimizer="user", optimfun=optimfun)
+}
+
+f <- test(0:5, round(exp(1:6)))
+pp <- profile(f,skiperrs=FALSE)
+stopifnot(inherits(pp,"profile.mle2"))
+
+
diff --git a/tests/profbound.R b/tests/profbound.R
old mode 100755
new mode 100644
diff --git a/tests/profbound.Rout.save b/tests/profbound.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/richards.R b/tests/richards.R
old mode 100755
new mode 100644
diff --git a/tests/richards.Rout.save b/tests/richards.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/startvals.R b/tests/startvals.R
old mode 100755
new mode 100644
diff --git a/tests/startvals.Rout.save b/tests/startvals.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/startvals2.R b/tests/startvals2.R
old mode 100755
new mode 100644
diff --git a/tests/startvals2.Rout.save b/tests/startvals2.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/test-relist1.R b/tests/test-relist1.R
old mode 100755
new mode 100644
diff --git a/tests/test-relist1.Rout.save b/tests/test-relist1.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/testbounds.R b/tests/testbounds.R
old mode 100755
new mode 100644
diff --git a/tests/testbounds.Rout b/tests/testbounds.Rout
old mode 100755
new mode 100644
diff --git a/tests/testbounds.Rout.save b/tests/testbounds.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/testderiv.R b/tests/testderiv.R
old mode 100755
new mode 100644
diff --git a/tests/testderiv.Rout.save b/tests/testderiv.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/testenv.R b/tests/testenv.R
old mode 100755
new mode 100644
index bc8abe9..8f90ab7
--- a/tests/testenv.R
+++ b/tests/testenv.R
@@ -6,11 +6,25 @@ f <- function() {
   mle2(y~dpois(lambda=exp(lymax)/(1+x/exp(lhalf))),
    start=list(lymax=0,lhalf=0),
    data=d,
-     control=list(maxit=maxit),
+   control=list(maxit=maxit),
    parameters=list(lymax~1,lhalf~1))
 }
 
+f2 <- function(method="BFGS") {
+    d <- data.frame(x=0:10,
+                    y=c(26, 17, 13, 12, 20, 5, 9, 8, 5, 4, 8))
+    mle2(y~dpois(lambda=exp(lymax)/(1+x/exp(lhalf))),
+         start=list(lymax=0,lhalf=0),
+         data=d,
+         method=method,
+         parameters=list(lymax~1,lhalf~1))
+}
+
 m1 <- f()
 p <- profile(m1)
 ## FIXME: check results (need to save in an environment-friendly way!)
 print(head(as.data.frame(p)),digits=3)
+
+m2 <- f2()
+p2 <- profile(m2)
+print(head(as.data.frame(p2)),digits=3)
diff --git a/tests/testenv.Rout.save b/tests/testenv.Rout.save
old mode 100755
new mode 100644
index f0f9f83..c43d8c2
--- a/tests/testenv.Rout.save
+++ b/tests/testenv.Rout.save
@@ -1,8 +1,7 @@
 
-R Under development (unstable) (2012-12-14 r61321) -- "Unsuffered Consequences"
-Copyright (C) 2012 The R Foundation for Statistical Computing
-ISBN 3-900051-07-0
-Platform: i686-pc-linux-gnu (32-bit)
+R Under development (unstable) (2016-10-08 r71471) -- "Unsuffered Consequences"
+Copyright (C) 2016 The R Foundation for Statistical Computing
+Platform: x86_64-pc-linux-gnu (64-bit)
 
 R is free software and comes with ABSOLUTELY NO WARRANTY.
 You are welcome to redistribute it under certain conditions.
@@ -27,10 +26,20 @@ Loading required package: stats4
 +   mle2(y~dpois(lambda=exp(lymax)/(1+x/exp(lhalf))),
 +    start=list(lymax=0,lhalf=0),
 +    data=d,
-+      control=list(maxit=maxit),
++    control=list(maxit=maxit),
 +    parameters=list(lymax~1,lhalf~1))
 + }
 > 
+> f2 <- function(method="BFGS") {
++     d <- data.frame(x=0:10,
++                     y=c(26, 17, 13, 12, 20, 5, 9, 8, 5, 4, 8))
++     mle2(y~dpois(lambda=exp(lymax)/(1+x/exp(lhalf))),
++          start=list(lymax=0,lhalf=0),
++          data=d,
++          method=method,
++          parameters=list(lymax~1,lhalf~1))
++ }
+> 
 > m1 <- f()
 > p <- profile(m1)
 > ## FIXME: check results (need to save in an environment-friendly way!)
@@ -43,6 +52,17 @@ lymax.4 lymax -1.931           2.89           1.73  2.89
 lymax.5 lymax -1.292           3.00           1.51  3.00
 lymax.6 lymax -0.648           3.11           1.31  3.11
 > 
+> m2 <- f2()
+> p2 <- profile(m2)
+> print(head(as.data.frame(p2)),digits=3)
+        param      z par.vals.lymax par.vals.lhalf focal
+lymax.1 lymax -5.469           2.56          27.21  2.56
+lymax.2 lymax -3.204           2.67           2.22  2.67
+lymax.3 lymax -2.569           2.78           1.96  2.78
+lymax.4 lymax -1.931           2.89           1.73  2.89
+lymax.5 lymax -1.292           3.00           1.51  3.00
+lymax.6 lymax -0.648           3.11           1.31  3.11
+> 
 > proc.time()
    user  system elapsed 
-  0.920   1.088   1.862 
+  0.768   0.028   0.832 
diff --git a/tests/testparpred.R b/tests/testparpred.R
old mode 100755
new mode 100644
diff --git a/tests/testparpred.Rout.save b/tests/testparpred.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/tmptest.R b/tests/tmptest.R
old mode 100755
new mode 100644
diff --git a/tests/tmptest.Rout.save b/tests/tmptest.Rout.save
old mode 100755
new mode 100644
diff --git a/tests/update.R b/tests/update.R
old mode 100755
new mode 100644
diff --git a/tests/update.Rout.save b/tests/update.Rout.save
old mode 100755
new mode 100644
diff --git a/vignettes/chicago.bst b/vignettes/chicago.bst
deleted file mode 100755
index 511f3aa..0000000
--- a/vignettes/chicago.bst
+++ /dev/null
@@ -1,1654 +0,0 @@
-%%% ====================================================================
-%%%  @BibTeX-style-file{
-%%%     author          = "Glenn Paulley",
-%%%     version         = "4",
-%%%     date            = "28 August 1992",
-%%%     time            = "10:23:39 199",
-%%%     filename        = "chicago.bst",
-%%%     address         = "Data Structuring Group
-%%%                        Department of Computer Science
-%%%                        University of Waterloo
-%%%                        Waterloo, Ontario, Canada
-%%%                        N2L 3G1",
-%%%     telephone       = "(519) 885-1211",
-%%%     FAX             = "(519) 885-1208",
-%%%     checksum        = "26323 1654 5143 37417",
-%%%     email           = "gnpaulle@bluebox.uwaterloo.ca",
-%%%     codetable       = "ISO/ASCII",
-%%%     keywords        = "",
-%%%     supported       = "yes",
-%%%     abstract        = "A BibTeX bibliography style that follows the
-%%%                        `B' reference style of the 13th Edition of
-%%%                        the Chicago Manual of Style. A detailed
-%%%                        feature list is given below.",
-%%%     docstring       = "The checksum field above contains a CRC-16
-%%%                        checksum as the first value, followed by the
-%%%                        equivalent of the standard UNIX wc (word
-%%%                        count) utility output of lines, words, and
-%%%                        characters.  This is produced by Robert
-%%%                        Solovay's checksum utility.",
-%%%  }
-%%% ====================================================================
-%
-% "Chicago" BibTeX style, chicago.bst
-% ===================================
-%
-% BibTeX `chicago' style file for BibTeX version 0.99c, LaTeX version 2.09
-% Place it in a file called chicago.bst in the BibTeX search path.
-% You need to include chicago.sty as a \documentstyle option.
-% (Placing it in the same directory as the LaTeX document should also work.)
-% This "chicago" style is based on newapa.bst (American Psych. Assoc.)
-% found at ymir.claremont.edu.
-%
-%   Citation format: (author-last-name year)
-%             (author-last-name and author-last-name year)
-%             (author-last-name, author-last-name, and author-last-name year)
-%             (author-last-name et al. year)
-%             (author-last-name)
-%             author-last-name (year)
-%             (author-last-name and author-last-name)
-%             (author-last-name et al.)
-%             (year) or (year,year)
-%             year or year,year
-%
-%   Reference list ordering: alphabetical by author or whatever passes
-%    for author in the absence of one.
-%
-% This BibTeX style has support for abbreviated author lists and for
-%    year-only citations.  This is done by having the citations
-%    actually look like
-%
-%    \citeauthoryear{full-author-info}{abbrev-author-info}{year}
-%
-% The LaTeX style has to have the following (or similar)
-%
-%     \let\@internalcite\cite
-%     \def\fullcite{\def\citeauthoryear##1##2##3{##1, ##3}\@internalcite}
-%     \def\fullciteA{\def\citeauthoryear##1##2##3{##1}\@internalcite}
-%     \def\shortcite{\def\citeauthoryear##1##2##3{##2, ##3}\@internalcite}
-%     \def\shortciteA{\def\citeauthoryear##1##2##3{##2}\@internalcite}
-%     \def\citeyear{\def\citeauthoryear##1##2##3{##3}\@internalcite}
-%
-% These TeX macro definitions are found in chicago.sty. Additional
-% commands to manipulate different components of a citation can be defined
-% so that, for example, you can list author's names without parentheses
-% if using a citation as a noun or object in a sentence.
-%
-% This file was originally copied from newapa.bst at ymir.claremont.edu.
-%
-% Features of chicago.bst:
-% =======================
-%
-% - full names used in citations, but abbreviated citations are available
-%   (see above)
-% - if an entry has a "month", then the month and year are also printed
-%   as part of that bibitem.
-% - all conjunctions use "and" instead of "\&"
-% - major modification from Chicago Manual of Style (13th ed.) is that
-%   only the first author in a reference appears last name first-
-%   additional authors appear as J. Q. Public.
-% - pages are listed as "pp. xx-xx" in all entry types except
-%   article entries.
-% - book, inbook, and manual use "location: publisher" (or organization)
-%   for address and publisher. All other types list publishers separately.
-% - "pp." are used to identify page numbers for all entry types except
-%   articles.
-% - organization is used as a citation label if neither author nor editor
-%   is present (for manuals).
-% - "et al." is used for long author and editor lists, or when "others"
-%   is used.
-%
-% Modifications and bug fixes from newapa.bst:
-% ===========================================
-%
-%   - added month, year to bib entries if month is present
-%   - fixed bug with In proceedings, added necessary comma after title
-%   - all conjunctions changed to "and" from "\&"
-%   - fixed bug with author labels in my.full.label: "et al." now is
-%        generated when "others" is an author name
-%   - major modification from Chicago Manual of Style (13th ed.) is that
-%     only the first author in a reference appears last name first-
-%     additional authors appear as J. Q. Public.
-%   - pages are listed as "pp. xx-xx" in all entry types except
-%     article entries. Unnecessary (IMHO) "()" around page numbers
-%     were removed, and page numbers now don't end with a period.
-%   - created chicago.sty for use with this bibstyle (required).
-%   - fixed bugs in FUNCTION {format.vol.num.pages} for missing volume,
-%     number, and /or pages. Renamed to format.jour.vol.
-%   - fixed bug in formatting booktitles: additional period an error if
-%     book has a volume.
-%   - fixed bug: editors usually given redundant period before next clause
-%     (format.editors.dot) removed.
-%   - added label support for organizations, if both author and editor
-%     are missing (from alpha.bst). If organization is too long, then
-%     the key field is used for abbreviated citations.
-%   - In proceedings or books of several volumes, no comma was written
-%     between the "Volume x" and the page numbers (this was intentional
-%     in newapa.bst). Fixed.
-%   - Some journals may not have volumes/numbers, only month/year (eg.
-%     IEEE Computer). Fixed bug in article style that assumed volume/number
-%     was always present.
-%
-% Original documentation for newapa.sty:
-% =====================================
-%
-% This version was made by modifying the master file made by
-% Oren Patashnik (PATASHNIK@SCORE.STANFORD.EDU), and the 'named' BibTeX
-% style of Peter F. Patel-Schneider.
-%
-% Copyright (C) 1985, all rights reserved.
-% Copying of this file is authorized only if either
-% (1) you make absolutely no changes to your copy, including name, or
-% (2) if you do make changes, you name it something other than 'newapa.bst'.
-% There are undoubtably bugs in this style.  If you make bug fixes,
-% improvements, etc.  please let me know.  My e-mail address is:
-%    spencer@cgrg.ohio.state.edu or 71160.3141@compuserve.com
-%
-% This style was made from 'plain.bst', 'named.bst', and 'apalike.bst',
-% with lots of tweaking to make it look like APA style, along with tips
-% from Young Ryu and Brian Reiser's modifications of 'apalike.bst'.
-
-ENTRY
-  { address
-    author
-    booktitle
-    chapter
-    edition
-    editor
-    howpublished
-    institution
-    journal
-    key
-    month
-    note
-    number
-    organization
-    pages
-    publisher
-    school
-    series
-    title
-    type
-    volume
-    year
-  }
-  {}
-  { label.year extra.label sort.year sort.label }
-
-INTEGERS { output.state before.all mid.sentence after.sentence after.block }
-
-FUNCTION {init.state.consts}
-{ #0 'before.all :=
-  #1 'mid.sentence :=
-  #2 'after.sentence :=
-  #3 'after.block :=
-}
-
-STRINGS { s t u }
-
-FUNCTION {output.nonnull}
-{ 's :=
-  output.state mid.sentence =
-    { ", " * write$ }
-    { output.state after.block =
-    { add.period$ write$
-      newline$
-      "\newblock " write$
-    }
-    { output.state before.all =
-        'write$
-        { add.period$ " " * write$ }
-      if$
-    }
-      if$
-      mid.sentence 'output.state :=
-    }
-  if$
-  s
-}
-
-% Use a colon to separate output. Used only for address/publisher
-% combination in book/inbook types, address/institution for manuals,
-% and organization:publisher for proceedings (inproceedings).
-%
-FUNCTION {output.nonnull.colon}
-{ 's :=
-  output.state mid.sentence =
-    { ": " * write$ }
-    { output.state after.block =
-    { add.period$ write$
-      newline$
-      "\newblock " write$
-    }
-    { output.state before.all =
-        'write$
-        { add.period$ " " * write$ }
-      if$
-    }
-      if$
-      mid.sentence 'output.state :=
-    }
-  if$
-  s
-}
-
-FUNCTION {output}
-{ duplicate$ empty$
-    'pop$
-    'output.nonnull
-  if$
-}
-
-FUNCTION {output.colon}
-{ duplicate$ empty$
-    'pop$
-    'output.nonnull.colon
-  if$
-}
-
-FUNCTION {output.check}
-{ 't :=
-  duplicate$ empty$
-    { pop$ "empty " t * " in " * cite$ * warning$ }
-    'output.nonnull
-  if$
-}
-
-FUNCTION {output.check.colon}
-{ 't :=
-  duplicate$ empty$
-    { pop$ "empty " t * " in " * cite$ * warning$ }
-    'output.nonnull.colon
-  if$
-}
-
-FUNCTION {output.year.check}
-{ year empty$
-     { "empty year in " cite$ * warning$ }
-     { write$
-        " (" year * extra.label *
-       month empty$
-          { ")" * }
-          { ", " * month * ")" * }
-       if$
-       mid.sentence 'output.state :=
-     }
-  if$
-}
-
-
-FUNCTION {fin.entry}
-{ add.period$
-  write$
-  newline$
-}
-
-FUNCTION {new.block}
-{ output.state before.all =
-    'skip$
-    { after.block 'output.state := }
-  if$
-}
-
-FUNCTION {new.sentence}
-{ output.state after.block =
-    'skip$
-    { output.state before.all =
-    'skip$
-    { after.sentence 'output.state := }
-      if$
-    }
-  if$
-}
-
-FUNCTION {not}
-{   { #0 }
-    { #1 }
-  if$
-}
-
-FUNCTION {and}
-{   'skip$
-    { pop$ #0 }
-  if$
-}
-
-FUNCTION {or}
-{   { pop$ #1 }
-    'skip$
-  if$
-}
-
-FUNCTION {new.block.checka}
-{ empty$
-    'skip$
-    'new.block
-  if$
-}
-
-FUNCTION {new.block.checkb}
-{ empty$
-  swap$ empty$
-  and
-    'skip$
-    'new.block
-  if$
-}
-
-FUNCTION {new.sentence.checka}
-{ empty$
-    'skip$
-    'new.sentence
-  if$
-}
-
-FUNCTION {new.sentence.checkb}
-{ empty$
-  swap$ empty$
-  and
-    'skip$
-    'new.sentence
-  if$
-}
-
-FUNCTION {field.or.null}
-{ duplicate$ empty$
-    { pop$ "" }
-    'skip$
-  if$
-}
-
-%
-% Emphasize the top string on the stack.
-%
-FUNCTION {emphasize}
-{ duplicate$ empty$
-    { pop$ "" }
-    { "{\em " swap$ * "}" * }
-  if$
-}
-
-%
-% Emphasize the top string on the stack, but add a trailing space.
-%
-FUNCTION {emphasize.space}
-{ duplicate$ empty$
-    { pop$ "" }
-    { "{\em " swap$ * "\/}" * }
-  if$
-}
-
-INTEGERS { nameptr namesleft numnames }
-%
-% Format bibliographical entries with the first author last name first,
-% and subsequent authors with initials followed by last name.
-% All names are formatted in this routine.
-%
-FUNCTION {format.names}
-{ 's :=
-  #1 'nameptr :=               % nameptr = 1;
-  s num.names$ 'numnames :=    % numnames = num.name$(s);
-  numnames 'namesleft :=
-    { namesleft #0 > }
-
-    { nameptr #1 =
-        {s nameptr "{vv~}{ll}{, jj}{, f.}" format.name$ 't := }
-        {s nameptr "{f.~}{vv~}{ll}{, jj}" format.name$ 't := }
-      if$
-      nameptr #1 >
-        { namesleft #1 >
-              { ", " * t * }
-              { numnames #2 >
-                  { "," * }
-                  'skip$
-                if$
-                t "others" =
-                    { " et~al." * }
-                    { " and " * t * } % from Chicago Manual of Style
-                  if$
-               }
-               if$
-             }
-            't
-        if$
-        nameptr #1 + 'nameptr :=          % nameptr += 1;
-        namesleft #1 - 'namesleft :=      % namesleft =- 1;
-    }
-  while$
-}
-
-FUNCTION {my.full.label}
-{ 's :=
-  #1 'nameptr :=               % nameptr = 1;
-  s num.names$ 'numnames :=    % numnames = num.name$(s);
-  numnames 'namesleft :=
-    { namesleft #0 > }
-
-    { s nameptr "{vv~}{ll}" format.name$ 't :=  % get the next name
-      nameptr #1 >
-        { namesleft #1 >
-              { ", " * t * }
-              { numnames #2 >
-                  { "," * }
-                  'skip$
-                if$
-                t "others" =
-                    { " et~al." * }
-                    { " and " * t * } % from Chicago Manual of Style
-                  if$
-               }
-               if$
-             }
-            't
-        if$
-        nameptr #1 + 'nameptr :=          % nameptr += 1;
-        namesleft #1 - 'namesleft :=      % namesleft =- 1;
-    }
-  while$
-
-}
-
-FUNCTION {format.names.fml}
-%
-% Format names in "familiar" format, with first initial followed by
-% last name. Like format.names, ALL names are formatted.
-%
-{ 's :=
-  #1 'nameptr :=               % nameptr = 1;
-  s num.names$ 'numnames :=    % numnames = num.name$(s);
-  numnames 'namesleft :=
-    { namesleft #0 > }
-
-    { s nameptr "{f.~}{vv~}{ll}{, jj}" format.name$ 't :=
-
-      nameptr #1 >
-        { namesleft #1 >
-              { ", " * t * }
-               { numnames #2 >
-                    { "," * }
-                    'skip$
-                  if$
-                  t "others" =
-                        { " et~al." * }
-                        { " and " * t * }
-%                       { " \& " * t * }
-                      if$
-                }
-               if$
-             }
-            't
-        if$
-        nameptr #1 + 'nameptr :=          % nameptr += 1;
-        namesleft #1 - 'namesleft :=      % namesleft =- 1;
-    }
-  while$
-}
-
-FUNCTION {format.authors}
-{ author empty$
-    { "" }
-    { author format.names }
-  if$
-}
-
-FUNCTION {format.key}
-{ empty$
-    { key field.or.null }
-    { "" }
-  if$
-}
-
-%
-% Format editor names for use in the "in" types: inbook, incollection,
-% inproceedings: first initial, then last names. When editors are the
-% LABEL for an entry, then format.editor is used which lists editors
-% by last name first.
-%
-FUNCTION {format.editors.fml}
-{ editor empty$
-    { "" }
-    { editor format.names.fml
-      editor num.names$ #1 >
-    { " (Eds.)" * }
-    { " (Ed.)" * }
-      if$
-    }
-  if$
-}
-
-%
-% Format editor names for use in labels, last names first.
-%
-FUNCTION {format.editors}
-{ editor empty$
-    { "" }
-    { editor format.names
-      editor num.names$ #1 >
-    { " (Eds.)" * }
-    { " (Ed.)" * }
-      if$
-    }
-  if$
-}
-
-FUNCTION {format.title}
-{ title empty$
-    { "" }
-    { title "t" change.case$ }
-  if$
-}
-
-% Note that the APA style requires case changes
-% in article titles. The following does not
-% change cases. If you prefer it, uncomment the
-% following and comment out the above.
-
-%FUNCTION {format.title}
-%{ title empty$
-%    { "" }
-%    { title }
-%  if$
-%}
-
-FUNCTION {n.dashify}
-{ 't :=
-  ""
-    { t empty$ not }
-    { t #1 #1 substring$ "-" =
-    { t #1 #2 substring$ "--" = not
-        { "--" *
-          t #2 global.max$ substring$ 't :=
-        }
-        {   { t #1 #1 substring$ "-" = }
-        { "-" *
-          t #2 global.max$ substring$ 't :=
-        }
-          while$
-        }
-      if$
-    }
-    { t #1 #1 substring$ *
-      t #2 global.max$ substring$ 't :=
-    }
-      if$
-    }
-  while$
-}
-
-FUNCTION {format.btitle}
-{ edition empty$
-  { title emphasize }
-  { title empty$
-    { title emphasize }
-    { volume empty$     % gnp - check for volume, then don't need period
-       { "{\em " title * "\/} (" * edition * " ed.)" * "." * }
-       { "{\em " title * "\/} (" * edition * " ed.)" * }
-      if$
-    }
-    if$
-  }
-  if$
-}
-
-FUNCTION {format.emphasize.booktitle}
-{ edition empty$
-  { booktitle emphasize }
-  { booktitle empty$
-    { booktitle emphasize }
-    { volume empty$    % gnp - extra period an error if book has a volume
-        { "{\em " booktitle * "\/} (" * edition * " ed.)" * "." *}
-        { "{\em " booktitle * "\/} (" * edition * " ed.)" * }
-      if$
-      }
-    if$
-    }
-  if$
-  }
-
-
-FUNCTION {tie.or.space.connect}
-{ duplicate$ text.length$ #3 <
-    { "~" }
-    { " " }
-  if$
-  swap$ * *
-}
-
-FUNCTION {either.or.check}
-{ empty$
-    'pop$
-    { "can't use both " swap$ * " fields in " * cite$ * warning$ }
-  if$
-}
-
-FUNCTION {format.bvolume}
-{ volume empty$
-    { "" }
-    { "Volume" volume tie.or.space.connect % gnp - changed to mixed case
-      series empty$
-        'skip$
-        { " of " * series emphasize * }
-      if$
-      "volume and number" number either.or.check
-    }
-  if$
-}
-
-FUNCTION {format.number.series}
-{ volume empty$
-    { number empty$
-    { series field.or.null }
-    { output.state mid.sentence =
-        { "Number" } % gnp - changed to mixed case always
-        { "Number" }
-      if$
-      number tie.or.space.connect
-      series empty$
-        { "there's a number but no series in " cite$ * warning$ }
-        { " in " * series * }
-      if$
-    }
-      if$
-    }
-    { "" }
-  if$
-}
-
-INTEGERS { multiresult }
-
-FUNCTION {multi.page.check}
-{ 't :=
-  #0 'multiresult :=
-    { multiresult not
-      t empty$ not
-      and
-    }
-    { t #1 #1 substring$
-      duplicate$ "-" =
-      swap$ duplicate$ "," =
-      swap$ "+" =
-      or or
-    { #1 'multiresult := }
-    { t #2 global.max$ substring$ 't := }
-      if$
-    }
-  while$
-  multiresult
-}
-
-FUNCTION {format.pages}
-{ pages empty$
-  { "" }
-  { pages multi.page.check
-	{ "pp.\ " pages n.dashify tie.or.space.connect } % gnp - removed ()
-	{ "pp.\ " pages tie.or.space.connect }
-    if$
-  }
-  if$
-}
-
-% By Young (and Spencer)
-% GNP - fixed bugs with missing volume, number, and/or pages
-%
-% Format journal, volume, number, pages for article types.
-%
-FUNCTION {format.jour.vol}
-{ journal empty$
-    { "no journal in " cite$ * warning$
-      "" }
-    { journal emphasize.space }
-    if$
-  number empty$
-    { volume empty$
-       { "no number and no volume in " cite$ * warning$
-         "" * }
-       { "~{\em " * Volume * "}" * }
-      if$
-    }
-    { volume empty$
-      {"no volume for " cite$ * warning$
-       "~(" * number * ")" * }
-      { "~" *
-        volume emphasize.space
-        "(" * number * ")" * * }
-      if$
-    }
-  if$
-  pages empty$
-    {"page numbers missing in " cite$ * warning$
-     "" * } % gnp - place a null string on the stack for output
-    { duplicate$ empty$
-      { pop$ format.pages }
-      { ", " *  pages n.dashify * } % gnp - removed pp. for articles
-      if$
-    }
-  if$
-}
-
-FUNCTION {format.chapter.pages}
-{ chapter empty$
-    'format.pages
-    { type empty$
-        { "Chapter" } % gnp - changed to mixed case
-        { type "t" change.case$ }
-      if$
-      chapter tie.or.space.connect
-      pages empty$
-        {"page numbers missing in " cite$ * warning$} % gnp - added check
-        { ", " * format.pages * }
-      if$
-    }
-  if$
-}
-
-FUNCTION {format.in.ed.booktitle}
-{ booktitle empty$
-  { "" }
-  { editor empty$
-    { "In " format.emphasize.booktitle * }
-    { "In " format.editors.fml * ", " * format.emphasize.booktitle * }
-    if$
-  }
-  if$
-}
-
-FUNCTION {format.thesis.type}
-{ type empty$
-    'skip$
-    { pop$
-      type "t" change.case$
-    }
-  if$
-}
-
-FUNCTION {format.tr.number}
-{ type empty$
-    { "Technical Report" }
-    'type
-  if$
-  number empty$
-    { "t" change.case$ }
-    { number tie.or.space.connect }
-  if$
-}
-
-FUNCTION {format.article.crossref}
-{ "See"
-  "\citeN{" * crossref * "}" *
-}
-
-FUNCTION {format.crossref.editor}
-{ editor #1 "{vv~}{ll}" format.name$
-  editor num.names$ duplicate$
-  #2 >
-    { pop$ " et~al." * }
-    { #2 <
-    'skip$
-    { editor #2 "{ff }{vv }{ll}{ jj}" format.name$ "others" =
-        { " et~al." * }
-        { " and " * editor #2 "{vv~}{ll}" format.name$ * }
-      if$
-    }
-      if$
-    }
-  if$
-}
-
-FUNCTION {format.book.crossref}
-{ volume empty$
-    { "empty volume in " cite$ * "'s crossref of " * crossref * warning$
-      "In "
-    }
-    { "Volume" volume tie.or.space.connect % gnp - changed to mixed case
-      " of " *
-    }
-  if$
-  editor empty$
-  editor field.or.null author field.or.null =
-  or
-    { key empty$
-    { series empty$
-        { "need editor, key, or series for " cite$ * " to crossref " *
-          crossref * warning$
-          "" *
-        }
-        { "{\em " * series * "\/}" * }
-      if$
-    }
-    { key * }
-      if$
-    }
-    { format.crossref.editor * }
-  if$
-  " \citeN{" * crossref * "}" *
-}
-
-FUNCTION {format.incoll.inproc.crossref}
-{ "See"
-  " \citeN{" * crossref * "}" *
-}
-
-% format.lab.names:
-%
-% determines "short" names for the abbreviated author information.
-% "Long" labels are created in calc.label, using the routine my.full.label
-% to format author and editor fields.
-%
-% There are 4 cases for labels.   (n=3 in the example)
-% a) one author             Foo
-% b) one to n               Foo, Bar and Baz
-% c) use of "and others"    Foo, Bar et al.
-% d) more than n            Foo et al.
-%
-FUNCTION {format.lab.names}
-{ 's :=
-  s num.names$ 'numnames :=
-  numnames #2 >    % change number to number of others allowed before
-		   % forcing "et al".
-    { s #1 "{vv~}{ll}" format.name$ " et~al." * }
-    {
-      numnames #1 - 'namesleft :=
-      #2 'nameptr :=
-      s #1 "{vv~}{ll}" format.name$
-	{ namesleft #0 > }
-	{ nameptr numnames =
-	    { s nameptr "{ff }{vv }{ll}{ jj}" format.name$ "others" =
-		{ " et~al." * }
-		{ " and " * s nameptr "{vv~}{ll}" format.name$ * }
-	      if$
-	    }
-	    { ", " * s nameptr "{vv~}{ll}" format.name$ * }
-	  if$
-	  nameptr #1 + 'nameptr :=
-	  namesleft #1 - 'namesleft :=
-	}
-      while$
-    }
-  if$
-}
-
-FUNCTION {author.key.label}
-{ author empty$
-    { key empty$
-          { "no key, author in " cite$ * warning$
-            cite$ #1 #3 substring$ }
-         'key
-      if$
-    }
-    { author format.lab.names }
-  if$
-}
-
-FUNCTION {editor.key.label}
-{ editor empty$
-    { key empty$
-          { "no key, editor in " cite$ * warning$
-            cite$ #1 #3 substring$ }
-          'key
-        if$
-     }
-     { editor format.lab.names }
-  if$
-}
-
-FUNCTION {author.key.organization.label}
-%
-% added - gnp. Provide label formatting by organization if author is null.
-%
-{ author empty$
-    { organization empty$
-	{ key empty$
-	    { "no key, author or organization in " cite$ * warning$
-              cite$ #1 #3 substring$ }
-	    'key
-	  if$
-	}
-        { organization }
-      if$
-    }
-    { author format.lab.names }
-  if$
-}
-
-FUNCTION {editor.key.organization.label}
-%
-% added - gnp. Provide label formatting by organization if editor is null.
-%
-{ editor empty$
-    { organization empty$
-	{ key empty$
-	    { "no key, editor or organization in " cite$ * warning$
-              cite$ #1 #3 substring$ }
-	    'key
-	  if$
-	}
-        { organization }
-      if$
-    }
-    { editor format.lab.names }
-  if$
-}
-
-FUNCTION {author.editor.key.label}
-{ author empty$
-    { editor empty$
-          { key empty$
-               { "no key, author, or editor in " cite$ * warning$
-                 cite$ #1 #3 substring$ }
-             'key
-           if$
-         }
-          { editor format.lab.names }
-      if$
-    }
-    { author format.lab.names }
-  if$
-}
-
-FUNCTION {calc.label}
-%
-% Changed - GNP. See also author.organization.sort, editor.organization.sort
-% Form label for BibTeX entry. The classification of which fields are used
-% for which type of entry (book, inbook, etc.) are taken from alpha.bst.
-% The change here from newapa is to also include organization as a
-% citation label if author or editor is missing.
-%
-{ type$ "book" =
-  type$ "inbook" =
-  or
-    'author.editor.key.label
-    { type$ "proceedings" =
-	'editor.key.organization.label
-	{ type$ "manual" =
-	    'author.key.organization.label
-	    'author.key.label
-	  if$
-	}
-      if$
-    }
-  if$
-
-  author empty$  % generate the full label citation information.
-    { editor empty$
-        { organization empty$
-           { "no author, editor, or organization in " cite$ * warning$
-             "??" }
-           { organization }
-           if$
-        }
-        { editor my.full.label }
-        if$
-    }
-    { author my.full.label }
-  if$
-
-% leave label on the stack, to be popped when required.
-
-  "}{" * swap$ * "}{" *
-%  year field.or.null purify$ #-1 #4 substring$ *
-%
-% save the year for sort processing afterwards (adding a, b, c, etc.)
-%
-  year field.or.null purify$ #-1 #4 substring$
-  'label.year :=
-}
-
-FUNCTION {output.bibitem}
-{ newline$
-
-  "\bibitem[\protect\citeauthoryear{" write$
-  calc.label write$
-  sort.year write$
-  "}]{" write$
-
-  cite$ write$
-  "}" write$
-  newline$
-  ""
-  before.all 'output.state :=
-}
-
-FUNCTION {article}
-{ output.bibitem
-  format.authors
-  "author" output.check
-  author format.key output          % added
-  output.year.check                 % added
-  new.block
-  format.title
-  "title" output.check
-  new.block
-  crossref missing$
-    { format.jour.vol output
-    }
-    { format.article.crossref output.nonnull
-      format.pages output
-    }
-  if$
-  new.block
-  note output
-  fin.entry
-}
-
-FUNCTION {book}
-{ output.bibitem
-  author empty$
-    { format.editors
-	  "author and editor" output.check }
-    { format.authors
-	  output.nonnull
-      crossref missing$
-    	{ "author and editor" editor either.or.check }
-    	'skip$
-      if$
-    }
-  if$
-  output.year.check       % added
-  new.block
-  format.btitle
-  "title" output.check
-  crossref missing$
-    { format.bvolume output
-      new.block
-      format.number.series output
-      new.sentence
-      address output
-      publisher "publisher" output.check.colon
-    }
-    { new.block
-      format.book.crossref output.nonnull
-    }
-  if$
-  new.block
-  note output
-  fin.entry
-}
-
-FUNCTION {booklet}
-{ output.bibitem
-  format.authors output
-  author format.key output          % added
-  output.year.check                 % added
-  new.block
-  format.title
-  "title" output.check
-  new.block
-  howpublished output
-  address output
-  new.block
-  note output
-  fin.entry
-}
-
-FUNCTION {inbook}
-{ output.bibitem
-  author empty$
-    { format.editors
-      "author and editor" output.check
-    }
-    { format.authors output.nonnull
-      crossref missing$
-    { "author and editor" editor either.or.check }
-    'skip$
-      if$
-    }
-  if$
-  output.year.check                 % added
-  new.block
-  format.btitle
-  "title" output.check
-  crossref missing$
-    { format.bvolume output
-      format.chapter.pages
-      "chapter and pages" output.check
-      new.block
-      format.number.series output
-      new.sentence
-      address output
-      publisher
-      "publisher" output.check.colon
-    }
-    { format.chapter.pages "chapter and pages" output.check
-      new.block
-      format.book.crossref output.nonnull
-    }
-  if$
-  new.block
-  note output
-  fin.entry
-}
-
-FUNCTION {incollection}
-{ output.bibitem
-  format.authors
-  "author" output.check
-  author format.key output       % added
-  output.year.check              % added
-  new.block
-  format.title
-  "title" output.check
-  new.block
-  crossref missing$
-  { format.in.ed.booktitle
-    "booktitle" output.check
-    format.bvolume output
-    format.number.series output
-    format.chapter.pages output % gnp - was special.output.nonnull
-%                                 left out comma before page numbers
-    new.sentence
-    address output
-    publisher "publisher" output.check.colon
-  }
-  { format.incoll.inproc.crossref
-	output.nonnull
-    format.chapter.pages output
-  }
-  if$
-  new.block
-  note output
-  fin.entry
-}
-
-FUNCTION {inproceedings}
-{ output.bibitem
-  format.authors
-  "author" output.check
-  author format.key output            % added
-  output.year.check                   % added
-  new.block
-  format.title
-  "title" output.check
-  new.block
-  crossref missing$
-    { format.in.ed.booktitle
-	  "booktitle" output.check
-      format.bvolume output
-      format.number.series output
-      address output
-      format.pages output
-      new.sentence
-      organization output
-      publisher output.colon
-      }
-    { format.incoll.inproc.crossref output.nonnull
-      format.pages output
-    }
-  if$
-  new.block
-  note output
-  fin.entry
-}
-
-FUNCTION {conference} { inproceedings }
-
-FUNCTION {manual}
-{ output.bibitem
-  author empty$
-    { editor empty$
-      { organization "organization" output.check
-        organization format.key output }  % if all else fails, use key
-      { format.editors "author and editor" output.check }
-      if$
-    }
-    { format.authors output.nonnull }
-    if$
-  output.year.check                 % added
-  new.block
-  format.btitle
-  "title" output.check
-  organization address new.block.checkb
-% Reversed the order of "address" and "organization", added the ":".
-  address output
-  organization "organization" output.check.colon
-%  address output
-%  ":" output
-%  organization output
-  new.block
-  note output
-  fin.entry
-}
-
-FUNCTION {mastersthesis}
-{ output.bibitem
-  format.authors
-  "author" output.check
-  author format.key output          % added
-  output.year.check                 % added
-  new.block
-  format.title
-  "title" output.check
-  new.block
-  "Master's thesis" format.thesis.type output.nonnull
-  school "school" output.check
-  address output
-  new.block
-  note output
-  fin.entry
-}
-
-FUNCTION {misc}
-{ output.bibitem
-  format.authors output
-  author format.key output            % added
-  output.year.check                   % added
-  title howpublished new.block.checkb
-  format.title output
-  new.block
-  howpublished output
-  new.block
-  note output
-  fin.entry
-}
-
-FUNCTION {phdthesis}
-{ output.bibitem
-  format.authors
-  "author" output.check
-  author format.key output            % added
-  output.year.check                   % added
-  new.block
-  format.btitle
-  "title" output.check
-  new.block
-  "Ph.\ D. thesis" format.thesis.type output.nonnull
-  school "school" output.check
-  address output
-  new.block
-  note output
-  fin.entry
-}
-
-FUNCTION {proceedings}
-{ output.bibitem
-  editor empty$
-    { organization output
-      organization format.key output }  % gnp - changed from author format.key
-    { format.editors output.nonnull }
-  if$
-% author format.key output             % gnp - removed (should be either
-%                                        editor or organization
-  output.year.check                    % added (newapa)
-  new.block
-  format.btitle
-  "title" output.check
-  format.bvolume output
-  format.number.series output
-  address output
-  new.sentence
-  organization output
-  publisher output.colon
-  new.block
-  note output
-  fin.entry
-}
-
-FUNCTION {techreport}
-{ output.bibitem
-  format.authors
-  "author" output.check
-  author format.key output             % added
-  output.year.check                    % added
-  new.block
-  format.title
-  "title" output.check
-  new.block
-  format.tr.number output.nonnull
-  institution
-  "institution" output.check
-  address output
-  new.block
-  note output
-  fin.entry
-}
-
-FUNCTION {unpublished}
-{ output.bibitem
-  format.authors
-  "author" output.check
-  author format.key output              % added
-  output.year.check                      % added
-  new.block
-  format.title
-  "title" output.check
-  new.block
-  note "note" output.check
-  fin.entry
-}
-
-FUNCTION {default.type} { misc }
-
-MACRO {jan} {"January"}
-
-MACRO {feb} {"February"}
-
-MACRO {mar} {"March"}
-
-MACRO {apr} {"April"}
-
-MACRO {may} {"May"}
-
-MACRO {jun} {"June"}
-
-MACRO {jul} {"July"}
-
-MACRO {aug} {"August"}
-
-MACRO {sep} {"September"}
-
-MACRO {oct} {"October"}
-
-MACRO {nov} {"November"}
-
-MACRO {dec} {"December"}
-
-MACRO {acmcs} {"ACM Computing Surveys"}
-
-MACRO {acta} {"Acta Informatica"}
-
-MACRO {ai} {"Artificial Intelligence"}
-
-MACRO {cacm} {"Communications of the ACM"}
-
-MACRO {ibmjrd} {"IBM Journal of Research and Development"}
-
-MACRO {ibmsj} {"IBM Systems Journal"}
-
-MACRO {ieeese} {"IEEE Transactions on Software Engineering"}
-
-MACRO {ieeetc} {"IEEE Transactions on Computers"}
-
-MACRO {ieeetcad}
- {"IEEE Transactions on Computer-Aided Design of Integrated Circuits"}
-
-MACRO {ipl} {"Information Processing Letters"}
-
-MACRO {jacm} {"Journal of the ACM"}
-
-MACRO {jcss} {"Journal of Computer and System Sciences"}
-
-MACRO {scp} {"Science of Computer Programming"}
-
-MACRO {sicomp} {"SIAM Journal on Computing"}
-
-MACRO {tocs} {"ACM Transactions on Computer Systems"}
-
-MACRO {tods} {"ACM Transactions on Database Systems"}
-
-MACRO {tog} {"ACM Transactions on Graphics"}
-
-MACRO {toms} {"ACM Transactions on Mathematical Software"}
-
-MACRO {toois} {"ACM Transactions on Office Information Systems"}
-
-MACRO {toplas} {"ACM Transactions on Programming Languages and Systems"}
-
-MACRO {tcs} {"Theoretical Computer Science"}
-
-READ
-
-FUNCTION {sortify}
-{ purify$
-  "l" change.case$
-}
-
-INTEGERS { len }
-
-FUNCTION {chop.word}
-{ 's :=
-  'len :=
-  s #1 len substring$ =
-    { s len #1 + global.max$ substring$ }
-    's
-  if$
-}
-
-
-
-FUNCTION {sort.format.names}
-{ 's :=
-  #1 'nameptr :=
-  ""
-  s num.names$ 'numnames :=
-  numnames 'namesleft :=
-    { namesleft #0 > }
-    { nameptr #1 >
-          { "   " * }
-         'skip$
-      if$
-      s nameptr "{vv{ } }{ll{ }}{  f{ }}{  jj{ }}" format.name$ 't :=
-      nameptr numnames = t "others" = and
-          { " et~al" * }
-          { t sortify * }
-      if$
-      nameptr #1 + 'nameptr :=
-      namesleft #1 - 'namesleft :=
-    }
-  while$
-}
-
-FUNCTION {sort.format.title}
-{ 't :=
-  "A " #2
-    "An " #3
-      "The " #4 t chop.word
-    chop.word
-  chop.word
-  sortify
-  #1 global.max$ substring$
-}
-
-FUNCTION {author.sort}
-{ author empty$
-    { key empty$
-         { "to sort, need author or key in " cite$ * warning$
-           "" }
-         { key sortify }
-      if$
-    }
-    { author sort.format.names }
-  if$
-}
-
-FUNCTION {editor.sort}
-{ editor empty$
-    { key empty$
-         { "to sort, need editor or key in " cite$ * warning$
-           ""
-         }
-         { key sortify }
-      if$
-    }
-    { editor sort.format.names }
-  if$
-}
-
-FUNCTION {author.editor.sort}
-{ author empty$
-    { "missing author in " cite$ * warning$
-      editor empty$
-         { key empty$
-             { "to sort, need author, editor, or key in " cite$ * warning$
-               ""
-             }
-             { key sortify }
-           if$
-         }
-         { editor sort.format.names }
-      if$
-    }
-    { author sort.format.names }
-  if$
-}
-
-FUNCTION {author.organization.sort}
-%
-% added - GNP. Stack author or organization for sorting (from alpha.bst).
-% Unlike alpha.bst, we need entire names, not abbreviations
-%
-{ author empty$
-    { organization empty$
-	{ key empty$
-	    { "to sort, need author, organization, or key in " cite$ * warning$
-	      ""
-	    }
-	    { key sortify }
-	  if$
-	}
-	{ organization sortify }
-      if$
-    }
-    { author sort.format.names }
-  if$
-}
-
-FUNCTION {editor.organization.sort}
-%
-% added - GNP. Stack editor or organization for sorting (from alpha.bst).
-% Unlike alpha.bst, we need entire names, not abbreviations
-%
-{ editor empty$
-    { organization empty$
-	{ key empty$
-	    { "to sort, need editor, organization, or key in " cite$ * warning$
-	      ""
-	    }
-	    { key sortify }
-	  if$
-	}
-	{ organization sortify }
-      if$
-    }
-    { editor sort.format.names }
-  if$
-}
-
-FUNCTION {presort}
-%
-% Presort creates the bibentry's label via a call to calc.label, and then
-% sorts the entries based on entry type. Chicago.bst adds support for
-% including organizations as the sort key; the following is stolen from
-% alpha.bst.
-%
-{ calc.label sortify % recalculate bibitem label
-  year field.or.null purify$ #-1 #4 substring$ * % add year
-  "    "
-  *
-  type$ "book" =
-  type$ "inbook" =
-  or
-    'author.editor.sort
-    { type$ "proceedings" =
-	'editor.organization.sort
-	{ type$ "manual" =
-	    'author.organization.sort
-	    'author.sort
-	  if$
-	}
-      if$
-    }
-  if$
-  #1 entry.max$ substring$        % added for newapa
-  'sort.label :=                  % added for newapa
-  sort.label                      % added for newapa
-  *
-  "    "
-  *
-  title field.or.null
-  sort.format.title
-  *
-  #1 entry.max$ substring$
-  'sort.key$ :=
-}
-
-ITERATE {presort}
-
-SORT             % by label, year, author/editor, title
-
-STRINGS { last.label next.extra }
-
-INTEGERS { last.extra.num }
-
-FUNCTION {initialize.extra.label.stuff}
-{ #0 int.to.chr$ 'last.label :=
-  "" 'next.extra :=
-  #0 'last.extra.num :=
-}
-
-FUNCTION {forward.pass}
-%
-% Pass through all entries, comparing current entry to last one.
-% Need to concatenate year to the stack (done by calc.label) to determine
-% if two entries are the same (see presort)
-%
-{ last.label
-  calc.label year field.or.null purify$ #-1 #4 substring$ * % add year
-  #1 entry.max$ substring$ =     % are they equal?
-     { last.extra.num #1 + 'last.extra.num :=
-       last.extra.num int.to.chr$ 'extra.label :=
-     }
-     { "a" chr.to.int$ 'last.extra.num :=
-       "" 'extra.label :=
-       calc.label year field.or.null purify$ #-1 #4 substring$ * % add year
-       #1 entry.max$ substring$ 'last.label := % assign to last.label
-     }
-  if$
-}
-
-FUNCTION {reverse.pass}
-{ next.extra "b" =
-    { "a" 'extra.label := }
-     'skip$
-  if$
-  label.year extra.label * 'sort.year :=
-  extra.label 'next.extra :=
-}
-
-EXECUTE {initialize.extra.label.stuff}
-
-ITERATE {forward.pass}
-
-REVERSE {reverse.pass}
-
-FUNCTION {bib.sort.order}
-{ sort.label
-  "    "
-  *
-  year field.or.null sortify
-  *
-  "    "
-  *
-  title field.or.null
-  sort.format.title
-  *
-  #1 entry.max$ substring$
-  'sort.key$ :=
-}
-
-ITERATE {bib.sort.order}
-
-SORT             % by sort.label, year, title --- giving final bib. order.
-
-FUNCTION {begin.bib}
-
-{ preamble$ empty$
-    'skip$
-    { preamble$ write$ newline$ }
-  if$
-  "\begin{thebibliography}{}" write$ newline$
-}
-
-
-EXECUTE {begin.bib}
-
-EXECUTE {init.state.consts}
-
-ITERATE {call.type$}
-
-FUNCTION {end.bib}
-{ newline$
-  "\end{thebibliography}" write$ newline$
-}
-
-EXECUTE {end.bib}
-
-
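For orientation while reading the removed style file: `output.bibitem` (together with `calc.label` and the `format.*` routines above) emits author-year entries of roughly the following shape. This is a hand-written sketch, not output captured from BibTeX; the citation key, names, and journal details are invented for illustration:

```latex
\bibitem[\protect\citeauthoryear{Foo, Bar, and Baz}{Foo
  et~al.}{1994}]{foo:1994}
Foo, A., B.~Bar, and C.~Baz (1994).
\newblock Title of the article.
\newblock {\em Journal Name\/}~{\em 12\/}(3), 345--367.
```

The three `\citeauthoryear` arguments correspond to the full label (`my.full.label`), the short label (`format.lab.names`, which switches to "et~al." beyond two authors), and the year with any disambiguating suffix from `forward.pass`/`reverse.pass`.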
diff --git a/vignettes/clean b/vignettes/clean
deleted file mode 100755
index 297e593..0000000
--- a/vignettes/clean
+++ /dev/null
@@ -1 +0,0 @@
-rm -f mle2.{aux,bbl,blg,log,out,toc,tex} quasi.{aux,bbl,blg,log,out,tex}
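The deleted one-line `clean` helper above relies on shell brace expansion to enumerate the LaTeX byproducts of each vignette. A minimal illustration of the mechanism (assuming a bash-like shell; plain POSIX `sh` does not perform brace expansion):

```shell
# Brace expansion produces one word per listed extension,
# so a single pattern covers every auxiliary file:
echo mle2.{aux,bbl,log}
# prints: mle2.aux mle2.bbl mle2.log
```

With `rm -f`, any of the expanded names that do not exist are silently skipped, which is why the script needs no existence checks.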
diff --git a/vignettes/mle2.Rnw b/vignettes/mle2.Rnw
index 25e6994..fe59d5b 100755
--- a/vignettes/mle2.Rnw
+++ b/vignettes/mle2.Rnw
@@ -110,7 +110,7 @@ x1 <- rbetabinom(n=1000,prob=0.1,size=50,theta=10)
 
 Load the package:
 <<bbmle,message=FALSE>>=
-library("bbmle")
+library(bbmle)
 @ 
 
 Construct a simple negative log-likelihood function:
@@ -410,12 +410,13 @@ anova(m0f,m2f,m3f)
 @ 
 
 The various \code{ICtab} commands produce tables of
-information criteria, optionally sorted and
-with model weights.
+information criteria; by default the results are sorted and
+presented as $\Delta$IC; there are various options, including
+printing model weights.
 <<ICtabfit>>=
-AICtab(m0f,m2f,m3f,weights=TRUE,delta=TRUE,sort=TRUE)
-BICtab(m0f,m2f,m3f,delta=TRUE,nobs=nrow(orob1),sort=TRUE,weights=TRUE)
-AICctab(m0f,m2f,m3f,delta=TRUE,nobs=nrow(orob1),sort=TRUE,weights=TRUE)
+AICtab(m0f,m2f,m3f,weights=TRUE)
+BICtab(m0f,m2f,m3f,nobs=nrow(orob1),weights=TRUE)
+AICctab(m0f,m2f,m3f,nobs=nrow(orob1),weights=TRUE)
 @ 
 <<reWarn,echo=FALSE>>=
 opts_chunk$set(warning=FALSE)
@@ -437,10 +438,10 @@ library(ggplot2)
 
 <<gg1>>=
 gg1 <- ggplot(frogdat,aes(x=size,y=killed))+geom_point()+
-      stat_sum(aes(size=factor(..n..)))+
-      labs(size="#")+scale_x_continuous(limits=c(0,40))
+      stat_sum(aes(size=..n..))+
+      labs(size="#")+scale_x_continuous(limits=c(0,40))+
+scale_size(breaks=1:3)
 @ 
-
 <<frogfit1,cache=TRUE,warning=FALSE>>=
 m3 <- mle2(killed~dbinom(prob=c*(size/d)^g*exp(1-size/d),
   size=initial),data=frogdat,start=list(c=0.5,d=5,g=1))
diff --git a/vignettes/quasi.Rnw b/vignettes/quasi.Rnw
index 3ce84e8..31a448c 100755
--- a/vignettes/quasi.Rnw
+++ b/vignettes/quasi.Rnw
@@ -1,16 +1,18 @@
 \documentclass{article}
 %\VignettePackage{mle2}
 %\VignetteIndexEntry{quasi: notes on quasi-likelihood/qAIC analysis inR}
-%\VignetteDepends{MuMIn,AICcmodavg}
+%\VignetteDepends{MuMIn,AICcmodavg,bbmle}
 %\VignetteEngine{knitr::knitr}
 
 \usepackage{graphicx}
+\usepackage{hyperref}
 \usepackage{url}
 \newcommand{\code}[1]{{\tt #1}}
 \title{Dealing with \code{quasi-}  models in R}
 \date{\today}
 \author{Ben Bolker}
 \begin{document}
+\newcommand{\rpkg}[1]{\href{https://CRAN.R-project.org/package=#1}{{\tt #1}}}
 \maketitle
 
 \includegraphics[width=2.64cm,height=0.93cm]{cc-attrib-nc.png}
@@ -30,8 +32,9 @@ pain, because  the R Core team (or at least the ones who wrote \code{glm},
 \code{glmmPQL}, etc.) 
 are purists and don't believe that quasi- models should report a likelihood.
 As far as I know, there are three R packages that compute/handle
-QAIC: \code{bbmle}, \code{AICcmodavg} (both on CRAN) and \code{MuMIn}
-(formerly known as \code{dRedging}, on r-forge).
+QAIC: 
+\rpkg{bbmle}, \rpkg{AICcmodavg} and \rpkg{MuMIn}.
+
 
 The basic problem is that quasi- model fits with \code{glm} return
 an \code{NA} for the log-likelihood, while the dispersion parameter
@@ -64,7 +67,7 @@ The whole problem is worse for \code{MASS::glmmPQL}, where (1) the
 authors have gone to greater efforts to make sure that the (quasi-)deviance
 is no longer preserved anywhere in the fitted model, and (2) they
 may have done it for good reason --- it is not clear whether the
-number that would get left in the 'deviance' slot at the end of
+number that would get left in the `deviance' slot at the end of
 \code{glmmPQL}'s alternating \code{lme} and \code{glm} fits is
 even meaningful to the extent that regular QAICs are.  (For 
 discussion of a similar situation, see the \code{WARNING}
@@ -128,7 +131,7 @@ on a quasi- fit:
 
 \section*{Examples}
 
-\subsection*{\code{bbmle} package (Ben Bolker), CRAN/R-forge}
+\subsection*{\code{bbmle}}
 
 <<bbmle>>=
 library(bbmle)
@@ -142,7 +145,7 @@ ICtab(glmOT.D93,glmT.D93,glmO.D93,glmX.D93,
 detach("package:bbmle")
 @ 
 
-\subsection*{\code{AICcmodavg} package (Marc Mazerolle), CRAN}
+\subsection*{\code{AICcmodavg}}
 
 <<AICcmodavg>>=
 library(AICcmodavg)
@@ -152,7 +155,7 @@ aictab(list(glmOT.D93,glmT.D93,glmO.D93,glmX.D93),
 detach("package:AICcmodavg")
 @ 
 
-\subsection*{\code{MuMIn} package (Kamil Barto{\'n})}
+\subsection*{\code{MuMIn}}
 
 <<MuMin>>=
 library(MuMIn); packageVersion("MuMIn")

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-med/r-cran-bbmle.git