[med-svn] [r-cran-fields] 08/12: New upstream version 9.0
Andreas Tille
tille at debian.org
Wed Nov 29 15:21:54 UTC 2017
This is an automated email from the git hooks/post-receive script.
tille pushed a commit to branch master
in repository r-cran-fields.
commit e54fa5768c73b81055977238b14e1355e0d15936
Author: Andreas Tille <tille at debian.org>
Date: Wed Nov 29 16:16:39 2017 +0100
New upstream version 9.0
---
DESCRIPTION | 41 +
LICENSE.note | 12 +
MD5 | 365 ++++++++
NAMESPACE | 64 ++
R/BD.R | 72 ++
R/ChicagoO3.R | 57 ++
R/Exponential.R | 37 +
R/ExponentialUpper.R | 34 +
R/Krig.family.R | 1534 +++++++++++++++++++++++++++++++++
R/MLE.Matern.R | 106 +++
R/MLESpatialProcess.R | 115 +++
R/MLESpatialProcess.fast.R | 32 +
R/MLEfast.R | 134 +++
R/Matern.R | 46 +
R/Matern.parameters.R | 44 +
R/QTps.R | 103 +++
R/REMLtest.R | 209 +++++
R/RMprecip.R | 522 +++++++++++
R/RadialBasis.R | 54 ++
R/SUBSCRIPTINGSpatialDesign.R | 23 +
R/Tps.R | 53 ++
R/US.R | 24 +
R/add.image.R | 44 +
R/arrow.plot.R | 56 ++
R/as.image.R | 84 ++
R/as.surface.R | 40 +
R/bisection.search.R | 47 +
R/bplot.family.R | 58 ++
R/cat.matrix.R | 32 +
R/cat.to.list.R | 32 +
R/ceiling2.R | 29 +
R/coef.Krig.R | 23 +
R/color.scale.R | 39 +
R/colorbar.plot.R | 76 ++
R/compactToMat.R | 42 +
R/cover.design.R | 175 ++++
R/cubic.cov.R | 50 ++
R/describe.R | 41 +
R/discretize.image.R | 72 ++
R/double.exp.R | 24 +
R/drape.color.R | 67 ++
R/drape.plot.R | 68 ++
R/dyadic.2check.R | 36 +
R/dyadic.check.R | 34 +
R/evlpoly.R | 34 +
R/evlpoly2.R | 40 +
R/exp.cov.R | 102 +++
R/exp.earth.cov.R | 23 +
R/exp.image.cov.R | 59 ++
R/exp.simple.cov.R | 62 ++
R/fast.1way.R | 46 +
R/fastTps.MLE.R | 32 +
R/fastTps.family.R | 255 ++++++
R/fastTpsMLE.R | 133 +++
R/fields.color.picker.R | 50 ++
R/fields.convert.grid.R | 34 +
R/fields.derivative.poly.R | 40 +
R/fields.diagonalize.R | 48 ++
R/fields.diagonalize2.R | 49 ++
R/fields.duplicated.matrix.R | 32 +
R/fields.mkpoly.R | 42 +
R/fields.rdist.near.R | 51 ++
R/fields.style.R | 25 +
R/fields.x.to.grid.R | 41 +
R/find.upcross.R | 53 ++
R/fitted.Krig.R | 23 +
R/flame.R | 44 +
R/gauss.cov.R | 23 +
R/gcv.Krig.R | 232 +++++
R/gcv.sreg.R | 193 +++++
R/golden.section.search.R | 119 +++
R/image.family.R | 399 +++++++++
R/image.plot.R | 180 ++++
R/image.smooth.R | 99 +++
R/in.poly.R | 73 ++
R/interp.surface.R | 51 ++
R/interp.surface.grid.R | 31 +
R/larry.colors.R | 39 +
R/mKrig.MLE.R | 157 ++++
R/mKrig.MLE.joint.R | 191 ++++
R/mKrig.R | 257 ++++++
R/mKrig.family.R | 290 +++++++
R/mKrigCheckXY.R | 77 ++
R/mKrigMLEGrid.R | 149 ++++
R/mKrigMLEJoint.R | 197 +++++
R/make.surface.grid.R | 38 +
R/matern.image.cov.R | 74 ++
R/minimax.crit.R | 31 +
R/minitri.R | 72 ++
R/parse.grid.list.R | 55 ++
R/plot.Krig.R | 77 ++
R/plot.spatialDesign.R | 23 +
R/plot.spatialProcess.R | 75 ++
R/plot.sreg.R | 61 ++
R/plot.surface.R | 100 +++
R/plot.vgram.matrix.R | 37 +
R/poly.image.R | 100 +++
R/predict.Krig.R | 150 ++++
R/predict.interp.surface.R | 24 +
R/predict.sreg.R | 28 +
R/predictDerivative.Krig.R | 72 ++
R/predictSE.R | 21 +
R/predictSE.family.R | 227 +++++
R/predictSEUsingKrigA.R | 120 +++
R/predictSurface.Krig.R | 71 ++
R/predictSurface.family.R | 91 ++
R/print.Krig.R | 60 ++
R/print.spatial.design.R | 23 +
R/print.spatialProcess.R | 90 ++
R/print.sreg.R | 52 ++
R/print.summary.Krig.R | 100 +++
R/print.summary.spatialProcess.R | 94 ++
R/print.summary.sreg.R | 66 ++
R/print.summarySpatialDesign.R | 45 +
R/printGCVWarnings.R | 56 ++
R/qr.q2ty.R | 31 +
R/qr.yq2.R | 23 +
R/qsreg.family.R | 324 +++++++
R/quickPrint.R | 36 +
R/quilt.plot.R | 62 ++
R/rad.cov.R | 100 +++
R/rad.image.cov.R | 59 ++
R/rad.simple.cov.R | 55 ++
R/radbas.constant.R | 44 +
R/rat.diet.R | 38 +
R/rdist.R | 40 +
R/rdist.earth.R | 45 +
R/rdist.earth.vec.R | 40 +
R/rdist.vec.R | 32 +
R/residuals.Krig.R | 23 +
R/ribbon.plot.R | 40 +
R/set.panel.R | 30 +
R/setup.image.smooth.R | 47 +
R/sim.Krig.R | 139 +++
R/sim.Krig.approx.R | 131 +++
R/sim.mKrig.approx.R | 134 +++
R/sim.rf.R | 32 +
R/sim.spatialProcess.R | 128 +++
R/smooth.2d.R | 75 ++
R/spam_2lz.R | 69 ++
R/spatialProcess.R | 117 +++
R/splint.R | 91 ++
R/sreg.family.R | 317 +++++++
R/stationary.cov.R | 148 ++++
R/stationary.image.cov.R | 96 +++
R/stationary.taper.cov.R | 144 ++++
R/stats.R | 47 +
R/stats.bin.R | 36 +
R/summary.Krig.R | 50 ++
R/summary.ncdf.R | 53 ++
R/summary.spatialDesign.R | 26 +
R/summary.spatialProcess.R | 23 +
R/summary.sreg.R | 42 +
R/summaryGCV.Krig.R | 57 ++
R/summaryGCV.sreg.R | 46 +
R/supportsArg.R | 30 +
R/surface.family.R | 90 ++
R/test.for.zero.R | 40 +
R/tim.colors.R | 55 ++
R/transformx.R | 49 ++
R/unrollZGrid.R | 42 +
R/unscale.R | 25 +
R/vgram.family.R | 322 +++++++
R/vgram.matrix.R | 86 ++
R/wendland.family.R | 368 ++++++++
R/wendland.image.cov.R | 99 +++
R/which.max.matrix.R | 39 +
R/world.R | 39 +
R/xline.R | 23 +
R/yline.R | 23 +
data/CO2.rda | Bin 0 -> 455372 bytes
data/COmonthlyMet.rda | Bin 0 -> 1294782 bytes
data/NorthAmericanRainfall.rda | Bin 0 -> 100801 bytes
data/PRISMelevation.rda | Bin 0 -> 667593 bytes
data/RCMexample.rda | Bin 0 -> 201318 bytes
data/RMelevation.rda | Bin 0 -> 94458 bytes
data/US.dat.rda | Bin 0 -> 44988 bytes
data/WorldBankCO2.rda | Bin 0 -> 3214 bytes
data/datalist | 11 +
data/lennon.rda | Bin 0 -> 38012 bytes
data/ozone2.rda | Bin 0 -> 31376 bytes
data/rat.diet.rda | Bin 0 -> 579 bytes
data/world.dat.rda | Bin 0 -> 15254 bytes
debian/README.source | 51 --
debian/README.test | 8 -
debian/changelog | 36 -
debian/compat | 1 -
debian/control | 36 -
debian/copyright | 33 -
debian/docs | 3 -
debian/rules | 4 -
debian/source/format | 1 -
debian/tests/control | 3 -
debian/tests/run-unit-test | 39 -
debian/watch | 2 -
inst/CITATION | 19 +
man/BD.Rd | 76 ++
man/CO.Rd | 265 ++++++
man/CO2.Rd | 100 +++
man/CovarianceUpper.Rd | 70 ++
man/Exponential.Rd | 156 ++++
man/FORTRAN.internal.Rd | 58 ++
man/Krig.Amatrix.Rd | 110 +++
man/Krig.Rd | 690 +++++++++++++++
man/Krig.engine.default.Rd | 291 +++++++
man/Krig.null.function.Rd | 51 ++
man/Krig.replicates.Rd | 90 ++
man/MLESpatialProcess.Rd | 222 +++++
man/NorthAmericanRainfall.Rd | 76 ++
man/QTps.Rd | 263 ++++++
man/RCMexample.Rd | 76 ++
man/REML.test.Rd | 252 ++++++
man/RMprecip.Rd | 165 ++++
man/Tps.Rd | 489 +++++++++++
man/US.Rd | 53 ++
man/US.dat.Rd | 34 +
man/Wendland.Rd | 160 ++++
man/WorldBank.Rd | 103 +++
man/add.image.Rd | 76 ++
man/arrow.plot.Rd | 117 +++
man/as.image.Rd | 116 +++
man/as.surface.Rd | 107 +++
man/bplot.Rd | 117 +++
man/bplot.xy.Rd | 78 ++
man/colorbar.plot.Rd | 131 +++
man/compactToMat.Rd | 123 +++
man/cover.design.Rd | 415 +++++++++
man/drape.plot.Rd | 171 ++++
man/exp.cov.Rd | 405 +++++++++
man/fields-internal.Rd | 264 ++++++
man/fields-stuff.Rd | 148 ++++
man/fields.Rd | 182 ++++
man/fields.grid.Rd | 84 ++
man/fields.hints.Rd | 180 ++++
man/fields.tests.Rd | 150 ++++
man/flame.Rd | 49 ++
man/gcv.Krig.Rd | 152 ++++
man/grid.list.Rd | 226 +++++
man/image.cov.Rd | 242 ++++++
man/image.plot.Rd | 482 +++++++++++
man/image.smooth.Rd | 173 ++++
man/image2lz.Rd | 182 ++++
man/interp.surface.Rd | 119 +++
man/lennon.Rd | 36 +
man/mKrig.MLE.Rd | 266 ++++++
man/mKrig.Rd | 525 +++++++++++
man/mKrigMLE.Rd | 356 ++++++++
man/minitri.Rd | 55 ++
man/ozone.Rd | 66 ++
man/ozone2.Rd | 71 ++
man/plot.Krig.Rd | 113 +++
man/plot.surface.Rd | 105 +++
man/poly.image.Rd | 130 +++
man/predict.Krig.Rd | 189 ++++
man/predictSE.Krig.Rd | 139 +++
man/predictSurface.Rd | 203 +++++
man/print.Krig.Rd | 55 ++
man/pushpin.Rd | 72 ++
man/qsreg.Rd | 181 ++++
man/quilt.plot.Rd | 117 +++
man/rat.diet.Rd | 52 ++
man/rdist.Rd | 157 ++++
man/rdist.earth.Rd | 88 ++
man/registeredC.Rd | 85 ++
man/ribbon.plot.Rd | 89 ++
man/set.panel.Rd | 78 ++
man/sim.Krig.Rd | 347 ++++++++
man/sim.rf.Rd | 120 +++
man/smooth.2d.Rd | 163 ++++
man/spam2lz.Rd | 123 +++
man/spatialProcess.Rd | 376 ++++++++
man/splint.Rd | 112 +++
man/sreg.Rd | 339 ++++++++
man/stats.Rd | 75 ++
man/stats.bin.Rd | 86 ++
man/summary.Krig.Rd | 68 ++
man/summary.ncdf.Rd | 54 ++
man/supportsArg.Rd | 85 ++
man/surface.Krig.Rd | 123 +++
man/tim.colors.Rd | 182 ++++
man/transformx.Rd | 72 ++
man/vgram.Rd | 195 +++++
man/vgram.matrix.Rd | 123 +++
man/world.Rd | 64 ++
man/xline.Rd | 56 ++
man/yline.Rd | 52 ++
src/ExponentialUpperC.c | 55 ++
src/addToDiagC.c | 39 +
src/compactToMatC.c | 69 ++
src/compactToMatCOLD.c | 65 ++
src/css.f | 360 ++++++++
src/csstr.f | 25 +
src/cvrf.f | 27 +
src/dchold.f | 77 ++
src/ddfind.f | 52 ++
src/dlv.f | 71 ++
src/dmaket.f | 89 ++
src/drdfun.f | 28 +
src/dsetup.f | 73 ++
src/evlpoly.f | 34 +
src/evlpoly2.f | 41 +
src/expfn.f | 16 +
src/expfnC.c | 41 +
src/ifind.f | 33 +
src/igpoly.f | 82 ++
src/init.c | 77 ++
src/inpoly.f | 163 ++++
src/mltdrb.f | 50 ++
src/mltdtd.f | 61 ++
src/multW.f | 41 +
src/multebC.c | 81 ++
src/multrb.f | 45 +
src/radbas.f | 33 +
src/radfun.f | 22 +
src/rcss.f | 238 +++++
src/rcssr.f | 38 +
src/rcsswt.f | 48 ++
src/rdist.f | 81 ++
src/rdistC.c | 48 ++
src/sortm.f | 93 ++
tests/Krig.Z.test.R | 285 ++++++
tests/Krig.Z.test.Rout.save | 109 +++
tests/Krig.se.W.R | 88 ++
tests/Krig.se.W.Rout.save | 68 ++
tests/Krig.se.grid.test.R | 94 ++
tests/Krig.se.grid.test.Rout.save | 60 ++
tests/Krig.se.test.R | 194 +++++
tests/Krig.se.test.Rout.save | 248 ++++++
tests/Krig.test.R | 359 ++++++++
tests/Krig.test.Rout.save | 158 ++++
tests/Krig.test.W.R | 124 +++
tests/Krig.test.W.Rout.save | 90 ++
tests/KrigGCVREML.test.R | 115 +++
tests/KrigGCVREML.test.Rout.save | 79 ++
tests/Likelihood.test.R | 136 +++
tests/Likelihood.test.Rout.save | 86 ++
tests/REMLest.test.R | 131 +++
tests/REMLest.test.Rout.save | 80 ++
tests/Tps.test.R | 165 ++++
tests/Tps.test.Rout.save | 94 ++
tests/Wend.test.R | 97 +++
tests/Wend.test.Rout.save | 69 ++
tests/cov.test.R | 157 ++++
tests/cov.test.Rout.save | 71 ++
tests/cov.test2.R | 300 +++++++
tests/cov.test2.Rout.save | 119 +++
tests/derivative.test.R | 200 +++++
tests/derivative.test.Rout.save | 108 +++
tests/diag.multiply.test.R | 36 +
tests/diag.multiply.test.Rout.save | 59 ++
tests/diagonal2.test.R | 47 +
tests/diagonal2.test.Rout.save | 66 ++
tests/evlpoly.test.R | 69 ++
tests/evlpoly.test.Rout.save | 59 ++
tests/fastTpsPredict.test.R | 67 ++
tests/fastTpsPredict.test.Rout.save | 67 ++
tests/mKrig.MLE.test.R | 101 +++
tests/mKrig.MLE.test.Rout.save | 64 ++
tests/mKrig.Z.R | 54 ++
tests/mKrig.Z.Rout.save | 67 ++
tests/mKrig.parameters.test.R | 240 ++++++
tests/mKrig.parameters.test.Rout.save | 314 +++++++
tests/mKrig.se.test.R | 174 ++++
tests/mKrig.se.test.Rout.save | 81 ++
tests/mKrig.test.R | 287 ++++++
tests/mKrig.test.Rout.save | 114 +++
tests/mKrigMLETest.R | 243 ++++++
tests/mKrigMLETest.Rout.save | 94 ++
tests/mKrigREMLTest.R | 107 +++
tests/mKrigREMLTest.Rout.save | 70 ++
tests/misc.test.R | 45 +
tests/misc.test.Rout.save | 60 ++
tests/spam.test.R | 160 ++++
tests/spam.test.Rout.save | 76 ++
tests/sreg.test.R | 81 ++
tests/sreg.test.Rout.save | 83 ++
tests/vgram.test.R | 99 +++
tests/vgram.test.Rout.save | 65 ++
378 files changed, 39299 insertions(+), 217 deletions(-)
diff --git a/DESCRIPTION b/DESCRIPTION
new file mode 100644
index 0000000..6aa1f42
--- /dev/null
+++ b/DESCRIPTION
@@ -0,0 +1,41 @@
+Package: fields
+Version: 9.0
+Date: 2017-06-03
+Title: Tools for Spatial Data
+Authors@R: c( person("Douglas", "Nychka", role = c("aut", "cre"),
+ email = "nychka at ucar.edu"),
+ person("Reinhard", "Furrer", role = c("aut"),
+ email = "reinhard.furrer at math.uzh.ch"),
+ person("John", "Paige", role = c("aut"),
+ email = "paigejo at uw.edu"),
+ person("Stephan", "Sain", role = "aut",
+ email = "sainsr2 at gmail.com"))
+Author: Douglas Nychka [aut, cre], Reinhard Furrer [aut], John Paige [aut], Stephan Sain [aut]
+Maintainer: Douglas Nychka <nychka at ucar.edu>
+Description: For curve, surface and function fitting with an emphasis
+ on splines, spatial data and spatial statistics. The major methods
+ include cubic and thin plate splines, Kriging, and compactly supported
+ covariance functions for large data sets. The splines and Kriging methods are
+ supported by functions that can determine the smoothing parameter
+ (nugget and sill variance) and other covariance function parameters by cross
+ validation and also by restricted maximum likelihood. For Kriging
+ there is an easy-to-use function that also estimates the correlation
+ scale (range parameter). A major feature is that any covariance function
+ implemented in R and following a simple format can be used for
+ spatial prediction. There are also many useful functions for plotting
+ and working with spatial data as images. This package also contains
+ an implementation of sparse matrix methods for large spatial data
+ sets and currently requires the sparse matrix (spam) package. Use
+ help(fields) to get started and for an overview. The fields source
+ code is deliberately commented and provides useful explanations of
+ numerical details as a companion to the manual pages. The commented
+ source code can be viewed by expanding the source code version
+ and looking in the R subdirectory. The reference for fields can be generated
+ by the citation function in R and has DOI <doi:10.5065/D6W957CT>.
+License: GPL (>= 2)
+URL: http://www.image.ucar.edu/fields
+Depends: R (>= 3.0), methods, spam, maps
+NeedsCompilation: yes
+Packaged: 2017-06-04 22:20:38 UTC; nychka
+Repository: CRAN
+Date/Publication: 2017-06-06 17:06:25 UTC
diff --git a/LICENSE.note b/LICENSE.note
new file mode 100644
index 0000000..5f2e6b0
--- /dev/null
+++ b/LICENSE.note
@@ -0,0 +1,12 @@
+All R code and documentation in this package (fields) is licensed
+under the terms of the GPL license.
+
+NOTE:
+
+The sparse matrix methods used in fields are supported by the package
+spam. The spam package contains some FORTRAN routines where some
+licensing issues are unclear. Please refer to the spam license
+information for details. Note that many functions in fields (e.g. Tps,
+Krig, and all the graphical functions) function independently of the spam
+package, so much of fields will be functional under a GPL license.
+
diff --git a/MD5 b/MD5
new file mode 100644
index 0000000..4dcdce5
--- /dev/null
+++ b/MD5
@@ -0,0 +1,365 @@
+eca2b30211ac2797c5274344c1e2e599 *DESCRIPTION
+3b1b7ce654d028cbe16127df91148c8b *LICENSE.note
+6e10c98a165ae9ba3892fb7105fc17c2 *NAMESPACE
+5137cbc65a5fd88d9e33692c6b82a2d3 *R/BD.R
+c0bae8fcb42622c73f8bb3ca5d2859a8 *R/ChicagoO3.R
+b3e7e7a608eb5488fa476ed480eb0162 *R/Exponential.R
+0b1b2c4f46e5269fa42b42cb84396049 *R/ExponentialUpper.R
+32a37a3a51ec58a2a19495d735ac4b5f *R/Krig.family.R
+96be4e6fbd4421db717567ab24511e97 *R/MLE.Matern.R
+1a7bf058d8e07a9a730d45a6bd467ffb *R/MLESpatialProcess.R
+0603d90f368134b62ec57cf5c763399a *R/MLESpatialProcess.fast.R
+b2a133b97e874d8aa3b8bafc50c2606d *R/MLEfast.R
+981f7574f4580a3c01dbd8d8cd19cd3c *R/Matern.R
+490351f1b1ca6c2f24757f8a846bcab0 *R/Matern.parameters.R
+4de07e5827043ee5514869a8c4a43455 *R/QTps.R
+6ad9d9d1b2499004e70555d4b2bd0f1f *R/REMLtest.R
+aec49bfc420e3ded630f1042f78db1e4 *R/RMprecip.R
+f3fd67532284ed9a07927710ee5061a2 *R/RadialBasis.R
+c6e957b7be01012215059bbe39a914a3 *R/SUBSCRIPTINGSpatialDesign.R
+f346b0a400c20f6556e16d69e7af182c *R/Tps.R
+5d4d1ae775bcc8334414a8352716ca55 *R/US.R
+bf1ce44a134e66439980775dc6c1ec6a *R/add.image.R
+cda99fd7376de145aa8cc30237989b92 *R/arrow.plot.R
+53ad5c155ad6169651450b7be118874d *R/as.image.R
+5ef72c6fad313cf8a8f7cb7fc3ac0a48 *R/as.surface.R
+ab2e562b917774234e8659c3dab06c6a *R/bisection.search.R
+4328ace7074bc985c5617191b6ec733f *R/bplot.family.R
+607becd497ad5e366d166e63e2b2d087 *R/cat.matrix.R
+ad602f452deb2c0cb8786203972907c3 *R/cat.to.list.R
+2210e2c6472bec2914e7a216d2507fe2 *R/ceiling2.R
+5151e2f5046316c2952d55f298b0cc4a *R/coef.Krig.R
+5389bda1e267e0e58729be7f6b90c6bc *R/color.scale.R
+9a2c6da1d915174308324d5e6d4fd208 *R/colorbar.plot.R
+487f04cab7f68b1fd949f2916772bef4 *R/compactToMat.R
+8ee4c92809b12df27adaa5f01810a528 *R/cover.design.R
+0b20086e5d8876a206077cf187c2a782 *R/cubic.cov.R
+fcb5476f729738804569c7d9ad1ba5da *R/describe.R
+94603119d8fd07269b176dd2a74298dc *R/discretize.image.R
+16632a5c694c2cf21e4eebf9fca499ee *R/double.exp.R
+4a14ea9c10a50f618e424c7543f35241 *R/drape.color.R
+17ceab42d4ad1dd7b79f588dd94025ab *R/drape.plot.R
+05817c5ab6604dee1984e3618dfe4cae *R/dyadic.2check.R
+fbd72fa15a4d7a6eca0fa69feb87dd2c *R/dyadic.check.R
+5de45cbdc5ac367dc54322b9f5eacda0 *R/evlpoly.R
+1bf9342aaf140ed582ffd2f8c85e1b4a *R/evlpoly2.R
+191a5655ba44461526672e375f6be8ab *R/exp.cov.R
+4c32e70b6c3e03e8d54954eebcda1d4e *R/exp.earth.cov.R
+9df442f9ba15b9c81169124b37f53673 *R/exp.image.cov.R
+daa6f5cbde40fd9edcef71e49156f0a8 *R/exp.simple.cov.R
+2047cac1b25a60cdee47f0cf88ac89be *R/fast.1way.R
+8d5c4e5b6876d7d21a8051bdc96e38c9 *R/fastTps.MLE.R
+c1705b82ccb3f3a4efe2a85654e0ce90 *R/fastTps.family.R
+be1caf59c4f536257ba1d51556d3c4f4 *R/fastTpsMLE.R
+b759be2a5f49a3a0d357fe7588322440 *R/fields.color.picker.R
+2cb185ca3471e1d2461ad1e2ad175e53 *R/fields.convert.grid.R
+197b3d9f42b9a7afec007cfe4315ae49 *R/fields.derivative.poly.R
+6e74ef3e0a48a970ed43965955ade1a1 *R/fields.diagonalize.R
+2a597d72f19f6bb43974bb2ff7718d79 *R/fields.diagonalize2.R
+a7639c2165022b68f05092cf7600aa68 *R/fields.duplicated.matrix.R
+436f973f2cb27923036524ca373ef19f *R/fields.mkpoly.R
+c3b2b783555be55be6c436a0f6642896 *R/fields.rdist.near.R
+cd1eeb10e0fa27449da65ef2b395be90 *R/fields.style.R
+2a8e5d6ed14bb99b7a3139258b154340 *R/fields.x.to.grid.R
+05a9ce5bd585e7fd6e5b522183a1c305 *R/find.upcross.R
+a695f54ccf9070467cdced681ce38f17 *R/fitted.Krig.R
+32c973e0ee59595c853428b72ebfba0d *R/flame.R
+6fa398c50e0d9258409d7d692e8c1420 *R/gauss.cov.R
+f1d864d45131c33e82ca83e01e5920a6 *R/gcv.Krig.R
+7de46064534f74630dc6ca74f96d617a *R/gcv.sreg.R
+103f5c40893df8e8de66b9823223e688 *R/golden.section.search.R
+577137879383e107adc33ef100f187fa *R/image.family.R
+fe85ac067d5716fb8b447df8d5e3062d *R/image.plot.R
+b3c7049da6f6e68d48ddf7b7be19cb8a *R/image.smooth.R
+15887f49f76fed749d76b4493d1cc4c1 *R/in.poly.R
+c31da4d8c2d3ddc46f4182729cf5812a *R/interp.surface.R
+37312363cfff6f40bf19e35e85a342a9 *R/interp.surface.grid.R
+33a83a351f0321b8d2c4e4fdc5920d50 *R/larry.colors.R
+d436aca78973083c501c0e830bf10348 *R/mKrig.MLE.R
+afa4d5c2f24a7f6f9a23531c1c4dcf95 *R/mKrig.MLE.joint.R
+1f81f9e9c9d67f34b8d3a569f6a5db7b *R/mKrig.R
+0a2981c0ebe2760857970e8396b6be6a *R/mKrig.family.R
+83e05a3216861f810135a6740a4a5924 *R/mKrigCheckXY.R
+73490ac7d494eb514a680d61da9e67f5 *R/mKrigMLEGrid.R
+bc799e91eceeef3312d18585638358d8 *R/mKrigMLEJoint.R
+736826635d7ea8514708235c41292889 *R/make.surface.grid.R
+0e3e26d88ed98c2ece937d8926fc81a1 *R/matern.image.cov.R
+b61ee10908747de4c01852eaace3c4e3 *R/minimax.crit.R
+52e1cb5a744e06824013affd4f08750d *R/minitri.R
+b5642bd99f6d155b01ac863181756c25 *R/parse.grid.list.R
+d5a43bc619c14fdfc907fece953d8ca1 *R/plot.Krig.R
+efe84a1cc3a9e4f71b4d23afdf1d4266 *R/plot.spatialDesign.R
+c478b8ba185f92fcd52469dd95e9977b *R/plot.spatialProcess.R
+27a5ca4cd998998ca592a43e0608461c *R/plot.sreg.R
+73cd0bdec5a60ac43c7699eeb1705641 *R/plot.surface.R
+bb392d49bd9c7c32a4187a3d60339ef3 *R/plot.vgram.matrix.R
+01bb69c92527f09e69eac37e4f0c8a3e *R/poly.image.R
+5dc2402fb41ac91cd928358cabcf1711 *R/predict.Krig.R
+e921730dc09383253673c6642b6fe2c2 *R/predict.interp.surface.R
+1f65025d17cd091e6c26a384edd869dc *R/predict.sreg.R
+38dbf75448865a3fa32b324ba2e11b57 *R/predictDerivative.Krig.R
+2b36a91e12de277d3d5a95e6ab573b45 *R/predictSE.R
+23dd4426b858588797c18d06be63c7a8 *R/predictSE.family.R
+c707e01468e49498d6b899cab144a44e *R/predictSEUsingKrigA.R
+d9016e89357c91b1d63f173f13b7360e *R/predictSurface.Krig.R
+88d6df00536cffcb0c73f15e34a4e4a7 *R/predictSurface.family.R
+1ecbd19e1b7f4ab7ee8404e798127cdf *R/print.Krig.R
+bd284079ee5f4c0e017d680c636ddb0f *R/print.spatial.design.R
+9bd855e8c8f1da729abe984df153082b *R/print.spatialProcess.R
+bdae2913a994b4208ad6c0a978520bc9 *R/print.sreg.R
+0c52b6bc1fecf2aa50e8749daa84e73a *R/print.summary.Krig.R
+2ae18ab991af4fddd6e1b11b26d71555 *R/print.summary.spatialProcess.R
+8bb7d9ee9b1e3dffcce12e3f6935034c *R/print.summary.sreg.R
+872a9eeeb87835b580562a327cdae954 *R/print.summarySpatialDesign.R
+d0824cfeddfa85773f2a515adb5a280e *R/printGCVWarnings.R
+2de3e239a47b159f28885c3ebd12d4b4 *R/qr.q2ty.R
+4e8ac99cfae635738f448e519c9e71d1 *R/qr.yq2.R
+2e5d1fda4fbe31407871a079de6493aa *R/qsreg.family.R
+2d0984e2974517d7bdd0eae8256d0486 *R/quickPrint.R
+37f6f6dd0f20000e68b5c159a7a13ca2 *R/quilt.plot.R
+79b5e7ac8fac2b8e346fce0c75d9cbd1 *R/rad.cov.R
+2b2462456027d0b93f607dd2c5b9aeb2 *R/rad.image.cov.R
+57d959d63e7902e061970c01411acb3d *R/rad.simple.cov.R
+e3d41067d3d788ed0691edf3a34b8b42 *R/radbas.constant.R
+381a73652cd30f71ded047996d4276fc *R/rat.diet.R
+05838449c6431eb588318e1edeaee590 *R/rdist.R
+4b818c2aaae835dfc3192119737b0e31 *R/rdist.earth.R
+d5b1cfa87817311875e60ebbe5dd84aa *R/rdist.earth.vec.R
+3018a0985e02e923a86579fc64e1f45f *R/rdist.vec.R
+2862a2ae91fba738017566a27642c908 *R/residuals.Krig.R
+e0881cae66b40ecb1ed0078113eb37db *R/ribbon.plot.R
+9ae15ed80eb12b39a73713b9f3cbd2f1 *R/set.panel.R
+f3b0c6c92c0791a5f1e6a0c772cdd998 *R/setup.image.smooth.R
+b0217896eccdc94ae13f07fb939f623f *R/sim.Krig.R
+2b4852a9195006af2e63fb26b589bcff *R/sim.Krig.approx.R
+cc67f146eb97fb0b00012aecabf0638f *R/sim.mKrig.approx.R
+f22556326ed6eab778350e3c5b4f7fc3 *R/sim.rf.R
+6c46e9d66951ffda984b801d74956f43 *R/sim.spatialProcess.R
+31e954a30cfc8afed72560bf831f473c *R/smooth.2d.R
+aa2cf4d8ee2c23354608878e6077c326 *R/spam_2lz.R
+0a8d61af328c74cd384a5630adc0833b *R/spatialProcess.R
+1d0bb0688eb5066c8f1940ab77452b23 *R/splint.R
+c26ce8ac716a061a5bfac2b0f141cf51 *R/sreg.family.R
+c515646bcb3e9926e313e96db7e2071f *R/stationary.cov.R
+f19375bcc3296878a2be42ef7e72b391 *R/stationary.image.cov.R
+e210b7f99253ea9b060f3f241bcef6d5 *R/stationary.taper.cov.R
+63ce8c1593f701a74751d50a9e9cb444 *R/stats.R
+249e784b92bb3e5ca0dbfc797d2c6395 *R/stats.bin.R
+a8f7f73e31dcab71fc1c213289cf9b06 *R/summary.Krig.R
+3522a4f37e5de6971312e1483e2a8dea *R/summary.ncdf.R
+ba9fc0592a6d2186fe60249539e5abf9 *R/summary.spatialDesign.R
+9ee9c3b292f754c33022815c3f3d3bc4 *R/summary.spatialProcess.R
+5704f74f617e723643005399b4e75740 *R/summary.sreg.R
+640ab925d80b6e19776ae96606ca20f9 *R/summaryGCV.Krig.R
+55c4150b595a62b5c8ee96f5875e91ab *R/summaryGCV.sreg.R
+75ad1473ff8c4c777bf3c34ea5a1593b *R/supportsArg.R
+235a33f6889ff931cabb5dee3ba0349c *R/surface.family.R
+cdabb8e64742b7652489b97b651d2c02 *R/test.for.zero.R
+5a0055352043860ca1b66c3b470f4e96 *R/tim.colors.R
+0c68d725f55be46aed348ad6374d3b45 *R/transformx.R
+f99bbcbe693ec35942a127de28810238 *R/unrollZGrid.R
+94835525bc6026e7fe678d3d4490bbbd *R/unscale.R
+31f944110c365cc1cce2d78cc99af1f6 *R/vgram.family.R
+61132570bf5e3f4b2606eb3b36f2af94 *R/vgram.matrix.R
+dcebc91fd6d6db3098aa0732f2508a46 *R/wendland.family.R
+32edd6f10a604cb18e39fa518853d4b6 *R/wendland.image.cov.R
+3b63ce0b34ab3635d012ca8165f14328 *R/which.max.matrix.R
+fdc9d7fe11f771602f07f1234955696a *R/world.R
+4f1b3fb9078e2db51686962b7fbafefa *R/xline.R
+8702a3c6e7e27d364fea0d7bca0afb8c *R/yline.R
+def2e3c5b2487cf369719ecedc0ec009 *data/CO2.rda
+5d5cc9f8d3c5b782b5d87844727694f4 *data/COmonthlyMet.rda
+510e588dbd657d31f456f0a1a4f45adc *data/NorthAmericanRainfall.rda
+0c0be8a07fa347d68fc36fc6a117c759 *data/PRISMelevation.rda
+e98af9c5d6b770b5f04819fba9aa72af *data/RCMexample.rda
+ec8555f8006ae1cbdf3937be5609b37e *data/RMelevation.rda
+5f0f32a5ee0dbc877a28c20bdeb6f64d *data/US.dat.rda
+5baf657b81b2ee2809dc96ecde12794d *data/WorldBankCO2.rda
+66b8754c3221f29585a9b52f84a34e9f *data/datalist
+f560c509a3416dcb22274af12ebf5d07 *data/lennon.rda
+c909744661a3fbd93bbc36773d7183d3 *data/ozone2.rda
+43a5afaacf0deb2f95d4288968aa7168 *data/rat.diet.rda
+cafddcfafb1344f9158d8607867d0575 *data/world.dat.rda
+ce4feec1dfb98c62c42b48c88d99abb6 *inst/CITATION
+4f38f7f2844d2a2f44b7f70d73bd7522 *man/BD.Rd
+af4f6aec96e3f05f34f49e01198e3d01 *man/CO.Rd
+b152caacdf8bfca6dfcc391f77a31a69 *man/CO2.Rd
+455f35fc0a2c48f15aee935213b79807 *man/CovarianceUpper.Rd
+b382f5873679354d36799a57e86d54dd *man/Exponential.Rd
+627c4258981c3b597d63b845204fd447 *man/FORTRAN.internal.Rd
+706afb4b4649bdfa3f898ccd48e80bdd *man/Krig.Amatrix.Rd
+d679bd41f61c4fb7679b675810dce40e *man/Krig.Rd
+ad63b0555ae880488d60d7ded0d44421 *man/Krig.engine.default.Rd
+b474b79c643b9a18bfa08b215e5bf0e1 *man/Krig.null.function.Rd
+137afa39d95eb043d52bc891ca3bbcae *man/Krig.replicates.Rd
+fc9bba671cfc127934d8d112a1b73b42 *man/MLESpatialProcess.Rd
+8e7ef5967035b933d4b90a0cfb885926 *man/NorthAmericanRainfall.Rd
+570611860686cf659bdc2af31486075b *man/QTps.Rd
+feb1f1777c8eece8a1f5855c86fa43bb *man/RCMexample.Rd
+31dc908213d0dcccdfb156b1d228a91c *man/REML.test.Rd
+a2c757200f8244b8537d8889d94b4efc *man/RMprecip.Rd
+cf50d258fe84e188dcbea4234892304c *man/Tps.Rd
+295b07b27c6ef3fcb06313050a8b8771 *man/US.Rd
+f49333a2c7644453257ed81b9586ed88 *man/US.dat.Rd
+67f403c6f0bc9fe180fb8dfcab228adf *man/Wendland.Rd
+589e8360339f1d709d4839ba0ff309f4 *man/WorldBank.Rd
+c48f84135edbeebd66d4a28770e9bf88 *man/add.image.Rd
+3db903c522ded03dceb814d75a8a6b5c *man/arrow.plot.Rd
+70e9739a5885ab9aa3cf99da377caf09 *man/as.image.Rd
+814cdadda2f50fe00c280d15937d6a9a *man/as.surface.Rd
+10d5a7fa13bab78dc0eed37ac2188849 *man/bplot.Rd
+19e405a7e6265a181a25deecb7ae38e6 *man/bplot.xy.Rd
+8342236651b2bf2ffd6cd2f900a34450 *man/colorbar.plot.Rd
+b34472336cf1b0a8e0111b4ae6b70090 *man/compactToMat.Rd
+ae18f416e93a9cf271aac88a326d1682 *man/cover.design.Rd
+ca8f5ef6a334960c5e58d90650232202 *man/drape.plot.Rd
+c3aee807e048046fe9f4df24e536b620 *man/exp.cov.Rd
+8eabcd2497f28ea710ea1870348ea798 *man/fields-internal.Rd
+81631a672d7e9ed6124b9d6d9cf807ba *man/fields-stuff.Rd
+872f421784fbdbcc1dbc746aa889092c *man/fields.Rd
+ea7a0be950809040f44199f2275008bf *man/fields.grid.Rd
+41e485b9ddf070c647324f1fb5946da1 *man/fields.hints.Rd
+4043c27a97ec82f7d7f7edfef998114c *man/fields.tests.Rd
+5175729cc6d9bced31a36b6ee9a95ade *man/flame.Rd
+76f0cbd8ac817ef5d23bcd0a00b3769e *man/gcv.Krig.Rd
+cb1e0fc343221bb100a76a14ebc27cd4 *man/grid.list.Rd
+ce2a30f801375fb9f6dfb633f3393d5d *man/image.cov.Rd
+569f775ed37b2de7b12b2d5d34d73862 *man/image.plot.Rd
+d8ad3c7f16e9ba425a6d1648eb5ceccc *man/image.smooth.Rd
+8e158daeaccae26ebce01460978cb759 *man/image2lz.Rd
+5a25029d08f6e51540ed8074ddca98e6 *man/interp.surface.Rd
+d79f272cc8fa6772511ee05a637f85eb *man/lennon.Rd
+5274dd7f54fca85755d8ada04afe3951 *man/mKrig.MLE.Rd
+0255328c97243934c2abbc989dd99aea *man/mKrig.Rd
+3c920ff6444b4599b353d360a528c032 *man/mKrigMLE.Rd
+59eb40475e19369dd784b751bb48568f *man/minitri.Rd
+2e3bdb59c44d062b918a4a103e3847fa *man/ozone.Rd
+16ad5e76018a6bd874e6bfcc8c9b53fa *man/ozone2.Rd
+99a29507f61f478c69c2fe4576f9eff9 *man/plot.Krig.Rd
+10b139ab6a7f4052fbcfd5608b8a5e34 *man/plot.surface.Rd
+ac6e06d5698f882e31056b0e984e78ae *man/poly.image.Rd
+6fa7e35bd353cca518a1fc89ccd89365 *man/predict.Krig.Rd
+2f62f0f067b7dd647f3e4f3a3b266e7a *man/predictSE.Krig.Rd
+af541195e392ccd1819c70c080b25d88 *man/predictSurface.Rd
+9a2932c0ed0d6d004ce324a216d8460d *man/print.Krig.Rd
+8a457076584973c5d9d8df10cc8c5c30 *man/pushpin.Rd
+3b745821cb849d53dfc43546005dcc14 *man/qsreg.Rd
+cd1fe98845991fcb3736c18c5f0d521c *man/quilt.plot.Rd
+7fbe5f072c1837e5cc1d203cff9d47f0 *man/rat.diet.Rd
+819ec2802c38ef5935c678859369cdfd *man/rdist.Rd
+1c199d25d26b5eb5a81bab3a86bc55ef *man/rdist.earth.Rd
+b39ae03fdb7b078fe00fad625d7bd7f2 *man/registeredC.Rd
+90b12449210fddbacb901b5f2e129bae *man/ribbon.plot.Rd
+2a2f8f1706c12606af60173dd227a52a *man/set.panel.Rd
+a7c1a81242e5a2f058397b56937b996f *man/sim.Krig.Rd
+acd58d0906d31a04e803f3c0ff7629da *man/sim.rf.Rd
+35038d539ae38f8113dd38cd00489ecf *man/smooth.2d.Rd
+32e8bea9db72ddcbe3410d820f5f6699 *man/spam2lz.Rd
+9c1ac584934977ae798836f0eebfe3b8 *man/spatialProcess.Rd
+69a06dba4b74b0cf6aca40d3e53986e9 *man/splint.Rd
+3c441c799329e764e49e62d2e75e561b *man/sreg.Rd
+2a80efec23dce56460167ba107480bfe *man/stats.Rd
+eff6eca96d31a592ca8d68e05dfaa686 *man/stats.bin.Rd
+de7606e79a914fa4ebe27be712bd2991 *man/summary.Krig.Rd
+35f254abd7834beccf841e25fbc4c5d8 *man/summary.ncdf.Rd
+0f7201cd5e37bb172b4fc797b0be6db3 *man/supportsArg.Rd
+f77becc7374469605ac92283b1b0095e *man/surface.Krig.Rd
+a6268151da6ca6acd2c4fb7705bb567d *man/tim.colors.Rd
+3167cc67ceb348cee0616c0428d37fd5 *man/transformx.Rd
+3d68e74ec9b163c7471ce405e184bb2f *man/vgram.Rd
+434f5bcd372d81004d63e86926b80f03 *man/vgram.matrix.Rd
+465683316aed6356ff441192f37fb63b *man/world.Rd
+4a0fe540401e6a22a3cd5b4dd2ea0ffb *man/xline.Rd
+29d20a93573793f6f87a91e880fb1d43 *man/yline.Rd
+4de2918ac493cda891a8585879bf2bf2 *src/ExponentialUpperC.c
+2e78a94defa5e08ef9a665771e91244b *src/addToDiagC.c
+ca7ce08599c43df98991056e5555f849 *src/compactToMatC.c
+90869a1ce4e86c5bc10e0cd6de44e9e4 *src/compactToMatCOLD.c
+f7e1e30e1ee729451ded71d4138d3af5 *src/css.f
+692ddb7d68e584682ccecb648b470b52 *src/csstr.f
+c0e0c3df48bc22db87ac07de3cfb2c86 *src/cvrf.f
+c2deef707b8c650bcce46365befb0dd0 *src/dchold.f
+4b9deae4895c3f56338488d9e94183f9 *src/ddfind.f
+7e9c5235540b76b10eb690e0b34dfc58 *src/dlv.f
+41eb7d2d62a8b2e6d6e1309ad66dcce1 *src/dmaket.f
+69018ab167c8899a5f2da639a4d611b4 *src/drdfun.f
+8011dc17b0cb03d11a3d67ce217a8d87 *src/dsetup.f
+9ba3f9047570f84ed5b1aeeb167a3a60 *src/evlpoly.f
+c8580f65bc5d9325fb0090d840873b61 *src/evlpoly2.f
+a5568eed2020760526944464d9927166 *src/expfn.f
+46b4ea9c5fb75897f6e7a902389203ed *src/expfnC.c
+da45db84991588d6da31bd8d9689b8a2 *src/ifind.f
+75aa9985f55e951f2ddb3e10ecf53a40 *src/igpoly.f
+3e831eac556ec355f58f65bf37c969f3 *src/init.c
+303e24564ee4ff97c8a17506f8a09df2 *src/inpoly.f
+09a48ba957f68258d4320475da0ebb25 *src/mltdrb.f
+568de6e0ef023f7c628b3c50d8120226 *src/mltdtd.f
+e21562dc1c29f83270bd5dbf3279c054 *src/multW.f
+74cc95c857cc9f35f9ce410816c4fb73 *src/multebC.c
+fbdceb31b477258590baf52992be74d2 *src/multrb.f
+80473498345e4e2f33e04473d7f108a0 *src/radbas.f
+396e38a316a0072d6ebb9cadd9b1bd16 *src/radfun.f
+0a962a9537ebbacec71a563a99a269c6 *src/rcss.f
+1b75a7515957b02f4f1278a4144597d0 *src/rcssr.f
+b3128c10beb9c896803cc4b32446c006 *src/rcsswt.f
+de1284c3065c8932aa5bfae2c1121ec3 *src/rdist.f
+3b81ea311da648d0e94f957f0eb22357 *src/rdistC.c
+7015fcbce5d53b8d954d8aa95d383cbd *src/sortm.f
+48278542cc173835bf74e3769757a7b8 *tests/Krig.Z.test.R
+a8367476bbda02b30f84fefd0d854326 *tests/Krig.Z.test.Rout.save
+40bfd397c34e0e7f394f5c792312d6ab *tests/Krig.se.W.R
+a62bfb36cd9c8da13ab12b57d415f5ee *tests/Krig.se.W.Rout.save
+4a5404ec50c0aa1bb18e534bdfebe22f *tests/Krig.se.grid.test.R
+52c175565062135da81bfc98484ae13d *tests/Krig.se.grid.test.Rout.save
+d6ce9b47e1f116fe2bcfae6e7457d5a1 *tests/Krig.se.test.R
+69390b90c39506af38f728444b329e0d *tests/Krig.se.test.Rout.save
+c492d0b3ab30eb8fb9c7910a9b902e4d *tests/Krig.test.R
+a525568ad9ba78edf113c29306b1609c *tests/Krig.test.Rout.save
+103e3e38b493b875797faac55500b95d *tests/Krig.test.W.R
+eaa94551778c03b8df3757d6d3df1649 *tests/Krig.test.W.Rout.save
+31f626f25a4d67925f078616862249ca *tests/KrigGCVREML.test.R
+9f51befc16b605ffe663716f678e5868 *tests/KrigGCVREML.test.Rout.save
+2d4736d9c2c74fcc0123d302430cd316 *tests/Likelihood.test.R
+989e8c55cdd712f7062ea6a27e5bc5c6 *tests/Likelihood.test.Rout.save
+3a6cd052ae5010c908b34e170fe6c1c8 *tests/REMLest.test.R
+d6b6f78664b0e1b7689532058640027a *tests/REMLest.test.Rout.save
+57680c401eb274cc459abdaf0d86495f *tests/Tps.test.R
+1e56cef9894a2bc49cb7c4b9c450846b *tests/Tps.test.Rout.save
+430d09632d2239dd5f1be4865c9f2907 *tests/Wend.test.R
+1a16544cb6251e07b58907b6370ab725 *tests/Wend.test.Rout.save
+92e515349ad198c63fa0ac2d7286f729 *tests/cov.test.R
+212f1ba4eb4e72e53f909373ddee99c3 *tests/cov.test.Rout.save
+d74ff67efbef848212c1dbdfaeca0146 *tests/cov.test2.R
+7b355890d73fe5b050d7e1d7b544c482 *tests/cov.test2.Rout.save
+66b69576294c9e91b6b2c383d9a5c95c *tests/derivative.test.R
+adf18f5de2b0dd7bdd8f6fe098c53587 *tests/derivative.test.Rout.save
+2e8c28817705ef907235def38fade15e *tests/diag.multiply.test.R
+2596bf2ba2cdce2650a0351bee598d51 *tests/diag.multiply.test.Rout.save
+5a4f08f2e9491d0fafb872de9308277c *tests/diagonal2.test.R
+563b222985b45db1b7edc3ad0e22cdbd *tests/diagonal2.test.Rout.save
+7646bf9eeb7772f35d81bb7a37a562cc *tests/evlpoly.test.R
+82868887687f9d9dd206a750a5a1cd43 *tests/evlpoly.test.Rout.save
+ade82dbec2ec8fbb5a703f5b42118c2f *tests/fastTpsPredict.test.R
+369ba4d820200dd079e52ffa5faca4f8 *tests/fastTpsPredict.test.Rout.save
+7a4bd01772c880b9e7d77ad2b8e14721 *tests/mKrig.MLE.test.R
+a4a30da5d86894db685714335d94f2ef *tests/mKrig.MLE.test.Rout.save
+cd1f7b105144f3af952e7b45164c1c30 *tests/mKrig.Z.R
+3dd5f670ee85882ebcd4ec2ef93b587f *tests/mKrig.Z.Rout.save
+befeacddd2cad565532d00a423d5537d *tests/mKrig.parameters.test.R
+76d33d66fb42a0aa2d1d85a5e87ad8d6 *tests/mKrig.parameters.test.Rout.save
+e0f8768a800922994f1b37d184b6a799 *tests/mKrig.se.test.R
+ceb8f0d9070ad9260651922a258ebaaf *tests/mKrig.se.test.Rout.save
+464665f4df851b5ae130ad247aa6fcd7 *tests/mKrig.test.R
+25d0898ce84a71c08767ea496b54bff4 *tests/mKrig.test.Rout.save
+225e2eae7ae3e331e0daff95c117d9e4 *tests/mKrigMLETest.R
+142078b04f54211e8667f100c3b053c9 *tests/mKrigMLETest.Rout.save
+acd8e18c32642e1872ea0963fd46d948 *tests/mKrigREMLTest.R
+a562dfe0e25cb37fd7f3f8811f4e30f1 *tests/mKrigREMLTest.Rout.save
+2f9e672a944be1108247f4edfb003a39 *tests/misc.test.R
+c60c88266e33453d9ae44842f4cc114c *tests/misc.test.Rout.save
+72646975b924f4f778709f50dd2f159a *tests/spam.test.R
+9eb00c7eab6d09e346959b186e5e8afa *tests/spam.test.Rout.save
+736316db4d5d57d935926205263e7ab8 *tests/sreg.test.R
+29fca5174aa443d121c61394b2843c37 *tests/sreg.test.Rout.save
+36b818f1d1adcae9379e449d8880b16c *tests/vgram.test.R
+8bbdb5af67bdf36d6e6ae8abfd1d622c *tests/vgram.test.Rout.save
diff --git a/NAMESPACE b/NAMESPACE
new file mode 100644
index 0000000..b5d22d5
--- /dev/null
+++ b/NAMESPACE
@@ -0,0 +1,64 @@
+exportPattern("^[^\\.]")
+ importFrom("grDevices", "chull", "col2rgb", "colors", "palette", "rgb",
+ "trans3d")
+ importFrom("graphics", "abline", "arrows", "axis", "box", "boxplot",
+ "contour", "hist", "lines", "locator", "matlines",
+ "matplot", "mtext", "pairs", "par", "persp", "plot",
+ "points", "polygon", "rect", "segments", "text", "title")
+ importFrom("stats", "D", "approx", "coef", "cor", "dist", "fft", "mad",
+ "median", "optim", "optimize", "predict", "quantile",
+ "rnorm", "sd", "var")
+importFrom("utils", "object.size")
+S3method("[",spatialDesign)
+S3method(coef, Krig)
+S3method(fitted, Krig)
+S3method(image, plot)
+S3method(image, smooth)
+S3method(print, Krig)
+S3method(print, mKrig)
+S3method(print,spatialDesign)
+S3method(print,spatialProcess)
+S3method(print,sreg)
+S3method(print,qsreg)
+S3method(print,summary.Krig)
+S3method(print,summarySpatialDesign)
+S3method(print,summary.spatialProcess)
+S3method(print,summary.sreg)
+S3method(summary,Krig)
+S3method(summary,sreg)
+S3method(summary,ncdf)
+S3method(summary,qsreg)
+S3method(summary,spatialDesign)
+S3method(summary,spatialProcess)
+S3method(summary,mKrig)
+S3method(plot,Krig)
+S3method(plot,spatialProcess)
+S3method(plot,surface)
+S3method(plot,spatialDesign)
+S3method(plot,sreg)
+S3method(plot,qsreg)
+S3method(plot,vgram.matrix)
+S3method(plot,vgram)
+S3method(surface,Krig)
+S3method(surface,mKrig)
+S3method(surface,default)
+S3method(predict,fastTps)
+S3method(predict,interp.surface)
+S3method(predict,Krig)
+S3method(predict,mKrig)
+S3method(predict,qsreg)
+S3method(predict,sreg)
+S3method(predict,surface)
+S3method(predict,surface.default)
+S3method(predict,Tps)
+S3method(predictSE,Krig)
+S3method(predictSE,mKrig)
+S3method(predictSurface,default)
+S3method(predictSurface,fastTps)
+S3method(predictSurface,Krig)
+S3method(predictSurface,mKrig)
+S3method(predictSurfaceSE,default)
+
+useDynLib(fields,.registration=TRUE)
+import("spam", "methods","maps")
+
diff --git a/R/BD.R b/R/BD.R
new file mode 100644
index 0000000..76e3204
--- /dev/null
+++ b/R/BD.R
@@ -0,0 +1,72 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"BD" <- structure(list(KCl = c(30, 30, 20, 50, 25,
+ 10, 40, 15, 50, 10, 20, 45, 50, 35, 40, 35, 30, 15, 40, 30,
+ 10, 50, 20, 10, 25, 45, 30, 25, 45, 10, 45, 10, 30, 30, 20,
+ 50, 10, 40, 15, 20, 45, 50, 35, 35, 30, 15, 30, 10, 50, 20,
+ 25, 45, 30, 25, 45, 10, 45, 10, 30, 30, 25, 50, 10, 40, 40,
+ 15, 35, 15, 10, 40, 10, 25, 35, 35, 30, 25, 45, 50, 40, 20,
+ 50, 30, 15, 10, 50, 15, 25, 45, 40), MgCl2 = c(5, 5, 4, 7,
+ 7, 4, 4, 6, 6, 7, 4, 7, 5, 6, 5, 7, 7, 6, 3, 4, 4, 4, 3,
+ 3, 3, 4, 6, 3, 6, 5, 3, 6, 5, 5, 4, 7, 4, 4, 6, 4, 7, 5,
+ 6, 7, 7, 6, 4, 4, 4, 3, 3, 4, 6, 3, 6, 5, 3, 6, 5, 5, 7,
+ 6, 7, 5, 3, 6, 3, 5, 6, 6, 7, 7, 7, 5, 4, 6, 5, 6, 3, 7,
+ 7, 4, 3, 4, 3, 3, 4, 4, 7), KPO4 = c(25, 25, 20, 20, 30,
+ 25, 20, 45, 35, 25, 40, 30, 45, 20, 40, 45, 40, 25, 35, 45,
+ 30, 40, 30, 20, 45, 30, 30, 35, 20, 40, 40, 35, 25, 25, 20,
+ 20, 25, 20, 45, 40, 30, 45, 20, 45, 40, 25, 45, 30, 40, 30,
+ 45, 30, 30, 35, 20, 40, 40, 35, 25, 25, 30, 35, 25, 40, 35,
+ 40, 25, 20, 45, 25, 20, 35, 40, 30, 35, 40, 35, 30, 45, 40,
+ 25, 20, 30, 45, 40, 30, 25, 25, 25), dNTP = c(625, 625, 1500,
+ 250, 1500, 1250, 1250, 1250, 1500, 1250, 250, 1250, 1000,
+ 1000, 1500, 750, 1000, 250, 250, 1500, 1000, 750, 750, 500,
+ 500, 500, 250, 1000, 500, 500, 1250, 750, 625, 625, 1500,
+ 250, 1250, 1250, 1250, 250, 1250, 1000, 1000, 750, 1000,
+ 250, 1500, 1000, 750, 750, 500, 500, 250, 1000, 500, 500,
+ 1250, 750, 625, 625, 1500, 1500, 1250, 1500, 250, 1500, 1500,
+ 750, 250, 1500, 750, 1250, 500, 1250, 1250, 1000, 250, 500,
+ 1000, 750, 1000, 250, 500, 1250, 750, 1000, 500, 750, 500),
+ lnya = c(12.904207, 12.672946, 9.172639, 9.86786, 9.87817,
+ 6.423247, 6.131226, 6.011267, 8.44247, 12.072541, 11.252859,
+ 9.088173, 10.089967, 13.946539, 8.985946, 9.197255, 9.786954,
+ 6.398595, 8.051978, 4.969813, 5.609472, 4.94876, 5.874931,
+ 6.50279, 6.811244, 9.69892, 10.348173, 8.101678, 11.703546,
+ 13.745088, 8.830543, 11.643954, 13.034624, 12.479909,
+ 8.166216, 9.711116, 8.665613, 7.659171, 7.992945, 11.140411,
+ 9.588777, 8.074026, 13.478638, 10.410305, 10.817776,
+ 7.575585, 7.021084, 7.912057, 6.44254, 6.042633, 7.130899,
+ 9.680344, 8.318742, 7.654443, 9.595603, 12.456831, 8.064636,
+ 11.060369, 12.128111, 13.191889, 9.268609, 8.273847,
+ 12.441145, 8.958025, 8.538955, 7.886081, 8.422883, 8.565983,
+ 11.342137, 8.457443, 8.38936, 10.606585, 11.3679, 8.665613,
+ 8.773385, 9.384294, 9.78132, 12.25009, 9.510445, 13.311329,
+ 11.14331, 9.441452, 9.056023, 8.846497, 8.76873, 9.130214,
+ 12.657148, 9.239899, 10.210972)), .Names = c("KCl", "MgCl2",
+ "KPO4", "dNTP", "lnya"), class = "data.frame", row.names = c("1",
+ "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12",
+ "13", "14", "15", "16", "17", "18", "19", "20", "21", "22",
+ "23", "24", "25", "26", "27", "28", "29", "30", "31", "32",
+ "33", "34", "35", "36", "38", "39", "40", "43", "44", "45",
+ "46", "48", "49", "50", "52", "53", "54", "55", "57", "58",
+ "59", "60", "61", "62", "63", "64", "65", "66", "67", "68",
+ "69", "70", "71", "72", "73", "75", "76", "77", "78", "79",
+ "80", "81", "82", "83", "84", "85", "86", "87", "88", "89",
+ "90", "91", "92", "93", "94", "95", "96"))
diff --git a/R/ChicagoO3.R b/R/ChicagoO3.R
new file mode 100644
index 0000000..8a5c773
--- /dev/null
+++ b/R/ChicagoO3.R
@@ -0,0 +1,57 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"ChicagoO3" <- structure(list(x = structure(c(10.2420950223404,
+ 3.80376523711237, 9.10894898014071, 9.62401536295848, -2.42853799498812,
+ -12.626852374789, -0.419779101996896, -7.88824165286121,
+ 3.54623204570348, -16.9019033521803, -27.2032310085453, -6.18852258956094,
+ -5.26140310048807, -12.2147992685346, -25.5550185835271,
+ -15.6657440334172, 21.109995699806, 12.9204402129956, 33.8836419936979,
+ 28.1148985061338, -8.81404505952784, 6.475624941694, -12.2040623901154,
+ -18.5689928883614, 6.8907291046231, -14.9714234763093, -2.03401039835273,
+ 10.6958505981398, 12.287083222701, -4.939739538856, 11.4568748968428,
+ 20.2432463455088, 35.0486281566463, 28.0610414140067, 23.2873435403219,
+ -20.2294095400778, -19.6759373228389, -16.9777602637997,
+ -17.5312324810386, -18.4998088612067), .Dim = as.integer(c(20,
+ 2)), .Dimnames = list(c("170310032", "170310037", "170310050",
+ "170311002", "170311003", "170311601", "170314002", "170314003",
+ "170317002", "170436001", "170890005", "170970001", "170971002",
+ "170973001", "171110001", "171971008", "180891016", "180892008",
+ "181270020", "181270024"), c("East.West", "North.South"))),
+ y = c(36.4902936963152, 34.6396930821552, 31.6444005657229,
+ 34.4646956838262, 37.7204739668803, 40.1342965426748,
+ 37.0181086910068, 38.4001686365134, 44.0485589002946,
+ 38.4870329290307, 42.2402282830657, 40.0049235817847,
+ 42.1090485712195, 39.6319596353327, 42.8054712629932,
+ 44.1097465187358, 35.1186331327201, 46.898470931915,
+ 42.9564231070325, 46.6868555984414), lon.lat = structure(c(-87.546,
+ -87.671, -87.568, -87.558, -87.792, -87.99, -87.753,
+ -87.898, -87.676, -88.073, -88.273, -87.865, -87.847,
+ -87.982, -88.241, -88.049, -87.335, -87.494, -87.087,
+ -87.199, 41.757, 41.978, 41.708, 41.616, 41.984, 41.668,
+ 41.855, 42.039, 42.062, 41.813, 42.05, 42.177, 42.391,
+ 42.29, 42.221, 41.592, 41.6, 41.639, 41.631, 41.617),
+ .Dim = as.integer(c(20, 2)), .Dimnames = list(c("170310032",
+ "170310037", "170310050", "170311002", "170311003",
+ "170311601", "170314002", "170314003", "170317002",
+ "170436001", "170890005", "170970001", "170971002",
+ "170973001", "171110001", "171971008", "180891016",
+ "180892008", "181270020", "181270024"), c("lon",
+ "lat")))), .Names = c("x", "y", "lon.lat"))
diff --git a/R/Exponential.R b/R/Exponential.R
new file mode 100644
index 0000000..0070ead
--- /dev/null
+++ b/R/Exponential.R
@@ -0,0 +1,37 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"Exponential" <- function(d, range = 1, alpha = 1/range, phi = 1.0) {
+ #
+ # Matern covariance function transcribed from Stein's book page 31
+ # nu==smoothness==.5, alpha == 1/range
+ #
+    # GeoR parameters map to kappa == smoothness and phi == range.
+    # To support old code from Fuentes and also the package SEHmodel,
+    # here phi is accepted as the marginal variance of the process (see below);
+    # within fields this parameter is "rho"
+
+ #
+ # check for negative distances
+ if (any(d < 0))
+ stop("distance argument must be nonnegative")
+ #
+ return(phi*exp(-d * alpha))
+}
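[Editor's note: the covariance evaluated above is simply phi * exp(-d * alpha) with alpha = 1/range. A minimal Python sketch of the same formula, for readers porting it outside R — the function and argument names here are illustrative, not part of fields:]

```python
import math

def exponential_cov(d, range_=1.0, alpha=None, phi=1.0):
    """Exponential covariance: phi * exp(-d * alpha), with alpha = 1/range.

    Mirrors the R Exponential() above; this is the Matern family
    with smoothness nu = 0.5.
    """
    if alpha is None:
        alpha = 1.0 / range_  # same default coupling as the R signature
    if d < 0:
        raise ValueError("distance argument must be nonnegative")
    return phi * math.exp(-d * alpha)
```

[At d == 0 this returns phi (the marginal variance), and it decays to phi/e at d == range.]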
diff --git a/R/ExponentialUpper.R b/R/ExponentialUpper.R
new file mode 100644
index 0000000..6f85e47
--- /dev/null
+++ b/R/ExponentialUpper.R
@@ -0,0 +1,34 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+ExponentialUpper = function(distMat, range = 1, alpha = 1/range) {
+ # Evaluates the exponential covariance function over the upper triangle of the distance matrix
+
+ if(nrow(distMat) != ncol(distMat))
+    stop('distance matrix is not square. Should not be calling ExponentialUpper.')
+
+ return(.Call("ExponentialUpperC", as.double(distMat), as.integer(nrow(distMat)), as.double(alpha), PACKAGE = "fields"))
+
+ #convert ans to standard matrix
+ #ans = ans[[1]]
+ #dim(ans) = dim(distMat)
+
+ #return(ans)
+}
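[Editor's note: the work above happens in compiled code (ExponentialUpperC), which evaluates the covariance over the upper triangle only. A rough Python sketch of that convention follows; leaving the lower triangle at zero is an assumption based on the function's name and comment, not on the C source:]

```python
import math

def exponential_upper(dist_mat, alpha=1.0):
    # dist_mat: square distance matrix given as a list of lists
    n = len(dist_mat)
    if any(len(row) != n for row in dist_mat):
        raise ValueError("distance matrix must be square")
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # fill the upper triangle (including the diagonal) only;
        # the lower triangle stays zero -- assumed behavior
        for j in range(i, n):
            out[i][j] = math.exp(-dist_mat[i][j] * alpha)
    return out
```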
diff --git a/R/Krig.family.R b/R/Krig.family.R
new file mode 100644
index 0000000..b8a78b2
--- /dev/null
+++ b/R/Krig.family.R
@@ -0,0 +1,1534 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"Krig" <- function(x, Y, cov.function = "stationary.cov",
+ lambda = NA, df = NA, GCV = FALSE, Z = NULL, cost = 1, knots = NA,
+ weights = NULL, m = 2, nstep.cv = 200, scale.type = "user",
+ x.center = rep(0, ncol(x)), x.scale = rep(1, ncol(x)), rho = NA,
+ sigma2 = NA, method = "REML", verbose = FALSE, mean.obj = NA,
+ sd.obj = NA, null.function = "Krig.null.function", wght.function = NULL,
+ offset = 0, na.rm = TRUE, cov.args = NULL,
+ chol.args = NULL, null.args = NULL, wght.args = NULL, W = NULL,
+ give.warnings = TRUE, ...)
+# the verbose switch prints many intermediate steps as an aid in debugging.
+#
+{
+ #
+ # create output list
+ out <- list()
+ ###########################################################
+ # First series of steps simply store pieces of the passed
+ # information to the output list (i.e. the Krig object)
+ ##########################################################
+ out$call <- match.call()
+ # turn off warning based on options
+ if( options()$warn < 0 ){
+ give.warnings<- FALSE
+ }
+ #
+ # save covariance function as its name
+ #
+ if( !is.character( cov.function)){
+ out$cov.function.name <- as.character(substitute(cov.function))
+ }
+ else{
+ out$cov.function.name<-cov.function
+ }
+ #
+ # save null space function as its name
+ #
+ out$null.function.name <- as.character(substitute(null.function))
+ #
+ # save weight function as its name if it is not a NULL
+ #
+ if (is.null(wght.function)) {
+ out$wght.function.name <- NULL
+ }
+ else {
+ out$wght.function.name <- as.character(substitute(wght.function))
+ }
+ out$W <- W
+ if (verbose) {
+ print(out$cov.function.name)
+ print(out$null.function.name)
+ print(out$wght.function.name)
+ }
+ #
+ # logical to indicate if the 'C' argument is present in cov.function
+ # -- a bit of esoteric R code!
+ C.arg.missing <- all(names(formals(get(out$cov.function.name))) !=
+ "C")
+ if (C.arg.missing)
+ stop("Need to have C argument in covariance function\nsee Exp.cov.simple as an example")
+ #
+ # save parameters values possibly passed to the covariance function
+ # also those added to call are assumed to be covariance arguments.
+ if (!is.null(cov.args))
+ out$args <- c(cov.args, list(...))
+ else out$args <- list(...)
+ #
+ # default values for null space function
+ out$null.args <- null.args
+ #
+ # set degree of polynomial null space if this is default
+ # mkpoly is used so often is it helpful to include m argument
+ # by default in Krig call.
+ if (out$null.function.name == "Krig.null.function") {
+ out$null.args <- list(m = m)
+ out$m <- m
+ }
+ #
+ # default values for Cholesky decomposition, these are important
+ # for sparse matrix decompositions used in Krig.engine.fixed.
+ if (is.null(chol.args)) {
+ out$chol.args <- list(pivot = FALSE)
+ }
+ else {
+ out$chol.args <- chol.args
+ }
+ # additional arguments for weight matrix.
+ out$wght.args <- wght.args
+ #
+ # the offset is the effective number of parameters used in the GCV
+ # calculations -- unless this is part of an additive model this
+ # is likely zero
+ out$offset <- offset
+ #
+ # the cost is the multiplier applied to the GCV eff.df
+ #
+ # lambda and df are two ways of parameterizing the smoothness
+ # and are related by a monotonic function that unfortunately
+ # depends on the locations of the data.
+    # lambda can be used directly in the linear algebra, df
+    # must be transformed to lambda numerically using the monotonic transformation
+    # sigma2 is the error variance and rho the multiplier for the covariance
+    # method is how to determine lambda
+    # the GCV logical forces the code to do the more elaborate decompositions
+    # that facilitate estimating lambda -- even if a specific lambda value is
+    # given.
+ out$cost <- cost
+ out$lambda <- lambda
+ out$eff.df <- df
+ out$sigma2 <- sigma2
+ out$rho <- rho
+ out$method <- method
+ out$GCV <- GCV
+ #
+ # correlation model information
+ #
+ out$mean.obj <- mean.obj
+ out$sd.obj <- sd.obj
+ out$correlation.model <- !(is.na(mean.obj[1]) & is.na(sd.obj[1]))
+ #
+ # transformation info
+ out$scale.type <- scale.type
+ out$x.center <- x.center
+ out$x.scale <- x.scale
+ #
+ # verbose block
+ if (verbose) {
+ cat(" Cov function arguments in call ", fill = TRUE)
+ print(out$args)
+ cat(" covariance function used is : ", fill = TRUE)
+ print(out$cov.function.name)
+ }
+ ###############################################################
+ # Begin modifications and transformations of input information
+ # note that many of these manipulations follow a strategy
+ # of passing the Krig object (out) to a function and
+ # then appending the information from this function to
+ # the Krig object (usually also called "out").
+ #In this way the Krig object is built up
+ # in steps and the process is easier to follow.
+ ###############################################################
+ # various checks on x and Y including removal of NAs in Y
+ # Here is an instance of adding to the Krig object
+    # in this case along with some bookkeeping to make sure arguments are consistent
+ out2 <- Krig.check.xY(x, Y, Z, weights, na.rm, verbose = verbose)
+ out <- c(out, out2)
+ # transform to correlation model (if appropriate)
+ # find replicates and collapse to means and pool variances.
+ # Transform unique x locations and knots.
+ if (out$correlation.model) {
+ out$y <- Krig.cor.Y(out, verbose = verbose)
+ }
+ out2 <- Krig.transform.xY(out, knots, verbose = verbose)
+ out <- c(out, out2)
+ # NOTE: knots have been transformed after this step
+ #############################################################
+ # Figure out what to do
+ #############################################################
+ #
+ # this functions works through the logic of
+ # what has been supplied for lambda
+ out2 <- Krig.which.lambda(out)
+ out[names(out2)] <- out2
+ # Make weight matrix for observations
+ # ( this is proportional to the inverse square root of obs covariance)
+ # if a weight function or W has not been passed then this is
+ # diag( out$weightsM) for W
+ # The checks represent a limitation of this model to
+    # the WBW type decomposition and no replicate observations.
+ out$nondiag.W <- (!is.null(wght.function)) | (!is.null(W))
+ # Do not continue if there there is a nondiagonal weight matrix
+ # and replicate observations.
+ if (out$nondiag.W) {
+ if (out$knot.model | out$fixed.model) {
+ stop("Non diagonal weight matrix for observations
+ not supported\nwith knots or fixed lambda.")
+ }
+ if (!is.na(out$shat.pure.error)) {
+ stop("Non diagonal weight matrix not implemented
+ with replicate locations")
+ }
+ }
+ # make weight matrix and its square root having passed checks
+ out <- c(out, Krig.make.W(out, verbose = verbose))
+ ########################################################
+ # You have reached the Engines where the actual computing happens!
+ ########################################################
+ # Do the intensive linear algebra to find the solutions
+ # this is where all the heavy lifting happens.
+ #
+ # Note that all the information is passed as a list
+ # including arguments to the cholesky decomposition
+ # used within Krig.engine.fixed
+ #
+ # The results are saved in the component matrices
+ #
+ # if method=='user' then just evaluate at single lambda
+ # fixed here means a fixed lambda
+ #
+ # For fixed lambda the decompositions with and without knots
+ # are surprisingly similar and so are in one engine.
+ ###########################################################
+ if (out$fixed.model) {
+ out$matrices <- Krig.engine.fixed(out, verbose = verbose)
+ # The trace of A matrix in fixed lambda case is not easily computed
+ # so set this to NA.
+ out$eff.df <- NA
+ }
+ #
+ # alternative are
+ # matrix decompositions suitable for
+ # evaluation at many lambdas to facilitate GCV/REML estimates etc.
+ #
+ if (!out$fixed.model) {
+ if (out$knot.model) {
+ # the knot model engine
+ out$matrices <- Krig.engine.knots(out, verbose = verbose)
+ out$pure.ss <- out$matrices$pure.ss
+ }
+ else {
+ # standard engine following the basic computations for thin plate splines
+ out$matrices <- Krig.engine.default(out, verbose = verbose)
+ }
+ }
+ #
+ # store basic information about decompositions
+ out$nt <- out$matrices$nt
+ out$np <- out$matrices$np
+ out$decomp <- out$matrices$decomp
+ #
+ # Now determine a logical vector to indicate coefficients tied to the
+ # the 'spatial drift' i.e. the fixed part of the model
+ # that is not due to the Z covariates.
+ # NOTE that the spatial drift coefficients must be the first columns of the
+ # M matrix
+ if (is.null(out$Z)) {
+ out$ind.drift <- rep(TRUE, out$nt)
+ }
+ else {
+
+ mZ <- ncol(out$ZM)
+ out$ind.drift <- c(rep(TRUE, out$nt - mZ), rep(FALSE,
+ mZ))
+ }
+ if (verbose) {
+ cat("null df: ", out$nt, "drift df: ", sum(out$ind.drift),
+ fill = TRUE)
+ }
+ #########################
+ # End of engine block
+ #########################
+ #################################################
+ # Do GCV and REML search over lambda if not fixed or if GCV variable is TRUE
+ # gcv.Krig, not named well, also does a search over likelihood for lambda.
+ #################################################
+ if (!out$fixed.model | out$GCV) {
+ if (verbose) {
+ cat("call to gcv.Krig", fill = TRUE)
+ }
+ gcv.out <- gcv.Krig(out, nstep.cv = nstep.cv, verbose = verbose,
+ cost = out$cost, offset = out$offset, give.warnings=FALSE)
+ out$gcv.grid <- gcv.out$gcv.grid
+ # save a handy summary table of the search results
+ out$lambda.est <- gcv.out$lambda.est
+ out$warningTable<- gcv.out$warningTable
+ if( verbose){
+ cat("summaries from grid search/optimization", fill=TRUE)
+ print(out$lambda.est)
+ print(out$warningTable)
+ }
+ if( give.warnings){
+        # NOTE: only print out the grid search warning for the method of interest.
+ printGCVWarnings( gcv.out$warningTable, method=method)
+ }
+ # assign the preferred lambda either from GCV/REML/MSE or the user value
+ # NOTE: gcv/reml can be done but the estimate is
+        # still evaluated at the passed user values of lambda (or df)
+ # If df is passed need to calculate the implied lambda value
+ if (out$method != "user") {
+ out$lambda <- gcv.out$lambda.est[out$method, 1]
+ out$eff.df <- out$lambda.est[out$method, 2]
+ }
+ else {
+ if (!is.na(out$eff.df)) {
+ out$lambda <- Krig.df.to.lambda(out$eff.df, out$matrices$D)
+ }
+ else {
+ out$eff.df <- Krig.ftrace(out$lambda, out$matrices$D)
+ }
+ }
+ }
+ ##########################
+ # end GCV/REML block
+ ##########################
+ #
+ # Now we clean up what has happened and stuff
+ # information into output object.
+ #
+ ##########################################
+    # find coefficients at preferred lambda
+ # and evaluate the solution at observations
+ ##########################################
+ # pass replicate group means -- no need to recalculate these.
+ out2 <- Krig.coef(out, yM = out$yM, verbose = verbose)
+ out <- c(out, out2)
+ #######################################################################
+ # fitted values and residuals and predicted values for full model and
+ # also on the null space (fixed
+ # effects). But be sure to do this at the nonmissing x's.
+ ##################################################################
+ out$fitted.values <- predict.Krig(out, x = out$x, Z = out$Z,
+ eval.correlation.model = FALSE)
+ out$residuals <- out$y - out$fitted.values
+ #
+ # this is just M%*%d note use of do.call using function name
+ Tmatrix <- do.call(out$null.function.name, c(out$null.args,
+ list(x = out$x, Z = out$Z)))
+ out$fitted.values.null <- as.matrix(Tmatrix) %*% out$d
+ #
+ # verbose block
+ if (verbose) {
+ cat("residuals", out$residuals, fill = TRUE)
+ }
+ #
+ # find various estimates of sigma and rho
+ out2 <- Krig.parameters(out)
+ out <- c(out, out2)
+ ################################################
+ # assign the 'best' model as a default choice
+ # either use the user supplied values or the results from
+ # optimization
+ ################################################
+ passed.sigma2 <- (!is.na(out$sigma2))
+ if (out$method == "user" & passed.sigma2) {
+ out$best.model <- c(out$lambda, out$sigma2, out$rho)
+ }
+ else {
+ # in this case lambda is from opt. or supplied by user
+ out$best.model <- c(out$lambda, out$shat.MLE^2, out$rhohat)
+ }
+    # Note: values in best.model are used in subsequent functions as the choice
+ # for these parameters!
+ # set class
+ class(out) <- c("Krig")
+ return(out)
+}
+
+
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+Krig.check.xY <- function(x, Y, Z, weights, na.rm,
+ verbose = FALSE) {
+ #
+ # check for missing values in Y or X.
+ #
+ # save logical indicating where there are NA's
+ # and check for NA's
+ #
+ ind <- is.na(Y)
+ if (any(ind) & !na.rm) {
+ stop("Need to remove missing values or use: na.rm=TRUE in the call")
+ }
+ #
+ # coerce x to be a matrix
+ x <- as.matrix(x)
+ #
+ # coerce Y to be a vector
+ #
+ Y <- as.matrix(Y)
+ if (ncol(Y) != 1) {
+ stop("Krig can not handle matrix Y data. See mKrig.")
+ }
+ #
+ #default weights ( reciprocal variance of errors).
+ #
+ if (is.null(weights))
+ weights <- rep(1, length(Y))
+ #
+ # check that dimensions agree
+ #
+ if (length(Y) != nrow(x)) {
+ stop(" length of y and number of rows of x differ")
+ }
+ if (length(Y) != length(weights)) {
+ stop(" length of y and weights differ")
+ }
+ # if Z is not NULL coerce to be a matrix
+ # and check # of rows
+ if (verbose) {
+ print(Z)
+ }
+ if (!is.null(Z)) {
+ if (!is.matrix(Z)) {
+ Z <- matrix(c(Z), ncol = 1)
+ }
+ if (length(Y) != nrow(Z)) {
+ stop(" length of y and number of rows of Z differ")
+ }
+ }
+ # if NAs can be removed then remove them and warn the user
+ if (na.rm) {
+ ind <- is.na(Y)
+ if(all(ind)){
+ stop("Oops! All Y values are missing!")
+ }
+ if (any(ind)) {
+ Y <- Y[!ind]
+ x <- as.matrix(x[!ind, ])
+ if (!is.null(Z)) {
+ Z <- Z[!ind, ]
+ }
+ weights <- weights[!ind]
+ }
+ }
+ #
+ # check for NA's in x matrix -- there should not be any !
+ if (any(c(is.na(x)))) {
+ stop(" NA's in x matrix")
+ }
+ #
+ # check for NA's in Z matrix
+ if (!is.null(Z)) {
+ if (any(c(is.na(Z)))) {
+ stop(" NA's in Z matrix")
+ }
+ }
+ #
+ # verbose block
+ if (verbose) {
+ cat("Y:", fill = TRUE)
+ print(Y)
+ cat("x:", fill = TRUE)
+ print(x)
+ cat("weights:", fill = TRUE)
+ cat(weights, fill = TRUE)
+ }
+ #
+ # save x, weights and Y w/o NAs
+ N <- length(Y)
+ return(list(N = N, y = Y, x = x, weights = weights, Z = Z,
+ NA.ind = ind))
+}
+
+"Krig.coef" <- function(out, lambda = out$lambda,
+ y = NULL, yM = NULL, verbose = FALSE) {
+ #
+ # NOTE default value of lambda used from Krig object.
+ #
+    # Determine whether to collapse onto means of replicates (using y);
+    # if replicate means (yM) have been passed, use those directly.
+    # If both y and yM are NULL then just use out$yM.
+ # For readability of this function, all this tortured logic happens in
+ # Krig.ynew.
+ #
+ out2 <- Krig.ynew(out, y, yM)
+ temp.yM <- out2$yM
+ nt <- out$nt
+ np <- out$np
+ ndata <- ncol(temp.yM)
+ u <- NA
+ call.name <- out$cov.function.name
+ if (verbose) {
+ cat("dimension of yM in Krig.coef", fill = TRUE)
+ print(dim(temp.yM))
+ }
+ #
+    # case when knots == unique x's
+ # any lambda
+ #
+ if (out$decomp == "WBW") {
+        # pad u with zeroes that correspond to null space basis functions
+ # this makes it compatible with the DR decomposition.
+ u <- rbind(matrix(0, nrow = out$nt, ncol = ndata), t(out$matrices$V) %*%
+ qr.q2ty(out$matrices$qr.T, out$W2 %d*% temp.yM))
+ #
+ #old code beta <- out$matrices$G %*% ((1/(1 + lambda * out$matrices$D))%d*%u)
+ #
+ ind <- (nt + 1):np
+ D2 <- out$matrices$D[ind]
+ #
+ # note use of efficient diagonal multiply in next line
+ temp2 <- (D2/(1 + lambda * D2)) %d*% u[ind, ]
+ beta2 <- out$matrices$V %*% temp2
+ temp.c <- rbind(matrix(0, nrow = nt, ncol = ndata), beta2)
+ temp.c <- qr.qy(out$matrices$qr.T, temp.c)
+ temp.c <- out$W2 %d*% temp.c
+ temp <- temp.yM - do.call(call.name, c(out$args, list(x1 = out$knots,
+ x2 = out$knots, C = temp.c)))
+ temp <- out$W2 %d*% temp
+ temp.d <- qr.coef(out$matrices$qr.T, temp)
+ }
+ #
+ # case with knots
+ # any lambda
+ #
+ if (out$decomp == "DR") {
+ # X is the monster matrix ... X = [ M | K]
+ X <- cbind(do.call(out$null.function.name, c(out$null.args,
+ list(x = out$xM, Z = out$ZM))), do.call(call.name,
+ c(out$args, list(x1 = out$xM, x2 = out$knots))))
+ u <- t(out$matrices$G) %*% t(X) %*% (out$weightsM %d*%
+ temp.yM)
+ beta <- out$matrices$G %*% ((1/(1 + lambda * out$matrices$D)) %d*%
+ u)
+ temp.d <- beta[1:nt, ]
+ temp.c <- beta[(nt + 1):np, ]
+ temp <- X %*% out$matrices$G %*% u
+ temp <- sum(out$weightsM * (temp.yM - temp)^2)
+ #### ????
+ out2$pure.ss <- temp + out2$pure.ss
+ }
+ #
+ # fixed lambda knots == unique x's
+ #
+ if (out$decomp == "cholesky") {
+ if (lambda != out$matrices$lambda) {
+ stop("New lambda can not be used with cholesky decomposition")
+ }
+ Tmatrix <- do.call(out$null.function.name, c(out$null.args,
+ list(x = out$knots, Z = out$ZM)))
+ temp.d <- qr.coef(out$matrices$qr.VT, forwardsolve(out$matrices$Mc,
+ transpose = TRUE, temp.yM, upper.tri = TRUE))
+ temp.c <- forwardsolve(out$matrices$Mc, transpose = TRUE,
+ temp.yM - Tmatrix %*% temp.d, upper.tri = TRUE)
+ temp.c <- backsolve(out$matrices$Mc, temp.c)
+ }
+ #
+ # fixed lambda with knots
+ #
+ if (out$decomp == "cholesky.knots") {
+ if (lambda != out$matrices$lambda) {
+ stop("New lambda can not be used with cholesky decomposition")
+ }
+ # form K matrix
+ K <- do.call(call.name, c(out$args, list(x1 = out$xM,
+ x2 = out$knots)))
+ Tmatrix <- do.call(out$null.function.name, c(out$null.args,
+ list(x = out$xM, Z = out$ZM)))
+ wY <- out$weightsM * temp.yM
+ temp0 <- t(K) %*% (out$weightsM * Tmatrix)
+ temp1 <- forwardsolve(out$matrices$Mc, temp0, transpose = TRUE,
+ upper.tri = TRUE)
+ qr.Treg <- qr(t(Tmatrix) %*% (out$weightsM * Tmatrix) -
+ t(temp1) %*% temp1)
+ temp0 <- t(K) %*% wY
+ temp3 <- t(Tmatrix) %*% wY - t(temp1) %*% forwardsolve(out$matrices$Mc,
+ temp0, transpose = TRUE, upper.tri = TRUE)
+ temp.d <- qr.coef(qr.Treg, temp3)
+ temp1 <- t(K) %*% (wY - out$weightsM * (Tmatrix) %*%
+ temp.d)
+ temp.c <- forwardsolve(out$matrices$Mc, transpose = TRUE,
+ temp1, upper.tri = TRUE)
+ temp.c <- backsolve(out$matrices$Mc, temp.c)
+ }
+ return(list(c = temp.c, d = temp.d, shat.rep = out2$shat.rep,
+ shat.pure.error = out2$shat.pure.error, pure.ss = out2$pure.ss))
+}
+
+Krig.cor.Y <- function(obj, verbose = FALSE) {
+ # subtract mean
+ if (!is.na(obj$mean.obj[1])) {
+ Y <- obj$y - predict(obj$mean.obj, obj$x)
+ }
+ # divide by sd
+ if (!is.na(obj$sd.obj[1])) {
+ Y <- Y/predict(obj$sd.obj, obj$x)
+ }
+ Y
+}
+
+Krig.Amatrix <- function(object, x0 = object$x, lambda = NULL,
+ eval.correlation.model = FALSE, ...) {
+ if (is.null(lambda)) {
+ lambda <- object$lambda
+ }
+ M <- nrow(object$xM)
+ N <- nrow(x0)
+ # create output matrix
+ out <- matrix(NA, N, M)
+ #
+ # loop through unique data locations predicting response
+ # using unit vectors
+ # NOTE that the y vector has already been collapsed onto means.
+ #
+ for (k in 1:M) {
+ ytemp <- rep(0, M)
+ ytemp[k] <- 1
+ out[, k] <- predict(object, x = x0, yM = ytemp, lambda = lambda,
+ eval.correlation.model = eval.correlation.model,
+ ...)
+ }
+ return(out)
+}
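A side note for readers of Krig.Amatrix: because the Krig predictor is linear in the data, the smoother ("hat") matrix can be assembled one column at a time by predicting unit data vectors, which is exactly the loop above. A minimal numpy sketch of the same trick, using ridge regression as a stand-in linear smoother (all names below are illustrative, not fields code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
lam = 0.5

def predict(y):
    # any linear smoother works; here a ridge-regression fit
    beta = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
    return X @ beta

# build A one column at a time by predicting unit vectors e_k
A = np.column_stack([predict(np.eye(8)[:, k]) for k in range(8)])

# A agrees with the closed form X (X'X + lam I)^{-1} X'
A_direct = X @ np.linalg.solve(X.T @ X + lam * np.eye(3), X.T)
assert np.allclose(A, A_direct)
```

The same idea gives Krig.Amatrix its A matrix without ever forming the predictor explicitly.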
+"Krig.df.to.lambda" <- function(df, D, guess = 1,
+ tol = 1e-05) {
+ if (is.list(D)) {
+ D <- D$matrices$D
+ }
+ if (is.na(df))
+ return(NA)
+ if (df < sum(D == 0)) {
+ warning("df too small to match with a lambda value")
+ return(NA)
+ }
+ if (df > length(D)) {
+ warning(" df too large to match a lambda value")
+ return(NA)
+ }
+ l1 <- guess
+ for (k in 1:25) {
+ tr <- sum(1/(1 + l1 * D))
+ if (tr <= df)
+ break
+ l1 <- l1 * 4
+ }
+ l2 <- guess
+ for (k in 1:25) {
+ tr <- sum(1/(1 + l2 * D))
+ if (tr >= df)
+ break
+ l2 <- l2/4
+ }
+ info <- list(D = D, df = df, N = length(D))
+ out <- bisection.search(log(l1), log(l2), Krig.fdf, tol = tol,
+ f.extra = info)$x
+ exp(out)
+}
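Krig.df.to.lambda brackets lambda by factors of 4 and then bisects on log(lambda) until the trace of the smoother matrix, sum(1/(1 + lambda*D)), matches the requested degrees of freedom. A numpy sketch of that scheme (lambda_for_df is a hypothetical name, not the fields function):

```python
import numpy as np

def lambda_for_df(df, D, guess=1.0, tol=1e-8):
    # trace of the smoother matrix = effective degrees of freedom
    tr = lambda lam: np.sum(1.0 / (1.0 + lam * D))
    # bracket the target: tr() is decreasing in lambda
    l1 = l2 = guess
    while tr(l1) > df:      # grow lambda until tr <= df
        l1 *= 4
    while tr(l2) < df:      # shrink lambda until tr >= df
        l2 /= 4
    a, b = np.log(l2), np.log(l1)   # tr(exp(a)) >= df >= tr(exp(b))
    for _ in range(200):            # bisection on log(lambda)
        mid = 0.5 * (a + b)
        if tr(np.exp(mid)) >= df:
            a = mid
        else:
            b = mid
        if abs(b - a) < tol:
            break
    return np.exp(0.5 * (a + b))

D = np.abs(np.random.default_rng(1).normal(size=20))
lam = lambda_for_df(10.0, D)
assert abs(np.sum(1 / (1 + lam * D)) - 10.0) < 1e-4
```

The warnings in the R code correspond to df outside the attainable range (sum(D == 0), length(D)), where no bracket exists.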
+
+"Krig.engine.default" <- function(out, verbose = FALSE) {
+ #
+ # matrix decompositions for computing estimate
+ #
+ # Computational outline:( '.' is used for subscript)
+ #
+ # The form of the estimate is
+ # fhat(x) = sum phi.j(x) d.j + sum psi.k(x) c.k
+ #
+ # the {phi.j} are the fixed part of the model usually low order polynomials
+ # and is also referred to as spatial drift.
+ #
+ # the {psi.k} are the covariance functions evaluated at the unique observation
+ # locations or 'knots'. If xM.k is the kth unique location psi.k(x)= k(x, xM.k)
+ # xM is also out$knots in the code below.
+ #
+ # the goal is find decompositions that facilitate rapid solution for
+ # the vectors d and c. The eigen approach below was identified by
+ # Wahba, Bates Wendelberger and is stable even for near colinear covariance
+ # matrices.
+ # This function does the main computations leading to the matrix decompositions.
+ # With these decompositions the coefficients of the solution are found in
+ # Krig.coef and the GCV and REML functions in Krig.gcv.
+ #
+ # First is an outline of the calculations with equal weights
+ # T the fixed effects regression matrix T.ij = phi.j(xM.i)
+ # K the covariance matrix for the unique locations
+ # From the spline literature the solution solves the well known system
+ # of two equations:
+ # -K( yM - Td - Kc) + lambda *Kc = 0
+ # -T^t ( yM-Td -Kc) = 0
+ #
+ # Multiply through by K inverse and substitute; these are equivalent to
+ #
+ # -1- -( yM- Td - Kc) + lambda c = 0
+ # -2- T^t c = 0
+ #
+ #
+ # A QR decomposition is done for T= (Q.1,Q.2)R
+ # by definition Q.2^T T =0
+ #
+ # equation -2- can be thought of as a constraint
+ # with c= Q.2 beta2
+ # substitute in -1- and multiply through by Q.2^T
+ #
+ # -Q.2^T yM + Q.2^T K Q.2 beta2 + lambda beta2 = 0
+ #
+ # Solving
+ # beta2 = {Q.2^T K Q.2 + lambda I )^ {-1} Q.2^T yM
+ #
+ # and so one solves this linear system for beta2 and then uses
+ # c= Q.2 beta2
+ # to determine c.
+ #
+ # eigenvalues and eigenvectors are found for M= Q.2^T K Q.2
+ # M = V diag(eta) V^T
+ # and these facilitate solving this system efficiently for
+ # many different values of lambda.
+ # create eigenvectors, D = (0, 1/eta)
+ # and G= ( 0,0) %*% diag(D)
+ # ( 0,V)
+ # so that
+ #
+ # beta2 = G%*% ( 1/( 1+ lambda D)) %*% u
+ # with
+ #
+ # u = (0, V Q.2^T W2 yM)
+ #
+ # Throughout keep in mind that M has smaller dimension than G due to
+ # handling the null space.
+ #
+ # Now solve for d.
+ #
+ # From -1- Td = yM - Kc - lambda c
+ # (Q.1^T) Td = (Q.1^T) ( yM- Kc)
+ #
+ # ( lambda c is zero by -2-)
+ #
+ # so Rd = (Q.1^T) ( yM- Kc)
+ # use qr functions to solve triangular system in R to find d.
+ #
+ #----------------------------------------------------------------------
+ # What about errors with a general precision matrix, W?
+ #
+ # This is an important case because with replicated observations the
+ # problem will simplify into a smoothing problem with the replicate group
+ # means and unequal measurement error variances.
+ #
+ # the equations to solve are
+ # -KW( yM - Td - Kc) + lambda *Kc = 0
+ # -T^t W( yM-Td -Kc) =0
+ #
+ # Multiply through by K inverse and substitute; these are equivalent to
+ #
+ # -1b- -W( yM- Td - Kc) + lambda c = 0
+ # -2b- (WT)^t c = 0
+ #
+ # Let W2 be the symmetric square root of W, W= W2%*% W2
+ # and W2.i be the inverse of W2.
+ #
+ # -1c- -( W2 yM - W2 T d - (W2 K W2) W2.ic) + lambda W2.i c = 0
+ # -2c- (W2T)^t W2c = 0
+ Tmatrix <- do.call(out$null.function.name, c(out$null.args,
+ list(x = out$xM, Z = out$ZM)))
+ if (verbose) {
+ cat(" Model Matrix: spatial drift and Z", fill = TRUE)
+ print(Tmatrix)
+ }
+ # Tmatrix premultiplied by sqrt of weights
+ Tmatrix <- out$W2 %d*% Tmatrix
+ qr.T <- qr(Tmatrix)
+ if( qr.T$rank < ncol( Tmatrix)){
+ stop("Regression matrix for fixed part of model is colinear")}
+ #
+ #verbose block
+ if (verbose) {
+ cat("first 5 rows of qr.T$qr", fill = TRUE)
+ print(qr.T$qr[1:5, ])
+ }
+ #
+ # find Q_2 K Q_2^T where K is the covariance matrix at the knot points
+ #
+ tempM <- t(out$W2 %d*% do.call(out$cov.function.name, c(out$args,
+ list(x1 = out$knots, x2 = out$knots))))
+ tempM <- out$W2 %d*% tempM
+ tempM <- qr.yq2(qr.T, tempM)
+ tempM <- qr.q2ty(qr.T, tempM)
+ np <- nrow(out$knots)
+ nt <- (qr.T$rank)
+ if (verbose) {
+ cat("np, nt", np, nt, fill = TRUE)
+ }
+ #
+ # Full set of decompositions for
+ # estimator for nonzero lambda
+ tempM <- eigen(tempM, symmetric = TRUE)
+ D <- c(rep(0, nt), 1/tempM$values)
+ #
+ # verbose block
+ if (verbose) {
+ cat("eigen values:", fill = TRUE)
+ print(D)
+ }
+ #
+ # Find the transformed data vector used to
+ # evaluate the solution, GCV, REML at different lambdas
+ #
+
+ u <- c(rep(0, nt), t(tempM$vectors) %*% qr.q2ty(qr.T, c(out$W2 %d*%
+ out$yM)))
+ if (verbose) {
+ cat("u vector:", fill = TRUE)
+ print(u)
+ }
+ #
+ #
+ return(list(D = D, qr.T = qr.T, decomp = "WBW", V = tempM$vectors,
+ u = u, nt = nt, np = np))
+}
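The long derivation above is easy to check numerically: with T = (Q.1, Q.2) R, the constrained solve reduces to beta2 = (Q.2^T K Q.2 + lambda I)^{-1} Q.2^T y, and the eigendecomposition M = V diag(eta) V^T gives the same answer via the factor D/(1 + lambda*D) with D = 1/eta, reusable for any lambda. A small numpy sketch with synthetic T, K and y (illustrative only, equal weights):

```python
import numpy as np

rng = np.random.default_rng(2)
n, nt = 12, 3
T = rng.normal(size=(n, nt))          # fixed-effects ("drift") matrix
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)           # a positive definite covariance
y = rng.normal(size=n)
lam = 0.7

Q, R = np.linalg.qr(T, mode='complete')
Q2 = Q[:, nt:]                        # Q.2 spans the space with T' c = 0
M = Q2.T @ K @ Q2
eta, V = np.linalg.eigh(M)

# direct solve: beta2 = (Q2' K Q2 + lam I)^{-1} Q2' y
beta2_direct = np.linalg.solve(M + lam * np.eye(n - nt), Q2.T @ y)

# eigen route used by the WBW decomposition: D = 1/eta and
# 1/(eta + lam) == D/(1 + lam*D)
D = 1.0 / eta
u = V.T @ (Q2.T @ y)
beta2_eigen = V @ ((D / (1 + lam * D)) * u)
assert np.allclose(beta2_direct, beta2_eigen)

c = Q2 @ beta2_eigen                  # smooth-part coefficients
assert np.allclose(T.T @ c, 0)        # constraint -2- holds by construction
```

Only u and the eigenpairs depend on the data and covariance, which is why GCV/REML searches over lambda are cheap once this decomposition is in hand.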
+
+"Krig.engine.fixed" <- function(out, verbose = FALSE,
+ lambda = NA) {
+ #
+ # Model:
+ # Y_k= f_k + e_k
+ # var( e_k) = sigma^2/W_k
+ #
+ # f= Td + h
+ # T is often a low order polynomial
+ # E(h)=0 cov( h)= rho *K
+ #
+ # let M = (lambda W^{-1} + K)
+ # the surface estimate depends on coefficient vectors d and c
+ # The implementation in Krig/fields is that K are the
+ # cross covariances among the observation locations and the knot locations
+ # H is the covariance among the knot locations.
+ # Thus if knot locs == obs locs we have the obvious collapse to
+ # the simpler form for M above.
+ #
+ # With M in hand ...
+ #
+ # set
+ # d = [(T)^t M^{-1} (T)]^{-1} (T)^t M^{-1} Y
+ # this is just the generalized LS estimate for d
+ #
+ # lambda= sigma**2/rho
+ # the estimate for c is
+ # c= M^{-1}(y - Td)
+ #
+ # This particular numerical strategy takes advantage of
+ # fast Cholesky factorizations for positive definite matrices
+ # and also provides a seamless framework for sparse matrix implementations
+ #
+ if (is.na(lambda))
+ lambda <- out$lambda
+ call.name <- out$cov.function.name
+ if (!out$knot.model) {
+ ####################################################
+ # case of knot locs == obs locs out$knots == out$xM
+ ####################################################
+ # create T matrix
+ Tmatrix <- do.call(out$null.function.name, c(out$null.args,
+ list(x = out$knots, Z = out$ZM)))
+ if (verbose) {
+ cat("Tmatrix:", fill = TRUE)
+ print(Tmatrix)
+ }
+ np <- nrow(out$knots)
+ nt <- ncol(Tmatrix)
+ # form K
+ tempM <- do.call(call.name, c(out$args, list(x1 = out$knots,
+ x2 = out$knots)))
+ # form M
+ diag(tempM) <- (lambda/out$weightsM) + diag(tempM)
+ #
+ # find cholesky factor
+ # tempM = t(Mc)%*% Mc
+ # V= Mc^{-T}
+ # call cholesky but also add in the args supplied in Krig object.
+ Mc <- do.call("chol", c(list(x = tempM), out$chol.args))
+ VT <- forwardsolve(Mc, x = Tmatrix, transpose = TRUE,
+ upper.tri = TRUE)
+ qr.VT <- qr(VT)
+ # find GLS covariance matrix of null space parameters.
+ Rinv <- solve(qr.R(qr.VT))
+ Omega <- Rinv %*% t(Rinv)
+ #
+ # now do generalized least squares for d
+ # and then find c.
+ d.coef <- qr.coef(qr.VT, forwardsolve(Mc, transpose = TRUE,
+ out$yM, upper.tri = TRUE))
+ if (verbose) {
+ print(d.coef)
+ }
+ c.coef <- forwardsolve(Mc, transpose = TRUE, out$yM -
+ Tmatrix %*% d.coef, upper.tri = TRUE)
+ c.coef <- backsolve(Mc, c.coef)
+ # return all the goodies, include lambda as a check because
+ # results are meaningless for other values of lambda
+ return(list(qr.VT = qr.VT, d = c(d.coef), c = c(c.coef),
+ Mc = Mc, decomp = "cholesky", nt = nt, np = np, lambda.fixed = lambda,
+ Omega = Omega))
+ }
+ else {
+ ####################################################
+ # case of knot locs != obs locs
+ ####################################################
+ # create weighted T matrix
+ Tmatrix <- do.call(out$null.function.name, c(out$null.args,
+ list(x = out$xM, Z = out$ZM)))
+ nt <- ncol(Tmatrix)
+ np <- nrow(out$knots) + nt
+ # form H
+ H <- do.call(call.name, c(out$args, list(x1 = out$knots,
+ x2 = out$knots)))
+ # form K matrix
+ K <- do.call(call.name, c(out$args, list(x1 = out$xM,
+ x2 = out$knots)))
+ #
+ Mc <- do.call("chol", c(list(x = t(K) %*% (out$weightsM *
+ K) + lambda * H), out$chol.args))
+ # weighted Y
+ wY <- out$weightsM * out$yM
+ temp0 <- t(K) %*% (out$weightsM * Tmatrix)
+ temp1 <- forwardsolve(Mc, temp0, transpose = TRUE, upper.tri = TRUE)
+ qr.Treg <- qr(t(Tmatrix) %*% (out$weightsM * Tmatrix) -
+ t(temp1) %*% temp1)
+ temp0 <- t(K) %*% wY
+ temp3 <- t(Tmatrix) %*% wY - t(temp1) %*% forwardsolve(Mc,
+ temp0, transpose = TRUE, upper.tri = TRUE)
+ d.coef <- qr.coef(qr.Treg, temp3)
+ temp1 <- t(K) %*% (wY - out$weightsM * (Tmatrix) %*%
+ d.coef)
+ c.coef <- forwardsolve(Mc, transpose = TRUE, temp1, upper.tri = TRUE)
+ c.coef <- backsolve(Mc, c.coef)
+ list(qr.Treg = qr.Treg, d = c(d.coef), c = c(c.coef),
+ Mc = Mc, decomp = "cholesky.knots", nt = nt, np = np,
+ lambda.fixed = lambda, Omega = NA)
+ }
+ #
+ # should not get here.
+ #
+}
+
+"Krig.engine.knots" <- function(out, verbose = FALSE) {
+ #
+ # matrix decompositions for computing estimate when
+ # knots are present
+ # QR decomposition of null space regression matrix
+ Tmatrix <- do.call(out$null.function.name, c(out$null.args,
+ list(x = out$xM, Z = out$ZM)))
+ qr.T <- qr(c(sqrt(out$weightsM)) * Tmatrix)
+ nt <- ncol(Tmatrix)
+ np <- nrow(out$knots) + nt
+ if (verbose) {
+ cat(nt, np, fill = TRUE)
+ }
+ # H is the penalty matrix in the ridge regression format
+ # first part is zero because no penalty on part of estimator
+ # spanned by T matrix
+ H <- matrix(0, ncol = np, nrow = np)
+ H[(nt + 1):np, (nt + 1):np] <- do.call(out$cov.function.name,
+ c(out$args, list(x1 = out$knots, x2 = out$knots)))
+ # X is the monster ...
+ X <- cbind(do.call(out$null.function.name, c(out$null.args,
+ list(x = out$xM, Z = out$ZM))), do.call(out$cov.function.name,
+ c(out$args, list(x1 = out$xM, x2 = out$knots))))
+ if (verbose) {
+ cat("first lines of X", fill = TRUE)
+ print(X[1:5, ])
+ }
+ # sqrt(weightsM) * X
+ XTwX <- t(X * out$weightsM) %*% X
+ #
+ # then B= G(I-D)G^T
+ # New version of diagonalize may be more stable
+ out2 <- fields.diagonalize2((XTwX), H)
+ D <- out2$D
+ if (verbose) {
+ cat("D;", fill = TRUE)
+ cat(out2$D, fill = TRUE)
+ }
+ #
+ # G should satisfy:
+ # t(G) %*% XTwX %*%G = I and t(G)%*%H%*%G = D
+ #
+ # and
+ # solve( XtwX + lambda H) = G%*%diag( 1/(1+ lambda*D))%*%t(G)
+ #
+
+ # save XG to avoid an extra multiplication.
+ XG <- X %*% out2$G
+
+ u <- t(XG) %*% (out$weightsM * out$yM)
+ #
+ # adjust pure sum of squares to be that due to replicates
+ # plus that due to fitting all the basis functions without
+ # any smoothing. This will be the part of the RSS that does not
+ # change as lambda is varied ( see e.g. gcv.Krig)
+ #
+ pure.ss <- sum(out$weightsM * (out$yM - XG %*% u)^2) + out$pure.ss
+ if (verbose) {
+ cat("total pure.ss from reps, reps + knots ", fill = TRUE)
+ print(out$pure.ss)
+ print(pure.ss)
+ }
+
+ #
+ # in this form the solution is (d,c)= G( I + lambda D)^-1 u
+ # fitted.values = X ( d,c)
+ #
+ # output list
+ # last D eigenvalues are zero due to null space of penalty
+ # OLD code: D[(np - nt + 1):np] <- 0
+ # this should be enforced to machine precision from diagonalization.
+
+
+ list(u = u, D = D, G = out2$G, qr.T = qr.T, decomp = "DR",
+ nt = nt, np = np, pure.ss = pure.ss)
+}
+
+"Krig.fdf" <- function(llam, info) {
+ sum(1/(1 + exp(llam) * info$D)) - info$df
+}
+
+"Krig.fgcv" <- function(lam, obj) {
+ #
+ # GCV that is leave-one-group out
+ #
+ lD <- obj$matrices$D * lam
+ RSS <- sum(((obj$matrices$u * lD)/(1 + lD))^2)
+ MSE <- RSS/length(lD)
+ if ((obj$N - length(lD)) > 0) {
+ MSE <- MSE + obj$pure.ss/(obj$N - length(lD))
+ }
+ trA <- sum(1/(1 + lD))
+ den <- (1 - (obj$cost * (trA - obj$nt - obj$offset) + obj$nt)/length(lD))
+ # If the denominator is negative then flag this as a bogus case
+ # by making the GCV function 'infinity'
+ #
+ ifelse(den > 0, MSE/den^2, 1e20)
+}
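For reference, the GCV criterion computed by Krig.fgcv can be written out directly; the sketch below mirrors the R code in numpy (the function name and defaults are illustrative):

```python
import numpy as np

def gcv(lam, D, u, N, nt, cost=1.0, offset=0.0, pure_ss=0.0):
    # mirrors Krig.fgcv: leave-one-group-out GCV as a function of lambda
    lD = D * lam
    RSS = np.sum((u * lD / (1 + lD)) ** 2)
    MSE = RSS / len(lD)
    if N > len(lD):                    # add pure-error contribution from replicates
        MSE += pure_ss / (N - len(lD))
    trA = np.sum(1 / (1 + lD))         # trace of A(lambda): effective df
    den = 1 - (cost * (trA - nt - offset) + nt) / len(lD)
    return MSE / den**2 if den > 0 else 1e20   # 'infinity' flags a bogus case

u = np.array([1.0, 2.0, 3.0])
# as lambda -> infinity the fit shrinks to zero and GCV -> mean(u^2)
big = gcv(1e12, np.ones(3), u, N=3, nt=0)
assert abs(big - np.mean(u**2)) < 1e-6
```

At lambda = 0 with cost = 1 and nt = 0 the denominator is exactly zero, which is the degenerate case the 1e20 guard handles.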
+
+"Krig.fgcv.model" <- function(lam, obj) {
+ lD <- obj$matrices$D * lam
+ MSE <- sum(((obj$matrices$u * lD)/(1 + lD))^2)/length(lD)
+ trA <- sum(1/(1 + lD))
+ den <- (1 - (obj$cost * (trA - obj$nt - obj$offset) + obj$nt)/length(lD))
+ ifelse(den > 0, obj$shat.pure.error^2 + MSE/den^2, 1e20)
+}
+
+"Krig.fgcv.one" <- function(lam, obj) {
+ lD <- obj$matrices$D * lam
+ RSS <- obj$pure.ss + sum(((obj$matrices$u * lD)/(1 + lD))^2)
+ trA <- sum(1/(1 + lD))
+ den <- 1 - (obj$cost * (trA - obj$nt - obj$offset) + obj$nt)/obj$N
+ # If the denominator is negative then flag this as a bogus case
+ # by making the GCV function 'infinity'
+ #
+ ifelse(den > 0, (RSS/obj$N)/den^2, 1e+20)
+}
+
+"Krig.flplike" <- function(lambda, obj) {
+ # - log profile likelihood for lambda
+ # See section 3.4 from Nychka Spatial Processes as Smoothers paper.
+ # for equation and derivation
+ D2 <- obj$matrices$D[obj$matrices$D > 0]
+ u2 <- obj$matrices$u[obj$matrices$D > 0]
+ lD <- D2 * lambda
+ N2 <- length(D2)
+ # MLE estimate of rho for fixed lambda
+ rho.MLE <- (sum((D2 * (u2)^2)/(1 + lD)))/N2
+ #
+ # ln determinant of K + lambda*WI
+ lnDetCov <- -sum(log(D2/(1 + lD)))
+
+ -1 * (-N2/2 - log(2 * pi) * (N2/2) - (N2/2) * log(rho.MLE) -
+ (1/2) * lnDetCov)
+
+
+}
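The profiling step in Krig.flplike (substituting the closed-form rho-hat back into the likelihood) can be verified against the full Gaussian likelihood of the transformed data. A numpy sketch under the diagonalized model u2 ~ N(0, rho*(1 + lambda*D2)/D2) (names here are illustrative):

```python
import numpy as np

def neg_log_profile_like(lam, D2, u2):
    # mirrors Krig.flplike: profile out rho, leaving a function of lambda only
    lD = D2 * lam
    N2 = len(D2)
    rho_mle = np.sum(D2 * u2**2 / (1 + lD)) / N2    # closed-form rho-hat
    ln_det = -np.sum(np.log(D2 / (1 + lD)))         # ln det of K + lam*W^-1
    return -(-N2/2 - (N2/2)*np.log(2*np.pi)
             - (N2/2)*np.log(rho_mle) - 0.5*ln_det)

def neg_log_like(rho, lam, D2, u2):
    # full Gaussian negative log likelihood of the diagonalized data
    var = rho * (1 + lam * D2) / D2
    return 0.5 * np.sum(np.log(2*np.pi*var) + u2**2 / var)

rng = np.random.default_rng(3)
D2 = np.abs(rng.normal(size=15)) + 0.1
u2 = rng.normal(size=15)
lam = 0.4
rho_hat = np.sum(D2 * u2**2 / (1 + lam*D2)) / 15
# profiling is exact: plugging rho-hat into the full likelihood recovers it
assert np.isclose(neg_log_profile_like(lam, D2, u2),
                  neg_log_like(rho_hat, lam, D2, u2))
# and rho-hat is indeed the minimizer over rho
assert neg_log_like(1.3*rho_hat, lam, D2, u2) > neg_log_profile_like(lam, D2, u2)
```

This is the same reduction cited from Nychka's "Spatial Processes as Smoothers" section 3.4.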
+
+"Krig.fs2hat" <- function(lam, obj) {
+ lD <- obj$matrices$D * lam
+ RSS <- obj$pure.ss + sum(((obj$matrices$u * lD)/(1 + lD))^2)
+ den <- obj$N - (sum(1/(1 + lD)) + obj$offset)
+ if (den < 0) {
+ return(NA)
+ }
+ else {
+ RSS/(den)
+ }
+}
+
+"Krig.ftrace" <- function(lam, D) {
+ sum(1/(1 + lam * D))
+}
+
+"Krig.make.W" <- function(out, verbose = FALSE) {
+ if (verbose) {
+ cat("W", fill = TRUE)
+ print(out$W)
+ }
+ if (out$nondiag.W) {
+ #
+ # create W from scratch or grab it from passed object
+ if (is.null(out$W)) {
+ if (verbose) {
+ print(out$wght.function.name)
+ }
+ W <- do.call(out$wght.function.name, c(list(x = out$xM),
+ out$wght.args))
+ # adjust W based on diagonal weight terms
+ #
+ W <- sqrt(out$weightsM) * t(sqrt(out$weightsM) *
+ W)
+ }
+ else {
+ W <- out$W
+ }
+ #
+ # symmetric square root
+ temp <- eigen(W, symmetric = TRUE)
+ W2 <- temp$vectors %*% diag(sqrt(temp$values)) %*% t(temp$vectors)
+ return(list(W = W, W2 = W2))
+ }
+ else {
+ #
+ # These are created only for use with default method to stay
+ # consistent with nondiagonal elements.
+ if (out$fixed.model) {
+ return(list(W = NULL, W2 = NULL))
+ }
+ else {
+ return(list(W = out$weightsM, W2 = sqrt(out$weightsM)))
+ }
+ }
+}
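The symmetric square root used above for a nondiagonal W is the spectral one: W = V diag(v) V^T gives W2 = V diag(sqrt(v)) V^T, so W2 %*% W2 = W and W2 is itself symmetric. A quick numpy check with a synthetic W:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(6, 6))
W = A @ A.T + 6 * np.eye(6)        # symmetric positive definite weight matrix

# symmetric square root via the spectral decomposition, as in Krig.make.W
vals, vecs = np.linalg.eigh(W)
W2 = vecs @ np.diag(np.sqrt(vals)) @ vecs.T

assert np.allclose(W2 @ W2, W)     # W2 %*% W2 == W
assert np.allclose(W2, W2.T)       # W2 is symmetric, unlike a Cholesky factor
```

The symmetry of W2 is what lets the WBW algebra treat W2 %d*% y and t(W2) interchangeably.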
+
+"Krig.make.Wi" <- function(out, verbose = FALSE) {
+ #
+ # If a weight matrix has been passed use it.
+ #
+ # Note that in either case the weight matrix assumes that
+ # replicate observations have been collapsed to the means.
+ #
+ if (out$nondiag.W) {
+ temp <- eigen(out$W, symmetric = TRUE)
+ Wi <- temp$vectors %*% diag(1/(temp$values)) %*% t(temp$vectors)
+ W2i <- temp$vectors %*% diag(1/sqrt(temp$values)) %*%
+ t(temp$vectors)
+ return(list(Wi = Wi, W2i = W2i))
+ }
+ else {
+ #
+ # These are created only for use with default method to stay
+ # consistent with nondiagonal elements.
+ return(list(Wi = 1/out$weightsM, W2i = 1/sqrt(out$weightsM)))
+ }
+}
+
+"Krig.make.u" <- function(out, y = NULL, yM = NULL,
+ verbose = FALSE) {
+ #
+ # Determine whether to collapse onto means of replicates (using y);
+ # if the replicate means (yM) have been passed, use those.
+ # If both y and yM are NULL then just use out$yM.
+ # For readability of this function, all this tortured logic happens in
+ # Krig.ynew.
+ #
+ out2 <- Krig.ynew(out, y, yM)
+ temp.yM <- out2$yM
+ nt <- out$nt
+ np <- out$np
+ ndata <- ncol(temp.yM)
+ u <- NA
+ call.name <- out$cov.function.name
+ if (verbose) {
+ cat("dimension of yM in Krig.coef", fill = TRUE)
+ print(dim(temp.yM))
+ }
+ #
+ # case when knots = unique x's
+ # any lambda
+ #
+ if (out$decomp == "WBW") {
+ # pad u with zeroes that correspond to null space basis functions
+ # this makes it compatible with the DR decomposition.
+ u <- rbind(matrix(0, nrow = out$nt, ncol = ndata), t(out$matrices$V) %*%
+ qr.q2ty(out$matrices$qr.T, out$W2 %d*% temp.yM))
+ }
+ #
+ # case with knots
+ # any lambda
+ #
+ if (out$decomp == "DR") {
+ # X is the monster matrix ... X = [ M | K]
+ X <- cbind(do.call(out$null.function.name, c(out$null.args,
+ list(x = out$xM, Z = out$ZM))), do.call(call.name,
+ c(out$args, list(x1 = out$xM, x2 = out$knots))))
+ u <- t(out$matrices$G) %*% t(X) %*% (out$weightsM %d*%
+ temp.yM)
+ }
+ return(list(u = u, shat.rep = out2$shat.rep, shat.pure.error = out2$shat.pure.error,
+ pure.ss = out2$pure.ss))
+}
+
+Krig.null.function <- function(x, Z = NULL, drop.Z = FALSE,
+ m) {
+ # default function to create matrix for fixed part of model
+ # x, Z, and drop.Z are required
+ # Note that the degree of the polynomial is by convention (m-1)
+ # returned matrix must have the columns from Z last!
+ #
+ if (is.null(Z) | drop.Z) {
+ return(fields.mkpoly(x, m = m))
+ }
+ else {
+ return(cbind(fields.mkpoly(x, m = m), Z))
+ }
+}
+
+"Krig.parameters" <- function(obj, mle.calc = obj$mle.calc) {
+ # if nondiag W is supplied then use it.
+ # otherwise assume a diagonal set of weights.
+ #
+ # NOTE: calculation of shat involves full set of obs
+ # not those collapsed to the mean.
+ if (obj$nondiag.W) {
+ shat.GCV <- sqrt(sum((obj$W2 %d*% obj$residuals)^2)/(length(obj$y) -
+ obj$eff.df))
+ }
+ else {
+ shat.GCV <- sqrt(sum((obj$weights * obj$residuals^2)/(length(obj$y) -
+ obj$eff.df)))
+ }
+ if (mle.calc) {
+ rho.MLE <- sum(c(obj$c) * c(obj$yM))/obj$N
+ # set rho estimate to zero if negative. Typically this
+ # is an issue of machine precision and very small negative value.
+ rho.MLE <- ifelse(rho.MLE < 0, 0, rho.MLE)
+
+ # commented out code for debugging ...
+ # if( rho.MLE< 0) {
+ # stop('problems computing rho.MLE')}
+ # commented out is the REML estimate -- lose null space df because of
+ # the restriction to the orthogonal subspace of T.
+ # rhohat<- rho.MLE <- sum(obj$c * obj$yM)/(obj$N - obj$nt)
+ # .
+ rhohat <- rho.MLE
+ shat.MLE <- sqrt(rho.MLE * obj$lambda)
+ }
+ else {
+ rhohat <- rho.MLE <- shat.MLE <- NA
+ }
+ list(shat.GCV = shat.GCV, rho.MLE = rho.MLE, shat.MLE = shat.MLE,
+ rhohat = rhohat)
+}
+
+"Krig.replicates" <- function(out=NULL, x,y, Z=NULL, weights=rep( 1, length(y)),
+ verbose = FALSE) {
+ if( is.null(out)){
+ out<- list( x=x, y=y, N= length(y), Z=Z, weights=weights)
+ }
+ rep.info <- cat.matrix(out$x)
+ if (verbose) {
+ cat("replication info", fill = TRUE)
+ print(rep.info)
+ }
+ # If no replicates are found then reset output list to reflect this condition
+ uniquerows <- !duplicated(rep.info)
+ if (sum(uniquerows) == out$N) {
+ shat.rep <- NA
+ shat.pure.error <- NA
+ pure.ss <- 0
+ # coerce 'y' data vector as a single column matrix
+ yM <- as.matrix(out$y)
+ weightsM <- out$weights
+ xM <- as.matrix(out$x[uniquerows, ])
+ # coerce ZM to matrix
+ if (!is.null(out$Z)) {
+ ZM <- as.matrix(out$Z)
+ }
+ else {
+ ZM <- NULL
+ }
+ }
+ # collapse over spatial replicates
+ else {
+ rep.info.aov <- fast.1way(rep.info, out$y, out$weights)
+ shat.pure.error <- sqrt(rep.info.aov$MSE)
+ shat.rep <- shat.pure.error
+ # copy replicate means as a single column matrix
+ yM <- as.matrix(rep.info.aov$means)
+ weightsM <- rep.info.aov$w.means
+ xM <- as.matrix(out$x[uniquerows, ])
+ # choose some Z's for replicate group means
+ if (!is.null(out$Z)) {
+ ZM <- as.matrix(out$Z[uniquerows, ])
+ }
+ else {
+ ZM <- NULL
+ }
+ pure.ss <- rep.info.aov$SSE
+ if (verbose)
+ print(rep.info.aov)
+ }
+ return(list(yM = yM, xM = xM, ZM = ZM, weightsM = weightsM,
+ uniquerows = uniquerows, shat.rep = shat.rep, shat.pure.error = shat.pure.error,
+ pure.ss = pure.ss, rep.info = rep.info))
+}
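The collapse performed by fast.1way inside Krig.replicates is a weighted one-way ANOVA: weighted group means per replicate group, plus the within-group ("pure error") sum of squares that seeds pure.ss. A tiny numpy sketch of that bookkeeping (synthetic data, not the fields implementation):

```python
import numpy as np

# collapse replicated observations to weighted group means
rep = np.array([0, 0, 1, 2, 2, 2])             # replicate group ids
y   = np.array([1.0, 3.0, 5.0, 2.0, 4.0, 6.0])
w   = np.ones(6)

w_means = np.bincount(rep, weights=w)              # pooled weights per group
means = np.bincount(rep, weights=w * y) / w_means  # weighted group means
SSE = np.sum(w * (y - means[rep])**2)              # pure error sum of squares

assert np.allclose(means, [2.0, 5.0, 4.0])
assert np.isclose(SSE, 10.0)
```

shat.pure.error in the R code is then sqrt(MSE), i.e. sqrt(SSE divided by the replication degrees of freedom).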
+
+Krig.transform.xY <- function(obj, knots, verbose = FALSE) {
+ # find all replicates and collapse to unique locations and mean response
+ # and pooled variances and weights.
+ out <- Krig.replicates(obj, verbose = verbose)
+ if (verbose) {
+ cat("yM from Krig.transform.xY", fill = TRUE)
+ print(out$yM)
+ }
+ #
+ # save information about knots.
+ if (is.na(knots[1])) {
+ out$knots <- out$xM
+ out$mle.calc <- TRUE
+ out$knot.model <- FALSE
+ }
+ else {
+ out$mle.calc <- FALSE
+ out$knot.model <- TRUE
+ out$knots <- knots
+ }
+ #
+ # scale x, knot locations and save transformation info
+ #
+ out$xM <- transformx(out$xM, obj$scale.type, obj$x.center,
+ obj$x.scale)
+ out$transform <- attributes(out$xM)
+ out$knots <- scale(out$knots, center = out$transform$x.center,
+ scale = out$transform$x.scale)
+ #
+ #
+ #verbose block
+ #
+ if (verbose) {
+ cat("transform", fill = TRUE)
+ print(out$transform)
+ }
+ if (verbose) {
+ cat("knots in transformed scale", fill = TRUE)
+ print(knots)
+ }
+ return(out)
+}
+
+"Krig.updateY" <- function(out, Y, verbose = FALSE,
+ yM = NA) {
+ # given new Y values, but keeping everything else the same, finds the
+ # new u vector and pure error SS associated with the Kriging estimate.
+ # the steps are
+ # 1) standardize if necessary
+ # 2) find means, in the case of replicates
+ # 3) based on the decomposition, multiply a weighted version of yM
+ # with a large matrix extracted from the Krig object out.
+ #
+ # The out object may be large. This function is written so that out is
+ # not changed, with the hope that it is not copied locally in this
+ # function. All of the output is accumulated in the list out2.
+ #STEP 1
+ #
+ # transform Y by mean and sd if needed
+ #
+ if (out$correlation.model) {
+ Y <- (Y - predict(out$mean.obj, out$x))/predict(out$sd.obj,
+ out$x)
+ if (verbose)
+ print(Y)
+ }
+ #
+ #STEP 2
+ if (is.na(yM[1])) {
+ out2 <- Krig.ynew(out, Y)
+ }
+ else {
+ out2 <- list(yM = yM, shat.rep = NA, shat.pure.error = NA,
+ pure.ss = NA)
+ }
+ if (verbose) {
+ print(out2)
+ }
+ #
+ #STEP3
+ #
+ # Note how matrices are grabbed from the Krig object
+ #
+ if (verbose)
+ cat("Type of decomposition", out$decomp, fill = TRUE)
+ if (out$decomp == "DR") {
+ #
+ #
+ u <- t(out$matrices$G) %*% t(out$matrices$X) %*% (out$weightsM *
+ out2$yM)
+ #
+ # find the pure error sum of squares.
+ #
+ temp <- out$matrices$X %*% out$matrices$G %*% u
+ temp <- sum((out$W2 %d*% (out2$yM - temp))^2)
+ out2$pure.ss <- temp + out2$pure.ss
+ if (verbose) {
+ cat("pure.ss", fill = TRUE)
+ print(temp)
+ print(out2$pure.ss)
+ }
+ }
+ #####
+ ##### end DR decomposition block
+ #####
+ ####
+ #### begin WBW decomposition block
+ ####
+ if (out$decomp == "WBW") {
+ #### decomposition of Q2TKQ2
+ u <- c(rep(0, out$nt), t(out$matrices$V) %*% qr.q2ty(out$matrices$qr.T,
+ out$W2 %d*% out2$yM))
+ if (verbose)
+ cat("u", u, fill = TRUE)
+ #
+ # pure error in this case from 1way ANOVA
+ #
+ if (verbose) {
+ cat("pure.ss", fill = TRUE)
+ print(out2$pure.ss)
+ }
+ }
+ #####
+ ##### end WBW block
+ #####
+ out2$u <- u
+ out2
+}
+Krig.which.lambda <- function(out) {
+ #
+ # determine the method for finding lambda
+ # Note order
+ # default is to do 'gcv/REML'
+ out2 <- list()
+ # copy all all parameters to out2 just to make this
+ # easier to read.
+ out2$method <- out$method
+ out2$lambda.est <- NA
+ out2$lambda <- out$lambda
+ out2$eff.df <- out$eff.df
+ out2$rho <- out$rho
+ out2$sigma2 <- out$sigma2
+ if (!is.na(out2$lambda) | !is.na(out2$eff.df)) {
+ #
+ # this indicates lambda has been supplied and leads to
+ # the cholesky type computational approaches
+ # -- but only if GCV is FALSE
+ #
+ out2$method <- "user"
+ }
+ out2$GCV <- out$GCV
+ if (!is.na(out2$eff.df)) {
+ #
+ # this indicates df has been supplied and needs
+ # GCV to be true to compute the lambda
+ # that matches the df
+ #
+ out2$GCV <- TRUE
+ }
+ if (!is.na(out2$rho) & !is.na(out2$sigma2)) {
+ out2$method <- "user"
+ out2$lambda <- out2$sigma2/out2$rho
+ }
+ #
+ # NOTE: method='user' means that a value of lambda has been supplied
+ # and so GCV etc to determine lambda is not needed.
+ # gcv TRUE means that the decompositions will be done to
+ # evaluate the estimate at arbitrary lambda (and also be
+ # able to compute the effective degrees of freedom).
+ #
+ # The fixed lambda calculations are very efficient but do not
+ # support GCV/REML or effective degrees of freedom
+ # calculations.
+ #
+ out2$fixed.model <- (out2$method == "user") & (!out2$GCV)
+ #
+ return(out2)
+}
+
+"Krig.ynew" <- function(out, y = NULL, yM = NULL) {
+ #
+ # calculates the collapsed y (weighted) mean vector based on the
+ # X matrix and weights from the out object.
+ # or just passes through the collapsed mean data if passed.
+ #
+ #
+ # If there are no replicated obs. then return the full vector
+ # pure error ss is zero
+ #
+ shat.rep <- NA
+ shat.pure.error <- NA
+ pure.ss <- 0
+ # if no y's are given then it is assumed that one should use the
+ # yM from the original data used to create the Krig object
+ if (is.null(yM) & is.null(y)) {
+ yM <- out$yM
+ }
+ #
+ # case when yM is passed no calculations are needed
+ #
+ if (!is.null(yM)) {
+ return(list(yM = as.matrix(yM), shat.rep = NA, shat.pure.error = NA,
+ pure.ss = 0))
+ }
+ #
+ # no reps case
+ #
+ if (length(unique(out$rep.info)) == out$N) {
+ return(list(yM = as.matrix(y), shat.rep = NA, shat.pure.error = NA,
+ pure.ss = 0))
+ }
+ #
+ # check that y is the right length
+ #
+ if (length(y) != out$N) {
+ stop(" the new y vector is the wrong length!")
+ }
+ #
+ # case when full y data is passed and replicate means need to be found
+ #
+ if (length(unique(out$rep.info)) < out$N) {
+ #
+ # calculate means by pooling replicated observations but use
+ # the right weighting.
+ #
+ rep.info.aov <- fast.1way(out$rep.info, y, out$weights)[c("means",
+ "MSE", "SSE")]
+ shat.pure.error <- sqrt(rep.info.aov$MSE)
+ shat.rep <- shat.pure.error
+ return(list(yM = rep.info.aov$means, shat.rep = shat.rep,
+ shat.pure.error = shat.pure.error, pure.ss = rep.info.aov$SSE))
+ }
+}
diff --git a/R/MLE.Matern.R b/R/MLE.Matern.R
new file mode 100644
index 0000000..e388b76
--- /dev/null
+++ b/R/MLE.Matern.R
@@ -0,0 +1,106 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+MLE.Matern <- function(x, y, smoothness, theta.grid = NULL,
+ ngrid = 20, verbose = FALSE, niter = 25, tol = 1e-05, Distance = "rdist",
+ m = 2, Dmax = NULL, ...) {
+ # remove missing values and print out a warning
+ bad <- is.na(y)
+ if (sum(bad) > 0) {
+ cat("removed ", sum(bad), " NAs", fill = TRUE)
+ x <- x[!bad, ]
+ y <- y[!bad]
+ }
+ # local function to find distance between locations
+ local.distance.function <- get(Distance)
+ #
+ objective.fn <- function(ltheta, info) {
+ minus.lPLike <- Krig(info$x, info$y, Covariance = "Matern",
+ smoothness = info$smoothness, theta = exp(ltheta),
+ method = "REML", nstep.cv = 80, give.warnings = FALSE,
+ Distance = Distance, m = m, ...)$lambda.est[6, 5]
+ return(minus.lPLike)
+ }
+ # list to pass to the objective function
+ info <- list(x = x, y = y, smoothness = smoothness)
+ #
+ # if grid for ranges is missing use some quantiles of pairwise distances among data.
+    # this will only work if the likelihood at the grid endpoints is smaller
+    # than in the interior (i.e. the profile likelihood is unimodal in theta)
+ if (is.null(theta.grid)) {
+ theta.range <- quantile(local.distance.function(x, x),
+ c(0.03, 0.97))
+ theta.grid <- seq(theta.range[1], theta.range[2], , ngrid)
+ }
+ if (length(theta.grid) == 2) {
+ theta.grid <- seq(theta.grid[1], theta.grid[2], , ngrid)
+ }
+
+ ngrid <- length(theta.grid)
+ sighat <- rhohat <- trA <- theta <- rep(NA, ngrid)
+ minus.REML <- rep(NA, ngrid)
+
+ # grid search
+ for (j in 1:ngrid) {
+ minus.REML[j] <- objective.fn(log(theta.grid[j]), info)
+ }
+ temp <- cbind(theta.grid, -minus.REML)
+ dimnames(temp) <- list(NULL, c("theta", "logProfileLike"))
+ # best point for theta from grid search
+    IMIN <- which.min(minus.REML)
+ if (IMIN == 1 | IMIN == ngrid) {
+ cat("REML at end of search interval:", fill = TRUE)
+
+ return(list(smoothness = smoothness, pars = rep(NA, 3),
+ REML = NA, trA = NA, REML.grid = temp))
+ }
+ # starting triple for golden section search
+ lstart <- log(theta.grid)[IMIN + c(-1, 0, 1)]
+ # golden section search -- this assumes convex minus log likelihood
+ # note that search is in log scale.
+ out <- golden.section.search(lstart[1], lstart[2], lstart[3],
+ f = objective.fn, f.extra = info, niter = niter, tol = tol)$x
+ theta.MLE <- exp(out)
+
+ # one final call to Krig with the theta.MLE value to recover MLEs for rho and sigma
+
+ hold <- Krig(x, y, Covariance = "Matern", smoothness = smoothness,
+ theta = theta.MLE, method = "REML", m = m, Distance = Distance,
+ ...)
+
+ sigma.MLE <- hold$shat.MLE
+ rho.MLE <- hold$rhohat
+ trA <- hold$lambda.est[6, 2]
+ REML <- hold$lambda.est[6, 5]
+ out <- c(rho.MLE, theta.MLE, sigma.MLE)
+ names(out) <- c("rho", "theta", "sigma")
+ # evaluate variogram
+ if (is.null(Dmax)) {
+ Dmax <- (local.distance.function(cbind(range(x[, 1]),
+ range(x[, 2]))))[2, 1]
+ }
+ vg <- list()
+ vg$x <- seq(0, Dmax, , 200)
+ vg$y <- sigma.MLE^2 + rho.MLE * (1 - Matern(vg$x/theta.MLE,
+ smoothness = smoothness))
+ return(list(smoothness = smoothness, theta.MLE = out[2],
+ rho.MLE = out[1], sigma.MLE = out[3], pars = out, REML = -REML,
+ trA = trA, REML.grid = temp, vgram = vg))
+}
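Aside: MLE.Matern ends by evaluating a fitted variogram of nugget-plus-sill form, sigma^2 + rho * (1 - Matern(d/theta)). A minimal Python sketch of this formula (Python used here only as a self-contained check; `matern_half` and `fitted_variogram` are illustrative names, not part of fields), specialized to smoothness nu = 0.5 where the Matern correlation reduces to exp(-d):

```python
import math

def matern_half(d):
    # Matern correlation with smoothness nu = 0.5 reduces to exp(-d)
    return math.exp(-d)

def fitted_variogram(d, sigma2, rho, theta):
    # nugget + sill form used at the end of MLE.Matern:
    # vg$y <- sigma.MLE^2 + rho.MLE * (1 - Matern(d / theta.MLE, ...))
    return sigma2 + rho * (1.0 - matern_half(d / theta))
```

At d = 0 the variogram equals the nugget sigma^2, and at large distances it approaches the sill sigma^2 + rho.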
diff --git a/R/MLESpatialProcess.R b/R/MLESpatialProcess.R
new file mode 100644
index 0000000..11b53b2
--- /dev/null
+++ b/R/MLESpatialProcess.R
@@ -0,0 +1,115 @@
+
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+
+MLESpatialProcess <- function(x, y, weights = rep(1, nrow(x)), Z = NULL,
+ mKrig.args = NULL,
+ cov.function = "stationary.cov",
+ cov.args = list(Covariance = "Matern",
+ smoothness = 1),
+ lambda.start = .5,
+ theta.start = NULL,
+ theta.range = NULL,
+ gridN = 20,
+ optim.args = NULL,
+ na.rm = TRUE,
+ verbose = FALSE,
+ abstol = 1e-4,
+ REML = FALSE,
+ ...) {
+ if( verbose){
+    cat(" MLESpatialProcess extra arguments:", fill = TRUE)
+ print( names( list(...)))
+ }
+  # combine list(...) with cov.args, dropping duplicated names in favor of the ... values
+ ind<- match( names( cov.args), names(list(...) ) )
+ cov.args = c(cov.args[is.na(ind)], list(...))
+
+ ########################################################################
+ # evaluate likelihood for a grid of theta on log scale
+ # maximizing over lambda.
+ #
+ # if range or starting value for range is missing use quantiles of pairwise
+ # distances among data.
+ if( is.null( theta.range) ){
+ if( is.null( cov.args$Distance)){
+ pairwiseD<- dist(x)
+ }
+ else{
+ pairwiseD<- do.call(cov.args$Distance, list(x))
+ pairwiseD<- pairwiseD[col(pairwiseD) > row( pairwiseD) ]
+ }
+ theta.range<- quantile( pairwiseD, c(.02,.97))
+ }
+ thetaGrid<- seq( theta.range[1], theta.range[2], length.out=gridN )
+ #
+ par.grid<- list( theta= thetaGrid)
+ MLEGrid<- mKrigMLEGrid(x, y, weights = weights, Z= Z,
+ mKrig.args = mKrig.args,
+ cov.fun = cov.function,
+ cov.args = cov.args,
+ par.grid = par.grid,
+ lambda = lambda.start,
+ lambda.profile = TRUE,
+ na.rm = na.rm,
+ verbose = verbose,
+ REML = REML)
+ ##################################################################################
+  # refine the MLE for lambda and theta, using the best value of theta from the
+  # grid search when a starting value was not passed.
+ if ( is.null(theta.start) ) {
+ ind<- which.max( MLEGrid$summary[,2] )
+ theta.start <- par.grid$theta[ind]
+ lambda.start<- MLEGrid$lambda.best
+ }
+ MLEJoint <- mKrigMLEJoint(x, y, weights = weights, Z = Z,
+ mKrig.args = mKrig.args,
+ cov.fun = cov.function,
+ cov.args = cov.args,
+ lambda.start = lambda.start,
+ cov.params.start = list(theta=theta.start),
+ optim.args = optim.args,
+ abstol = abstol,
+ na.rm = na.rm,
+ verbose = verbose,
+ REML = REML)
+
+ #####################################################################################
+  # evaluate the likelihood on a grid of log lambda with theta fixed at its MLE
+  # NOTE: lambda.profile = FALSE keeps lambda fixed at each grid value.
+ lambdaGrid<- (10^(seq( -4,4,,gridN) ))*MLEJoint$pars.MLE[1]
+ par.grid<- list( theta= rep(MLEJoint$pars.MLE[2], gridN) )
+ if( verbose){ print( par.grid)}
+ MLEProfileLambda <- mKrigMLEGrid(x, y, weights = weights, Z= Z,
+ cov.fun = cov.function,
+ cov.args = cov.args,
+ mKrig.args = mKrig.args,
+ par.grid = par.grid,
+ lambda = lambdaGrid,
+ lambda.profile = FALSE,
+ na.rm = na.rm,
+ verbose = verbose,
+ REML = REML)
+ return(
+ list( summary= MLEJoint$summary, MLEGrid= MLEGrid, MLEJoint=MLEJoint,
+ MLEProfileLambda=MLEProfileLambda, call=match.call() )
+ )
+}
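Aside: MLESpatialProcess follows a two-stage strategy -- a coarse grid search over theta (mKrigMLEGrid) followed by local refinement of the best grid point (mKrigMLEJoint). A minimal Python sketch of that strategy on a generic one-dimensional objective (illustrative only; `profile_grid_then_refine` is not a fields function, and the refinement here is a simple ternary search standing in for the optim()-based joint maximization):

```python
import math

def profile_grid_then_refine(neg_log_lik, grid):
    # stage 1: coarse grid search (mirrors the mKrigMLEGrid call)
    best = min(grid, key=neg_log_lik)
    # stage 2: local refinement around the best grid point by interval
    # shrinking (stand-in for the optim()-based mKrigMLEJoint step)
    lo, hi = best / 2.0, best * 2.0
    for _ in range(100):
        third = (hi - lo) / 3.0
        m1, m2 = lo + third, hi - third
        if neg_log_lik(m1) < neg_log_lik(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)
```

For a unimodal objective the refinement stage converges to the minimizer inside the bracket around the best grid value.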
diff --git a/R/MLESpatialProcess.fast.R b/R/MLESpatialProcess.fast.R
new file mode 100644
index 0000000..d4fb3a9
--- /dev/null
+++ b/R/MLESpatialProcess.fast.R
@@ -0,0 +1,32 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+
+MLESpatialProcess.fast = function(x, y, lambda.start=.5, theta.start = NULL,
+ cov.function = "stationary.cov",
+ cov.args = list(Covariance = "Matern", smoothness = 1),
+ Distance = "rdist", verbose=FALSE, optim.args=NULL, ...) {
+
+ warning("MLESpatialProcess.fast is deprecated. Use MLESpatialProcess instead.")
+
+  # splice the ... arguments into the argument list with c(); passing
+  # list(...) as a single unnamed argument would not forward them correctly
+  do.call("MLESpatialProcess", c(list(x=x, y=y, lambda.start=lambda.start,
+                                      theta.start=theta.start,
+                                      cov.function=cov.function, cov.args=cov.args,
+                                      Distance=Distance, verbose=verbose,
+                                      optim.args=optim.args), list(...)))
+}
diff --git a/R/MLEfast.R b/R/MLEfast.R
new file mode 100644
index 0000000..6716b23
--- /dev/null
+++ b/R/MLEfast.R
@@ -0,0 +1,134 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+MLE.objective.fn <- function(ltheta, info, value = TRUE) {
+ # marginal process covariance matrix
+ y <- as.matrix(info$y)
+ x <- info$x
+ smoothness <- info$smoothness
+ ngrid <- info$ngrid
+ # number of reps
+ M <- ncol(y)
+
+ Tmatrix <- fields.mkpoly(x, 2)
+ qr.T <- qr(Tmatrix)
+ N <- nrow(y)
+ Q2 <- qr.yq2(qr.T, diag(1, N))
+ ys <- t(Q2) %*% y
+
+ N2 <- length(ys)
+ theta <- exp(ltheta)
+ K <- Matern(rdist(x, x)/theta, smoothness = smoothness)
+ Ke <- eigen(t(Q2) %*% K %*% Q2, symmetric = TRUE)
+ u2 <- t(Ke$vectors) %*% ys
+ # mean over replicates -- mean square for coefficients for
+ # a particular eigenfunction.
+ u2.MS <- c(rowMeans(u2^2))
+
+ D2 <- Ke$values
+ N2 <- length(D2)
+
+
+ # grid of lambda based on spacing of eigenvalues
+ ngrid <- min(ngrid, N2)
+ lambda.grid <- exp(seq(log(D2[1]), log(D2[N2]), , ngrid))
+ trA <- minus.pflike <- rep(NA, ngrid)
+ #grid search followed by golden section
+
+ # -log likelihood
+ temp.fn <- function(llam, info) {
+ lam.temp <- exp(llam)
+        # pull values from the info list; note the list component is named u2
+        # (the original read u2.MS and M from the enclosing frame)
+        u2.MS <- info$u2
+        D2 <- info$D2
+        M <- info$M
+        N2 <- length(u2.MS)
+        # MLE of rho given this lambda
+        rho.MLE <- (sum((u2.MS)/(lam.temp + D2)))/N2
+ # ln determinant
+ lnDetCov <- sum(log(lam.temp + D2))
+ -1 * M * (-N2/2 - log(2 * pi) * (N2/2) - (N2/2) * log(rho.MLE) -
+ (1/2) * lnDetCov)
+ }
+
+ # information list for calling golden section search.
+ info <- list(D2 = D2, u2 = u2.MS, M = M)
+ out <- golden.section.search(f = temp.fn, f.extra = info,
+ gridx = log(lambda.grid), tol = 1e-07)
+
+ minus.LogProfileLike <- out$fmin
+ lambda.MLE <- exp(out$x)
+ rho.MLE <- (sum((u2.MS)/(lambda.MLE + D2)))/N2
+ sigma.MLE <- sqrt(lambda.MLE * rho.MLE)
+ trA <- sum(D2/(lambda.MLE + D2))
+ pars <- c(rho.MLE, theta, sigma.MLE, trA)
+ names(pars) <- c("rho", "theta", "sigma", "trA")
+ if (value) {
+ return(minus.LogProfileLike)
+ }
+ else {
+ return(list(minus.lPlike = minus.LogProfileLike, lambda.MLE = lambda.MLE,
+ pars = pars, mle.grid = out$coarse.search))
+ }
+
+}
+
+
+MLE.Matern.fast <- function(x, y, smoothness, theta.grid = NULL,
+ ngrid = 20, verbose = FALSE, m = 2, ...) {
+ # remove missing values and print out a warning
+ bad <- is.na(y)
+ if (sum(bad) > 0) {
+ cat("removed ", sum(bad), " NAs", fill = TRUE)
+ x <- x[!bad, ]
+ y <- y[!bad]
+ }
+
+ # list to pass to the objective function
+ # NOTE: large ngrid here is very cheap after the eigen decomposition
+ # has been done.
+ info <- list(x = x, y = y, smoothness = smoothness, ngrid = 80)
+
+ # if grid for ranges is missing use some quantiles of
+ # pairwise distances among data.
+ if (is.null(theta.grid)) {
+ theta.range <- quantile(rdist(x, x), c(0.03, 0.97))
+ theta.grid <- seq(theta.range[1], theta.range[2], , ngrid)
+ }
+ if (length(theta.grid) == 2) {
+ theta.grid <- seq(theta.grid[1], theta.grid[2], , ngrid)
+ }
+ else {
+ ngrid <- length(theta.grid)
+ }
+
+ # grid search/golden section search
+ # note that search is in log scale.
+ out <- golden.section.search(f = MLE.objective.fn, f.extra = info,
+ gridx = log(theta.grid))
+
+ theta.MLE <- exp(out$x)
+ REML <- -out$fmin
+ # one final call with the theta.MLE value to recover MLEs for rho and sigma
+ out2 <- MLE.objective.fn(log(theta.MLE), info, value = FALSE)
+ return(list(smoothness = smoothness, pars = out2$pars[1:3],
+ REML = REML, trA = out2$pars[4], REML.grid = cbind(theta.grid,
+ -1 * out$coarse.search[, 2])))
+
+
+}
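Aside: both MLE.Matern and MLE.Matern.fast hand a one-dimensional convex objective (in log theta or log lambda) to golden.section.search. A minimal, self-contained Python sketch of the underlying algorithm (illustrative only; fields' golden.section.search additionally performs a coarse grid search before the golden-section refinement):

```python
import math

def golden_section_search(f, a, b, tol=1e-8):
    # classic golden-section minimization of a unimodal function on [a, b]
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            # minimum lies in [a, d]; reuse c as the new upper probe
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            # minimum lies in [c, b]; reuse d as the new lower probe
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2.0
```

Each iteration shrinks the bracket by the golden ratio while reusing one previous function evaluation, which is why the method needs only one new evaluation per step.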
diff --git a/R/Matern.R b/R/Matern.R
new file mode 100644
index 0000000..323bbc0
--- /dev/null
+++ b/R/Matern.R
@@ -0,0 +1,46 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"Matern" <- function(d, range = 1, alpha = 1/range,
+ smoothness = 0.5, nu = smoothness, phi = 1.0) {
+ #
+ # Matern covariance function transcribed from Stein's book page 31
+ # nu==smoothness, alpha == 1/range
+ #
+    # GeoR parameters map to kappa == smoothness and phi == range
+    # phi is accepted as the marginal variance of the process (see below);
+    # within fields, however, this parameter is "rho" and we recommend
+    # not using phi.
+
+    # check for negative distances
+ if (any(d < 0))
+ stop("distance argument must be nonnegative")
+ d <- d * alpha
+ # avoid sending exact zeroes to besselK
+ d[d == 0] <- 1e-10
+ #
+ # the hairy constant ...
+ con <- (2^(nu - 1)) * gamma(nu)
+ con <- 1/con
+ #
+ # call to Bessel function from R base package
+ #
+ return(phi * con * (d^nu) * besselK(d, nu))
+}
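Aside: the "hairy constant" con = 2^(nu-1) * gamma(nu) and besselK combine so that for nu = 1/2 the Matern covariance reduces to the exponential exp(-d). A Python check of that reduction (illustrative only; it mirrors the R expression `phi * con * d^nu * besselK(d, nu)` and uses the closed form K_{1/2}(d) = sqrt(pi/(2d)) * exp(-d) instead of a general Bessel routine):

```python
import math

def matern(d, nu=0.5, phi=1.0):
    # mirrors the R code: phi * (1/con) * d^nu * besselK(d, nu)
    # with con = 2^(nu - 1) * gamma(nu); only nu = 0.5 is supported here,
    # where besselK has the closed form K_{1/2}(d) = sqrt(pi/(2 d)) * exp(-d)
    assert nu == 0.5, "sketch only implements the nu = 1/2 case"
    if d == 0.0:
        d = 1e-10  # same guard as the R code against exact zeroes
    con = (2.0 ** (nu - 1.0)) * math.gamma(nu)
    bessel_k = math.sqrt(math.pi / (2.0 * d)) * math.exp(-d)
    return phi * (1.0 / con) * (d ** nu) * bessel_k
```

Working through the constants: con = sqrt(pi/2), so (1/con) * d^(1/2) * sqrt(pi/(2d)) * exp(-d) = exp(-d), matching the exponential covariance exactly.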
diff --git a/R/Matern.parameters.R b/R/Matern.parameters.R
new file mode 100644
index 0000000..6aafeb6
--- /dev/null
+++ b/R/Matern.parameters.R
@@ -0,0 +1,44 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+Matern.cor.to.range <- function(d, nu, cor.target = 0.5,
+ guess = NULL, ...) {
+ # define local function for root finding
+ #
+ ftemp <- function(theta, f.extra) {
+ Matern(f.extra$d/theta, nu = f.extra$nu) - f.extra$cor.target
+ }
+    # initial guess uses the exponential (nu = 0.5) closed form
+ if (is.null(guess)) {
+ guess[1] <- guess[2] <- -d/log(cor.target)
+ }
+ # extra info for function
+ f.extra = list(d = d, nu = nu, cor.target = cor.target)
+ # find guesses that are above and below
+ while (ftemp(guess[2], f.extra) < 0) {
+ guess[2] <- guess[2] * 2
+ }
+ while (ftemp(guess[1], f.extra) > 0) {
+ guess[1] <- guess[1]/2
+ }
+ temp <- bisection.search(guess[1], guess[2], f = ftemp, f.extra = f.extra,
+ ...)
+ return(temp$x)
+}
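Aside: Matern.cor.to.range first brackets the root by doubling/halving the exponential-based initial guess -d/log(cor.target), then bisects. A self-contained Python sketch of the same bracket-and-bisect scheme (illustrative only; `cor_to_range` takes the correlation function as an argument rather than fixing it to Matern):

```python
import math

def cor_to_range(d, cor_target, f):
    # find theta with f(d / theta) == cor_target for a decreasing
    # correlation function f, by bracketing and bisection, following
    # the doubling/halving bracket search in Matern.cor.to.range
    g = lambda theta: f(d / theta) - cor_target  # increasing in theta
    lo = hi = -d / math.log(cor_target)          # exponential-based guess
    while g(hi) < 0:    # correlation too small: need a larger range
        hi *= 2.0
    while g(lo) > 0:    # correlation too large: need a smaller range
        lo /= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With f(x) = exp(-x) the answer has the closed form theta = -d/log(cor.target), so the initial guess is already exact; with other correlation functions the bracketing loops do the work.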
diff --git a/R/QTps.R b/R/QTps.R
new file mode 100644
index 0000000..ce733d3
--- /dev/null
+++ b/R/QTps.R
@@ -0,0 +1,103 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+QTps <- function(x, Y, ..., f.start = NULL, psi.scale = NULL,
+ C = 1, alpha = 0.5, Niterations = 100, tolerance = 0.001,
+ verbose = FALSE) {
+ #
+ #
+ good <- !is.na(Y)
+ x <- as.matrix(x)
+ x <- x[good, ]
+ Y <- Y[good]
+ if (any(!good)) {
+ warning(paste(sum(!good), "missing value(s) removed from data"))
+ }
+ if (is.null(f.start)) {
+ f.start <- rep(median(Y), length(Y))
+ }
+ scale.Y <- mad(Y - f.start, na.rm = TRUE)
+ #
+ if (is.null(psi.scale)) {
+ psi.scale = scale.Y * 0.05
+ }
+ #
+ f.hat <- f.start
+ # create Tps object to reuse for iterative fitting
+ Tps.obj <- Tps(x, Y, ...)
+ lambda.method <- Tps.obj$method
+ conv.flag <- FALSE
+ conv.info <- rep(NA, Niterations)
+ for (k in 1:Niterations) {
+ Y.psuedo <- f.hat + C * psi.scale * qsreg.psi((Y - f.hat)/psi.scale,
+ C = C, alpha = alpha)
+ # find predicted for a fixed lambda or estimate a new value
+ f.hat.new <- predict(Tps.obj, y = Y.psuedo)
+ # convergence test
+ test.rmse <- mean(abs(f.hat.new - f.hat))/mean(abs(f.hat))
+ conv.info[k] <- test.rmse
+ if (verbose) {
+ cat(k, test.rmse, fill = TRUE)
+ }
+ if (test.rmse <= tolerance) {
+ conv.flag <- TRUE
+ Number.iterations <- k
+ break
+ }
+ f.hat <- f.hat.new
+ }
+    # report convergence status
+ if (verbose) {
+ if (conv.flag) {
+ cat("Converged at tolerance", tolerance, "in", Number.iterations,
+ "iterations", fill = TRUE)
+ }
+ else {
+ cat("Exceeded maximum number of iterations", Niterations,
+ fill = TRUE)
+ }
+ }
+ # One final complete fit at convergence.
+ f.hat <- f.hat.new
+ Y.psuedo <- f.hat + C * psi.scale * qsreg.psi((Y - f.hat)/psi.scale,
+ C = C, alpha = alpha)
+ obj <- Tps(x, Y.psuedo, ...)
+    # CV residuals based on pseudo-data
+ # Use the linear approximation Y_k - f.cv_k = (Y_k- f_k)/( 1- A_kk)
+ # f.cv_k = f_k/( 1- A_kk) - ( A_kk)Y_k/( 1- A_kk)
+ #
+    # Note: we find f.cv based on pseudo data but then consider its deviation
+    # from the actual data
+ #
+ diag.A <- diag(Krig.Amatrix(obj))
+ f.cv <- obj$fitted.values/(1 - diag.A) - diag.A * Y.psuedo/(1 -
+ diag.A)
+ # leave-one-out estimate of f.hat
+ CV.psuedo <- mean(qsreg.rho(Y - f.cv, alpha = alpha, C = psi.scale))
+ # add extra stuff to the Krig object.
+ Qinfo <- list(yraw = Y, conv.info = conv.info, conv.flag = conv.flag,
+ CV.psuedo = CV.psuedo, psi.scale = psi.scale, alpha = alpha)
+ obj <- c(obj, list(Qinfo = Qinfo))
+ class(obj) <- "Krig"
+ return(obj)
+}
+
+
+
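Aside: the leave-one-out approximation used in QTps, f.cv_k = f_k/(1 - A_kk) - A_kk * Y_k/(1 - A_kk), is the standard identity for linear smoothers (equivalently Y_k - f.cv_k = (Y_k - f_k)/(1 - A_kk)). For simple linear regression, a linear smoother with a known hat-matrix diagonal, it can be checked directly against brute-force refitting. A pure-Python check (illustrative names; not fields code):

```python
def fit_line(xs, ys):
    # ordinary least squares slope and intercept
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    return ybar - b * xbar, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 7.0]
ys = [0.1, 1.3, 1.9, 3.2, 3.8, 7.5]
n = len(xs)
a, b = fit_line(xs, ys)
fitted = [a + b * x for x in xs]
xbar = sum(xs) / n
sxx = sum((x - xbar) ** 2 for x in xs)
# diagonal of the hat ('A') matrix for simple linear regression
akk = [1.0 / n + (x - xbar) ** 2 / sxx for x in xs]

# identity used in QTps: f.cv_k = f_k/(1 - A_kk) - A_kk * Y_k/(1 - A_kk)
f_cv = [f / (1 - h) - h * y / (1 - h) for f, h, y in zip(fitted, akk, ys)]

# brute force: refit leaving out each point and predict it
f_cv_direct = []
for k in range(n):
    ak, bk = fit_line(xs[:k] + xs[k + 1:], ys[:k] + ys[k + 1:])
    f_cv_direct.append(ak + bk * xs[k])
```

For least-squares smoothers the identity is exact, which is why QTps can compute a cross-validation score from a single fit plus the hat-matrix diagonal.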
diff --git a/R/REMLtest.R b/R/REMLtest.R
new file mode 100644
index 0000000..502819f
--- /dev/null
+++ b/R/REMLtest.R
@@ -0,0 +1,209 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+MaternGLS.test <- function(x, y, smoothness = 1.5,
+ init = log(c(1, 0.2, 0.1))) {
+ # some simulations within fields to
+ # study the variability in estimates of the covariance parameters.
+ N <- length(y)
+ Tmatrix <- fields.mkpoly(x, 2)
+ qr.T <- qr(Tmatrix)
+ Q2 <- qr.yq2(qr.T, diag(1, N))
+ nu <- smoothness
+
+ loglmvn <- function(pars, nu, y, x, d) {
+ lrho = pars[1]
+ ltheta = pars[2]
+ lsig2 = pars[3]
+ # print( pars)
+ N <- length(y)
+ M <- (exp(lrho) * Matern(d, range = exp(ltheta), smoothness = nu) +
+ exp(lsig2) * diag(N))
+
+ X <- fields.mkpoly(x, 2)
+ Mi <- solve(M)
+ betahat <- solve(t(X) %*% Mi %*% X) %*% t(X) %*% Mi %*%
+ y
+ res <- y - X %*% betahat
+ cM <- chol(M)
+ lLike <- (N/2) * log(2 * pi) - (1/2) * (2 * sum(log(diag(cM)))) -
+ (1/2) * t(res) %*% Mi %*% res
+
+ ycept <- -lLike
+
+ if ((abs(lrho) > 20) | (abs(ltheta) > 10) | (abs(lsig2) >
+ 40)) {
+ return(ycept + 1000 * sum(abs(abs(pars) - 100)))
+ }
+ else {
+ return(ycept)
+ }
+ }
+
+ d <- rdist(x, x)
+ temp <- optim(init, loglmvn, method = "L-BFGS-B", nu = nu,
+ y = y, x = x, d = d)
+ out <- exp(temp$par)
+
+ return(list(smoothness = smoothness, pars = out, optim = temp))
+}
+
+
+
+MaternGLSProfile.test <- function(x, y, smoothness = 1.5,
+ init = log(c(0.05, 1))) {
+ # some simulations within fields to
+ # study the variability in estimates of the covariance parameters.
+ N <- length(y)
+
+ nu <- smoothness
+
+ loglmvn <- function(pars, nu, y, x, d) {
+ llam = pars[1]
+ ltheta = pars[2]
+ # print( pars)
+ N <- length(y)
+ theta <- exp(ltheta)
+ lambda <- exp(llam)
+ lLike <- mKrig(x, y, theta = theta, Covariance = "Matern",
+ smoothness = nu, lambda = lambda)$lnProfileLike
+ ycept <- -lLike
+ if ((abs(llam) > 20) | (abs(ltheta) > 10)) {
+ return(ycept + 1000 * sum(abs(abs(pars) - 100)))
+ }
+ else {
+ return(ycept)
+ }
+ }
+
+ d <- rdist(x, x)
+ temp <- optim(init, loglmvn, method = "L-BFGS-B", nu = nu,
+ y = y, x = x, d = d)
+ out <- exp(temp$par)
+ rho.MLE <- mKrig(x, y, theta = out[2], Covariance = "Matern",
+ smoothness = nu, lambda = out[1])$rho.MLE
+ sigma2.MLE <- out[1] * rho.MLE
+ return(list(smoothness = smoothness, pars = c(rho.MLE, out[2],
+ sigma2.MLE), optim = temp))
+}
+
+MaternQR.test <- function(x, y, smoothness = 1.5,
+ init = log(c(1, 0.2, 0.1))) {
+ # some simulations within fields to
+ # study the variability in estimates of the covariance parameters.
+ nu <- smoothness
+
+ loglmvn <- function(pars, nu, x, y) {
+ N <- length(y)
+ Tmatrix <- fields.mkpoly(x, 2)
+ qr.T <- qr(Tmatrix)
+ Q2 <- qr.yq2(qr.T, diag(1, N))
+ ys <- t(Q2) %*% y
+ N2 <- length(ys)
+ lrho = pars[1]
+ ltheta = pars[2]
+ lsig2 = pars[3]
+ d <- rdist(x, x)
+
+ A <- (exp(lrho)*Matern(d, range = exp(ltheta),
+ smoothness = nu) + exp(lsig2) * diag(N))
+ A <- t(Q2) %*% A %*% Q2
+ A <- chol(A)
+ w = backsolve(A, ys, transpose = TRUE)
+ ycept <- (N2/2) * log(2 * pi) + sum(log(diag(A))) + (1/2) *
+ t(w) %*% w
+
+        if ((abs(lrho) > 100) | (abs(ltheta) > 100) | (abs(lsig2) >
+            100)) {
+ return(ycept + 1000 * sum(abs(abs(pars) - 100)))
+ }
+ else {
+ return(ycept)
+ }
+ }
+
+
+ temp <- optim(init, loglmvn, method = "L-BFGS-B", nu = nu,
+ x = x, y = y)
+ out <- exp(temp$par)
+ llike <- loglmvn(temp$par, nu, x, y)
+ return(list(smoothness = smoothness, pars = out, llike = llike,
+ optim = temp))
+}
+
+
+MaternQRProfile.test <- function(x, y, smoothness = 1.5,
+ init = log(c(1))) {
+ # some simulations within fields to
+ # study the variability in estimates of the covariance parameters.
+ nu <- smoothness
+ loglmvn <- function(pars, nu, x, y) {
+ ltheta = pars[1]
+ # print( exp(ltheta))
+ ycept <- Krig(x, y, Covariance = "Matern", theta = exp(ltheta),
+ smoothness = nu, method = "REML")$lambda.est[6, 5]
+ # print( c(exp(ltheta),ycept))
+ if ((abs(ltheta) > 100)) {
+ return(ycept + 1000 * sum(abs(abs(pars) - 100)))
+ }
+ else {
+ return(ycept)
+ }
+ }
+
+ temp <- optim(init, loglmvn, method = "L-BFGS-B", nu = nu,
+ x = x, y = y)
+ theta.est <- exp(temp$par[1])
+ out2 <- Krig(x, y, Covariance = "Matern", theta = theta.est,
+ smoothness = nu, method = "REML")
+ # MLE based on reduced degrees of freedom:
+
+ offset <- (out2$N/(out2$N - 3))
+
+ out3 <- c(out2$rho.MLE * offset, theta.est, out2$shat.MLE^2 *
+ offset)
+
+ return(list(obj = out2, smoothness = smoothness, pars = out3,
+ trA = out2$eff.df, optim = temp))
+}
+
+# this function has the correct formula for the REML likelihood
+REML.test <- function(x, y, rho, sigma2, theta, nu = 1.5) {
+ Tmatrix <- fields.mkpoly(x, 2)
+ qr.T <- qr(Tmatrix)
+ N <- length(y)
+ Q2 <- qr.yq2(qr.T, diag(1, N))
+ ys <- t(Q2) %*% y
+ N2 <- length(ys)
+ A <- (rho * Matern(rdist(x, x), range = theta, smoothness = nu) +
+ sigma2 * diag(1, N))
+ A <- t(Q2) %*% A %*% Q2
+ Ac <- chol(A)
+ w <- backsolve(Ac, ys, transpose = TRUE)
+ REML.like <- (N2/2) * log(2 * pi) + (1/2) * 2 * sum(log(diag(Ac))) +
+ (1/2) * t(w) %*% w
+ REML.like <- -1 * REML.like
+ ccoef <- rho * Q2 %*% solve(A) %*% ys
+ return(list(REML.like = REML.like, A = A, ccoef = ccoef,
+ quad.form = t(w) %*% w, rhohat = (t(w) %*% w/N2) * rho,
+ det = 2 * sum(log(diag(Ac))), N2 = N2))
+}
+
+
diff --git a/R/RMprecip.R b/R/RMprecip.R
new file mode 100644
index 0000000..9e05c97
--- /dev/null
+++ b/R/RMprecip.R
@@ -0,0 +1,522 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"RMprecip" <- structure(list(x = structure(list(lon = c(-110.53,
+ -109.53, -109.55, -109.23, -110.1, -109.77, -109.4, -109.1,
+ -109.05, -110.73, -104.65, -103.15, -105.85, -108.53, -105.88,
+ -105.48, -105.52, -107.17, -102.18, -105.27, -102.45, -104.33,
+ -104.83, -106.13, -102.27, -104.22, -105.7, -102.68, -105.23,
+ -104.87, -106.12, -105.28, -104.85, -102.35, -107.52, -106.2,
+ -106.77, -107.97, -108.73, -104.7, -108.57, -107.6, -106.97,
+ -105.68, -106.35, -108.07, -104.88, -106.03, -108.97, -108.5,
+ -107.88, -102.78, -104.57, -105.32, -103.07, -102.83, -105.08,
+ -108.05, -103.8, -104.7, -104.03, -108.75, -108.93, -103.5,
+ -105.7, -107.32, -108.53, -108.58, -105.83, -105.85, -105.68,
+ -105.53, -104.7, -106.33, -105.35, -105.38, -106.97, -107.62,
+ -103.15, -107.25, -107.13, -106, -102.12, -102.3, -102.28,
+ -105.22, -102.68, -102.93, -102.25, -103.52, -105.1, -102.78,
+ -106.4, -103.48, -107.32, -105.48, -105.13, -102.62, -103.22,
+ -106.32, -103.02, -105.07, -105.95, -108.28, -107.47, -108.08,
+ -107.9, -106.75, -108.5, -106.15, -105.6, -103.83, -109.03,
+ -105.02, -108.28, -103.75, -103.7, -107.68, -107.02, -108.35,
+ -107.58, -104.65, -108.07, -104.5, -104.72, -107.1, -108.8,
+ -108.03, -107.75, -107.27, -103.7, -104.08, -104.98, -106.13,
+ -106.02, -106.43, -104.97, -102.52, -102.52, -107.23, -107.67,
+ -106.42, -102.7, -106.83, -103.2, -102.07, -102.58, -105.12,
+ -106.37, -104.12, -106.62, -107.87, -104.48, -104.33, -106.32,
+ -108.73, -107.58, -106.27, -104.8, -102.28, -105.2, -105.48,
+ -105.77, -106.78, -104.48, -102.23, -106.9, -108.73, -102.73,
+ -99.55, -99.77, -101.05, -100.95, -99.65, -101.53, -99.2,
+ -101.38, -99.63, -99.52, -99.73, -100.35, -101.07, -99.33,
+ -100.12, -99.58, -99.73, -100.42, -101.9, -99.57, -100.82,
+ -101.7, -100.48, -99.3, -99.33, -100.62, -100.45, -101.35,
+ -100.48, -99.97, -100.28, -99.4, -99.12, -101.25, -99.1,
+ -100, -101.37, -100.92, -99.57, -99.53, -99.18, -99.57, -101.38,
+ -100.33, -100.87, -100.07, -99.03, -99.9, -100.18, -99.93,
+ -99.83, -100.85, -100.52, -99.57, -99.57, -99.32, -99.3,
+ -100.23, -99.92, -101.77, -101.95, -101.18, -101.8, -101.92,
+ -100.08, -100.9, -101.75, -99.27, -100.18, -100.83, -101.77,
+ -101.77, -99.07, -101.35, -100.17, -99.88, -99.83, -99.42,
+ -101.25, -103.78, -99.87, -102.9, -99.87, -99.13, -100.18,
+ -99.47, -99.83, -101.53, -99.63, -102.08, -99.87, -103.1,
+ -99.68, -99.15, -100.17, -99.7, -103.08, -99.25, -103.42,
+ -102.43, -100.83, -100.5, -102.97, -102.28, -102.2, -99.92,
+ -101.52, -100.08, -102.17, -100.17, -101.93, -100.32, -99.22,
+ -103.88, -103.88, -100.97, -102.68, -102.7, -103.08, -99.37,
+ -101.63, -99.08, -103.67, -101.67, -101.98, -102.63, -102.63,
+ -104.03, -101.55, -99.32, -100.6, -100.68, -100.22, -101.68,
+ -99.4, -103.68, -100.4, -101.33, -99.13, -99.33, -100.7,
+ -100.75, -99.83, -101.72, -99.5, -102.33, -102.43, -101.12,
+ -101.35, -103.32, -100.25, -99.3, -100.65, -102.47, -103.6,
+ -102.98, -99.73, -100.38, -101.23, -99.38, -101.02, -100.97,
+ -100.68, -100.55, -101.17, -101.38, -100.73, -100.12, -104.33,
+ -106.43, -106.62, -106.07, -103.17, -108, -104.1, -105.25,
+ -107.97, -106.45, -105.6, -107.97, -105.38, -106.58, -104.95,
+ -106.32, -104.18, -106.63, -106.97, -107.52, -105.05, -107,
+ -105.27, -108.35, -106.18, -106.73, -106.08, -108.4, -108.78,
+ -105.43, -106.38, -105.77, -107.9, -103.62, -106.68, -107.4,
+ -104.37, -105.15, -105.2, -107.02, -106.3, -107.57, -104.58,
+ -103.37, -108.45, -103.92, -107.62, -104.27, -105.05, -107.87,
+ -103.73, -105.68, -104.43, -104.45, -105.4, -104.2, -103.33,
+ -106.37, -105.97, -108.73, -104.58, -105.97, -107.47, -105.57,
+ -106.58, -107.18, -105.97, -103.68, -104.93, -105.37, -106.77,
+ -108.78, -99.77, -100.53, -102.48, -99.62, -99.38, -99.8,
+ -99.62, -99.12, -100.05, -101.62, -99.4, -99.05, -101.22,
+ -102.97, -99.9, -99.35, -99.5, -99.17, -102.73, -99.63, -100.87,
+ -99.28, -99.38, -99.07, -103.65, -103.82, -103.92, -99.98,
+ -103.32, -103.82, -101.02, -99.32, -101.87, -103.6, -103.72,
+ -103.78, -101.52, -101.23, -103.82, -102.53, -103.47, -100.67,
+ -99.07, -99.43, -101.27, -103.18, -99.47, -99.45, -101.87,
+ -103.58, -103.47, -101.95, -99.87, -101.33, -102.17, -103.77,
+ -101.5, -102.47, -101.68, -102.72, -101.15, -101.65, -100.67,
+ -100.6, -103.45, -100.7, -103.45, -100.42, -103.23, -100.07,
+ -103.27, -99.08, -103.5, -101.6, -100.28, -102.13, -102.45,
+ -100.17, -103.1, -103.28, -102.55, -103.78, -99.45, -102.43,
+ -99.87, -100.48, -101.7, -100.5, -101.45, -102.25, -103,
+ -100.37, -101.37, -102.25, -102.55, -100.32, -101.97, -100.13,
+ -101.4, -102.4, -100.02, -100.28, -100.6, -100.63, -101.47,
+ -100.6, -101.38, -100.8, -100.82, -100.97, -100.25, -101.2,
+ -102.08, -109.07, -110.27, -109.62, -109.48, -109.55, -110.73,
+ -109.83, -109.75, -109.4, -109.08, -109.3, -110.4, -109.42,
+ -109.87, -110.17, -110.17, -110.72, -110.87, -109.07, -109.37,
+ -109.22, -109.67, -109.87, -109.55, -109.33, -110.07, -109.98,
+ -110.05, -109.68, -110.8, -109.98, -109.52, -110.68, -110.92,
+ -104.1, -104.65, -107.65, -108.05, -106.38, -110.93, -110.12,
+ -106.73, -108.52, -107.73, -110.43, -108.2, -106.68, -109.18,
+ -107.53, -109.28, -104.32, -106.33, -106.13, -104.82, -104.82,
+ -110.03, -109.08, -106.38, -109.07, -108.93, -104.2, -110.17,
+ -107.27, -108.6, -104.7, -105.12, -108.93, -105.4, -105.38,
+ -109.63, -104.98, -105.87, -106.42, -108.37, -106.62, -110.95,
+ -109.42, -107.48, -105.47, -105.88, -108.05, -108.95, -105.17,
+ -104.6, -110.77, -107.83, -106.63, -110.53, -110.2, -104.17,
+ -110.4, -108.73, -105.68, -105.62, -106.28, -106.83, -108.38,
+ -104.48, -106.2, -104.93, -110.72, -110.58, -110.35, -107.47,
+ -104.22, -110.83, -108.68, -104.48, -104.15, -109.87, -107,
+ -108.75, -107.92, -107.2, -104.28, -108.38, -104.93, -109.07,
+ -110.97, -106.82, -106.97, -106.83, -108.12, -110.67, -108.8,
+ -104.35, -108.98, -105.38, -107.42, -107.32, -108.2, -104.22,
+ -110.42, -104.62, -104.77, -107.98, -105.33, -104.95, -107.97,
+ -107.97, -110.7, -104.3, -104.77, -107.65, -106.22, -99.97,
+ -99.85, -100.73, -108.25, -99.37, -99.5, -100.98, -109.37,
+ -110.07, -106.17, -100.28, -107.8, -99.92, -104.433, -105.867,
+ -105.583, -105.82, -105.569, -105.887, -105.645, -105.544,
+ -105.761, -105.2, -105.067, -105.25, -105.664, -106.983,
+ -106.317, -106.083, -106.933, -106.233, -106.5, -106.367,
+ -106.9, -106.283, -106.781, -106.094, -106.67, -106.046,
+ -106.969, -106.677, -106.612, -106.608, -106.615, -106.542,
+ -106.59, -106.953, -106.548, -106.657, -106.321, -106.263,
+ -106.392, -107.133, -107.583, -107.85, -107.433, -107.533,
+ -107.3, -107.067, -107.067, -107.183, -107.267, -107.167,
+ -107.267, -107.167, -107.058, -107.357, -107.805, -107.689,
+ -107.675, -107.507, -107.512, -109.017, -108.833, -108.9,
+ -108.917, -108.058, -108.382, -108.195, -109.917, -109.65,
+ -109.233, -109.567, -109.783, -109.8, -109.667, -109.75,
+ -109.817, -109.95, -109.75, -109.317, -109.45, -109.167,
+ -109.1, -109.267, -109.55, -109.883, -109.967, -109.667,
+ -109.517, -109.267, -109.483, -110, -110.533, -110.15, -110.667,
+ -110.833, -110.217, -110.433, -110.05, -110.017, -110.133,
+ -110.2, -110.133, -110.917, -110.6, -110.917, -110.8, -110.533,
+ -110.583, -110.667, -110.683, -110.817, -110.683, -110.817,
+ -110.2, -110.483, -110.433, -110.683, -110.5, -110.617, -110.467,
+ -110.583, -110.8, -110, -110.883, -110.95, -110.75, -110.983),
+ lat = c(36.68, 36.15, 35.72, 36.42, 36.98, 35.07, 35.2, 36.9,
+ 35.68, 35.02, 37.38, 40.15, 37.43, 39.58, 39, 39.4, 37.43,
+ 38.47, 39.62, 40, 38.45, 40.65, 39.98, 38.83, 39.3, 39.7,
+ 39.65, 37.08, 38.42, 39.37, 37.75, 39.22, 39.65, 38.82,
+ 38.4, 39.38, 38.43, 39.25, 39.1, 38.82, 37.35, 40.45,
+ 38.87, 37.98, 37.67, 38.75, 39.77, 39.63, 40.23, 37.47,
+ 37.28, 38.48, 39.08, 39.63, 39.28, 40.67, 40.58, 37.23,
+ 40.22, 38.68, 38.13, 39.17, 38.7, 39.28, 39.7, 39.52,
+ 39.12, 39.07, 40.27, 40.25, 39.47, 37.72, 40.42, 39.88,
+ 39.95, 38.68, 38.53, 40.37, 38.45, 40.48, 37.77, 40.97,
+ 38.05, 40.58, 39.7, 39.57, 39.65, 38.07, 41, 38.8, 39.5,
+ 38.77, 40.07, 37.72, 38.05, 38.92, 39.75, 38.08, 38.07,
+ 39.23, 40.48, 40.17, 37.17, 37.35, 40.03, 40.52, 40.03,
+ 39.37, 37.2, 37.57, 39.65, 40.58, 37.82, 39.9, 38.13,
+ 38.22, 38.52, 38.02, 37.27, 39.1, 38.87, 39.52, 38.03,
+ 38.28, 38.27, 40.23, 40.08, 37.68, 38.15, 37.72, 38.03,
+ 38.87, 38.85, 38.08, 38.53, 38.4, 39.38, 40.93, 40.85,
+ 39.57, 37.8, 40.48, 37.38, 40.5, 40.62, 37.3, 39.3, 39.43,
+ 39.25, 38.42, 38.82, 37.95, 37.17, 37.25, 39.08, 38.37,
+ 37.37, 40.73, 37.62, 37.38, 40.42, 38.13, 39.9, 37.48,
+ 37, 40.07, 40.15, 37.55, 40.12, 38.47, 37.2, 39.8, 39.63,
+ 37.87, 39.62, 38.52, 39.35, 37.55, 38.12, 38.8, 37.8,
+ 39.38, 37.27, 38.9, 39.32, 39.63, 39.62, 37, 38.93, 37.98,
+ 39.37, 38.97, 37.6, 38.87, 38.6, 39.35, 37.18, 38.72,
+ 38.25, 38.07, 37.93, 39.67, 37.93, 38.18, 39.62, 38.48,
+ 37.05, 39.67, 39.95, 38.67, 38.58, 39.78, 37.28, 39.27,
+ 39.35, 39.18, 38.45, 39.83, 39.82, 39.7, 39.13, 39.83,
+ 37.82, 39.23, 39.73, 39.23, 39.07, 38.63, 37.33, 37.23,
+ 38.9, 39.77, 39.85, 39.18, 38.48, 38.9, 39.43, 39.47,
+ 37.48, 37.98, 38.47, 37.82, 37.58, 38.65, 39.02, 39.17,
+ 39.42, 39.07, 42.42, 42.55, 42.1, 41.62, 41.42, 41.42,
+ 40.37, 40.13, 40.05, 40.52, 41.07, 41.95, 41.67, 41.42,
+ 41.78, 40.27, 40.7, 42.83, 41.55, 42.7, 41.75, 40.22,
+ 40.67, 41.42, 42.05, 42.27, 40.48, 40.42, 40.68, 42.8,
+ 40.93, 40.02, 41.9, 40.08, 41.65, 42.68, 40.52, 42.68,
+ 42.5, 42.35, 40.43, 40.52, 40.7, 41.25, 41.22, 40.57,
+ 41.5, 41.15, 41.92, 40.85, 41.23, 40.2, 40.42, 40.38,
+ 42.92, 40.93, 41.95, 40.68, 42.27, 40.07, 42.6, 41.13,
+ 41.07, 41.08, 41.13, 40.13, 41.4, 41.3, 40.35, 41.13,
+ 41.22, 42.07, 40.32, 40.35, 42.72, 41.87, 41.13, 42.82,
+ 40.53, 40.13, 41.77, 40.18, 41.55, 42.58, 42.88, 40.83,
+ 40.42, 40.75, 40.12, 36.18, 36.23, 35.05, 36.1, 35.9,
+ 36.83, 35.53, 36.3, 36.67, 36.73, 36.73, 36.07, 36.13,
+ 36.92, 36.47, 35.63, 35.4, 35.23, 36.02, 35.1, 35.18,
+ 36.93, 36.57, 35.05, 36.33, 36.6, 35.98, 36.73, 35.52,
+ 35.9, 36.33, 35.58, 35.17, 36.6, 35.77, 35.03, 36.98,
+ 35.65, 35.53, 36.33, 35.88, 36.23, 36.55, 35.6, 35.33,
+ 35.82, 36.82, 35.07, 36.18, 36.32, 36.3, 35.58, 36.92,
+ 36.88, 36.7, 35.95, 35.12, 35.17, 35.65, 36.77, 36.37,
+ 35.17, 35.93, 36.42, 36.72, 35.8, 36.65, 35.2, 35.82,
+ 35.27, 35.97, 35.1, 36.13, 36.82, 36.73, 36.83, 35.42,
+ 35.2, 36.38, 36.77, 36.85, 36.6, 35.6, 35, 36.87, 36.9,
+ 36.72, 35.87, 35.13, 36.23, 36.93, 35.27, 36.82, 36.17,
+ 36.45, 43.5, 43.07, 44.7, 45, 44.52, 43.48, 44.22, 43.58,
+ 43.73, 43.97, 43.77, 44.38, 44, 44.83, 45, 43.3, 44.3,
+ 44.4, 44.25, 44.07, 43.23, 43.17, 43.83, 44.52, 44.87,
+ 44.33, 43.93, 43.43, 43.75, 43.92, 44.62, 43.43, 44.35,
+ 43.47, 43.23, 43.23, 44.9, 44.07, 44.45, 43.3, 43.12,
+ 43.88, 43.88, 44.73, 44.45, 43.18, 44.7, 43.4, 44.88,
+ 44.07, 44.05, 44.38, 44.62, 43.45, 43.97, 44.15, 44.12,
+ 44.7, 44.48, 44.25, 44.07, 43.38, 43.5, 35.23, 36.47,
+ 35.65, 35.53, 35.65, 35.92, 35.12, 36.23, 36.02, 36.43,
+ 35.85, 36.43, 36.25, 35.88, 36.12, 36.23, 35.23, 35.7,
+ 36.05, 36.1, 35.33, 36.4, 36.1, 36.45, 35.23, 36.2, 36.35,
+ 40.85, 40.32, 38.62, 37.62, 37.28, 37.52, 38.45, 38.15,
+ 38.65, 37.72, 38.8, 40.17, 40.93, 40.28, 39, 38.25, 38.37,
+ 39.73, 37.38, 40.37, 38.3, 40.55, 37.15, 38.58, 37.87,
+ 40.2, 37.62, 40.42, 40.08, 39.62, 40.3, 40.45, 39.55,
+ 42.75, 41.42, 41.15, 41.03, 44.38, 42.63, 42.88, 42.53,
+ 44.13, 41.58, 43.65, 43.23, 43.42, 44.35, 44.5, 44.77,
+ 43.37, 41.05, 42.92, 41.3, 41.15, 41.75, 41.43, 44.98,
+ 44.62, 44.53, 44.38, 44.93, 43.42, 44.87, 44.88, 44.58,
+ 44.12, 43.23, 42.18, 42.77, 43.57, 43.42, 44.47, 41.7,
+ 44.52, 41.18, 41.27, 42.12, 42.83, 44.28, 42.87, 44.5,
+ 44.7, 41.15, 44.68, 43.47, 42.5, 43.73, 41.8, 42.27,
+ 41.63, 44.55, 42.82, 41.32, 41.35, 44.85, 42.18, 44.85,
+ 42.75, 41.9, 44.27, 43.67, 43.85, 41.27, 42.35, 43.85,
+ 44.47, 43.25, 41.63, 41.17, 42.87, 43.02, 44.78, 44.2,
+ 41.8, 43.25, 43.02, 43.6, 41.6, 41.82, 41.45, 44.77,
+ 44.83, 43.23, 44.13, 42.47, 44.4, 44.05, 41.77, 44.07,
+ 43.77, 43.65, 42.08, 44.92, 44.1, 43.93, 41.68, 44.63,
+ 42.12, 44.02, 43.97, 44.97, 41.93, 38.68, 37.38, 40.03,
+ 37.77, 39.35, 39.47, 36.7, 35.17, 35.05, 35.57, 44.33,
+ 41.98, 42.37, 35.43, 44.55, 35.42, 44.475, 42.433, 42.283,
+ 40.414, 40.207, 40.532, 40.311, 40.035, 39.916, 37.209,
+ 37.331, 37, 35.829, 44.25, 42.733, 42.567, 41.167, 41.367,
+ 41.333, 41.333, 41, 41.467, 40.534, 40.347, 40.08, 40.875,
+ 40.848, 40.537, 39.075, 39.298, 39.317, 39.088, 38.82,
+ 38.894, 37.379, 36.956, 36.512, 36.716, 35.922, 44.167,
+ 44.683, 44.8, 44.5, 44.783, 44.567, 44.4, 43.883, 43.633,
+ 43.517, 41.117, 41.05, 41.3, 40.167, 39.765, 37.651,
+ 37.749, 37.934, 37.485, 37.714, 43.667, 42.567, 42.7,
+ 42.583, 39.058, 38.418, 37.892, 44.733, 44.8, 44.3, 44.783,
+ 44.65, 44.383, 43.7, 43.5, 43.933, 43.117, 43, 43.867,
+ 43.267, 43.033, 42.867, 42.65, 40.717, 40.617, 40.9,
+ 40.733, 39.317, 38.483, 37.8, 45, 44.717, 44.483, 44.2,
+ 44.133, 44.15, 43.933, 43.75, 43.25, 43.167, 43.133,
+ 43.383, 43.517, 42.95, 42.517, 42.25, 42.467, 42.767,
+ 42.533, 42.3, 42.817, 42.15, 42.517, 40.917, 40.95, 40.6,
+ 40.55, 40.917, 40.75, 40.717, 40.583, 40.867, 40.767,
+ 40.8, 40.683, 39.9, 39.967)), .Names = c("lon", "lat"),
+ row.names = c("020750", "021248", "023303", "025129", "025665",
+ "026190", "027488", "028468", "029410", "029439", "050102",
+ "050109", "050130", "050214", "050263", "050454", "050776",
+ "050797", "050834", "050848", "050895", "050945", "050950",
+ "051071", "051121", "051179", "051186", "051268", "051294",
+ "051401", "051458", "051528", "051547", "051564", "051609",
+ "051660", "051713", "051741", "051772", "051778", "051886",
+ "051932", "051959", "051964", "052184", "052192", "052220",
+ "052281", "052286", "052326", "052432", "052446", "052494",
+ "052790", "052932", "052944", "053005", "053016", "053038",
+ "053063", "053079", "053146", "053246", "053258", "053261",
+ "053359", "053488", "053489", "053496", "053500", "053530",
+ "053541", "053553", "053592", "053629", "053656", "053662",
+ "053738", "053828", "053867", "053951", "054054", "054076",
+ "054082", "054242", "054293", "054380", "054388", "054413",
+ "054444", "054452", "054603", "054664", "054726", "054734",
+ "054742", "054762", "054770", "054834", "054885", "054945",
+ "055116", "055322", "055327", "055414", "055446", "055484",
+ "055507", "055531", "055706", "055797", "055922", "055970",
+ "055984", "056012", "056131", "056136", "056203", "056258",
+ "056266", "056306", "056326", "056524", "056740", "056765",
+ "056797", "056832", "057017", "057020", "057050", "057167",
+ "057287", "057309", "057337", "057370", "057460", "057510",
+ "057513", "057515", "057618", "057656", "057848", "057866",
+ "057936", "057950", "057992", "058008", "058022", "058064",
+ "058157", "058184", "058204", "058429", "058434", "058501",
+ "058560", "058582", "058756", "058781", "058793", "058839",
+ "058931", "059175", "059181", "059216", "059243", "059265",
+ "059275", "059295", "140135", "140365", "140439", "140441",
+ "140676", "140836", "140865", "141029", "141104", "141141",
+ "141383", "141522", "141699", "141704", "141730", "141999",
+ "142086", "142213", "142432", "142452", "142980", "143153",
+ "143175", "143239", "143527", "143554", "143837", "143855",
+ "144073", "144087", "144161", "144333", "144357", "144464",
+ "144530", "144642", "144665", "144695", "144775", "144807",
+ "144821", "145115", "145127", "145171", "145355", "145483",
+ "145628", "145692", "145787", "145852", "145856", "145888",
+ "145906", "145920", "146192", "146374", "146435", "146637",
+ "146685", "146808", "146813", "147049", "147093", "147095",
+ "147140", "147271", "147397", "147832", "147904", "147922",
+ "148038", "148235", "148245", "148287", "148323", "148495",
+ "148498", "148648", "148988", "250030", "250050", "250130",
+ "250245", "250320", "250355", "250427", "250640", "250760",
+ "250810", "250865", "251130", "251145", "251200", "251345",
+ "251415", "251450", "251575", "251835", "251973", "252000",
+ "252065", "252100", "252145", "252645", "252647", "252690",
+ "252741", "252790", "253355", "253365", "253515", "253540",
+ "253595", "253605", "253615", "253690", "253710", "253715",
+ "253755", "253910", "254110", "254335", "254440", "254455",
+ "254604", "254865", "254900", "255020", "255090", "255250",
+ "255310", "255311", "255388", "255470", "255525", "255590",
+ "255655", "255702", "255780", "255925", "256065", "256075",
+ "256167", "256200", "256365", "256385", "256390", "256480",
+ "256585", "256880", "256970", "257002", "257110", "257415",
+ "257665", "257830", "258090", "258215", "258255", "258455",
+ "258628", "258650", "258755", "258760", "258920", "259020",
+ "259115", "259325", "290022", "290041", "290234", "290245",
+ "290377", "290692", "290858", "291000", "291063", "291180",
+ "291630", "291647", "291656", "291664", "291813", "291982",
+ "292030", "292100", "292241", "292250", "292510", "292608",
+ "292700", "292785", "292820", "292837", "293031", "293340",
+ "293422", "293488", "293511", "293586", "293682", "293706",
+ "294369", "294719", "294742", "294856", "294862", "294960",
+ "295084", "295290", "295490", "295516", "295560", "295937",
+ "296061", "296115", "296275", "296465", "296619", "296676",
+ "297279", "297280", "297323", "297638", "297867", "298015",
+ "298085", "298284", "298501", "298518", "298524", "298668",
+ "298845", "299031", "299085", "299156", "299330", "299496",
+ "299820", "299897", "340332", "340593", "340908", "341243",
+ "342849", "342944", "343070", "343358", "343489", "343628",
+ "343871", "344204", "344298", "344766", "345045", "345090",
+ "346035", "346139", "347534", "347952", "349017", "349172",
+ "349760", "390043", "390236", "390559", "390565", "390760",
+ "391124", "391246", "391539", "391621", "391972", "392087",
+ "392207", "392231", "392446", "392468", "392557", "392647",
+ "393069", "393076", "393217", "393452", "393574", "393775",
+ "393832", "393838", "393857", "393868", "394007", "394184",
+ "394516", "394596", "394630", "394834", "394983", "395154",
+ "395285", "395325", "395506", "395544", "395620", "395638",
+ "395870", "395891", "396054", "396170", "396212", "396292",
+ "396304", "396335", "396427", "396552", "396597", "396636",
+ "396736", "396790", "396937", "396947", "397073", "397882",
+ "397992", "398911", "399367", "399442", "410211", "410944",
+ "410958", "411000", "411033", "411412", "411778", "411946",
+ "412240", "412282", "412617", "413225", "413787", "413981",
+ "414140", "415247", "415770", "415875", "416070", "416477",
+ "416785", "416950", "416952", "416953", "418236", "418523",
+ "418692", "420050", "420074", "420336", "420738", "420788",
+ "421020", "421163", "421168", "421241", "421308", "422150",
+ "422253", "422864", "422996", "423418", "423600", "423611",
+ "423836", "424100", "424342", "424947", "425268", "425582",
+ "425733", "425805", "425969", "426053", "426123", "426568",
+ "427026", "427395", "429111", "429368", "480027", "480080",
+ "480270", "480484", "480540", "480552", "480603", "480695",
+ "480740", "480761", "480778", "480865", "481000", "481165",
+ "481175", "481220", "481284", "481547", "481570", "481610",
+ "481675", "481730", "481736", "481775", "481816", "481840",
+ "481850", "481905", "482375", "482399", "482415", "482466",
+ "482580", "482595", "482680", "482685", "482715", "482725",
+ "482881", "482995", "483031", "483045", "483100", "483170",
+ "483801", "483855", "483950", "484080", "484411", "484442",
+ "484760", "484910", "484925", "485055", "485105", "485252",
+ "485260", "485345", "485390", "485415", "485435", "485506",
+ "485525", "485770", "485830", "486120", "486395", "486428",
+ "486440", "486555", "486595", "486660", "486845", "487115",
+ "487200", "487240", "487260", "487376", "487388", "487473",
+ "487533", "487555", "487760", "487810", "487845", "487955",
+ "487990", "488155", "488160", "488209", "488315", "488385",
+ "488705", "488758", "488808", "488852", "488858", "488875",
+ "488995", "489025", "489205", "489207", "489459", "489580",
+ "489615", "489770", "489785", "489905", "489925", "053002",
+ "054934", "059096", "142164", "143665", "146787", "293142",
+ "347565", "349668", "416776", "481855", "483396", "488192",
+ "419662", "488124", "348652", "04E01S", "05G04S", "05G05S",
+ "05J10S", "05J18S", "05J37S", "05J39S", "05J42S", "05K06S",
+ "05M03S", "05M07S", "05N16S", "05P02S", "06E03S", "06G01S",
+ "06G02S", "06H09S", "06H13S", "06H19S", "06H20S", "06H22S",
+ "06H23S", "06J01S", "06J05S", "06J06S", "06J12S", "06J15S",
+ "06J29S", "06K04S", "06K06S", "06K30S", "06K40S", "06L02S",
+ "06L11S", "06M23S", "06N03S", "06N04S", "06N14S", "06P01S",
+ "07E06S", "07E18S", "07E21S", "07E23S", "07E33S", "07E34S",
+ "07E36S", "07F01S", "07F02S", "07F03S", "07H03S", "07H04S",
+ "07H05S", "07J04S", "07K12S", "07M05S", "07M12S", "07M27S",
+ "07M31S", "07M32S", "08F01S", "08G03S", "08G07S", "08G09S",
+ "08K04S", "08L02S", "08M07S", "09E07S", "09E08S", "09E09S",
+ "09E10S", "09E11S", "09E13S", "09F04S", "09F08S", "09F18S",
+ "09F21S", "09F23S", "09F24S", "09F25S", "09F27S", "09G03S",
+ "09G09S", "09J01S", "09J05S", "09J08S", "09J16S", "09K01S",
+ "09L03S", "09M02S", "10D07S", "10E03S", "10E06S", "10E09S",
+ "10E15S", "10E17S", "10F02S", "10F09S", "10F15S", "10F16S",
+ "10F17S", "10F19S", "10F23S", "10G02S", "10G08S", "10G12S",
+ "10G13S", "10G15S", "10G20S", "10G22S", "10G23S", "10G24S",
+ "10G25S", "10J01S", "10J04S", "10J10S", "10J18S", "10J20S",
+ "10J25S", "10J26S", "10J30S", "10J35S", "10J43S", "10J44S",
+ "10J52S", "10K01S", "10K02S"), class = "data.frame"),
+ elev = c(2196, 1710, 1937, 1976, 1696, 1756, 1806, 1580,
+ 2059, 1489, 1938, 1385, 2298, 2074, 2724, 2358, 2361,
+ 2324, 1113, 1672, 1199, 1488, 1519, 2434, 1272, 1586,
+ 3056, 1312, 1625, 1906, 2339, 2097, 1721, 1295, 2208,
+ 3514, 2438, 1823, 1763, 1888, 1885, 1963, 2705, 2474,
+ 2402, 1562, 1615, 2763, 1805, 2120, 2012, 1284, 2208,
+ 2135, 1525, 1295, 1525, 2321, 1320, 1693, 1324, 1366,
+ 1495, 1708, 2614, 1800, 1479, 1409, 2556, 2556, 2650,
+ 2475, 1418, 2367, 2425, 2499, 2329, 1903, 1382, 1943,
+ 2743, 2365, 1033, 1137, 1208, 2146, 1281, 1162, 1058,
+ 1559, 1677, 1305, 2233, 1292, 2711, 2593, 1718, 1106,
+ 1186, 3032, 1339, 1510, 2344, 2147, 2377, 1806, 1903,
+ 2379, 2123, 2339, 3240, 1510, 1976, 1635, 2139, 1312,
+ 1453, 2355, 2169, 1440, 1733, 1922, 2220, 1415, 1480,
+ 2464, 1610, 2690, 2133, 2898, 1271, 1833, 2758, 2345,
+ 2182, 2583, 1861, 1092, 1216, 1806, 2873, 2532, 1396,
+ 2085, 1202, 1128, 1342, 1780, 3049, 1525, 2809, 2643,
+ 1838, 1753, 2837, 1531, 2332, 2471, 1897, 1312, 1586,
+ 2396, 2761, 3243, 2312, 1077, 2405, 2092, 1260, 634,
+ 600, 881, 888, 717, 1052, 613, 1043, 726, 659, 656, 802,
+ 967, 635, 756, 644, 631, 833, 1104, 647, 866, 1115, 805,
+ 683, 613, 869, 824, 946, 775, 692, 814, 656, 519, 914,
+ 608, 708, 1007, 864, 595, 631, 613, 653, 1028, 763, 921,
+ 705, 558, 689, 799, 713, 719, 927, 774, 686, 695, 581,
+ 656, 812, 766, 1034, 1077, 903, 1025, 1098, 759, 905,
+ 1049, 546, 763, 888, 994, 1101, 625, 930, 796, 747, 714,
+ 567, 1013, 1357, 769, 1217, 793, 668, 823, 714, 658,
+ 903, 769, 1025, 763, 1117, 762, 664, 689, 720, 1007,
+ 689, 1119, 1165, 790, 829, 1302, 1196, 1214, 763, 939,
+ 811, 1083, 788, 991, 824, 610, 1360, 1478, 924, 1180,
+ 1160, 1302, 707, 999, 656, 1451, 1007, 1083, 1061, 1168,
+ 1235, 976, 688, 771, 836, 729, 991, 705, 1244, 860, 1052,
+ 573, 680, 850, 924, 814, 982, 601, 1034, 1168, 842, 933,
+ 1337, 820, 709, 781, 1138, 1209, 1247, 744, 756, 854,
+ 686, 811, 997, 894, 787, 952, 897, 857, 702, 1842, 1952,
+ 1619, 1731, 1373, 1720, 1372, 2547, 1766, 2432, 2336,
+ 1870, 2455, 2393, 1993, 1695, 1293, 1557, 2105, 1879,
+ 1568, 2065, 2516, 2202, 2095, 2074, 1705, 1577, 1973,
+ 2516, 2105, 2288, 1989, 1827, 1909, 1781, 2257, 2092,
+ 1935, 2214, 2233, 2196, 1845, 1344, 2438, 1693, 1760,
+ 1394, 2339, 2098, 1723, 2105, 2114, 2025, 2644, 1793,
+ 1289, 2138, 2207, 1510, 1805, 1945, 2025, 2129, 2278,
+ 2043, 2464, 1245, 1922, 1769, 2486, 1965, 750, 751, 1263,
+ 547, 598, 605, 671, 470, 677, 1009, 555, 473, 913, 1326,
+ 644, 622, 531, 568, 1305, 549, 825, 692, 580, 512, 1083,
+ 918, 976, 491, 994, 1864, 698, 506, 736, 1623, 1388,
+ 1847, 637, 735, 1049, 811, 1007, 484, 524, 658, 909,
+ 1007, 576, 570, 808, 1522, 1085, 744, 518, 659, 869,
+ 1601, 753, 991, 920, 827, 573, 714, 790, 854, 1571, 707,
+ 878, 506, 1019, 567, 903, 488, 1382, 673, 526, 741, 851,
+ 552, 981, 1052, 844, 1192, 558, 708, 601, 664, 1100,
+ 869, 958, 973, 1266, 714, 1037, 1165, 1218, 775, 1114,
+ 854, 967, 1196, 784, 741, 778, 840, 976, 885, 1052, 897,
+ 912, 915, 717, 946, 1126, 1659, 1955, 1259, 1841, 1315,
+ 1165, 1800, 1537, 1432, 2068, 1257, 1682, 1911, 1539,
+ 1241, 2012, 1313, 1860, 1647, 1446, 2048, 1945, 1296,
+ 1232, 2156, 1549, 1983, 1830, 1418, 1731, 1557, 1603,
+ 1646, 1867, 1629, 1833, 1903, 1170, 1832, 1897, 2080,
+ 1510, 2050, 1720, 1983, 1467, 1430, 1572, 2501, 1873,
+ 1644, 1618, 2461, 1868, 1617, 2150, 1229, 1196, 1528,
+ 1623, 1088, 2489, 1199, 1251, 1177, 1312, 1699, 1891,
+ 1470, 2121, 1327, 1244, 2169, 1354, 2254, 2080, 2010,
+ 1973, 1391, 1528, 1156, 1460, 2040, 1147, 1904, 1928,
+ 1421, 2120, 2013, 1400, 2368, 1698, 2215, 2175, 1281,
+ 1830, 1168, 1551, 2001, 1287, 1973, 2072, 2074, 1925,
+ 1315, 2248, 1658, 1495, 1579, 2187, 1818, 1332, 1199,
+ 2067, 1186, 1509, 1369, 2055, 1931, 2070, 1203, 1143,
+ 1470, 2098, 2400, 1449, 1964, 1861, 1464, 1476, 1326,
+ 1249, 1912, 1290, 1458, 2080, 1101, 1414, 1237, 1273,
+ 1899, 1290, 1789, 2464, 2332, 791, 653, 894, 1717, 541,
+ 528, 985, 1793, 1976, 2185, 763, 1305, 662, 1988, 2553,
+ 2409, 3262, 2622, 3085, 2896, 3021, 2951, 3201, 3049,
+ 3232, 2561, 2549, 2393, 2591, 2820, 3116, 2573, 3088,
+ 2729, 3064, 2561, 2909, 2707, 2957, 2652, 3201, 3232,
+ 2652, 2927, 3232, 2927, 3098, 3354, 2561, 2835, 3049,
+ 2896, 2890, 2851, 2860, 2921, 2402, 2707, 3006, 2500,
+ 2366, 2463, 2485, 2268, 2701, 2774, 3317, 2707, 3201,
+ 2988, 3317, 3537, 2736, 2756, 2652, 3043, 3049, 2866,
+ 2927, 2866, 2332, 2671, 2828, 2805, 2982, 2668, 2857,
+ 2546, 2543, 2866, 2912, 2936, 2628, 3079, 2768, 2662,
+ 2896, 2774, 2866, 2515, 3003, 2622, 2241, 2466, 2567,
+ 2393, 2215, 2817, 2143, 2921, 2360, 2512, 2418, 2668,
+ 2500, 2713, 2317, 2494, 2457, 2591, 2744, 2873, 2576,
+ 2390, 2317, 2790, 2896, 3079, 2409, 3079, 3323, 3329,
+ 3232, 2759, 3140, 2774, 3037, 2774, 2607), y = c(81,
+ 63, 36, 46, 33, 40, 97, 99, 58, 32, 78, 88, 23, 61, 84,
+ 179, 45, 41, 77, 134, 139, 110, 61, 43, 106, 122, 106,
+ 109, 113, 139, 45, 151, 90, 109, 58, 89, 49, 58, 45,
+ 119, 45, 39, 64, 61, 82, 49, 105, 51, 76, 62, 101, 193,
+ 172, 78, 74, 57, 130, 44, 46, 114, 127, 34, 66, 79, 101,
+ 42, 68, 46, 97, 70, 99, 73, 76, 72, 116, 73, 48, 44,
+ 180, 78, 69, 70, 225, 34, 125, 195, 91, 144, 57, 85,
+ 85, 129, 84, 104, 71, 47, 100, 191, 59, 57, 57, 74, 37,
+ 81, 102, 69, 84, 67, 48, 59, 114, 95, 82, 81, 83, 89,
+ 154, 65, 73, 44, 34, 124, 15, 105, 159, 86, 55, 54, 80,
+ 48, 131, 77, 136, 37, 68, 43, 136, 49, 40, 48, 87, 63,
+ 122, 88, 62, 155, 108, 148, 74, 101, 66, 94, 35, 126,
+ 38, 28, 102, 32, 108, 162, 95, 56, 103, 149, 74, 95,
+ 97, 59, 111, 170, 160, 99, 93, 145, 116, 129, 112, 116,
+ 139, 139, 139, 109, 143, 108, 70, 59, 122, 72, 130, 176,
+ 134, 102, 178, 161, 218, 111, 88, 168, 146, 216, 202,
+ 72, 258, 135, 104, 203, 106, 46, 68, 119, 157, 121, 79,
+ 109, 122, 126, 151, 92, 80, 71, 118, 124, 144, 112, 109,
+ 142, 95, 128, 136, 112, 83, 89, 70, 94, 163, 65, 89,
+ 132, 154, 145, 112, 151, 161, 104, 131, 135, 69, 120,
+ 59, 74, 35, 34, 65, 98, 96, 47, 77, 90, 34, 66, 49, 60,
+ 89, 38, 100, 33, 70, 34, 31, 55, 90, 71, 49, 65, 58,
+ 117, 86, 46, 126, 60, 53, 81, 87, 51, 133, 42, 49, 66,
+ 116, 71, 141, 62, 47, 40, 61, 55, 29, 120, 134, 84, 59,
+ 36, 49, 132, 18, 113, 130, 96, 41, 86, 117, 89, 59, 59,
+ 80, 106, 63, 54, 60, 53, 111, 73, 40, 37, 35, 46, 60,
+ 94, 120, 74, 52, 103, 43, 157, 92, 93, 57, 131, 35, 50,
+ 55, 155, 62, 78, 107, 48, 151, 54, 61, 156, 96, 72, 73,
+ 24, 51, 79, 44, 63, 63, 46, 76, 37, 47, 83, 28, 74, 72,
+ 66, 80, 48, 93, 83, 40, 80, 69, 70, 38, 164, 55, 66,
+ 74, 113, 57, 34, 91, 117, 74, 94, 113, 123, 134, 73,
+ 68, 78, 42, 38, 29, 67, 138, 19, 41, 100, 50, 69, 115,
+ 84, 67, 169, 93, 64, 103, 234, 122, 219, 146, 67, 175,
+ 140, 84, 190, 146, 107, 119, 91, 144, 174, 122, 108,
+ 94, 126, 138, 105, 49, 40, 18, 27, 42, 39, 164, 54, 66,
+ 51, 104, 86, 61, 79, 84, 27, 35, 13, 57, 104, 33, 45,
+ 45, 38, 38, 39, 61, 32, 75, 34, 77, 51, 51, 33, 115,
+ 63, 33, 29, 53, 37, 15, 51, 71, 8, 94, 46, 65, 22, 14,
+ 82, 43, 85, 52, 52, 34, 100, 92, 35, 43, 71, 28, 57,
+ 69, 36, 71, 129, 195, 112, 75, 87, 154, 83, 81, 83, 89,
+ 82, 57, 62, 93, 66, 52, 106, 81, 130, 91, 102, 76, 90,
+ 96, 81, 43, 57, 31, 74, 31, 26, 24, 28, 46, 71, 38, 106,
+ 92, 45, 28, 44, 32, 104, 14, 50, 37, 104, 30, 48, 72,
+ 38, 32, 68, 38, 107, 43, 61, 43, 52, 78, 148, 61, 13,
+ 45, 49, 97, 13, 103, 49, 55, 25, 21, 24, 53, 39, 88,
+ 19, 69, 72, 62, 35, 39, 10, 33, 26, 23, 89, 110, 12,
+ 32, 6, 43, 89, 44, 43, 39, 6, 89, 9, 61, 63, 31, 36,
+ 9, 53, 12, 28, 120, 18, 37, 41, 22, 41, 62, 140, 60,
+ 21, 50, 64, 37, 32, 6, 17, 52, 0, 63, 70, 49, 32, 23,
+ 47, 34, 76, 62, 49, 0, 23, 13, 41, 20, 22, 20, 42, 72,
+ 54, 42, 13, 53, 80, 34, 39, 41, 69, 42, 21, 25, 47, 57,
+ 16, 6, 36, 6, 61, 11, 13, 52, 33, 114, 109, 67, 138,
+ 95, 97, 28, 102, 159, 73, 27, 52, 74, 48, 19, 85, 30,
+ 79, 64, 91, 53, 91, 102, 61, 64, 58, 114, 89, 94, 30,
+ 43, 33, 66, 58, 64, 74, 58, 56, 81, 86, 86, 84, 71, 58,
+ 56, 56, 46, 64, 76, 48, 94, 102, 97, 76, 140, 23, 38,
+ 30, 30, 56, 53, 84, 36, 51, 36, 76, 46, 48, 97, 79, 165,
+ 91, 89, 94, 91, 43, 48, 61, 61, 56, 66, 79, 48, 51, 86,
+ 48, 64, 99, 69, 53, 102, 64, 86, 58, 61, 71, 58, 41,
+ 117, 160, 46, 127, 147, 122, 94, 48, 53, 38, 56, 64,
+ 58, 56, 94, 41, 81, 71, 58, 43, 64, 41, 38, 48, 61, 46,
+ 43, 66, 46, 38, 69, 86, 107, 89, 64, 157, 124, 124, 43,
+ 107, 61, 69, 124, 94)), .Names = c("x", "elev", "y"))
diff --git a/R/RadialBasis.R b/R/RadialBasis.R
new file mode 100644
index 0000000..f56ba4a
--- /dev/null
+++ b/R/RadialBasis.R
@@ -0,0 +1,54 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+RadialBasis <- function(d, M, dimension, derivative = 0) {
+ # compute the exponent for a thin-plate spline
+ # based on smoothness and dimension
+ p <- 2 * M - dimension
+ if (p <= 0) {
+        stop("M too small for thin plate spline, need: 2m - d > 0")
+ }
+ if ((p - 1 < 0) & (derivative > 0)) {
+        stop("M is too small for derivatives, need: 2m - d >= 1")
+ }
+ if (derivative == 0) {
+ if (dimension%%2 == 0) {
+ # factor of 2 from the log term
+ ifelse(d > 1e-14, radbas.constant(M, dimension) *
+ (d^p) * log(d), 0)
+ }
+ else {
+ radbas.constant(M, dimension) * (d^p)
+ }
+ }
+ else {
+ ## find derivative
+ if (dimension%%2 == 0) {
+ # factor of 2 from the log term
+ ifelse(d > 1e-14, radbas.constant(M, dimension) *
+ (d^(p - 1)) * (p * log(d) + 1), 0)
+ }
+ else {
+ con <- radbas.constant(M, dimension) * p
+ con * (d^(p - 1))
+ }
+ }
+ ##### should not get here!
+}
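
The RadialBasis function above evaluates the thin-plate spline basis phi(r) = c * r^p * log(r) for even dimension and phi(r) = c * r^p for odd dimension, with p = 2M - d, guarding the log term near r = 0. As a minimal sketch of that logic (in Python, with the fields-internal radbas.constant normalization deliberately omitted, so values differ from the R function by a constant factor):

```python
import math

def tps_radial_basis(r, M, dim, derivative=0):
    """Un-normalized thin-plate spline radial basis mirroring the
    R logic above; p = 2*M - dim."""
    p = 2 * M - dim
    if p <= 0:
        raise ValueError("M too small: need 2*M - dim > 0")
    if derivative > 0 and p < 1:
        raise ValueError("M too small for derivatives: need 2*M - dim >= 1")
    if derivative == 0:
        if dim % 2 == 0:
            # r^p * log(r) -> 0 as r -> 0+, guarded like the R code's 1e-14 cutoff
            return r**p * math.log(r) if r > 1e-14 else 0.0
        return r**p
    # first derivative with respect to r
    if dim % 2 == 0:
        # d/dr [r^p log r] = r^(p-1) * (p log r + 1)
        return r**(p - 1) * (p * math.log(r) + 1) if r > 1e-14 else 0.0
    return p * r**(p - 1)
```

The even-dimension derivative branch matches the R code's `(d^(p - 1)) * (p * log(d) + 1)` term by term.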
diff --git a/R/SUBSCRIPTINGSpatialDesign.R b/R/SUBSCRIPTINGSpatialDesign.R
new file mode 100644
index 0000000..9b60625
--- /dev/null
+++ b/R/SUBSCRIPTINGSpatialDesign.R
@@ -0,0 +1,23 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"[.spatialDesign" <- function(x, ...) {
+ x$design[...]
+}
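
The one-line S3 method above makes `x[i, j]` on a spatialDesign object forward to its `design` matrix component. The same delegation pattern, sketched in Python (a hypothetical analog, not part of fields):

```python
class SpatialDesign:
    """Hypothetical Python analog of "[.spatialDesign": subscripting
    the wrapper object forwards to the underlying design matrix."""

    def __init__(self, design):
        # design: the candidate design points, e.g. a list of rows
        self.design = design

    def __getitem__(self, key):
        # mirrors x$design[...] in the R method
        return self.design[key]

sd = SpatialDesign([[0.0, 0.0], [1.0, 0.5], [0.25, 0.75]])
```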
diff --git a/R/Tps.R b/R/Tps.R
new file mode 100644
index 0000000..81e5a12
--- /dev/null
+++ b/R/Tps.R
@@ -0,0 +1,53 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"Tps" <- function(x, Y, m = NULL, p = NULL, scale.type = "range",
+ lon.lat = FALSE, miles = TRUE, method="GCV", GCV=TRUE, ...) {
+ x <- as.matrix(x)
+ d <- ncol(x)
+ if (is.null(p)) {
+ if (is.null(m)) {
+ m <- max(c(2, ceiling(d/2 + 0.1)))
+ }
+ p <- (2 * m - d)
+ if (p <= 0) {
+            stop("m is too small; you must have 2*m - dimension > 0")
+ }
+ }
+# Tpscall <- match.call()
+ if (!lon.lat) {
+# Tpscall$cov.function <- "Thin plate spline radial basis functions (Rad.cov) "
+ obj<- Krig(x, Y, cov.function = Rad.cov, m = m, scale.type = scale.type,
+ p = p, method=method, GCV = GCV, ...)
+ }
+ else {
+ # a different coding of the radial basis functions to use great circle distance.
+# Tpscall$cov.function <- "Thin plate spline radial basis functions (RadialBasis.cov) using great circle distance "
+ obj<- Krig(x, Y, cov.function = stationary.cov, m = m, scale.type = scale.type,
+ method=method, GCV = GCV,
+ cov.args = list(Covariance = "RadialBasis",
+ M = m, dimension = 2, Distance = "rdist.earth",
+ Dist.args = list(miles = miles)), ...)
+
+ }
+ obj$call<- match.call()
+ class( obj) <- c("Krig", "Tps")
+ return(obj)
+}
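
When `m` is not supplied, Tps above defaults it to `max(c(2, ceiling(d/2 + 0.1)))`, which guarantees p = 2m - d > 0 in every dimension (the 0.1 offset nudges even dimensions up, so d = 4 gives m = 3 rather than 2). A small arithmetic sketch of that rule, in Python:

```python
import math

def default_tps_order(d):
    """Default spline order m used by Tps when m is NULL:
    max(2, ceiling(d/2 + 0.1))."""
    return max(2, math.ceil(d / 2 + 0.1))

# the penalty exponent p = 2*m - d stays positive for every dimension
for d in range(1, 7):
    m = default_tps_order(d)
    assert 2 * m - d > 0
```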
diff --git a/R/US.R b/R/US.R
new file mode 100644
index 0000000..272e75d
--- /dev/null
+++ b/R/US.R
@@ -0,0 +1,24 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"US" <- function(...) {
+ map("state", ...)
+ invisible()
+}
diff --git a/R/add.image.R b/R/add.image.R
new file mode 100644
index 0000000..88a8af9
--- /dev/null
+++ b/R/add.image.R
@@ -0,0 +1,44 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment.
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"add.image" <- function(xpos, ypos, z, adj.x = 0.5,
+ adj.y = 0.5, image.width = 0.15, image.height = NULL, col = tim.colors(256),
+ ...) {
+ m <- nrow(z)
+ n <- ncol(z)
+ ucord <- par()$usr
+ pin <- par()$pin
+ # if height is missing scale according to width assuming pixels are
+ # square.
+ if (is.null(image.height)) {
+ image.height <- (n/m) * image.width
+ }
+ # find grid spacing in user coordinates.
+ dy <- image.width * (ucord[4] - ucord[3])
+ dx <- image.height * pin[2] * (ucord[2] - ucord[1])/(pin[1])
+ #
+ # dx and dy should have the correct ratio given different scales
+ # and also different aspects to the plot window
+ #
+ # find grid to put image in right place.
+ xs <- seq(0, dx, , m + 1) + xpos - adj.x * dx
+ ys <- seq(0, dy, , n + 1) + ypos - adj.y * dy
+ image(xs, ys, z, add = TRUE, col = col, ...)
+}
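The aspect-ratio arithmetic in add.image converts fractional sizes into user-coordinate spans using par()$usr and par()$pin. A small Python sketch of the same formula (names are illustrative; usr is (x1, x2, y1, y2) in user units and pin is (width, height) in inches, as in R):

```python
def image_extent(usr, pin, image_width, image_height):
    """User-coordinate spans for an inset image: dy is a fraction of the
    y-range, dx is corrected by the plot region's inch aspect ratio so
    the on-screen width/height ratio comes out right."""
    dy = image_width * (usr[3] - usr[2])
    dx = image_height * pin[1] * (usr[1] - usr[0]) / pin[0]
    return dx, dy
```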
diff --git a/R/arrow.plot.R b/R/arrow.plot.R
new file mode 100644
index 0000000..17eb7f2
--- /dev/null
+++ b/R/arrow.plot.R
@@ -0,0 +1,56 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment.
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"arrow.plot" <- function(a1, a2, u = NA, v = NA, arrow.ex = 0.05,
+ xpd = TRUE, true.angle = FALSE, arrowfun = arrows, ...) {
+ if (is.matrix(a1)) {
+ x <- a1[, 1]
+ y <- a1[, 2]
+ }
+ else {
+ x <- a1
+ y <- a2
+ }
+ if (is.matrix(a2)) {
+ u <- a2[, 1]
+ v <- a2[, 2]
+ }
+ ucord <- par()$usr
+ arrow.ex <- arrow.ex * min(ucord[2] - ucord[1], ucord[4] -
+ ucord[3])
+ if (true.angle) {
+ pin <- par()$pin
+ r1 <- (ucord[2] - ucord[1])/(pin[1])
+ r2 <- (ucord[4] - ucord[3])/(pin[2])
+ }
+ else {
+ r1 <- r2 <- 1
+ }
+ u <- u * r1
+ v <- v * r2
+ maxr <- max(sqrt(u^2 + v^2))
+ u <- (arrow.ex * u)/maxr
+ v <- (arrow.ex * v)/maxr
+ invisible()
+ old.xpd <- par()$xpd
+ par(xpd = xpd)
+ arrowfun(x, y, x + u, y + v, ...)
+ par(xpd = old.xpd)
+}
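The scaling step in arrow.plot normalizes all (u, v) components by the longest arrow and rescales to a fraction of the smaller plot range. A hedged Python sketch of just that step (illustrative, ignoring the true.angle aspect correction):

```python
def scale_arrows(u, v, usr, arrow_ex=0.05):
    """Rescale vector components so the longest arrow spans arrow_ex of
    the smaller user-coordinate range, as arrow.plot does before drawing."""
    span = min(usr[1] - usr[0], usr[3] - usr[2])
    ex = arrow_ex * span
    maxr = max((ui ** 2 + vi ** 2) ** 0.5 for ui, vi in zip(u, v))
    return ([ex * ui / maxr for ui in u],
            [ex * vi / maxr for vi in v])
```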
diff --git a/R/as.image.R b/R/as.image.R
new file mode 100644
index 0000000..7057ea2
--- /dev/null
+++ b/R/as.image.R
@@ -0,0 +1,84 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment.
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"as.image" <- function(Z, ind = NULL, grid = NULL,
+ x = NULL, weights = rep(1, length(Z)), na.rm = FALSE,
+ nx = 64, ny = 64, boundary.grid = FALSE, nrow = NULL, ncol = NULL,
+ FUN=NULL) {
+ # NOTE that throughout ind is a two column integer matrix of
+ # discretized locations in the image matrix.
+ # Thanks to J. Rougier for fixing bugs in this function.
+ #
+ # coerce Z to a vector
+ Z <- c(Z)
+ if( !is.null(ind)){
+ x<- ind
+ }
+ # set nx and ny if nrow ncol are passed
+ if( !is.null(nrow)&!is.null(ncol)){
+ nx<- nrow
+ ny<- ncol
+ }
+ #
+ # check for x or weights having missing values
+ # we do not like these ...
+ if (any(is.na(weights)) | any(is.na(c(x)))) {
+ stop("missing values in weights or x")
+ }
+ # discretize locations to grid boxes
+ # this function will also create a default grid based on range of
+ # locations if grid is NULL
+ #
+ temp <- discretize.image(x, m = nx, n = ny, grid = grid,
+ boundary.grid = boundary.grid)
+ grid <- temp$grid
+ # index is a two component list that indexes the x and y grid points.
+ # points outside of grid are assigned as NA
+ #
+ # empty image matrices to hold weights and weighted means
+ w<- z <- matrix( NA, nrow=temp$m, ncol=temp$n)
+ # find stats
+ tempw<- tapply( weights, temp$index, sum, na.rm=na.rm)
+ if( is.null(FUN)){
+# usual weighted means case:
+ tempz<- tapply( Z*weights, temp$index,sum, na.rm=na.rm )
+ tempz<- tempz/ tempw
+ }
+ else{
+# just apply FUN to values in the grid box -- no weighting!
+ tempz<- tapply( Z, temp$index, FUN )
+ }
+ # these are the indices that are represented by the locations
+ # they may not include the entire set ( 1:nx and 1:ny)
+ # so define what they do have.
+
+ # insert the tabled values into the right rows and columns.
+ # ix and iy are just the range of indexes for the grid, e.g. ix= 1:20 and iy= 1:30 for a
+ # 20X30 grid.
+ z[ temp$ix, temp$iy] <- tempz
+ w[ temp$ix, temp$iy] <- tempw
+ # save call
+ # xd created because it is a pain to do otherwise and handy to have
+ # these are the discretize locations with actual values
+ call <- match.call()
+ list(x = grid$x, y = grid$y, z = z, call = call, ind = cbind(temp$index[[1]], temp$index[[2]]) ,
+ weights = w, xd = cbind(grid$x[temp$index[[1]]], grid$y[temp$index[[2]]] ),
+ FUN = FUN )
+}
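The core of as.image is the tapply step: accumulate weights and weighted sums per grid cell, then divide. A minimal Python analogue of that bookkeeping (illustrative; cells stand in for the discretized (row, column) indices):

```python
from collections import defaultdict

def cell_weighted_means(values, cells, weights=None):
    """Weighted mean of the values falling in each grid cell -- the same
    accumulation as.image() does with tapply over discretized locations."""
    if weights is None:
        weights = [1.0] * len(values)
    wsum = defaultdict(float)
    zsum = defaultdict(float)
    for z, c, w in zip(values, cells, weights):
        wsum[c] += w
        zsum[c] += z * w
    return {c: zsum[c] / wsum[c] for c in wsum}
```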
diff --git a/R/as.surface.R b/R/as.surface.R
new file mode 100644
index 0000000..eee7370
--- /dev/null
+++ b/R/as.surface.R
@@ -0,0 +1,40 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment.
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"as.surface" <- function(obj, z, order.variables = "xy") {
+ #
+ if (is.list(obj)) {
+ grid.list <- obj
+ }
+ if (is.matrix(obj)) {
+ grid.list <- attr(obj, "grid.list")
+ }
+ #
+ # OK now have a grid, parse this to figure
+ # nx and ny the x and y sequences and extract names
+ #
+ hold <- parse.grid.list(grid.list, order.variables = order.variables)
+ #
+ # note that coercing z to a matrix is just reformatting
+ # using the standard ordering.
+ #
+ # output list is all the grid stuff and the matrix z.
+ c(hold, list(z = matrix(z, ncol = hold$ny, nrow = hold$nx)))
+}
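The final reshape in as.surface relies on R's column-major (Fortran-order) matrix filling. A tiny Python sketch of that reordering, with illustrative names:

```python
def as_surface_z(x, y, z):
    """Reshape the flat vector z into an nx x ny grid, reading z in
    column-major order exactly as R's matrix(z, nrow = nx, ncol = ny) does."""
    nx, ny = len(x), len(y)
    return [[z[i + j * nx] for j in range(ny)] for i in range(nx)]
```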
diff --git a/R/bisection.search.R b/R/bisection.search.R
new file mode 100644
index 0000000..dc0a5c3
--- /dev/null
+++ b/R/bisection.search.R
@@ -0,0 +1,47 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment.
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"bisection.search" <- function(x1, x2, f, tol = 1e-07,
+ niter = 25, f.extra = NA, upcross.level = 0) {
+ f1 <- f(x1, f.extra) - upcross.level
+ f2 <- f(x2, f.extra) - upcross.level
+ if (f1 > f2)
+ stop(" f1 must be < f2 ")
+ iter <- niter
+ for (k in 1:niter) {
+ xm <- (x1 + x2)/2
+ fm <- f(xm, f.extra) - upcross.level
+ if (fm < 0) {
+ x1 <- xm
+ f1 <- fm
+ }
+ else {
+ x2 <- xm
+ f2 <- fm
+ }
+ if (abs(fm) < tol) {
+ iter <- k
+ break
+ }
+ }
+ xm <- (x1 + x2)/2
+ fm <- f(xm, f.extra) - upcross.level
+ list(x = xm, fm = fm, iter = iter)
+}
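bisection.search assumes f upcrosses the target level between x1 and x2 and halves the bracket until the function value is within tol. A compact Python translation of the same loop (illustrative, dropping the f.extra plumbing):

```python
def bisection_search(x1, x2, f, tol=1e-7, niter=25, level=0.0):
    """Bisection for the upcrossing f(x) = level, assuming
    f(x1) < level < f(x2), mirroring fields' bisection.search."""
    f1, f2 = f(x1) - level, f(x2) - level
    if f1 > f2:
        raise ValueError("need f(x1) < f(x2)")
    for _ in range(niter):
        xm = (x1 + x2) / 2
        fm = f(xm) - level
        if fm < 0:
            x1 = xm          # root is in the upper half
        else:
            x2 = xm          # root is in the lower half
        if abs(fm) < tol:
            break
    return (x1 + x2) / 2
```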
diff --git a/R/bplot.family.R b/R/bplot.family.R
new file mode 100644
index 0000000..d7c0d54
--- /dev/null
+++ b/R/bplot.family.R
@@ -0,0 +1,58 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment.
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+
+
+"bplot.xy" <- function(x, y, N = 10, breaks = pretty(x,
+ N, eps.correct = 1), plot = TRUE, ...) {
+ NBIN <- length(breaks) - 1
+ centers <- (breaks[1:NBIN] + breaks[2:(NBIN + 1)])/2
+ obj <- split(y, cut(x, breaks))
+ if (length(obj) == 0) {
+ stop("No points within breaks")
+ }
+ if (plot) {
+ bplot(obj, at = centers, show.names = FALSE, axes = TRUE,
+ ...)
+ axis(1)
+ }
+ else {
+ return(list(centers = centers, breaks = breaks, boxplot.obj = boxplot(obj,
+ plot = FALSE)))
+ }
+}
+
+bplot <- function(x, by, pos = NULL, at = pos, add = FALSE,
+ boxwex = 0.8, xlim = NULL, ...) {
+ if (!missing(by)) {
+ x <- split(c(x), as.factor(by))
+ }
+ if (!add & !is.null(at) & is.null(xlim)) {
+ xlim <- range(at)
+ }
+ if (!is.null(at)) {
+ boxwex <- boxwex * min(diff(sort(at)))
+ }
+ boxplot(x, at = at, xlim = xlim, add = add, boxwex = boxwex,
+ ...)
+
+}
+
+
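bplot.xy's grouping step is just cut() followed by split(): each y value is assigned to the break interval containing its x, and bin centers are the interval midpoints. A Python sketch of that grouping (illustrative; intervals are right-closed, matching R's cut default):

```python
def bin_xy(x, y, breaks):
    """Group y by which (a, b] break interval x falls in and report the
    bin centers -- the grouping bplot.xy feeds to boxplot."""
    centers = [(a + b) / 2 for a, b in zip(breaks[:-1], breaks[1:])]
    groups = [[] for _ in centers]
    for xi, yi in zip(x, y):
        for k in range(len(centers)):
            if breaks[k] < xi <= breaks[k + 1]:
                groups[k].append(yi)
                break
    return centers, groups
```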
diff --git a/R/cat.matrix.R b/R/cat.matrix.R
new file mode 100644
index 0000000..e7f8824
--- /dev/null
+++ b/R/cat.matrix.R
@@ -0,0 +1,32 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment.
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"cat.matrix" <- function(mat, digits = 8) {
+ nc <- ncol(mat)
+ temp <- matrix(match(c(signif(mat, digits)), unique(c(signif(mat,
+ digits)))), ncol = nc)
+ temp2 <- format(temp[, 1])
+ if (nc > 1) {
+ for (k in 2:nc) {
+ temp2 <- paste(temp2, temp[, k], sep = "X")
+ }
+ }
+ match(temp2, unique(temp2))
+}
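cat.matrix assigns each matrix row an integer id so that numerically equal rows (after rounding) share an id. A Python sketch of the same idea; note it rounds to decimal places where the R code uses significant digits, so it is an approximation of the behavior, not a faithful port:

```python
def cat_matrix(rows, digits=8):
    """Integer ids for matrix rows: equal rows (after rounding) share an
    id, ids are assigned in first-appearance order starting at 1."""
    seen = {}
    out = []
    for r in rows:
        key = tuple(round(x, digits) for x in r)
        out.append(seen.setdefault(key, len(seen) + 1))
    return out
```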
diff --git a/R/cat.to.list.R b/R/cat.to.list.R
new file mode 100644
index 0000000..176b3f2
--- /dev/null
+++ b/R/cat.to.list.R
@@ -0,0 +1,32 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment.
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"cat.to.list" <- function(x, a) {
+ a <- as.character(a)
+ label <- unique(a)
+ out <- as.list(1:length(label))
+ names(out) <- label
+ for (k in 1:length(label)) {
+ out[[k]] <- x[label[k] == a]
+ if (length(out[[k]]) == 0)
+ out[[k]] <- NA
+ }
+ out
+}
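cat.to.list splits a vector into groups keyed by a parallel vector of labels. The Python equivalent is a one-pass dictionary build (illustrative sketch; unlike the R version it simply omits labels with no values instead of storing NA):

```python
def cat_to_list(x, a):
    """Split x into a dict keyed by the labels in a, groups kept in
    first-appearance order of the labels."""
    out = {}
    for xi, ai in zip(x, a):
        out.setdefault(str(ai), []).append(xi)
    return out
```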
diff --git a/R/ceiling2.R b/R/ceiling2.R
new file mode 100644
index 0000000..5df1260
--- /dev/null
+++ b/R/ceiling2.R
@@ -0,0 +1,29 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment.
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"ceiling2" <- function(m) {
+ if (m < 1)
+ return(NA)
+ M <- 1
+ while (M < m) {
+ M <- M * 2
+ }
+ M
+}
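ceiling2 finds the smallest power of two at or above m (used elsewhere in fields for dyadic grid sizes). The same doubling loop in Python, returning None where the R code returns NA:

```python
def ceiling2(m):
    """Smallest power of two >= m; None for m < 1 (R's NA)."""
    if m < 1:
        return None
    M = 1
    while M < m:
        M *= 2
    return M
```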
diff --git a/R/coef.Krig.R b/R/coef.Krig.R
new file mode 100644
index 0000000..5364626
--- /dev/null
+++ b/R/coef.Krig.R
@@ -0,0 +1,23 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment.
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+coef.Krig <- function(object, ...) {
+ Krig.coef(object, ...)$d
+}
diff --git a/R/color.scale.R b/R/color.scale.R
new file mode 100644
index 0000000..6e6078b
--- /dev/null
+++ b/R/color.scale.R
@@ -0,0 +1,39 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment.
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+color.scale <- function(z, col = tim.colors(256),
+ zlim = NULL, transparent.color = "white", eps = 1e-08) {
+ #
+ # converts real values to a color scale of NC values.
+ # role of eps is to prevent values exactly at the end of the range from being
+ # missed
+ if (is.null(zlim)) {
+ zlim <- range(z, na.rm = TRUE)
+ }
+ z[(z < zlim[1]) | (z > zlim[2])] <- NA
+ NC <- length(col)
+ breaks <- seq(zlim[1] * (1 - eps), zlim[2] * (1 + eps), ,
+ NC + 1)
+ # the magic of R ...
+ icolor <- as.integer(cut(c(z), breaks))
+ # returned values is a vector of character hex strings encoding the colors.
+ ifelse(is.na(icolor), transparent.color, col[icolor])
+}
+
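color.scale maps each value to one of NC equally spaced bins over zlim, padding the break endpoints by eps so values exactly at the limits still land in a bin. A Python sketch of the index computation (illustrative; returns None where the R code substitutes the transparent color):

```python
def color_index(z, zlim, ncol, eps=1e-8):
    """0-based index of the color bin for z over ncol equal bins spanning
    zlim, with eps padding at the endpoints; None outside zlim."""
    if z < zlim[0] or z > zlim[1]:
        return None
    lo = zlim[0] * (1 - eps)      # same endpoint nudge as the R breaks
    hi = zlim[1] * (1 + eps)
    step = (hi - lo) / ncol
    return min(int((z - lo) / step), ncol - 1)
```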
diff --git a/R/colorbar.plot.R b/R/colorbar.plot.R
new file mode 100644
index 0000000..e103389
--- /dev/null
+++ b/R/colorbar.plot.R
@@ -0,0 +1,76 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment.
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"colorbar.plot" <- function(x, y, strip, strip.width = 0.1,
+ strip.length = 4 * strip.width, zrange = NULL, adj.x = 0.5,
+ adj.y = 0.5, col = tim.colors(256), horizontal = TRUE, ...) {
+ # coerce to be one column matrix if it is a vector
+ if (!is.matrix(strip)) {
+ strip <- matrix(c(strip), ncol = 1)
+ }
+ m <- nrow(strip)
+ n <- ncol(strip)
+ # find common range across strips if not specified
+ if (is.null(zrange)) {
+ zrange <- matrix(range(c(strip), na.rm = TRUE), nrow = n,
+ ncol = 2, byrow = TRUE)
+ }
+ # see help( par) for background on graphical settings
+ ucord <- par()$usr
+ pin <- par()$pin
+ if (horizontal) {
+ dy <- strip.width * (ucord[4] - ucord[3])
+ dx <- strip.length * pin[2] * (ucord[2] - ucord[1])/(pin[1])
+ }
+ else {
+ dx <- strip.width * (ucord[2] - ucord[1])
+ dy <- strip.length * pin[1] * (ucord[4] - ucord[3])/(pin[2])
+ }
+ #
+ # dx and dy should have the correct ratio given different scales
+ # and also different aspects to the plot window
+ #
+ n <- ncol(strip)
+ m <- nrow(strip)
+ # create grids in x and y for strip(s) based on the user's
+ # coordinates of the plot and the positioning argument (adj)
+ if (horizontal) {
+ xs <- seq(0, dx, , m + 1) + x - adj.x * dx
+ ys <- seq(0, dy, , n + 1) + y - adj.y * dy
+ }
+ else {
+ xs <- seq(0, dx, , n + 1) + x - adj.x * dx
+ ys <- seq(0, dy, , m + 1) + y - adj.y * dy
+ }
+ #
+ # plot image row by row to allow for different zlim's
+ # see add.image for a fields function that just plots the whole image at
+ # once.
+ for (k in 1:n) {
+ if (horizontal) {
+ image(xs, c(ys[k], ys[k + 1]), cbind(strip[, k]),
+ zlim = zrange[k, ], add = TRUE, col = col, ...)
+ }
+ else {
+ image(c(xs[k], xs[k + 1]), ys, rbind(strip[, k]),
+ zlim = zrange[k, ], add = TRUE, col = col, ...)
+ }
+ }
+}
diff --git a/R/compactToMat.R b/R/compactToMat.R
new file mode 100644
index 0000000..617b142
--- /dev/null
+++ b/R/compactToMat.R
@@ -0,0 +1,42 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+compactToMat = function(compactMat, diagVal=0, lower.tri=FALSE, upper.tri=TRUE) {
+ #compactMat: a symmetric matrix stored as a vector containing elements for the upper triangle
+ #portion of the true matrix
+ #diagVal: a number to put in the diagonal entries of the output matrix
+ #lower.tri: if TRUE, fills in lower triangular portion of the matrix
+ #upper.tri: if TRUE, fills in upper triangular portion of the matrix
+
+ if(inherits(compactMat, 'dist')) {
+ n <- attr(compactMat, "Size")
+ } else { # (n^2 - n)/2 = length(compactMat)
+ stop("input matrix is not compact or is not of class \"dist\"")
+
+ #or if class is not dist but input matrix is still compact, use:
+ #n = (1 + sqrt(1 + 8*length(compactMat)))/2
+ }
+
+ return(.Call("compactToMatC", as.double(compactMat),
+ as.integer(length(compactMat)),
+ as.integer(n), as.double(diagVal),
+ as.integer(lower.tri), as.integer(upper.tri),
+ PACKAGE="fields"))
+}
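compactToMat expands the condensed vector of a "dist" object into a full symmetric matrix; the size n is recoverable from the vector length L since L = n(n-1)/2. A pure-Python version of the expansion (illustrative, standing in for the C routine; it uses dist's pair ordering, which equals row-major order over the upper triangle):

```python
def compact_to_mat(compact, diag_val=0.0):
    """Expand a condensed pairwise-distance vector into a full symmetric
    n x n matrix, solving n(n-1)/2 = len(compact) for n."""
    L = len(compact)
    n = int((1 + (1 + 8 * L) ** 0.5) / 2)
    mat = [[diag_val] * n for _ in range(n)]
    k = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            mat[i][j] = mat[j][i] = compact[k]
            k += 1
    return mat
```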
diff --git a/R/cover.design.R b/R/cover.design.R
new file mode 100644
index 0000000..3bdd4f4
--- /dev/null
+++ b/R/cover.design.R
@@ -0,0 +1,175 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment.
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"cover.design" <- function(R, nd, nruns = 1, nn = TRUE,
+ num.nn = 100, fixed = NULL, scale.type = "unscaled", R.center,
+ R.scale, P = -20, Q = 20, start = NULL, DIST = NULL, return.grid = TRUE,
+ return.transform = TRUE, max.loop = 20, verbose = FALSE) {
+ if (!is.null(start) && is.matrix(start)) {
+ if (any(duplicated(start)))
+ stop("Error: start must not have duplicate rows")
+ test <- duplicated(start, R)
+ if (sum(test) < nrow(start))
+ stop("Error: Starting design must be a subset of R")
+ }
+ R.orig <- R
+ R <- as.matrix(R)
+ # some checks on inputs
+ if (nd >= nrow(R)) {
+ stop(" number of design points >= the number of candidates")
+ }
+ if (any(duplicated.array(R)))
+ stop("Error: R must not have duplicate rows")
+ if (num.nn >= (nrow(R) - nd)) {
+ nn <- FALSE
+ warning("Number of nearest neighbors (nn) reduced to the actual number of candidates")
+ }
+ if (is.null(DIST))
+ DIST <- function(x, y) {
+ rdist(x, y)
+ }
+ id <- 1:nrow(R)
+ if (!is.null(start))
+ nd <- length(start)
+ if (is.null(fixed))
+ n <- nd
+ else {
+ n <- nd + length(fixed)
+ }
+ R <- transformx(R, scale.type, R.center, R.scale)
+ transform <- attributes(R)
+ saved.crit <- rep(NA, nruns)
+ saved.designs <- matrix(NA, nrow = nruns, ncol = n)
+ saved.hist <- list(1:nruns)
+ if (verbose) {
+ cat(dim(R), fill = TRUE)
+ }
+ #
+ # do nruns with initial design drawn at random
+ #
+ # in this code Dset are the indices of the design
+ # Cset are the complement set of indices indicating the candidate points
+ # not used in the design
+ #
+ for (RUNS in 1:nruns) {
+ if (is.null(start)) {
+ if (!is.null(fixed)) {
+ Dset <- sample((1:nrow(R))[-fixed], nd)
+ Dset <- c(Dset, fixed)
+ }
+ else Dset <- sample(1:nrow(R), nd)
+ }
+ else {
+ if (length(start) > nd)
+ stop("Error: the start matrix must have nd rows")
+ Dset <- start
+ if (!is.null(fixed))
+ Dset <- c(Dset, fixed)
+ }
+ design.original <- R.orig[Dset, ]
+ Dset.original <- Dset
+ Cset <- id[-Dset]
+ dist.mat <- DIST(R[Cset, ], R[Dset, ])
+ rs <- dist.mat^P %*% rep(1, n)
+ crit.i <- crit.original <- sum(rs^(Q/P))^(1/Q)
+ CRIT <- rep(NA, length(Cset))
+ CRIT.temp <- rep(NA, length(Cset))
+ hist <- matrix(c(0, 0, crit.i), ncol = 3, nrow = 1)
+ loop.counter <- 1
+ repeat {
+ for (i in 1:nd) {
+ # loop over current design points looking for a productive swap
+ Dset.i <- matrix(R[Dset[i], ], nrow = 1)
+ if (verbose) {
+ cat("design point", i, Dset.i, fill = TRUE)
+ }
+ partial.newrow <- sum(DIST(Dset.i, R[Dset[-i],
+ ])^P)
+ rs.without.i <- rs - c(DIST(Dset.i, R[-Dset,
+ ])^P)
+ if (nn)
+ vec <- (1:length(Cset))[order(dist.mat[, i])[1:num.nn]]
+ else vec <- 1:length(Cset)
+ for (j in vec) {
+ # loop over possible candidates to swap with design point
+ Cset.j <- matrix(R[Cset[j], ], nrow = 1)
+ newcol <- c(DIST(Cset.j, R[c(-Dset, -Cset[j]),
+ ])^P)
+ CRIT[j] <- (sum((rs.without.i[-j] + newcol)^(Q/P)) +
+ (DIST(Cset.j, Dset.i)^P + partial.newrow)^(Q/P))^(1/Q)
+ if (verbose) {
+ cat(j, " ")
+ }
+ }
+ best <- min(CRIT[!is.na(CRIT)])
+ best.spot <- Cset[CRIT == best][!is.na(Cset[CRIT ==
+ best])][1]
+ if (verbose) {
+ cat(i, "best found ", best, " at", best.spot,
+ fill = TRUE)
+ }
+ crit.old <- crit.i
+ # check if the best swap is really better than what you already have.
+ if (best < crit.i) {
+ if (verbose) {
+ cat(i, "best swapped ", fill = TRUE)
+ }
+ crit.i <- best
+ hist <- rbind(hist, c(Dset[i], best.spot, crit.i))
+ Dset[i] <- best.spot
+ Cset <- id[-Dset]
+ dist.mat <- DIST(R[Cset, ], R[Dset, ])
+ rs <- (dist.mat^P) %*% rep(1, n)
+ }
+ }
+ if ((crit.i == crit.old) | (loop.counter >= max.loop))
+ break
+ loop.counter <- loop.counter + 1
+ }
+ saved.crit[RUNS] <- crit.i
+ saved.designs[RUNS, ] <- Dset
+ saved.hist[[RUNS]] <- hist
+ }
+ ret <- (1:nruns)[saved.crit == min(saved.crit)]
+ if (length(ret) > 1) {
+ print("Greater than 1 optimal design; keeping first one......")
+ ret <- ret[1]
+ }
+ crit.i <- saved.crit[ret]
+ hist <- saved.hist[[ret]]
+ nhist <- nrow(hist)
+ nloop <- nruns
+ hist <- cbind(c(0:(nrow(hist) - 1)), hist)
+ dimnames(hist) <- list(NULL, c("step", "swap.out", "swap.in",
+ "new.crit"))
+ out.des <- R[saved.designs[ret, ], ]
+ out.des <- unscale(out.des, transform$x.center, transform$x.scale)
+ out <- list(design = out.des, call = match.call(), best.id = c(saved.designs[ret,
+ ]), fixed = fixed, opt.crit = crit.i, start.design = design.original,
+ start.crit = crit.original, history = hist, other.designs = saved.designs,
+ other.crit = saved.crit, DIST = DIST, nn = nn, num.nn = num.nn,
+ P = P, Q = Q, nhist = nhist - 1, nloop = (nloop - 1)/n)
+ if (return.grid)
+ out$grid <- R.orig
+ if (return.transform)
+ out$transform <- transform
+ class(out) <- "spatialDesign"
+ out
+}
diff --git a/R/cubic.cov.R b/R/cubic.cov.R
new file mode 100644
index 0000000..4b817e3
--- /dev/null
+++ b/R/cubic.cov.R
@@ -0,0 +1,50 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+cubic.cov <- function(x1, x2=NULL, theta = 1, C = NA, marginal = FALSE) {
+ # see the comments in Exp.simple.cov for more details about the
+ # required parts of this covariance function
+
+ if (is.matrix(x1)) {
+ if (ncol(x1) != 1) {
+ stop("x1 is a matrix but this is a 1-d covariance: x1 must have a single column")
+ }
+ }
+ if( is.null( x2) ){
+ x2<- x1
+ }
+ # local function
+ fun.temp <- function(u, v) {
+ 1 + ifelse(u < v, v * (u^2)/2 - (u^3)/6, u * (v^2)/2 -
+ (v^3)/6)
+ }
+ if (is.na(C[1]) & !marginal) {
+ # cross covariance matrix
+ return(outer(c(x1), c(x2), FUN = fun.temp))
+ }
+ if (!is.na(C[1])) {
+ # product of cross covariance with a vector
+ return(outer(c(x1), c(x2), FUN = fun.temp) %*% C)
+ }
+ if (marginal) {
+ # marginal variance
+ return((x1^3)/3)
+ }
+}
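A minimal sketch of the three calling modes this covariance supports (cross covariance matrix, covariance times a vector C, and marginal variance), assuming the fields package that ships this file is installed and attached:

```r
library(fields)

x1 <- c(0.2, 0.5, 0.9)  # 1-d locations
x2 <- c(0.1, 0.7)

# mode 1: cross covariance matrix (3 x 2)
K <- cubic.cov(x1, x2)

# mode 2: covariance matrix multiplied by a vector C
C <- c(1, -1)
v <- cubic.cov(x1, x2, C = C)

# mode 3: marginal variance at the x1 locations, (x1^3)/3
m <- cubic.cov(x1, marginal = TRUE)
```

The same three-mode convention is used by the other covariance functions in the package (compare Exp.simple.cov below).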
diff --git a/R/describe.R b/R/describe.R
new file mode 100644
index 0000000..f62718c
--- /dev/null
+++ b/R/describe.R
@@ -0,0 +1,41 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"describe" <- function(x) {
+ lab <- c("N", "mean", "Std.Dev.", "min", "Q1", "median",
+ "Q3", "max", "missing values")
+ if (missing(x)) {
+ return(lab)
+ }
+ temp <- rep(0, length(lab))
+ xt <- x[!is.na(x)]
+ n <- length(xt)
+ if (!is.numeric(xt) || all(is.na(x))) {
+ return(c(n, rep(NA, length(lab) - 2), length(x) - length(xt)))
+ }
+ if (n == 1) {
+ return(c(n, xt[1], NA, rep(xt[1], 5), length(x) - length(xt)))
+ }
+ else {
+ return(c(n, mean(xt), sqrt(var(xt)), min(xt), quantile(xt,
+ c(0.25, 0.5, 0.75)), max(xt), length(x) - length(xt)))
+ }
+}
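For illustration, a small sketch (assuming the fields package is attached); calling describe() with no argument returns the labels for the summary vector:

```r
library(fields)

x <- c(1, 2, 3, 4, NA)
stats <- describe(x)
names(stats) <- describe()
# stats now holds N, mean, Std.Dev., min, quartiles, max, and the NA count
```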
diff --git a/R/discretize.image.R b/R/discretize.image.R
new file mode 100644
index 0000000..deb53df
--- /dev/null
+++ b/R/discretize.image.R
@@ -0,0 +1,72 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"discretize.image" <- function(x, m = 64, n = 64,
+ grid = NULL, expand = c(1, 1), boundary.grid = FALSE, na.rm=TRUE) {
+ #
+ # set up discretized grid based on x
+ #
+ out <- list()
+
+ if (length(expand) == 1)
+ expand <- rep(expand, 2)
+ if (is.null(grid)) {
+ grid <- list()
+ xr <- range(x[, 1], na.rm = na.rm)
+ deltemp <- (xr[2] - xr[1]) * (expand[1] - 1) * 0.5
+ grid$x <- seq(xr[1] - deltemp, xr[2] + deltemp, , m)
+ yr <- range(x[, 2], na.rm = na.rm)
+ deltemp <- (yr[2] - yr[1]) * (expand[2] - 1) * 0.5
+ grid$y <- seq(yr[1] - deltemp, yr[2] + deltemp, , n)
+ }
+ # find cut points for boundaries assuming midpoints
+ if (!boundary.grid) {
+ xcut <- fields.convert.grid(grid$x)
+ ycut <- fields.convert.grid(grid$y)
+ }
+ else {
+ # cut points given boundaries
+ xcut <- grid$x
+ ycut <- grid$y
+ }
+ # locate bin ids for each location
+ index <- list( as.numeric(cut(x[, 1], xcut)), as.numeric(cut(x[, 2], ycut)))
+ m <- length(xcut) - 1
+ n <- length(ycut) - 1
+
+
+ tempHist<- table( index[[1]], index[[2]])
+
+ ix<- as.numeric(dimnames( tempHist)[[1]])
+ iy<- as.numeric(dimnames( tempHist)[[2]])
+# 2 d histogram of locations
+ hist<- matrix( 0, m,n)
+ hist[ix,iy] <- tempHist
+#
+ if (!boundary.grid) {
+ # compute discretized locations
+ loc <- cbind( grid$x[ index[[1]] ], grid$y[ index[[2]] ] )
+ }
+ else {
+ # discretized locations are not defined for a boundary grid
+ loc <- NA
+ }
+ return( list( m=m,n=n, grid=grid, index=index, ix= ix, iy=iy, hist=hist, loc=loc) )
+}
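A quick sketch of the midpoint-grid case (boundary.grid = FALSE), assuming the fields package is attached; every location is assigned to one bin of the m x n histogram:

```r
library(fields)

set.seed(1)
x <- cbind(runif(100), runif(100))
d <- discretize.image(x, m = 8, n = 8)

# d$hist is the 8 x 8 count matrix and d$loc the discretized locations
total <- sum(d$hist)
```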
diff --git a/R/double.exp.R b/R/double.exp.R
new file mode 100644
index 0000000..b5840d5
--- /dev/null
+++ b/R/double.exp.R
@@ -0,0 +1,24 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+double.exp <- function(x) {
+ # double exponential weight function
+ 0.5 * exp(-abs(x))
+}
diff --git a/R/drape.color.R b/R/drape.color.R
new file mode 100644
index 0000000..0966980
--- /dev/null
+++ b/R/drape.color.R
@@ -0,0 +1,67 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"drape.color" <- function(z, col = tim.colors(64),
+ zlim = NULL, breaks, transparent.color = "white", midpoint = TRUE,
+ eps = 1e-08) {
+ # range if zlim not supplied
+ if (is.null(zlim)) {
+ zlim <- range(c(z), na.rm = TRUE)
+ }
+ # set any values outside of range to NA ( i.e. the transparent.color)
+ z[(z < zlim[1]) | (z > zlim[2])] <- NA
+ NC <- length(col)
+ M <- nrow(z)
+ N <- ncol(z)
+ # if midpoint is TRUE find average z value for a facet and
+ # overwrite z with matrix where row and column are one less
+ # (reflecting that these are box centers not corners)
+ if (midpoint) {
+ z <- (z[1:(M - 1), 1:(N - 1)] + z[2:M, 1:(N - 1)] + z[1:(M -
+ 1), 2:N] + z[2:M, 2:N])/4
+ M <- M - 1
+ N <- N - 1
+ }
+ if (missing(breaks)) {
+ breaks <- NA
+ }
+ if (is.na(breaks[1])) {
+ # spacing for grid to assign colors
+ # +-eps included so that if z== zlim[1 or 2] it gets a color
+ # if statement is for when the limit is exactly zero
+ # thanks to Rosa Trancoso for finding this bug
+ zrange <- zlim[2] - zlim[1]
+ lower <- ifelse(abs(zlim[1]) != 0, (zlim[1] - abs(zlim[1]) *
+ eps), -eps * zrange)
+ upper <- ifelse(abs(zlim[2]) != 0, (zlim[2] + abs(zlim[2]) *
+ eps), eps * zrange)
+ breaks <- seq(lower, upper, , NC + 1)
+ }
+ if (length(breaks) != NC + 1) {
+ stop("must have one more break than colours")
+ }
+ # the magic of R ...
+ icolor <- cut(c(z), breaks)@.Data
+ # returned values is a vector of character hex strings encoding the colors.
+ hold <- ifelse(is.na(icolor), transparent.color, col[icolor])
+ # points not assigned a bin from breaks get an NA
+ # NA are converted to transparent color
+ list(color.index = matrix(hold, nrow = M, ncol = N), breaks = breaks)
+}
diff --git a/R/drape.plot.R b/R/drape.plot.R
new file mode 100644
index 0000000..55eb34b
--- /dev/null
+++ b/R/drape.plot.R
@@ -0,0 +1,68 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"drape.plot" <- function(x, y, z, z2 = NULL, col = tim.colors(64),
+ zlim = range(z, na.rm = TRUE), zlim2 = NULL, add.legend = TRUE,
+ horizontal = TRUE, theta = 30, phi = 20, breaks = NA, ...) {
+ #
+ # Thanks to JiHO for making corrections and useful extensions to this function
+ #
+ # if x is a list, discard y and z and extract them from x
+ if (is.list(x)) {
+ z <- x$z
+ y <- x$y
+ x <- x$x
+ }
+ NC <- length(col)
+ M <- nrow(z)
+ N <- ncol(z)
+ # if z2 is passed ( values for coloring facets ) use it
+ # if not use the z matrix that is also used to draw the
+ # perspective plot.
+ if (!is.null(z2)) {
+ M2 <- nrow(z2)
+ N2 <- ncol(z2)
+ if ((M != M2) | (N != N2)) {
+ stop("draping matrix dimensions must match z")
+ }
+ }
+ else {
+ z2 <- z
+ }
+ # if zlim2 has not been passed, set reasonable limits.
+ # if z2 is passed, set it to the range of z2
+ # if z2 is not passed, z2=z so we set it to the range of z (equal to zlim)
+ if (is.null(zlim2)) {
+ zlim2 <- range(c(z2), na.rm = TRUE)
+ }
+ # determine the colors for facets based on z2, the color scale and
+ # the zlim2 z limits
+ drape.info <- drape.color(z2, col = col, zlim = zlim2, breaks = breaks)
+ # draw filled wireframe and save perspective information
+ pm <- persp(x, y, z, theta = theta, phi = phi, col = drape.info$color.index,
+ zlim = zlim, ...)
+ # Note that zlim2 defines limits of color scale
+ if (add.legend) {
+ image.plot(zlim = zlim2, legend.only = TRUE, col = col,
+ horizontal = horizontal, breaks = drape.info$breaks)
+ }
+ # return pm if an assignment is made (see help file)
+ invisible(pm)
+}
diff --git a/R/dyadic.2check.R b/R/dyadic.2check.R
new file mode 100644
index 0000000..5403caf
--- /dev/null
+++ b/R/dyadic.2check.R
@@ -0,0 +1,36 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+dyadic.2check <- function(m, n, cut.p = 2) {
+ # checks that both m and n are of the form p*2^L
+ # where p <= cut.p
+ m2 <- as.integer(m)
+ n2 <- as.integer(n)
+ while ((n2 > cut.p) & (m2 > cut.p)) {
+ if ((m2%%2 != 0) | (n2%%2 != 0)) {
+ cat(n, "and", m, "must equal p*2^L where p is less than or equal to ",
+ cut.p, fill = TRUE)
+ return(FALSE)
+ }
+ m2 <- m2/2
+ n2 <- n2/2
+ }
+ return(TRUE)
+}
diff --git a/R/dyadic.check.R b/R/dyadic.check.R
new file mode 100644
index 0000000..da5e513
--- /dev/null
+++ b/R/dyadic.check.R
@@ -0,0 +1,34 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+dyadic.check <- function(n, cut.p = 2) {
+ # checks that n is of the form
+ # n=p*2^m where p <= cut.p
+ n2 <- as.integer(n)
+ while (n2 > cut.p) {
+ if (n2%%2 != 0) {
+ cat(n, "must equal p*2^m where p is less than or equal to ",
+ cut.p, fill = TRUE)
+ return(FALSE)
+ }
+ n2 <- n2/2
+ }
+ return(TRUE)
+}
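The check accepts n = p*2^m with p <= cut.p; a short sketch assuming the fields package is attached. 64 = 2^6 passes, while 96 = 3*2^5 fails because p = 3 exceeds the default cut.p = 2:

```r
library(fields)

ok  <- dyadic.check(64)   # TRUE
bad <- dyadic.check(96)   # FALSE, after printing an explanatory message
```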
diff --git a/R/evlpoly.R b/R/evlpoly.R
new file mode 100644
index 0000000..1b50945
--- /dev/null
+++ b/R/evlpoly.R
@@ -0,0 +1,34 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+fields.evlpoly <- function(x, coef) {
+ # evaluates polynomial at x values with coefficients coef[i] and powers i-1
+ #
+ n <- length(x)
+ J <- length(coef)
+ results <- rep(0, n)
+ temp <- .Fortran("evlpoly",PACKAGE="fields",
+ x = as.double(x),
+ n = as.integer(n),
+ coef = as.double(coef),
+ j = as.integer(J),
+ results = as.double(results))$results
+ return(temp)
+}
diff --git a/R/evlpoly2.R b/R/evlpoly2.R
new file mode 100644
index 0000000..2c09232
--- /dev/null
+++ b/R/evlpoly2.R
@@ -0,0 +1,40 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+fields.evlpoly2 <- function(x, coef, ptab) {
+ # evaluates polynomial at x values with coefficients coef[i] and powers i-1
+ #
+ n <- nrow(x)
+ nd <- ncol(x)
+ J <- nrow(ptab)
+ if (length(coef) != J) {
+ stop("coefficients not same length as ptab rows")
+ }
+ results <- rep(0, n)
+ temp <- .Fortran("evlpoly2",PACKAGE="fields",
+ x = as.double(x),
+ n = as.integer(n),
+ nd = as.integer(nd),
+ ptab = as.integer(ptab),
+ j = as.integer(J),
+ coef = as.double(coef),
+ results = as.double(results))$results
+ return(temp)
+}
diff --git a/R/exp.cov.R b/R/exp.cov.R
new file mode 100644
index 0000000..fa66f25
--- /dev/null
+++ b/R/exp.cov.R
@@ -0,0 +1,102 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"Exp.cov" <- function(x1, x2=NULL, theta = 1, p=1,
+ distMat = NA, C = NA, marginal = FALSE, onlyUpper=FALSE) {
+
+ if (!is.matrix(x1))
+ x1 <- as.matrix(x1)
+ if (is.null(x2))
+ x2 <- x1
+ if (!is.matrix(x2))
+ x2 <- as.matrix(x2)
+ if (length(theta) > 1)
+ stop("Non-scalar theta as input to Exp.cov is deprecated. Use the V argument in stationary.cov or scale the input locations beforehand.")
+ d <- ncol(x1)
+ n1 <- nrow(x1)
+ n2 <- nrow(x2)
+ # scale the coordinates by theta if distance matrix isn't precomputed
+ # a more general scaling by a matrix is done in stationary.cov
+ if(is.na(distMat[1]) || !is.na(C[1])) {
+ x1 <- x1*(1/theta)
+ x2 <- x2*(1/theta)
+ }
+ #
+ # there are three main possible actions listed below:
+ #
+ # if no cross covariance matrix and marginal variance not desired
+ if (is.na(C[1]) && !marginal) {
+
+ #compute distance matrix if necessary
+ if(is.na(distMat[1]))
+ distMat = rdist(x1, x2, compact=TRUE)
+ else
+ distMat = distMat*(1/theta)
+
+ #only exponentiate by p if p != 1
+ if(p != 1)
+ distMat = distMat^p
+
+ if(inherits(distMat, "dist")) {
+ #distMat is in compact form, so evaluate over all distMat and convert to matrix form
+
+ if(onlyUpper)
+ return(compactToMat(exp(-distMat), diagVal=1))
+ else
+ #if onlyUpper==FALSE, fill in lower triangle of covariance matrix as well
+ return(compactToMat(exp(-distMat), diagVal=1, lower.tri=TRUE))
+
+ }
+ else {
+ #distMat is an actual matrix
+
+ #only evaluate upper triangle of covariance matrix if possible
+ if(onlyUpper && nrow(distMat) == ncol(distMat))
+ return(ExponentialUpper(distMat))
+ else
+ return(exp(-distMat))
+ }
+
+ }
+ #
+ # multiply cross covariance matrix by C
+ # in this case implemented in C
+ #
+ else if (!is.na(C[1])) {
+ return(.Call("multebC",
+ nd = as.integer(d),
+ x1 = as.double(x1),
+ n1 = as.integer(n1),
+ x2 = as.double(x2),
+ n2 = as.integer(n2),
+ par = as.double(p),
+ c = as.double(C),
+ work = as.double(rep(0, n2)) , PACKAGE="fields")
+ )
+ }
+ #
+ # return marginal variance ( 1.0 in this case)
+ else if (marginal) {
+ return(rep(1, nrow(x1)))
+ }
+
+ #not possible to reach this point
+}
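A sketch of the default behaviour (p = 1, no precomputed distance matrix), assuming the fields package is attached; the result agrees with evaluating exp(-d/theta) on the full distance matrix:

```r
library(fields)

set.seed(2)
x <- matrix(runif(20), 10, 2)
theta <- 0.5

K <- Exp.cov(x, theta = theta)

# same covariance written out directly from the distance matrix
K2 <- exp(-rdist(x, x)/theta)

# marginal variance is identically 1
mv <- Exp.cov(x, theta = theta, marginal = TRUE)
```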
diff --git a/R/exp.earth.cov.R b/R/exp.earth.cov.R
new file mode 100644
index 0000000..b0b657b
--- /dev/null
+++ b/R/exp.earth.cov.R
@@ -0,0 +1,23 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"Exp.earth.cov" <- function(x1, x2, theta = 1) {
+ exp(-rdist.earth(x1, x2)/theta)
+}
diff --git a/R/exp.image.cov.R b/R/exp.image.cov.R
new file mode 100644
index 0000000..dbbf534
--- /dev/null
+++ b/R/exp.image.cov.R
@@ -0,0 +1,59 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"Exp.image.cov" <- function(ind1, ind2, Y, cov.obj = NULL,
+ setup = FALSE, grid, ...) {
+ if (is.null(cov.obj)) {
+ dx <- grid$x[2] - grid$x[1]
+ dy <- grid$y[2] - grid$y[1]
+ m <- length(grid$x)
+ n <- length(grid$y)
+ M <- ceiling2(2 * m)
+ N <- ceiling2(2 * n)
+ xg <- make.surface.grid(list((1:M) * dx, (1:N) * dy))
+ center <- matrix(c((dx * M)/2, (dy * N)/2), nrow = 1,
+ ncol = 2)
+ out <- Exp.cov(xg, center, ...)
+ out <- as.surface(xg, c(out))$z
+ temp <- matrix(0, nrow = M, ncol = N)
+ temp[M/2, N/2] <- 1
+ wght <- fft(out)/(fft(temp) * M * N)
+ cov.obj <- list(m = m, n = n, grid = grid, N = N, M = M,
+ wght = wght, call = match.call())
+ if (setup) {
+ return(cov.obj)
+ }
+ }
+ temp <- matrix(0, nrow = cov.obj$M, ncol = cov.obj$N)
+ if (missing(ind1)) {
+ temp[1:cov.obj$m, 1:cov.obj$n] <- Y
+ Re(fft(fft(temp) * cov.obj$wght, inverse = TRUE)[1:cov.obj$m,
+ 1:cov.obj$n])
+ }
+ else {
+ if (missing(ind2)) {
+ temp[ind1] <- Y
+ }
+ else {
+ temp[ind2] <- Y
+ }
+ Re(fft(fft(temp) * cov.obj$wght, inverse = TRUE)[ind1])
+ }
+}
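A sketch of the two-step use: first build the FFT weights with setup = TRUE, then multiply an image by the covariance, assuming the fields package is attached (theta is passed through ... to Exp.cov):

```r
library(fields)

grid <- list(x = seq(0, 1, length.out = 16),
             y = seq(0, 1, length.out = 16))
cov.obj <- Exp.image.cov(grid = grid, setup = TRUE, theta = 0.3)

Y <- matrix(rnorm(16 * 16), 16, 16)
out <- Exp.image.cov(Y = Y, cov.obj = cov.obj)
```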
diff --git a/R/exp.simple.cov.R b/R/exp.simple.cov.R
new file mode 100644
index 0000000..2085d8a
--- /dev/null
+++ b/R/exp.simple.cov.R
@@ -0,0 +1,62 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+Exp.simple.cov <- function(x1, x2=NULL, theta = 1, C = NA,
+ marginal = FALSE) {
+ # this is a simple exponential covariance function
+ # with the calling format and behaviour used in fields.
+ #
+ # different locations are the different rows of x1 and x2.
+ # this function can return three different results
+ # depending on the values of C and marginal.
+ # The three cases:
+ # 1) cross covariance matrix
+ # 2) cross covariance matrix times a vector (C)
+ # 3) the diagonal elements of covariance matrix at locations x1.
+ if (is.null(x2)) {
+ x2 <- x1
+ }
+ # CASE 1:
+ if (is.na(C[1]) & !marginal) {
+ # rdist finds the cross distance matrix between the
+ # locations at x1, x2.
+ #
+ return(exp(-rdist(x1, x2)/theta))
+ }
+ # CASE 2:
+ # or return multiplication of cov( x2,x1) with vector C
+ if (!is.na(C[1])) {
+ return(exp(-rdist(x1, x2)/theta) %*% C)
+ #
+ # if the rows of X1 are large
+ # this line could be replaced by a call to C or FORTRAN
+ # to make the multiply use less memory.
+ #
+ # there are also other algorithms for fast multiplies when
+ # X2 is on a grid.
+ #
+ }
+ # CASE 3
+ # return marginal variance (in this case it is trivially a constant
+ # vector of 1.0)
+ if (marginal) {
+ return(rep(1, nrow(x1)))
+ }
+}
diff --git a/R/fast.1way.R b/R/fast.1way.R
new file mode 100644
index 0000000..366d00c
--- /dev/null
+++ b/R/fast.1way.R
@@ -0,0 +1,46 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"fast.1way" <- function(lev, y, w = rep(1, length(y))) {
+ # w are proportional to reciprocal variance.
+ if (!is.matrix(y)) {
+ y <- as.matrix(y)
+ }
+ N <- nrow(y)
+ NC <- ncol(y)
+ # ordered unique values of lev
+ tags <- lev[!duplicated(lev)]
+ NR <- length(tags)
+ # lev are now integer tags
+ lev <- match(lev, tags)
+ #
+ means <- matrix(NA, nrow = NR, ncol = NC)
+ # add together weights with same lev
+ w.means <- c(tapply(w, lev, sum))
+ for (k in 1:NC) {
+ # find weighted means for each lev
+ means[, k] <- (tapply(y[, k] * w, lev, sum)/w.means)
+ }
+ # find SS
+ SSE <- colSums((w * (y - means[lev, ])^2))
+ MSE <- SSE/(N - NR)
+ list(n = N, means = means, SSE = SSE, w.means = w.means,
+ MSE = MSE, lev = lev, tags = tags)
+}
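[Editor's note: fast.1way computes, for each unique level, the weighted mean of the observations, then the weighted sum of squared errors (SSE) and the mean squared error (MSE). A rough Python sketch of the same computation; the function name and return layout are illustrative, not part of fields.]

```python
def fast_1way(lev, y, w):
    # ordered unique levels, as in lev[!duplicated(lev)]
    tags = list(dict.fromkeys(lev))
    idx = {t: i for i, t in enumerate(tags)}
    # accumulate weights and weighted responses per level
    wsum = [0.0] * len(tags)
    wy = [0.0] * len(tags)
    for l, yi, wi in zip(lev, y, w):
        k = idx[l]
        wsum[k] += wi
        wy[k] += wi * yi
    means = [a / b for a, b in zip(wy, wsum)]
    # weighted sum of squares about the group means
    sse = sum(wi * (yi - means[idx[l]]) ** 2
              for l, yi, wi in zip(lev, y, w))
    mse = sse / (len(y) - len(tags))
    return means, wsum, sse, mse
```

For example, levels ("a","b","a","b") with responses (1,2,3,6) and unit weights give group means 2 and 4, SSE = 10, and MSE = 10/(4-2) = 5.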
diff --git a/R/fastTps.MLE.R b/R/fastTps.MLE.R
new file mode 100644
index 0000000..0023807
--- /dev/null
+++ b/R/fastTps.MLE.R
@@ -0,0 +1,32 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+fastTps.MLE <- function(x, y, weights = rep(1, nrow(x)),
+ Z = NULL, ..., par.grid=NULL, theta, lambda = NULL,
+ lambda.profile = TRUE,
+ verbose = FALSE, relative.tolerance = 1e-04) {
+
+ warning("fastTps.MLE is deprecated and might be removed in a future release. Use fastTpsMLE instead.")
+
+ do.call("fastTpsMLE", c(list(x, y, weights, Z), list(...), list(par.grid, theta, lambda,
+ lambda.profile, verbose,
+ relative.tolerance)))
+}
+
diff --git a/R/fastTps.family.R b/R/fastTps.family.R
new file mode 100644
index 0000000..f7d079f
--- /dev/null
+++ b/R/fastTps.family.R
@@ -0,0 +1,255 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"fastTps" <- function(x, Y, m = NULL, p = NULL, theta,
+ lon.lat = FALSE, find.trA=TRUE,lambda=0, ...) {
+ x <- as.matrix(x)
+ d <- ncol(x)
+ if (is.null(p)) {
+ if (is.null(m)) {
+ m <- max(c(2, ceiling(d/2 + 0.1)))
+ }
+ p <- (2 * m - d)
+ if (p <= 0) {
+ stop("m is too small: you must have 2*m - d > 0")
+ }
+ }
+ # special arguments to send to the wendland covariance/taper function.
+ # see nearest.dist for some explanation of 'method'
+ cov.args <- list(k = p, Dist.args = list(method = ifelse(!lon.lat,
+ "euclidean", "greatcircle")))
+ if( lambda==0){
+ warning("fastTps will interpolate observations")}
+ object<-mKrig(x, Y, cov.function = "wendland.cov", m = m, cov.args = cov.args,
+ theta = theta, find.trA = find.trA,lambda=lambda, ...)
+ object$call<- match.call()
+ class(object) <- c( "fastTps", "mKrig")
+ return( object)
+}
+
+predict.fastTps <- function(object, xnew = NULL, grid.list=NULL,
+ ynew = NULL, derivative = 0,
+ Z = NULL, drop.Z = FALSE, just.fixed = FALSE, xy=c(1,2), ...)
+ {
+ # the main reason to pass new args to the covariance is to increase
+ # the temp space size for sparse multiplications
+ # other optional arguments from mKrig are passed along in the
+ # list object$args
+ cov.args <- list(...)
+ # predict using grid.list or as default observation locations
+ if( !is.null(grid.list)){
+ xnew<- make.surface.grid( grid.list)
+ }
+ if( is.null(xnew) ) {
+ xnew <- object$x
+ }
+ if (!is.null(ynew)) {
+ coef.hold <- mKrig.coef(object, ynew)
+ c.coef <- coef.hold$c
+ d.coef <- coef.hold$d
+ }
+ else {
+ c.coef <- object$c
+ d.coef <- object$d
+ }
+ # fixed part of the model: this is a polynomial of degree m-1
+ # Tmatrix <- fields.mkpoly(xnew, m=object$m)
+ #
+ if (derivative == 0){
+ if (drop.Z | object$nZ == 0) {
+ # just evaluate polynomial and not the Z covariate
+ temp1 <- fields.mkpoly(xnew, m = object$m) %*% d.coef[object$ind.drift, ]
+ }
+ else{
+ if( is.null(Z)) {
+ Z <- object$Tmatrix[, !object$ind.drift]
+ }
+ temp1 <- cbind(fields.mkpoly(xnew, m = object$m), Z) %*% d.coef
+ }
+ }
+ else{
+ if (!drop.Z & object$nZ > 0) {
+ stop("derivative not supported with Z covariate included")
+ }
+ temp1 <- fields.derivative.poly(xnew, m = object$m, d.coef[object$ind.drift,
+ ])
+ }
+ if (just.fixed) {
+ return(temp1)
+ }
+
+ useFORTRAN<- (ncol(object$x)==2) & (object$args$k == 2) & (derivative==0) & (!is.null(grid.list))
+
+
+ # add nonparametric part.
+ # call FORTRAN under a specific case
+ if( useFORTRAN){
+
+ temp2<- multWendlandGrid(grid.list, object$knots, delta=object$args$theta, c.coef, xy=xy)
+
+ }
+ else{
+ temp2 <- do.call(object$cov.function.name, c(object$args,
+ list(x1 = xnew, x2 = object$knots, C = c.coef, derivative = derivative),
+ cov.args))
+ }
+# add two parts together
+ return((temp1 + temp2))
+}
+
+multWendlandGrid <- function( grid.list,center, delta, coef, xy= c(1,2) ){
+ xGrid<- grid.list[[xy[1]]]
+ yGrid<- grid.list[[xy[2]]]
+ mx<- length( xGrid)
+ my<- length( yGrid)
+# transform centers to correspond to integer spacing of grid:
+# i.e. 1:nx and 1:ny
+ dx<- (xGrid[mx] - xGrid[1]) / (mx-1)
+ dy<- (yGrid[my] - yGrid[1]) / (my-1)
+ centerScaled<- cbind( ((center[,1] - xGrid[1]) / dx) + 1,
+ ((center[,2] - yGrid[1]) / dy) + 1 )
+ deltaX<- delta/dx
+ deltaY<- delta/dy
+
+ nc<- nrow( center)
+ out<-.Fortran( "multWendlandG", PACKAGE="fields",
+ mx=as.integer(mx),
+ my=as.integer(my),
+ deltaX= as.double( deltaX),
+ deltaY= as.double( deltaY),
+ nc= as.integer(nc),
+ center=as.double(centerScaled),
+ coef=as.double(coef),
+ h= as.double(matrix(0,mx,my)),
+ flag=as.integer(1)
+ )
+ if( out$flag!= 0){
+ stop("error in multWendlandG FORTRAN")}
+ return( out$h)
+ }
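[Editor's note: before calling the Fortran routine, multWendlandGrid rescales the physical center locations into the grid's integer index coordinates (1..mx, 1..ny) and rescales the taper range delta by the grid spacing in each direction. A small Python sketch of just that rescaling step (names are illustrative):]

```python
def scale_to_grid_index(x_grid, y_grid, centers, delta):
    # grid spacings, assuming equally spaced grid points
    mx, my = len(x_grid), len(y_grid)
    dx = (x_grid[-1] - x_grid[0]) / (mx - 1)
    dy = (y_grid[-1] - y_grid[0]) / (my - 1)
    # map each center (cx, cy) to 1-based grid-index coordinates
    scaled = [((cx - x_grid[0]) / dx + 1, (cy - y_grid[0]) / dy + 1)
              for cx, cy in centers]
    # delta expressed in grid-index units for each axis
    return scaled, delta / dx, delta / dy
```

On a 3-point x grid over [0, 1] and a 3-point y grid over [0, 2], the center (0.5, 1.0) lands at grid-index coordinates (2, 2).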
+
+#
+#"sim.fastTps.approx"<- function(fastTpsObject,...){
+# sim.mKrig.approx( fastTpsObject,...)}
+#
+
+
+"sim.fastTps.approx" <- function(fastTpsObject, predictionPointsList,
+ simulationGridList=NULL, gridRefinement=5, gridExpansion=1 + 1e-07,
+ M = 1, delta=NULL, verbose = FALSE, ... ) {
+ # create grid if not passed
+ if( ncol( fastTpsObject$x) != 2){
+ stop("Only implemented for 2 dimensions")
+ }
+# coerce names of grid to be x and y
+ names(predictionPointsList) <- c( "x", "y")
+ nx<- length((predictionPointsList$x))
+ ny<- length((predictionPointsList$y))
+
+ simulationGridList<- makeSimulationGrid2( fastTpsObject, predictionPointsList ,
+ gridRefinement, gridExpansion)
+ nxSimulation<- length(simulationGridList$x)
+ nySimulation<- length(simulationGridList$y)
+ sigma <- fastTpsObject$sigma.MLE
+ rho <- fastTpsObject$rho.MLE
+ #
+ # set up various sizes of arrays
+ nObs <- nrow(fastTpsObject$x)
+ if (verbose) {
+ cat("nObs, sigma, rho", nObs, sigma, rho, fill = TRUE)
+ }
+ #
+ # set up object for simulating on a grid
+ #
+# print( system.time(
+ covarianceObject <- wendland.image.cov(
+ setup = TRUE, grid =simulationGridList,
+ cov.args=fastTpsObject$args )
+# ))
+ if (verbose) {
+ cat( "dim of full circulant matrix ", dim(covarianceObject$wght), fill = TRUE)
+ }
+ # output array
+ out <- matrix(NA, nx*ny, M )
+ #
+ # find conditional mean field from initial fit
+ # don't multiply by sd or add mean if this is
+ # a correlation model fit.
+ # (these are added at the predict step).
+ # from now on all predicted values are on the grid
+ # represented by a matrix
+ hHat<- predict.fastTps(fastTpsObject, grid.list=predictionPointsList,...)
+ # empty image object to hold simulated fields
+ hTrue<- c( simulationGridList, list( z= matrix(NA, nxSimulation,nySimulation)))
+ ##########################################################################################
+ ### begin the big loop
+ ##########################################################################################
+ xData<- fastTpsObject$x
+ weightsError<- fastTpsObject$weights
+ for (k in 1:M) {
+ # simulate full field
+ if( verbose){
+ cat( k, " ")}
+ hTrue$z <- sqrt(rho) * sim.rf(covarianceObject)
+ #
+ # NOTE: fixed part of model (null space) need not be simulated
+ # because the estimator is unbiased for this part.
+ # the variability is still captured because the fixed part
+ # is still estimated as part of the predict step below
+ #
+ # bilinear interpolation to approximate values at data locations
+ #
+ hData <- interp.surface(hTrue,xData)
+ hPredictionGrid<- c(interp.surface.grid(hTrue, predictionPointsList)$z)
+ ySynthetic <- hData + sigma * 1/sqrt(weightsError)* rnorm(nObs)
+ # predict at grid using these data
+ # and subtract from synthetic 'true' value
+
+ spatialError <- c(
+ predictSurface.fastTps(fastTpsObject,
+ grid.list = predictionPointsList,
+ ynew = ySynthetic, ...
+ )$z
+ ) - hPredictionGrid
+ # add the error to the actual estimate (conditional mean)
+ out[ , k] <- hHat + spatialError
+ }
+ return( list( predictionPointsList=predictionPointsList, Ensemble=out, call=match.call()) )
+}
+
+
+ makeSimulationGrid2<-function( fastTpsObject, predictionPointsList,
+ gridRefinement, gridExpansion){
+ nx<- length((predictionPointsList$x))
+ ny<- length((predictionPointsList$y))
+ nxSimulation<- nx*gridRefinement*gridExpansion
+ nySimulation<- ny*gridRefinement*gridExpansion
+ # range should include prediction grid and the data locations
+ xRange<- range(c(fastTpsObject$x[,1], predictionPointsList$x) )
+ yRange<- range(c(fastTpsObject$x[,2], predictionPointsList$y) )
+ midpointX<- (xRange[2] + xRange[1])/2
+ midpointY<- (yRange[2] + yRange[1])/2
+ deltaX<- gridExpansion*(xRange[2] - xRange[1])/2
+ deltaY<- gridExpansion*(yRange[2] - yRange[1])/2
+ return(
+ list( x= seq( midpointX - deltaX, midpointX + deltaX,, nxSimulation),
+ y= seq( midpointY - deltaY, midpointY + deltaY,, nySimulation) )
+ )
+}
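[Editor's note: makeSimulationGrid2 builds an equally spaced simulation grid whose range covers both the prediction grid and the data locations, widened by gridExpansion and refined by gridRefinement. A rough Python sketch of the same construction (function and argument names are illustrative):]

```python
def make_simulation_grid(pred_x, pred_y, data_x, data_y,
                         refinement=5, expansion=1 + 1e-7):
    def axis(pred, data):
        # range must cover both prediction points and data locations
        lo = min(min(pred), min(data))
        hi = max(max(pred), max(data))
        mid = (lo + hi) / 2
        half = expansion * (hi - lo) / 2
        # refined number of grid points along this axis
        n = int(len(pred) * refinement * expansion)
        step = 2 * half / (n - 1)
        return [mid - half + i * step for i in range(n)]
    return axis(pred_x, data_x), axis(pred_y, data_y)
```

With a 2-point prediction grid on [0, 1], data inside that range, refinement 5, and no expansion, each axis becomes a 10-point grid spanning [0, 1].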
diff --git a/R/fastTpsMLE.R b/R/fastTpsMLE.R
new file mode 100644
index 0000000..fb16a4a
--- /dev/null
+++ b/R/fastTpsMLE.R
@@ -0,0 +1,133 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+fastTpsMLE <- function(x, y, weights = rep(1, nrow(x)),
+ Z = NULL, ..., par.grid=NULL, theta, lambda = NULL,
+ lambda.profile = TRUE,
+ verbose = FALSE, relative.tolerance = 1e-04) {
+ # these are all the arguments needed to call mKrig except
+ # lambda and those in par.grid.
+ fastTpsArgs <- c(list(x = x, Y = y, weights = weights, Z = Z,
+ theta=theta),
+ list(...))
+ lnProfileLike.max <- -1e+20
+ # find NG -- number of parameters to try
+ par.grid <- data.frame(par.grid)
+ if (nrow(par.grid) == 0) {
+ if (is.null(lambda)) {
+ NG <- 1
+ }
+ else {
+ NG <- length(lambda)
+ }
+ }
+ else {
+ NG <- nrow(par.grid)
+ }
+ # output matrix to summarize results
+ summary <- matrix(NA, nrow = NG, ncol = 8)
+ dimnames(summary) <- list(NULL, c("EffDf", "lnProfLike",
+ "GCV", "sigma.MLE", "rho.MLE", "llambda.MLE", "counts eval",
+ "counts grad"))
+ lambda.best <- NA
+ # default for lambda is 1.0 for first value and exp(llambda.opt) for subsequent ones
+ # this is controlled by NAs for lambda starting values.
+ if (is.null(lambda)) {
+ lambda <- rep(NA, NG)
+ }
+ # default starting value for lambda is 1 or log lambda is 0
+ llambda.opt <- 0
+ optim.counts <- c(NA, NA)
+ lnLike.eval <- list()
+ # Define the objective function as a tricksy call to mKrig
+ # if Y is a matrix of replicated data sets use the log likelihood for the complete data sets
+ temp.fn <- function(x) {
+ # NOTE: FULL refers to estimates collapsed across the replicates
+ # if Y is a matrix
+ # assign to hold only a few components returned by mKrig
+ hold <- do.call("fastTps", c(fastTpsArgs,
+ list(lambda = exp(x)),list(find.trA=FALSE), cov.args.temp ))[
+ c("lambda.fixed",
+ "rho.MLE.FULL",
+ "sigma.MLE.FULL",
+ "lnProfileLike.FULL")]
+ # add this evaluation to an object (i.e. here a matrix) in the calling frame
+ temp.eval <- get("capture.evaluations")
+ assign("capture.evaluations", rbind(temp.eval, unlist(hold)),
+ envir = capture.env)
+ return(hold$lnProfileLike.FULL)
+ }
+ #
+ # begin loop over covariance arguments
+ for (k in 1:NG) {
+ llambda.start <- ifelse(is.na(lambda[k]), llambda.opt,
+ log(lambda[k]))
+ # list of covariance arguments from par.grid with the right names (some R arcana!)
+ # note that this only works because 1) temp.fn will search in this frame for this object and
+ # 2) par.grid has been coerced to a data frame so one has a concept of a row subscript.
+ cov.args.temp <- as.list(par.grid[k, ])
+ names(cov.args.temp) <- names(par.grid)
+ if (lambda.profile) {
+ # set up matrix to store evaluations from within optim
+ capture.evaluations <- matrix(NA, ncol = 4, nrow = 1,
+ dimnames = list(NULL, c("lambda", "rho.MLE",
+ "sigma.MLE", "lnProfileLike.FULL")))
+ capture.env <- environment()
+ # call to optim
+ look <- optim(llambda.start, temp.fn, method = "BFGS",
+ control = list(fnscale = -1, parscale = 0.1,
+ ndeps = 0.05, reltol = relative.tolerance))
+ llambda.opt <- look$par
+ optim.counts <- look$counts
+ # call to 1-d search
+ # opt.summary <- optimize(temp.fn, interval= llambda.start + c(-8,8), maximum=TRUE)
+ # llambda.opt <- opt.summary$maximum
+ # optim.counts<- c(nrow(capture.evaluations)-1, NA)
+ # accumulate the new matrix of lnlambda and ln likelihoods (omitting first row of NAs)
+ lnLike.eval <- c(lnLike.eval, list(capture.evaluations[-1,
+ ]))
+ }
+ else {
+ # no refinement for lambda so just save the 'start' value as the final one.
+ llambda.opt <- llambda.start
+ }
+ # final fit at the optimal value (or the starting value if there is no refinement/maximization for lambda)
+ obj <- do.call("fastTps", c(fastTpsArgs, cov.args.temp,
+ list(lambda = exp(llambda.opt))))
+ if (obj$lnProfileLike.FULL > lnProfileLike.max) {
+ lnProfileLike.max <- obj$lnProfileLike.FULL
+ cov.args.MLE <- cov.args.temp
+ lambda.best <- exp(llambda.opt)
+ }
+ # save results of the kth covariance model evaluation
+ summary[k, 1:8] <- c(obj$eff.df, obj$lnProfileLike.FULL,
+ obj$GCV, obj$sigma.MLE.FULL, obj$rho.MLE.FULL, llambda.opt,
+ optim.counts)
+ if (verbose) {
+ cat("Summary: ", k, summary[k, 1:8], fill = TRUE)
+ }
+ }
+ return(list(summary = summary, par.grid = par.grid,
+ cov.args.MLE = cov.args.MLE,
+ mKrig.args = list(...), lambda.best = lambda.best,
+ lambda.MLE = lambda.best,
+ call = match.call(), lnLike.eval = lnLike.eval))
+}
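[Editor's note: the driver above maximizes the profile log likelihood over log(lambda) -- via optim with BFGS and fnscale = -1 -- while capturing every objective evaluation in a matrix held in the calling frame. A rough Python sketch of that pattern, using a golden-section search over the same +/- 8 bracket as the commented-out optimize() alternative; all names are illustrative and the quadratic objective in the example stands in for the real profile likelihood.]

```python
import math

def maximize_log_likelihood(lnlike, llambda_start, tol=1e-4):
    # 1-D maximization over log(lambda), recording every evaluation
    # (the analogue of the capture.evaluations matrix)
    evals = []
    def f(ll):
        val = lnlike(ll)
        evals.append((ll, val))
        return val
    lo, hi = llambda_start - 8, llambda_start + 8
    gr = (math.sqrt(5) - 1) / 2  # golden ratio section
    a, b = lo, hi
    c, d = b - gr * (b - a), a + gr * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            # maximum lies in [a, d]
            b, d = d, c
            c = b - gr * (b - a)
        else:
            # maximum lies in [c, b]
            a, c = c, d
            d = a + gr * (b - a)
    return (a + b) / 2, evals
```

Searching in log(lambda) keeps the parameter positive and makes the step sizes scale-free, which is why the R code exponentiates inside the objective.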
+
diff --git a/R/fields.color.picker.R b/R/fields.color.picker.R
new file mode 100644
index 0000000..8564241
--- /dev/null
+++ b/R/fields.color.picker.R
@@ -0,0 +1,50 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+fields.color.picker <- function() {
+ par(mar = c(0, 0, 3, 0))
+ # names of colors in default graphics options.
+ clab <- colors()
+ n <- length(clab)
+ N <- ceiling(sqrt(n))
+ M <- N
+ temp <- rep(NA, M * N)
+ temp[1:n] <- 1:n
+ z <- matrix(temp, M, N)
+ # matrix of all colors
+ image(seq(0.5, M + 0.5, , M + 1), seq(0.5, N + 0.5, , N +
+ 1), z, col = clab, axes = FALSE, xlab = "", ylab = "")
+ cat("Use mouse to identify color", fill = TRUE)
+ loc <- locator(1)
+ i <- round(loc$x)
+ j <- round(loc$y)
+ ind <- z[i, j]
+ points(i, j, col = clab[ind], cex = 4, pch = "O")
+ points(i, j, pch = "+", col = "black", cex = 1)
+ mtext(side = 3, text = clab[ind], col = clab[ind], line = 1,
+ cex = 2)
+ # write out RGB values to console
+ cat("ID ", ind, " name ", clab[ind], fill = TRUE)
+ cat("RGB", col2rgb(clab[ind])/256, fill = TRUE)
+ temp <- signif(col2rgb(clab[ind])/256, 3)
+ # write the color out in (approximately) LaTeX format for defining a color
+ cat(clab[ind], " {rgb}{", temp[1], ",", temp[2], ",", temp[3],
+ "}", fill = TRUE)
+}
diff --git a/R/fields.convert.grid.R b/R/fields.convert.grid.R
new file mode 100644
index 0000000..d1c12a5
--- /dev/null
+++ b/R/fields.convert.grid.R
@@ -0,0 +1,34 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"fields.convert.grid" <- function(midpoint.grid) {
+ # converts from midpoints of a grid to boundaries
+ # x are midpoints of grid
+ # this will handle unequally spaced points
+ x <- sort(midpoint.grid)
+ n <- length(x)
+ # interior boundaries
+ xi <- (x[2:n] + x[1:(n - 1)])/2
+ # first and last.
+ x1 <- x[1] - (x[2] - x[1])/2
+ xnp1 <- x[n] + (x[n] - x[(n - 1)])/2
+ # combine the first boundary, interior boundaries, and last boundary
+ c(x1, xi, xnp1)
+}
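[Editor's note: the midpoint-to-boundary conversion above is short enough to restate exactly: interior boundaries are averages of adjacent midpoints, and the two outer boundaries extrapolate by half the nearest spacing. A direct Python translation (the function name is illustrative):]

```python
def convert_grid(midpoints):
    # convert grid-cell midpoints (possibly unequally spaced) to boundaries
    x = sorted(midpoints)
    n = len(x)
    # interior boundaries: averages of adjacent midpoints
    interior = [(x[i] + x[i + 1]) / 2 for i in range(n - 1)]
    # first and last boundaries extrapolate by half the nearest spacing
    first = x[0] - (x[1] - x[0]) / 2
    last = x[-1] + (x[-1] - x[-2]) / 2
    return [first] + interior + [last]
```

For midpoints (1, 2, 4) the boundaries are (0.5, 1.5, 3, 5): n midpoints always yield n + 1 boundaries.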
diff --git a/R/fields.derivative.poly.R b/R/fields.derivative.poly.R
new file mode 100644
index 0000000..f917803
--- /dev/null
+++ b/R/fields.derivative.poly.R
@@ -0,0 +1,40 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+fields.derivative.poly <- function(x, m, dcoef) {
+ # dimension of x locations
+ # goal is find partial derivative matrix
+ d <- ncol(x)
+ out <- fields.mkpoly(rbind(x[1, ]), m)
+ ptab <- attr(out, "ptab")
+ if (nrow(ptab) != length(dcoef)) {
+ stop(" rows of ptab not equal to length of dcoef")
+ }
+ hold <- matrix(NA, ncol = d, nrow = nrow(x))
+ for (k in 1:d) {
+ nonzero <- ptab[, k] != 0
+ ptemp <- matrix(ptab[nonzero, ], ncol = d)
+ dtemp <- dcoef[nonzero]
+ dtemp <- dtemp * ptemp[, k]
+ ptemp[, k] <- ptemp[, k] - 1
+ hold[, k] <- fields.evlpoly2(x, dtemp, ptemp)
+ }
+ return(hold)
+}
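[Editor's note: the derivative computation relies on the exponent table ptab -- one row of per-variable powers for each polynomial term. Differentiating in variable k multiplies each coefficient by that term's k-th power and lowers it by one, dropping terms where the power is zero. A Python sketch of that rule for a single point (names are illustrative):]

```python
def poly_partial(x, ptab, coef, k):
    # d/dx_k of sum_j coef[j] * prod_i x[i]**ptab[j][i], at one point x
    total = 0.0
    for powers, c in zip(ptab, coef):
        if powers[k] == 0:
            continue  # constant in x_k: derivative vanishes
        term = c * powers[k]  # bring down the exponent
        for i, p in enumerate(powers):
            q = p - 1 if i == k else p  # lower the k-th power by one
            term *= x[i] ** q
        total += term
    return total
```

For f(x, y) = 3*x^2*y + 2*y the table is ((2,1), (0,1)) with coefficients (3, 2); at (2, 5) the partials are 6*x*y = 60 and 3*x^2 + 2 = 14.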
diff --git a/R/fields.diagonalize.R b/R/fields.diagonalize.R
new file mode 100644
index 0000000..8e645d2
--- /dev/null
+++ b/R/fields.diagonalize.R
@@ -0,0 +1,48 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+
+"fields.diagonalize" <- function(A, B) {
+
+ hold <- eigen(A, symmetric = TRUE)
+ # square root of A
+ hold2 <- (t(hold$vectors) * sqrt(1/hold$values))
+ #
+ # A.inv.sqrt = hold2
+ # A.inv = hold%*% t(hold2)
+ #
+ # eigen decomp of A.inv.sqrt B t( A.inv.sqrt)
+ #
+ hold <- eigen((hold2) %*% B %*% t(hold2), symmetric = TRUE)
+ # the magic G matrix used throughout fields.
+ G <- t(hold2) %*% hold$vectors
+ #
+ # Note:
+ # G simultaneously diagonalizes two matrices:
+ #
+ # G^T A G= I
+ # G^T B G= D
+ #
+ # and in terms of application we also have the useful
+ # diagonalization
+ #
+ # (A +lambda B)^{-1} = G( I + lambda D)^{-1} G^T
+ list(G = G, D = hold$values)
+}
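[Editor's note: the construction above can be checked numerically: form the inverse square root of A from its eigendecomposition, diagonalize the transformed B, and verify G^T A G = I and G^T B G = D. A self-contained Python sketch for the 2x2 symmetric case, with a hand-rolled 2x2 eigensolver so no linear-algebra library is assumed; all names are illustrative.]

```python
import math

def eig2(a, b, c):
    # eigenvalues/eigenvectors of the symmetric 2x2 matrix [[a, b], [b, c]]
    mean, disc = (a + c) / 2, math.hypot((a - c) / 2, b)
    l1, l2 = mean + disc, mean - disc
    v1 = (b, l1 - a) if abs(b) > 1e-14 else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    n = math.hypot(*v1)
    v1 = (v1[0] / n, v1[1] / n)
    # columns are the eigenvectors; the second is orthogonal to the first
    return (l1, l2), [[v1[0], -v1[1]], [v1[1], v1[0]]]

def mm(X, Y):  # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(X):  # 2x2 transpose
    return [[X[j][i] for j in range(2)] for i in range(2)]

def diagonalize(A, B):
    vals, V = eig2(A[0][0], A[0][1], A[1][1])
    # hold2 = diag(1/sqrt(vals)) %*% t(V), an inverse square root factor of A
    hold2 = [[V[j][i] / math.sqrt(vals[i]) for j in range(2)] for i in range(2)]
    # eigendecomposition of hold2 %*% B %*% t(hold2)
    M = mm(mm(hold2, B), tr(hold2))
    D, W = eig2(M[0][0], M[0][1], M[1][1])
    G = mm(tr(hold2), W)  # the 'magic G matrix'
    return G, D

A = [[2.0, 0.5], [0.5, 1.0]]
B = [[1.0, 0.3], [0.3, 2.0]]
G, D = diagonalize(A, B)
I_check = mm(mm(tr(G), A), G)  # should be the identity
D_check = mm(mm(tr(G), B), G)  # should be diag(D)
```

The same G then gives the resolvent identity (A + lambda*B)^{-1} = G (I + lambda*D)^{-1} G^T used throughout fields.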
diff --git a/R/fields.diagonalize2.R b/R/fields.diagonalize2.R
new file mode 100644
index 0000000..eb76e3e
--- /dev/null
+++ b/R/fields.diagonalize2.R
@@ -0,0 +1,49 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+
+"fields.diagonalize2" <- function(A, B, verbose = FALSE) {
+ M <- nrow(A)
+ hold.AB <- eigen(A + B, symmetric = TRUE)
+ if (verbose) {
+ cat("log10 condition number of A + B in fields.diagonalize2",
+ fill = TRUE)
+ print(log10(max(hold.AB$values)/min(hold.AB$values)))
+ }
+ # inverse square root of A+B
+ hold.AB <- (t(hold.AB$vectors) * (1/sqrt(hold.AB$values)))
+ hold.B <- eigen(hold.AB %*% A %*% t(hold.AB), symmetric = TRUE)
+ G <- t(hold.B$vectors) %*% hold.AB
+ D.A <- hold.B$values
+ # remove some large temporary matrices.
+ remove(hold.AB)
+ remove(hold.B)
+ # crank on finding G and D.
+ G <- (1/sqrt(D.A)) * G
+ D <- colSums(t(G) * (B) %*% t(G))
+ # sort from largest to smallest and take the transpose ---
+ # this now matches the old version in fields.diagonalize
+ D <- D[M:1]
+ G <- t(G[M:1, ])
+ # to test:
+ # test.for.zero( t(G) %*% (A) %*% (G), diag(1,M), tag='A test' )
+ # test.for.zero( t(G) %*% (B) %*% (G), diag(D,M), tag='B test' )
+ list(G = G, D = D)
+}
diff --git a/R/fields.duplicated.matrix.R b/R/fields.duplicated.matrix.R
new file mode 100644
index 0000000..1b6126a
--- /dev/null
+++ b/R/fields.duplicated.matrix.R
@@ -0,0 +1,32 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"fields.duplicated.matrix" <- function(mat, digits = 8) {
+ nc <- ncol(mat)
+ temp <- matrix(match(c(signif(mat, digits)), unique(c(signif(mat,
+ digits)))), ncol = nc)
+ temp2 <- format(temp[, 1])
+ if (nc > 1) {
+ for (k in 2:nc) {
+ temp2 <- paste(temp2, temp[, k], sep = "X")
+ }
+ }
+ match(temp2, unique(temp2))
+}
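[Editor's note: the trick above is to round every entry to a fixed number of significant digits and then match whole rows, so rows that differ only by floating-point noise are treated as duplicates. A rough Python sketch of the same idea, using tuples as row keys instead of the pasted-string encoding; names are illustrative.]

```python
import math

def duplicated_matrix(mat, digits=8):
    # map each row to the index of its first occurrence, after rounding
    # each entry to `digits` significant figures (mirrors signif())
    def signif(v):
        if v == 0:
            return 0.0
        return round(v, digits - 1 - int(math.floor(math.log10(abs(v)))))
    keys = [tuple(signif(v) for v in row) for row in mat]
    first = {}
    out = []
    for key in keys:
        # first time a key is seen it gets the next index; repeats reuse it
        out.append(first.setdefault(key, len(first) + 1))
    return out
```

A row perturbed at the 1e-12 level collapses onto its unperturbed twin once rounded to 8 significant digits.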
diff --git a/R/fields.mkpoly.R b/R/fields.mkpoly.R
new file mode 100644
index 0000000..f555c75
--- /dev/null
+++ b/R/fields.mkpoly.R
@@ -0,0 +1,42 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"fields.mkpoly" <- function(x, m = 2) {
+ if (m < 0)
+ stop("'m' has to be zero or larger.")
+ if( m==0){
+# warning("There is no polynomial fixed component")
+ return( NULL)
+ }
+ if (!is.matrix(x))
+ x <- as.matrix(x)
+ d <- ncol(x)
+ n <- nrow(x)
+ nterms <- choose((m + d - 1), d)
+ temp <- .Fortran("dmaket",PACKAGE="fields", m = as.integer(m), n = as.integer(n),
+ dim = as.integer(d), des = as.double(x), lddes = as.integer(n),
+ npoly = as.integer(nterms), tmatrix = as.double(rep(0,
+ n * (nterms))), ldt = as.integer(n), wptr = as.integer(rep(0,
+ d * m)), info = as.integer(0), ptab = as.integer(rep(0,
+ nterms * d)), ldptab = as.integer(nterms))
+ temp2 <- matrix(temp$tmatrix, nrow = n)
+ attr(temp2, "ptab") <- matrix(temp$ptab, nrow = nterms, ncol = d)
+ temp2
+}
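[Editor's note: the Fortran routine dmaket builds the polynomial basis of total degree < m in d variables, with choose(m + d - 1, d) terms as computed in nterms above. A Python sketch that enumerates the same exponent table (the ordering here is lexicographic and need not match dmaket's):]

```python
from itertools import product
from math import comb

def mkpoly_ptab(d, m):
    # all monomial exponent vectors of total degree < m in d variables
    ptab = [p for p in product(range(m), repeat=d) if sum(p) < m]
    # term count agrees with nterms <- choose((m + d - 1), d) in the R code
    assert len(ptab) == comb(m + d - 1, d)
    return ptab
```

For d = 2 and m = 2 this gives the three terms 1, x, y; for m = 3 it gives the six terms 1, x, y, x^2, xy, y^2.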
diff --git a/R/fields.rdist.near.R b/R/fields.rdist.near.R
new file mode 100644
index 0000000..78dac08
--- /dev/null
+++ b/R/fields.rdist.near.R
@@ -0,0 +1,51 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+fields.rdist.near <- function(x1, x2, delta, max.points = NULL,
+ mean.neighbor = 50) {
+ if (!is.matrix(x1))
+ x1 <- as.matrix(x1)
+ if (missing(x2))
+ x2 <- x1
+ if (!is.matrix(x2))
+ x2 <- as.matrix(x2)
+ d <- ncol(x1)
+ n1 <- nrow(x1)
+ n2 <- nrow(x2)
+ if (is.null(max.points)) {
+ Nmax <- n1 * mean.neighbor
+ }
+ else {
+ Nmax <- max.points
+ }
+ out <- .Fortran("ddfind",PACKAGE="fields",
+ nd = as.integer(d), x1 = as.double(x1),
+ n1 = as.integer(n1), x2 = as.double(x2), n2 = as.integer(n2),
+ D0 = as.double(delta), ind = as.integer(rep(0, Nmax *
+ 2)), rd = as.double(rep(-1, Nmax)), Nmax = as.integer(Nmax),
+ iflag = as.integer(1))
+ N <- out$Nmax
+ if (out$iflag == -1) {
+ cat("temp space set at", Nmax, fill = TRUE)
+ stop("Ran out of space, increase max.points")
+ }
+ return(list(ind = matrix(out$ind, Nmax, 2)[1:N, ], ra = out$rd[1:N],
+ da = c(n1, n2)))
+}
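[Editor's note: stripped of the Fortran workspace bookkeeping, ddfind returns every pair of points from x1 and x2 within distance delta, together with the distances. A brute-force Python sketch of that contract (O(n1*n2), whereas the point of the Fortran version is to preallocate roughly mean.neighbor matches per row); names are illustrative.]

```python
import math

def rdist_near(x1, x2, delta):
    # all pairs (i, j) with Euclidean distance <= delta, plus the distances
    ind, ra = [], []
    for i, p in enumerate(x1):
        for j, q in enumerate(x2):
            d = math.dist(p, q)
            if d <= delta:
                ind.append((i + 1, j + 1))  # 1-based, like the Fortran indices
                ra.append(d)
    return ind, ra
```

Two well-separated points matched against themselves with a small delta yield only the two self-pairs at distance zero.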
diff --git a/R/fields.style.R b/R/fields.style.R
new file mode 100644
index 0000000..79e6409
--- /dev/null
+++ b/R/fields.style.R
@@ -0,0 +1,25 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"fields.style" <- function() {
+ par(cex.axis = 1.2, cex.lab = 1.2, cex = 1.2, cex.sub = 1.2,
+ cex.main = 1.2, lwd = 1.5, bg = "transparent")
+ palette(c("orange1", "green2", "blue2", "red1"))
+}
diff --git a/R/fields.x.to.grid.R b/R/fields.x.to.grid.R
new file mode 100644
index 0000000..56b1943
--- /dev/null
+++ b/R/fields.x.to.grid.R
@@ -0,0 +1,41 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"fields.x.to.grid" <- function(x, nx = 80, ny = 80, xy = c(1, 2)) {
+ if (is.null(x)) {
+ stop("Need an x matrix to determine ranges for grid")
+ }
+ M <- ncol(x)
+ grid.list <- as.list(1:M)
+ # add columns names
+ names(grid.list) <- dimnames(x)[[2]]
+ # cruise through x dimensions and find medians.
+ for (k in 1:M) {
+ grid.list[[k]] <- median(x[, k])
+ }
+ #
+ #
+ # overwrite with sequences for the two variables of surface
+ xr <- range(x[, xy[1]])
+ yr <- range(x[, xy[2]])
+ grid.list[[xy[1]]] <- seq(xr[1], xr[2], , nx)
+ grid.list[[xy[2]]] <- seq(yr[1], yr[2], , ny)
+ grid.list
+}
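The construction in `fields.x.to.grid` generalizes beyond R: hold every covariate at its median and sweep only the two surface coordinates over evenly spaced sequences covering their ranges. A small NumPy sketch (the name `x_to_grid` and 0-based `xy` indices are mine, shown only to illustrate the construction):

```python
import numpy as np

def x_to_grid(x, nx=80, ny=80, xy=(0, 1)):
    # start every column at its median value
    grid = [np.median(x[:, k]) for k in range(x.shape[1])]
    # overwrite the two surface columns with sequences over their ranges
    for col, n in zip(xy, (nx, ny)):
        grid[col] = np.linspace(x[:, col].min(), x[:, col].max(), n)
    return grid

x = np.array([[0.0, 0.0, 7.0],
              [2.0, 4.0, 7.0],
              [4.0, 8.0, 9.0]])
g = x_to_grid(x, nx=5, ny=3)
```

Here the third column stays fixed at its median while the first two become the surface grid axes.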
diff --git a/R/find.upcross.R b/R/find.upcross.R
new file mode 100644
index 0000000..63b6b3c
--- /dev/null
+++ b/R/find.upcross.R
@@ -0,0 +1,53 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"find.upcross" <- function(fun, fun.info, upcross.level = 0,
+ guess = 1, tol = 1e-05) {
+ l1 <- guess
+ tr <- 0
+ for (k in 1:50) {
+ tr <- fun(l1, fun.info) - upcross.level
+ if (tr >= 0)
+ break
+ else {
+ guess <- l1
+ }
+ l1 <- l1 * 2
+ }
+ if (tr < 0) {
+ warning("Failed to find the upcrossing")
+ return(NA)
+ }
+ tr <- 0
+ l2 <- guess
+ for (k in 1:50) {
+ tr <- fun(l2, fun.info) - upcross.level
+ if (tr <= 0)
+ break
+ l2 <- l2/2
+ }
+ if (tr > 0) {
+ warning("Failed to find the upcrossing")
+ return(NA)
+ }
+ out <- bisection.search(l2, l1, fun, tol = tol, f.extra = fun.info,
+ upcross.level = upcross.level)$x
+ (out)
+}
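The bracketing strategy in `find.upcross` is easy to state language-independently: double the guess until the function exceeds the target level, halve until it drops back below, then refine the bracket. A minimal Python sketch for an increasing function (the name `find_upcross` and the inline bisection are my simplifications; the R code delegates the refinement to `bisection.search` and warns on failure rather than raising):

```python
def find_upcross(fun, level=0.0, guess=1.0, tol=1e-5, maxit=50):
    hi = guess
    for _ in range(maxit):          # double until fun(hi) reaches the level
        if fun(hi) - level >= 0:
            break
        hi *= 2
    lo = hi
    for _ in range(maxit):          # halve until fun(lo) falls below it
        if fun(lo) - level <= 0:
            break
        lo /= 2
    while hi - lo > tol:            # plain bisection on the bracket
        mid = 0.5 * (lo + hi)
        if fun(mid) - level < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, the upcrossing of `x**2 - 4` through zero (starting from a guess of 1) is found at 2.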
diff --git a/R/fitted.Krig.R b/R/fitted.Krig.R
new file mode 100644
index 0000000..b6776fc
--- /dev/null
+++ b/R/fitted.Krig.R
@@ -0,0 +1,23 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+fitted.Krig <- function(object, ...) {
+ object$fitted.values
+}
diff --git a/R/flame.R b/R/flame.R
new file mode 100644
index 0000000..cb92f24
--- /dev/null
+++ b/R/flame.R
@@ -0,0 +1,44 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"flame" <- structure(list(x = structure(c(3, 5, 7,
+ 9, 11, 4, 6, 8, 10, 12, 4, 6, 8, 10, 12, 14, 5, 7, 9, 11,
+ 13, 15, 5, 7, 9, 11, 13, 15, 17, 7, 9, 11, 13, 15, 17, 19,
+ 8, 10, 12, 14, 16, 18, 20, 8, 10, 12, 14, 16, 18, 20, 22,
+ 10, 12, 14, 16, 18, 20, 22, 24, 12, 14, 16, 18, 20, 24, 12,
+ 14, 16, 18, 20, 22, 24, 26, 12, 14, 16, 18, 20, 22, 24, 26,
+ 14, 16, 18, 20, 22, 24, 26, 28, 15, 15, 15, 15, 15, 17, 17,
+ 17, 17, 17, 19, 19, 19, 19, 19, 19, 21, 21, 21, 21, 21, 21,
+ 23, 23, 23, 23, 23, 23, 23, 25, 25, 25, 25, 25, 25, 25, 27,
+ 27, 27, 27, 27, 27, 27, 29, 29, 29, 29, 29, 29, 29, 29, 31,
+ 31, 31, 31, 31, 31, 31, 31, 33, 33, 33, 33, 33, 33, 35, 35,
+ 35, 35, 35, 35, 35, 35, 37, 37, 37, 37, 37, 37, 37, 37, 39,
+ 39, 39, 39, 39, 39, 39, 39), .Dim = c(89, 2), .Dimnames = list(NULL,
+ c("Fuel", "O2"))), y = c(0.005, 0.017, 0.031, 0.041, 0.04,
+ 0.008, 0.021, 0.037, 0.042, 0.039, 0.007, 0.017, 0.032, 0.04,
+ 0.041, 0.035, 0.008, 0.018, 0.033, 0.041, 0.039, 0.034, 0.009,
+ 0.018, 0.029, 0.039, 0.041, 0.037, 0.033, 0.012, 0.023, 0.035,
+ 0.04, 0.04, 0.033, 0.033, 0.015, 0.025, 0.035, 0.041, 0.039,
+ 0.033, 0.03, 0.015, 0.022, 0.033, 0.039, 0.04, 0.035, 0.03,
+ 0.029, 0.019, 0.03, 0.037, 0.041, 0.038, 0.034, 0.029, 0.029,
+ 0.027, 0.034, 0.041, 0.039, 0.028, 0.026, 0.024, 0.032, 0.038,
+ 0.039, 0.035, 0.031, 0.027, 0.025, 0.023, 0.029, 0.037, 0.039,
+ 0.038, 0.033, 0.028, 0.024, 0.029, 0.036, 0.038, 0.039, 0.034,
+ 0.029, 0.025, 0.023)), .Names = c("x", "y"))
diff --git a/R/gauss.cov.R b/R/gauss.cov.R
new file mode 100644
index 0000000..def8291
--- /dev/null
+++ b/R/gauss.cov.R
@@ -0,0 +1,23 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"gauss.cov" <- function(...) {
+ Exp.cov(..., p = 2)
+}
diff --git a/R/gcv.Krig.R b/R/gcv.Krig.R
new file mode 100644
index 0000000..af58572
--- /dev/null
+++ b/R/gcv.Krig.R
@@ -0,0 +1,232 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"gcv.Krig" <- function(out, lambda.grid = NA, cost = 1,
+ nstep.cv = 200, rmse = NA, verbose = FALSE, tol = 1e-05,
+ offset = 0, y = NULL, give.warnings = TRUE) {
+ nt <- out$nt
+ np <- out$np
+ N <- out$N
+ D <- out$matrices$D
+ # Yet another monster function called by Krig
+ # but there are just many simple steps ...
+ #
+ # if a y data vector is not supplied then
+ # use the one in the Krig object
+ if (is.null(y)) {
+ u <- out$matrices$u
+ shat.pure.error <- out$shat.pure.error
+ pure.ss <- out$pure.ss
+ }
+ else {
+ #with new data need to update some statistics.
+ out2 <- Krig.make.u(out, y = y)
+ u <- out2$u
+ shat.pure.error <- out2$shat.pure.error
+ pure.ss <- out2$pure.ss
+ }
+ if (verbose) {
+ cat("u used:", fill = TRUE)
+ print(u)
+ }
+ #
+ # generate a reasonable grid of lambda based on equally spaced
+ # effective degrees of freedom
+ if (is.na(lambda.grid[1])) {
+ temp.df <- seq(nt, (np - offset) * 0.95, , nstep.cv)
+ temp.df[1] <- temp.df[1] + 0.001
+ for (k in 1:nstep.cv) {
+ lambda.grid[k] <- Krig.df.to.lambda(temp.df[k], D)
+ }
+ }
+ # make sure that the grid is in sorted order
+ lambda.grid <- sort(lambda.grid)
+ nl <- length(lambda.grid)
+ nd <- length(D)
+ V <- V.model <- V.one <- lplike <- trA <- shat <- rep(NA,
+ nl)
+ Dl <- rep(NA, nd)
+ #
+ # this is a small list used to pass information to the
+ # objective functions
+ info <- list(matrices = list(D = D, u = u), N = N, nt = nt,
+ cost = cost, pure.ss = pure.ss, shat.pure.error = shat.pure.error,
+ offset = offset)
+ #
+ # loop over lambda values for the grid search
+ for (k in 1:nl) {
+ #
+ # all the wonderful things calculated for each lambda
+ # note the use of the info list.
+ V[k] <- Krig.fgcv(lambda.grid[k], info)
+ V.one[k] <- Krig.fgcv.one(lambda.grid[k], info)
+ V.model[k] <- Krig.fgcv.model(lambda.grid[k], info)
+ lplike[k] <- Krig.flplike(lambda.grid[k], info)
+ shat[k] <- sqrt(Krig.fs2hat(lambda.grid[k], info))
+ trA[k] <- Krig.ftrace(lambda.grid[k], D)
+ }
+ #
+ # reformat as a matrix with all these values.
+ gcv.grid <- cbind(lambda.grid, trA, V, V.one, V.model, shat,
+ lplike)
+ gcv.grid <- as.data.frame(gcv.grid)
+ names(gcv.grid) <- c("lambda", "trA", "GCV", "GCV.one", "GCV.model",
+ "shat", "-lnLike Prof")
+ # find minima over grid; ifelse is used to avoid a 0-length vector from which.min
+ IMIN<- rep( NA, 6)
+ IMIN[1]<- which.min( gcv.grid$GCV )
+ IMIN[2]<- ifelse( is.na(shat.pure.error), NA,
+ which.min(gcv.grid$GCV.model) )
+ IMIN[3]<- which.min( gcv.grid$GCV.one)
+ if( is.na( rmse)){
+ IMIN[4] <- NA
+ }
+ else{
+ rangeShat<- range( gcv.grid$shat)
+ IUpcross<- max( (1:nl)[gcv.grid$shat< rmse] )
+ IMIN[4]<- ifelse( (rangeShat[1]<= rmse)&(rangeShat[2] >=rmse),
+ IUpcross, NA)
+ }
+ IMIN[5]<- ifelse( is.na(shat.pure.error), NA,
+ which.min(abs(gcv.grid$shat-shat.pure.error)) )
+ IMIN[6]<- which.min( gcv.grid[["-lnLike Prof"]])
+ # NOTE IMIN indexes from smallest lambda to largest lambda in grid.
+ warningTable<- data.frame(
+ IMIN, IMIN == nl, IMIN==1,
+ gcv.grid$lambda[IMIN],
+ gcv.grid$trA[IMIN],
+ row.names = c("GCV","GCV.model", "GCV.one", "RMSE", "pure error", "REML")
+ )
+ warning<- (warningTable[,2]|warningTable[,3])&
+ (!is.na(warningTable[,1]))
+ indRefine<- (!warningTable[,2]) & (!warningTable[,3]) &
+ (!is.na(warningTable[,1]))
+ warningTable<- cbind( warning, indRefine, warningTable )
+ names( warningTable)<- c("Warning","Refine","indexMIN", "leftEndpoint", "rightEndpoint",
+ "lambda","effdf")
+ # now optimize the search, producing refined optima
+ if (verbose)
+ print(gcv.grid)
+ # setup output matrix for refined values
+ lambda.est <- matrix(NA, ncol = 6, nrow = 6, dimnames = list(
+ c("GCV", "GCV.model", "GCV.one", "RMSE", "pure error", "REML"),
+ c("lambda", "trA", "GCV", "shat","-lnLike Prof" , "converge")))
+ # fill in grid search estimates
+ for( k in 1:6){
+ if( !is.na(IMIN[k])){
+ lambda.est[k,1]<- gcv.grid$lambda[IMIN[k]]
+ }
+ }
+ #
+ # now step through the many different ways to find lambda
+ # This is the key to these choices:
+ # 1- the usual GCV proposed by Craven/Wahba
+ # 2- GCV where data fitting is collapsed to the mean for
+ # each location and each location is omitted
+ # 3- True leave-one-out even with replicated observations
+ # 4- Match estimate of sigma to external value supplied (RMSE)
+ # 5- Match the estimate of sigma to the estimate based on the
+ # pure error sum of squares obtained from the observations
+ # replicated at the same locations
+ # 6- Maximize the restricted maximum likelihood (REML)
+ # standard GCV w/o replicates
+ if( verbose){
+ print( warningTable)
+ }
+ if(indRefine[1]){
+ starts <- lambda.grid[IMIN[1] + c(-1,0,1)]
+ out <- golden.section.search(ax=starts[1],bx=starts[2],cx=starts[3],
+ f=Krig.fgcv, f.extra = info, tol = tol)
+ lambda.est[1,1]<- out$x
+ lambda.est[1,6]<- out$iter
+ }
+ if( indRefine[2]) {
+ starts <- lambda.grid[IMIN[2] + c(-1,0,1)]
+ out <- golden.section.search(ax=starts[1],bx=starts[2],cx=starts[3],
+ f=Krig.fgcv.model, f.extra = info, tol = tol)
+ lambda.est[2,1]<- out$x
+ lambda.est[2,6]<- out$iter
+ }
+ if( indRefine[3]) {
+ starts <- lambda.grid[IMIN[3] + c(-1,0,1)]
+ out <- golden.section.search(ax=starts[1],bx=starts[2],cx=starts[3],
+ f=Krig.fgcv.one, f.extra = info, tol = tol)
+ lambda.est[3, 1] <-out$x
+ lambda.est[3,6]<- out$iter
+ }
+ if ( indRefine[6] ){
+ starts <- lambda.grid[IMIN[6] + c(-1,0,1)]
+ out <- golden.section.search(ax=starts[1],bx=starts[2],cx=starts[3],
+ f=Krig.flplike, f.extra = info, tol = tol)
+ lambda.est[6,1]<- out$x
+ lambda.est[6,6]<- out$iter
+ }
+ if ( indRefine[4] ) {
+ guess<- gcv.grid$lambda[IMIN[4]]
+ lambda.rmse <- find.upcross(Krig.fs2hat, info,
+ upcross.level = rmse^2,
+ guess = guess, tol = tol * rmse^2)
+ lambda.est[4, 1] <- lambda.rmse
+ }
+ #
+ # matching estimate of sigma from reps.
+ if ( indRefine[5] ) {
+ guess <- gcv.grid$lambda[IMIN[5]]
+ lambda.pure.error <- find.upcross(Krig.fs2hat, info,
+ upcross.level = shat.pure.error^2, guess = guess,
+ tol = tol * shat.pure.error^2)
+ lambda.est[5, 1] <- lambda.pure.error
+ }
+ #
+ # OK done with all six methods
+ # NOTE that not all methods may have produced an estimate
+ # fill in return matrix with all the right stuff
+ # fill in REML results
+ lam.ml <- lambda.est[6, 1]
+ lambda.est[6, 2] <- Krig.ftrace(lam.ml, D)
+ lambda.est[6, 3] <- Krig.fgcv(lam.ml, info)
+ lambda.est[6, 4] <- sqrt(Krig.fs2hat(lam.ml, info))
+ lambda.est[6, 5] <- Krig.flplike(lam.ml, info)
+ # fill in GCV results
+ for (k in 1:5) {
+ lam <- lambda.est[k, 1]
+ if (!is.na(lam)) {
+ lambda.est[k, 2] <- Krig.ftrace(lam, D)
+ if (k == 1 | k > 3) {
+ lambda.est[k, 3] <- Krig.fgcv(lam, info)
+ lambda.est[k, 5] <- Krig.flplike(lam, info)
+ }
+ if (k == 2) {
+ lambda.est[k, 3] <- Krig.fgcv.model(lam, info)
+ }
+ if (k == 3) {
+ lambda.est[k, 3] <- Krig.fgcv.one(lam, info)
+ }
+ lambda.est[k, 4] <- sqrt(Krig.fs2hat(lam, info))
+ }
+ }
+ # Note that the estimate by default is
+ # REML == restricted maximum likelihood.
+ if( give.warnings & any(warningTable$Warning)){
+ cat("Methods at endpoints of grid search:", fill=TRUE)
+ print(warningTable[warningTable$Warning,])
+ }
+ list(gcv.grid = gcv.grid, lambda.est = lambda.est, warningTable=warningTable)
+}
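Both `gcv.Krig` and the `gcv.sreg` function that follows share one control flow: evaluate each criterion on a coarse lambda grid, flag minima that land on a grid endpoint (the `warningTable`), and refine only interior minima with a golden-section search over the bracketing grid triple. A skeleton of just that pattern (the criterion and refinement here are stand-ins, not the package's numerics):

```python
import numpy as np

def grid_then_refine(criterion, lambda_grid, refine):
    # coarse grid search over the candidate lambdas
    vals = np.array([criterion(lam) for lam in lambda_grid])
    i = int(np.argmin(vals))
    # a minimum at either endpoint only warrants a warning, not refinement
    if i == 0 or i == len(lambda_grid) - 1:
        return lambda_grid[i], True        # (estimate, at-boundary flag)
    # refine within the bracketing triple of grid values
    est = refine(lambda_grid[i - 1], lambda_grid[i], lambda_grid[i + 1])
    return est, False

grid = np.exp(np.linspace(-3, 3, 25))
# toy criterion with an interior minimum; the stub "refine" just keeps
# the grid minimizer, where the real code runs golden.section.search
lam, warn = grid_then_refine(lambda l: (np.log(l) - 1.0) ** 2, grid,
                             lambda a, b, c: b)
```

The endpoint check matters because a boundary minimum gives the refinement step no valid bracket, which is exactly why the R code records those cases in `warningTable` instead of refining them.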
diff --git a/R/gcv.sreg.R b/R/gcv.sreg.R
new file mode 100644
index 0000000..78ad7af
--- /dev/null
+++ b/R/gcv.sreg.R
@@ -0,0 +1,193 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+gcv.sreg<- function(out, lambda.grid = NA, cost = 1,
+ nstep.cv = 80, rmse = NA, offset = 0, trmin = NA, trmax = NA,
+ verbose = FALSE, tol = 1e-05,
+ give.warnings=TRUE) {
+ shat.pure.error <- out$shat.pure.error
+ pure.ss <- out$pure.ss
+ nt <- 2
+ np <- out$np
+ N <- out$N
+ out$cost <- cost
+ out$offset <- offset
+ # find good end points for lambda coarse grid.
+ if (is.na(trmin))
+ trmin <- 2.05
+ if (is.na(trmax))
+ trmax <- out$np * 0.95
+ if (is.na(lambda.grid[1])) {
+ l2 <- sreg.df.to.lambda(trmax, out$xM, out$weightsM)
+ l1 <- sreg.df.to.lambda(trmin, out$xM, out$weightsM)
+ lambda.grid <- exp(seq(log(l2), log(l1), , nstep.cv))
+ }
+ if (verbose) {
+ cat("endpoints of coarse lambda grid", fill = TRUE)
+ cat(l1, l2, fill = TRUE)
+ }
+ # build up table of coarse grid search results for lambda
+ # in the matrix gcv.grid
+ nl <- length(lambda.grid)
+ V <- V.model <- V.one <- trA <- MSE <- RSS.model <- rep(NA,
+ nl)
+ # loop through lambdas and compute various quantities related to
+ # lambda and the fitted spline.
+ for (k in 1:nl) {
+ temp <- sreg.fit(lambda.grid[k], out, verbose = verbose)
+ RSS.model[k] <- temp$rss
+ trA[k] <- temp$trace
+ V[k] <- temp$gcv
+ V.one[k] <- temp$gcv.one
+ V.model[k] <- temp$gcv.model
+ }
+ # adjustments to columns of gcv.grid
+ RSS <- RSS.model + pure.ss
+ shat <- sqrt(RSS/(N - trA))
+ gcv.grid <- cbind(lambda.grid, trA, V, V.one, V.model, shat)
+ dimnames(gcv.grid) <- list(NULL, c("lambda", "trA", "GCV",
+ "GCV.one", "GCV.model", "shat"))
+ gcv.grid<- as.data.frame( gcv.grid)
+ if (verbose) {
+ cat("Results of coarse grid search", fill = TRUE)
+ print(gcv.grid)
+ }
+ lambda.est <- matrix(NA, ncol = 5, nrow = 5,
+ dimnames = list(
+ c("GCV","GCV.model", "GCV.one", "RMSE", "pure error"),
+ c("lambda","trA", "GCV", "shat", "converge")))
+ # now do various refinements for different flavors of finding
+ # a good value for lambda the smoothing parameter
+ ##### traditional leave-one-out
+ IMIN<- rep( NA, 5)
+ IMIN[1]<- which.min( gcv.grid$GCV )
+ IMIN[2]<- ifelse( is.na(shat.pure.error), NA,
+ which.min(gcv.grid$GCV.model) )
+ IMIN[3]<- which.min( gcv.grid$GCV.one)
+ if( is.na( rmse)){
+ IMIN[4] <- NA
+ }
+ else{
+ rangeShat<- range( gcv.grid$shat)
+ IUpcross<- max( (1:nl)[gcv.grid$shat< rmse] )
+ IMIN[4]<- ifelse( (rangeShat[1]<= rmse)&(rangeShat[2] >=rmse),
+ IUpcross, NA)
+ }
+ IMIN[5]<- ifelse( is.na(shat.pure.error), NA,
+ which.min(abs(gcv.grid$shat-shat.pure.error)) )
+ # NOTE IMIN indexes from smallest lambda to largest lambda in grid.
+ warningTable<- data.frame(
+ IMIN, IMIN == nl, IMIN==1,
+ gcv.grid$lambda[IMIN],
+ gcv.grid$trA[IMIN],
+ row.names = c("GCV","GCV.model", "GCV.one", "RMSE", "pure error") )
+ warning<- (warningTable[,2]|warningTable[,3])&
+ (!is.na(warningTable[,1]))
+ indRefine<- (!warningTable[,2]) & (!warningTable[,3]) &
+ (!is.na(warningTable[,1]))
+ warningTable<- cbind( warning, indRefine, warningTable )
+ names( warningTable)<- c("Warning","Refine","indexMIN", "leftEndpoint", "rightEndpoint",
+ "lambda","effdf")
+ if( verbose){
+ print(warningTable)
+ }
+ # fill in grid search estimates
+ for( k in 1:5){
+ if( !is.na(IMIN[k])){
+ lambda.est[k,1]<- gcv.grid$lambda[IMIN[k]]
+ }
+ }
+ # now optimize the search, producing refined optima
+ #
+ # now step through the many different ways to find lambda
+ # This is the key to these choices:
+ # 1- the usual GCV proposed by Craven/Wahba
+ # 2- GCV where data fitting is collapsed to the mean for
+ # each location and each location is omitted
+ # 3- True leave-one-out even with replicated observations
+ # 4- Match estimate of sigma to external value supplied (RMSE)
+ # 5- Match the estimate of sigma to the estimate based on the
+ # pure error sum of squares obtained from the observations
+ # replicated at the same locations
+ #test<- sreg.fit(.1, out)
+ #print( test)
+ if(indRefine[1]){
+ starts <- lambda.grid[IMIN[1] + c(-1,0,1)]
+ outGs <- golden.section.search(ax=starts[1],bx=starts[2],cx=starts[3],
+ f=sreg.fgcv, f.extra = out, tol = tol)
+ lambda.est[1,1]<- outGs$x
+ lambda.est[1,5]<- outGs$iter
+ }
+ if( indRefine[2]) {
+ starts <- lambda.grid[IMIN[2] + c(-1,0,1)]
+ outGs <- golden.section.search(ax=starts[1],bx=starts[2],cx=starts[3],
+ f=sreg.fgcv.model, f.extra = out, tol = tol)
+ lambda.est[2,1]<- outGs$x
+ lambda.est[2,5]<- outGs$iter
+ }
+ if( indRefine[3]) {
+ starts <- lambda.grid[IMIN[3] + c(-1,0,1)]
+ outGs <- golden.section.search(ax=starts[1],bx=starts[2],cx=starts[3],
+ f=sreg.fgcv.one, f.extra = out, tol = tol)
+ lambda.est[3, 1] <-outGs$x
+ lambda.est[3,5]<- outGs$iter
+ }
+ if ( indRefine[4] ) {
+ guess<- gcv.grid$lambda[IMIN[4]]
+ lambda.rmse <- find.upcross(sreg.fs2hat, out,
+ upcross.level = rmse^2,
+ guess = guess, tol = tol * rmse^2)
+ lambda.est[4, 1] <- lambda.rmse
+ }
+ if ( indRefine[5] ) {
+ guess <- gcv.grid$lambda[IMIN[5]]
+ lambda.pure.error <- find.upcross(sreg.fs2hat, out,
+ upcross.level = shat.pure.error^2, guess = guess,
+ tol = tol * shat.pure.error^2)
+ lambda.est[5, 1] <- lambda.pure.error
+ }
+ if (verbose) {
+ cat("All forms of estimated lambdas so far", fill = TRUE)
+ print(lambda.est)
+ }
+ for (k in 1:5) {
+ lam <- lambda.est[k, 1]
+ if (!is.na(lam)) {
+ temp <- sreg.fit(lam, out)
+ lambda.est[k, 2] <- temp$trace
+ if ((k == 1) | (k > 3)) {
+ lambda.est[k, 3] <- temp$gcv
+ }
+ if (k == 2) {
+ lambda.est[k, 3] <- temp$gcv.model
+ }
+ if (k == 3) {
+ lambda.est[k, 3] <- temp$gcv.one
+ }
+ lambda.est[k, 4] <- temp$shat
+ }
+ }
+ if( give.warnings & any(warningTable$Warning)){
+ cat("Methods at endpoints of grid search:", fill=TRUE)
+ print(warningTable[warningTable$Warning,])
+ }
+ list(gcv.grid = gcv.grid, lambda.est = lambda.est,
+ warningTable=warningTable)
+}
diff --git a/R/golden.section.search.R b/R/golden.section.search.R
new file mode 100644
index 0000000..67c1181
--- /dev/null
+++ b/R/golden.section.search.R
@@ -0,0 +1,119 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+golden.section.search <- function(ax, bx, cx, f, niter = 25,
+ f.extra = NA, tol = 1e-05, gridx = NA) {
+
+ # check if an initial grid has been passed if so then do a
+ # search for the minimum on this grid first.
+ gridx <- sort(gridx)
+ NG <- length(gridx)
+ fgrid <- rep(NA, NG)
+ if (!is.na(gridx[1])) {
+ gridx <- sort(gridx)
+ NG <- length(gridx)
+ fgrid <- rep(NA, NG)
+ for (k in 1:NG) {
+ fgrid[k] <- f(gridx[k], f.extra)
+ }
+ # bail on search if objective function is an NA
+ if (any(is.na(fgrid))) {
+ warning("grid search has found some missing values in objective function")
+ return(list(x = NA, fmin = NA, iter = 0, tol = tol,
+ coarse.search = cbind(gridx, fgrid, deparse.level = 1)))
+ }
+ ind.bx <- which.min(fgrid)
+ # if minimum is at grid boundary print warning and return
+ if ((ind.bx == 1) | ind.bx == NG) {
+ warning("grid search gives minimum at boundary")
+ return(list(x = gridx[ind.bx], fmin = fgrid[ind.bx],
+ iter = 0, tol = tol, coarse.search = cbind(gridx,
+ fgrid, deparse.level = 1)))
+ }
+ # use grid results for initial values of golden section search
+ ax <- gridx[ind.bx - 1]
+ bx <- gridx[ind.bx]
+ cx <- gridx[ind.bx + 1]
+ }
+ else {
+ # if no grid search, sanity check on starting points
+ f1 <- f(ax, f.extra)
+ f2 <- f(bx, f.extra)
+ f3 <- f(cx, f.extra)
+ if ((f2 > f1) | (f2 > f3))
+ stop("starting values not convex")
+ }
+
+
+ r <- 0.61803399
+ con <- 1 - r
+ x0 <- ax
+ x3 <- cx
+ if (abs(cx - bx) > abs(bx - ax)) {
+ x1 <- bx
+ x2 <- bx + con * (bx - ax)
+ }
+ else {
+ x2 <- bx
+ x1 <- bx - con * (bx - ax)
+ }
+ f1 <- f(x1, f.extra)
+ f2 <- f(x2, f.extra)
+ iter <- niter
+ for (k in 1:niter) {
+ #cat( x1,f1, x2,f2, fill=TRUE)
+ if (f2 < f1) {
+ x0 <- x1
+ x1 <- x2
+ x2 <- r * x1 + con * x3
+ f0 <- f1
+ f1 <- f2
+ f2 <- f(x2, f.extra)
+ }
+ else {
+ x3 <- x2
+ x2 <- x1
+ x1 <- r * x2 + con * x0
+ f3 <- f2
+ f2 <- f1
+ f1 <- f(x1, f.extra)
+ }
+ if (abs(f2 - f1) < tol) {
+ iter <- k
+ break
+ }
+ }
+ if (f1 < f2) {
+ golden <- f1
+ xmin <- x1
+ }
+ else {
+ golden <- f2
+ xmin <- x2
+ }
+
+ if (iter == niter) {
+ warning("Maximum iterations reached")
+ }
+
+ list(x = xmin, fmin = golden, iter = iter, tol = tol, coarse.search = cbind(gridx,
+ fgrid, deparse.level = 1))
+}
+
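The routine above follows the classic golden-section scheme: keep three points bracketing the minimum and repeatedly shrink the larger sub-interval by the golden ratio. A compact Python restatement of the core loop (the grid pre-search and warning bookkeeping are omitted; note this sketch uses the standard Numerical-Recipes placement `bx + c * (cx - bx)` for the first probe, where the R code above uses `bx + con * (bx - ax)` in that branch):

```python
def golden_section_search(f, ax, bx, cx, tol=1e-6, niter=200):
    r, c = 0.61803399, 1 - 0.61803399
    x0, x3 = ax, cx
    # probe inside the larger of the two sub-intervals first
    if abs(cx - bx) > abs(bx - ax):
        x1, x2 = bx, bx + c * (cx - bx)
    else:
        x1, x2 = bx - c * (bx - ax), bx
    f1, f2 = f(x1), f(x2)
    for _ in range(niter):
        if abs(x3 - x0) < tol:
            break
        if f2 < f1:   # minimum lies in (x1, x3): drop the left segment
            x0, x1, x2 = x1, x2, r * x2 + c * x3
            f1, f2 = f2, f(x2)
        else:         # minimum lies in (x0, x2): drop the right segment
            x3, x2, x1 = x2, x1, r * x1 + c * x0
            f2, f1 = f1, f(x1)
    return (x1, f1) if f1 < f2 else (x2, f2)

xmin, fmin = golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 1.0, 5.0)
```

Each iteration shrinks the bracket by the factor 0.618..., so convergence is linear but needs only one new function evaluation per step.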
diff --git a/R/image.family.R b/R/image.family.R
new file mode 100644
index 0000000..e091d31
--- /dev/null
+++ b/R/image.family.R
@@ -0,0 +1,399 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"imagePlotInfo" <- function(..., breaks = NULL, nlevel) {
+#NOTE:
+# image.plot.info
+# has been renamed as imagePlotInfo to avoid confusion with
+# an S3 method
+ temp <- list(...)
+ #
+ xlim <- NA
+ ylim <- NA
+ zlim <- NA
+ poly.grid <- FALSE
+ #
+ # go through various cases of what these can be
+ #
+ ##### x,y,z list is first argument
+ if (is.list(temp[[1]])) {
+ xlim <- range(temp[[1]]$x, na.rm = TRUE)
+ ylim <- range(temp[[1]]$y, na.rm = TRUE)
+ zlim <- range(temp[[1]]$z, na.rm = TRUE)
+ if (is.matrix(temp[[1]]$x) & is.matrix(temp[[1]]$y) &
+ is.matrix(temp[[1]]$z)) {
+ poly.grid <- TRUE
+ }
+ }
+ ##### check for polygrid first three arguments should be matrices
+ #####
+ if (length(temp) >= 3) {
+ if (is.matrix(temp[[1]]) & is.matrix(temp[[2]]) & is.matrix(temp[[3]])) {
+ poly.grid <- TRUE
+ }
+ }
+ ##### z is passed without an x and y (and not a poly.grid!)
+ #####
+ if (is.matrix(temp[[1]]) & !poly.grid) {
+ xlim <- c(0, 1)
+ ylim <- c(0, 1)
+ zlim <- range(temp[[1]], na.rm = TRUE)
+ }
+ ##### if x,y,z have all been passed find their ranges.
+ ##### holds if poly.grid or not
+ #####
+ if (length(temp) >= 3) {
+ if (is.matrix(temp[[3]])) {
+ xlim <- range(temp[[1]], na.rm = TRUE)
+ ylim <- range(temp[[2]], na.rm = TRUE)
+ zlim <- range(temp[[3]], na.rm = TRUE)
+ }
+ }
+ # if the z values are constant, perturb the range by a small epsilon (1e-8)
+ # to avoid problems in drawing the legend later on
+ if( !is.na( zlim[1] ) ){
+ if( zlim[1] == zlim[2]){
+ if( zlim[1]==0){
+ zlim[1]<- -1e-8
+ zlim[2]<- 1e-8}
+ else{
+ delta<- .01*abs(zlim[1])
+ zlim[1]<- zlim[1] - delta
+ zlim[2]<- zlim[2] + delta
+ }
+ }
+ }
+ #### parse x,y,z if they are named arguments
+ # determine if this is polygon grid (x and y are matrices)
+ if (is.matrix(temp$x) & is.matrix(temp$y) & is.matrix(temp$z)) {
+ poly.grid <- TRUE
+ }
+# set limits from the usual $x $y $z format of image object
+ xthere <- match("x", names(temp))
+ ythere <- match("y", names(temp))
+ zthere <- match("z", names(temp))
+ if (!is.na(zthere))
+ zlim <- range(temp$z, na.rm = TRUE)
+ if (!is.na(xthere))
+ xlim <- range(temp$x, na.rm = TRUE)
+ if (!is.na(ythere))
+ ylim <- range(temp$y, na.rm = TRUE)
+
+# overwrite limits with passed values
+ if (!is.null(temp$zlim))
+ zlim <- temp$zlim
+ if (!is.null(temp$xlim))
+ xlim <- temp$xlim
+ if (!is.null(temp$ylim))
+ ylim <- temp$ylim
+# At this point xlim, ylim and zlim should be correct
+# using all the different possibilities and defaults for these values
+#
+# Now set up the breaks
+ if( is.null(breaks)){
+ midpoints<- seq( zlim[1], zlim[2],,nlevel)
+ delta<- (midpoints[2]- midpoints[1])/2
+ # nlevel +1 breaks with the min and max as midpoints
+ # of the first and last bins.
+
+ breaks <- c( midpoints[1]- delta, midpoints + delta)
+ }
+ list(xlim = xlim, ylim = ylim, zlim = zlim, poly.grid = poly.grid,
+ breaks=breaks)
+}
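The default break construction above places nlevel midpoints equally spaced across zlim and derives nlevel + 1 breaks so that the zlim endpoints sit at the midpoints of the first and last bins. A minimal Python sketch of just that arithmetic (the function name is illustrative, not part of fields):

```python
def default_breaks(zmin, zmax, nlevel):
    # nlevel midpoints equally spaced from zmin to zmax
    step = (zmax - zmin) / (nlevel - 1)
    midpoints = [zmin + i * step for i in range(nlevel)]
    delta = step / 2
    # nlevel + 1 breaks: zmin and zmax are midpoints of the end bins
    return [midpoints[0] - delta] + [m + delta for m in midpoints]
```

For example, `default_breaks(0.0, 10.0, 5)` gives 6 breaks with the first bin centered on 0 and the last centered on 10.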
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied to Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+# NOTE:
+# image.plot.plt<- function(...){
+# this function has been renamed as imageplot.setup to avoid confusion with
+# an S3 method
+# imageplot.setup(...)}
+
+"imageplot.setup" <- function(x, add = FALSE, legend.shrink = 0.9,
+ legend.width = 1, horizontal = FALSE, legend.mar = NULL,
+ bigplot = NULL, smallplot = NULL, ...) {
+ old.par <- par(no.readonly = TRUE)
+ if (is.null(smallplot))
+ stick <- TRUE
+ else stick <- FALSE
+ if (is.null(legend.mar)) {
+ legend.mar <- ifelse(horizontal, 3.1, 5.1)
+ }
+ # compute how big a text character is
+ char.size <- ifelse(horizontal, par()$cin[2]/par()$din[2],
+ par()$cin[1]/par()$din[1])
+ # This is how much space to leave between the legend strip and the big
+ # plot, based on the margins set in the high level par command
+ offset <- char.size * ifelse(horizontal, par()$mar[1], par()$mar[4])
+ # this is the width of the legend strip itself.
+ legend.width <- char.size * legend.width
+ # this is room for legend axis labels
+ legend.mar <- legend.mar * char.size
+ # smallplot is the plotting region for the legend.
+ if (is.null(smallplot)) {
+ smallplot <- old.par$plt
+ if (horizontal) {
+ smallplot[3] <- legend.mar
+ smallplot[4] <- legend.width + smallplot[3]
+ pr <- (smallplot[2] - smallplot[1]) * ((1 - legend.shrink)/2)
+ smallplot[1] <- smallplot[1] + pr
+ smallplot[2] <- smallplot[2] - pr
+ }
+ else {
+ smallplot[2] <- 1 - legend.mar
+ smallplot[1] <- smallplot[2] - legend.width
+ pr <- (smallplot[4] - smallplot[3]) * ((1 - legend.shrink)/2)
+ smallplot[4] <- smallplot[4] - pr
+ smallplot[3] <- smallplot[3] + pr
+ }
+ }
+ if (is.null(bigplot)) {
+ bigplot <- old.par$plt
+ if (!horizontal) {
+ bigplot[2] <- min(bigplot[2], smallplot[1] - offset)
+ }
+ else {
+ bottom.space <- old.par$mar[1] * char.size
+ bigplot[3] <- smallplot[4] + offset
+ }
+ }
+ if (stick & (!horizontal)) {
+ dp <- smallplot[2] - smallplot[1]
+ smallplot[1] <- min(bigplot[2] + offset, smallplot[1])
+ smallplot[2] <- smallplot[1] + dp
+ }
+ return(list(smallplot = smallplot, bigplot = bigplot))
+}
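The symmetric shrink applied to the legend strip above trims an equal fraction of the plot extent from each end, leaving a strip that is legend.shrink times the full extent. A minimal Python sketch of that computation (the function name is illustrative):

```python
def legend_strip(lo, hi, shrink):
    # trim an equal fraction (1 - shrink)/2 from each end of [lo, hi]
    pr = (hi - lo) * (1.0 - shrink) / 2.0
    return lo + pr, hi - pr
```

With the default `shrink = 0.9` the strip covers the middle 90% of the plot extent.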
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied to Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"crop.image" <- function(obj, loc = NULL, ...) {
+ if (is.null(loc)) {
+ image.plot(obj, ...)
+ loc <- get.rectangle()
+ }
+ # coerce to midpoints
+ m <- nrow(obj$z)
+ n <- ncol(obj$z)
+ nx <- length(obj$x)
+ ny <- length(obj$y)
+ if (nx != m) {
+ obj$x <- (obj$x[1:m] + obj$x[2:(m + 1)])/2
+ }
+ if (ny != n) {
+ obj$y <- (obj$y[1:n] + obj$y[2:(n + 1)])/2
+ }
+ # coerce loc to x,y list format if matrix or data frame
+ if (is.matrix(loc) | is.data.frame(loc)) {
+ if (ncol(loc) != 2) {
+ stop("loc must have two columns\n(for x and y coordinates )")
+ }
+ loc <- list(x = loc[, 1], y = loc[, 2])
+ }
+ x <- obj$x
+ y <- obj$y
+ N <- length(x)
+ xr <- range(loc$x)
+ xtest <- range(x)
+ if (xr[1] < xtest[1] | xr[2] > xtest[2]) {
+ stop("cropping outside ranges of x values")
+ }
+ x1 <- max((1:N)[xr[1] >= x])
+ x2 <- min((1:N)[xr[2] <= x])
+ N <- length(y)
+ yr <- range(loc$y)
+ ytest <- range(y)
+ if (yr[1] < ytest[1] | yr[2] > ytest[2]) {
+ stop("cropping outside ranges of y values")
+ }
+ y1 <- max((1:N)[yr[1] >= y])
+ y2 <- min((1:N)[yr[2] <= y])
+ list(x = obj$x[x1:x2], y = obj$y[y1:y2], z = obj$z[x1:x2,
+ y1:y2])
+}
+average.image <- function(obj, Q = 2) {
+ # fast method to average over a QxQ block in an image.
+ # Q is the number of elements to average over in each dimension
+ # e.g. Q=5 -- blocks of 25 values are averaged to one grid cell.
+ if (is.matrix(obj)) {
+ obj <- list(x = 1:nrow(obj), y = 1:ncol(obj), z = obj)
+ }
+ M <- length(obj$x)
+ N <- length(obj$y)
+ Mi <- trunc(M/Q)
+ Ni <- trunc(N/Q)
+ # space to hold results
+ z <- matrix(NA, nrow = Mi, ncol = N)
+ x2 <- rep(NA, Mi)
+ y2 <- rep(NA, Ni)
+ indQ <- 1:Q
+ # sum over block of rows and handle x grid values
+ for (j in 1:Mi) {
+ x2[j] <- mean(obj$x[indQ + (j - 1) * Q])
+ z[j, ] <- colMeans(obj$z[indQ + (j - 1) * Q, ], na.rm = TRUE)
+ }
+ # sum over blocks of columns and average y grid values
+ for (k in 1:Ni) {
+ y2[k] <- mean(obj$y[indQ + (k - 1) * Q])
+ z[, k] <- rowMeans(z[, indQ + (k - 1) * Q], na.rm = TRUE)
+ }
+ return(list(x = x2, y = y2, z = z[1:Mi, 1:Ni], Q = Q))
+}
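The two loops above average the image in non-overlapping Q-by-Q blocks, first collapsing rows and then columns, discarding any leftover rows or columns when the dimensions are not multiples of Q. A simplified Python sketch of the same block averaging (the function name is illustrative, and unlike average.image it does not handle missing values):

```python
def block_average(z, Q):
    # average non-overlapping Q x Q blocks of a matrix (list of rows);
    # leftover rows/columns beyond a multiple of Q are discarded
    m, n = len(z) // Q, len(z[0]) // Q
    out = [[0.0] * n for _ in range(m)]
    for j in range(m):
        for k in range(n):
            block = [z[j * Q + a][k * Q + b]
                     for a in range(Q) for b in range(Q)]
            out[j][k] = sum(block) / len(block)
    return out
```

For example, a 2x3 matrix with Q = 2 collapses to a single cell holding the mean of the leading 2x2 block.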
+"get.rectangle" <- function() {
+ temp <- locator(2, type = "p", pch = "+")
+ rect(temp$x[1], temp$y[1], temp$x[2], temp$y[2])
+ temp
+}
+"half.image" <- function(obj) {
+ # coerce to list if a matrix
+ if (is.matrix(obj)) {
+ obj <- list(x = 1:nrow(obj), y = 1:ncol(obj), z = obj)
+ }
+ M <- length(obj$x)
+ N <- length(obj$y)
+ M2 <- trunc(M/2)
+ N2 <- trunc(N/2)
+ z <- matrix(NA, nrow = M2, ncol = N2)
+ ix <- (1:M2) * 2
+ iy <- (1:N2) * 2
+ x2 <- (obj$x[ix - 1] + obj$x[ix])/2
+ y2 <- (obj$y[iy - 1] + obj$y[iy])/2
+ return(list(x = x2, y = y2, z = (obj$z[ix - 1, iy] + obj$z[ix -
+ 1, iy - 1] + obj$z[ix, iy - 1] + obj$z[ix, iy])/4))
+}
+
+pushpin <- function(x, y, z, p.out, height = 0.05,
+ col = "black", text = NULL, adj = -0.1, cex = 1, ...) {
+ # project your x,y,z on to the uv plane of the plot
+ Sxy1 <- trans3d(x, y, z, p.out)
+ Sxy2 <- Sxy1
+ hold <- par()$usr
+ Sxy2$y <- (hold[4] - hold[3]) * height + Sxy2$y
+ # draw the pin
+ segments(Sxy1$x, Sxy1$y, Sxy2$x, Sxy2$y, col = "black")
+ points(Sxy2, col = col, pch = 19, cex = cex)
+ # add a label
+ if (!is.null(text)) {
+ text(Sxy2$x, Sxy2$y, label = text, adj = adj, cex = cex,
+ ...)
+ }
+}
+
+designer.colors <- function(n = 256, col = c("darkgreen",
+ "white", "darkred"), x = seq(0, 1,, length(col) ), alpha = 1) {
+# generate colors at equal spacings but interpolate to colors at x
+ xRange<- range(x)
+ xg <- seq(xRange[1], xRange[2],, n)
+# convert colors from names e.g. "magenta" to RGB values in [0,1]
+ y.rgb <- t(col2rgb(col))/255
+# matrix to hold RGB color values
+ temp <- matrix(NA, ncol = 3, nrow = n)
+ nColors<- length( col)
+ if( nColors != length( x)){
+ stop("number of colors needs to be the same as length of x")}
+# linear or spline interpolation of RGB color values at x onto xg
+ for (k in 1:3) {
+ if( nColors > 2){
+ hold <- splint(x, y.rgb[, k], xg)}
+ else{
+ a<-(xRange[2]-xg)/(xRange[2] - xRange[1])
+ hold<- a*y.rgb[1, k] + (1-a)*y.rgb[2, k] }
+ # fix up to be in [0,1]
+ hold[hold < 0] <- 0
+ hold[hold > 1] <- 1
+ temp[, k] <- hold
+ }
+ # convert back to hex
+ if(alpha==1){
+ return( rgb(temp[, 1], temp[, 2], temp[, 3]))
+ }
+ else{
+ return( rgb(temp[, 1], temp[, 2], temp[, 3], alpha = alpha))
+ }
+}
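For exactly two colors the loop above blends each RGB channel linearly: the weight on the first color falls from 1 at the left end of the x range to 0 at the right end, and the result is clamped to [0, 1]. A Python sketch of that per-channel blend (the function name is illustrative; the spline branch for three or more colors is not shown):

```python
def interp_channel(c1, c2, xg, xmin, xmax):
    # linear blend of one RGB channel over grid points xg:
    # weight on c1 decreases from 1 at xmin to 0 at xmax
    out = []
    for x in xg:
        a = (xmax - x) / (xmax - xmin)
        v = a * c1 + (1 - a) * c2
        out.append(min(1.0, max(0.0, v)))  # clamp into [0, 1]
    return out
```

Applying this to each of the three channels and converting back to hex reproduces the two-color case of designer.colors.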
+
+#boulder.colors<- c('darkred', 'darkorange',
+# 'white', 'darkgreen', 'darkblue')
+"two.colors" <- function(n = 256, start = "darkgreen",
+ end = "red", middle = "white", alpha = 1) {
+ designer.colors(n, c(start, middle, end), alpha = alpha)
+}
+
+fieldsPlotColors<- function( col, ...){
+ N<- length(col)
+ image.plot( 1:N, 1, matrix(1:N,N,1), col=col,axes=FALSE, xlab='', ylab='',...)}
+
+
+imageplot.info<- function (...)
+{
+ temp <- list(...)
+ xlim <- NA
+ ylim <- NA
+ zlim <- NA
+ poly.grid <- FALSE
+ if (is.list(temp[[1]])) {
+ xlim <- range(temp[[1]]$x, na.rm = TRUE)
+ ylim <- range(temp[[1]]$y, na.rm = TRUE)
+ zlim <- range(temp[[1]]$z, na.rm = TRUE)
+ if (is.matrix(temp[[1]]$x) & is.matrix(temp[[1]]$y) &
+ is.matrix(temp[[1]]$z)) {
+ poly.grid <- TRUE
+ }
+ }
+ if (length(temp) >= 3) {
+ if (is.matrix(temp[[1]]) & is.matrix(temp[[2]]) & is.matrix(temp[[3]])) {
+ poly.grid <- TRUE
+ }
+ }
+ if (is.matrix(temp[[1]]) & !poly.grid) {
+ xlim <- c(0, 1)
+ ylim <- c(0, 1)
+ zlim <- range(temp[[1]], na.rm = TRUE)
+ }
+ if (length(temp) >= 3) {
+ if (is.matrix(temp[[3]])) {
+ xlim <- range(temp[[1]], na.rm = TRUE)
+ ylim <- range(temp[[2]], na.rm = TRUE)
+ zlim <- range(temp[[3]], na.rm = TRUE)
+ }
+ }
+ if (is.matrix(temp$x) & is.matrix(temp$y) & is.matrix(temp$z)) {
+ poly.grid <- TRUE
+ }
+ xthere <- match("x", names(temp))
+ ythere <- match("y", names(temp))
+ zthere <- match("z", names(temp))
+ if (!is.na(zthere))
+ zlim <- range(temp$z, na.rm = TRUE)
+ if (!is.na(xthere))
+ xlim <- range(temp$x, na.rm = TRUE)
+ if (!is.na(ythere))
+ ylim <- range(temp$y, na.rm = TRUE)
+ if (!is.null(temp$zlim))
+ zlim <- temp$zlim
+ if (!is.null(temp$xlim))
+ xlim <- temp$xlim
+ if (!is.null(temp$ylim))
+ ylim <- temp$ylim
+ list(xlim = xlim, ylim = ylim, zlim = zlim, poly.grid = poly.grid)
+}
diff --git a/R/image.plot.R b/R/image.plot.R
new file mode 100644
index 0000000..dfa162b
--- /dev/null
+++ b/R/image.plot.R
@@ -0,0 +1,180 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"image.plot" <- function(..., add = FALSE,
+ breaks= NULL, nlevel = 64, col = NULL,
+ horizontal = FALSE, legend.shrink = 0.9, legend.width = 1.2,
+ legend.mar = ifelse(horizontal, 3.1, 5.1), legend.lab = NULL,
+ legend.line= 2,
+ graphics.reset = FALSE, bigplot = NULL, smallplot = NULL,
+ legend.only = FALSE, lab.breaks = NULL,
+ axis.args = NULL, legend.args = NULL, legend.cex=1.0, midpoint = FALSE, border = NA,
+ lwd = 1, verbose=FALSE) {
+ # Thanks to S. Koehler and S. Woodhead
+ # for comments on making this a better function
+ #
+ # save current graphics settings
+ old.par <- par(no.readonly = TRUE)
+ # set defaults for color scale
+ # note this works differently than the image function.
+ if( is.null(col)) {
+ col<- tim.colors(nlevel)}
+ else{
+ nlevel<- length( col)
+ }
+ # figure out zlim from passed arguments
+ # also set the breaks for colors if they have not been passed,
+ info <- imagePlotInfo(..., breaks=breaks, nlevel=nlevel)
+ # breaks have been computed if not passed in the call
+ breaks<- info$breaks
+ if( verbose){
+ print(info)
+ }
+ if (add) {
+ big.plot <- old.par$plt
+ }
+ if (legend.only) {
+ graphics.reset <- TRUE
+ }
+ if (is.null(legend.mar)) {
+ legend.mar <- ifelse(horizontal, 3.1, 5.1)
+ }
+ # figure out how to divide up the plotting real estate
+ temp <- imageplot.setup(add = add, legend.shrink = legend.shrink,
+ legend.width = legend.width, legend.mar = legend.mar,
+ horizontal = horizontal, bigplot = bigplot, smallplot = smallplot)
+ # bigplot has plotting region coordinates for image
+ # smallplot has plotting coordinates for legend strip
+ smallplot <- temp$smallplot
+ bigplot <- temp$bigplot
+ # draw the image in bigplot, just call the R base function
+ # or poly.image for polygonal cells
+ # note the logical switch
+ # for poly.grid is parsed out of the call by imagePlotInfo
+ if (!legend.only) {
+ if (!add) {
+ par(plt = bigplot)
+ }
+ if (!info$poly.grid) {
+ image(..., breaks=breaks, add = add, col = col)
+ }
+ else {
+ poly.image(..., add = add, col = col, midpoint = midpoint,
+ border = border, lwd.poly = lwd)
+ }
+ big.par <- par(no.readonly = TRUE)
+ }
+ ##
+ ## check dimensions of smallplot
+ if ((smallplot[2] < smallplot[1]) | (smallplot[4] < smallplot[3])) {
+ par(old.par)
+ stop("plot region too small to add legend\n")
+ }
+ # Following code draws the legend using the image function
+ # and a one column image.
+ # What might be confusing is the values of the "image" are the same
+ # as the locations on the legend axis.
+ # Moreover the image values are in the middle of each breakpoint category
+ # thanks to Tobias Nanu Frechen and Matthew Flickinger
+ # for sorting out some problems with the breaks position in the legend.
+ ix <- 1:2
+ iy<- breaks
+ nBreaks<- length( breaks)
+ midpoints<- (breaks[1:(nBreaks-1)] + breaks[2:nBreaks] )/2
+ iz <- matrix(midpoints, nrow = 1, ncol = length(midpoints))
+ if( verbose){print(breaks)
+ print( midpoints)
+ print( ix)
+ print( iy)
+ print( iz)
+ print( col)}
+ #
+ # next par call sets up a new plotting region just for the legend strip
+ # at the smallplot coordinates
+ par(new = TRUE, pty = "m", plt = smallplot, err = -1)
+ # draw color scales the two cases are horizontal/vertical
+ # add a label if this is passed.
+ if (!horizontal) {
+ image(ix, iy, iz, xaxt = "n", yaxt = "n", xlab = "",
+ ylab = "", col = col, breaks=breaks)
+ }
+ else {
+ image(iy, ix, t(iz), xaxt = "n", yaxt = "n", xlab = "",
+ ylab = "", col = col, breaks=breaks)
+ }
+ # create the argument list to draw the axis
+ # this avoids 4 separate calls to axis and allows passing extra
+ # arguments.
+ if (!is.null(lab.breaks)) {
+ # axis with labels at break points
+ axis.args <- c(list(side = ifelse(horizontal, 1, 4),
+ mgp = c(3, 1, 0), las = ifelse(horizontal, 0, 2),
+ at = breaks, labels = lab.breaks), axis.args)
+ }
+ else {
+ # If lab.breaks is not specified ( with or without breaks), pretty
+ # tick mark locations and labels are computed internally,
+ # or as specified in axis.args at the function call
+ axis.args <- c(list(side = ifelse(horizontal, 1, 4),
+ mgp = c(3, 1, 0), las = ifelse(horizontal, 0, 2)),
+ axis.args)
+ }
+ #
+ # now add the axis to the legend strip.
+ # notice how all the information is in the list axis.args
+ do.call("axis", axis.args)
+ # add a box around legend strip
+ box()
+ #
+ # add a label to the axis if information has been supplied
+ # using the mtext function. The arguments to mtext are
+ # passed as a list like the drill for axis (see above)
+ #
+ if (!is.null(legend.lab)) {
+ legend.args <- list(text = legend.lab, side = ifelse(horizontal,
+ 1, 4), line = legend.line, cex=legend.cex)
+ # just guessing at a good default for line argument!
+ }
+ # add the label using mtext function
+ if (!is.null(legend.args)) {
+ do.call(mtext, legend.args)
+ }
+ #
+ # clean up graphics device settings
+ # reset to larger plot region with right user coordinates.
+ mfg.save <- par()$mfg
+ if (graphics.reset | add) {
+ par(old.par)
+ par(mfg = mfg.save, new = FALSE)
+ invisible()
+ }
+ else {
+ par(big.par)
+ par(plt = big.par$plt, xpd = FALSE)
+ par(mfg = mfg.save, new = FALSE)
+ # Suggestion from Karline Soetaert <Karline.Soetaert at nioz.nl>
+ # this is to reset margins to be based on the mar arguments
+ # par(mar = par("mar")) or
+ # par(mar = big.par$mar)
+ # unfortunately this causes problems by allowing plotting outside of the
+ # original plot region.
+ invisible()
+ }
+}
diff --git a/R/image.smooth.R b/R/image.smooth.R
new file mode 100644
index 0000000..c13439d
--- /dev/null
+++ b/R/image.smooth.R
@@ -0,0 +1,99 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"image.smooth" <- function(x, wght = NULL, dx = 1,
+ dy = 1, kernel.function = double.exp, theta = 1, grid = NULL,
+ tol = 1e-08, xwidth = NULL, ywidth = NULL, weights = NULL,
+ ...) {
+ # first part of this function is figuring what has been passed and
+ # what to do
+ if (is.list(x)) {
+ # assume that an image list format has been passed as x
+ Y <- x$z
+ grid <- list(x = x$x, y = x$y)
+ }
+ else {
+ Y <- x
+ }
+ if (!is.matrix(Y)) {
+ stop("Requires a matrix")
+ }
+ m <- nrow(Y)
+ n <- ncol(Y)
+ # use information in previous setup kernel function from a
+ # a call to setup.image.smooth and in the process override any
+ # passed arguments
+ if (!is.null(wght)) {
+ dx <- wght$dx
+ dy <- wght$dy
+ xwidth <- wght$xwidth
+ ywidth <- wght$ywidth
+ }
+ # set up grid if it is missing
+ if (is.null(grid)) {
+ grid <- list(x = (1:m) * dx, y = (1:n) * dy)
+ }
+ else {
+ dx <- grid$x[2] - grid$x[1]
+ dy <- grid$y[2] - grid$y[1]
+ }
+ # padding of zeroes around actual image
+ # if less than m and n there may be spurious effects due to
+ # the periodicity from the fft.
+ # make sure that the span of the kernel is less than xwidth and ywidth.
+ # there can be a substantial speedup if the kernel has small support,
+ # Y is big (e.g. 512x512), and xwidth and ywidth are adjusted to suit.
+ if (is.null(xwidth)) {
+ xwidth <- dx * m
+ }
+ if (is.null(ywidth)) {
+ ywidth <- dy * n
+ }
+ # kernel wght function as fft
+ # reusing this saves an fft for each image smooth.
+ if (is.null(wght)) {
+ wght <- setup.image.smooth(nrow = m, ncol = n, xwidth = xwidth,
+ ywidth = ywidth, dx = dx, dy = dy, kernel.function = kernel.function,
+ theta = theta)
+ }
+ M <- nrow(wght$W)
+ N <- ncol(wght$W)
+ temp <- matrix(0, nrow = M, ncol = N)
+ temp2 <- matrix(0, nrow = M, ncol = N)
+ # pad with zeroes
+ if (!is.null(weights)) {
+ temp[1:m, 1:n] <- Y * weights
+ temp[is.na(temp)] <- 0
+ temp2[1:m, 1:n] <- ifelse(!is.na(Y), weights, 0)
+ }
+ else {
+ temp[1:m, 1:n] <- Y
+ temp[is.na(temp)] <- 0
+ temp2[1:m, 1:n] <- ifelse(!is.na(Y), 1, 0)
+ }
+ # temp and temp2 are the numerator and denominator of the Nadaraya-Watson estimator.
+ temp <- Re(fft(fft(temp) * wght$W, inverse = TRUE))[1:m,
+ 1:n]
+ temp2 <- Re(fft(fft(temp2) * wght$W, inverse = TRUE))[1:m,
+ 1:n]
+ # try not to divide by zero!
+ temp <- ifelse((temp2 > tol), (temp/temp2), NA)
+ list(x = grid$x, y = grid$y, z = temp)
+}
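The estimator formed above is a ratio of two convolutions: the numerator convolves the (zero-filled) data with the kernel, and the denominator convolves the missingness indicator, so missing cells contribute nothing to either sum. image.smooth computes both convolutions with 2-d FFTs for speed; a direct 1-d Python sketch of the same Nadaraya-Watson ratio (function and argument names are illustrative):

```python
def kernel_smooth(y, weights, kernel):
    # Nadaraya-Watson estimate on a 1-d grid with missing values (None):
    # numerator and denominator accumulate kernel weights separately,
    # exactly the ratio image.smooth forms via FFT convolution
    n = len(y)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            if y[j] is None:
                continue  # missing values drop out of both sums
            w = weights[j] * kernel(i - j)
            num += w * y[j]
            den += w
        # avoid dividing by (near) zero, as the tol check does
        out.append(num / den if den > 1e-8 else None)
    return out
```

On constant data the ratio reproduces the constant at every grid point, missing values included, which is a quick sanity check of the masking.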
diff --git a/R/in.poly.R b/R/in.poly.R
new file mode 100644
index 0000000..fc8805a
--- /dev/null
+++ b/R/in.poly.R
@@ -0,0 +1,73 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"in.poly" <- function(xd, xp, convex.hull = FALSE,
+ inflation = 1e-07) {
+ if (convex.hull) {
+ xp <- xp[chull(xp), ]
+ }
+ nd <- as.integer(nrow(xd))
+ np <- as.integer(nrow(xp))
+ #
+ # inflate convex hull slightly to include any points actually on the hull
+ #
+ if (convex.hull) {
+ xm <- matrix(c(mean(xp[, 1]), mean(xp[, 2])), nrow = np,
+ ncol = 2, byrow = TRUE)
+ xp <- (xp - xm) * (1 + inflation) + xm
+ }
+ # Note: inpoly FORTRAN has built in quick reject check to be inside
+ # the bounding rectangle of the polygon.
+ ind <- .Fortran("inpoly",PACKAGE="fields",
+ nd = as.integer(nd), as.single(xd[,
+ 1]), as.single(xd[, 2]), np = np, as.single(xp[, 1]),
+ as.single(xp[, 2]), ind = as.integer(rep(-1, nd)))$ind
+ as.logical(ind)
+}
+in.poly.grid <- function(grid.list, xp, convex.hull = FALSE,
+ inflation = 1e-07) {
+ # loop through rows of grid to fill out a logical matrix of
+ # being in (TRUE) or out (FALSE)
+ #
+ # this is to avoid the full target polygon if the convex hull is
+ # what is needed.
+ if (convex.hull) {
+ xp <- xp[chull(xp), ]
+ }
+ nx <- length(grid.list$x)
+ ny <- length(grid.list$y)
+ np <- as.integer(nrow(xp))
+ #
+ # inflate convex hull slightly to include any points actually on the hull
+ #
+ if (convex.hull) {
+ xm <- matrix(c(mean(xp[, 1]), mean(xp[, 2])), nrow = np,
+ ncol = 2, byrow = TRUE)
+ xp <- (xp - xm) * (1 + inflation) + xm
+ }
+ # Note: inpoly FORTRAN has built in quick reject check to be inside
+ # the bounding rectangle of the polygon.
+ ind <- .Fortran("igpoly",PACKAGE="fields",
+ nx = as.integer(nx), xg = as.single(grid.list$x),
+ ny = as.integer(ny), yg = as.single(grid.list$y), np = np,
+ as.single(xp[, 1]), as.single(xp[, 2]), ind = as.integer(rep(-1,
+ nx * ny)))$ind
+ return(matrix(as.logical(ind), nrow = nx, ncol = ny))
+}
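Both wrappers above defer the actual containment test to compiled FORTRAN. A common way to implement that test, sketched here in Python as an assumption about what such a routine does (the inpoly FORTRAN details may differ), is even-odd ray casting: count how many polygon edges a horizontal ray from the point crosses, and an odd count means the point is inside.

```python
def in_poly(px, py, poly):
    # even-odd ray casting: toggle on each edge crossed by the
    # horizontal ray extending to the right of (px, py)
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            # x coordinate where this edge crosses the ray's height
            xc = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < xc:
                inside = not inside
    return inside
```

A quick-reject check against the polygon's bounding rectangle, as the comment notes the FORTRAN does, would short-circuit this loop for most points.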
diff --git a/R/interp.surface.R b/R/interp.surface.R
new file mode 100644
index 0000000..747f917
--- /dev/null
+++ b/R/interp.surface.R
@@ -0,0 +1,51 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"interp.surface" <- function(obj, loc) {
+
+ # obj is a surface or image object like the list for contour, persp or image.
+ # loc is a matrix of 2-d locations -- the new points at which to evaluate the surface.
+ x <- obj$x
+ y <- obj$y
+ z <- obj$z
+ nx <- length(x)
+ ny <- length(y)
+ # this clever idea for finding the intermediate coordinates at the new points
+ # is from J-O Irisson
+ lx <- approx(x, 1:nx, loc[, 1])$y
+ ly <- approx(y, 1:ny, loc[, 2])$y
+ lx1 <- floor(lx)
+ ly1 <- floor(ly)
+ # x and y distances between each new point and the closest grid point in the lower left hand corner.
+ ex <- lx - lx1
+ ey <- ly - ly1
+ # fix up the indices and weights for locations that fall exactly on the
+ # last grid point so they map to the far corner of the last grid box.
+ ex[lx1 == nx] <- 1
+ ey[ly1 == ny] <- 1
+ lx1[lx1 == nx] <- nx - 1
+ ly1[ly1 == ny] <- ny - 1
+ # bilinear interpolation finds simple weights based on the
+ # the four corners of the grid box containing the new
+ # points.
+ return(z[cbind(lx1, ly1)] * (1 - ex) * (1 - ey) + z[cbind(lx1 +
+ 1, ly1)] * ex * (1 - ey) + z[cbind(lx1, ly1 + 1)] * (1 -
+ ex) * ey + z[cbind(lx1 + 1, ly1 + 1)] * ex * ey)
+}
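The return expression above is standard bilinear interpolation: each new point gets fractional grid coordinates, and its value is a weighted average of the four corners of the grid cell containing it, with the weights given by the fractional offsets. A self-contained Python sketch of the same computation, including the fix-up for points exactly on the last grid line (function names are illustrative):

```python
def bilinear(x, y, z, xp, yp):
    # fractional index of v in the ascending grid g, or None if outside
    def findex(g, v):
        if v < g[0] or v > g[-1]:
            return None
        for i in range(len(g) - 1):
            if v <= g[i + 1]:
                return i + (v - g[i]) / (g[i + 1] - g[i])
    lx, ly = findex(x, xp), findex(y, yp)
    if lx is None or ly is None:
        return None  # outside the grid, as interp.surface returns NA
    i, ex = int(lx), lx - int(lx)
    j, ey = int(ly), ly - int(ly)
    # same fix-up as the R code: a point exactly on the last grid line
    # is treated as the far corner of the previous cell
    if i == len(x) - 1:
        i, ex = i - 1, 1.0
    if j == len(y) - 1:
        j, ey = j - 1, 1.0
    # weighted average of the four corners of the enclosing cell
    return (z[i][j] * (1 - ex) * (1 - ey) + z[i + 1][j] * ex * (1 - ey) +
            z[i][j + 1] * (1 - ex) * ey + z[i + 1][j + 1] * ex * ey)
```

At the center of a unit cell all four corner weights are 1/4, so the result is the mean of the corner values.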
diff --git a/R/interp.surface.grid.R b/R/interp.surface.grid.R
new file mode 100644
index 0000000..7f068ee
--- /dev/null
+++ b/R/interp.surface.grid.R
@@ -0,0 +1,31 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"interp.surface.grid" <- function(obj, grid.list) {
+ x <- grid.list$x
+ y <- grid.list$y
+ M <- length(x)
+ N <- length(y)
+ out <- matrix(NA, nrow = M, ncol = N)
+ for (i in 1:M) {
+ out[i, ] <- interp.surface(obj, cbind(rep(x[i], N), y))
+ }
+ list(x = x, y = y, z = out)
+}
diff --git a/R/larry.colors.R b/R/larry.colors.R
new file mode 100644
index 0000000..a87febf
--- /dev/null
+++ b/R/larry.colors.R
@@ -0,0 +1,39 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+larry.colors<- function(){
+ ctemp<- matrix( c(
+ 182, 106, 40,
+ 205, 133, 63,
+ 225, 165, 100,
+ 245, 205, 132,
+ 245, 224, 158,
+ 255, 245, 186,
+ 255, 255, 255,
+ 205, 255, 205,
+ 153, 240, 178,
+ 83, 189, 159,
+ 110, 170, 200,
+ 5, 112, 176,
+ 2, 56, 88 ),
+ ncol=3, byrow=TRUE)
+ ctemp<- ctemp/255
+ rgb(ctemp[,1], ctemp[,2], ctemp[,3])
+}
diff --git a/R/mKrig.MLE.R b/R/mKrig.MLE.R
new file mode 100644
index 0000000..d3d2555
--- /dev/null
+++ b/R/mKrig.MLE.R
@@ -0,0 +1,157 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+mKrig.MLE <- function(x, y, weights = rep(1, nrow(x)), cov.fun="stationary.cov", cov.args = NULL,
+ Z = NULL, par.grid = NULL, lambda = NULL, lambda.profile = TRUE,
+ verbose = FALSE, relative.tolerance = 1e-04, ...) {
+
+ #check which optimization options the covariance function supports
+ supportsDistMat = supportsArg(cov.fun, "distMat")
+
+ #precompute distance matrix if possible so it only needs to be computed once
+ if(supportsDistMat) {
+
+ #Get distance function and arguments if available. Otherwise use 'dist' function
+ #to compute upper triangle of distance matrix
+ #
+ Dist.fun= c(cov.args, list(...))$Distance
+ Dist.args=c(cov.args, list(...))$Dist.args
+
+ if(is.null(Dist.fun))
+ Dist.fun = "dist"
+
+ distMat = do.call(Dist.fun, c(list(x), Dist.args))
+ }
+
+ # mKrig.args has all the arguments needed to call mKrig except lambda and cov.args
+ if(supportsDistMat)
+ cov.args = c(cov.args, list(distMat=distMat, onlyUpper=TRUE))
+ mKrig.args <- c(list(x = x, y = y, weights = weights, Z = Z, cov.fun=cov.fun),
+ list(...))
+
+ lnProfileLike.max <- -1e+20
+
+ # find NG -- number of parameters to try
+ par.grid <- data.frame(par.grid)
+ if (nrow(par.grid) == 0) {
+ if (is.null(lambda)) {
+ NG <- 1
+ }
+ else {
+ NG <- length(lambda)
+ }
+ }
+ else {
+ NG <- nrow(par.grid)
+ }
+
+ # output matrix to summarize results
+ summary <- matrix(NA, nrow = NG, ncol = 8)
+ dimnames(summary) <- list(NULL, c("EffDf", "lnProfLike",
+ "GCV", "sigma.MLE", "rho.MLE", "llambda.MLE", "counts eval",
+ "counts grad"))
+ lambda.best <- NA
+
+ # default for lambda is 1.0 for first value and exp(llambda.opt) for subsequent ones
+ # this is controlled by NAs for lambda starting values.
+ if (is.null(lambda)) {
+ lambda <- rep(NA, NG)
+ }
+
+ # default starting value for lambda is 1 or log lambda is 0
+ llambda.opt <- 0
+ optim.counts <- c(NA, NA)
+ lnLike.eval <- list()
+
+ # Define the objective function as a tricksy call to mKrig
+ # if Y is a matrix of replicated data sets use the log likelihood for the complete data sets
+ temp.fn <- function(x) {
+ # NOTE: FULL refers to estimates collapsed across the replicates if Y is a matrix
+ # assign to hold only a few components returned by mKrig
+ hold <- do.call("mKrig", c(mKrig.args, list(find.trA = FALSE, lambda = exp(x),
+ cov.args=c(cov.args.temp, cov.args)))
+ )[c("lambda.fixed", "rho.MLE.FULL", "sigma.MLE.FULL", "lnProfileLike.FULL")]
+
+ # add this evaluation to an object (i.e. here a matrix) in the calling frame
+ temp.eval <- get("capture.evaluations")
+ assign("capture.evaluations", rbind(temp.eval, unlist(hold)),
+ envir = capture.env)
+ return(hold$lnProfileLike.FULL)
+ }
+ #
+ # begin loop over covariance arguments
+ for (k in 1:NG) {
+ llambda.start <- ifelse(is.na(lambda[k]), llambda.opt, log(lambda[k]))
+
+        # list of covariance arguments from par.grid with the right names (some R arcana!)
+        # note that this only works because 1) temp.fn will search in this frame for this object
+        # and 2) par.grid has been coerced to a data frame so one has a concept of a row subscript.
+ cov.args.temp <- as.list(par.grid[k, ])
+ names(cov.args.temp) <- names(par.grid)
+
+ #optimize over lambda if lambda.profile is TRUE
+ if (lambda.profile) {
+ # set up matrix to store evaluations from within optim
+ capture.evaluations <- matrix(NA, ncol = 4, nrow = 1,
+ dimnames = list(NULL, c("lambda", "rho.MLE",
+ "sigma.MLE", "lnProfileLike.FULL")))
+ capture.env <- environment()
+
+ # call to optim
+ look <- optim(llambda.start, temp.fn, method = "BFGS",
+ control = list(fnscale = -1, parscale = 0.1,
+ ndeps = 0.05, reltol = relative.tolerance))
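+            # (fnscale = -1 flips optim's default minimization into maximization of the
+            # profile likelihood; the search is over log(lambda), so the parameter is
+            # unconstrained and BFGS needs no bounds)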
+ llambda.opt <- look$par
+ optim.counts <- look$counts
+
+ # call to 1-d search
+ # opt.summary <- optimize(temp.fn, interval= llambda.start + c(-8,8), maximum=TRUE)
+ # llambda.opt <- opt.summary$maximum
+ # optim.counts<- c(nrow(capture.evaluations)-1, NA)
+
+ # accumulate the new matrix of lnlambda and ln likelihoods (omitting first row of NAs)
+ lnLike.eval <- c(lnLike.eval, list(capture.evaluations[-1, ]))
+ }
+ else {
+            # no refinement for lambda so just save the 'start' value as the final one.
+ llambda.opt <- llambda.start
+ }
+
+        # final fit at optimal value (or starting value if no refinement/maximization for lambda)
+ obj <- do.call("mKrig", c(mKrig.args, list(lambda = exp(llambda.opt),
+ cov.args=c(cov.args.temp, cov.args))))
+ if (obj$lnProfileLike.FULL > lnProfileLike.max) {
+ lnProfileLike.max <- obj$lnProfileLike.FULL
+ cov.args.MLE <- cov.args.temp
+ lambda.best <- exp(llambda.opt)
+ }
+
+ # save results of the kth covariance model evaluation
+ summary[k, 1:8] <- c(obj$eff.df, obj$lnProfileLike.FULL,
+ obj$GCV, obj$sigma.MLE.FULL, obj$rho.MLE.FULL, llambda.opt,
+ optim.counts)
+ if (verbose) {
+ cat("Summary: ", k, summary[k, 1:8], fill = TRUE)
+ }
+ }
+ return(list(summary = summary, par.grid = par.grid, cov.args.MLE = cov.args.MLE,
+ mKrig.args = list(...), lambda.best = lambda.best, lambda.MLE = lambda.best,
+ call = match.call(), lnLike.eval = lnLike.eval))
+}
diff --git a/R/mKrig.MLE.joint.R b/R/mKrig.MLE.joint.R
new file mode 100644
index 0000000..786c4d6
--- /dev/null
+++ b/R/mKrig.MLE.joint.R
@@ -0,0 +1,191 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka@ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+mKrig.MLE.joint <- function(x, y, weights = rep(1, nrow(x)),
+ lambda.guess = 1, cov.params.guess=NULL,
+ cov.fun="stationary.cov", cov.args=NULL,
+ Z = NULL, optim.args=NULL, find.trA.MLE = FALSE,
+ ..., verbose = FALSE) {
+
+ #set default optim.args if necessary
+ if(is.null(optim.args))
+ optim.args = list(method = "BFGS",
+ control=list(fnscale = -1,
+ ndeps = rep(log(1.1), length(cov.params.guess)+1),
+ reltol=1e-04, maxit=10))
+
+ #check which optimization options the covariance function supports
+ supportsDistMat = supportsArg(cov.fun, "distMat")
+
+ #precompute distance matrix if possible so it only needs to be computed once
+ if(supportsDistMat) {
+
+ #Get distance function and arguments if available
+ #
+ Dist.fun= c(cov.args, list(...))$Distance
+ Dist.args=c(cov.args, list(...))$Dist.args
+
+ #If user left all distance settings NULL, use rdist with compact option.
+ #Use rdist function by default in general.
+ #
+ if(is.null(Dist.fun)) {
+ Dist.fun = "rdist"
+ if(is.null(Dist.args))
+ Dist.args = list(compact=TRUE)
+ }
+
+ distMat = do.call(Dist.fun, c(list(x), Dist.args))
+ }
+
+ #set cov.args for optimal performance if possible
+ if(supportsDistMat)
+ cov.args = c(cov.args, list(distMat=distMat, onlyUpper=TRUE))
+
+ # these are all the arguments needed to call mKrig except lambda and cov.args
+ mKrig.args <- c(list(x = x, y = y, weights = weights, Z = Z, cov.fun=cov.fun),
+ list(...))
+ mKrig.args$find.trA = find.trA.MLE
+
+ # output matrix to summarize results
+ ncolSummary = 8 + length(cov.params.guess)
+ summary <- matrix(NA, nrow = 1, ncol = ncolSummary)
+ dimnames(summary) <- list(NULL, c("EffDf", "lnProfLike", "GCV", "sigma.MLE",
+ "rho.MLE", "llambda.MLE", names(cov.params.guess),
+ "counts eval","counts grad"))
+
+ # Define the objective function as a tricksy call to mKrig
+ # if Y is a matrix of replicated data sets use the log likelihood for the complete data sets
+ lnProfileLike.max <- -Inf
+ temp.fn <- function(parameters) {
+ # Separate lambda from covariance parameters.
+ # Optimization is over log-scale so exponentiate log-parameters.
+ lambda = exp(parameters[1])
+ if(length(parameters) > 1) {
+ otherParams = as.list(exp(parameters[2:length(parameters)]))
+ names(otherParams) = names(cov.params.guess)
+ }
+ else
+ otherParams = NULL
+
+ #get all this eval's covariance arguments using the input parameters
+ cov.args.temp = c(cov.args, otherParams)
+
+ # NOTE: FULL refers to estimates collapsed across the replicates if Y is a matrix
+ # assign to hold the last mKrig object
+ hold <- do.call("mKrig", c(mKrig.args, list(lambda = lambda),
+ cov.args.temp))
+
+    #save the best mKrig object to the enclosing environment (via <<-)
+ if(hold$lnProfileLike.FULL > lnProfileLike.max) {
+ out <<- hold
+ lnProfileLike.max = hold$lnProfileLike.FULL
+ }
+ hold = hold[c("rho.MLE.FULL", "sigma.MLE.FULL", "lnProfileLike.FULL")]
+
+    # add this evaluation to an object (i.e. here a matrix) in the calling frame
+ temp.eval <- get("capture.evaluations")
+ assign("capture.evaluations", rbind(temp.eval, c(parameters, unlist(hold))), envir = capture.env)
+ return(hold$lnProfileLike.FULL)
+ }
+
+ #
+ # optimize over covariance parameters and lambda
+
+
+ # set up matrix to store evaluations from within optim
+ capture.evaluations <- matrix(NA, ncol = 4+length(cov.params.guess), nrow = 1,
+ dimnames = list(NULL, c("lambda", names(cov.params.guess), "rho.MLE",
+ "sigma.MLE", "lnProfileLike.FULL")))
+ capture.env <- environment()
+
+ # call to optim with initial guess (on log-scale)
+ init.guess = log(unlist(c(lambda.guess, cov.params.guess)))
+ look <- do.call(optim, c(list(par=init.guess), list(temp.fn), optim.args))
+
+ #get optim results
+ optim.counts <- look$counts
+ llambda.opt <- look$par[1]
+ lambda.opt <- exp(llambda.opt)
+ if(length(look$par) > 1) {
+ params.opt <- exp(look$par[2:length(look$par)])
+ params.opt <- as.list(params.opt)
+ names(params.opt) <- names(cov.params.guess)
+ }
+ else
+ params.opt=NULL
+
+ # call to 1-d search
+ # opt.summary <- optimize(temp.fn, interval= llambda.start + c(-8,8), maximum=TRUE)
+ # llambda.opt <- opt.summary$maximum
+ # optim.counts<- c(nrow(capture.evaluations)-1, NA)
+ # accumulate the new matrix of lnlambda and ln likelihoods (omitting first row of NAs)
+ lnLik.eval <- capture.evaluations[-1,]
+
+ #exponentiate lambda and covariance parameters in lnLik.eval
+ lnLik.eval[, 1:length(look$par)] = exp(lnLik.eval[, 1:length(look$par)])
+
+ # calculate trace of best mKrig object if necessary
+ #
+ find.trA = list(...)$find.trA
+ if(is.null(find.trA) || find.trA) {
+
+ #get arguments for mKrig.trace
+ iseed = list(...)$iseed
+ NtrA = list(...)$NtrA
+
+ #set iseed and NtrA to default values of mKrig if NULL
+ if(is.null(iseed))
+ iseed = 123
+ if(is.null(NtrA))
+ NtrA = 20
+
+ #update best mKrig object with trace results
+ out2 <- mKrig.trace(out, iseed, NtrA)
+ out$eff.df <- out2$eff.df
+ out$trA.info <- out2$trA.info
+ np <- nrow(x)
+ out$GCV <- (sum(out$residuals^2)/np)/(1 - out2$eff.df/np)^2
+ if (NtrA < np)
+ out$GCV.info <- (sum(out$residuals^2)/np)/(1 - out2$trA.info/np)^2
+ else
+ out$GCV.info <- NA
+ }
+
+ # save results of the best covariance model evaluation in a neat table
+
+ summary[1, 1:ncolSummary] <- unlist(c(out$eff.df, out$lnProfileLike.FULL,
+ out$GCV, out$sigma.MLE.FULL, out$rho.MLE.FULL,
+ llambda.opt,
+ params.opt, optim.counts))
+
+ if (verbose) {
+ cat("Summary: ", 1, summary[1, 1:ncolSummary], fill = TRUE)
+ }
+
+ #add summary table to output mKrig object and ensure it is still
+ #of class mKrig
+ out = c(out, list(summary=summary, lnLik.eval=lnLik.eval))
+ class(out) = "mKrig"
+
+ return(out)
+}
diff --git a/R/mKrig.R b/R/mKrig.R
new file mode 100644
index 0000000..bb60dd4
--- /dev/null
+++ b/R/mKrig.R
@@ -0,0 +1,257 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka@ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+mKrig <- function(x, y, weights=rep(1, nrow(x)), Z = NULL,
+ cov.function="stationary.cov",
+ cov.args = NULL, lambda = 0, m = 2,
+ chol.args = NULL, find.trA = TRUE, NtrA = 20,
+ iseed = 123, llambda = NULL, na.rm=FALSE,
+ collapseFixedEffect = TRUE, ...) {
+ # pull extra covariance arguments from ... and overwrite
+ # any arguments already named in cov.args
+ ind<- match( names( cov.args), names(list(...) ) )
+ cov.args = c(cov.args[is.na(ind)], list(...))
+ #
+  #If find.trA is TRUE, set onlyUpper to FALSE (onlyUpper doesn't
+ #play nice with predict.mKrig, called by mKrig.trace)
+ #
+ if(find.trA == TRUE && supportsArg(cov.function, "onlyUpper"))
+ cov.args$onlyUpper= FALSE
+ if(find.trA == TRUE && supportsArg(cov.function, "distMat"))
+ cov.args$distMat= NA
+
+ if (!is.null(llambda)) {
+ lambda <- exp(llambda)
+ }
+ # see comments in Krig.engine.fixed for algorithmic commentary
+ #
+ # check for duplicate x's.
+ # stop if there are any
+ if (any(duplicated(cat.matrix(x)))) {
+        stop("locations are not unique; see help(mKrig)")
+ }
+ # next function also omits NAs from x,y,weights, and Z if na.rm=TRUE.
+ object<- mKrigCheckXY( x, y, weights, Z, na.rm = na.rm)
+ # create fixed part of model as m-1 order polynomial
+ # NOTE: if m==0 then fields.mkpoly returns a NULL to
+ # indicate no polynomial part.
+ Tmatrix <- cbind(fields.mkpoly(object$x, m), object$Z)
+ # set some dimensions
+ np <- nrow(object$x)
+ if( is.null(Tmatrix) ){
+ nt<- 0
+ }
+ else{
+ nt<- ncol(Tmatrix)
+ }
+ if( is.null(object$Z)){
+ nZ<- 0
+ }
+ else{
+ nZ<- ncol(object$Z)
+ }
+ ind.drift <- c(rep(TRUE, (nt - nZ)), rep(FALSE, nZ))
+    # as a placeholder for reduced rank Kriging, distinguish between
+    # observation locations and the locations to evaluate covariance.
+    # (this will also allow predict.mKrig to handle a Krig object)
+ object$knots <- object$x
+ # covariance matrix at observation locations
+    # NOTE: if cov.function is a sparse construct then Mc will be sparse.
+ # see e.g. wendland.cov
+ Mc <- do.call(cov.function, c(cov.args, list(x1 = object$knots, x2 = object$knots)))
+ #
+ # decide how to handle the pivoting.
+ # one wants to do pivoting if the matrix is sparse.
+ # if Mc is not a matrix assume that it is in sparse format.
+ #
+ sparse.flag <- !is.matrix(Mc)
+ #
+ # set arguments that are passed to cholesky
+ #
+    if (is.null(chol.args)) {
+        chol.args <- list(pivot = sparse.flag)
+    }
+ # quantify sparsity of Mc for the mKrig object
+    nzero <- ifelse(sparse.flag, length(Mc@entries), np^2)
+ # add diagonal matrix that is the observation error Variance
+    # NOTE: diag must be an overloaded function to handle sparse format.
+ if (lambda != 0) {
+ if(! sparse.flag)
+ invisible(.Call("addToDiagC", Mc, as.double(lambda/object$weights), nrow(Mc), PACKAGE="fields")
+ )
+ else
+ diag(Mc) = diag(Mc) + lambda/object$weights
+ }
+ # MARK LINE Mc
+ # At this point Mc is proportional to the covariance matrix of the
+ # observation vector, y.
+ #
+    # cholesky decomposition of Mc
+ # do.call used to supply other arguments to the function
+ # especially for sparse applications.
+    # If chol.args is NULL then this is the same as
+    # Mc <- chol(Mc)
+ Mc <- do.call("chol", c(list(x = Mc), chol.args))
+
+ lnDetCov <- 2 * sum(log(diag(Mc)))
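+    # (Mc now holds the upper triangular Cholesky factor R with t(R) %*% R equal to
+    # the covariance matrix, so its determinant is prod(diag(R))^2 and the log
+    # determinant is 2 * sum(log(diag(R))), as computed above)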
+
+ #
+ # start linear algebra to find estimates and likelihood
+ # Note that all these expressions make sense if y is a matrix
+ # of several data sets and one is solving for the coefficients
+ # of all of these at once. In this case d.coef and c.coef are matrices
+ #
+ if( !is.null(Tmatrix)){
+      # Efficient way to multiply the inverse of Mc times the Tmatrix
+ VT <- forwardsolve(Mc, x = Tmatrix, k=ncol(Mc), transpose = TRUE, upper.tri = TRUE)
+ qr.VT <- qr(VT)
+
+ # now do generalized least squares for d
+ d.coef <- as.matrix(qr.coef(qr.VT, forwardsolve(Mc, transpose = TRUE,
+ object$y, upper.tri = TRUE)))
+
+ if (collapseFixedEffect) {
+ # use a common estimate of fixed effects across all replicates
+ d.coefMeans <- rowMeans(d.coef)
+ d.coef <- matrix(d.coefMeans, ncol = ncol(d.coef),
+ nrow = nrow(d.coef))
+ }
+
+ resid<- object$y - Tmatrix %*% d.coef
+ # GLS covariance matrix for fixed part.
+ Rinv <- solve(qr.R(qr.VT))
+ Omega <- Rinv %*% t(Rinv)
+#
+# Omega is solve(t(Tmatrix)%*%solve( Sigma)%*%Tmatrix)
+# proportional to fixed effects covariance matrix.
+# Sigma = cov.function( x,x) + lambda/object$weights
+# this is proportional to the covariance matrix for the GLS estimates of
+# the fixed linear part of the model.
+#
+ R2diag<- diag( qr.R(qr.VT) )^2
+ lnDetOmega<- -1* sum( log(R2diag) )
+ }
+ else{
+# much is set to NULL because no fixed part of model
+ nt<- 0
+ resid<- object$y
+ Rinv<- NULL
+ Omega<- NULL
+ qr.VT<- NULL
+ d.coef<- NULL
+ lnDetOmega <- 0
+ }
+ # and now find c.
+ # the coefficents for the spatial part.
+ # if linear fixed part included resid as the residuals from the
+ # GLS regression.
+ c.coef <- as.matrix(forwardsolve(Mc, transpose = TRUE,
+ resid, upper.tri = TRUE))
+    # save intermediate result: this is t(y - T d.coef) M^{-1} (y - T d.coef)
+ quad.form <- c(colSums(as.matrix(c.coef^2)))
+ # find c coefficients
+ c.coef <- as.matrix(backsolve(Mc, c.coef))
+ # MLE estimate of rho and sigma
+ # rhohat <- c(colSums(as.matrix(c.coef * y)))/(np - nt)
+ # NOTE if y is a matrix then each of these are vectors of parameters.
+ rho.MLE <- quad.form/np
+ rhohat <- c(colSums(as.matrix(c.coef * object$y)))/np
+ shat.MLE <- sigma.MLE <- sqrt(lambda * rho.MLE)
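+    # (fields parameterizes the model with lambda = sigma^2/rho, the ratio of the
+    # measurement error variance to the process marginal variance, so given lambda
+    # and rho.MLE the corresponding MLE for sigma is sqrt(lambda * rho.MLE))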
+    # the log profile likelihood with rhohat and dhat substituted
+ # leaving a profile for just lambda.
+ # NOTE if y is a matrix then this is a vector of log profile
+ # likelihood values.
+ lnProfileLike <- (-np/2 - log(2 * pi) * (np/2) - (np/2) *
+ log(rho.MLE) - (1/2) * lnDetCov)
+    # see section 4.2 handbook of spatial statistics (Zimmerman chapter)
+ lnProfileREML <- lnProfileLike + (1/2) * lnDetOmega
+ rho.MLE.FULL <- mean(rho.MLE)
+ sigma.MLE.FULL <- sqrt(lambda * rho.MLE.FULL)
+ # if y is a matrix then compute the combined likelihood
+ # under the assumption that the columns of y are replicated
+ # fields
+ lnProfileLike.FULL <- sum((-np/2 - log(2 * pi) * (np/2) -
+ (np/2) * log(rho.MLE.FULL)
+ - (1/2) * lnDetCov)
+ )
+ lnProfileREML.FULL <- sum((-np/2 - log(2 * pi) * (np/2) -
+ (np/2) * log(rho.MLE.FULL)
+ - (1/2) * lnDetCov
+ + (1/2) * lnDetOmega )
+ )
+
+ #
+ # return coefficients and include lambda as a check because
+ # results are meaningless for other values of lambda
+ # returned list is an 'object' of class mKrig (micro Krig)
+ # also save the matrix decompositions so coefficients can be
+ # recalculated for new y values. Make sure onlyUpper and
+ # distMat are unset for compatibility with mKrig S3 functions
+ if(!is.null(cov.args$onlyUpper))
+ cov.args$onlyUpper = FALSE
+ if(!is.null(cov.args$distMat))
+ cov.args$distMat = NA
+ object <- c( object, list(
+ d = d.coef, c = c.coef, nt = nt, np = np,
+ lambda.fixed = lambda,
+ cov.function.name = cov.function,
+ args = cov.args, m = m, chol.args = chol.args, call = match.call(),
+ nonzero.entries = nzero, shat.MLE = sigma.MLE, sigma.MLE = sigma.MLE,
+ rho.MLE = rho.MLE, rhohat = rho.MLE, lnProfileLike = lnProfileLike,
+ rho.MLE.FULL = rho.MLE.FULL, sigma.MLE.FULL = sigma.MLE.FULL,
+ lnProfileLike.FULL = lnProfileLike.FULL,
+ lnProfileREML.FULL = lnProfileREML.FULL,
+ lnProfileREML = lnProfileREML,
+ lnDetCov = lnDetCov, lnDetOmega = lnDetOmega,
+        quad.form = quad.form, Omega = Omega,
+ qr.VT = qr.VT,
+ Mc = Mc,
+ Tmatrix = Tmatrix, ind.drift = ind.drift, nZ = nZ,
+ collapseFixedEffect= collapseFixedEffect)
+ )
+ #
+ # find the residuals directly from solution
+ # to avoid a call to predict
+ object$residuals <- lambda * c.coef/object$weights
+ object$fitted.values <- object$y - object$residuals
+ # estimate effective degrees of freedom using Monte Carlo trace method.
+ if (find.trA) {
+ object2 <- mKrig.trace(object, iseed, NtrA)
+ object$eff.df <- object2$eff.df
+ object$trA.info <- object2$trA.info
+ object$GCV <- (sum(object$residuals^2)/np)/(1 - object2$eff.df/np)^2
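+        # (this is the usual GCV criterion (RSS/np) / (1 - trA/np)^2, with the Monte
+        # Carlo trace estimate eff.df standing in for tr A(lambda))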
+ if (NtrA < np) {
+ object$GCV.info <- (sum(object$residuals^2)/np)/(1 - object2$trA.info/np)^2
+ }
+ else {
+ object$GCV.info <- NA
+ }
+ }
+ else {
+ object$eff.df <- NA
+ object$trA.info <- NA
+ object$GCV <- NA
+ }
+ class(object) <- "mKrig"
+ return(object)
+}
diff --git a/R/mKrig.family.R b/R/mKrig.family.R
new file mode 100644
index 0000000..e4357c3
--- /dev/null
+++ b/R/mKrig.family.R
@@ -0,0 +1,290 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka@ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+
+mKrig.trace <- function(object, iseed, NtrA) {
+ set.seed(iseed)
+    # if the number of Monte Carlo samples is at least the number of data points just
+    # find A exactly using np calls to predict.
+ np<- object$np
+ if (NtrA >= object$np) {
+ Ey <- diag(1, np)
+ NtrA <- np
+ hold <- diag(predict.mKrig(object, ynew = Ey, collapseFixedEffect=FALSE))
+ trA.info<- NA
+ trA.est <- sum(hold)
+ }
+ else {
+        # if fewer samples than data points, use the random trace method:
+        # find fitted.values for iid N(0,1) 'data' to calculate
+        # the Monte Carlo estimate of tr A(lambda)
+ # basically repeat the steps above but take some
+ # short cuts because we only need fitted.values
+ # create random normal 'data'
+ Ey <- matrix(rnorm(np * NtrA), nrow = np,
+ ncol = NtrA)
+ trA.info <- colSums(Ey * (predict.mKrig(object, ynew = Ey,
+ collapseFixedEffect=FALSE)))
+ trA.est <- mean(trA.info)
+ }
+ if (NtrA < np) {
+ MSE<-(sum(object$residuals^2)/np)
+ GCV <- MSE/(1 - trA.est /np)^2
+ GCV.info <- MSE/( 1 - trA.info/np)^2
+ }
+ else{
+ GCV<- NA
+ GCV.info <- NA
+ }
+ return(
+ list(trA.info = trA.info, eff.df = trA.est,
+ GCV= GCV, GCV.info=GCV.info)
+ )
+}
+
+mKrig.coef <- function(object, y, collapseFixedEffect=TRUE) {
+ # given new data y and the matrix decompositions in the
+ # mKrig object find coefficients d and c.
+ # d are the coefficients for the fixed part
+ # in this case hard coded for a low order polynomial
+ # c are coefficients for the basis functions derived from the
+ # covariance function.
+ #
+ # see mKrig itself for more comments on the linear algebra
+ #
+ # Note that all these expressions make sense if y is a matrix
+ # of several data sets and one is solving for the coefficients
+ # of all of these at once. In this case d.coef and c.coef are matrices
+ #
+ # generalized least squares for d
+ if( any(is.na(y))){
+    stop("mKrig cannot omit missing values in observation vector")
+ }
+ if( object$nt>0){
+ d.coef <- as.matrix(qr.coef(object$qr.VT, forwardsolve(object$Mc,
+ transpose = TRUE, y, upper.tri = TRUE)))
+ d.coefMeans<- rowMeans( d.coef)
+ if( collapseFixedEffect){
+ d.coef<- matrix( d.coefMeans, ncol=ncol(d.coef), nrow= nrow( d.coef))
+ }
+ # residuals from subtracting off fixed part
+ # of model as m-1 order polynomial
+ resid <- y - object$Tmatrix %*% d.coef
+ }
+ else{
+ d.coef<- NULL
+ resid <- y
+ }
+ # and now find c.
+ c.coef <- forwardsolve(object$Mc, transpose = TRUE, resid,
+ upper.tri = TRUE)
+ c.coef <- as.matrix(backsolve(object$Mc, c.coef))
+ out <- list(d = (d.coef), c = (c.coef))
+ return(out)
+}
+print.mKrig <- function(x, digits = 4, ...) {
+
+ if (is.matrix(x$residuals)) {
+ n <- nrow(x$residuals)
+ NData <- ncol(x$residuals)
+ }
+ else {
+ n <- length(x$residuals)
+ NData <- 1
+ }
+
+ c1 <- "Number of Observations:"
+ c2 <- n
+
+ if (NData > 1) {
+ c1 <- c(c1, "Number of data sets fit:")
+ c2 <- c(c2, NData)
+
+ }
+
+ c1 <- c(c1, "Degree of polynomial null space ( base model):")
+
+
+ if(x$m !=0 ){
+ c2 <- c(c2, x$m - 1)
+ }
+ else{
+ c2 <- c(c2, NA)
+ }
+ c1 <- c(c1, "Total number of parameters in base model")
+ c2 <- c(c2, x$nt)
+ if (x$nZ > 0) {
+ c1 <- c(c1, "Number of additional covariates (Z)")
+ c2 <- c(c2, x$nZ)
+ }
+ if (!is.na(x$eff.df)) {
+ c1 <- c(c1, " Eff. degrees of freedom")
+ c2 <- c(c2, signif(x$eff.df, digits))
+ if (length(x$trA.info) < x$np) {
+ c1 <- c(c1, " Standard Error of estimate: ")
+ c2 <- c(c2, signif(sd(x$trA.info)/sqrt(length(x$trA.info)),
+ digits))
+ }
+ }
+ c1 <- c(c1, "Smoothing parameter")
+ c2 <- c(c2, signif(x$lambda.fixed, digits))
+
+ if (NData == 1) {
+ c1 <- c(c1, "MLE sigma ")
+ c2 <- c(c2, signif(x$shat.MLE, digits))
+ c1 <- c(c1, "MLE rho")
+ c2 <- c(c2, signif(x$rho.MLE, digits))
+ }
+
+
+ c1 <- c(c1, "Nonzero entries in covariance")
+ c2 <- c(c2, x$nonzero.entries)
+ sum <- cbind(c1, c2)
+ dimnames(sum) <- list(rep("", dim(sum)[1]), rep("", dim(sum)[2]))
+########### print out call and table of information
+ cat("Call:\n")
+ dput(x$call)
+ print(sum, quote = FALSE)
+########### assorted remarks
+ if (NData > 1) {
+ cat(" ", fill = TRUE)
+ if( x$collapseFixedEffect){
+ cat("Estimated fixed effects pooled across replicates", fill=TRUE)
+ }
+ else{
+ cat("Estimated fixed effects found separately for each replicate", fill=TRUE)
+ }
+ cat("collapseFixedEffect :", x$collapseFixedEffect, fill=TRUE)
+ }
+
+ cat(" ", fill = TRUE)
+ cat("Covariance Model:", x$cov.function, fill = TRUE)
+ if (x$cov.function == "stationary.cov") {
+ cat(" Covariance function: ", ifelse(is.null(x$args$Covariance),
+ "Exponential", x$args$Covariance), fill = TRUE)
+ }
+ if (!is.null(x$args)) {
+ cat(" Non-default covariance arguments and their values ",
+ fill = TRUE)
+ nlist <- as.character(names(x$args))
+ NL <- length(nlist)
+ for (k in 1:NL) {
+ cat(" Argument:", nlist[k], " ")
+ if (object.size(x$args[[k]]) <= 1024) {
+ cat("has the value(s): ", fill = TRUE)
+ print(x$args[[k]])
+ }
+ else {
+ cat("too large to print value, size > 1K ...",
+ fill = TRUE)
+ }
+ }
+ }
+ invisible(x)
+}
+
+summary.mKrig <- function(object, ...) {
+ print.mKrig(object, ...)
+}
+
+predict.mKrig <- function(object, xnew = NULL, ynew = NULL, grid.list=NULL,
+ derivative = 0, Z = NULL, drop.Z = FALSE, just.fixed = FALSE,
+ collapseFixedEffect = object$collapseFixedEffect,
+ ...) {
+ # the main reason to pass new args to the covariance is to increase
+ # the temp space size for sparse multiplications
+ # other optional arguments that typically describe the covariance function
+ # from mKrig are passed along in the list object$args
+ cov.args <- list(...)
+ # predict at observation locations by default
+ if( !is.null(grid.list)){
+ xnew<- make.surface.grid(grid.list)
+ }
+ if (is.null(xnew)) {
+ xnew <- object$x
+ }
+ if (is.null(Z) & (length(object$ind.drift) >0 )) {
+ Z <- object$Tmatrix[, !object$ind.drift]
+ }
+ if (!is.null(ynew)) {
+ coef.hold <- mKrig.coef(object, ynew,
+ collapseFixedEffect=collapseFixedEffect)
+ c.coef <- coef.hold$c
+ d.coef <- coef.hold$d
+ }
+ else {
+ c.coef <- object$c
+ d.coef <- object$d
+ }
+    # fixed part of the model; this is a polynomial of degree m-1
+ # Tmatrix <- fields.mkpoly(xnew, m=object$m)
+ # only do this if nt>0, i.e. there is a fixed part.
+ #
+ if( object$nt>0){
+ if (derivative == 0) {
+ if (drop.Z | object$nZ == 0) {
+ # just evaluate polynomial and not the Z covariate
+ temp1 <- fields.mkpoly(xnew, m = object$m) %*%
+ d.coef[object$ind.drift, ]
+ }
+ else {
+ if( nrow( xnew) != nrow(as.matrix(Z)) ){
+                stop("number of rows of covariate Z is not the same as the number of locations")
+ }
+ temp0 <- cbind(fields.mkpoly(xnew, m = object$m),as.matrix(Z))
+ temp1 <- temp0 %*% d.coef
+ }
+ }
+ else {
+ if (!drop.Z & object$nZ > 0) {
+ stop("derivative not supported with Z covariate included")
+ }
+ temp1 <- fields.derivative.poly(xnew, m = object$m, d.coef[object$ind.drift,
+ ])
+ }
+ if (just.fixed) {
+ return(temp1)
+ }
+ }
+ # add nonparametric part. Covariance basis functions
+ # times coefficients.
+ # syntax is the name of the function and then a list with
+ # all the arguments. This allows for different covariance functions
+ # that have been passed as their name.
+ if (derivative == 0) {
+ # argument list are the parameters and other options from mKrig
+ # locations and coefficients,
+ temp2 <- do.call(object$cov.function.name, c(object$args,
+ list(x1 = xnew, x2 = object$knots, C = c.coef), cov.args))
+ }
+ else {
+ temp2 <- do.call(object$cov.function.name, c(object$args,
+ list(x1 = xnew, x2 = object$knots, C = c.coef, derivative = derivative),
+ cov.args))
+ }
+ # add two parts together and coerce to vector
+ if( object$nt>0){
+ return((temp1 + temp2))
+ }
+ else{
+ return( temp2)
+ }
+}
diff --git a/R/mKrigCheckXY.R b/R/mKrigCheckXY.R
new file mode 100644
index 0000000..2fc62fb
--- /dev/null
+++ b/R/mKrigCheckXY.R
@@ -0,0 +1,77 @@
+mKrigCheckXY <- function(x, y, weights, Z, na.rm)
+ {
+ #
+ # check for missing values in y or X.
+ #
+ # save logical indicating where there are NA's
+ # and check for NA's
+ #
+ ind <- is.na(y)
+ if (any(ind) & !na.rm) {
+ stop("Need to remove missing values or use: na.rm=TRUE in the call")
+ }
+ #
+ # coerce x to be a matrix
+ x <- as.matrix(x)
+ #
+    # coerce y to be a matrix (columns hold replicate data sets)
+ #
+ y <- as.matrix(y)
+
+ #
+ #default weights ( reciprocal variance of errors).
+ #
+ if (is.null(weights))
+ weights <- rep(1, nrow(y))
+ #
+ # check that dimensions agree
+ #
+ if (nrow(y) != nrow(x)) {
+ stop(" length of y and number of rows of x differ")
+ }
+ if (nrow(y) != length(weights)) {
+ stop(" length of y and weights differ")
+ }
+ # if Z is not NULL coerce to be a matrix
+ # and check # of rows
+ if (!is.null(Z)) {
+ if (!is.matrix(Z)) {
+ Z <- as.matrix(Z)
+ }
+ if (length(y) != nrow(Z)) {
+ stop(" length of y and number of rows of Z differ")
+ }
+ }
+ # if NAs can be removed then remove them and warn the user
+ if (na.rm) {
+ ind <- is.na(y)
+ if(all(ind)){
+ stop("Oops! All y values are missing!")
+ }
+ if (any(ind)) {
+ y <- y[!ind]
+ x <- as.matrix(x[!ind, ])
+ if (!is.null(Z)) {
+ Z <- as.matrix(Z[!ind, ])
+ }
+ weights <- weights[!ind]
+ }
+ }
+ #
+ # check for NA's in x matrix -- there should not be any !
+ if (any(c(is.na(x)))) {
+ stop(" NA's in x matrix")
+ }
+ #
+ # check for NA's in Z matrix
+ if (!is.null(Z)) {
+ if (any(c(is.na(Z)))) {
+ stop(" NA's in Z matrix")
+ }
+ }
+
+ # save x, weights and y w/o NAs
+ N <- length(y)
+ return(list(N = N, y = y, x = x, weights = weights, Z = Z,
+ NA.ind = ind) )
+}
diff --git a/R/mKrigMLEGrid.R b/R/mKrigMLEGrid.R
new file mode 100644
index 0000000..d2861c1
--- /dev/null
+++ b/R/mKrigMLEGrid.R
@@ -0,0 +1,149 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka@ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+
+mKrigMLEGrid <- function(x, y, weights = rep(1, nrow(x)), Z = NULL,
+ mKrig.args = NULL,
+ cov.fun = "stationary.cov",
+ cov.args = NULL,
+ na.rm = TRUE,
+ par.grid = NULL,
+ lambda = NULL,
+ lambda.profile = TRUE,
+ relative.tolerance = 1e-04,
+ REML = FALSE,
+ verbose = FALSE) {
+ if( na.rm){
+ obj<- mKrigCheckXY(x, y, weights, Z, na.rm)
+ x<- obj$x
+ y<- obj$y
+ weights<- obj$weights
+ Z<- obj$Z
+ }
+ #check which optimization options the covariance function supports
+ supportsDistMat = supportsArg(cov.fun, "distMat")
+ #precompute distance matrix if possible so it only needs to be computed once
+ if(supportsDistMat) {
+ #Get distance function and arguments if available
+ #If user left all distance settings NULL, use rdist with compact option.
+ #Use rdist function by default in general.
+ #
+ if(is.null(cov.args$Distance)) {
+ cov.args$Distance <- "rdist"
+ cov.args$Dist.args <- list(compact=TRUE)
+ }
+ cov.args$distMat<-do.call(cov.args$Distance, c( list(x), cov.args$Dist.args) )
+ cov.args$onlyUpper<- TRUE
+ }
+
+ lnProfileLike.max <- -1e+20
+# find NG -- number of parameters to try
+ par.grid <- data.frame(par.grid)
+ if (nrow(par.grid) == 0) {
+ NG<- ifelse(is.null(lambda), 1, length( lambda))
+ }
+ else {
+ NG <- nrow(par.grid)
+ }
+ lambda.best <- NA
+ # the default starting value for lambda is 0.5 for the first grid point and
+ # the previous optimum (lambda.opt) for subsequent ones.
+ # this is controlled by NAs for lambda starting values.
+ if (is.null(lambda)) {
+ lambda <- rep(NA, NG)
+ }
+ # output matrix to summarize results
+ summary <- matrix(NA, nrow = NG, ncol = 8)
+
+ # default starting value for lambda is 0.5 (used when lambda[k] is NA)
+ lambda.opt <- .5
+ optim.counts <- c(NA, NA)
+ lnLike.eval <- list()
+ # Define the objective function as a tricksy call to mKrig
+ # if Y is a matrix of replicated data sets use the log likelihood for the complete data sets
+ #
+ # begin loop over covariance arguments
+ for (k in 1:NG) {
+ lambda.start <- ifelse(is.na(lambda[k]), lambda.opt, (lambda[k]))
+ # list of covariance arguments from par.grid with right names (some R arcana!)
+ # note that this only works because 1) temp.fn will search in this frame for this object
+ # par.grid has been coerced to a data frame so one has a concept of a row subscript.
+ cov.args.temp <- as.list(par.grid[k, ])
+ names(cov.args.temp) <- names(par.grid)
+ currentCov.args<- c(cov.args.temp, cov.args)
+ # optimize over lambda if lambda.profile is TRUE
+ optim.args = list(method = "BFGS",
+ control = list(fnscale = -1, parscale = c(0.5),
+ ndeps = c(0.05)))
+ if (lambda.profile) {
+ # set up matrix to store evaluations from within optim
+ MLEfit0 <- mKrigMLEJoint(x, y, weights=weights, Z=Z,
+ lambda.start = lambda.start,
+ cov.params.start = NULL,
+ cov.fun = cov.fun,
+ optim.args = optim.args,
+ cov.args = currentCov.args,
+ na.rm = na.rm,
+ mKrig.args = mKrig.args,
+ REML = REML,
+ verbose = verbose)
+ lnLike.eval<- c( lnLike.eval, list(MLEfit0$lnLike.eval))
+ lambda.opt<- MLEfit0$pars.MLE[1]
+ }
+ else {
+ # no refinement for lambda so just save the 'start' value as the final one.
+ lambda.opt <- lambda.start
+ }
+
+# final fit at optimal value
+# (or at the starting value if there was no refinement/maximization over lambda)
+ obj <- do.call("mKrig", c(
+ list(x = x, y = y, weights = weights, Z = Z, na.rm = na.rm),
+ mKrig.args,
+ list(lambda=lambda.opt),
+ list( cov.fun= cov.fun, cov.args = currentCov.args)
+ )
+ )
+ nameCriterion<- ifelse( !REML,
+ "lnProfileLike.FULL",
+ "lnProfileREML.FULL" )
+ if (obj[[nameCriterion]] > lnProfileLike.max) {
+ lnProfileLike.max <- obj[[nameCriterion]]
+ cov.args.MLE <- cov.args.temp
+ lambda.best <- lambda.opt
+ }
+
+# save results of the kth covariance model evaluation
+ summary[k, 1:8] <- c(obj$eff.df, obj[[nameCriterion]],
+ obj$GCV, obj$sigma.MLE.FULL, obj$rho.MLE.FULL, lambda.opt,
+ optim.counts)
+ dimnames(summary) <- list(NULL, c("EffDf",nameCriterion ,
+ "GCV", "sigma.MLE", "rho.MLE", "lambda.MLE", "counts eval",
+ "counts grad"))
+ if (verbose) {
+ cat("Summary: ", k, summary[k, 1:8], fill = TRUE)
+ }
+ }
+ return(list(summary = summary, par.grid = par.grid, cov.args.MLE = cov.args.MLE,
+ lambda.best = lambda.best, lambda.MLE = lambda.best,
+ call = match.call(), lnLike.eval = lnLike.eval)
+ )
+}
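The loop above evaluates each row of `par.grid` in turn, profiles lambda at that covariance setting (warm-starting from the previous optimum), and keeps the parameters with the highest likelihood. A minimal Python sketch of that strategy, using a toy unimodal likelihood as a stand-in for the mKrig profile likelihood (all names here are hypothetical, not fields functions):

```python
def toy_lnlike(theta, lam):
    # hypothetical likelihood surface, peaked at theta = 2, lambda = 0.1
    return -((theta - 2.0) ** 2) - (lam - 0.1) ** 2

def profile_lambda(theta, lam_start, step=0.5, tol=1e-6):
    # crude 1-D maximizer over lambda (stand-in for the optim/BFGS call)
    lam = lam_start
    while step > tol:
        for cand in (lam - step, lam + step):
            if toy_lnlike(theta, cand) > toy_lnlike(theta, lam):
                lam = cand
        step /= 2.0
    return lam

def grid_search(theta_grid, lam_start=0.5):
    best = (float("-inf"), None, None)
    lam_opt = lam_start                 # warm start, as in the R loop
    for theta in theta_grid:
        lam_opt = profile_lambda(theta, lam_opt)
        ll = toy_lnlike(theta, lam_opt)
        if ll > best[0]:
            best = (ll, theta, lam_opt)
    return best

ll, theta_mle, lam_mle = grid_search([0.5, 1.0, 2.0, 4.0])
```

The warm start matters in practice: nearby grid points tend to have nearby optimal lambdas, so each 1-D profiling step starts close to its answer.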
diff --git a/R/mKrigMLEJoint.R b/R/mKrigMLEJoint.R
new file mode 100644
index 0000000..bac65d7
--- /dev/null
+++ b/R/mKrigMLEJoint.R
@@ -0,0 +1,197 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+
+mKrigMLEJoint <- function(x, y, weights = rep(1, nrow(x)), Z = NULL,
+ mKrig.args = NULL,
+ na.rm = TRUE,
+ cov.fun = "stationary.cov", cov.args=NULL,
+ lambda.start = .5,
+ cov.params.start = NULL,
+ optim.args = NULL,
+ abstol = 1e-4,
+ parTransform = NULL,
+ REML = FALSE,
+ verbose = FALSE) {
+ # overwrite basic data to remove NAs. This has to be done before distance
+ # matrices are precomputed (see below)
+ if( na.rm){
+ obj<- mKrigCheckXY(x, y, weights, Z, na.rm)
+ x<- obj$x
+ y<- obj$y
+ weights<- obj$weights
+ Z<- obj$Z
+ }
+ #set default optim.args if necessary
+ # abstol anticipates that this is a likelihood, so differences of 1e-4 are not appreciable
+ #
+ if(is.null(optim.args)){
+ optim.args = list(method = "BFGS",
+ control=list(fnscale = -1,
+ ndeps = rep(log(1.1),length(cov.params.start)+1),
+ abstol = abstol,
+ maxit = 20)
+ )
+ }
+# main way to keep track of parameters to optimize -- lambda always included
+parNames<- c( "lambda", names(cov.params.start))
+if( is.null(parTransform)){
+ # parTransform: log/exp
+ parTransform<- function( ptemp, inv=FALSE){
+ if( !inv){ log( ptemp)}
+ else{
+ exp(ptemp)
+ }
+ }
+}
+if(verbose){
+ cat("parameters to optimize: ", parNames, fill=TRUE)
+}
+#check which optimization options the covariance function supports
+ supportsDistMat = supportsArg(cov.fun, "distMat")
+#precompute distance matrix if possible so it only needs to be computed once
+ if(supportsDistMat & is.null( cov.args$distMat)) {
+ #Get distance function and arguments if available
+ #
+ Dist.fun= c(cov.args)$Distance
+ Dist.args=c(cov.args)$Dist.args
+
+ #If user left all distance settings NULL, use rdist with compact option.
+ #Use rdist function by default in general.
+ #
+ if(is.null(Dist.fun)) {
+ Dist.fun = "rdist"
+ if(is.null(Dist.args))
+ Dist.args = list(compact=TRUE)
+ }
+ distMat = do.call(Dist.fun, c(list(x), Dist.args))
+ #set cov.args for optimal performance
+ cov.args = c(cov.args, list(distMat=distMat, onlyUpper=TRUE))
+ }
+# these are all the arguments needed to call mKrig except lambda and cov.args
+ mKrig.args <- c(list(x = x, y = y, weights = weights, Z = Z),
+ mKrig.args,
+ list(cov.fun=cov.fun)
+ )
+# reset switch so trace is not found for each evaluation of the likelihood.
+ mKrig.args$find.trA = FALSE
+# output matrix to summarize results
+ ncolSummary = 7 + length(parNames)
+ summary <- matrix(NA, nrow = 1, ncol = ncolSummary)
+ dimnames(summary) <- list(NULL, c("EffDf", "lnProfLike", "GCV", "sigma.MLE",
+ "rho.MLE", parNames,
+ "counts eval","counts grad"))
+
+ lnProfileLike.max <- -Inf
+
+ #
+ # optimize over (some) covariance parameters and lambda
+ capture.evaluations <- matrix(NA, ncol = length(parNames) + 4 , nrow = 1,
+ dimnames = list(NULL,
+ c( parNames,
+ "rho.MLE",
+ "sigma.MLE",
+ "lnProfileLike.FULL",
+ "lnProfileREML.FULL")
+ )
+ )
+ capture.env <- environment()
+# call to optim with initial start (default is log scaling )
+ init.start <- parTransform( unlist(c(lambda.start, cov.params.start)), inv=FALSE)
+# cat("init.start", init.start, fill=TRUE)
+ optimResults <- do.call(optim, c( list(par=init.start),
+ list(mKrigJointTemp.fn),
+ optim.args,
+ list( parNames = parNames,
+ parTransform = parTransform,
+ mKrig.args = mKrig.args,
+ cov.args = cov.args,
+ capture.env = capture.env,
+ REML = REML)
+ )
+ )
+#get optim results
+ optim.counts <- optimResults$counts
+ parOptimum<- parTransform(optimResults$par, inv=TRUE)
+# first row is just NAs
+ lnLike.eval <- capture.evaluations[-1,]
+
+ nameCriterion<- ifelse( !REML,
+ "lnProfileLike.FULL",
+ "lnProfileREML.FULL" )
+ ind<- which( lnLike.eval[ , nameCriterion]
+ == optimResults$value )
+ ind<- max( ind)
+ # below is an aspect of optim I don't understand and thought to flag
+ #if( length(ind)!=1 ){
+ # cat( "Weirdness in optimization. See lnLike.eval rows: ", ind,
+ # fill=TRUE )
+ # ind<- max( ind)
+ #}
+# save results of the best covariance model evaluation in a neat table
+ summary <- c( optimResults$value,
+ parOptimum,
+ lnLike.eval[ind,"sigma.MLE"],
+ lnLike.eval[ind,"rho.MLE"],
+ optim.counts)
+ names(summary) <- c(nameCriterion, parNames,
+ "sigmaMLE", "rhoMLE", "funEval", "gradEval")
+ out = c( list(summary=summary, lnLike.eval = lnLike.eval, optimResults=optimResults,
+ pars.MLE=parOptimum, parTransform = parTransform))
+ return(out)
+}
+
+# Define the objective function as a tricksy call to mKrig
+# if y is a matrix of replicated data sets use the log likelihood for the complete data sets
+ mKrigJointTemp.fn <- function(parameters,
+ mKrig.args, cov.args, parTransform, parNames,
+ REML=FALSE,
+ capture.env) {
+ # optimization is over a transformed scale ( so need to back transform for mKrig)
+ tPars<- parTransform( parameters, inv=TRUE)
+ names( tPars)<- parNames
+ #get all this eval's covariance arguments using the input parameters
+ cov.args.temp = c(cov.args, tPars)
+ # NOTE: FULL refers to estimates collapsed across the replicates if Y is a matrix
+ # assign to hold the last mKrig object
+ hold <- do.call("mKrig", c(mKrig.args,
+ cov.args.temp))
+
+ hold = hold[c("rho.MLE.FULL",
+ "sigma.MLE.FULL",
+ "lnProfileLike.FULL",
+ "lnProfileREML.FULL"
+ )]
+
+ # add this evaluation to an object (i.e. here a matrix) in the calling frame
+ temp.eval <- get("capture.evaluations", envir=capture.env)
+ assign("capture.evaluations", rbind(temp.eval,
+ c(parTransform(parameters, inv=TRUE),
+ unlist(hold))),
+ envir = capture.env)
+ if( !REML){
+ return(hold$lnProfileLike.FULL)
+ }
+ else{
+ return(hold$lnProfileREML.FULL)
+ }
+}
+
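mKrigMLEJoint optimizes on a transformed scale: `parTransform` defaults to log/exp, so positive parameters such as lambda and theta stay positive during unconstrained BFGS steps. A small Python sketch of this transform-then-optimize pattern, with a toy objective and a crude coordinate search standing in for optim (all names hypothetical):

```python
import math

def par_transform(p, inv=False):
    # mirror of the default parTransform: log to the optimization scale,
    # exp back to the natural (positive) scale
    return [math.exp(v) for v in p] if inv else [math.log(v) for v in p]

def toy_neg2loglike(lam, theta):
    # hypothetical objective with its minimum at lambda = 0.2, theta = 3
    return ((math.log(lam) - math.log(0.2)) ** 2 +
            (math.log(theta) - math.log(3.0)) ** 2)

def coord_descent(start, iters=60):
    # crude coordinate search on the log scale (stand-in for BFGS)
    z = par_transform(start)            # work in log coordinates
    h = 1.0
    for _ in range(iters):
        improved = False
        for i in range(len(z)):
            for d in (-h, h):
                cand = z[:]
                cand[i] += d
                if (toy_neg2loglike(*par_transform(cand, inv=True)) <
                        toy_neg2loglike(*par_transform(z, inv=True))):
                    z = cand
                    improved = True
        if not improved:
            h /= 2.0                    # shrink the step when stuck
    return par_transform(z, inv=True)

lam_hat, theta_hat = coord_descent([0.5, 1.0])
```

Working on the log scale also makes the `ndeps` finite-difference steps multiplicative, which is why the R code sets them to `log(1.1)` (roughly 10% perturbations).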
diff --git a/R/make.surface.grid.R b/R/make.surface.grid.R
new file mode 100644
index 0000000..05290ff
--- /dev/null
+++ b/R/make.surface.grid.R
@@ -0,0 +1,38 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"make.surface.grid" <- function(grid.list) {
+ #
+ # the old fields version of make.surface.grid was complicated
+ # and we believe rarely used.
+ # this current function is essentially a single-line replacement,
+ # but it adds a grid.list attribute to the grid matrix so that the
+ # information about how the grid was created is carried along,
+ # and it keeps the names of the grid.list variables.
+ # see as.surface
+ temp <- as.matrix(expand.grid(grid.list))
+ # wipe out row names
+ dimnames(temp) <- list(NULL, names(grid.list))
+ # set attribute
+ attr(temp, "grid.list") <- grid.list
+ temp
+}
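The workhorse here is R's `expand.grid`, which enumerates all combinations with the first variable varying fastest. A Python sketch of the same enumeration: `itertools.product` varies the last position fastest, so the sequences are reversed going in and each tuple is flipped back (function name is illustrative only):

```python
from itertools import product

def make_surface_grid(grid_list):
    # mimic expand.grid ordering: the FIRST variable varies fastest,
    # whereas itertools.product varies the LAST position fastest,
    # so feed product the reversed sequences and flip each tuple back
    names = list(grid_list)
    seqs = [grid_list[n] for n in names]
    rows = [tuple(reversed(t)) for t in product(*reversed(seqs))]
    return names, rows

names, rows = make_surface_grid({"x": [0, 1, 2], "y": [10, 20]})
# x cycles 0,1,2 within each y value, matching expand.grid
```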
diff --git a/R/matern.image.cov.R b/R/matern.image.cov.R
new file mode 100644
index 0000000..8f0d1f0
--- /dev/null
+++ b/R/matern.image.cov.R
@@ -0,0 +1,74 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+matern.image.cov <- function(ind1, ind2, Y, cov.obj = NULL,
+ setup = FALSE, grid, M = NULL, N = NULL,theta=1, smoothness=.5 ) {
+ if (is.null(cov.obj)) {
+ dx <- grid$x[2] - grid$x[1]
+ dy <- grid$y[2] - grid$y[1]
+ m <- length(grid$x)
+ n <- length(grid$y)
+ if (is.null(M))
+ M <- ceiling2(2 * m)
+ if (is.null(N))
+ N <- ceiling2(2 * n)
+# make sure M and N are even.
+# (not sure what it means if this is not the case!)
+ if( M%%2 !=0) {
+ M<- M+1}
+ if( N%%2 !=0) {
+ N<- N+1}
+# need to evaluate the covariance between the center of the grid and
+# every grid point do this using several simple steps for efficiency.
+ xGrid<- (1:M) * dx - (dx * M)/2
+ yGrid<- (1:N) * dy - (dy * N)/2
+# a matrix the same size as the grid that has the distance between every
+# grid point and the center point.
+ bigDistance<-
+ sqrt(
+ matrix( xGrid^2, M,N, byrow=FALSE) +
+ matrix( yGrid^2, M,N, byrow=TRUE) )
+# this should make for a nice image plot of the covariance w.r.t. the center point
+ out<- Matern( bigDistance /theta, smoothness=smoothness)
+ temp <- matrix(0, nrow = M, ncol = N)
+ temp[M/2, N/2] <- 1
+ wght <- fft(out)/(fft(temp) * M * N)
+ cov.obj <- list(m = m, n = n, grid = grid, N = N, M = M,
+ wght = wght, call = match.call())
+ if (setup) {
+ return(cov.obj)
+ }
+ }
+ temp <- matrix(0, nrow = cov.obj$M, ncol = cov.obj$N)
+ if (missing(ind1)) {
+ temp[1:cov.obj$m, 1:cov.obj$n] <- Y
+ Re(fft(fft(temp) * cov.obj$wght, inverse = TRUE)[1:cov.obj$m,
+ 1:cov.obj$n])
+ }
+ else {
+ if (missing(ind2)) {
+ temp[ind1] <- Y
+ }
+ else {
+ temp[ind2] <- Y
+ }
+ Re(fft(fft(temp) * cov.obj$wght, inverse = TRUE)[ind1])
+ }
+}
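The fft trick above rests on a standard identity: multiplying by a stationary covariance matrix on a regular grid is a circular convolution with the covariance kernel, once the grid is embedded in one roughly twice as large (hence `M = ceiling2(2*m)`). A 1-D pure-Python sketch of that identity, using direct O(N^2) convolution where fields uses fft for speed (all function names here are illustrative):

```python
import math

def matern_half(d, theta=1.0):
    # Matern covariance with smoothness 0.5 reduces to the exponential
    return math.exp(-d / theta)

def cov_times_vector_direct(x, y):
    # dense covariance matrix times vector: the quantity being accelerated
    n = len(x)
    return [sum(matern_half(abs(x[i] - x[j])) * y[j] for j in range(n))
            for i in range(n)]

def cov_times_vector_circulant(x, y):
    # embed the length-n problem in a circulant problem of size N = 2n;
    # circular convolution with the kernel then reproduces the first n
    # entries exactly (fields evaluates this convolution via fft)
    n = len(x)
    N = 2 * n
    dx = x[1] - x[0]
    # kernel over signed lags -n..n-1, wrapped onto indices 0..N-1
    kernel = [matern_half(abs((k + n) % N - n) * dx) for k in range(N)]
    ypad = list(y) + [0.0] * (N - n)      # zero-pad the data vector
    out = [sum(kernel[(i - j) % N] * ypad[j] for j in range(N))
           for i in range(N)]
    return out[:n]

x = [i * 0.1 for i in range(8)]
y = [1.0, -2.0, 0.5, 3.0, 0.0, 1.5, -1.0, 2.0]
a = cov_times_vector_direct(x, y)
b = cov_times_vector_circulant(x, y)
```

The padding is what prevents the circular wrap-around from contaminating the answer; that is the role of doubling M and N in the R code.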
diff --git a/R/minimax.crit.R b/R/minimax.crit.R
new file mode 100644
index 0000000..44486c4
--- /dev/null
+++ b/R/minimax.crit.R
@@ -0,0 +1,31 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"minimax.crit" <- function(obj, des = TRUE, R) {
+ R <- as.matrix(R)
+ id <- 1:nrow(R)
+ if (des)
+ Dset <- attr(obj, "best.id")
+ else Dset <- obj
+ Cset <- id[-Dset]
+ dist.mat <- rdist(R[Cset, ], R[Dset, ])
+ mM.crit <- max(apply(dist.mat, 1, min))
+ mM.crit
+}
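The criterion computed above is the coverage radius of a design: for each candidate point not in the design, find the distance to its nearest design point, then take the worst (largest) such distance. A pure-Python sketch (function name illustrative):

```python
def minimax_crit(candidates, design):
    # largest distance from any candidate to its nearest design point
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return max(min(dist(c, d) for d in design) for c in candidates)

# 1-D example: design points at 0 and 10, candidates in between;
# nearest-design distances are 2, 5, and 3, so the criterion is 5
design = [(0.0,), (10.0,)]
candidates = [(2.0,), (5.0,), (7.0,)]
crit = minimax_crit(candidates, design)
```

A good minimax design makes this number small: no candidate location is ever far from a design point.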
diff --git a/R/minitri.R b/R/minitri.R
new file mode 100644
index 0000000..7f4c719
--- /dev/null
+++ b/R/minitri.R
@@ -0,0 +1,72 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"minitri" <- structure(list(swim = c(13.77, 14.25,
+ 12.4, 13.6, 11.23, 14.6, 15.63, 13.62, 17.03, 13.1, 16.5,
+ 15.57, 14.28, 16.13, 14.98, 15.65, 14.55, 13.73, 14.67, 14.63,
+ 12.53, 16.52, 14.45, 17.23, 15.77, 13.53, 17.38, 16.58, 16.97,
+ 12.9, 15.48, 15.6, 16.43, 17.37, 15.65, 20.17, 16.57, 15.75,
+ 18.32, 16.15, 18.77, 19.55, 17.05, 15.75, 17.47, 22.18, 12.78,
+ 19.05, 17, 18.65, 18.38, 18.03, 18.08, 18.22, 21.3, 17.82,
+ 16, 15.3, 19.8, 16.33, 14.9, 18.08, 17, 24.77, 17.03, 21.72,
+ 14.77, 17.28, 19.88, 19.75, 18.17, 20.67, 17.6, 18.07, 20.53,
+ 16.78, 18.42, 21.43, 25.35, 16.37, 18.88, 20.3, 21.93, 20,
+ 19.42, 16.97, 15.25, 22.52, 16.8, 17.58, 20.45, 21.78, 17.92,
+ 24.83, 20.38, 20.37, 28.35, 23.95, 25.15, 18.57, 19.95, 26.22,
+ 22.08, 25.77, 23.7, 26.15, 18.05, 30.7, 29.88, 30.77), bike = c(39.72,
+ 39, 42.2, 39.95, 41.72, 42.25, 43.77, 42.17, 40.63, 44.68,
+ 43.63, 42.78, 41.7, 45.57, 42.83, 44.52, 43.9, 45.73, 43.05,
+ 44.9, 48.07, 44.93, 47.87, 45.55, 46.1, 44.6, 45.72, 44.73,
+ 46.85, 49.4, 47.43, 44.52, 43.67, 47.7, 48.57, 45.78, 47.35,
+ 45.63, 46.47, 47.12, 49.62, 48.5, 54.4, 49.4, 48.77, 46.97,
+ 49.92, 48.98, 50.57, 49.52, 52.58, 47.62, 50.05, 51.75, 50.68,
+ 52.28, 48.57, 50.38, 46.2, 49.93, 47.7, 52.68, 50.28, 46.12,
+ 53.17, 51.63, 58.42, 55.3, 50.68, 51.82, 51.93, 48.82, 51.87,
+ 55.4, 55.17, 53.82, 50.23, 47.92, 50.98, 51.95, 53.47, 52.5,
+ 53.98, 47.05, 58.92, 52.67, 52.1, 49.72, 61.33, 58.73, 58,
+ 56.23, 58.95, 55.48, 61.03, 57.37, 54.68, 57.62, 58.52, 56.7,
+ 63.82, 59.07, 66.03, 60.83, 62.37, 66.87, 75, 65.38, 68.25,
+ 67.35), run = c(25.02, 25.68, 24.95, 27.65, 29.25, 26.83,
+ 24.8, 28.73, 29, 28.1, 25.85, 29.4, 31.33, 26.2, 30.45, 29.32,
+ 31.13, 30.48, 32.32, 30.62, 29.92, 30.12, 29.7, 29.25, 30.28,
+ 34.07, 31.55, 33.5, 31.03, 32.72, 32.28, 35.1, 35.13, 30.48,
+ 31.5, 30.37, 32.82, 35.8, 32.82, 34.67, 30.33, 30.95, 27.57,
+ 34.08, 33.13, 30.57, 37.43, 32.12, 32.93, 32.4, 29.63, 35.08,
+ 32.65, 30.98, 30.22, 32.12, 38.08, 37.23, 36.95, 36.72, 40.7,
+ 32.87, 36.53, 33.72, 34.45, 31.47, 31.7, 33.03, 35.68, 34.88,
+ 36.57, 37.27, 37.87, 34.32, 32.1, 37.33, 39.73, 39.07, 32.12,
+ 40.52, 36.77, 36.57, 33.55, 42.55, 31.48, 40.8, 43.45, 39.03,
+ 34.12, 37.12, 35.03, 36.45, 38.48, 36.25, 36.57, 40.9, 37.98,
+ 37.98, 37.45, 45.9, 44.18, 45.02, 42.37, 45.38, 51.8, 45.88,
+ 48.18, 51.8, 57.23, 57.35)), .Names = c("swim", "bike", "run"),
+ row.names = c(" 1", " 2", " 3", " 4", " 5", " 6", " 7",
+ " 8", " 9", " 10", " 11", " 12", " 13", " 14", " 15",
+ " 16", " 17", " 18", " 19", " 20", " 21", " 22", " 23",
+ " 24", " 25", " 26", " 27", " 28", " 29", " 30", " 31",
+ " 32", " 33", " 34", " 35", " 36", " 37", " 38", " 39",
+ " 40", " 41", " 42", " 43", " 44", " 45", " 46", " 47",
+ " 48", " 49", " 50", " 51", " 52", " 53", " 54", " 55",
+ " 56", " 59", " 60", " 61", " 62", " 64", " 65", " 66",
+ " 68", " 69", " 70", " 71", " 72", " 73", " 74", " 75",
+ " 76", " 77", " 78", " 79", " 80", " 81", " 82", " 83",
+ " 84", " 85", " 86", " 87", " 88", " 89", " 90", " 91",
+ " 92", " 93", " 96", " 97", " 98", "100", "102", "103",
+ "104", "105", "106", "107", "108", "109", "110", "111",
+ "112", "113", "114", "116", "117", "118", "119"), class = "data.frame")
diff --git a/R/parse.grid.list.R b/R/parse.grid.list.R
new file mode 100644
index 0000000..2c8425d
--- /dev/null
+++ b/R/parse.grid.list.R
@@ -0,0 +1,55 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"parse.grid.list" <- function(grid.list, order.variables = "xy") {
+ #
+ # utility to find the x and y sequences in grid.list
+ # this is used in predictSurface and as.surface
+ #
+ M <- length(grid.list)
+ gcounts <- unlist(lapply(grid.list, FUN = length))
+ xy <- (1:M)[gcounts > 1]
+ if (length(xy) > 2) {
+ stop("only two components of the grid list\ncan have more than one element")
+ }
+ #
+ # swap the roles of x and y
+ if (order.variables == "yx") {
+ xy <- xy[2:1]
+ }
+ #
+ #
+ # here is the good stuff
+ #
+ nx <- gcounts[xy[1]]
+ ny <- gcounts[xy[2]]
+ x <- grid.list[[xy[1]]]
+ y <- grid.list[[xy[2]]]
+ #
+ # extract the names of the x and y components of the
+ # list
+ #
+ xlab <- names(grid.list)[xy[1]]
+ ylab <- names(grid.list)[xy[2]]
+ xlab <- ifelse(is.null(xlab), "X", xlab)
+ ylab <- ifelse(is.null(ylab), "Y", ylab)
+ list(x = x, y = y, nx = nx, ny = ny, xlab = xlab, ylab = ylab,
+ xy = xy)
+}
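The logic above is simple: the surface axes are the (at most two) grid-list components with more than one value; everything else is held fixed. A Python sketch of the same parsing (names illustrative, returning a dict instead of an R list):

```python
def parse_grid_list(grid_list, order_variables="xy"):
    # components with more than one value define the surface axes
    xy = [n for n in grid_list if len(grid_list[n]) > 1]
    if len(xy) > 2:
        raise ValueError("only two components of the grid list "
                         "can have more than one element")
    if order_variables == "yx":
        xy = xy[::-1]               # swap the roles of x and y
    xname, yname = xy
    return {"x": grid_list[xname], "y": grid_list[yname],
            "nx": len(grid_list[xname]), "ny": len(grid_list[yname]),
            "xlab": xname, "ylab": yname}

# elev has a single value, so lon and lat become the surface axes
info = parse_grid_list({"lon": [1, 2, 3], "lat": [4, 5], "elev": [100]})
```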
diff --git a/R/plot.Krig.R b/R/plot.Krig.R
new file mode 100644
index 0000000..ee0db8e
--- /dev/null
+++ b/R/plot.Krig.R
@@ -0,0 +1,77 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"plot.Krig" <- function(x, digits = 4, which = 1:4,
+ ...) {
+ out <- x
+ #
+ # don't do plots 2:4 if a fixed lambda
+ #
+ if (x$fixed.model) {
+ which <- 1
+ }
+ fitted.values <- predict(out)
+ std.residuals <- (out$residuals * sqrt(out$weights))/out$shat.GCV
+ if (any(which == 1)) {
+ temp <- summary(out)
+ plot(fitted.values, out$y, ylab = "Y", xlab = " predicted values",
+ bty = "n", ...)
+ abline(0, 1)
+ # hold <- par("usr")
+ # text(hold[1], hold[4], paste(" R**2 = ", format(round(100 *
+ # temp$covariance, 2)), "%", sep = ""), cex = 0.8,
+ # adj = 0)
+ }
+ if (any(which == 2)) {
+ plot(fitted.values, std.residuals, ylab = "(STD) residuals",
+ xlab = " predicted values", bty = "n", ...)
+ yline(0)
+ hold <- par("usr")
+ # text(hold[1], hold[4], paste(" RMSE =", format(signif(sqrt(sum(out$residuals^2)/(temp$num.observation -
+ # temp$enp)), digits))), cex = 0.8, adj = 0)
+ }
+ if (any(which == 3)) {
+ if (nrow(out$gcv.grid) > 1) {
+ ind <- out$gcv.grid[, 3] < 1e+19
+ out$gcv.grid <- out$gcv.grid[ind, ]
+ yr <- range(unlist(out$gcv.grid[, 3:5]), na.rm = TRUE)
+ plot(out$gcv.grid[, 2], out$gcv.grid[, 3], xlab = "Eff. number of parameters",
+ ylab = " GCV function", bty = "n", ylim = yr,
+
+ ...)
+ lines(out$gcv.grid[, 2], out$gcv.grid[, 4], lty = 3)
+ lines(out$gcv.grid[, 2], out$gcv.grid[, 5], lty = 1)
+ xline(out$eff.df, lwd=2, col="grey")
+ usr.save<- par()$usr
+ usr.save[3:4]<- range( -out$gcv.grid[,7] )
+ par( usr= usr.save, ylog=FALSE)
+ lines( out$gcv.grid[, 2], -out$gcv.grid[,7] ,
+ lty=2, lwd=2, col="blue")
+ axis( side=4)
+ mtext( side=4, line=2, "log profile likelihood ")
+ title("GCV-points, solid-model, dots- single \n REML dashed",
+ cex = 0.5)
+ box()
+ }
+ }
+ if (any(which == 4)) {
+ hist(std.residuals, ylab="")
+ }
+}
diff --git a/R/plot.spatialDesign.R b/R/plot.spatialDesign.R
new file mode 100644
index 0000000..4319bc1
--- /dev/null
+++ b/R/plot.spatialDesign.R
@@ -0,0 +1,23 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"plot.spatialDesign" <- function(x, ...) {
+ pairs(x$design, ...)
+}
diff --git a/R/plot.spatialProcess.R b/R/plot.spatialProcess.R
new file mode 100644
index 0000000..97644a2
--- /dev/null
+++ b/R/plot.spatialProcess.R
@@ -0,0 +1,75 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"plot.spatialProcess" <- function(x, digits = 4, which = 1:4,
+ ...) {
+ out <- x
+ #
+ # don't do plots 2:4 if a fixed lambda
+ #
+ fitted.values <- predict(out)
+ std.residuals <- (out$residuals * sqrt(out$weights))/out$sigma.MLE
+ if (any(which == 1)) {
+ #temp <- summary(out)
+ plot(fitted.values, out$y, ylab = "Y", xlab = " predicted values",
+ bty = "n", ...)
+ abline(0, 1)
+ hold <- par("usr")
+ title("Observations by predicted values")
+
+ }
+ if (any(which == 2)) {
+ plot(fitted.values, std.residuals, ylab = "(STD) residuals",
+ xlab = " predicted values", bty = "n", ...)
+ yline(0)
+ }
+ if (any(which == 3)) {
+ mar.old<- par()$mar
+ summary<- out$MLEInfo$MLEProfileLambda$summary
+ # referring to summary[,2] is fragile -- can be either full or REML
+ par( mar= mar.old + c(0,0,0,2) )
+ plot(summary[,"EffDf" ], summary[,"GCV" ], xlab = "Eff. number of parameters",
+ ylab = "GCV function", type="l",
+ ...)
+ xline( summary[which.min(summary[,"GCV" ] ),"EffDf"])
+ usr.save <- par()$usr
+ usr.save[3:4]<- range( summary[,2 ] )
+ par( usr= usr.save, ylog=FALSE)
+ lines(summary[,"EffDf" ], summary[,2 ],
+ lty=2, lwd=2, col="blue")
+ xline( summary[which.max(summary[,2 ] ),"EffDf"],
+ col="blue")
+ axis( side=4)
+ mtext( side=4, line=2, "log Profile Likelihood(lambda)",cex=.75,
+ col="blue")
+ title("Profile over lambda",
+ cex = 0.6)
+ box()
+ par( mar=mar.old)
+ }
+ if (any(which == 4)) {
+ summary<- out$MLEInfo$MLEGrid$summary
+ thetaGrid<- (out$MLEInfo$MLEGrid$par.grid)$theta
+ plot(thetaGrid,summary[,2], pch=16, xlab="theta (range parameter)", ylab="log Profile Likelihood (theta)")
+ title("Profile likelihood for theta \n (range parameter)")
+ xline( out$theta.MLE, lwd=2, col="grey")
+ lines( splint(thetaGrid,summary[,2], nx=200), lwd=2, col="red")
+ }
+}
diff --git a/R/plot.sreg.R b/R/plot.sreg.R
new file mode 100644
index 0000000..ac8f468
--- /dev/null
+++ b/R/plot.sreg.R
@@ -0,0 +1,61 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"plot.sreg" <- function(x, digits = 4, which = 1:4,
+ ...) {
+ out <- x
+ if (any(which == 1)) {
+ plot(out$x, out$y, ylab = "predicted", xlab = " X", bty = "n",
+ ...)
+ matlines(out$predicted$x, out$predicted$y, lty = 1)
+ }
+ if (any(which == 2) & length(out$lambda) == 1) {
+ plot(out$fitted.values, out$residuals, ylab = "residuals",
+ xlab = " predicted values", bty = "n", ...)
+ yline(0)
+ }
+ if (any(which == 3)) {
+ if (nrow(out$gcv.grid) > 1) {
+ # trim off + infinity due to pole in the denominator of GCV function
+ #with cost
+ ind <- out$gcv.grid[, 3] < 1e+19
+ out$gcv.grid <- out$gcv.grid[ind, ]
+ yr <- range(unlist(out$gcv.grid[, 3:5]), na.rm = TRUE)
+ plot(out$gcv.grid[, 2], out$gcv.grid[, 3], xlab = "Eff. parameters",
+ ylab = " GCV function", bty = "n", ylim = yr,
+ log = "y", ...)
+ lines(out$gcv.grid[, 2], out$gcv.grid[, 4], lty = 2)
+ lines(out$gcv.grid[, 2], out$gcv.grid[, 5], lty = 1)
+ xline(out$eff.df)
+ title("GCV - points, solid - GCV model,\ndashed - GCV one",
+ cex = 0.6)
+ }
+ }
+ if (any(which == 4)) {
+ if (length(out$lambda) == 1) {
+ hist(out$residuals, xlab = "Residuals", main = "")
+ }
+ else {
+ bplot(out$residuals, names = format(round(out$trace,
+ 1)), xlab = "eff df")
+ title("Residuals")
+ }
+ }
+}
diff --git a/R/plot.surface.R b/R/plot.surface.R
new file mode 100644
index 0000000..4914a13
--- /dev/null
+++ b/R/plot.surface.R
@@ -0,0 +1,100 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"plot.surface" <- function(x, main = NULL, type = "C",
+ zlab = NULL, xlab = NULL, ylab = NULL, levels = NULL, zlim = NULL,
+ graphics.reset = NULL, labcex = 0.6, add.legend = TRUE, ...) {
+ obj <- x
+ old.par <- par(no.readonly = TRUE)
+ if (is.na(match(type, c("b", "c", "C", "I", "p")))) {
+ stop("plot type does not match b, c, C, I, or p.")
+ }
+ if (is.null(zlim)) {
+ zlim = range(obj$z, na.rm = TRUE)
+ }
+ if (is.null(graphics.reset)) {
+ graphics.reset <- (type == "b")
+ }
+ if (graphics.reset) {
+ on.exit(par(old.par))
+ }
+ if (is.null(xlab)) {
+ if (is.null(obj$xlab))
+ xlab <- "X"
+ else xlab <- obj$xlab
+ }
+ if (is.null(ylab)) {
+ if (is.null(obj$ylab))
+ ylab <- "Y"
+ else ylab <- obj$ylab
+ }
+ if (is.null(zlab)) {
+ if (is.null(obj$zlab))
+ zlab <- "Z"
+ else zlab <- obj$zlab
+ }
+ if (is.null(main))
+ if (!is.null(obj$main))
+ main <- obj$main
+ if (type == "b")
+ set.panel(1, 2, TRUE)
+ if (type == "p" | type == "b") {
+ if (type == "b") {
+ add.legend <- FALSE
+ old.mar <- par()$mar
+ par(mar = c(0, 5, 0, 0))
+ }
+ drape.plot(obj, xlab = xlab, ylab = ylab, zlab = zlab,
+ zlim = zlim, add.legend = add.legend, ...)
+ if (!is.null(main))
+ title(main)
+ }
+ if (type == "I") {
+ image.plot(obj$x, obj$y, obj$z, xlab = xlab, ylab = ylab,
+ zlim = zlim, ...)
+ if ((!is.null(main)) & type != "b")
+ title(main)
+ }
+ if (type == "c") {
+ if (is.null(levels))
+ levels <- pretty(obj$z[!is.na(obj$z)], 5)
+ contour(obj$x, obj$y, obj$z, levels = levels, labcex = labcex,
+ lwd = 2, ...)
+ if ((!is.null(main)) & type != "b")
+ title(main)
+ }
+ if (type == "b" | type == "C") {
+ if (type == "b") {
+ par(mar = old.mar)
+ }
+ image.plot(obj$x, obj$y, obj$z, xlab = xlab, ylab = ylab,
+ graphics.reset = graphics.reset, zlim = zlim, ...)
+ if (is.null(levels))
+ levels <- pretty(obj$z[!is.na(obj$z)], 5)
+ contour(obj$x, obj$y, obj$z, add = TRUE, levels = levels,
+ labcex = labcex, col = "black", lwd = 2)
+ if ((!is.null(main)) & type != "b")
+ title(main)
+ }
+ invisible()
+}
diff --git a/R/plot.vgram.matrix.R b/R/plot.vgram.matrix.R
new file mode 100644
index 0000000..767009e
--- /dev/null
+++ b/R/plot.vgram.matrix.R
@@ -0,0 +1,37 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"plot.vgram.matrix" <- function(x, ...) {
+ ind <- x$ind
+ ir <- range(ind[, 1])
+ jr <- range(ind[, 2])
+ # x and y grid values
+ temp.list <- list(x = (ir[1]:ir[2]) * x$dx, y = (jr[1]:jr[2]) *
+ x$dy)
+ # fill in a matrix with variogram values
+ ind2 <- cbind(ind[, 1] - min(ind[, 1]) + 1, ind[, 2] - min(ind[,
+ 2]) + 1)
+ temp <- matrix(NA, nrow = max(ind2[, 1]), ncol = max(ind2[,
+ 2]))
+ temp[ind2] <- x$vgram.full
+ temp.list$z <- temp
+ # plot it!
+ image.plot(temp.list, ...)
+}
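The matrix fill above (`temp[ind2] <- x$vgram.full`) relies on a base-R idiom that is easy to miss: indexing a matrix with a two-column matrix of (row, column) pairs scatters a vector of values into exactly those cells. A minimal, self-contained sketch (the index pairs and values here are made up for illustration):

```r
# scatter values into a grid by indexing with a two-column (row, col) matrix,
# the same idiom plot.vgram.matrix uses to place variogram values on the grid
ind  <- cbind(c(1, 3, 2), c(2, 1, 3))   # hypothetical (i, j) pairs
vals <- c(10, 20, 30)
m <- matrix(NA, nrow = 3, ncol = 3)
m[ind] <- vals                           # m[1,2]=10, m[3,1]=20, m[2,3]=30
```

Cells not named in `ind` keep their `NA`, which is why the variogram matrix is initialized with `NA` before the fill.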
diff --git a/R/poly.image.R b/R/poly.image.R
new file mode 100644
index 0000000..c0508f9
--- /dev/null
+++ b/R/poly.image.R
@@ -0,0 +1,100 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+poly.image.regrid <- function(x) {
+ ##################################
+ temp.addcol <- function(X) {
+ N <- ncol(X)
+ # add extra columns to X, on either side
+ cbind(X[, 1] - (X[, 2] - X[, 1]), X, (X[, N] - X[, (N -
+ 1)]) + X[, N])
+ }
+ ###############################
+ # find approximate grid with z values at centers
+ M <- nrow(x)
+ N <- ncol(x)
+ # new x matrix that is the midpoints of original grid points.
+ x <- (x[, 1:(N - 1)] + x[, 2:N])/2
+ x <- (x[1:(M - 1), ] + x[2:M, ])/2
+ # now add extra rows and columns on all sides
+ x <- t(temp.addcol(x))
+ t(temp.addcol(x))
+}
+poly.image <- function(x, y, z, col = tim.colors(64),
+ breaks, transparent.color = "white", midpoint = FALSE, zlim = range(z,
+ na.rm = TRUE), xlim = range(x), ylim = range(y), add = FALSE,
+ border = NA, lwd.poly = 1, ...) {
+ # check dimensions
+ Dx <- dim(x)
+ Dy <- dim(y)
+ if (any((Dx - Dy) != 0)) {
+ stop(" x and y matrices should have same dimensions")
+ }
+ # check whether grid and z values coincide.
+ Dz <- dim(z)
+ if (all((Dx - Dz) == 0) & !midpoint) {
+ # expand grid in a linear way so that the z are not
+ # grid box centers
+ x <- poly.image.regrid(x)
+ y <- poly.image.regrid(y)
+ }
+ # figure out the breaks make sure that missing breaks are passed as NA.
+ if (missing(breaks)) {
+ breaks <- NA
+ }
+
+ # code values in z based on range to colors.
+ # if midpoint is true z values will be averaged first
+ zcol <- drape.color(z, col = col, midpoint = midpoint, zlim = zlim,
+ transparent.color = transparent.color, breaks = breaks)$color.index
+ # blank if not adding to an existing plot
+ if (!add) {
+ plot(xlim, ylim, type = "n", ...)
+ }
+ N <- ncol(x)
+ Nm1 <- N - 1
+ M <- nrow(x)
+ Mm1 <- M - 1
+ for (i in (1:Mm1)) {
+ # draw polygons one row at a time
+ # uses feature of polygon to draw multiple regions with NAs
+ # marking start and end.
+ xp <- cbind(x[i, 1:Nm1], x[i + 1, 1:Nm1], x[i + 1, 2:N],
+ x[i, 2:N], rep(NA, Nm1))
+ yp <- cbind(y[i, 1:Nm1], y[i + 1, 1:Nm1], y[i + 1, 2:N],
+ y[i, 2:N], rep(NA, Nm1))
+ xp <- c(t(xp))
+ yp <- c(t(yp))
+ pcol <- c(zcol[i, 1:Nm1])
+
+ # draw each poly with different color including the border
+ # if the border color has not been specified.
+ # this will avoid missing some space on some output devices.
+ # one can also crank down width of border lines to avoid rounded corners
+
+ polygon(xp, yp, border = pcol, col = pcol, lwd = lwd.poly)
+
+ # fill in border with different color if it is not an NA.
+ if (!is.na(border)) {
+ polygon(xp, yp, border = border, col = NA, lwd = lwd.poly)
+ }
+
+ }
+}
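The geometry of `poly.image.regrid` can be checked with a small example. The helper below reproduces the same logic inline (so the sketch runs without the fields package): an M x N matrix of cell-center coordinates becomes an (M+1) x (N+1) matrix of cell corners, with linear extrapolation at the edges.

```r
# inline copy of the regrid logic, kept separate from fields for this sketch
regrid <- function(x) {
    addcol <- function(X) {
        N <- ncol(X)
        cbind(X[, 1] - (X[, 2] - X[, 1]), X, X[, N] + (X[, N] - X[, N - 1]))
    }
    M <- nrow(x)
    N <- ncol(x)
    x <- (x[, 1:(N - 1)] + x[, 2:N]) / 2   # midpoints across columns
    x <- (x[1:(M - 1), ] + x[2:M, ]) / 2   # midpoints across rows
    t(addcol(t(addcol(x))))                # extend one cell on every side
}
# cell-center x-coordinates equal to the column index: 1, 2, 3
centers <- matrix(rep(1:3, each = 3), 3, 3)
corners <- regrid(centers)
dim(corners)      # 4 x 4: one more corner than centers in each direction
corners[1, ]      # 0.5 1.5 2.5 3.5
```

Each original center ends up surrounded by four corners at half-integer offsets, which is what `poly.image` needs to draw one quadrilateral per z value.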
diff --git a/R/predict.Krig.R b/R/predict.Krig.R
new file mode 100644
index 0000000..0033afc
--- /dev/null
+++ b/R/predict.Krig.R
@@ -0,0 +1,150 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+
+# wrapper for the Tps object: dispatch directly to the Krig method,
+# since a Tps fit is stored as a Krig object
+"predict.Tps"<- function(object, ...){
+ predict.Krig(object, ...)
+ }
+
+"predict.Krig" <- function(object, x = NULL, Z = NULL,
+ drop.Z = FALSE, just.fixed = FALSE, lambda = NA, df = NA,
+ model = NA, eval.correlation.model = TRUE, y = NULL, yM = NULL,
+ verbose = FALSE, ...) {
+ #NOTE: most of this function is figuring out what to do!
+ #
+ # check that derivative is not called
+ if (!is.null(list(...)$derivative)) {
+ stop("For derivatives use predictDerivative")
+ }
+ # y is full data yM are the data collapsed to replicate means
+ # if new data is not passed then copy from the object
+ if (is.null(y) & is.null(yM)) {
+ temp.c <- object$c
+ temp.d <- object$d
+ }
+ # check for passed x but no Z -- this is an error
+ # if there are Z covariates in the model and drop.Z is FALSE
+ ZinModel<- !is.null(object$Z)
+ newX<- !is.null(x)
+ missingZ<- is.null(Z)
+ if( ZinModel&newX){
+ if( missingZ & !drop.Z) {
+ stop("Need to specify drop.Z as TRUE or pass Z values")
+ }
+ }
+ # default is to predict at data x's
+ if (is.null(x)) {
+ x <- object$x
+ }
+ else {
+ x <- as.matrix(x)
+ }
+ # default is to predict at data Z's
+ if (is.null(Z)) {
+ Z <- object$Z
+ }
+ else {
+ Z <- as.matrix(Z)
+ }
+ if (verbose) {
+ print(x)
+ print(Z)
+ }
+ # transformations of x values used in Krig
+ xc <- object$transform$x.center
+ xs <- object$transform$x.scale
+ x <- scale(x, xc, xs)
+ # NOTE knots are already scaled in Krig object and are used
+ # in transformed scale.
+ # i.e. knots <- scale( object$knots, xc, xs)
+ #
+ # figure out if the coefficients for the surface need to be recomputed.
+ find.coef <- (!is.null(y) | !is.null(yM) | !is.na(lambda) |
+ !is.na(df) | !is.na(model[1]))
+ if (verbose) {
+ cat("find.coef", find.coef, fill = TRUE)
+ }
+ # convert effective degrees of freedom to equivalent lambda
+ if (!is.na(df)) {
+ lambda <- Krig.df.to.lambda(df, object$matrices$D)
+ }
+ if (!is.na(model)) {
+ lambda <- model[1]
+ }
+ if (is.na(lambda))
+ lambda <- object$lambda
+ #
+ # if the coefficients need to be recomputed do it.
+ if (find.coef) {
+ if (verbose) {
+ cat("new coefs found", fill = TRUE)
+ }
+ object3 <- Krig.coef(object, lambda = lambda, y = y,
+ yM = yM)
+ temp.d <- object3$d
+ temp.c <- object3$c
+ }
+ if (verbose) {
+ cat(" d coefs", fill = TRUE)
+ print(temp.d)
+ cat("c coefs", fill = TRUE)
+ print(temp.c)
+ }
+
+ # this is the fixed part of predictor
+ #
+ Tmatrix <- do.call(object$null.function.name, c(object$null.args,
+ list(x = x, Z = Z, drop.Z = drop.Z)))
+ if (drop.Z) {
+ temp <- Tmatrix %*% temp.d[object$ind.drift]
+ }
+ else {
+ temp <- Tmatrix %*% temp.d
+ }
+ # add in spatial piece
+ if (!just.fixed) {
+ #
+ # Now find sum of covariance functions times coefficients
+ # Note that the multiplication of the cross covariance matrix
+ # by the coefficients is done implicitly in the covariance function
+ #
+ # The covariance function is
+ # evaluated by using its name, the do.call function, and any
+ # additional arguments.
+ #
+ temp <- temp + do.call(object$cov.function.name, c(object$args,
+ list(x1 = x, x2 = object$knots, C = temp.c)))
+ }
+ #
+ # transform back into raw scale if this is a correlation model.
+ # if y's are in the scale of correlations
+ # if so scale by sd and add back in mean
+ correlation.model <- (object$correlation.model & eval.correlation.model)
+ if (correlation.model) {
+ if (!is.na(object$sd.obj[1])) {
+ temp <- temp * predict(object$sd.obj, x)
+ }
+ if (!is.na(object$mean.obj[1])) {
+ temp <- temp + predict(object$mean.obj, x)
+ }
+ }
+ return(temp)
+}
diff --git a/R/predict.interp.surface.R b/R/predict.interp.surface.R
new file mode 100644
index 0000000..be9020b
--- /dev/null
+++ b/R/predict.interp.surface.R
@@ -0,0 +1,24 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"predict.interp.surface" <- function(object, loc,
+ ...) {
+ interp.surface( object, loc,...)
+}
diff --git a/R/predict.sreg.R b/R/predict.sreg.R
new file mode 100644
index 0000000..fcf6161
--- /dev/null
+++ b/R/predict.sreg.R
@@ -0,0 +1,28 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"predict.sreg" <- function(object, x, derivative = 0,
+ model = 1, ...) {
+ if (missing(x)) {
+ x <- object$x
+ }
+ c(splint(object$predicted$x, object$predicted$y[, model],
+ x, derivative = derivative, ...))
+}
diff --git a/R/predictDerivative.Krig.R b/R/predictDerivative.Krig.R
new file mode 100644
index 0000000..6b122e9
--- /dev/null
+++ b/R/predictDerivative.Krig.R
@@ -0,0 +1,72 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+
+
+"predictDerivative.Krig" <- function(object, x = NULL,
+ verbose = FALSE, ...) {
+ # this is a lean evaluation of the derivatives of the
+ # random component of the model.
+ # several checks to make sure this being applied to
+ # simple Krig models where it makes sense
+ if (object$correlation.model) {
+ stop("Can not handle correlation model with derivative evaluation")
+ }
+ if (object$null.function.name != "Krig.null.function") {
+ stop("null space may not be a low order polynomial")
+ }
+ # default is to predict at data x's
+ if (is.null(x)) {
+ x <- object$x
+ }
+ else {
+ x <- as.matrix(x)
+ }
+ # transformations of x values used in Krig
+ xc <- object$transform$x.center
+ xs <- object$transform$x.scale
+ x <- scale(x, xc, xs)
+ # NOTE knots are already scaled in Krig object and are used
+ # in transformed scale.
+ # i.e. knots <- scale( object$knots, xc, xs)
+ temp.d <- object$d
+ temp.c <- object$c
+ if (verbose) {
+ cat(" d coefs", fill = TRUE)
+ print(temp.d)
+ cat("c coefs", fill = TRUE)
+ print(temp.c)
+ }
+ #
+ # this is the polynomial fixed part of predictor
+ #
+ temp1 <- fields.derivative.poly(x, m = object$m, object$d)
+ # add in spatial piece
+ # The covariance function is
+ # evaluated by using its name, the do.call function, and any
+ # additional arguments. Note use of derivative and 'C' arguments
+ # to do multiplication of partials of covariance times the C
+ # vector. If C is a matrix of coefficients an error is produced.
+ temp2 <- do.call(object$cov.function.name, c(object$args,
+ list(x1 = x, x2 = object$knots, derivative = 1, C = temp.c)))
+ # returned value is the matrix of partials of the polynomial plus partials of the spatial part; also add in the chain rule scale factor because the
+ # functional form for the surface uses the coordinates xscaled = (x - xc)/xs
+ return(t(t(temp1 + temp2)/xs))
+}
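The division by `xs` on the last line of `predictDerivative.Krig` is the chain rule for the coordinate scaling: the surface is represented in scaled coordinates u = (x - xc)/xs, so d/dx f((x - xc)/xs) = f'(u)/xs. A quick base-R numeric check with a hypothetical one-dimensional surface:

```r
# chain-rule check: derivative of a surface fit in scaled coordinates
xc <- 2
xs <- 0.5
f  <- function(u) u^3              # hypothetical surface in scaled coords
g  <- function(x) f((x - xc) / xs) # the same surface in raw coordinates
x0 <- 2.3
analytic <- 3 * ((x0 - xc) / xs)^2 / xs          # f'(u) / xs
numeric  <- (g(x0 + 1e-6) - g(x0 - 1e-6)) / 2e-6 # central difference
all.equal(analytic, numeric, tolerance = 1e-5)
```

The same scale factor is applied coordinate-wise in the multivariate case, which is what the `t(t(temp1 + temp2)/xs)` expression does.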
diff --git a/R/predictSE.R b/R/predictSE.R
new file mode 100644
index 0000000..1f6c68f
--- /dev/null
+++ b/R/predictSE.R
@@ -0,0 +1,21 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"predictSE" <- function(object, ...) UseMethod("predictSE")
diff --git a/R/predictSE.family.R b/R/predictSE.family.R
new file mode 100644
index 0000000..dbe0835
--- /dev/null
+++ b/R/predictSE.family.R
@@ -0,0 +1,227 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+
+"predictSurfaceSE"<- function( object,...){
+ UseMethod("predictSurfaceSE")
+}
+
+"predictSurfaceSE.default" <- function(object, grid.list = NULL,
+ extrap = FALSE, chull.mask = NA, nx = 80, ny = 80,
+ xy = c(1,2), verbose = FALSE, ...) {
+ # NOTE:
+ # without grid.list
+ # default is 80X80 grid on first two variables
+ # rest are set to median value of x.
+ if (is.null(grid.list)) {
+ grid.list <- fields.x.to.grid(object$x, nx = nx, ny = ny,
+ xy = xy)
+ }
+ # here is the heavy lifting
+ xg <- make.surface.grid(grid.list)
+# NOTE: the specific predict function called will need to do the checks
+# whether the evaluation of a large number of grid points makes sense.
+ out <- as.surface( xg, predictSE(object, xg,...) )
+ #
+ # if extrapolate is FALSE set all values outside convex hull to NA
+ if (!extrap) {
+ if( is.null( object$x)){
+ stop("need an x matrix in the object")
+ }
+ if (is.na(chull.mask)) {
+ chull.mask <- unique.matrix(object$x[, xy])
+ }
+ out$z[!in.poly(xg[, xy], xp = chull.mask, convex.hull = TRUE)] <- NA
+ }
+ #
+ return(out)
+}
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied to Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"predictSE.Krig" <- function(object, x = NULL, cov = FALSE,
+ verbose = FALSE, ...) {
+ #
+ # name of covariance function
+ call.name <- object$cov.function.name
+ #
+ # default is to predict at data x's
+ if (is.null(x)) {
+ x <- object$x
+ }
+ x <- as.matrix(x)
+ if (verbose) {
+ print(x)
+ }
+ xraw <- x
+ # transformations of x values used in Krig
+ # NOTE knots are already scaled in Krig object
+ xc <- object$transform$x.center
+ xs <- object$transform$x.scale
+ x <- scale(x, xc, xs)
+ #
+ # scaled unique observation locations.
+ xM <- object$xM
+ # find marginal variance before transforming x.
+ if (!is.na(object$sd.obj[1])) {
+ temp.sd <- c(predict(object$sd.obj, xraw))
+ }
+ else {
+ temp.sd <- 1
+ }
+ if (verbose) {
+ print(temp.sd)
+ }
+ # Default is to use parameters in best.model
+ lambda <- object$best.model[1]
+ rho <- object$best.model[3]
+ sigma2 <- object$best.model[2]
+ nx <- nrow(xM)
+ wght.vec <- t(Krig.Amatrix(object, xraw, lambda, eval.correlation.model = FALSE,
+ ...))
+ if (verbose) {
+ cat("wght.vector", fill = TRUE)
+ print(wght.vec)
+ }
+ #var( f0 - yhat)= var( f0) - cov( f0,yhat) - cov( yhat, f0) + cov( yhat)
+ # = temp0 - temp1 - t( temp1) + temp2
+ #
+ # if off diagonal weight matrix is passed then
+ # find inverse covariance matrix
+ # otherwise just create this quickly from diagonal weights
+ #
+ Wi <- Krig.make.Wi(object)$Wi
+ # find covariance of data
+ if (object$nondiag.W) {
+ Cov.y <- rho * do.call(call.name, c(object$args, list(x1 = xM,
+ x2 = xM))) + sigma2 * Wi
+ }
+ else {
+ # this is one case where keeping diagonal
+ # matrix as a vector will not work.
+ Cov.y <- rho * do.call(call.name, c(object$args, list(x1 = xM,
+ x2 = xM))) + sigma2 * diag(Wi)
+ }
+ if (!cov) {
+ # find diagonal elements of covariance matrix
+ # now find the three terms.
+ # note the use of an element by element multiply to only get the
+ # diagonal elements of the full
+ # prediction covariance matrix.
+ #
+ temp1 <- rho * colSums(wght.vec * do.call(call.name,
+ c(object$args, list(x1 = xM, x2 = x))))
+ temp2 <- colSums(wght.vec * (Cov.y %*% wght.vec))
+ #
+ # find marginal variances -- trivial in the stationary case!
+ # Note that for the case of the general covariances
+ # as radial basis functions (RBFs) temp0 should be zero.
+ # Positivity results from the generalized divided difference
+ # properties of RBFs.
+ temp0 <- rho * do.call(call.name, c(object$args, list(x1 = x,
+ marginal = TRUE)))
+ #
+ temp <- temp0 - 2 * temp1 + temp2
+ #
+ return(sqrt(temp * temp.sd^2))
+ }
+ else {
+ #
+ # find full covariance matrix
+ #
+ temp1 <- rho * t(wght.vec) %*% do.call(call.name, c(object$args,
+ list(x1 = xM, x2 = x)))
+ #
+ temp2 <- t(wght.vec) %*% Cov.y %*% wght.vec
+ #
+ temp0 <- rho * do.call(call.name, c(object$args, list(x1 = x,
+ x2 = x)))
+ #
+ temp <- temp0 - t(temp1) - temp1 + temp2
+ temp <- t(t(temp) * temp.sd) * temp.sd
+ #
+ return(temp)
+ }
+}
+
+# fields, Tools for spatial data
+# Copyright 2004-2009, Institute for Mathematics Applied to Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"predictSE.mKrig" <- function(object, xnew = NULL,
+ Z = NULL, verbose = FALSE, drop.Z = FALSE, ...) {
+ #
+ # name of covariance function
+ call.name <- object$cov.function.name
+ #
+ # default is to predict at data x's
+ if (is.null(xnew)) {
+ xnew <- object$x
+ }
+ if ((!drop.Z) & !is.null(object$Z)) {
+ Z <- object$Z
+ }
+ xnew <- as.matrix(xnew)
+ if (!is.null(Z)) {
+ Z <- as.matrix(Z)
+ }
+ if (verbose) {
+ print(xnew)
+ print(Z)
+ }
+ lambda <- object$lambda
+ rho <- object$rhohat
+ sigma2 <- lambda * rho
+ if (verbose) {
+ print(c(lambda, rho, sigma2))
+ }
+ k0 <- do.call(call.name, c(object$args, list(x1 = object$x,
+ x2 = xnew)))
+ # fixed effects matrix includes both the spatial drift and covariates.
+ if (!drop.Z) {
+ t0 <- t(cbind(fields.mkpoly(xnew, m = object$m), Z))
+ }
+ else {
+ stop(" drop.Z not supported")
+ }
+ #
+ # old form based on the predict function
+ # temp1 <- rho*(t0%*% object$Omega %*%t(t0)) -
+ # rho*predict( object, y= k0, x=x) -
+ # rho*predict( object, y= k0, x=x, just.fixed=TRUE)
+
+ # alternative formula using the d and c coefficients directly.
+ # collapseFixedEffect=FALSE because
+ # we want the "fixed effect" computation
+ # to be done separately for each column of k0
+ hold <- mKrig.coef(object, y = k0, collapseFixedEffect=FALSE)
+ temp1 <- rho * (colSums(t0 * (object$Omega %*% t0)) - colSums((k0) *
+ hold$c) - 2 * colSums(t0 * hold$d))
+ # find marginal variances -- trivial in the stationary case!
+ temp0 <- rho * do.call(call.name, c(object$args, list(x1 = xnew,
+ marginal = TRUE)))
+ # Add marginal variance to part from estimate
+ temp <- temp0 + temp1
+ # return square root as the standard error in units of observations.
+ return(sqrt(temp))
+}
+
+
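The decomposition used in `predictSE.Krig`, var(f0 - yhat) = var(f0) - 2 cov(f0, yhat) + var(yhat), can be verified numerically on a toy one-dimensional problem. This sketch uses a hypothetical exponential covariance with unit sill and no nugget, and simple kriging weights, so the result collapses to the textbook simple-kriging variance:

```r
x  <- c(0.1, 0.4, 0.9)                 # observation locations
x0 <- 0.5                              # prediction location
covfun <- function(a, b) exp(-abs(outer(a, b, "-")))  # hypothetical: rho = 1
K  <- covfun(x, x)                     # cov(y, y); no nugget, for simplicity
k0 <- covfun(x, x0)                    # cov(y, f0)
w  <- solve(K, k0)                     # simple kriging weights
# var(f0) - 2 cov(f0, yhat) + var(yhat), with yhat = t(w) %*% y
v  <- c(covfun(x0, x0)) - 2 * sum(w * k0) + c(t(w) %*% K %*% w)
# equals the closed form 1 - t(k0) %*% solve(K) %*% k0
all.equal(v, c(1 - t(k0) %*% solve(K, k0)))
```

The production code differs in that the weights come from `Krig.Amatrix` (so they account for the fixed part and the nugget), but the three-term bookkeeping of `temp0`, `temp1`, and `temp2` is exactly this identity.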
diff --git a/R/predictSEUsingKrigA.R b/R/predictSEUsingKrigA.R
new file mode 100644
index 0000000..e0a9619
--- /dev/null
+++ b/R/predictSEUsingKrigA.R
@@ -0,0 +1,120 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"predictSEUsingKrigA" <- function(object, x = NULL, cov = FALSE,
+ verbose = FALSE, ...) {
+ #
+ # name of covariance function
+ call.name <- object$cov.function.name
+ #
+ # default is to predict at data x's
+ if (is.null(x)) {
+ x <- object$x
+ }
+ x <- as.matrix(x)
+ if (verbose) {
+ print(x)
+ }
+ xraw <- x
+ # transformations of x values used in Krig
+ # NOTE knots are already scaled in Krig object
+ xc <- object$transform$x.center
+ xs <- object$transform$x.scale
+ x <- scale(x, xc, xs)
+ #
+ # scaled unique observation locations.
+ xM <- object$xM
+ # find marginal variance before transforming x.
+ if (!is.na(object$sd.obj[1])) {
+ temp.sd <- c(predict(object$sd.obj, xraw))
+ }
+ else {
+ temp.sd <- 1
+ }
+ # Default is to use parameters in best.model
+ lambda <- object$best.model[1]
+ rho <- object$best.model[3]
+ sigma2 <- object$best.model[2]
+ nx <- nrow(xM)
+ wght.vec <- t(Krig.Amatrix(object, xraw, lambda, ...))
+ if (verbose) {
+ cat("wght.vector", fill = TRUE)
+ print(wght.vec)
+ }
+ #var( f0 - yhat)= var( f0) - cov( f0,yhat) - cov( yhat, f0) + cov( yhat)
+ # = temp0 - temp1 - t( temp1) + temp2
+ #
+ # if off diagonal weight matrix is passed then
+ # find inverse covariance matrix
+ # otherwise just create this quickly from diagonal weights
+ #
+ Wi <- Krig.make.Wi(object)$Wi
+ # find covariance of data
+ if (object$nondiag.W) {
+ Cov.y <- rho * do.call(call.name, c(object$args, list(x1 = xM,
+ x2 = xM))) + sigma2 * Wi
+ }
+ else {
+ # this is one case where keeping diagonal
+ # matrix as a vector will not work.
+ Cov.y <- rho * do.call(call.name, c(object$args, list(x1 = xM,
+ x2 = xM))) + sigma2 * diag(Wi)
+ }
+ if (!cov) {
+ # find diagonal elements of covariance matrix
+ # now find the three terms.
+ # note the use of an element by element multiply to only get the
+ # diagonal elements of the full
+ # prediction covariance matrix.
+ #
+ temp1 <- rho * colSums(wght.vec * do.call(call.name,
+ c(object$args, list(x1 = xM, x2 = x))))
+ temp2 <- colSums(wght.vec * (Cov.y %*% wght.vec))
+ #
+ # find marginal variances -- trivial in the stationary case!
+ # Note that for the case of the general covariances
+ # as radial basis functions (RBFs) temp0 should be zero.
+ # Positivity results from the generalized divided difference
+ # properties of RBFs.
+ temp0 <- rho * do.call(call.name, c(object$args, list(x1 = x,
+ marginal = TRUE)))
+ #
+ temp <- temp0 - 2 * temp1 + temp2
+ #
+ return(sqrt(temp * temp.sd^2))
+ }
+ else {
+ #
+ # find full covariance matrix
+ #
+ temp1 <- rho * t(wght.vec) %*% do.call(call.name, c(object$args,
+ list(x1 = xM, x2 = x)))
+ #
+ temp2 <- t(wght.vec) %*% Cov.y %*% wght.vec
+ #
+ temp0 <- rho * do.call(call.name, c(object$args, list(x1 = x,
+ x2 = x)))
+ #
+ temp <- temp0 - t(temp1) - temp1 + temp2
+ temp <- t(t(temp) * temp.sd) * temp.sd
+ #
+ return(temp)
+ }
+}
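[Editor's sketch, not part of the patch.] The three-term expansion coded above (`temp0 - 2*temp1 + temp2` on the diagonal) is the usual prediction-error decomposition var(f0) - 2 cov(f0, yhat) + var(yhat). A minimal usage sketch, assuming the fields package is attached; `predictSE` is the exported wrapper that reaches this routine for `Krig` objects, and `ChicagoO3` is a data set bundled with fields:

```r
# Sketch: pointwise prediction standard errors from a Krig fit.
library(fields)
data(ChicagoO3)
fit <- Krig(ChicagoO3$x, ChicagoO3$y, theta = 50)
se  <- predictSE(fit)                 # SEs at the observation locations
V   <- predictSE(fit, cov = TRUE)     # full prediction covariance matrix
all.equal(c(se), c(sqrt(diag(V))))    # SEs should be the square roots of diag(V)
```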
diff --git a/R/predictSurface.Krig.R b/R/predictSurface.Krig.R
new file mode 100644
index 0000000..15db8e1
--- /dev/null
+++ b/R/predictSurface.Krig.R
@@ -0,0 +1,71 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+
+
+"predictSurface.Krig" <- function(object, grid.list = NULL,
+ extrap = FALSE, chull.mask = NA, nx = 80, ny = 80,
+ xy = c(1,2), verbose = FALSE,
+ ZGrid=NULL, drop.Z= FALSE, just.fixed=FALSE, ...) {
+
+ if( is.null(ZGrid) & !drop.Z & (!is.null(object$Z)) ) {
+ stop("Need to specify covariate (Z) values or set drop.Z = TRUE")
+ }
+# create a default grid if it is not passed
+ if (is.null(grid.list)) {
+ # NOTE:
+ # without grid.list
+ # default is 80x80 grid on first two variables
+ # rest are set to median value of the x's
+ grid.list <- fields.x.to.grid(object$x, nx = nx, ny = ny,
+ xy = xy)
+ }
+# do some checks on Zgrid and also reshape as a matrix
+# rows index grid locations and columns are the covariates
+# (as Z in predict).
+# if ZGrid is NULL just returns that back
+ Z<- unrollZGrid( grid.list, ZGrid)
+# here is the heavy lifting
+ xg <- make.surface.grid(grid.list)
+# NOTE: the predict function called will need to do some internal checks on
+# whether the evaluation of a large number of grid points (xg) makes sense.
+ if( verbose){
+ print( dim( xg))
+ print( drop.Z)
+ print( dim( Z))
+ }
+ out<- predict(object, x=xg, Z=Z, drop.Z= drop.Z,
+ just.fixed=just.fixed, ...)
+# reshape as list with x, y and z components
+ out <- as.surface( xg, out )
+ #
+ # if extrapolate is FALSE set all values outside convex hull to NA
+ if (!extrap) {
+ if( is.null( object$x)){
+ stop("need an x matrix in object")
+ }
+ if (is.na(chull.mask)) {
+ chull.mask <- unique.matrix(object$x[, xy])
+ }
+ out$z[!in.poly(xg[, xy], xp = chull.mask, convex.hull = TRUE)] <- NA
+ }
+ #
+ return(out)
+}
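[Editor's sketch, not part of the patch.] In use, this method evaluates a `Krig` fit on a grid and, unless `extrap = TRUE`, masks cells outside the convex hull of the data with `NA`. A minimal sketch, assuming fields is attached and reusing its bundled `ChicagoO3` data:

```r
# Sketch: evaluate a Krig fit on the default 80 x 80 grid and plot it.
library(fields)
data(ChicagoO3)
fit  <- Krig(ChicagoO3$x, ChicagoO3$y, theta = 50)
look <- predictSurface(fit)     # list with x, y, z components for plotting
surface(look)                   # image + contour plot of the fitted surface
# extrap = TRUE keeps predictions outside the convex hull of the data
lookAll <- predictSurface(fit, extrap = TRUE)
```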
diff --git a/R/predictSurface.family.R b/R/predictSurface.family.R
new file mode 100644
index 0000000..031127d
--- /dev/null
+++ b/R/predictSurface.family.R
@@ -0,0 +1,91 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"predict.surface" <- function(object, ...) {
+ UseMethod("predict.surface")
+}
+
+predict.surface.default<- function(object,...){
+ cat("predict.surface is now the function predictSurface\n")
+ }
+
+"predictSurface"<- function( object,...){
+ UseMethod("predictSurface")
+}
+
+"predictSurface.default" <- function(object, grid.list = NULL,
+ extrap = FALSE, chull.mask = NA, nx = 80, ny = 80,
+ xy = c(1,2), verbose = FALSE, ...) {
+ # NOTE:
+ # without grid.list
+ # default is 80x80 grid on first two variables
+ # rest are set to median value of x.
+ if (is.null(grid.list)) {
+ grid.list <- fields.x.to.grid(object$x, nx = nx, ny = ny,
+ xy = xy)
+ }
+ # here is the heavy lifting
+ xg <- make.surface.grid(grid.list)
+# NOTE: the specific predict function called will need to do the checks on
+# whether the evaluation of a large number of grid points makes sense.
+ out <- as.surface( xg, predict(object, xg,...) )
+ #
+ # if extrapolate is FALSE set all values outside convex hull to NA
+ if (!extrap) {
+ if( is.null( object$x)){
+ stop("need an x matrix in object")
+ }
+ if (is.na(chull.mask)) {
+ chull.mask <- unique.matrix(object$x[, xy])
+ }
+ out$z[!in.poly(xg[, xy], xp = chull.mask, convex.hull = TRUE)] <- NA
+ }
+ #
+ return(out)
+}
+
+"predictSurface.mKrig" <- function( object, ...){
+ NextMethod("predictSurface.Krig")
+}
+
+"predictSurface.fastTps" <- function(object, grid.list = NULL,
+ extrap = FALSE, chull.mask = NA, nx = 80, ny = 80,
+ xy = c(1,2), verbose = FALSE, ...) {
+# NOTE: See predictSurface.default for comments
+ if (is.null(grid.list)) {
+ grid.list <- fields.x.to.grid(object$x, nx = nx, ny = ny,
+ xy = xy)
+ }
+# in the case of fastTps pass the grid list instead of the locations of grid points
+# (see xg in predictSurface.default)
+ out <- predict(object, grid.list=grid.list, xy=xy, ...)
+ out <- as.surface(grid.list, out )
+ #
+ # if extrapolate is FALSE set all values outside convex hull to NA
+ if (!extrap) {
+ if (is.na(chull.mask)) {
+ chull.mask <- unique.matrix(object$x[, xy])
+ }
+ xg<- make.surface.grid( grid.list)
+ out$z[!in.poly(xg[, xy], xp = chull.mask, convex.hull = TRUE)] <- NA
+ }
+ #
+ return(out)
+}
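[Editor's sketch, not part of the patch.] The `fastTps` method differs from the default by handing the whole `grid.list` to `predict`, so the compactly supported (Wendland) covariance can exploit the grid structure instead of evaluating at scattered points. A minimal sketch, assuming fields is attached; `theta` here is the taper range for the Wendland basis:

```r
# Sketch: thin-plate-spline-like fit with fastTps, evaluated on a grid.
library(fields)
data(ChicagoO3)
fit <- fastTps(ChicagoO3$x, ChicagoO3$y, theta = 100)
out <- predictSurface(fit, nx = 40, ny = 40)   # grid.list built internally
surface(out)                                   # plot the fitted surface
```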
diff --git a/R/print.Krig.R b/R/print.Krig.R
new file mode 100644
index 0000000..9875748
--- /dev/null
+++ b/R/print.Krig.R
@@ -0,0 +1,60 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"print.Krig" <- function(x, digits = 4, ...) {
+ c1 <- "Number of Observations:"
+ c2 <- length(x$residuals)
+ #
+ # print out null space poly info only if 'm' is used
+ if (!is.null(x$args.null$m)) {
+ c1 <- c(c1, "Degree of polynomial null space ( base model):")
+ c2 <- c(c2, x$m - 1)
+ }
+ c1 <- c(c1, "Number of parameters in the null space")
+ c2 <- c(c2, x$nt)
+ c1 <- c(c1, "Parameters for fixed spatial drift")
+ c2 <- c(c2, sum(x$ind.drift))
+ c1 <- c(c1, "Model degrees of freedom:")
+ c2 <- c(c2, format(round(x$eff.df, 1)))
+ c1 <- c(c1, "Residual degrees of freedom:")
+ c2 <- c(c2, format(round(length(x$residuals) - x$eff.df,
+ 1)))
+ c1 <- c(c1, "GCV estimate for sigma:")
+ c2 <- c(c2, format(signif(x$shat.GCV, digits)))
+ c1 <- c(c1, "MLE for sigma:")
+ c2 <- c(c2, format(signif(x$shat.MLE, digits)))
+ c1 <- c(c1, "MLE for rho:")
+ c2 <- c(c2, format(signif(x$rho.MLE, digits)))
+ c1 <- c(c1, "lambda")
+ c2 <- c(c2, format(signif(x$lambda, 2)))
+ c1 <- c(c1, "User supplied rho")
+ c2 <- c(c2, format(signif(x$rho, digits)))
+ c1 <- c(c1, "User supplied sigma^2")
+ c2 <- c(c2, format(signif(x$sigma2, digits)))
+ sum <- cbind(c1, c2)
+ dimnames(sum) <- list(rep("", dim(sum)[1]), rep("", dim(sum)[2]))
+ cat("Call:\n")
+ dput(x$call)
+ print(sum, quote = FALSE)
+ cat("Summary of estimates: \n")
+ print( x$lambda.est)
+ # print( x$warningTable)
+ invisible(x)
+}
diff --git a/R/print.spatial.design.R b/R/print.spatial.design.R
new file mode 100644
index 0000000..e842322
--- /dev/null
+++ b/R/print.spatial.design.R
@@ -0,0 +1,23 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"print.spatialDesign" <- function(x, ...) {
+ print(x$design)
+}
diff --git a/R/print.spatialProcess.R b/R/print.spatialProcess.R
new file mode 100644
index 0000000..89cbf1d
--- /dev/null
+++ b/R/print.spatialProcess.R
@@ -0,0 +1,90 @@
+print.spatialProcess <- function(x, digits = 4, ...) {
+
+ if (is.matrix(x$residuals)) {
+ n <- nrow(x$residuals)
+ NData <- ncol(x$residuals)
+ }
+ else {
+ n <- length(x$residuals)
+ NData <- 1
+ }
+
+ c1 <- "Number of Observations:"
+ c2 <- n
+
+ if (NData > 1) {
+ c1 <- c(c1, "Number of data sets fit:")
+ c2 <- c(c2, NData)
+ }
+
+ c1 <- c(c1, "Degree of polynomial null space ( base model):")
+
+
+ if(x$m !=0 ){
+ c2 <- c(c2, x$m - 1)
+ }
+ else{
+ c2 <- c(c2, NA)
+ }
+ c1 <- c(c1, "Total number of parameters in base model")
+ c2 <- c(c2, x$nt)
+ if (x$nZ > 0) {
+ c1 <- c(c1, "Number of additional covariates (Z)")
+ c2 <- c(c2, x$nZ)
+ }
+ if (!is.na(x$eff.df)) {
+ c1 <- c(c1, " Eff. degrees of freedom")
+ c2 <- c(c2, signif(x$eff.df, digits))
+ if (length(x$trA.info) < x$np) {
+ c1 <- c(c1, " Standard Error of estimate: ")
+ c2 <- c(c2, signif(sd(x$trA.info)/sqrt(length(x$trA.info)),
+ digits))
+ }
+ }
+ c1 <- c(c1, "Smoothing parameter")
+ c2 <- c(c2, signif(x$lambda.fixed, digits))
+
+ c1 <- c(c1, "MLE sigma")
+ c2 <- c(c2, signif(x$sigma.MLE.FULL, digits))
+
+ c1 <- c(c1, "MLE rho")
+ c2 <- c(c2, signif(x$rho.MLE.FULL, digits))
+
+ c1 <- c(c1, "MLE lambda = MLE sigma^2 / MLE rho")
+ c2 <- c(c2, signif(x$lambda.MLE, digits))
+
+ c1 <- c(c1, "MLE theta")
+ c2 <- c(c2, signif(x$theta.MLE, digits))
+
+ c1 <- c(c1, "Nonzero entries in covariance")
+ c2 <- c(c2, x$nonzero.entries)
+ sum <- cbind(c1, c2)
+ dimnames(sum) <- list(rep("", dim(sum)[1]), rep("", dim(sum)[2]))
+ cat("Call:\n")
+ dput(x$call)
+ print(sum, quote = FALSE)
+ cat(" ", fill = TRUE)
+ cat(" Covariance Model:", x$cov.function, fill = TRUE)
+ if (x$cov.function == "stationary.cov") {
+ cat(" Covariance function: ", ifelse(is.null(x$args$Covariance),
+ "Exponential", x$args$Covariance), fill = TRUE)
+ }
+ if (!is.null(x$args)) {
+ cat(" Non-default covariance arguments and their values ",
+ fill = TRUE)
+ nlist <- as.character(names(x$args))
+ NL <- length(nlist)
+ for (k in 1:NL) {
+ cat(" Argument:", nlist[k], " ")
+ if (object.size(x$args[[k]]) <= 1024) {
+ cat("has the value(s): ", fill = TRUE)
+ print(x$args[[k]])
+ }
+ else {
+ cat("too large to print value, size > 1K ...",
+ fill = TRUE)
+ }
+ }
+ }
+ invisible(x)
+}
diff --git a/R/print.sreg.R b/R/print.sreg.R
new file mode 100644
index 0000000..a2bfa27
--- /dev/null
+++ b/R/print.sreg.R
@@ -0,0 +1,52 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"print.sreg" <- function(x, ...) {
+ if (length(x$lambda) > 1) {
+ c1 <- "Number of Observations:"
+ c2 <- (x$N)
+ c1 <- c(c1, "Number of values of lambda in grid:")
+ c2 <- c(c2, length(x$lambda))
+ sum <- cbind(c1, c2)
+ }
+ else {
+ digits <- 4
+ N <- x$N
+ c1 <- "Number of Observations:"
+ c2 <- (x$N)
+ c1 <- c(c1, "Unique Observations:")
+ c2 <- c(c2, length(x$xM))
+ c1 <- c(c1, "Effective degrees of freedom:")
+ c2 <- c(c2, format(round(x$trace, 1)))
+ c1 <- c(c1, "Residual degrees of freedom:")
+ c2 <- c(c2, format(round(x$N - x$trace, 1)))
+ c1 <- c(c1, "Residual root mean square:")
+ c2 <- c(c2, format(signif(sqrt(sum(x$residuals^2)/N),
+ 4)))
+ c1 <- c(c1, "Lambda ( smoothing parameter)")
+ c2 <- c(c2, format(signif((x$lambda), 4)))
+ sum <- cbind(c1, c2)
+ }
+ dimnames(sum) <- list(rep("", dim(sum)[1]), rep("", dim(sum)[2]))
+ cat("Call:\n")
+ dput(x$call)
+ print(sum, quote = FALSE)
+ invisible(x)
+}
diff --git a/R/print.summary.Krig.R b/R/print.summary.Krig.R
new file mode 100644
index 0000000..5572e15
--- /dev/null
+++ b/R/print.summary.Krig.R
@@ -0,0 +1,100 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"print.summary.Krig" <- function(x, ...) {
+ digits <- x$digits
+ c1 <- "Number of Observations:"
+ c2 <- x$num.observation
+ c1 <- c(c1, "Number of unique points:")
+ c2 <- c(c2, x$num.uniq)
+ #
+ # print out null space poly info only if 'm' is used
+ if (!is.null(x$args.null$m)) {
+ c1 <- c(c1, "Degree of polynomial null space ( base model):")
+ c2 <- c(c2, x$m - 1)
+ }
+ c1 <- c(c1, "Number of parameters in the null space")
+ c2 <- c(c2, x$nt)
+ c1 <- c(c1, "Parameters for fixed spatial drift")
+ c2 <- c(c2, x$df.drift)
+ c1 <- c(c1, "Effective degrees of freedom:")
+ c2 <- c(c2, format(round(x$enp, 1)))
+ c1 <- c(c1, "Residual degrees of freedom:")
+ c2 <- c(c2, format(round(x$num.observation - x$enp, 1)))
+ c1 <- c(c1, "MLE sigma ")
+ c2 <- c(c2, format(signif(x$shat.MLE, digits)))
+ c1 <- c(c1, "GCV sigma ")
+ c2 <- c(c2, format(signif(x$shat.GCV, digits)))
+ if (!is.na(x$shat.pure.error)) {
+ c1 <- c(c1, "Pure error sigma")
+ c2 <- c(c2, format(signif(x$shat.pure.error, digits)))
+ }
+ c1 <- c(c1, "MLE rho ")
+ c2 <- c(c2, format(signif(x$rhohat, digits)))
+ c1 <- c(c1, "Scale passed for covariance (rho)")
+ c2 <- c(c2, signif(x$rho, digits))
+ c1 <- c(c1, "Scale passed for nugget (sigma^2)")
+ c2 <- c(c2, signif(x$sigma2, digits))
+ c1 <- c(c1, "Smoothing parameter lambda")
+ c2 <- c(c2, signif(x$lambda, digits))
+ sum <- cbind(c1, c2)
+ dimnames(sum) <- list(rep("", dim(sum)[1]), rep("", dim(sum)[2]))
+ res.quantile <- x$res.quantile
+ names(res.quantile) <- c("min", "1st Q", "median", "3rd Q",
+ "max")
+ cat("CALL:\n")
+ dput(x$call)
+ print(sum, quote = FALSE)
+ cat("\n")
+ cat("Residual Summary:", fill = TRUE)
+ print(signif(res.quantile, digits))
+ cat("\n")
+ cat("Covariance Model:", x$cov.function, fill = TRUE)
+ if (x$cov.function == "stationary.cov") {
+ cat(" Covariance function is ", x$args$Covariance, fill = TRUE)
+ }
+ if (!is.null(x$args)) {
+ cat(" Names of non-default covariance arguments: ",
+ fill = TRUE)
+ cat(" ", paste(as.character(names(x$args)), collapse = ", "),
+ fill = TRUE)
+ }
+ if ((x$correlation.model)) {
+ cat(" A correlation model was fit:\nY is standardized before spatial estimate is found",
+ fill = TRUE)
+ }
+ if (x$knot.model) {
+ cat(" Knot model: ", x$np - x$nt, " knots supplied to define basis\nfunctions",
+ fill = TRUE)
+ }
+ cat("\n")
+ cat("DETAILS ON SMOOTHING PARAMETER:", fill = TRUE)
+ cat(" Method used: ", x$method, " Cost: ", x$cost, fill = TRUE)
+ print(x$sum.gcv.lambda, digits = digits)
+ cat("\n")
+ cat(" Summary of all estimates found for lambda", fill = TRUE)
+ if (!is.na(x$lambda.est[1])) {
+ print(x$lambda.est, digits = x$digits)
+ }
+ else {
+ cat(x$lambda, " supplied by user", fill = TRUE)
+ }
+ invisible(x)
+}
diff --git a/R/print.summary.spatialProcess.R b/R/print.summary.spatialProcess.R
new file mode 100644
index 0000000..15dff33
--- /dev/null
+++ b/R/print.summary.spatialProcess.R
@@ -0,0 +1,94 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"print.summary.spatialProcess" <- function(x, ...) {
+ digits <- x$digits
+ c1 <- "Number of Observations:"
+ c2 <- x$num.observation
+ c1 <- c(c1, "Number of unique points (locations):")
+ c2 <- c(c2, x$num.uniq)
+ #
+ # print out null space poly info only if 'm' is used
+ if (!is.null(x$args.null$m)) {
+ c1 <- c(c1, "Degree of polynomial in fixed part ( base model):")
+ c2 <- c(c2, x$m - 1)
+ }
+ c1 <- c(c1, "Number of parameters in the fixed part of model")
+ c2 <- c(c2, x$nt)
+ c1 <- c(c1, "Parameters for fixed spatial drift")
+ c2 <- c(c2, x$df.drift)
+ c1 <- c(c1, "Effective degrees of freedom:")
+ c2 <- c(c2, format(round(x$enp, 1)))
+ c1 <- c(c1, "Residual degrees of freedom:")
+ c2 <- c(c2, format(round(x$num.observation - x$enp, 1)))
+ c1 <- c(c1, "MLE for sigma (nugget variance)")
+ c2 <- c(c2, format(signif(x$sigma.MLE, digits)))
+ c1 <- c(c1, "GCV estimate for sigma ")
+ c2 <- c(c2, format(signif(x$shat.GCV, digits)))
+ if (!is.na(x$shat.pure.error)) {
+ c1 <- c(c1, "Estimate of sigma from replicates ")
+ c2 <- c(c2, format(signif(x$shat.pure.error, digits)))
+ }
+ c1 <- c(c1, "MLE for rho (process variance)")
+ c2 <- c(c2, format(signif(x$rho.MLE, digits)))
+ c1 <- c(c1, "MLE for rho + sigma^2 (the variogram sill)")
+ c2 <- c(c2, format(signif(x$rho.MLE + x$sigma.MLE^2 , digits)))
+ c1 <- c(c1, "MLE for theta (range parameter)")
+ c2 <- c(c2, signif(x$theta.MLE, digits))
+ c1 <- c(c1, "MLE for lambda (sigma^2/rho)")
+ c2 <- c(c2, signif(x$lambda, digits))
+ sum <- cbind(c1, c2)
+ dimnames(sum) <- list(rep("", dim(sum)[1]), rep("", dim(sum)[2]))
+ res.quantile <- x$res.quantile
+ names(res.quantile) <- c("min", "1st Q", "median", "3rd Q",
+ "max")
+ cat("CALL:\n")
+ dput(x$call)
+ print(sum, quote = FALSE)
+ cat("\n")
+ cat("R function used for the covariance model:", x$cov.function, fill = TRUE)
+ cat( "\n")
+ if (!is.null(x$args)) {
+ cat(" Covariance arguments that are different from their defaults: ", fill = TRUE)
+ changedArgs<- names(x$args)
+ for( argName in changedArgs){
+ #cat(" ", as.character(argName),":" , fill=TRUE )
+ quickPrint( x$args[argName])
+ }
+ }
+ if ((x$correlation.model)) {
+ cat(" A correlation model was fit:\nY is standardized before spatial estimate is found",
+ fill = TRUE)
+ }
+ if (x$knot.model) {
+ cat(" Knot model: ", x$np - x$nt, " knots supplied to define basis\nfunctions",
+ fill = TRUE)
+ }
+ cat("\n")
+ cat("RESIDUAL SUMMARY:", fill = TRUE)
+ print(signif(res.quantile, digits))
+ cat("\n")
+ cat("DETAILS ON SMOOTHING PARAMETER ESTIMATE:", fill = TRUE)
+ cat(" Method used: ", x$method, fill = TRUE)
+ print(x$sum.gcv.lambda, digits = digits)
+ cat("\n")
+
+ invisible(x)
+}
diff --git a/R/print.summary.sreg.R b/R/print.summary.sreg.R
new file mode 100644
index 0000000..a3cde29
--- /dev/null
+++ b/R/print.summary.sreg.R
@@ -0,0 +1,66 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"print.summary.sreg" <- function(x, ...) {
+ digits <- x$digits
+ c1 <- "Number of Observations:"
+ c2 <- x$num.observation
+ c1 <- c(c1, "Number of unique points:")
+ c2 <- c(c2, x$num.uniq)
+ c1 <- c(c1, "Eff. degrees of freedom for spline:")
+ c2 <- c(c2, format(round(x$enp, 1)))
+ c1 <- c(c1, "Residual degrees of freedom:")
+ c2 <- c(c2, format(round(x$num.observation - x$enp, 1)))
+ c1 <- c(c1, "GCV est. sigma ")
+ c2 <- c(c2, format(signif(x$shat.GCV, digits)))
+ if (!is.na(x$shat.pure.error)) {
+ c1 <- c(c1, "Pure error sigma")
+ c2 <- c(c2, format(signif(x$shat.pure.error, digits)))
+ }
+ c1 <- c(c1, "lambda ")
+ c2 <- c(c2, signif(x$lambda, digits))
+ #\tc1 <- c(c1, 'Cost in GCV')
+ #\tc2 <- c(c2, format(round(x$cost, 2)))
+ #\tc1 <- c(c1, 'GCV Minimum')
+ #\tc2 <- c(c2, format(signif(x$gcvmin, digits)))
+ sum <- cbind(c1, c2)
+ dimnames(sum) <- list(rep("", dim(sum)[1]), rep("", dim(sum)[2]))
+ res.quantile <- x$res.quantile
+ names(res.quantile) <- c("min", "1st Q", "median", "3rd Q",
+ "max")
+ ###
+ ###
+ ###
+ cat("CALL:\n")
+ dput(x$call)
+ print(sum, quote = FALSE)
+ cat("\n")
+ cat("RESIDUAL SUMMARY:", fill = TRUE)
+ print(signif(res.quantile, digits))
+ cat("\n")
+ cat("DETAILS ON SMOOTHING PARAMETER:", fill = TRUE)
+ cat(" Method used: ", x$method, " Cost: ", x$cost, fill = TRUE)
+ #\tcat(' Stats on this choice of lambda', fill = TRUE)
+ print(x$sum.gcv.lambda, digits = digits)
+ cat("\n")
+ cat(" Summary of estimates for lambda", fill = TRUE)
+ print(x$lambda.est, digits = x$digits)
+ invisible(x)
+}
diff --git a/R/print.summarySpatialDesign.R b/R/print.summarySpatialDesign.R
new file mode 100644
index 0000000..da4fd7f
--- /dev/null
+++ b/R/print.summarySpatialDesign.R
@@ -0,0 +1,45 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"print.summarySpatialDesign" <- function(x, digits = 4,
+ ...) {
+ cat("Call:\n")
+ dput(x$call)
+ c1 <- "Number of design points:"
+ c2 <- length(x$best.id)
+ c1 <- c(c1, "Number of fixed points:")
+ if (is.null(x$fixed))
+ c2 <- c(c2, 0)
+ else c2 <- c(c2, length(x$fixed))
+ c1 <- c(c1, "Optimality Criterion:")
+ c2 <- c(c2, round(x$opt.crit, digits))
+ sum <- cbind(c1, c2)
+ dimnames(sum) <- list(rep("", dim(sum)[1]), rep("", dim(sum)[2]))
+ print(sum, quote = FALSE, digits = digits)
+ other.crit <- x$other.crit
+ if (length(other.crit) > 1) {
+ cat("\nOptimality criteria for other designs:\n\t")
+ cat(round(other.crit, digits), "\n")
+ }
+ cat("\nHistory:\n")
+ dimnames(x$history)[[1]] <- rep("", nrow(x$history))
+ print(round(x$history, digits), quote = FALSE)
+ invisible(x)
+}
diff --git a/R/printGCVWarnings.R b/R/printGCVWarnings.R
new file mode 100644
index 0000000..9866bdf
--- /dev/null
+++ b/R/printGCVWarnings.R
@@ -0,0 +1,56 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+printGCVWarnings<- function( Table, method="all"){
+ ind<- Table$Warning
+ if( method == "all"){
+ kIndex<- 1:6
+ }
+ else{
+ kIndex<- match( method,c("GCV",
+ "GCV.model",
+ "GCV.one",
+ "RMSE",
+ "pure error",
+ "REML")
+ )
+ }
+ methodList<- c(
+ "(GCV) Generalized Cross-Validation ",
+ "(GCV.model) Generalized Cross-Validation on replicate means ",
+ "(GCV.one) Generalized Cross-Validation on individual observations ",
+ "(RMSE) Matching estimate of sigma to supplied rmse ",
+ "Matching estimate of sigma to that from replicated observations",
+ "(REML) Restricted maximum likelihood "
+ )
+ if( any( ind[kIndex])){
+ cat("Warning: ", fill=TRUE)
+ cat("Grid searches over lambda (nugget and sill variances) with minima at the endpoints: ", fill=TRUE) }
+ for( k in kIndex){
+ if( ind[k]){
+ whichEnd<- ifelse(Table[k,2],"left","right")
+ cat( " ", methodList[k], fill =TRUE)
+ cat( " minimum at ", whichEnd, "endpoint",
+ " lambda = ", Table[k,6] ,
+ "(eff. df=", Table[k,7] , ")", fill = TRUE )
+ }
+ }
+
+}
diff --git a/R/qr.q2ty.R b/R/qr.q2ty.R
new file mode 100644
index 0000000..486c528
--- /dev/null
+++ b/R/qr.q2ty.R
@@ -0,0 +1,31 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"qr.q2ty" <- function(qr, y) {
+ if (!is.matrix(y)) {
+ y <- as.matrix(y)
+ }
+ dy <- dim(y)
+ dq <- dim(qr$qr)
+ rank <- qr$rank
+ if (dy[1] != dq[1])
+ stop("y and qr$qr should have same number of rows")
+ qr.qty(qr, y)[(rank + 1):dy[1], ]
+}
diff --git a/R/qr.yq2.R b/R/qr.yq2.R
new file mode 100644
index 0000000..3a4ed13
--- /dev/null
+++ b/R/qr.yq2.R
@@ -0,0 +1,23 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"qr.yq2" <- function(qr, y) {
+ t(qr.q2ty(qr, t(y)))
+}
diff --git a/R/qsreg.family.R b/R/qsreg.family.R
new file mode 100644
index 0000000..431ebe6
--- /dev/null
+++ b/R/qsreg.family.R
@@ -0,0 +1,324 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+
+"qsreg" <- function(x, y, lam = NA, maxit = 50, maxit.cv = 10,
+ tol = 1e-07, offset = 0, sc = sqrt(var(y)) * 1e-05, alpha = 0.5,
+ wt = rep(1, length(x)), cost = 1, nstep.cv = 80, hmin = NA,
+ hmax = NA, trmin = 2 * 1.05, trmax = 0.95 * length(unique(x))) {
+
+ # see the function QTps for a different computational implementation
+ # and code that works in more than one dimension.
+ out <- list()
+ class(out) <- c("qsreg")
+ N <- length(y)
+ out$N <- N
+ xgrid <- sort(unique(x))
+ if (length(x) != length(y))
+ stop(" X and Y do not match")
+ if (!is.na(lam[1]))
+ hgrid <- log(lam)
+ else {
+ # find lambda grid
+ if (is.na(hmin)) {
+ hmin <- 0
+ for (k in 1:25) {
+ b <- qsreg.trace(lam = as.double(exp(hmin)),
+ x = x, y = y, wt = wt, cost = cost, maxit = maxit,
+ tol = tol, sc = sc, alpha = alpha)
+ if (b > trmax) {
+ break
+ }
+ hmin <- hmin - 1
+ }
+ }
+ if (is.na(hmax)) {
+ hmax <- 0
+ for (k in 1:25) {
+ b <- qsreg.trace(lam = as.double(exp(hmax)),
+ x = x, y = y, wt = wt, cost = cost, maxit = maxit,
+ tol = tol, sc = sc, alpha = alpha)
+ if (b < trmin) {
+ break
+ }
+ hmax <- hmax + 1
+ }
+ }
+ h <- seq(hmin, hmax, length.out = nstep.cv)
+ lam <- exp(h)
+ }
+ # now loop through values for lam ( really log lam)
+ b <- list()
+ NL <- length(lam)
+ NG <- length(xgrid)
+ h <- log(lam)
+ residuals <- matrix(NA, ncol = NL, nrow = N)
+ diagA <- residuals
+ cv.ps <- rep(0, NL)
+ trace.ps <- rep(0, NL)
+ cv <- rep(0, NL)
+ predicted <- matrix(NA, ncol = NL, nrow = NG)
+ trace <- rep(0, NL)
+ converge <- rep(0, NL)
+ wt.old <- wt
+ for (k in 1:NL) {
+ b <- .Fortran("rcss", PACKAGE="fields",
+ h = as.double(h[k]), npoint = as.integer(N),
+ x = as.double(x), y = as.double(y), wt = as.double(wt.old),
+ sy = as.double(rep(0, N)), trace = as.double(0),
+ diag = as.double(rep(0, N)), cv = as.double(0), ngrid = as.integer(NG),
+ xg = as.double(xgrid), yg = as.double(rep(0, NG)),
+ job = as.integer(c(3, 3, 0)), ideriv = as.integer(0),
+ din = as.double(c(cost, offset, maxit, tol, sc, alpha)),
+ dout = as.double(rep(0, 4)), ierr = as.integer(0))
+ residuals[, k] <- y - b$sy
+ diagA[, k] <- b$diag
+ cv[k] <- b$dout[4]
+ trace[k] <- b$trace
+ predicted[, k] <- b$yg
+ converge[k] <- b$dout[1]
+ wt.old <- b$wt
+ }
+ # second loop to find approx CV residuals based on pseudo values
+ y.pseudo <- rep(NA, N)
+ residuals.cv <- matrix(NA, ncol = NL, nrow = length(x))
+ for (k in 1:NL) {
+ y.pseudo <- (sc) * qsreg.psi(residuals[, k], alpha = alpha,
+ C = sc) + y - residuals[, k]
+ #
+ # call the robust spline but set the cutoff for the Huber weight so large
+ # that it is essentially a least-squares (LS) spline; this helps to match
+ # the lambda for the robust spline with a lambda for the LS one.
+ #
+ b <- .Fortran("rcss", PACKAGE="fields",
+ h = as.double(h[k]), npoint = as.integer(N),
+ x = as.double(x), y = as.double(y.pseudo), wt = as.double(wt),
+ sy = as.double(rep(0, N)), trace = as.double(0),
+ diag = as.double(rep(0, N)), cv = as.double(0), ngrid = as.integer(NG),
+ xg = as.double(xgrid), yg = as.double(rep(0, NG)),
+ job = as.integer(c(3, 3, 0)), ideriv = as.integer(0),
+ din = as.double(c(cost, offset, maxit, tol, sqrt(var(y)) *
+ 10, alpha)), dout = as.double(rep(0, 4)), ierr = as.integer(0))
+ #
+ # CV residuals based on pseudo-data
+ # Use the linear approximation Y_k - f.cv_k = (Y_k- f_k)/( 1- A_kk)
+ # f.cv_k = f_k/( 1- A_kk) - ( A_kk)Y_k/( 1- A_kk)
+ #
+ # Note: we find f.cv based on pseudo data but then consider its deviation
+ # from the actual data
+ #
+ f.cv <- (b$sy/(1 - b$diag)) - b$diag * y.pseudo/(1 -
+ b$diag)
+ trace.ps[k] <- b$trace
+ residuals.cv[, k] <- (y - f.cv)
+ cv.ps[k] <- mean(qsreg.rho(y - f.cv, alpha = alpha, C = sc))
+ }
+ #
+ #
+ #
+ cv.grid <- cbind(lam, trace, cv, converge, trace.ps, cv.ps)
+ dimnames(cv.grid) <- list(NULL, c("lambda", "trace", "CV",
+ "iterations", "trace.PS", "CV.PS"))
+ #
+ ind.cv <- (1:NL)[cv == min(cv)]
+ ind.cv.ps <- (1:NL)[cv.ps == min(cv.ps)]
+ out$call <- match.call()
+ out$x <- x
+ out$y <- y
+ out$predicted <- list(x = xgrid, y = predicted)
+ out$trace <- trace
+ out$residuals.cv <- residuals.cv
+ out$residuals <- residuals
+ out$fitted.values <- y - residuals
+ out$cv.grid <- cv.grid
+ out$diagA <- diagA
+ out$sc <- sc
+ out$alpha <- alpha
+ out$ind.cv <- ind.cv
+ out$ind.cv.ps <- ind.cv.ps
+ out
+}
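The approximate CV step above leans on the standard leave-one-out identity for linear smoothers, Y_k - f.cv_k = (Y_k - f_k)/(1 - A_kk), equivalently f.cv_k = f_k/(1 - A_kk) - A_kk Y_k/(1 - A_kk). A minimal Python sketch checking that identity; the Gaussian-kernel weighted-average smoother here is an illustrative stand-in, not the rcss Fortran spline:

```python
import math

def hat_matrix(x, bandwidth=1.0):
    # Gaussian-kernel weighted-average smoother: f = A y, rows of A sum to 1
    n = len(x)
    A = []
    for i in range(n):
        w = [math.exp(-((x[i] - x[j]) / bandwidth) ** 2) for j in range(n)]
        s = sum(w)
        A.append([wj / s for wj in w])
    return A

def smooth(A, y):
    return [sum(a * yj for a, yj in zip(row, y)) for row in A]

def loo_fit(x, y, k, bandwidth=1.0):
    # refit at x[k] with observation k removed and the weights renormalized
    w = [math.exp(-((x[k] - x[j]) / bandwidth) ** 2)
         for j in range(len(x)) if j != k]
    yk = [y[j] for j in range(len(y)) if j != k]
    return sum(wj * yj for wj, yj in zip(w, yk)) / sum(w)

x = [0.0, 0.5, 1.1, 1.9, 2.4, 3.0]
y = [1.0, 1.3, 0.7, 2.1, 1.8, 2.5]
A = hat_matrix(x)
f = smooth(A, y)
# identity: f.cv_k = f_k/(1 - A_kk) - A_kk * y_k/(1 - A_kk)
fcv = [f[k] / (1 - A[k][k]) - A[k][k] * y[k] / (1 - A[k][k])
       for k in range(len(x))]
```

For this family of smoothers the identity is exact, which is why the R code can recover CV residuals from a single fit plus the diagonal of A.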
+"qsreg.fit" <- function(x, y, lam, maxit = 50, maxit.cv = 10,
+ tol = 1e-04, offset = 0, sc = sqrt(var(y)) * 1e-07, alpha = 0.5,
+ wt = rep(1, length(x)), cost = 1) {
+ N <- length(y)
+ if (length(x) != length(y))
+ stop(" X and Y do not match")
+ h <- log(lam)
+ temp <- .Fortran("rcss", PACKAGE="fields",
+ h = as.double(log(lam)), npoint = as.integer(N),
+ x = as.double(x), y = as.double(y), wt = as.double(wt),
+ sy = as.double(rep(0, N)), trace = as.double(0), diag = as.double(rep(0,
+ N)), cv = as.double(0), ngrid = as.integer(0), xg = as.double(0),
+ yg = as.double(0), job = as.integer(c(3, 0, 0)), ideriv = as.integer(0),
+ din = as.double(c(cost, offset, maxit, tol, sc, alpha)),
+ dout = as.double(rep(0, 4)), ierr = as.integer(0))$dout
+ return(temp)
+}
+
+
+qsreg.psi <- function(r, alpha = 0.5, C = 1) {
+
+ temp <- ifelse(r < 0, 2 * (1 - alpha) * r/C, 2 * alpha *
+ r/C)
+ temp <- ifelse(temp > 2 * alpha, 2 * alpha, temp)
+ temp <- ifelse(temp < -2 * (1 - alpha), -2 * (1 - alpha),
+ temp)
+ temp
+}
+
+qsreg.rho <- function(r, alpha = 0.5, C = 1) {
+ temp <- ifelse(r < 0, ((1 - alpha) * r^2)/C, (alpha * r^2)/C)
+ temp <- ifelse(r > C, 2 * alpha * r - alpha * C, temp)
+ temp <- ifelse(r < -C, -2 * (1 - alpha) * r - (1 - alpha) *
+ C, temp)
+ temp
+}
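The psi/rho pair above is an asymmetric Huber-type loss for quantile smoothing: rho is quadratic on [-C, C] and linear outside, with slopes 2*alpha on the right and 2*(1 - alpha) on the left, and psi is the corresponding clipped derivative. For alpha = 0.5 it reduces to the symmetric Huber loss. A rough scalar Python port of the vectorized R logic (illustrative only):

```python
def qsreg_psi(r, alpha=0.5, C=1.0):
    # derivative of the loss: linear inside the window, then clipped to the
    # asymmetric bounds [-2*(1 - alpha), 2*alpha]
    t = 2 * (1 - alpha) * r / C if r < 0 else 2 * alpha * r / C
    return max(-2 * (1 - alpha), min(2 * alpha, t))

def qsreg_rho(r, alpha=0.5, C=1.0):
    # asymmetric Huber-type loss: quadratic on [-C, C], linear outside,
    # with the linear pieces chosen so the function is continuous at +-C
    if r > C:
        return 2 * alpha * r - alpha * C
    if r < -C:
        return -2 * (1 - alpha) * r - (1 - alpha) * C
    return (1 - alpha) * r * r / C if r < 0 else alpha * r * r / C
```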
+# the next two functions are included just for checking against new versions
+"qsreg.psi.OLD" <- function(r, alpha = 0.5, C = 1) {
+ temp <- rep(NA, length(r))
+ r <- r/C
+ temp <- r
+ ind <- r > 1
+ temp[ind] <- 2 * alpha
+ ind <- r < 1 & r > 0
+ temp[ind] <- (2 * alpha * r[ind])
+ ind <- r < -1
+ temp[ind] <- -2 * (1 - alpha)
+ ind <- r > -1 & r < 0
+ temp[ind] <- 2 * (1 - alpha) * r[ind]
+ temp
+}
+"qsreg.rho.OLD" <- function(r, alpha = 0.5, C = 1) {
+ temp <- rep(NA, length(r))
+ ind <- r > C
+ temp[ind] <- 2 * alpha * r[ind] - alpha * C
+ ind <- r < C & r > 0
+ temp[ind] <- (alpha * r[ind]^2)/C
+ ind <- r < -C
+ temp[ind] <- -2 * (1 - alpha) * r[ind] - (1 - alpha) * C
+ ind <- r > -C & r < 0
+ temp[ind] <- ((1 - alpha) * r[ind]^2)/C
+ temp
+}
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"qsreg.trace" <- function(x, y, lam, maxit = 50, maxit.cv = 10,
+ tol = 1e-04, offset = 0, sc = sqrt(var(y)) * 1e-07, alpha = 0.5,
+ wt = rep(1, length(x)), cost = 1) {
+ N <- length(y)
+ if (length(x) != length(y))
+ stop(" X and Y do not match")
+ h <- log(lam)
+ temp <- .Fortran("rcss", PACKAGE="fields",
+ h = as.double(log(lam)), npoint = as.integer(N),
+ x = as.double(x), y = as.double(y), wt = as.double(wt),
+ sy = as.double(rep(0, N)), trace = as.double(0), diag = as.double(rep(0,
+ N)), cv = as.double(0), ngrid = as.integer(0), xg = as.double(0),
+ yg = as.double(0), job = as.integer(c(3, 0, 0)), ideriv = as.integer(0),
+ din = as.double(c(cost, offset, maxit, tol, sc, alpha)),
+ dout = as.double(rep(0, 4)), ierr = as.integer(0))$dout
+ return(temp[3])
+}
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"summary.qsreg" <- function(object, ...) {
+ x <- object
+ digits <- 4
+ c1 <- "Number of Observations:"
+ c2 <- (x$N)
+ c1 <- c(c1, "Effective degrees of freedom:")
+ c2 <- c(c2, format(round(x$trace[x$ind.cv.ps], 1)))
+ c1 <- c(c1, "Residual degrees of freedom:")
+ c2 <- c(c2, format(round(x$N - x$trace[x$ind.cv.ps], 1)))
+ c1 <- c(c1, "Log10(lambda)")
+ c2 <- c(c2, format(round(log10(x$cv.grid[x$ind.cv.ps, 1]),
+ 2)))
+ sum <- cbind(c1, c2)
+ dimnames(sum) <- list(rep("", dim(sum)[1]), rep("", dim(sum)[2]))
+ cat("Call:\n")
+ dput(x$call)
+ print(sum, quote = FALSE)
+ invisible(x)
+}
+"plot.qsreg" <- function(x, pch = "*", main = NA,
+ ...) {
+ out <- x
+ old.par <- par("mfrow", "oma")
+ on.exit(par(old.par))
+ set.panel(2, 2, relax = TRUE)
+ plot(out$x, out$y, xlab = "X", ylab = "y", pch = pch)
+ orderx <- order(out$x)
+ temp <- out$fitted.values[, c(out$ind.cv, out$ind.cv.ps)]
+ matlines(out$x[orderx], temp[orderx, ], lty = 1, col = c(1,
+ 2))
+ ##
+ # residual plot
+ #
+ matplot(out$x, qsreg.psi(out$residuals[, c(out$ind.cv, out$ind.cv.ps)],
+ out$alpha, out$sc), col = c(1, 2), pch = "o", ylab = "Pseudo residuals",
+ xlab = "X")
+ yline(0)
+ if (nrow(out$cv.grid) > 1) {
+ ind <- out$cv.grid[, 3] < 1e+19
+ out$cv.grid <- out$cv.grid[ind, ]
+ matplot(out$cv.grid[, 2], cbind(out$cv.grid[, 3], out$cv.grid[,
+ 6]), xlab = "Effective number of parameters", ylab = "Log CV Rho function ",
+ log = "y", type = "l", col = c(1, 2))
+ xline(out$cv.grid[out$ind.cv, 2], col = 1)
+ xline(out$cv.grid[out$ind.cv.ps, 2], col = 2)
+ title(" CV curves", cex = 0.5)
+ }
+ bplot(qsreg.psi(out$residuals[, c(out$ind.cv, out$ind.cv.ps)],
+ out$alpha, out$sc), names = c("CV", "CV pseudo"))
+ yline(0, col = 2)
+ if (is.na(main))
+ mtext(deparse(out$call), cex = 1.3, outer = TRUE, line = -2)
+ else mtext(main, cex = 1.3, outer = TRUE, line = -2)
+}
+"predict.qsreg" <- function(object, x, derivative = 0,
+ model = object$ind.cv.ps, ...) {
+ if (missing(x))
+ x <- object$x
+ c(splint(object$predicted$x, object$predicted$y[, model],
+ x, derivative = derivative))
+}
+"print.qsreg" <- function(x, ...) {
+ digits <- 4
+ c1 <- "Number of Observations:"
+ c2 <- (x$N)
+ c1 <- c(c1, "Effective degrees of freedom:")
+ c2 <- c(c2, format(round(x$trace[x$ind.cv.ps], 1)))
+ c1 <- c(c1, "Residual degrees of freedom:")
+ c2 <- c(c2, format(round(x$N - x$trace[x$ind.cv.ps], 1)))
+ c1 <- c(c1, "Log10(lambda) ")
+ lambda <- x$cv.grid[, 1]
+ c2 <- c(c2, format(round(log10(lambda[x$ind.cv.ps]), 2)))
+ sum <- cbind(c1, c2)
+ dimnames(sum) <- list(rep("", dim(sum)[1]), rep("", dim(sum)[2]))
+ cat("Call:\n")
+ dput(x$call)
+ print(sum, quote = FALSE)
+ invisible(x)
+}
diff --git a/R/quickPrint.R b/R/quickPrint.R
new file mode 100644
index 0000000..48d5f36
--- /dev/null
+++ b/R/quickPrint.R
@@ -0,0 +1,36 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+quickPrint<- function( obj, max.values=10){
+# this simple function will only print an object if
+# it is not too big
+ if( is.list( obj)){
+ sizeObj<- length( unlist( obj))
+ }
+ else{
+ sizeObj<-length( c( obj))
+ }
+ if( sizeObj<= max.values){
+ print(obj)
+ }
+ else{
+ cat("Object size is more than", max.values,"items (", sizeObj, "total)", fill=TRUE)
+ }
+}
diff --git a/R/quilt.plot.R b/R/quilt.plot.R
new file mode 100644
index 0000000..b3352a3
--- /dev/null
+++ b/R/quilt.plot.R
@@ -0,0 +1,62 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"quilt.plot" <- function(x, y, z, nx = 64, ny = 64,
+ grid = NULL, add.legend = TRUE, add = FALSE, nlevel=64,
+ col = tim.colors(nlevel), nrow = NULL, ncol = NULL, FUN=NULL,
+ plot=TRUE, na.rm=FALSE, ...) {
+ #
+ # note that nrow and ncol refer to the resulting 'image format' for plotting.
+ # here the x values are the rows and the y values are the columns
+ # FUN = NULL means the weighted means are found for each grid cell
+ if( !is.null(nrow) | !is.null(ncol)){
+ nx<- nrow
+ ny<- ncol
+ }
+ x <- as.matrix(x)
+ if (ncol(x) == 2) {
+ z <- y
+ }
+ if (ncol(x) == 1) {
+ x <- cbind(x, y)
+ }
+ if (ncol(x) == 3) {
+ z <- x[, 3]
+ x <- x[, 1:2]
+ }
+ # at this point x should be a 2 column matrix of x-y locations
+ # z is a vector or one column matrix of the z values.
+ #discretize data
+ out.p <- as.image(z, x = x, nx = nx, ny = ny,
+ grid = grid, FUN=FUN, na.rm=na.rm)
+ # besides the image information this list has the indices that
+ # map each z value to a grid box
+ #
+ # plot it
+ if( plot){
+ if (add.legend) {
+ image.plot(out.p, nlevel = nlevel, col = col, add = add, ...)
+ }
+ else {
+ image(out.p, col = col, add = add, ...)
+ }
+ }
+ invisible(out.p)
+}
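The discretization step that quilt.plot delegates to as.image bins the scattered (x, y, z) triples onto an nx-by-ny grid and averages the z values falling in each cell. A plain-Python sketch of that binning, purely illustrative (as.image also returns the index map and supports weights, which this omits):

```python
def bin_to_grid(x, y, z, nx, ny):
    # bin scattered (x, y, z) onto an nx-by-ny grid of cell means;
    # empty cells come back as None (NA in the R version)
    xmin, xmax = min(x), max(x)
    ymin, ymax = min(y), max(y)
    sums = [[0.0] * ny for _ in range(nx)]
    counts = [[0] * ny for _ in range(nx)]
    for xi, yi, zi in zip(x, y, z):
        i = min(int((xi - xmin) / (xmax - xmin) * nx), nx - 1)
        j = min(int((yi - ymin) / (ymax - ymin) * ny), ny - 1)
        sums[i][j] += zi
        counts[i][j] += 1
    return [[sums[i][j] / counts[i][j] if counts[i][j] else None
             for j in range(ny)] for i in range(nx)]
```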
diff --git a/R/rad.cov.R b/R/rad.cov.R
new file mode 100644
index 0000000..0ccdabc
--- /dev/null
+++ b/R/rad.cov.R
@@ -0,0 +1,100 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"Rad.cov" <- function(x1, x2=NULL, p = 1, m = NA, with.log = TRUE,
+ with.constant = TRUE, C = NA, marginal = FALSE, derivative = 0) {
+ #
+ # mth order thin plate spline radial basis functions in d dimensions,
+ # usually called with p = 2m - d.
+ # Because this is a generalized covariance the marginal variance is not
+ # really defined; marginal is a dummy argument for consistency with the
+ # other covariance functions. With marginal = TRUE this should only be
+ # called within predictSE.Krig, which then gives the correct calculation.
+ #
+ if (marginal) {
+ return(rep(0, nrow(x1)))
+ }
+ #
+ # coerce locations to matrices, if x2 is missing use x1
+ if (!is.matrix(x1))
+ x1 <- as.matrix(x1)
+ if( is.null( x2)){
+ x2<- x1
+ }
+ if (!is.matrix(x2))
+ x2 <- as.matrix(x2)
+ d <- ncol(x1)
+ n1 <- nrow(x1)
+ n2 <- nrow(x2)
+ if (is.na(m)) {
+ m <- (d + p)/2
+ }
+ else {
+ p <- 2 * m - d
+ }
+ if (p < 0) {
+ stop(" p is negative (m possibly too small)")
+ }
+ # parameter list to send to the FORTRAN
+ par <- c(p/2, ifelse((d%%2 == 0) & (with.log), 1, 0))
+ #
+ # multiply by constant if requested
+ rbf.constant <- ifelse(with.constant, radbas.constant(m,
+ d), 1)
+ # compute matrix in FORTRAN
+ if (is.na(C[1])) {
+ temp <- .Fortran("radbas", PACKAGE="fields",
+ nd = as.integer(d), x1 = as.double(x1),
+ n1 = as.integer(n1), x2 = as.double(x2), n2 = as.integer(n2),
+ par = as.double(par), k = as.double(rep(0, n1 * n2)))
+ return(rbf.constant * matrix(temp$k, ncol = n2, nrow = n1))
+ }
+ else {
+ # do cross covariance matrix multiplication in FORTRAN
+ if (derivative == 0) {
+ # evaluate function not partial derivatives.
+ C <- as.matrix(C)
+ n3 <- ncol(C)
+ temp <- .Fortran("multrb",PACKAGE="fields",
+ nd = as.integer(d), x1 = as.double(x1),
+ n1 = as.integer(n1), x2 = as.double(x2), n2 = as.integer(n2),
+ par = as.double(par), c = as.double(C), n3 = as.integer(n3),
+ h = as.double(rep(0, n1 * n3)), work = as.double(rep(0,
+ n2)))$h
+ return(rbf.constant * matrix(temp, nrow = n1, ncol = n3))
+ }
+ else {
+ if (ncol(C) > 1) {
+ stop("Can only evaluate derivatives on one spline fit")
+ }
+ temp <- .Fortran("mltdrb", PACKAGE="fields",
+ nd = as.integer(d), x1 = as.double(x1),
+ n1 = as.integer(n1), x2 = as.double(x2), n2 = as.integer(n2),
+ par = as.double(par), c = as.double(C), h = as.double(rep(0,
+ n1 * d)), work = as.double(rep(0, n2)))$h
+ return(rbf.constant * matrix(temp, nrow = n1, ncol = d))
+ }
+ }
+ stop("should not get here!")
+}
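The kernel evaluated by the radbas Fortran call is the thin plate spline radial basis with exponent p = 2m - d, picking up a log term in even dimensions (the classic r^2 log r for m = 2, d = 2). A hedged Python sketch of just the basis function, omitting the radbas.constant multiplier and the cross-covariance paths:

```python
import math

def tps_rbf(r, m=2, d=2, with_log=True):
    # thin plate spline radial basis with p = 2m - d:
    # r^p * log(r) for even d (when with_log), r^p otherwise
    p = 2 * m - d
    if r == 0:
        return 0.0
    if d % 2 == 0 and with_log:
        return r ** p * math.log(r)
    return r ** p
```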
diff --git a/R/rad.image.cov.R b/R/rad.image.cov.R
new file mode 100644
index 0000000..d6f7ea3
--- /dev/null
+++ b/R/rad.image.cov.R
@@ -0,0 +1,59 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"Rad.image.cov" <- function(ind1, ind2, Y, cov.obj = NULL,
+ setup = FALSE, grid, ...) {
+ if (is.null(cov.obj)) {
+ dx <- grid$x[2] - grid$x[1]
+ dy <- grid$y[2] - grid$y[1]
+ m <- length(grid$x)
+ n <- length(grid$y)
+ M <- ceiling2(2 * m)
+ N <- ceiling2(2 * n)
+ xg <- make.surface.grid(list((1:M) * dx, (1:N) * dy))
+ center <- matrix(c((dx * M)/2, (dy * N)/2), nrow = 1,
+ ncol = 2)
+ out <- Rad.cov(xg, center, ...)
+ out <- as.surface(xg, c(out))$z
+ temp <- matrix(0, nrow = M, ncol = N)
+ temp[M/2, N/2] <- 1
+ wght <- fft(out)/(fft(temp) * M * N)
+ cov.obj <- list(m = m, n = n, grid = grid, N = N, M = M,
+ wght = wght, call = match.call())
+ if (setup) {
+ return(cov.obj)
+ }
+ }
+ temp <- matrix(0, nrow = cov.obj$M, ncol = cov.obj$N)
+ if (missing(ind1)) {
+ temp[1:cov.obj$m, 1:cov.obj$n] <- Y
+ Re(fft(fft(temp) * cov.obj$wght, inverse = TRUE)[1:cov.obj$m,
+ 1:cov.obj$n])
+ }
+ else {
+ if (missing(ind2)) {
+ temp[ind1] <- Y
+ }
+ else {
+ temp[ind2] <- Y
+ }
+ Re(fft(fft(temp) * cov.obj$wght, inverse = TRUE)[ind1])
+ }
+}
diff --git a/R/rad.simple.cov.R b/R/rad.simple.cov.R
new file mode 100644
index 0000000..aad740e
--- /dev/null
+++ b/R/rad.simple.cov.R
@@ -0,0 +1,55 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"Rad.simple.cov" <- function(x1, x2, p = 1, with.log = TRUE,
+ with.constant = TRUE, C = NA, marginal = FALSE) {
+ if (marginal) {
+ return(rep(1, nrow(x1)))
+ }
+ if (!is.matrix(x1))
+ x1 <- as.matrix(x1)
+ if (!is.matrix(x2))
+ x2 <- as.matrix(x2)
+ d <- ncol(x1)
+ n1 <- nrow(x1)
+ n2 <- nrow(x2)
+ m <- (d + p)/2
+ temp <- rdist(x1, x2)
+ if (with.constant) {
+ Amd <- radbas.constant(m, d)
+ }
+ else {
+ Amd <- 1
+ }
+ if ((d%%2 == 0) & (with.log)) {
+ temp <- Amd * ifelse(temp < 1e-10, 0, temp^(p/2) * log(temp))
+ }
+ else {
+ temp <- Amd * temp^(p)
+ }
+ #
+ #
+ if (is.na(C[1])) {
+ return(temp)
+ }
+ else {
+ return(temp %*% C)
+ }
+}
diff --git a/R/radbas.constant.R b/R/radbas.constant.R
new file mode 100644
index 0000000..d715efb
--- /dev/null
+++ b/R/radbas.constant.R
@@ -0,0 +1,44 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"radbas.constant" <- function(m, d) {
+ # local gamma function that extends gamma() to negative non-integer
+ # arguments via the recurrence gamma(x) = gamma(x + 1)/x
+ gamma.local <- function(x) {
+ if (x < 0) {
+ temp <- 1
+ while (x < 0) {
+ temp <- temp * x
+ x <- x + 1
+ }
+ return(gamma(x)/temp)
+ }
+ else {
+ gamma(x)
+ }
+ }
+ if (d%%2 == 0) {
+ Amd <- (((-1)^(1 + m + d/2)) * (2^(1 - 2 * m)) * (pi^(-d/2)))/(gamma(m) *
+ gamma.local(m - d/2 + 1))
+ }
+ else {
+ Amd <- (gamma.local(d/2 - m) * (2^(-2 * m)) * (pi^(-d/2)))/gamma(m)
+ }
+ Amd
+}
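The local gamma helper applies the recurrence gamma(x) = gamma(x + n) / (x (x + 1) ... (x + n - 1)) to shift a negative non-integer argument into the positive half-line. A direct Python port for illustration (Python's math.gamma already handles negative non-integers, which makes it a convenient cross-check):

```python
import math

def gamma_local(x):
    # shift x up to the positive axis, accumulating the product of the
    # intermediate arguments, then divide it back out
    if x < 0:
        prod = 1.0
        while x < 0:
            prod *= x
            x += 1
        return math.gamma(x) / prod
    return math.gamma(x)
```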
diff --git a/R/rat.diet.R b/R/rat.diet.R
new file mode 100644
index 0000000..05f6d5c
--- /dev/null
+++ b/R/rat.diet.R
@@ -0,0 +1,38 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"rat.diet" <- structure(list(t = c(0, 1, 3, 7, 8,
+ 10, 14, 15, 17, 21, 22, 24, 28, 29, 31, 35, 36, 38, 42, 43,
+ 45, 49, 50, 52, 56, 57, 59, 63, 64, 70, 73, 77, 80, 84, 87,
+ 91, 94, 98, 105), con = c(20.5, 19.399, 22.25, 17.949, 19.899,
+ 21.449, 16.899, 21.5, 22.8, 24.699, 26.2, 28.5, 24.35, 24.399,
+ 26.6, 26.2, 26.649, 29.25, 27.55, 29.6, 24.899, 27.6, 28.1,
+ 27.85, 26.899, 27.8, 30.25, 27.6, 27.449, 27.199, 27.8, 28.199,
+ 28, 27.3, 27.899, 28.699, 27.6, 28.6, 27.5), trt = c(21.3,
+ 16.35, 19.25, 16.6, 14.75, 18.149, 14.649, 16.7, 15.05, 15.5,
+ 13.949, 16.949, 15.6, 14.699, 14.15, 14.899, 12.449, 14.85,
+ 16.75, 14.3, 16, 16.85, 15.65, 17.149, 18.05, 15.699, 18.25,
+ 18.149, 16.149, 16.899, 18.95, 22, 23.6, 23.75, 27.149, 28.449,
+ 25.85, 29.7, 29.449)), .Names = c("t", "con", "trt"), class = "data.frame",
+ row.names = c("1", "2", "3", "4", "5", "6", "7", "8", "9",
+ "10", "11", "12", "13", "14", "15", "16", "17", "18",
+ "19", "20", "21", "22", "23", "24", "25", "26", "27",
+ "28", "29", "30", "31", "32", "33", "34", "35", "36",
+ "37", "38", "39"))
diff --git a/R/rdist.R b/R/rdist.R
new file mode 100644
index 0000000..d7364a6
--- /dev/null
+++ b/R/rdist.R
@@ -0,0 +1,40 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"rdist" <- function(x1, x2 = NULL, compact = FALSE) {
+ if (!is.matrix(x1)) {
+ x1 <- as.matrix(x1)
+ }
+ if (is.null(x2)) {
+ storage.mode(x1) <- "double"
+ if (compact)
+ return(dist(x1))
+ else
+ return(.Call("RdistC", x1, x1, PACKAGE = "fields"))
+ } else {
+ if (!is.matrix(x2)) {
+ x2 <- as.matrix(x2)
+ }
+ storage.mode(x1) <- "double"
+ storage.mode(x2) <- "double"
+ return(.Call("RdistC", x1, x2, PACKAGE = "fields"))
+ }
+
+}
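rdist returns the full matrix of pairwise Euclidean distances between the rows of x1 and x2. A plain-Python stand-in for the compiled RdistC call, for illustration only:

```python
import math

def rdist_py(x1, x2=None):
    # pairwise Euclidean distance matrix between rows of x1 and x2;
    # with x2 omitted, distances within x1 (zero diagonal)
    if x2 is None:
        x2 = x1
    return [[math.dist(a, b) for b in x2] for a in x1]
```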
diff --git a/R/rdist.earth.R b/R/rdist.earth.R
new file mode 100644
index 0000000..424766f
--- /dev/null
+++ b/R/rdist.earth.R
@@ -0,0 +1,45 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"rdist.earth" <- function(x1, x2=NULL, miles = TRUE, R = NULL) {
+ if (is.null(R)) {
+ if (miles)
+ R <- 3963.34
+ else R <- 6378.388
+ }
+ coslat1 <- cos((x1[, 2] * pi)/180)
+ sinlat1 <- sin((x1[, 2] * pi)/180)
+ coslon1 <- cos((x1[, 1] * pi)/180)
+ sinlon1 <- sin((x1[, 1] * pi)/180)
+ if (is.null(x2)) {
+ pp <- cbind(coslat1 * coslon1, coslat1 * sinlon1, sinlat1) %*%
+ t(cbind(coslat1 * coslon1, coslat1 * sinlon1, sinlat1))
+ return(R * acos(ifelse(abs(pp) > 1, 1 * sign(pp), pp)))
+ }
+ else {
+ coslat2 <- cos((x2[, 2] * pi)/180)
+ sinlat2 <- sin((x2[, 2] * pi)/180)
+ coslon2 <- cos((x2[, 1] * pi)/180)
+ sinlon2 <- sin((x2[, 1] * pi)/180)
+ pp <- cbind(coslat1 * coslon1, coslat1 * sinlon1, sinlat1) %*%
+ t(cbind(coslat2 * coslon2, coslat2 * sinlon2, sinlat2))
+ return(R * acos(ifelse(abs(pp) > 1, 1 * sign(pp), pp)))
+ }
+}
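rdist.earth converts lon/lat to unit vectors on the sphere and recovers great-circle distance from their dot product, clamping values that rounding pushes past +/-1 before acos. A NumPy sketch of that construction (names hypothetical; 3963.34 is the miles radius the R code defaults to):

```python
import numpy as np

R_MILES = 3963.34  # Earth radius used by rdist.earth when miles = TRUE

def great_circle(lonlat1, lonlat2, R=R_MILES):
    """Great-circle distance matrix via dot products of unit vectors,
    following the same construction as rdist.earth (a sketch)."""
    def to_xyz(ll):
        lon, lat = np.radians(np.asarray(ll, dtype=float)).T
        return np.column_stack([np.cos(lat) * np.cos(lon),
                                np.cos(lat) * np.sin(lon),
                                np.sin(lat)])
    pp = to_xyz(lonlat1) @ to_xyz(lonlat2).T
    # rounding can push |pp| slightly past 1, so clamp before arccos
    return R * np.arccos(np.clip(pp, -1.0, 1.0))

d = great_circle([[0.0, 0.0]], [[90.0, 0.0]])  # a quarter of the equator
```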
diff --git a/R/rdist.earth.vec.R b/R/rdist.earth.vec.R
new file mode 100644
index 0000000..1b82faf
--- /dev/null
+++ b/R/rdist.earth.vec.R
@@ -0,0 +1,40 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+rdist.earth.vec = function(x1, x2, miles=TRUE, R=NULL) {
+
+ #set default radius
+ if(is.null(R)) {
+ if(miles)
+ R = 3963.34
+ else
+ R = 6378.388
+ }
+
+ #convert lon/lat to radians
+ x1 = x1 * (pi/180)
+ x2 = x2 * (pi/180)
+
+ #calculate distances using Haversine method
+ lonDist2 = (x2[,1] - x1[,1]) * (1/2)
+ latDist2 = (x2[,2] - x1[,2]) * (1/2)
+ a = sin(latDist2) * sin(latDist2) + cos(x1[, 2]) * cos(x2[, 2]) * sin(lonDist2) * sin(lonDist2)
+ return(2 * atan2(sqrt(a), sqrt(1 - a)) * R)
+}
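rdist.earth.vec pairs row i of x1 with row i of x2 and applies the haversine formula; the atan2 form is the numerically stable variant for nearly antipodal points. A NumPy sketch (names hypothetical; 6378.388 is the kilometre radius from the R default):

```python
import numpy as np

def haversine_vec(x1, x2, R=6378.388):
    """Element-wise (row i of x1 to row i of x2) great-circle distance
    via the haversine formula, as in rdist.earth.vec (a sketch);
    columns are (lon, lat) in degrees."""
    x1 = np.radians(np.asarray(x1, dtype=float))
    x2 = np.radians(np.asarray(x2, dtype=float))
    dlon2 = (x2[:, 0] - x1[:, 0]) / 2.0
    dlat2 = (x2[:, 1] - x1[:, 1]) / 2.0
    a = np.sin(dlat2) ** 2 + np.cos(x1[:, 1]) * np.cos(x2[:, 1]) * np.sin(dlon2) ** 2
    # atan2 form stays accurate where 2*R*asin(sqrt(a)) would lose precision
    return 2.0 * R * np.arctan2(np.sqrt(a), np.sqrt(1.0 - a))

d = haversine_vec([[0.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 90.0]])
```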
diff --git a/R/rdist.vec.R b/R/rdist.vec.R
new file mode 100644
index 0000000..d243c1e
--- /dev/null
+++ b/R/rdist.vec.R
@@ -0,0 +1,32 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+rdist.vec = function(x1, x2) {
+ #make sure inputs are matrices
+ if (!is.matrix(x1)) {
+ x1 <- as.matrix(x1)
+ }
+ if(!is.matrix(x2)) {
+ x2 <- as.matrix(x2)
+ }
+
+ #return distances
+ sqrt(rowSums((x1 - x2)^2))
+}
diff --git a/R/residuals.Krig.R b/R/residuals.Krig.R
new file mode 100644
index 0000000..181a194
--- /dev/null
+++ b/R/residuals.Krig.R
@@ -0,0 +1,23 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+resid.Krig <- function(object, ...) {
+ object$residuals
+}
diff --git a/R/ribbon.plot.R b/R/ribbon.plot.R
new file mode 100644
index 0000000..dae117c
--- /dev/null
+++ b/R/ribbon.plot.R
@@ -0,0 +1,40 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+ribbon.plot <- function(x, y, z, zlim = NULL, col = tim.colors(256),
+ transparent.color = "white", ...) {
+ N <- length(x)
+ x1 <- (x[1:(N - 1)] + x[2:(N)])/2
+ y1 <- (y[1:(N - 1)] + y[2:(N)])/2
+ x1 <- c(x[1] - (x[2] - x[1])/2, x1, x[N] + (x[N] - x[N -
+ 1])/2)
+ y1 <- c(y[1] - (y[2] - y[1])/2, y1, y[N] + (y[N] - y[N -
+ 1])/2)
+ eps <- 1e-07
+ if (is.null(zlim)) {
+ zlim <- range(c(z), na.rm = TRUE)
+ }
+
+ # convert z values to a color scale.
+ colz <- color.scale(z, col = col, transparent.color = transparent.color)
+
+ segments(x1[1:(N)], y1[1:(N)], x1[2:(N + 1)], y1[2:(N + 1)],
+ col = colz, ...)
+}
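The x1/y1 construction in ribbon.plot turns N point coordinates into N+1 segment breakpoints: interior breaks are midpoints of adjacent points, and the two end breaks extrapolate by half a spacing. That construction in isolation (NumPy sketch, hypothetical name, assuming at least two coordinates):

```python
import numpy as np

def breakpoints(v):
    """N coordinates -> N+1 segment breakpoints: interior breaks are
    midpoints of neighbours; the end breaks extrapolate half a spacing
    (the construction ribbon.plot applies to both x and y)."""
    v = np.asarray(v, dtype=float)
    mids = (v[:-1] + v[1:]) / 2.0
    first = v[0] - (v[1] - v[0]) / 2.0
    last = v[-1] + (v[-1] - v[-2]) / 2.0
    return np.concatenate([[first], mids, [last]])

b = breakpoints([0.0, 1.0, 2.0, 4.0])
```

Consecutive breakpoints then give the endpoints of each coloured segment drawn by `segments`.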
diff --git a/R/set.panel.R b/R/set.panel.R
new file mode 100644
index 0000000..31ee67d
--- /dev/null
+++ b/R/set.panel.R
@@ -0,0 +1,30 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"set.panel" <- function(m = 1, n = 1, relax = FALSE) {
+ temp <- par()
+ single.plot <- (temp$mfg[3] == 1 & temp$mfg[4] == 1)
+ if (!relax | single.plot | ((m == 1) & (n == 1))) {
+ par(mfrow = c(m, n))
+ cat("plot window will lay out plots in a", m, "by", n,
+ "matrix ", fill = TRUE)
+ }
+ invisible()
+}
diff --git a/R/setup.image.smooth.R b/R/setup.image.smooth.R
new file mode 100644
index 0000000..6596204
--- /dev/null
+++ b/R/setup.image.smooth.R
@@ -0,0 +1,47 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"setup.image.smooth" <- function(nrow = 64, ncol = 64,
+ dx = 1, dy = 1, kernel.function = double.exp, theta = 1,
+    xwidth = nrow * dx, ywidth = ncol * dy, lambda = NULL, ...) {
+ M2 <- round((nrow + xwidth/dx)/2)
+ N2 <- round((ncol + ywidth/dy)/2)
+ M <- 2 * M2
+ N <- 2 * N2
+ xi <- seq(-(M2 - 1), M2, 1) * dx
+ xi <- xi/theta
+
+ yi <- seq(-(N2 - 1), (N2), 1) * dy
+ yi <- yi/theta
+ dd <- sqrt((matrix(xi, M, N)^2 + matrix(yi, M, N, byrow = TRUE)^2))
+ out <- matrix(kernel.function(dd, ...), nrow = M, ncol = N)
+ out2 <- matrix(0, M, N)
+ out2[M2, N2] <- 1
+
+ W = fft(out)/fft(out2)
+ if (!is.null(lambda)) {
+ # want fft(out) / ( fft(out2)*lambda + fft(out))
+ W = W/(lambda/fft(out2) + W)
+ }
+
+ list(W = W/(M * N), dx = dx, dy = dy, xwidth = xwidth, ywidth = ywidth,
+ M = M, N = N, m = nrow, n = ncol, lambda = lambda, grid = list(x = xi,
+ y = yi))
+}
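setup.image.smooth precomputes W, the Fourier transform of the (possibly lambda-damped) smoothing kernel, so each later smooth is one pointwise multiply and one inverse FFT (the convolution theorem). A stripped-down sketch of that trick, without the padding and lambda handling (NumPy; names hypothetical, kernel assumed already centred):

```python
import numpy as np

def fft_smooth(z, kernel):
    """Circular smoothing of image z by pointwise multiplication in the
    Fourier domain -- the convolution-theorem trick behind
    setup.image.smooth (a simplified sketch: no padding, kernel
    normalised to sum to one)."""
    W = np.fft.fft2(kernel / kernel.sum())   # precomputable, like object$W
    return np.real(np.fft.ifft2(np.fft.fft2(z) * W))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 8))
kernel = np.zeros((8, 8))
kernel[0, [7, 0, 1]] = 1.0   # circular 3-point moving average along rows
zs = fft_smooth(z, kernel)
```

Because the multiplication is circular, the real code pads the grid (xwidth/ywidth) to keep edge effects away from the data region.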
diff --git a/R/sim.Krig.R b/R/sim.Krig.R
new file mode 100644
index 0000000..08a0bcc
--- /dev/null
+++ b/R/sim.Krig.R
@@ -0,0 +1,139 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"sim.Krig" <- function(object, xp, M = 1,
+ verbose = FALSE, ...) {
+ sigma2 <- object$best.model[2]
+ rho <- object$best.model[3]
+ #
+ # check for unique rows of xp
+ if (any(duplicated(xp))) {
+        stop("prediction locations should be unique")
+ }
+ #
+ # set up various sizes of arrays
+ m <- nrow(xp)
+ n <- nrow(object$xM)
+ N <- length(object$y)
+ if (verbose) {
+ cat(" m,n,N", m, n, N, fill = TRUE)
+ }
+ #transform the new points
+ xc <- object$transform$x.center
+ xs <- object$transform$x.scale
+ xpM <- scale(xp, xc, xs)
+ # complete set of points for prediction.
+ # check for replicates and adjust
+ x <- rbind(object$xM, xpM)
+ if (verbose) {
+ cat("full x ", fill = TRUE)
+ print(x)
+ }
+ #
+ # find indices of all rows of xp that correspond to rows of
+ # xM and then collapse x to unique rows.
+ rep.x.info <- fields.duplicated.matrix(x)
+ x <- as.matrix(x[!duplicated(rep.x.info), ])
+
+ if (verbose) {
+ cat("full x without duplicates ", fill = TRUE)
+ print(x)
+ }
+
+ N.full <- nrow(x)
+ if (verbose) {
+ cat("N.full", N.full, fill = TRUE)
+ }
+ # these give locations in x matrix to reconstruct xp matrix
+ xp.ind <- rep.x.info[(1:m) + n]
+ if (verbose) {
+ print(N.full)
+ print(x)
+ }
+ if (verbose) {
+ cat("reconstruction of xp from collapsed locations",
+ fill = TRUE)
+ print(x[xp.ind, ])
+ }
+ #
+ # Sigma is full covariance at the data locations and at prediction points.
+ #
+ Sigma <- rho * do.call(object$cov.function.name, c(object$args,
+ list(x1 = x, x2 = x)))
+ #
+ # square root of Sigma for simulating field
+ # Cholesky is fast but not very stable.
+ #
+    # the following code line is similar to chol(Sigma) -> Schol
+ # but adds possible additional arguments controlling the Cholesky
+ # from the Krig object.
+ #
+ Schol <- do.call("chol", c(list(x = Sigma), object$chol.args))
+ #
+ # output matrix to hold results
+ N.full <- nrow(x)
+ out <- matrix(NA, ncol = m, nrow = M)
+ #
+ # find conditional mean field from initial fit
+ # don't multiply by sd or add mean if this is
+ # a correlation model fit.
+ # (these are added at the predict step).
+ h.hat <- predict(object, xp, ...)
+ # marginal standard deviation of field.
+ temp.sd <- 1
+ #
+ #
+    # this is not 1 if the Krig object is a correlation model.
+ if (object$correlation.model) {
+ if (!is.na(object$sd.obj[1])) {
+ temp.sd <- predict(object$sd.obj, x)
+ }
+ }
+ #
+ # Define W2i for simulating errors.
+ #
+ W2i <- Krig.make.Wi(object)$W2i
+ for (k in 1:M) {
+ # simulate full field
+ h <- t(Schol) %*% rnorm(N.full)
+ # value of simulated field at observations
+ #
+ # NOTE: fixed part of model (null space) need not be simulated
+ # because the estimator is unbiased for this part.
+ # the variability is still captured because the fixed part
+ # is still estimated as part of the predict step below
+ h.data <- h[1:n]
+ # expand the values according to the replicate pattern
+ h.data <- h.data[object$rep.info]
+ # create synthetic data
+ y.synthetic <- h.data + sqrt(sigma2) * W2i %d*% rnorm(N)
+ # predict at xp using these data
+ # and subtract from 'true' value
+ # note that true values of field have to be expanded in the
+ # case of common locations between xM and xp.
+ h.true <- (h[xp.ind])
+ temp.error <- predict(object, xp, y = y.synthetic, eval.correlation.model = FALSE, ...) -
+ h.true
+ # add the error to the actual estimate (conditional mean)
+ # and adjust by marginal standard deviation
+ out[k, ] <- h.hat + temp.error * temp.sd
+ }
+ out
+}
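The loop above is the standard conditional-simulation recipe: draw an unconditional field at the data plus prediction locations, corrupt its data values with measurement noise, krige those synthetic data, and add the resulting prediction error back onto the conditional mean. A minimal zero-mean, simple-kriging sketch of the same recipe (the fixed polynomial part, replicate handling, and correlation-model rescaling in the R code are all omitted; names and values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def expcov(s, t, rho=1.0, theta=0.5):
    # exponential covariance rho * exp(-|s - t| / theta)
    return rho * np.exp(-np.abs(s[:, None] - t[None, :]) / theta)

x = np.array([0.05, 0.33, 0.55, 0.82, 0.95])   # data locations (1-D)
xp = np.linspace(0.0, 1.0, 11)                 # prediction locations
sigma2 = 0.01                                  # nugget (noise) variance
y = np.sin(2 * np.pi * x) + np.sqrt(sigma2) * rng.normal(size=x.size)

Kxx = expcov(x, x) + sigma2 * np.eye(x.size)
A = expcov(xp, x) @ np.linalg.inv(Kxx)         # simple-kriging weights
h_hat = A @ y                                  # conditional mean at xp

# one conditional draw, following the sim.Krig recipe:
xall = np.concatenate([x, xp])                 # unique locations only
L = np.linalg.cholesky(expcov(xall, xall) + 1e-10 * np.eye(xall.size))
h = L @ rng.normal(size=xall.size)             # unconditional 'true' field
y_syn = h[:x.size] + np.sqrt(sigma2) * rng.normal(size=x.size)
# kriging error on the synthetic data, added back to the conditional mean
draw = h_hat + (A @ y_syn - h[x.size:])
```

Repeating the last block M times gives an ensemble whose spread reflects the prediction uncertainty at xp.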
diff --git a/R/sim.Krig.approx.R b/R/sim.Krig.approx.R
new file mode 100644
index 0000000..d2e473e
--- /dev/null
+++ b/R/sim.Krig.approx.R
@@ -0,0 +1,131 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"sim.Krig.approx" <- function(object, grid.list = NULL,
+ M = 1, nx = 40, ny = 40, verbose = FALSE,
+ extrap = FALSE,...) {
+ # check that this is a stationary covariance
+ if (object$cov.function.name != "stationary.cov") {
+ stop("covariance function is not stationary.cov")
+ }
+ # create grid if not passed
+ if ( is.null(grid.list) ) {
+ grid.list <- fields.x.to.grid(object$x, nx = nx, ny = ny)
+ }
+ #
+ # extract what are the x and y and their lengths
+ #
+ temp <- parse.grid.list(grid.list)
+ nx <- temp$nx
+ ny <- temp$ny
+ #
+ # coerce grid list to have x and y components
+ #
+ glist <- list(x = temp$x, y = temp$y)
+ # figure out what sigma and rho should be
+ sigma2 <- object$best.model[2]
+ rho <- object$best.model[3]
+ #
+ # set up various sizes of arrays
+ m <- nx * ny
+ n <- nrow(object$xM)
+ N <- n
+ if (verbose) {
+ cat(" m,n,N ", m, n, N, fill = TRUE)
+ }
+ #transform the new points
+ xc <- object$transform$x.center
+ xs <- object$transform$x.scale
+ # xpM <- scale(xp, xc, xs)
+ if (verbose) {
+ cat("center and scale", fill = TRUE)
+ print(xc)
+ print(xs)
+ }
+ #
+ # set up for simulating on a grid
+ #
+ cov.obj <- do.call("stationary.image.cov", c(object$args,
+ list(setup = TRUE, grid = glist)))
+ out <- array(NA, c(nx, ny, M))
+ #
+ # find conditional mean field from initial fit
+ # don't multiply by sd or add mean if this is
+ # a correlation model fit.
+ # (these are added at the predict step).
+ # from now on all predicted values are on the grid
+ # represented by a matrix
+ h.hat <- predictSurface(object, grid.list = grid.list, extrap = extrap,...)$z
+ if (verbose) {
+ cat("mean predicted field", fill = TRUE)
+ image.plot(h.hat)
+ }
+ # empty surface object to hold ('truth') simulated fields
+ h.true <- list(x = glist$x, y = glist$y, z = matrix(NA, nx,
+ ny))
+ # covariance matrix for observations
+ W2i <- Krig.make.Wi(object, verbose = verbose)$W2i
+ if (verbose) {
+ cat("dim of W2i", dim(W2i), fill = TRUE)
+ }
+ ####
+ ### begin the big loop
+ ###
+ for (k in 1:M) {
+ # simulate full field
+ h.true$z <- sqrt(object$rhohat) * sim.rf(cov.obj)
+ if (verbose) {
+ cat("mean predicted field", fill = TRUE)
+ image.plot(h.true)
+ }
+ # value of simulated field at observations
+ #
+ # NOTE: fixed part of model (null space) need not be simulated
+ # because the estimator is unbiased for this part.
+ # the variability is still captured because the fixed part
+ # is still estimated as part of the predict step below
+ #
+ # bilinear interpolation to approximate values at data locations
+ #
+ h.data <- interp.surface(h.true, object$xM)
+ if (verbose) {
+ cat("synthetic true values", h.data, fill = TRUE)
+ }
+ # create synthetic data
+        # NOTE: these are actually the 'yM's, i.e. the y's
+        # having been collapsed to replicate means.
+ y.synthetic <- h.data + sqrt(sigma2) * W2i %d*% rnorm(N)
+ if (verbose) {
+ cat("synthetic data", y.synthetic, fill = TRUE)
+ }
+ # predict at grid using these data
+ # and subtract from 'true' value
+ temp.error <- predictSurface(object, grid.list = grid.list,
+ yM = y.synthetic, eval.correlation.model = FALSE,
+ extrap = TRUE,...)$z - h.true$z
+ if (verbose) {
+ cat("mean predicted field", fill = TRUE)
+ image.plot(temp.error)
+ }
+ # add the error to the actual estimate (conditional mean)
+ out[, , k] <- h.hat + temp.error
+ }
+ return(list(x = glist$x, y = glist$y, z = out))
+}
diff --git a/R/sim.mKrig.approx.R b/R/sim.mKrig.approx.R
new file mode 100644
index 0000000..e00d0a8
--- /dev/null
+++ b/R/sim.mKrig.approx.R
@@ -0,0 +1,134 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"sim.mKrig.approx" <- function(mKrigObject, predictionPoints = NULL,
+ predictionPointsList = NULL, simulationGridList = NULL, gridRefinement = 5,
+ gridExpansion = 1 + 1e-07, M = 1, nx = 40, ny = 40, nxSimulation = NULL,
+ nySimulation = NULL, delta = NULL, verbose = FALSE,...) {
+ if (ncol(mKrigObject$x) != 2) {
+ stop("conditional simulation only implemented for 2 dimensions")
+ }
+ # create prediction set of points based on what is passed
+ if (is.null(predictionPoints)) {
+ predictionPoints <- makePredictionPoints(mKrigObject,
+ nx, ny, predictionPointsList)
+ }
+ if (is.null(simulationGridList)) {
+ simulationGridList <- makeSimulationGrid(mKrigObject,
+ predictionPoints, nx, ny, nxSimulation, nySimulation,
+ gridRefinement, gridExpansion)
+ }
+ nxSimulation <- length(simulationGridList$x)
+ nySimulation <- length(simulationGridList$y)
+ sigma <- mKrigObject$sigma.MLE
+ rho <- mKrigObject$rho.MLE
+ #
+ # set up various sizes of arrays
+ nObs <- nrow(mKrigObject$x)
+ if (verbose) {
+ cat("nObs, sigma, rho", nObs, sigma, rho, fill = TRUE)
+        cat("simulationGridList", fill = TRUE)
+ print( t( stats( simulationGridList)))
+ }
+ # set up object for simulating on a grid using circulant embedding
+ covarianceObject <- stationary.image.cov(setup = TRUE,
+ grid = simulationGridList,
+ cov.function = mKrigObject$cov.function, cov.args = mKrigObject$args,
+ delta = delta)
+ if (verbose) {
+ cat("dim of full circulant matrix ", dim(covarianceObject$wght),
+ fill = TRUE)
+ }
+ #
+ # find conditional mean field from initial fit
+ hHat <- predict(mKrigObject, xnew = predictionPoints, grid.list = predictionPointsList, ...)
+ # setup output array to hold ensemble
+ out <- matrix(NA, length(hHat), M)
+ # empty image object to hold simulated fields
+ hTrue <- c(simulationGridList, list(z = matrix(NA, nxSimulation,
+ nySimulation)))
+ ##########################################################################################
+ ### begin the big loop
+ ##########################################################################################
+ xData <- mKrigObject$x
+ weightsError <- mKrigObject$weights
+ for (k in 1:M) {
+ # simulate full field
+ if (verbose) {
+ cat(k, " ")
+ }
+ hTrue$z <- sqrt(rho) * sim.rf(covarianceObject)
+ #
+ # NOTE: fixed part of model (null space) need not be simulated
+ # because the estimator is unbiased for this part.
+ # the variability is still captured because the fixed part
+ # is still estimated as part of the predict step below
+ #
+ # bilinear interpolation to approximate values at data locations
+ #
+ hData <- interp.surface(hTrue, xData)
+ hPredictionGrid <- interp.surface(hTrue, predictionPoints)
+ ySynthetic <- hData + sigma * 1/sqrt(weightsError) *
+ rnorm(nObs)
+ if (verbose) {
+ cat("stats for synthetic values", fill = TRUE)
+ print(t(stats(ySynthetic)))
+ }
+ # predict at grid using these data
+ # and subtract from synthetic 'true' value
+ spatialError <- predict(mKrigObject, xnew = predictionPoints,
+ grid.list = predictionPointsList, ynew = ySynthetic, ...) -
+ hPredictionGrid
+ # add the error to the actual estimate (conditional mean)
+ out[, k] <- hHat + spatialError
+ }
+ return(list(predictionPoints = predictionPoints, Ensemble = out,
+ call = match.call()))
+}
+
+makeSimulationGrid <- function(mKrigObject, predictionPoints,
+ nx, ny, nxSimulation, nySimulation, gridRefinement, gridExpansion) {
+    # if a prediction grid is passed, use it to determine the simulation grid.
+ #
+ if (is.null(nxSimulation) | is.null(nySimulation)) {
+ nxSimulation <- nx * gridRefinement * gridExpansion
+ nySimulation <- ny * gridRefinement * gridExpansion
+ }
+    # Note: NULL values are transparent here because of the 'c' operator.
+ xRange <- range(c(mKrigObject$x[, 1], predictionPoints[,
+ 1]))
+ yRange <- range(c(mKrigObject$x[, 2], predictionPoints[,
+ 2]))
+ midpointX <- (xRange[2] + xRange[1])/2
+ midpointY <- (yRange[2] + yRange[1])/2
+ deltaX <- gridExpansion * (xRange[2] - xRange[1])/2
+ deltaY <- gridExpansion * (yRange[2] - yRange[1])/2
+ return(list(x = seq(midpointX - deltaX, midpointX + deltaX,
+ , nxSimulation), y = seq(midpointY - deltaY, midpointY +
+ deltaY, , nySimulation)))
+}
+makePredictionPoints <- function(mKrigObject, nx,
+ ny, predictionPointsList) {
+ if (is.null(predictionPointsList)) {
+ predictionPointsList <- fields.x.to.grid(mKrigObject$x,
+ nx = nx, ny = ny)
+ }
+ return(make.surface.grid(predictionPointsList))
+}
diff --git a/R/sim.rf.R b/R/sim.rf.R
new file mode 100644
index 0000000..74edf3f
--- /dev/null
+++ b/R/sim.rf.R
@@ -0,0 +1,32 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"sim.rf" <- function(obj) {
+ n <- obj$n
+ m <- obj$m
+ M <- obj$M
+ N <- obj$N
+    if (any(Re(obj$wght) < 0)) {
+        stop("FFT of covariance has negative values")
+    }
+ z <- fft(matrix(rnorm(N * M), ncol = N, nrow = M))
+ Re(fft(sqrt(obj$wght) * z, inverse = TRUE))[1:m, 1:n]/sqrt(M *
+ N)
+}
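sim.rf colours white noise in the Fourier domain: obj$wght holds the eigenvalues of a circulant (torus) embedding of the covariance, and multiplying the FFT of Gaussian noise by their square root yields a field with that covariance. A 1-D sketch using NumPy's normalised ifft, so the scaling differs from the unnormalised R fft (names and the decay constant are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 256
# first row of a circulant covariance: an exponential covariance wrapped
# onto a ring of M points (for a convex decreasing covariance this
# embedding is expected to stay nonnegative definite)
lag = np.minimum(np.arange(M), M - np.arange(M))
c = np.exp(-lag / 20.0)
wght = np.fft.fft(c).real      # eigenvalues of the circulant covariance
assert (wght > 0).all()        # the analogue of sim.rf's negativity check

# colour white noise: the result is real with circulant covariance c
z = np.fft.fft(rng.normal(size=M))
field = np.real(np.fft.ifft(np.sqrt(wght) * z))
```

In 2-D the same spectral filtering runs over an M x N grid, which is what the matrix version in sim.rf does.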
diff --git a/R/sim.spatialProcess.R b/R/sim.spatialProcess.R
new file mode 100644
index 0000000..23684bc
--- /dev/null
+++ b/R/sim.spatialProcess.R
@@ -0,0 +1,128 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+sim.spatialProcess<- function(object, xp, M = 1,
+ verbose = FALSE, ...) {
+ sigma2 <- (object$sigma.MLE.FULL)^2
+ rho <- object$rho.MLE.FULL
+ xp<- as.matrix( xp)
+ #
+ # check for unique rows of data locations
+ if( any(duplicated(object$x)) ){
+ stop("data locations should be unique")
+ }
+ #
+ # set up various sizes of arrays
+ m <- nrow(xp)
+ n<- nrow( object$x)
+ N <- length(object$y)
+ if (verbose) {
+ cat("m,n,N", m,n, N, fill = TRUE)
+ }
+    #
+    # find indices of all rows of xp that correspond to rows of
+    # object$x and then collapse x to unique rows.
+    # (duplicated data locations were already rejected above)
+    if (any(duplicated(xp))) {
+        stop("Cannot handle repeated prediction locations")
+    }
+ #
+ x<- as.matrix(rbind( object$x, xp))
+ rep.x.info <- fields.duplicated.matrix(x)
+    # find unique locations.
+ ind<- !duplicated(rep.x.info)
+ xUnique <- as.matrix(x[ind, ])
+ if (verbose) {
+ cat('full x and predicted locations without duplicates ', fill = TRUE)
+ print(xUnique)
+ }
+ N.full <- nrow(xUnique)
+ if (verbose) {
+ cat("N.full", N.full, fill = TRUE)
+ }
+    if (N.full > 5000) {
+        cat("WARNING: number of locations for conditional simulation is large (> 5000);",
+            "this may take some time to compute or exhaust memory.", fill = TRUE)
+    }
+ # these give locations in x matrix to reconstruct xp matrix
+ xp.ind <- rep.x.info[(1:m) + n]
+ if (verbose) {
+ print(N.full)
+ print(xUnique)
+ }
+ if (verbose) {
+ cat("reconstruction of xp from collapsed locations",
+ fill = TRUE)
+ print(xUnique[xp.ind, ])
+ }
+ #
+ # Sigma is full covariance at the data locations and at prediction points.
+ #
+
+ Sigma <- rho * do.call(object$cov.function.name, c(object$args,
+ list(x1 = xUnique, x2 = xUnique)))
+ #
+ # square root of Sigma for simulating field
+ # Cholesky is fast but not very stable.
+ #
+    # the following code line is similar to chol(Sigma) -> Schol
+    # but adds possible additional arguments controlling the Cholesky
+    # from the mKrig object.
+    # x has been winnowed down to unique rows so that
+    # Sigma has full rank.
+ #
+ Schol <- do.call("chol", c(list(x = Sigma), object$chol.args))
+ #
+ # output matrix to hold results
+ out <- matrix(NA, ncol = m, nrow = M)
+ #
+ # find conditional mean field from initial fit
+ # (these are added at the predict step).
+ #
+ h.hat <- predict(object, xnew=xp, ...)
+ #
+ # NOTE: fixed part of model (null space) need not be simulated
+ # because the estimator is unbiased for this part.
+ # the variability is still captured because the fixed part
+ # is still estimated as part of the predict step below
+ # create synthetic data
+ for (k in 1:M) {
+ # simulate full field
+ h <- t(Schol) %*% rnorm(N.full)
+ # value of simulated field at observations
+ h.data <- h[1:n]
+ #
+ y.synthetic <- h.data + sqrt(sigma2/object$weights)*rnorm(n)
+ # predict at xp using these data
+ # and subtract from 'true' value,
+ # note that true values of field have to be expanded in the
+ # case of common locations between object$x and xp.
+ h.true <- (h[xp.ind])
+# temp.error <- predict(object, xnew=xp, ynew = y.synthetic,
+# Z=Zp, ...) - h.true
+ temp.error <- predict(object, xnew=xp, ynew = y.synthetic,
+ ...) - h.true
+ # add the error to the actual estimate (conditional mean)
+ out[k, ] <- h.hat + temp.error
+ }
+ out
+}
diff --git a/R/smooth.2d.R b/R/smooth.2d.R
new file mode 100644
index 0000000..f68eee7
--- /dev/null
+++ b/R/smooth.2d.R
@@ -0,0 +1,75 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"smooth.2d" <- function(Y, ind = NULL, weight.obj = NULL,
+ setup = FALSE, grid = NULL, x = NULL, nrow = 64, ncol = 64,
+ surface = TRUE, cov.function = gauss.cov, Mwidth = NULL,
+ Nwidth = NULL, ...) {
+ temp <- as.image(Y, ind, grid = grid, nx = nrow, ny = ncol,
+ x = x)
+ Y <- temp$z
+ NN <- temp$weights
+ grid <- list(x = temp$x, y = temp$y)
+ if (is.null(weight.obj)) {
+ dx <- grid$x[2] - grid$x[1]
+ dy <- grid$y[2] - grid$y[1]
+ m <- length(grid$x)
+ n <- length(grid$y)
+ if (is.null(Mwidth))
+ M <- 2 * m
+ else {
+ M <- m + Mwidth
+ }
+ if (is.null(Nwidth))
+ N <- 2 * n
+ else {
+ N <- n + Nwidth
+ }
+ xg <- make.surface.grid(list((1:M) * dx, (1:N) * dy))
+ center <- matrix(c((dx * M)/2, (dy * N)/2), nrow = 1,
+ ncol = 2)
+ out <- cov.function(xg, center, ...)
+ out <- as.surface(xg, c(out))$z
+ temp <- matrix(0, nrow = M, ncol = N)
+ temp[M/2, N/2] <- 1
+ wght <- fft(out)/(fft(temp) * M * N)
+ weight.obj <- list(m = m, n = n, grid = grid, N = N,
+ M = M, wght = wght, call = match.call())
+ if (setup) {
+ return(weight.obj)
+ }
+ }
+ temp <- matrix(0, nrow = weight.obj$M, ncol = weight.obj$N)
+ # use the dimensions saved in weight.obj; m and n are only defined
+ # when the weight object is created within this call
+ temp[1:weight.obj$m, 1:weight.obj$n] <- Y
+ temp[is.na(temp)] <- 0
+ temp2 <- Re(fft(fft(temp) * weight.obj$wght, inverse = TRUE))[1:weight.obj$m,
+ 1:weight.obj$n]
+ temp <- matrix(0, nrow = weight.obj$M, ncol = weight.obj$N)
+ temp[1:weight.obj$m, 1:weight.obj$n] <- NN
+ temp[is.na(temp)] <- 0
+ temp3 <- Re(fft(fft(temp) * weight.obj$wght, inverse = TRUE))[1:weight.obj$m,
+ 1:weight.obj$n]
+ if (!surface)
+ (temp2/temp3)
+ else {
+ list(x = weight.obj$grid$x, y = weight.obj$grid$y, z = temp2/temp3,
+ index = ind)
+ }
+}
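`smooth.2d` computes the kernel estimate as a ratio of two convolutions: the kernel applied to the binned sums `Y` (numerator) and to the bin counts `NN` (denominator), both done with FFTs on the padded grid. A direct-convolution 1-D Python analogue of that idea (a sketch of the ratio-of-convolutions step, not the package's FFT implementation):

```python
def kernel_smooth(values, counts, kernel):
    """Nadaraya-Watson estimate on a grid as a ratio of two convolutions.

    values[i] = sum of observations falling in grid cell i (0 where empty),
    counts[i] = number of observations in cell i.
    """
    half = len(kernel) // 2

    def conv(sig):
        out = []
        for i in range(len(sig)):
            s = 0.0
            for j, w in enumerate(kernel):
                k = i + j - half
                if 0 <= k < len(sig):
                    s += w * sig[k]
            out.append(s)
        return out

    num = conv(values)   # kernel-weighted sums
    den = conv(counts)   # kernel-weighted counts
    return [n / d if d > 0 else float("nan") for n, d in zip(num, den)]
```

Empty cells simply contribute zero to both sums, which is why the R code zeroes out NAs before transforming.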
diff --git a/R/spam_2lz.R b/R/spam_2lz.R
new file mode 100644
index 0000000..b7896b9
--- /dev/null
+++ b/R/spam_2lz.R
@@ -0,0 +1,69 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+spind2full <- function(obj) {
+ # create empty matrix and stuff at non zero locations
+ temp <- matrix(0, obj$da[1], obj$da[2])
+ temp[obj$ind] <- obj$ra
+ return(temp)
+}
+spind2spam <- function(obj, add.zero.rows = TRUE) {
+ # Check if there is one or more missing rows. If so either stop or fill in these rows
+ # with a zero value
+ rows.present <- unique(obj$ind[, 1])
+ if (length(rows.present) < obj$da[1]) {
+ # The missing row indices
+ ind.missing <- (1:obj$da[1])[-rows.present]
+ N.missing <- length(ind.missing)
+ if (!add.zero.rows) {
+ cat(N.missing, " missing row(s)", fill = TRUE)
+ stop("Can not coerce to spam format with add.zero.rows==FALSE")
+ }
+ else {
+ # put a hard zero in the first column of each missing row
+ obj$ind <- rbind(obj$ind, cbind(ind.missing, rep(1,
+ N.missing)))
+ obj$ra <- c(obj$ra, rep(0, N.missing))
+ }
+ }
+ # sort on rows and then columns to make sure they are in order
+ ii <- order(obj$ind[, 1], obj$ind[, 2])
+ # shuffle indices and entries so they are in row order
+ obj$ind <- obj$ind[ii, ]
+ obj$ra <- obj$ra[ii]
+ ia <- obj$ind[, 1]
+ # define total number of nonzero elements
+ M <- length(ia)
+ # find places where rows change
+ hold <- diff(c(0, ia, M + 1))
+ # Note: 1:M is the cumsum for elements.
+ ia <- (1:(M + 1))[hold != 0]
+ return(new("spam", entries = as.numeric(obj$ra), colindices = as.integer(obj$ind[,
+ 2]), rowpointers = as.integer(ia), dimension = as.integer(obj$da)))
+}
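`spind2spam` converts a triplet (row, column, value) representation into spam's row-pointer storage: sort the entries by row then column, then record where each row's run of entries begins. A small Python sketch of that bookkeeping (1-indexed, matching the spind convention; it omits the zero-padding of empty rows that the R function performs):

```python
def triplet_to_rowpointer(rows, cols, vals, nrow):
    """Sort (row, col, value) triplets into row order and build row pointers.

    rows/cols are 1-indexed; the returned pointer list has nrow + 1 entries,
    with ptr[r-1] giving the (1-based) index of the first entry of row r and
    ptr[-1] == number of nonzeros + 1, as in the spam class.
    """
    order = sorted(range(len(vals)), key=lambda i: (rows[i], cols[i]))
    rows = [rows[i] for i in order]
    cols = [cols[i] for i in order]
    vals = [vals[i] for i in order]
    counts = [0] * (nrow + 1)
    for r in rows:
        counts[r] += 1
    ptr = [1]
    for r in range(1, nrow + 1):
        ptr.append(ptr[-1] + counts[r])
    return cols, vals, ptr
```

`spam2spind` inverts this: `diff` of the pointers gives the number of nonzeros per row, so repeating each row index that many times recovers the triplet row column.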
+spam2spind <- function(obj) {
+ # diff gives the number of nonzero elements in each row
+ I <- rep((1:obj@dimension[1]), diff(obj@rowpointers))
+ list(ind = cbind(I, obj@colindices), da = obj@dimension,
+ ra = obj@entries)
+}
+spam2full <- function(obj) {
+ spind2full(spam2spind(obj))
+}
diff --git a/R/spatialProcess.R b/R/spatialProcess.R
new file mode 100644
index 0000000..e0c36a7
--- /dev/null
+++ b/R/spatialProcess.R
@@ -0,0 +1,117 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+spatialProcess <- function(x, y, weights = rep(1, nrow(x)), Z = NULL,
+ mKrig.args = list( m=2),
+ cov.function = "stationary.cov",
+ cov.args = list(Covariance = "Matern", smoothness = 1),
+ theta = NULL,
+ theta.start = NULL,
+ lambda.start = .5,
+ theta.range = NULL,
+ abstol = 1e-4,
+ na.rm = TRUE,
+ verbose = FALSE,
+ REML = FALSE,
+ ...) {
+
+# NOTE all ... information is assumed to be for the cov.args list
+# overwrite the default choices (some R arcana!)
+ ind<- match( names( cov.args), names(list(...) ) )
+ cov.args <- c(cov.args[is.na(ind)], list(...))
+ if( verbose){
+ cat("extra arguments from ... " , names( list(...)) , fill=TRUE)
+ cat(" full list from cov.args: ", names(cov.args) )
+ }
+
+# NOTE: the switch for finding the theta MLE is is.null(theta)
+ if( !is.null(theta)){
+ par.grid<- list( theta = theta)
+ MLEInfo <-mKrigMLEGrid(x, y, weights = weights, Z= Z,
+ mKrig.args = mKrig.args,
+ cov.fun = cov.function,
+ cov.args = cov.args,
+ par.grid = par.grid,
+ lambda = lambda.start,
+ lambda.profile = TRUE,
+ na.rm = na.rm,
+ verbose = FALSE,
+ REML = REML)
+ lambda.MLE <- MLEInfo$lambda.MLE
+ theta.MLE <- NA
+ thetaModel <- theta
+ if( verbose){
+ print( MLEInfo$summary)
+ }
+ }
+ else{
+#
+# NOTE: MLESpatialProcess omits NAs
+ MLEInfo <- MLESpatialProcess(x, y, weights = weights, Z=Z,
+ mKrig.args = mKrig.args,
+ cov.function = cov.function,
+ cov.args = cov.args,
+ theta.start = theta.start,
+ theta.range = theta.range,
+ gridN = 20,
+ lambda.start = lambda.start,
+ abstol = abstol,
+ verbose = FALSE,
+ REML = REML
+ )
+ lambda.MLE <- MLEInfo$MLEJoint$pars.MLE[1]
+ theta.MLE<- MLEInfo$MLEJoint$pars.MLE[2]
+ thetaModel<- theta.MLE
+ }
+#
+ if( verbose){
+ cat("Summary from joint optimization", fill=TRUE)
+ print( MLEInfo$MLEJoint$summary )
+ print( MLEInfo$MLEJoint$pars.MLE)
+ }
+# now fit spatial model with MLE for theta (range parameter)
+# or the value supplied in the call
+# reestimate the other parameters for simplicity to get the complete mKrig object
+ obj <- do.call( "mKrig",
+ c( list(x=x,
+ y=y,
+ weights=weights,
+ Z=Z),
+ mKrig.args,
+ list( na.rm=na.rm),
+ list(
+ cov.function = cov.function,
+ cov.args = cov.args,
+ lambda = lambda.MLE,
+ theta = thetaModel
+ )
+ )
+ )
+ obj <- c(obj, list( MLEInfo = MLEInfo,
+ thetaModel= thetaModel,
+ theta.MLE = theta.MLE,
+ lambda.MLE = lambda.MLE, summary=MLEInfo$summary)
+ )
+# replace call to mKrig with this top level one
+ obj$call<- match.call()
+ class(obj) <- c( "spatialProcess","mKrig")
+
+ return(obj)
+}
diff --git a/R/splint.R b/R/splint.R
new file mode 100644
index 0000000..13a183a
--- /dev/null
+++ b/R/splint.R
@@ -0,0 +1,91 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"splint" <- function(x, y, xgrid, wt = NULL, derivative = 0,
+ lam = 0, df = NA, lambda = NULL, nx=NULL) {
+ #
+ # reform calling args if passed as a matrix or list
+
+ if (is.matrix(x)) {
+ if (ncol(x) > 1) {
+ xgrid <- y
+ y <- x[, 2]
+ x <- x[, 1]
+ }
+ }
+ if (is.list(x)) {
+ xgrid <- y
+ y <- x$y
+ x <- x$x
+ }
+ if (any(duplicated(x))) {
+ stop("duplicated x values, use sreg")
+ }
+ if ((derivative > 2) | (derivative < 0))
+ stop("derivative must be 0,1,2")
+ if (length(x) != length(y))
+ stop("Lengths of x and y must match")
+ n <- length(x)
+ #default values for weights
+ # NOTE: weights do not matter when interpolating (lam==0)
+ if (is.null(wt)) {
+ wt <- rep(1, n)
+ }
+ # find lambda from eff degrees of freedom if it is passed
+ if (!is.na(df)) {
+ if ((df < 2) | (df > n)) {
+ stop("df out of range")
+ }
+ lam <- sreg.df.to.lambda(df, x, wt)
+ }
+ # use lambda if it is passed
+ if (!is.null(lambda)) {
+ lam <- lambda
+ }
+ igcv <- ifelse(lam == 0, 2, 0)
+ # call to FORTRAN -- only return the evaluated points (ygrid).
+ if( !is.null(nx)){
+ xgrid <- seq(min(x), max(x), length.out = nx)
+ }
+
+ ygrid<- .Fortran("css",PACKAGE="fields",
+ h = as.double(ifelse(igcv == 2, 1, log(lam))),
+ as.integer(n),
+ as.double(x),
+ as.double(y),
+ wt = as.double(1/sqrt(wt)),
+ sy = as.double(rep(0, n)),
+ as.double(1),
+ as.double(1),
+ as.double(1),
+ as.integer(length(xgrid)),
+ as.double(xgrid),
+ ygrid = as.double(rep(0, length(xgrid))),
+ job = as.integer(c(igcv, 3, 0)),
+ as.integer(derivative),
+ as.integer(0)
+ )$ygrid
+    if (!is.null(nx)) {
+        return(list(x = xgrid, y = ygrid))
+    } else {
+        return(ygrid)
+    }
+}
diff --git a/R/sreg.family.R b/R/sreg.family.R
new file mode 100644
index 0000000..f5519d7
--- /dev/null
+++ b/R/sreg.family.R
@@ -0,0 +1,317 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"sreg" <- function(x, y, lambda = NA, df = NA, offset = 0,
+ weights = rep(1, length(x)), cost = 1, nstep.cv = 80, tol = 1e-05,
+ find.diagA = TRUE, trmin = 2.01, trmax = NA, lammin = NA,
+ lammax = NA, verbose = FALSE, do.cv = TRUE, method = "GCV",
+ rmse = NA, na.rm = TRUE) {
+ call <- match.call()
+ out <- list()
+ out$call <- match.call()
+ class(out) <- c("sreg")
+ out$cost <- cost
+ out$offset <- offset
+ out$method <- method
+ #
+ # some obscure components so that some of the Krig functions
+ # work without size of 'null space'
+ out$nt <- 2
+ out$knots <- NULL
+ #
+ # various checks on x and y including looking for NAs.
+ out2 <- Krig.check.xY(x, y, NULL, weights, na.rm, verbose = verbose)
+ out <- c(out, out2)
+ # find duplicate rows of the x vector
+ # unique x values are now in out$xM and the means of
+ # y are in out$yM.
+ out <- Krig.replicates(out, verbose = verbose)
+ # number of unique locations
+ out$np <- length(out$yM)
+ # now set maximum of trace for upper bound of GCV grid search
+ if (is.na(trmax)) {
+ trmax <- out$np * 0.99
+ }
+ if (verbose) {
+ print(out)
+ }
+ #
+ # sorted unique values for prediction to make line plotting quick
+ xgrid <- sort(out$xM)
+ out$trace <- NA
+ #
+ # figure out if the GCV function should be minimized
+ # and which value of lambda should be used for the estimate
+ # old code used lam as argument, copy it over from lambda
+ lam <- lambda
+ if (is.na(lam[1]) & is.na(df[1])) {
+ do.cv <- TRUE
+ }
+ else {
+ do.cv <- FALSE
+ }
+ #
+ # find lambda's if df's are given
+ if (!is.na(df[1])) {
+ lam <- rep(0, length(df))
+ for (k in 1:length(df)) {
+ lam[k] <- sreg.df.to.lambda(df[k], out$xM, out$weightsM)
+ }
+ }
+ #
+ if (verbose) {
+ cat("lambda grid", fill = TRUE)
+ print(lam)
+ }
+ if (do.cv) {
+ a <- gcv.sreg(out, lambda.grid = lam, cost = cost, offset = offset,
+ nstep.cv = nstep.cv, verbose = verbose, trmin = trmin,
+ trmax = trmax, rmse = rmse)
+ # if the spline is evaluated at the GCV solution
+ # wipe out lam grid
+ # and just use GCV lambda.
+ out$gcv.grid <- a$gcv.grid
+ out$lambda.est <- a$lambda.est
+ #
+ # save GCV estimate if that is what is needed
+ lam <- a$lambda.est[method, "lambda"]
+ out$shat.GCV <- a$lambda.est[method, "shat"]
+ }
+ #
+ # now evaluate spline at lambda either from specified grid or GCV value.
+ b <- list()
+ # lam can either be a grid or just the GCV value
+ NL <- length(lam)
+ NG <- length(xgrid)
+ h <- log(lam)
+ fitted.values<- residuals <- matrix(NA, ncol = NL, nrow = length(out$y))
+ predicted <- matrix(NA, ncol = NL, nrow = NG)
+ trace <- rep(NA, NL)
+ job <- as.integer(c(0, 3, 0))
+ if (find.diagA) {
+ diagA <- matrix(NA, ncol = NL, nrow = out$np)
+ # add switch to find diag of A.
+ job <- as.integer(c(3, 3, 0))
+ }
+ for (k in 1:NL) {
+ #
+ # call cubic spline FORTRAN, this is nasty looking but fast.
+ # note lambda is passed in log scale.
+ # what the routine does is controlled by array job
+ # spline solution evaluated at xgrid
+ #
+ b <- .Fortran("css", PACKAGE="fields",
+ h = as.double(h[k]),
+ npoint = as.integer(out$np),
+ x = as.double(out$xM),
+ y = as.double(out$yM),
+ wt = as.double(1/sqrt(out$weightsM)),
+ sy = as.double(rep(0, out$np)),
+ trace = as.double(0),
+ diag = as.double(c(cost, offset, rep(0, (out$np - 2)))),
+ cv = as.double(0), ngrid = as.integer(NG),
+ xg = as.double(xgrid),
+ yg = as.double(rep(0, NG)),
+ job = as.integer(job),
+ ideriv = as.integer(0),
+ ierr = as.integer(0)
+ )
+ if (find.diagA) {
+ diagA[, k] <- b$diag
+ }
+ # note distinction between yM and y, xM and x
+ # these are residuals at all data point locations not just the
+ # unique set.
+ trace[k] <- b$trace
+ fitted.values[ , k] <- splint(out$xM, b$sy, out$x)
+ residuals[ , k] <- out$y - fitted.values[,k]
+ predicted[ , k] <- b$yg
+ }
+ out$call <- call
+ out$lambda <- lam
+ out$do.cv <- do.cv
+ out$residuals <- residuals
+ out$trace <- trace
+ out$fitted.values <- fitted.values
+ out$predicted <- list(x = xgrid, y = predicted)
+ if (length(lambda) == 1) {
+ out$eff.df <- out$trace[1]
+ }
+ if (find.diagA) {
+ out$diagA <- diagA
+ }
+ class(out) <- "sreg"
+ return(out)
+}
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"sreg.df.to.lambda" <- function(df, x, wt, guess = 1,
+ tol = 1e-05) {
+ if (is.na(df))
+ return(NA)
+ n <- length(unique(x))
+ info <- list(x = x, wt = wt, df = df)
+ if (df > n) {
+ warning(" df too large to match a lambda value")
+ return(NA)
+ }
+ h1 <- log(guess)
+ ########## find upper lambda
+ for (k in 1:25) {
+ tr <- sreg.trace(h1, info)
+ if (tr <= df)
+ break
+ h1 <- h1 + 1.5
+ }
+ ########## find lower lambda
+ h2 <- log(guess)
+ for (k in 1:25) {
+ tr <- sreg.trace(h2, info)
+ if (tr >= df)
+ break
+ h2 <- h2 - 1.5
+ }
+ out <- bisection.search(h1, h2, sreg.fdf, tol = tol, f.extra = info)$x
+ exp(out)
+}
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"sreg.fdf" <- function(h, info) {
+ sreg.trace(h, info) - info$df
+}
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"sreg.fgcv" <- function(lam, obj) {
+ sreg.fit(lam, obj)$gcv
+}
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"sreg.fgcv.model" <- function(lam, obj) {
+ sreg.fit(lam, obj)$gcv.model
+}
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"sreg.fgcv.one" <- function(lam, obj) {
+ sreg.fit(lam, obj)$gcv.one
+}
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"sreg.fit" <- function(lam, obj, verbose = FALSE) {
+ np <- obj$np
+ N <- obj$N
+ nt <- 2
+ if (is.null(obj$cost)) {
+ cost <- 1
+ }
+ else {
+ cost <- obj$cost
+ }
+ if (is.null(obj$offset)) {
+ offset <- 0
+ }
+ else {
+ offset <- obj$offset
+ }
+ if (is.null(obj$shat.pure.error)) {
+ shat.pure.error <- 0
+ }
+ else {
+ shat.pure.error <- obj$shat.pure.error
+ }
+ if (is.null(obj$pure.ss)) {
+ pure.ss <- 0
+ }
+ else {
+ pure.ss <- obj$pure.ss
+ }
+ #print(np)
+ #\tNOTE h <- log(lam)
+ temp <- .Fortran("css", PACKAGE="fields",
+ h = as.double(log(lam)), npoint = as.integer(np),
+ x = as.double(obj$xM), y = as.double(obj$yM), wt = as.double(sqrt(1/obj$weightsM)),
+ sy = as.double(rep(0, np)), trace = as.double(0), diag = as.double(rep(0,
+ np)), cv = as.double(0), ngrid = as.integer(0), xg = as.double(0),
+ yg = as.double(0), job = as.integer(c(3, 0, 0)), ideriv = as.integer(0),
+ ierr = as.integer(0))
+ rss <- sum((temp$sy - obj$yM)^2 * obj$weightsM)
+ MSE <- rss/np
+ if ((N - np) > 0) {
+ MSE <- MSE + pure.ss/(N - np)
+ }
+ trA <- temp$trace
+ den <- (1 - (cost * (trA - nt - offset) + nt)/np)
+ den1 <- (1 - (cost * (trA - nt - offset) + nt)/N)
+ # If the denominator is negative then flag this as a bogus case
+ # by making the GCV function 'infinity'
+ #
+ shat <- sqrt((rss + pure.ss)/(N - trA))
+ GCV <- ifelse(den > 0, MSE/den^2, NA)
+ gcv.model <- ifelse((den > 0) & ((N - np) > 0), pure.ss/(N -
+ np) + (rss/np)/(den^2), NA)
+ gcv.one <- ifelse(den > 0, ((pure.ss + rss)/N)/(den1^2),
+ NA)
+ list(trace = trA, gcv = GCV, rss = rss, shat = shat, gcv.model = gcv.model,
+ gcv.one = gcv.one)
+}
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"sreg.fs2hat" <- function(lam, obj) {
+ sreg.fit(lam, obj)$shat^2
+}
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"sreg.trace" <- function(h, info) {
+ N <- length(info$x)
+ #\th <- log(lam)
+ temp <- .Fortran("css", PACKAGE="fields",
+ h = as.double(h),
+ npoint = as.integer(N),
+ x = as.double(info$x),
+ y = as.double(rep(0, N)),
+ wt = as.double(1/sqrt(info$wt)),
+ sy = as.double(rep(0, N)),
+ trace = as.double(0),
+ diag = as.double(rep(0, N)),
+ cv = as.double(0),
+ ngrid = as.integer(0),
+ xg = as.double(0),
+ yg = as.double(0), job = as.integer(c(3, 0, 0)),
+ ideriv = as.integer(0),
+ ierr = as.integer(0)
+ )$trace
+ return(temp)
+}
diff --git a/R/stationary.cov.R b/R/stationary.cov.R
new file mode 100644
index 0000000..eb91006
--- /dev/null
+++ b/R/stationary.cov.R
@@ -0,0 +1,148 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"stationary.cov" <- function(x1, x2=NULL, Covariance = "Exponential", Distance = "rdist",
+ Dist.args = NULL, theta = 1, V = NULL, C = NA, marginal = FALSE,
+ derivative = 0, distMat = NA, onlyUpper = FALSE, ...) {
+
+ # get covariance function arguments from call
+ cov.args <- list(...)
+ # coerce x1 and x2 to matrices
+ if (is.data.frame(x1))
+ x1 <- as.matrix(x1)
+ if (!is.matrix(x1))
+ x1 <- matrix(c(x1), ncol = 1)
+
+ if(is.null(x2))
+ x2 = x1
+ if (is.data.frame(x2))
+ x2 <- as.matrix(x2)
+ if (!is.matrix(x2)& !is.null(x2))
+ x2 <- matrix(c(x2), ncol = 1)
+ d <- ncol(x1)
+ n1 <- nrow(x1)
+ n2 <- nrow(x2)
+ #
+ # separate out a single scalar transformation and a
+ # more complicated scaling and rotation.
+ # this is done partly to have the use of great circle distance make sense
+ # by applying the scaling _after_ finding the distance.
+ #
+ if (length(theta) > 1) {
+ stop("theta as a vector or matrix has been deprecated, use the V argument")
+ }
+ #
+ # following now treats V as a full matrix for scaling and rotation.
+ #
+ # try to catch incorrect combination of great circle distance and V
+ if (Distance == "rdist.earth" & !is.null(V)) {
+ stop("V not supported with great circle distance")
+ }
+ if (!is.null(V)) {
+ if (theta != 1) {
+ stop("can't specify both theta and V!")
+ }
+ x1 <- x1 %*% t(solve(V))
+ x2 <- x2 %*% t(solve(V))
+ }
+ #
+ # locations are now scaled and rotated correctly
+ # now apply covariance function to pairwise distance matrix, or multiply
+ # by C vector or just find marginal variance
+ # this if block finds the cross covariance matrix
+ if (is.na(C[1]) && !marginal) {
+ #
+ # if distMat is supplied, evaluate covariance for upper triangular part only
+ #
+ if(is.na(distMat[1])) {
+ # distMat not supplied so must compute it along with covariance matrix
+ # note overall scaling by theta (which is just the range in the isotropic case)
+ if(is.null(x2))
+ distMat <- do.call(Distance, c(list(x1), Dist.args))
+ else
+ distMat <- do.call(Distance, c(list(x1=x1, x2=x2), Dist.args))
+
+ }
+
+ #
+ # now convert distance matrix to covariance matrix
+ #
+ if(inherits(distMat, "dist")) {
+ #
+ # distMat is in compact form, so evaluate covariance over all distMat and convert to matrix form
+
+ diagVal = do.call(Covariance, c(list(d=0), cov.args))
+
+ if(onlyUpper)
+ return(compactToMat(do.call(Covariance, c(list(d=distMat*(1/theta)), cov.args)), diagVal))
+ else
+ # if onlyUpper==FALSE, also set lower triangle of covariance matrix
+ return(compactToMat(do.call(Covariance, c(list(d=distMat*(1/theta)), cov.args)), diagVal, lower.tri=TRUE))
+ }
+ else {
+ # distMat is a full matrix
+ return(do.call(Covariance, c(list(d = distMat/theta), cov.args)))
+ }
+ }
+ # or multiply cross covariance by C
+ # as coded below this is not particularly efficient of memory
+ else if(!is.na(C[1])) {
+ if(onlyUpper) {
+ #the onlyUpper option is not compatible with the C option
+ onlyUpper = FALSE
+ }
+
+ if(is.null(x2))
+ bigD <- do.call(Distance, c(list(x1, x1), Dist.args))
+ else
+ bigD <- do.call(Distance, c(list(x1=x1, x2=x2), Dist.args))
+
+ if (derivative == 0) {
+ return(do.call(Covariance, c(list(d = bigD*(1/theta)), cov.args)) %*% C)
+ }
+ else {
+ # find partial derivatives
+ tempW <- do.call(Covariance, c(list(d = bigD*(1/theta)),
+ cov.args, derivative = derivative))
+ # loop over dimensions and knock out each partial accumulate these in
+ # in temp
+ temp <- matrix(NA, ncol = d, nrow = n1)
+ for (kd in 1:d) {
+ # Be careful if the distance (tempD) is close to zero.
+ # Note that the x1 and x2 are in transformed ( V inverse) scale
+ sM <- ifelse(bigD == 0, 0, tempW * outer(x1[, kd], x2[, kd], "-")/(theta * bigD))
+ # accumulate the new partial
+ temp[, kd] <- sM %*% C
+ }
+ # transform back to original coordinates.
+ if (!is.null(V)) {
+ temp <- temp %*% t(solve(V))
+ }
+ return(temp)
+ }
+ }
+ # or find marginal variance and return a vector.
+ else if (marginal) {
+ sigma2 <- do.call(Covariance, c(list(d = 0), cov.args))
+ return(rep(sigma2, nrow(x1)))
+ }
+
+ # should not get here based on sequence of conditional if statements above.
+}
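The core of `stationary.cov` is: scale the pairwise distances by the range `theta`, then apply the named `Covariance` function elementwise. A tiny Python illustration of that pattern with the exponential model (a sketch under the assumption of 1-D isotropic points; the R function handles matrices, tapers, and great circle distance):

```python
import math

def exponential_cov(d, theta=1.0):
    """Exponential covariance at distance d with range theta: exp(-d / theta)."""
    return math.exp(-d / theta)

def cov_matrix(pts, theta=1.0):
    """Covariance matrix of 1-D points: pairwise distances, then the kernel."""
    return [[exponential_cov(abs(a - b), theta) for b in pts] for a in pts]
```

The marginal-variance branch in the R code is the same kernel evaluated at distance zero, which is why it reduces to a constant vector.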
diff --git a/R/stationary.image.cov.R b/R/stationary.image.cov.R
new file mode 100644
index 0000000..0d83f73
--- /dev/null
+++ b/R/stationary.image.cov.R
@@ -0,0 +1,96 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+stationary.image.cov <- function(ind1, ind2, Y, cov.obj = NULL,
+ setup = FALSE, grid, M = NULL, N = NULL, cov.function="stationary.cov",delta=NULL, cov.args=NULL, ...) {
+ #
+ # if cov object is missing then create
+ # basically need to enlarge domain and find the FFT of the
+ # covariance
+ #
+ cov.args<-c( cov.args, list(...))
+ if (is.null(cov.obj)) {
+ dx <- grid$x[2] - grid$x[1]
+ dy <- grid$y[2] - grid$y[1]
+ m <- length(grid$x)
+ n <- length(grid$y)
+ #
+ # determine size of padding
+ # default is twice the domain, which then yields exact results
+ # delta indicates that the covariance is zero beyond a distance delta
+ # so using a smaller grid than twice the domain will still give exact results.
+ if(!is.null(delta)){
+ M<- ceiling(m + 2*delta/dx)
+ N<- ceiling(n + 2*delta/dy)
+ }
+ if (is.null(M))
+ M <- (2 * m)
+ if (is.null(N))
+ N <- (2 * n)
+ xg <- make.surface.grid(list((1:M) * dx, (1:N) * dy))
+ center <- matrix(c((dx * M)/2, (dy * N)/2), nrow = 1,
+ ncol = 2)
+ #
+ # here is where the actual covariance form is used
+ # note passed arguments from call for parameters etc.
+ #
+ out<- do.call(cov.function, c(cov.args, list(x1 = xg, x2 = center)))
+ # check if this is a sparse result and if so expand to full size
+ if( inherits( out, "spam")){
+ out <- spam2full(out)
+ }
+ # coerce to a matrix (image)
+ out<- matrix( c(out), nrow = M, ncol = N)
+ temp <- matrix(0, nrow = M, ncol = N)
+ #
+ # a simple way to normalize. This could be avoided by
+ # translating image from the center ...
+ #
+ temp[M/2, N/2] <- 1
+ wght <- fft(out)/(fft(temp) * M * N)
+ #
+ # wght is the discrete FFT for the covariance suitable for fast
+ # multiplication by convolution.
+ #
+ cov.obj <- list(m = m, n = n, grid = grid, N = N, M = M,
+ wght = wght, call = match.call())
+ if (setup) {
+ return(cov.obj)
+ }
+ }
+ temp <- matrix(0, nrow = cov.obj$M, ncol = cov.obj$N)
+ if (missing(ind1)) {
+ temp[1:cov.obj$m, 1:cov.obj$n] <- Y
+ Re(fft(fft(temp) * cov.obj$wght, inverse = TRUE)[1:cov.obj$m,
+ 1:cov.obj$n])
+ }
+ else {
+ if (missing(ind2)) {
+ temp[ind1] <- Y
+ }
+ else {
+ temp[ind2] <- Y
+ }
+ #
+ # as promised this is a single clean step
+ #
+ Re(fft(fft(temp) * cov.obj$wght, inverse = TRUE)[ind1])
+ }
+}
diff --git a/R/stationary.taper.cov.R b/R/stationary.taper.cov.R
new file mode 100644
index 0000000..facb27d
--- /dev/null
+++ b/R/stationary.taper.cov.R
@@ -0,0 +1,144 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"stationary.taper.cov" <- function(x1, x2=NULL, Covariance = "Exponential",
+ Taper = "Wendland", Dist.args = NULL, Taper.args = NULL,
+ theta = 1, V = NULL, C = NA, marginal = FALSE, spam.format = TRUE,
+ verbose = FALSE, ...) {
+ # get covariance function arguments from call
+ Cov.args <- list(...)
+ # coerce x1 and x2 to matrices
+ if (is.data.frame(x1))
+ x1 <- as.matrix(x1)
+ if (!is.matrix(x1))
+ x1 <- matrix(c(x1), ncol = 1)
+ if (is.null(x2))
+ x2 <- x1
+ if (is.data.frame(x2))
+ x2 <- as.matrix(x2)
+ if (!is.matrix(x2))
+ x2 <- matrix(c(x2), ncol = 1)
+ d <- ncol(x1)
+ n1 <- nrow(x1)
+ n2 <- nrow(x2)
+ # Default taper arguments that are particular to the Wendland.
+ # Make sure dimension argument is added.
+ if (Taper == "Wendland") {
+ if (is.null(Taper.args)) {
+ Taper.args <- list(theta = 1, k = 2, dimension = ncol(x1))
+ }
+ if (is.null(Taper.args$dimension)) {
+ Taper.args$dimension <- ncol(x1)
+ }
+ }
+ #
+ # Add in general defaults for taper arguments if not Wendland
+ # theta = 1.0 is the default range for the taper.
+ if (is.null(Taper.args)) {
+ Taper.args <- list(theta = 1)
+ }
+ #
+ # separate out a single scalar transformation and a
+ # more complicated scaling and rotation.
+ # this is done partly to have the use of great circle distance make sense
+ # by applying the scaling _after_ finding the distance.
+ #
+ # flag for great circle distance
+ great.circle <- ifelse(is.null(Dist.args$method), FALSE,
+ Dist.args$method == "greatcircle")
+ # check form of theta
+ if (length(theta) > 1) {
+ stop("theta as a matrix has been deprecated, use the V argument")
+ }
+ #
+ # following now treats V as a full matrix for scaling and rotation.
+ #
+ if (!is.null(V)) {
+ # try to catch error of mixing great circle distance with a
+ # linear scaling of coordinates.
+ if (theta != 1) {
+ stop("can't specify both theta and V!")
+ }
+ if (great.circle) {
+ stop("Cannot mix great circle distance\nwith general scaling (V argument or vector of theta's)")
+ }
+ x1 <- x1 %*% t(solve(V))
+ x2 <- x2 %*% t(solve(V))
+ }
+ #
+ # locations are now scaled and rotated correctly
+ # copy taper range
+ if (great.circle) {
+ # set the delta cutoff to be in scale of angular latitude.
+ # figure out if scale is in miles or kilometers
+ miles <- ifelse(is.null(Dist.args$miles), TRUE, Dist.args$miles)
+ delta <- (180/pi) * Taper.args$theta/ifelse(miles, 3963.34,
+ 6378.388)
+ }
+ else {
+ delta <- Taper.args$theta
+ }
+ if (length(delta) > 1) {
+ stop("taper range must be a scalar")
+ }
+ #NOTE tapering is applied to the _scaled_ locations.
+ # now apply covariance function to pairwise distance matrix, or multiply
+ # by C vector or just find marginal variance
+ if (!marginal) {
+ # find nearest neighbor distances based on the taper threshold.
+ # This is hardwired to 'nearest.dist' function from spam.
+ # note that delta is taken from the taper range not theta or V
+ sM <- do.call("nearest.dist", c(list(x1, x2, delta = delta,
+ upper = NULL), Dist.args))
+ # sM@entries are the pairwise distances up to the taper range.
+ # apply covariance and taper to these.
+ # note rescaling by theta and taper ranges.
+ sM@entries <- do.call(Covariance, c(list(d = sM@entries/theta),
+ Cov.args)) * do.call(Taper, c(list(d = sM@entries),
+ Taper.args))
+ # if verbose print out each component separately
+ if (verbose) {
+ print(sM@entries/theta)
+ print(do.call(Covariance, c(list(d = sM@entries/theta),
+ Cov.args)))
+ print(do.call(Taper, c(list(d = sM@entries), Taper.args)))
+ }
+ if (is.na(C[1])) {
+ # decide whether to return sM in spam sparse form or as a full matrix
+ if (spam.format) {
+ return(sM)
+ }
+ else {
+ return(as.matrix(sM))
+ }
+ }
+ else {
+ # other option is to sparse multiply cross covariance by C
+ return(sM %*% C)
+ }
+ }
+ else {
+ # find marginal variance and return a vector.
+ sigma2 <- do.call(Covariance, c(list(d = 0), Cov.args)) *
+ do.call(Taper, c(list(d = 0), Taper.args))
+ return(rep(sigma2, nrow(x1)))
+ }
+ # should not get here!
+}
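The point of stationary.taper.cov is that multiplying a covariance by a compactly supported taper zeroes every entry past the taper range, so the result can be held in sparse (spam) form. A dense base R sketch of the same idea, without spam; the function names and parameter values are illustrative, not from the package:

```r
# Covariance tapering in miniature: exponential covariance times a
# Wendland-style compactly supported taper. Entries at distances >= delta
# are exactly zero, which is what makes sparse storage pay off.
expCov   <- function(d, theta = 1) exp(-d / theta)
wendland <- function(d, delta = 1) pmax(1 - d / delta, 0)^2  # k = 0 style taper
x <- seq(0, 1, length.out = 6)
D <- abs(outer(x, x, "-"))                    # pairwise distances
K <- expCov(D, theta = 0.5) * wendland(D, delta = 0.4)
```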
diff --git a/R/stats.R b/R/stats.R
new file mode 100644
index 0000000..5b9c63b
--- /dev/null
+++ b/R/stats.R
@@ -0,0 +1,47 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"stats" <- function(x, by) {
+ if (!missing(by)) {
+ x <- cat.to.list(c(x), by)
+ }
+ if (!is.list(x) & !is.matrix(x))
+ x <- matrix(x, ncol = 1)
+ if (is.list(x)) {
+ ncol <- length(x)
+ out <- matrix(NA, ncol = ncol, nrow = length(describe()))
+ dimnames(out) <- list(describe(), names(x))
+ for (j in (1:ncol)) {
+ if (is.numeric(x[[j]])) {
+ out[, j] <- describe(x[[j]])
+ }
+ }
+ return(out)
+ }
+ if (is.matrix(x)) {
+ nc <- ncol(x)
+ out <- matrix(NA, ncol = nc, nrow = length(describe()))
+ dimnames(out) <- list(describe(), dimnames(x)[[2]])
+ for (j in (1:nc)) {
+ out[, j] <- describe(x[, j])
+ }
+ return(out)
+ }
+}
diff --git a/R/stats.bin.R b/R/stats.bin.R
new file mode 100644
index 0000000..84d6548
--- /dev/null
+++ b/R/stats.bin.R
@@ -0,0 +1,36 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"stats.bin" <- function(x, y, N = 10, breaks = NULL) {
+ out <- list()
+ if (is.null(breaks)) {
+ breaks <- pretty(x, N)
+ }
+ NBIN <- length(breaks) - 1
+ centers <- (breaks[1:NBIN] + breaks[2:(NBIN + 1)])/2
+ test <- describe()
+ obj <- matrix(NA, ncol = NBIN, nrow = length(test))
+ dimnames(obj) <- list(test, format(1:NBIN))
+ obj[, 1] <- describe(y[x <= breaks[2] & x >= breaks[1]])
+ for (k in 2:NBIN) {
+ obj[, k] <- describe(y[x <= breaks[k + 1] & x > breaks[k]])
+ }
+ list(centers = centers, breaks = breaks, stats = obj)
+}
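A minimal base R version of what stats.bin computes: cut x into bins at the break points and summarize y within each bin (just the mean and count here; fields' describe() reports a fuller set of statistics per bin):

```r
# Bin y by x and summarize per bin, mirroring stats.bin's layout.
set.seed(1)
x <- runif(50)
y <- x^2 + rnorm(50, sd = 0.1)
breaks  <- pretty(x, 5)
centers <- (head(breaks, -1) + tail(breaks, -1)) / 2  # bin midpoints
bins    <- cut(x, breaks, include.lowest = TRUE)
binMean <- tapply(y, bins, mean)
binN    <- tapply(y, bins, length)
```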
diff --git a/R/summary.Krig.R b/R/summary.Krig.R
new file mode 100644
index 0000000..f7a19a7
--- /dev/null
+++ b/R/summary.Krig.R
@@ -0,0 +1,50 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"summary.Krig" <- function(object, digits = 4, ...) {
+ x <- object
+ # lambda est may not be available if lambda has been supplied by user.
+ if (!is.na(x$lambda.est[1])) {
+ l.est <- x$lambda.est
+ }
+ else {
+ l.est <- NA
+ }
+ summary <- list(call = x$call, num.observation = length(x$residuals),
+ enp = x$eff.df, nt = x$nt, df.drift = sum(x$ind.drift),
+ res.quantile = quantile(x$residuals, seq(0, 1, 0.25)),
+ shat.MLE = x$shat.MLE, shat.GCV = x$shat.GCV, rhohat = x$rhohat,
+ m = x$m, lambda = x$lambda, cost = x$cost, rho = x$rho,
+ sigma2 = x$sigma2, num.uniq = length(x$yM), knot.model = x$knot.model,
+ np = x$np, method = x$method, lambda.est = l.est, shat.pure.error = x$shat.pure.error,
+ args = x$args)
+ class(summary) <- "summary.Krig"
+ summary$covariance <- cor(x$fitted.values * sqrt(x$weights),
+ (x$y) * sqrt(x$weights))^2
+ hold <- (sum((x$y - mean(x$y))^2) - sum(x$residuals^2))/(sum((x$y -
+ mean(x$y))^2))
+ summary$adjr2 <- 1 - ((length(x$residuals) - 1)/(length(x$residuals) -
+ x$eff.df)) * (1 - hold)
+ summary$digits <- digits
+ summary$cov.function <- as.character(x$cov.function.name)
+ summary$correlation.model <- x$correlation.model
+ summary$sum.gcv.lambda <- summaryGCV.Krig(x, x$lambda)
+ summary
+}
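The adjusted R^2 computed above replaces the usual parameter count with the fit's effective degrees of freedom (x$eff.df). The formula in isolation, with made-up data and an assumed effDf value (neither comes from the source):

```r
# Adjusted R^2 with effective degrees of freedom standing in for the
# parameter count; data values and effDf are illustrative.
y      <- c(1, 3, 2, 5, 4)
fitted <- c(1.2, 2.8, 2.1, 4.7, 4.2)
res    <- y - fitted
n      <- length(y)
R2     <- 1 - sum(res^2) / sum((y - mean(y))^2)
effDf  <- 2                      # assumed effective degrees of freedom
adjR2  <- 1 - ((n - 1) / (n - effDf)) * (1 - R2)
```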
diff --git a/R/summary.ncdf.R b/R/summary.ncdf.R
new file mode 100644
index 0000000..cb95790
--- /dev/null
+++ b/R/summary.ncdf.R
@@ -0,0 +1,53 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+
+"summary.ncdf" <- function (object, ...)
+{
+ tempList<- NULL
+ varNames<- NULL
+#
+ cat("DIMENSIONS", fill=TRUE)
+ for (i in names(object$dim) ) {
+ vname = i
+ ndims = length(object$dim[[i]]$vals)
+ cat(vname, " has size", ndims, fill=TRUE)
+ }
+ cat(fill=TRUE)
+ cat("VARIABLES", fill=TRUE)
+ for (i in 1:object$nvars) {
+ vname = object$var[[i]]$name
+ ndims = object$var[[i]]$ndims
+ dimstring = paste(vname, "( variable ",i ,
+ ") has shape")
+ dimTemp<- NULL
+ for (j in 1:ndims) {
+ dimTemp<- c( dimTemp, object$var[[i]]$dim[[j]]$len)
+ }
+ temp<- ( dimTemp)
+ varNames<- c(varNames, vname)
+ tempList<- c( tempList, list(dimTemp))
+ if (is.null(dimTemp)) {
+ dimTemp <- NA
+ }
+ cat(i, ":", vname, "has size ", dimTemp, sep = " ", fill = TRUE)
+ }
+ names(tempList) <- varNames
+ invisible( tempList)
+}
diff --git a/R/summary.spatialDesign.R b/R/summary.spatialDesign.R
new file mode 100644
index 0000000..37022cd
--- /dev/null
+++ b/R/summary.spatialDesign.R
@@ -0,0 +1,26 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"summary.spatialDesign" <- function(object, digits = 4,
+ ...) {
+ x <- object
+ class(x) <- ("summarySpatialDesign")
+ x
+}
diff --git a/R/summary.spatialProcess.R b/R/summary.spatialProcess.R
new file mode 100644
index 0000000..713879e
--- /dev/null
+++ b/R/summary.spatialProcess.R
@@ -0,0 +1,23 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"summary.spatialProcess" <- function(object, digits = 4, ...) {
+ print.spatialProcess( object, digits=digits, ...)
+}
diff --git a/R/summary.sreg.R b/R/summary.sreg.R
new file mode 100644
index 0000000..40c8590
--- /dev/null
+++ b/R/summary.sreg.R
@@ -0,0 +1,42 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"summary.sreg" <- function(object, digits = 4, ...) {
+ x <- object
+ if (length(x$lambda) > 1) {
+ stop("Can't do a summary on an object with a grid of smoothing\nparameters")
+ }
+ summary <- list(call = x$call, num.observation = length(x$residuals),
+ enp = x$trace, nt = x$nt, res.quantile = quantile(x$residuals,
+ seq(0, 1, 0.25)), shat.GCV = x$shat.GCV, m = x$m,
+ lambda = x$lambda, cost = x$cost, num.uniq = length(x$y),
+ np = x$np, method = x$method, lambda.est = x$lambda.est[!is.na(x$lambda.est[,
+ 1]), ], shat.pure.error = x$shat.pure.error)
+ class(summary) <- "summary.sreg"
+ summary$covariance <- cor(x$fitted.values * sqrt(x$weights),
+ (x$y) * sqrt(x$weights))^2
+ hold <- (sum((x$y - mean(x$y))^2) - sum(x$residuals^2))/(sum((x$y -
+ mean(x$y))^2))
+ summary$adjr2 <- 1 - ((length(x$residuals) - 1)/(length(x$residuals) -
+ x$eff.df)) * (1 - hold)
+ summary$digits <- digits
+ summary$sum.gcv.lambda <- summaryGCV.sreg(x, x$lambda)
+ summary
+}
diff --git a/R/summaryGCV.Krig.R b/R/summaryGCV.Krig.R
new file mode 100644
index 0000000..f45f566
--- /dev/null
+++ b/R/summaryGCV.Krig.R
@@ -0,0 +1,57 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"summaryGCV.Krig" <- function(object, lambda, cost = 1,
+ verbose = FALSE, offset = 0, y = NULL, ...) {
+ out <- object
+ nt <- out$nt
+ np <- out$np
+ N <- out$N
+ D <- out$matrices$D
+ if (is.null(y)) {
+ u <- out$matrices$u
+ shat.pure.error <- out$shat.pure.error
+ pure.ss <- out$pure.ss
+ }
+ else {
+ out2 <- Krig.coef(out, y)
+ u <- out2$u
+ shat.pure.error <- out2$shat.pure.error
+ pure.ss <- out2$pure.ss
+ }
+ info <- list(matrices = list(D = D, u = u), N = N, nt = nt,
+ cost = cost, pure.ss = pure.ss, shat.pure.error = shat.pure.error,
+ offset = offset)
+ if (verbose) {
+ print(info)
+ }
+ lambda.est <- rep(NA, 6)
+ names(lambda.est) <- c("lambda", "trA", "GCV", "GCV.one",
+ "GCV.model", "shat")
+ lambda.est[1] <- lambda
+ lambda.est[2] <- Krig.ftrace(lambda, D)
+ lambda.est[3] <- Krig.fgcv(lambda, info)
+ lambda.est[4] <- Krig.fgcv.one(lambda, info)
+ if (!is.na(shat.pure.error)) {
+ lambda.est[5] <- Krig.fgcv.model(lambda, info)
+ }
+ lambda.est[6] <- sqrt(Krig.fs2hat(lambda, info))
+ lambda.est
+}
diff --git a/R/summaryGCV.sreg.R b/R/summaryGCV.sreg.R
new file mode 100644
index 0000000..266dbf0
--- /dev/null
+++ b/R/summaryGCV.sreg.R
@@ -0,0 +1,46 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"summaryGCV.sreg" <- function(object, lambda, cost = 1,
+ nstep.cv = 20, offset = 0, verbose = TRUE, ...) {
+ out <- object
+ shat.pure.error <- out$shat.pure.error
+ pure.ss <- out$pure.ss
+ nt <- 2
+ np <- out$np
+ N <- out$N
+ out$cost <- cost
+ out$offset <- offset
+ lambda.est <- rep(NA, 6)
+ names(lambda.est) <- c("lambda", "trA", "GCV", "GCV.one",
+ "GCV.model", "shat")
+ #
+ # fill in stuff for this lambda
+ lambda.est[1] <- lambda
+ temp <- sreg.fit(lambda, out)
+ lambda.est[2] <- temp$trace
+ lambda.est[3] <- temp$gcv
+ lambda.est[4] <- temp$gcv.one
+ if (!is.na(shat.pure.error)) {
+ lambda.est[5] <- temp$gcv.model
+ }
+ lambda.est[6] <- temp$shat
+ lambda.est
+}
diff --git a/R/supportsArg.R b/R/supportsArg.R
new file mode 100644
index 0000000..56e0a8a
--- /dev/null
+++ b/R/supportsArg.R
@@ -0,0 +1,30 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+supportsArg = function(fun=stationary.cov, arg) {
+
+ if(is.null(fun)) {
+ #set fun to the default covariance function if not specified
+ fun = stationary.cov
+ }
+
+ argNames = names(as.list(args(fun)))
+ return(any(argNames == arg))
+}
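The introspection trick supportsArg relies on: args() returns a function with the same formal arguments, and as.list() exposes their names. A self-contained sketch against a base R function (hasArg2 is an illustrative name, chosen to avoid base R's unrelated methods::hasArg):

```r
# Check whether a function accepts a named argument, as supportsArg does.
hasArg2 <- function(fun, arg) {
  argNames <- names(as.list(args(fun)))
  any(argNames == arg)
}
res1 <- hasArg2(rnorm, "sd")      # rnorm(n, mean, sd) has "sd"
res2 <- hasArg2(rnorm, "theta")   # no "theta" formal
```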
diff --git a/R/surface.family.R b/R/surface.family.R
new file mode 100644
index 0000000..a6d4f18
--- /dev/null
+++ b/R/surface.family.R
@@ -0,0 +1,90 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"surface.Krig" <- function(object, grid.list = NULL,
+ extrap = FALSE, graphics.reset = NULL, xlab = NULL, ylab = NULL,
+ main = NULL, zlab = NULL, zlim = NULL, levels = NULL, type = "C",
+ nx = 80, ny = 80, ...) {
+ ## modified so that you can give main, and ylab as arguments
+ ## in ... and have them passed correctly
+ out.p <- predictSurface(object, grid.list = grid.list, extrap = extrap,
+ nx = nx, ny = ny, drop.Z = TRUE)
+ if (!is.null(ylab))
+ out.p$ylab <- ylab
+ if (!is.null(xlab))
+ out.p$xlab <- xlab
+ if (!is.null(zlab))
+ out.p$zlab <- zlab
+ if (!is.null(main))
+ out.p$main <- main
+ ## else
+ ## out.p$main <- NULL
+ plot.surface(out.p, type = type, graphics.reset = graphics.reset,
+ levels = levels, zlim = zlim, ...)
+ invisible()
+}
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"surface" <- function(object, ...) {
+ UseMethod("surface")
+}
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"surface.default" <- function(object, ...) {
+ plot.surface(object, ...)
+}
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+"surface.mKrig" <- function(object, grid.list = NULL,
+ extrap = FALSE, graphics.reset = NULL, xlab = NULL, ylab = NULL,
+ main = NULL, zlab = NULL, zlim = NULL, levels = NULL, type = "C",
+ nx = 80, ny = 80, ...) {
+ ## modified so that you can give main, and ylab as arguments
+ ## in ... and have them passed correctly
+ out.p <- predictSurface(object, grid.list = grid.list, extrap = extrap,
+ nx = nx, ny = ny, drop.Z = TRUE)
+ if (!is.null(ylab))
+ out.p$ylab <- ylab
+ if (!is.null(xlab))
+ out.p$xlab <- xlab
+ if (!is.null(zlab))
+ out.p$zlab <- zlab
+ if (!is.null(main))
+ out.p$main <- main
+ ## else
+ ## out.p$main <- NULL
+ plot.surface(out.p, type = type, graphics.reset = graphics.reset,
+ levels = levels, zlim = zlim, ...)
+ invisible()
+}
+# fields, Tools for spatial data
+# Copyright 2015, Institute for Mathematics Applied Geosciences
+# University Corporation for Atmospheric Research
+# Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+#"surface.surface" <- function(object, ...) {
+# #
+# plot.surface(object, ...)
+#}
diff --git a/R/test.for.zero.R b/R/test.for.zero.R
new file mode 100644
index 0000000..c6bb59a
--- /dev/null
+++ b/R/test.for.zero.R
@@ -0,0 +1,40 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+test.for.zero <- function(xtest, xtrue, tol = 1e-08,
+ relative = TRUE, tag = NULL) {
+ denom <- ifelse(relative, mean(abs(c(xtrue))), 1)
+ test.value <- sum(abs(c(xtest) - c(xtrue)))/denom
+ if (!is.null(tag)) {
+ cat("Testing: ", tag, fill = TRUE)
+ }
+ if (test.value < tol) {
+ cat("PASSED test at tolerance ", tol, fill = TRUE)
+ }
+ else {
+
+ cat("FAILED test value = ", test.value, " at tolerance ",
+ tol)
+# generate an "error" to signal failed test
+ if (exists("test.for.zero.flag")) {
+ stop()
+ }
+ }
+}
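The comparison test.for.zero performs, written out: total absolute difference, scaled (when relative = TRUE) by the mean magnitude of the reference values, then checked against the tolerance:

```r
# test.for.zero's pass/fail criterion in isolation.
xtrue <- c(1, 2, 3)
xtest <- xtrue + 1e-10          # a tiny perturbation
testValue <- sum(abs(xtest - xtrue)) / mean(abs(xtrue))
passed <- testValue < 1e-08
```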
diff --git a/R/tim.colors.R b/R/tim.colors.R
new file mode 100644
index 0000000..f00dbe4
--- /dev/null
+++ b/R/tim.colors.R
@@ -0,0 +1,55 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"tim.colors" <- function(n = 64, alpha = 1) {
+ # Tim's original 64-color definition:
+ orig <- c("#00008F", "#00009F", "#0000AF", "#0000BF", "#0000CF",
+ "#0000DF", "#0000EF", "#0000FF", "#0010FF", "#0020FF",
+ "#0030FF", "#0040FF", "#0050FF", "#0060FF", "#0070FF",
+ "#0080FF", "#008FFF", "#009FFF", "#00AFFF", "#00BFFF",
+ "#00CFFF", "#00DFFF", "#00EFFF", "#00FFFF", "#10FFEF",
+ "#20FFDF", "#30FFCF", "#40FFBF", "#50FFAF", "#60FF9F",
+ "#70FF8F", "#80FF80", "#8FFF70", "#9FFF60", "#AFFF50",
+ "#BFFF40", "#CFFF30", "#DFFF20", "#EFFF10", "#FFFF00",
+ "#FFEF00", "#FFDF00", "#FFCF00", "#FFBF00", "#FFAF00",
+ "#FF9F00", "#FF8F00", "#FF8000", "#FF7000", "#FF6000",
+ "#FF5000", "#FF4000", "#FF3000", "#FF2000", "#FF1000",
+ "#FF0000", "#EF0000", "#DF0000", "#CF0000", "#BF0000",
+ "#AF0000", "#9F0000", "#8F0000", "#800000")
+ if (n == 64 & alpha == 1)
+ return(orig)
+ rgb.tim <- t(col2rgb(orig))
+ temp <- matrix(NA, ncol = 3, nrow = n)
+ x <- seq(0, 1, , 64)
+ xg <- seq(0, 1, , n)
+ for (k in 1:3) {
+ hold <- splint(x, rgb.tim[, k], xg)
+ hold[hold < 0] <- 0
+ hold[hold > 255] <- 255
+ temp[, k] <- round(hold)
+ }
+ if (alpha == 1) {
+ rgb(temp[, 1], temp[, 2], temp[, 3], maxColorValue = 255)
+ }
+ else {
+ rgb(temp[, 1], temp[, 2], temp[, 3], maxColorValue = 255,
+ alpha = alpha)
+ }
+}
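How tim.colors resamples its 64-entry table to n colors, sketched with base R's spline() standing in for fields' splint(): interpolate each RGB channel, clamp to [0, 255], round, and rebuild hex strings. The short color table here is illustrative, not the full 64-color original:

```r
# Resample a color table to n colors by channel-wise spline interpolation.
orig <- c("#00008F", "#0000FF", "#00FFFF", "#80FF80",
          "#FFFF00", "#FF0000", "#800000")
rgbTab <- t(col2rgb(orig))
n  <- 15
x  <- seq(0, 1, length.out = nrow(rgbTab))
xg <- seq(0, 1, length.out = n)
channels <- sapply(1:3, function(k) {
  hold <- spline(x, rgbTab[, k], xout = xg)$y
  round(pmin(pmax(hold, 0), 255))       # clamp overshoot, as the code above does
})
pal <- rgb(channels[, 1], channels[, 2], channels[, 3], maxColorValue = 255)
```

The clamping matters: splines can overshoot the data, and RGB values outside [0, 255] would make rgb() fail.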
diff --git a/R/transformx.R b/R/transformx.R
new file mode 100644
index 0000000..3ccf151
--- /dev/null
+++ b/R/transformx.R
@@ -0,0 +1,49 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"transformx" <- function(x, scale.type = "unit.sd",
+ x.center, x.scale) {
+ if (scale.type == "unscaled") {
+ x.center <- rep(0, ncol(x))
+ x.scale <- rep(1, ncol(x))
+ }
+ else if (scale.type == "unit.sd") {
+ x.center <- apply(x, 2, mean)
+ x.scale <- sqrt(apply(x, 2, var))
+ x <- scale(x)
+ }
+ else if (scale.type == "range") {
+ x.center <- apply(x, 2, min)
+ x.scale <- apply(x, 2, max) - apply(x, 2, min)
+ x <- scale(x, center = x.center, scale = x.scale)
+ }
+ else if (scale.type == "user") {
+ if (missing(x.center))
+ x.center <- apply(x, 2, mean)
+ if (missing(x.scale) || length(x.scale) != ncol(x))
+    stop("x.scale must be a vector of length ncol(x)")
+ x <- scale(x, center = x.center, scale = x.scale)
+ }
+    else stop("scale.type must be one of: unit.sd, range, user, unscaled")
+ attr(x, "x.center") <- x.center
+ attr(x, "x.scale") <- x.scale
+ attr(x, "x.scale.type") <- scale.type
+ x
+}
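transformx stores the centering and scaling it applied as attributes on the returned matrix, so the transformation can be undone later (for example by unscale, added elsewhere in this commit). A short usage sketch:

```r
set.seed(1)
x <- cbind(runif(10, 0, 100), runif(10, -5, 5))
xs <- transformx(x, scale.type = "range")
range(xs[, 1])            # each column now lies in [0, 1]
attr(xs, "x.center")      # per-column minima used for centering
attr(xs, "x.scale")       # per-column ranges used for scaling
```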
diff --git a/R/unrollZGrid.R b/R/unrollZGrid.R
new file mode 100644
index 0000000..3ab752d
--- /dev/null
+++ b/R/unrollZGrid.R
@@ -0,0 +1,42 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+unrollZGrid<- function( grid.list, ZGrid){
+ if( is.null(ZGrid)){
+ return(ZGrid)
+ }
+ if( is.list( ZGrid) ){
+ if( any(grid.list[[1]] != ZGrid[[1]]) |any(grid.list[[2]] != ZGrid[[2]]) ){
+ stop("grid list does not match grid for covariates")
+ }
+# wipe out the x and y components of ZGrid because grid.list will be used
+ ZGrid<- ZGrid$z
+ }
+# check dimensions
+ Zdim<- dim( ZGrid)
+ nx<- length( grid.list[[1]])
+ ny<- length( grid.list[[2]])
+ if( (Zdim[1] != nx) | (Zdim[2] != ny) ){
+ stop( "Dimension of ZGrid does not match dimensions of location grid list.")
+ }
+# reshape as a matrix where rows index locations.
+# Note that this works whether Zdim[3] exists or not!
+ return( matrix( c(ZGrid), nrow= Zdim[1]*Zdim[2] ))
+ }
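unrollZGrid reshapes gridded covariates into the matrix form expected by the fitting functions: rows index grid locations (in the same column-major order as c(ZGrid)) and columns index covariates. A small sketch:

```r
gridList <- list(x = 1:3, y = 1:4)
# two covariates given on the 3 x 4 grid as a 3-d array
ZGrid <- array(rnorm(3 * 4 * 2), c(3, 4, 2))
Z <- unrollZGrid(gridList, ZGrid)
dim(Z)   # 12 rows (one per grid point) by 2 columns (one per covariate)
```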
diff --git a/R/unscale.R b/R/unscale.R
new file mode 100644
index 0000000..e3dce50
--- /dev/null
+++ b/R/unscale.R
@@ -0,0 +1,25 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"unscale" <- function(x, x.center, x.scale) {
+ x <- scale(x, center = FALSE, scale = 1/x.scale)
+ x <- scale(x, center = -x.center, scale = FALSE)
+ x
+}
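unscale inverts the affine transformation applied by scale: it first multiplies each column by x.scale and then adds back x.center. A quick check of the round trip:

```r
x  <- matrix(rnorm(20), 10, 2)
xc <- colMeans(x)
xs <- apply(x, 2, sd)
xScaled <- scale(x, center = xc, scale = xs)
xBack   <- unscale(xScaled, xc, xs)
max(abs(xBack - x))   # differences are at floating point noise level
```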
diff --git a/R/vgram.family.R b/R/vgram.family.R
new file mode 100644
index 0000000..7dfee8f
--- /dev/null
+++ b/R/vgram.family.R
@@ -0,0 +1,322 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"vgram" <- function(loc, y, id = NULL, d = NULL, lon.lat = FALSE,
+ dmax = NULL, N = NULL, breaks = NULL,
+ type=c("variogram", "covariogram", "correlogram")) {
+
+ type=match.arg(type)
+
+ # coerce to matrix
+ y <- cbind(y)
+ # if nearest neighbor indices are missing create all possible pairs.
+ if (is.null(id)) {
+ n <- nrow(loc)
+ is = rep(1:n, n)
+ js = rep(1:n, rep(n, n))
+ ind <- is > js
+ id <- cbind(is, js)[ind, ]
+ }
+
+ # if distances are missing calculate these
+ if (is.null(d)) {
+ loc <- as.matrix(loc)
+ if (lon.lat) {
+ d <- rdist.earth.vec(loc[id[,1],], loc[id[,2],]) #we want result in miles, not meters
+ }
+ else {
+ d <- rdist.vec(loc[id[,1],], loc[id[,2],])
+ }
+ }
+
+ # normalize columns to create correlogram, if necessary
+ #
+ if(type == "correlogram") {
+ sigma = apply(y, 2, sd, na.rm=TRUE)
+ y = sweep(y, 2, (1/sigma), FUN="*")
+ }
+
+ # center the columns by their mean and get row means if y is a matrix
+ #
+ colMeans <- apply(y, 2, mean, na.rm=TRUE)
+ yCntr = sweep(y, 2, colMeans)
+ y1Cntr = yCntr[id[,1],]
+ y2Cntr = yCntr[id[,2],]
+ if(type == "variogram") {
+ vg <- 0.5 * rowMeans(cbind((y1Cntr - y2Cntr)^2),
+ na.rm = TRUE)
+ }
+ else {
+ vg <- rowMeans(cbind(y1Cntr * y2Cntr),
+ na.rm = TRUE)
+ }
+ #
+ #information for returned object
+ #
+ call <- match.call()
+ if (is.null(dmax)) {
+ dmax <- max(d)
+ }
+ od <- order(d)
+ d <- d[od]
+ vg <- vg[od]
+ ind <- d <= dmax & !is.na(vg)
+
+ ## add a binned variogram if breaks are supplied
+ out <- list(d = d[ind], vgram = vg[ind], call = call, type=type)
+ if (!is.null(breaks) | !is.null(N)) {
+ out <- c(out, stats.bin(d[ind], vg[ind], N = N, breaks = breaks))
+ }
+ class(out) = c("vgram", class(out))
+ out
+}
+
+#calculating cross-covariogram and cross-correlogram (cross-covariance and
+#cross-correlation)
+crossCoVGram = function(loc1, loc2, y1, y2, id = NULL, d = NULL, lon.lat = FALSE,
+ dmax = NULL, N = NULL, breaks = NULL,
+ type=c("cross-covariogram", "cross-correlogram")) {
+
+ type=match.arg(type)
+
+ # coerce to matrix
+ y1 <- cbind(y1)
+ y2 <- cbind(y2)
+
+ # if nearest neighbor indices are missing create all possible pairs.
+ if (is.null(id)) {
+ n1 <- nrow(loc1)
+ n2 <- nrow(loc2)
+ id <- cbind(rep(1:n1, n2), rep(1:n2, rep(n1, n2)))
+ }
+
+ # if distances are missing calculate these
+ if (is.null(d)) {
+ loc1 <- as.matrix(loc1)
+ loc2 <- as.matrix(loc2)
+ if (lon.lat) {
+ d <- rdist.earth.vec(loc1[id[,1],], loc2[id[,2],]) #we want result in miles, not meters
+ }
+ else {
+ d <- rdist.vec(loc1[id[,1],], loc2[id[,2],])
+ }
+ }
+ #
+    # center the columns of y1 and y2 by their means; row means are taken below if they are matrices
+ #
+ colMeans1 <- apply(y1, 2, mean, na.rm=TRUE)
+ colMeans2 <- apply(y2, 2, mean, na.rm=TRUE)
+ y1Cntr = sweep(data.matrix(y1), 2, colMeans1) # subtract the column means
+ y2Cntr = sweep(data.matrix(y2), 2, colMeans2) # subtract the column means
+ #
+ # normalize to create cross-correlogram, if necessary
+ #
+ if(type == "cross-correlogram") {
+ sigma1 = apply(y1Cntr, 2, sd, na.rm=TRUE)
+ sigma2 = apply(y2Cntr, 2, sd, na.rm=TRUE)
+ y1Cntr = sweep(y1Cntr, 2, 1/sigma1, FUN="*")
+ y2Cntr = sweep(y2Cntr, 2, 1/sigma2, FUN="*")
+ }
+ #
+ # calculate covariance for the given points
+ #
+ y1Cntr = y1Cntr[id[,1],]
+ y2Cntr = y2Cntr[id[,2],]
+ vg <- rowMeans(cbind(y1Cntr*y2Cntr), na.rm = TRUE)
+ #
+ #information for returned object
+ #
+ call <- match.call()
+ if (is.null(dmax)) {
+ dmax <- max(d)
+ }
+ od <- order(d)
+ d <- d[od]
+ vg <- vg[od]
+ ind <- d <= dmax & !is.na(vg)
+ ## add a binned variogram if breaks are supplied
+ out <- list(d = d[ind], vgram = vg[ind], call = call, type=type)
+ if (!is.null(breaks) | !is.null(N)) {
+ out <- c(out, stats.bin(d[ind], vg[ind], N = N, breaks = breaks))
+ }
+ class(out) = c("vgram", class(out))
+ out
+}
+
+#plot only the line of the empirical variogram, where the y coordinates of the line are
+#at the means of the bins
+plot.vgram = function(x, N=10, breaks = pretty(x$d, N, eps.correct = 1), add=FALSE, ...) {
+ otherArgs = list(...)
+ type=x$type
+
+ #set y axis label if not set by user
+ if(is.null(otherArgs$ylab)) {
+ if(type=="variogram")
+ otherArgs$ylab = "sqrt(Variance)"
+ else if(type == "covariogram" || type=="cross-covariogram")
+ otherArgs$ylab = "Covariance"
+ else if(type == "correlogram" || type=="cross-correlogram")
+ otherArgs$ylab = "Correlation"
+ else
+ stop("vgram 'type' argument must be either 'variogram', 'covariogram', 'correlogram', 'cross-covariogram', or 'cross-correlogram'")
+ }
+
+ #set x axis label if not set by user
+ if(is.null(otherArgs$xlab))
+ otherArgs$xlab = "Distance"
+
+ #set plot title if not set by user
+ if(is.null(otherArgs$main)) {
+ if(type=="variogram")
+ otherArgs$main = "Empirical Variogram"
+ else if(type=="covariogram")
+ otherArgs$main = "Empirical Covariogram"
+ else if(type=="correlogram")
+ otherArgs$main = "Empirical Correlogram"
+ else if(type=="cross-covariogram")
+ otherArgs$main = "Empirical Cross-Covariogram"
+ else if(type=="cross-correlogram")
+ otherArgs$main = "Empirical Cross-Correlogram"
+ else
+ stop("vgram 'type' argument must be either 'variogram', 'covariogram', 'correlogram', 'cross-covariogram', or 'cross-correlogram'")
+ }
+
+ #set ylim for correlogram if not set by user
+ if(is.null(otherArgs$ylim)) {
+ if(type == "correlogram" || type=="cross-correlogram")
+ otherArgs$ylim = c(-1, 1)
+ }
+
+ #set line type if not set by user
+ if(is.null(otherArgs$type))
+ otherArgs$type = "o"
+
+ #get bin data
+ dat = getVGMean(x, breaks=breaks)
+
+ #get bin centers versus bin means
+ centers = dat$centers
+ ys = dat$ys
+
+ #remove NAs
+ notNas = !is.na(ys)
+ centers = centers[notNas]
+ ys = ys[notNas]
+
+ #plot
+ if(!add)
+ do.call("plot", c(list(centers, ys), otherArgs))
+ else
+ do.call("lines", c(list(centers, ys), otherArgs))
+}
+
+"boxplotVGram" = function(x, N=10, breaks = pretty(x$d, N, eps.correct = 1), plot=TRUE,
+ plot.args=NULL, ...) {
+ dists = x$d
+ type=x$type
+ if(type == "variogram")
+ y = sqrt(x$vgram)
+ else
+ y = x$vgram
+ otherArgs = list(...)
+
+ #set y axis label if not set by user
+ if(is.null(otherArgs$ylab)) {
+ if(type=="variogram")
+ otherArgs$ylab = "sqrt(Variance)"
+ else if(type == "covariogram" || type=="cross-covariogram")
+ otherArgs$ylab = "Covariance"
+ else if(type == "correlogram" || type=="cross-correlogram")
+ otherArgs$ylab = "Correlation"
+ else
+ stop("vgram 'type' argument must be either 'variogram', 'covariogram', 'correlogram', 'cross-covariogram', or 'cross-correlogram'")
+ }
+
+ #set x axis label if not set by user
+ if(is.null(otherArgs$xlab))
+ otherArgs$xlab = "Distance"
+
+ #set plot title if not set by user
+ if(is.null(otherArgs$main)) {
+ if(type=="variogram")
+ otherArgs$main = "Empirical Variogram"
+ else if(type=="covariogram")
+ otherArgs$main = "Empirical Covariogram"
+ else if(type=="correlogram")
+ otherArgs$main = "Empirical Correlogram"
+ else if(type=="cross-covariogram")
+ otherArgs$main = "Empirical Cross-Covariogram"
+ else if(type=="cross-correlogram")
+ otherArgs$main = "Empirical Cross-Correlogram"
+ else
+ stop("vgram 'type' argument must be either 'variogram', 'covariogram', 'correlogram', 'cross-covariogram', or 'cross-correlogram'")
+ }
+
+ #set ylim for correlogram if not set by user
+ if(is.null(otherArgs$ylim)) {
+ if(type == "correlogram" || type=="cross-correlogram")
+ otherArgs$ylim = c(-1, 1)
+ }
+
+ #make boxplot
+ bplot = do.call("bplot.xy", c(list(x=dists, y=y, N=N, breaks=breaks, plot=plot), otherArgs))
+
+ #return bplot.xy statistics if plot==FALSE
+ if(!plot)
+ return(bplot)
+
+ #plot bin means with plot parameters given in plot.args (with defaults to look pretty)
+ plot.args$x=x
+ plot.args$add=TRUE
+ plot.args$breaks=breaks
+ if(is.null(plot.args$col))
+ plot.args$col = "red"
+ if(is.null(plot.args$type))
+ plot.args$type = "p"
+ do.call("plot.vgram", plot.args)
+}
+
+# Returns the variogram bin centers and means
+getVGMean = function(x, N = 10, breaks = pretty(x$d, N, eps.correct = 1))
+{
+ # Can calculate mean or other statistical functions of the values in the bins
+ VGstat = function(VG, minD=-Inf, maxD=Inf, statFun="mean", ...) {
+ ind = (VG$d > minD) & (VG$d < maxD)
+ do.call(statFun, c(list(VG$vgram[ind]), list(...)))
+ }
+
+ #helper function to get mean from any single bin
+ meansFromBreak = function(breakBounds = c(-Inf, Inf)) {
+ VGstat(x, minD=breakBounds[1], maxD=breakBounds[2], na.rm=TRUE)
+ }
+
+ #apply helper function to all bins
+ lowBreaks = breaks
+ highBreaks = c(breaks[2:length(breaks)], Inf)
+ breakBounds = cbind(lowBreaks, highBreaks)
+ centers = apply(breakBounds, 1, mean, na.rm=TRUE)
+ ys = apply(breakBounds, 1, meansFromBreak)
+
+ #take square root if variogram
+ if(x$type == "variogram")
+ ys=sqrt(ys)
+
+ return(list(centers=centers, ys=ys, type=x$type))
+}
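A typical workflow for the functions in this file, assuming the fields package is attached: compute the empirical variogram, then summarize it either with binned boxplots or with a line through the bin means. A sketch:

```r
library(fields)
set.seed(1)
loc <- matrix(runif(200), 100, 2)   # 100 locations in the unit square
y   <- rnorm(100)
vg  <- vgram(loc, y, N = 10)        # empirical variogram with 10 bins
boxplotVGram(vg, N = 10)            # boxplots per bin, bin means overlaid
plot(vg, N = 10)                    # just the line through the bin means
```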
diff --git a/R/vgram.matrix.R b/R/vgram.matrix.R
new file mode 100644
index 0000000..3040daa
--- /dev/null
+++ b/R/vgram.matrix.R
@@ -0,0 +1,86 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+vgram.matrix <- function(dat, R = 5, dx = 1, dy = 1) {
+
+# a useful function for matching shifted indices
+# (the kind of internal function Dorit does not like!)
+ SI <- function(ntemp, delta) {
+ n1 <- 1:ntemp
+ n2 <- n1 + delta
+ good <- (n2 >= 1) & (n2 <= ntemp)
+ return(cbind(n1[good], n2[good]))
+ }
+#
+ M<- nrow(dat)
+ N<- ncol( dat)
+ # create all possible separations for a grid up to a distance R
+ m <- min( c(round(R/dx),M) )
+ n <- min( c(round(R/dy),N) )
+ #
+    # all relevant combinations: note that negative increments are
+ # needed as well as positive ones
+ ind <- rbind(as.matrix(expand.grid(0, 1:n)), as.matrix(expand.grid(1:m,
+ 0)), as.matrix(expand.grid(c(-(m:1), 1:m), 1:n)))
+ # distances - only take those within a distance R.
+ # and trim everything to this bound
+ d <- sqrt((dx * ind[, 1])^2 + (dy * ind[, 2])^2)
+ good <- (d > 0) & (d <= R)
+ ind <- ind[good, ]
+ d <- d[good]
+ ind <- ind[order(d), ]
+ d <- sort(d)
+ #
+ # arrays to hold statistics
+ nbin <- nrow(ind)
+ holdVG <- rep(NA, nbin)
+ holdRVG <- rep(NA, nbin)
+ holdN <- rep(0, nbin)
+ # loop over each separation
+ for (k in 1:nbin) {
+ # indices for original and shifted image that are within array bounds
+ MM <- SI(M, ind[k, 1])
+ NN <- SI(N, ind[k, 2])
+ if( length(MM)>0 & length(NN)>0){
+ # find differences
+ BigDiff <- (dat[MM[, 1], NN[, 1] ] - dat[MM[, 2], NN[,2] ] )
+ # standard and the Cressie robust version.
+ # modified to handle NAs
+ holdVG[k] <- mean(0.5 * (BigDiff)^2, na.rm = TRUE)
+ holdRVG[k] <- mean(abs(BigDiff)^0.5, na.rm = TRUE)
+ holdN[k] <- sum( !is.na(BigDiff) )
+ }
+ }
+ # finish robust estimate Cressie (1993) formula 2.4.12
+    holdRVG <- 0.5 * (holdRVG^4)/(0.457 + 0.494/holdN)
+    # collapse the variogram to common distances: this is what one would
+    # look at under the stationary case.
+ top <- tapply(holdVG * holdN, d, FUN = "sum")
+ bottom <- tapply(holdN, d, FUN = "sum")
+ dcollapsed <- as.numeric(names(bottom))
+ vgram <- top/bottom
+ # wipe out pesky row names
+ dimnames(vgram) <- NULL
+ out <- list(vgram = vgram, d = dcollapsed, ind = ind, d.full = d,
+ vgram.full = holdVG, robust.vgram = holdRVG, N = holdN,
+ dx = dx, dy = dy)
+ class(out) <- "vgram.matrix"
+ return(out)
+}
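vgram.matrix works directly on a regular grid of values, returning both the full variogram indexed by separation vector and the version collapsed onto common distances. A small sketch:

```r
set.seed(2)
dat  <- matrix(rnorm(25 * 25), 25, 25)   # white noise on a 25 x 25 grid
look <- vgram.matrix(dat, R = 5)
# collapsed variogram: one value per distinct separation distance
cbind(d = look$d, vgram = look$vgram)
# robust (Cressie) version indexed by the full set of separations
head(cbind(look$d.full, look$robust.vgram))
```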
diff --git a/R/wendland.family.R b/R/wendland.family.R
new file mode 100644
index 0000000..e567d98
--- /dev/null
+++ b/R/wendland.family.R
@@ -0,0 +1,368 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+Wendland2.2 <- function(d, theta = 1) {
+ # Cari's test function with explicit form for d=2 k=2
+ # taper range is 1.0
+ d <- d/theta
+ if (any(d < 0))
+ stop("d must be nonnegative")
+ return(((1 - d)^6 * (35 * d^2 + 18 * d + 3))/3 * (d < 1))
+}
+#
+# the monster
+#
+"wendland.cov" <- function(x1, x2=NULL, theta = 1, V = NULL,
+ k = 2, C = NA, marginal = FALSE, Dist.args = list(method = "euclidean"),
+ spam.format = TRUE, derivative = 0, verbose = FALSE) {
+ #
+ # if marginal variance is needed
+ # this is a quick return
+ if (marginal) {
+ return(rep(1, nrow(x1)))
+ }
+    # the rest of the possibilities require some computing
+    # set up the two matrices of locations
+ #
+ if (!is.matrix(x1)) {
+ x1 <- as.matrix(x1)
+ }
+ if( is.null( x2) ) {
+ x2<- x1}
+ if (!is.matrix(x2) ) {
+ x2 <- as.matrix(x2)
+ }
+ d <- ncol(x1)
+ n1 <- nrow(x1)
+ n2 <- nrow(x2)
+ # logical to figure out if this is great circle distance or not
+    # great circle distance needs to be handled specially because of how things are scaled.
+ great.circle <- Dist.args$method == "greatcircle"
+ # derivatives are tricky for great circle and other distances and have not been implemented ...
+ if (Dist.args$method != "euclidean" & derivative > 0) {
+ stop("derivatives not supported for this distance metric")
+ }
+ # catch bad theta format
+ if (length(theta) > 1) {
+        stop("theta as a matrix or vector has been deprecated")
+ }
+ # catch using V with great circle
+ if (!is.null(V) & great.circle) {
+ stop("V is not implemented with great circle distance")
+ }
+ if (!is.null(V)) {
+ if (theta != 1) {
+ stop("can't specify both theta and V!")
+ }
+ x1 <- x1 %*% t(solve(V))
+ x2 <- x2 %*% t(solve(V))
+ }
+ # if great circle distance set the delta cutoff to be in scale of angular latitude.
+ # also figure out if scale is in miles or kilometers
+ if (great.circle) {
+ miles <- ifelse(is.null(Dist.args$miles), TRUE, Dist.args$miles)
+ delta <- (180/pi) * theta/ifelse(miles, 3963.34, 6378.388)
+ }
+ else {
+ delta <- theta
+ }
+ if (verbose) {
+ print(delta)
+ }
+    # once scaling is done the taper is applied with a default range of 1.0
+    # find the polynomial coefficients that define
+    # the Wendland function on [0,1]
+    # d is the dimension and k is the order
+ # first find sparse matrix of Euclidean distances
+ # ||x1-x2||**2 (or any other distance that may be specified by
+ # the method component in Dist.args
+ # any distance beyond delta is set to zero -- anticipating the
+ # tapering to zero by the Wendland.
+ #
+ sM <- do.call("nearest.dist", c(list(x1, x2, delta = delta,
+ upper = NULL), Dist.args))
+ # scale distances by theta
+ # note: if V is passed then theta==1 and all the scaling should be done with the V matrix.
+ # there are two possible actions listed below:
+ # find Wendland cross covariance matrix
+ # return either in sparse or matrix format
+ if (is.na(C[1])) {
+ sM at entries <- Wendland(sM at entries/theta, k = k, dimension = d)
+ if (!spam.format) {
+ return(as.matrix(sM))
+ }
+ else {
+ return(sM)
+ }
+ }
+ else {
+ #
+ # multiply cross covariance matrix by the matrix C where
+ # columns are usually the 'c' coefficients
+ # note multiply happens in spam format
+ #
+ if (derivative == 0) {
+ sM at entries <- Wendland(sM at entries/theta, k = k, dimension = d)
+ return(sM %*% C)
+ }
+ else {
+ # otherwise evaluate partial derivatives with respect to x1
+ # big mess of code and an explicit for loop!
+ # this only is for euclidean distance
+ if (is.matrix(C)) {
+ if (ncol(C) > 1) {
+ stop("C should be a vector")
+ }
+ }
+ # loop over dimensions and accumulate partial derivative matrix.
+ tempD <- sM at entries
+ tempW <- Wendland(tempD/theta, k = k, dimension = d,
+ derivative = derivative)
+ # loop over dimensions and knock out each partial accumulate these in
+ # in temp
+ temp <- matrix(NA, ncol = d, nrow = n1)
+ # Create rowindices vector
+ sMrowindices <- rep(1:n1, diff(sM at rowpointers))
+ for (kd in 1:d) {
+ #
+ # Be careful if the distance (tempD) is close to zero.
+ # Note that the x1 and x2 are in transformed ( V inverse) scale
+ #
+ #
+ sM at entries <- ifelse(tempD == 0, 0, (tempW *
+ (x1[sMrowindices, kd] - x2[sM at colindices, kd])/(theta *
+ tempD)))
+ #
+                # accumulate the new partial derivative
+ temp[, kd] <- sM %*% C
+ }
+ # transform back to original coordinates.
+ if (!is.null(V)) {
+ temp <- temp %*% t(solve(V))
+ }
+ return(temp)
+ }
+ }
+ # should not get here!
+}
+############## basic evaluation of Wendland and its derivatives.
+###########################
+# n: Wendland interpolation matrix is positive definite on R^n, i.e. n is
+# the dimension of the locations.
+# k: Wendland function is 2k times continuously
+# differentiable.
+# The proofs can be found in the work of Wendland(1995).
+# H. Wendland. Piecewise polynomial, positive definite and compactly supported radial
+# functions of minimal degree. AICM 4(1995), pp 389-396.
+#########################################
+## top level function:
+Wendland = function(d, theta = 1, dimension, k, derivative = 0,
+ phi = NA) {
+ if (!is.na(phi)) {
+        stop("phi argument has been deprecated")
+ }
+ if (any(d < 0)) {
+ stop("d must be nonnegative")
+ }
+ # find scaling so that function at zero is 1.
+ scale.constant <- wendland.eval(0, n = dimension, k, derivative = 0)
+ # adjust by theta
+ if (derivative > 0) {
+ scale.constant <- scale.constant * (theta^(derivative))
+ }
+ # scale distances by theta.
+ if( theta!=1){
+ d <- d/theta}
+    # at this point the distances d should be scaled so that the
+    # covariance is zero beyond 1
+ if( (k==2)& (dimension==2) & (derivative==0)){
+ ((1 - d)^6 * (35 * d^2 + 18 * d + 3))/3 * (d < 1)}
+ else{
+ ifelse(d < 1, wendland.eval(d, n = dimension, k, derivative)/scale.constant,
+ 0)
+ }
+ }
+
+####################
+# [M] = wm(n, k)
+# Compute the matrix of coefficients in Wendland(1995)
+# Input:
+#\tn: Wendland interpolation matrix is positive definite on R^n
+# \tk: Wendland function is 2k times continuously differentiable
+####################
+Wendland.beta = function(n, k) {
+ l = floor(n/2) + k + 1
+ M = matrix(0, nrow = k + 1, ncol = k + 1)
+ #
+ # top corner is 1
+ #
+ M[1, 1] = 1
+ #
+ # Compute across the columns and down the rows, filling out upper triangle of M (including diagonal). The indexing is done from 0, thus we have to adjust by +1 when accessing our matrix element.
+ #
+    #
+    # for k == 0 the 1 x 1 matrix M = (1) is already complete, so the
+    # column recursion below is skipped
+    #
+    if (k > 0) {
+ for (col in 0:(k - 1)) {
+ #
+ # Filling out the col+1 column
+ #
+ # As a special case, we need a different formula for the top row
+ #
+ row = 0
+ beta = 0
+ for (m in 0:col) {
+ beta = beta + M[m + 1, col + 1] * fields.pochdown(m +
+ 1, m - row + 1)/fields.pochup(l + 2 * col -
+ m + 1, m - row + 2)
+ }
+ M[row + 1, col + 2] = beta
+ #
+ # Now do the rest of rows
+ #
+ for (row in 1:(col + 1)) {
+ beta = 0
+ for (m in (row - 1):col) {
+ beta = beta + M[m + 1, col + 1] * fields.pochdown(m +
+ 1, m - row + 1)/fields.pochup(l + 2 * col -
+ m + 1, m - row + 2)
+ }
+ M[row + 1, col + 2] = beta
+ }
+ }
+ }
+ M
+}
+########################################
+# [phi] = wendland.eval(r, n, k, derivative).
+# Compute the compactly supported basis function in Wendland(1995).
+# Input:
+#\tr: a scalar representing the distance between locations. r should be scaled into [0,1] beforehand.
+# \tn: Wendland interpolation matrix is positive definite on R^n. Or, we could say n is the dimension of the locations.
+# \tk: Wendland function is 2k times continuously differentiable.
+#\tderivative: the derivative of wendland function.
+# Output:
+#\tphi: a scalar evaluated by the Wendland function at distance r.
+# example:
+#\tr = 0.5
+#\tphi = wendland.eval(r, 2, 1,derivative = 1 )
+# The proofs can be found in the work of Wendland(1995).
+# H. Wendland. Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree. AICM 4(1995), pp 389-396.
+#########################################
+wendland.eval = function(r, n, k, derivative = 0) {
+ #
+ # check if the distances are between [0,1]
+ #
+ beta = Wendland.beta(n, k)
+ l = floor(n/2) + k + 1
+ if (derivative == 0) {
+ #
+ # first evaluate outside for loop with m =0
+ phi = beta[1, k + 1] * (1 - r)^(l + 2 * k)
+ # now accumulate terms for other m values up to k
+ for (m in 1:k) {
+ phi = phi + beta[m + 1, k + 1] * r^m * (1 - r)^(l +
+ 2 * k - m)
+ }
+ }
+ else {
+        # evaluate the derivative; note the use of symbolic differentiation.
+ f.my = expression((1 - r)^(l + 2 * k))
+ f.deriv = fields.D(f.my, "r", order = derivative)
+ f.eval = eval(f.deriv)
+ phi = beta[1, k + 1] * f.eval
+ for (m in 1:k) {
+ f.my = expression(r^m * (1 - r)^(l + 2 * k - m))
+ f.deriv = fields.D(f.my, "r", order = derivative)
+ f.eval = eval(f.deriv)
+ phi = phi + beta[m + 1, k + 1] * f.eval
+ }
+ }
+ phi
+}
+#######################
+# [n] = fields.pochup(q, k)
+# Calculate the Pochhammer symbol for rising factorial q(q+1)(q+2)...(q+k-1)
+#######################
+fields.pochup = function(q, k) {
+    # rising factorial q(q+1)...(q+k-1); the empty product for k == 0 is 1
+    if (k == 0) {
+        return(1)
+    }
+    n = q
+    if (k > 1) {
+        for (j in 1:(k - 1)) {
+            n = n * (q + j)
+        }
+    }
+    n
+}
+#########################
+# [n] = fields.pochdown(q, k)
+# Calculate the Pochhammer symbol for falling factorial q(q-1)(q-2)...(q-k+1)
+#########################
+fields.pochdown = function(q, k) {
+    # falling factorial q(q-1)...(q-k+1); the empty product for k == 0 is 1
+    if (k == 0) {
+        return(1)
+    }
+    n = q
+    if (k > 1) {
+        for (j in 1:(k - 1)) {
+            n = n * (q - j)
+        }
+    }
+    n
+}
+#############################
+# fields.D(f,name = x,order = n) forms the n-th derivative of function f with respect to the variable x
+################################
+fields.D = function(f, name, order = 1) {
+ if (order < 1) {
+ stop("'order' must be >= 1")
+ }
+    if (order == 1) {
+        D(f, name)
+    }
+ else {
+ fields.D(D(f, name), name, order - 1)
+ }
+}
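The general evaluator can be checked against the closed form: for dimension 2 and k = 2 the Wendland function has the explicit expression coded in Wendland2.2, and the two should agree. A sketch of that consistency check, using only functions defined in this file:

```r
d  <- seq(0, 1.2, length.out = 7)
w1 <- Wendland(d, theta = 1, dimension = 2, k = 2)
w2 <- Wendland2.2(d, theta = 1)
max(abs(w1 - w2))                          # expect essentially zero
# compact support: exactly zero beyond the taper range
Wendland(1.5, theta = 1, dimension = 2, k = 2)
```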
diff --git a/R/wendland.image.cov.R b/R/wendland.image.cov.R
new file mode 100644
index 0000000..3b09079
--- /dev/null
+++ b/R/wendland.image.cov.R
@@ -0,0 +1,99 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+wendland.image.cov <- function(ind1, ind2, Y, cov.obj = NULL,
+ setup = FALSE, grid, M = NULL, N = NULL, cov.args=NULL, ...) {
+ #
+ # if cov object is missing then create
+ # basically need to enlarge domain and find the FFT of the
+ # covariance
+ #
+ cov.args<-c( cov.args, list(...))
+ delta<- cov.args$theta
+ if (is.null(cov.obj)) {
+ dx <- grid$x[2] - grid$x[1]
+ dy <- grid$y[2] - grid$y[1]
+ m <- length(grid$x)
+ n <- length(grid$y)
+ #
+ # determine size of padding
+    # default is twice the domain and will then yield exact results
+ # delta indicates that covariance is zero beyond a distance delta
+ # so using a smaller grid than twice domain will still give exact results.
+ if(!is.null(delta)){
+ M<- ceiling(m + 2*delta/dx)
+ N<- ceiling(n + 2*delta/dy)
+ }
+ if (is.null(M))
+ M <- (2 * m)
+ if (is.null(N))
+ N <- (2 * n)
+# make sure M and N are even.
+# (not sure what it means if this is not the case!)
+ if( M%%2 !=0) {
+ M<- M+1}
+ if( N%%2 !=0) {
+ N<- N+1}
+#
+# print( c(m,n, M,N))
+ xGrid<- (1:M) * dx - (dx * M)/2
+ yGrid<- (1:N) * dy - (dy * N)/2
+ bigDistance<-
+ sqrt(
+ matrix( xGrid^2, M,N, byrow=FALSE) + matrix( yGrid^2, M,N, byrow=TRUE))
+ # cat("Wendland", fill=TRUE)
+ out<- Wendland( bigDistance / cov.args$theta, dimension=2, k=cov.args$k )
+ temp <- matrix(0, nrow = M, ncol = N)
+ #
+ # a simple way to normalize. This could be avoided by
+ # translating image from the center ...
+ #
+ temp[M/2, N/2] <- 1
+ wght <- fft(out)/(fft(temp) * M * N)
+
+ #
+ # wght is the discrete FFT for the covariance suitable for fast
+ # multiplication by convolution.
+ #
+ cov.obj <- list(m = m, n = n, grid = grid, N = N, M = M,
+ wght = wght, call = match.call())
+ if (setup) {
+ return(cov.obj)
+ }
+ }
+ temp <- matrix(0, nrow = cov.obj$M, ncol = cov.obj$N)
+ if (missing(ind1)) {
+ temp[1:cov.obj$m, 1:cov.obj$n] <- Y
+ Re(fft(fft(temp) * cov.obj$wght, inverse = TRUE)[1:cov.obj$m,
+ 1:cov.obj$n])
+ }
+ else {
+ if (missing(ind2)) {
+ temp[ind1] <- Y
+ }
+ else {
+ temp[ind2] <- Y
+ }
+ #
+ # as promised this is a single clean step
+ #
+ Re(fft(fft(temp) * cov.obj$wght, inverse = TRUE)[ind1])
+ }
+}
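The comments in wendland.image.cov describe multiplying by a stationary covariance through the FFT: on a periodic grid the covariance matrix is circulant, so the matrix-vector product becomes a pointwise product in frequency space. A minimal 1-D sketch of that identity in Python/NumPy (illustrative only, not fields code; the Gaussian-shaped `c` below is a stand-in for the Wendland function):

```python
import numpy as np

# A stationary covariance on a circular 1-D grid is circulant, so
# C @ y is a circular convolution: two FFTs replace the explicit
# N x N matrix product. wendland.image.cov uses the 2-D analogue.
N = 64
d = np.minimum(np.arange(N), N - np.arange(N))   # circular distances
c = np.exp(-(d / 8.0) ** 2)                      # first column of C

C = np.array([np.roll(c, k) for k in range(N)]).T  # full circulant matrix
y = np.random.default_rng(0).normal(size=N)

direct = C @ y
via_fft = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(y)))

assert np.allclose(direct, via_fft)
```

The same identity, applied in 2-D with padding to avoid wrap-around, is what the cached `cov.obj$wght` array implements above.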
diff --git a/R/which.max.matrix.R b/R/which.max.matrix.R
new file mode 100644
index 0000000..a1da863
--- /dev/null
+++ b/R/which.max.matrix.R
@@ -0,0 +1,39 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+which.max.matrix <- function(z) {
+ if (!is.matrix(z)) {
+ stop("Not a matrix")
+ }
+ m <- nrow(z)
+ n <- ncol(z)
 + # which.max ignores NAs, so missing values are handled automatically
+ ind <- which.max(z)
+ iy <- trunc((ind - 1)/m) + 1
+ ix <- ind - (iy - 1) * m
+ return(cbind(ix, iy))
+}
+
+
+which.max.image <- function(obj) {
+ ind.z <- which.max.matrix(obj$z)
+ return(list(x = obj$x[ind.z[, 1]], y = obj$y[ind.z[, 2]],
+ z = obj$z[ind.z], ind = ind.z))
+}
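which.max.matrix converts the column-major linear index returned by which.max back into (row, column) coordinates. The same arithmetic in a 0-based Python sketch (illustrative names, not fields code):

```python
import numpy as np

def which_max_matrix(z):
    # Column-major (R-style) linear index of the maximum, then the
    # arithmetic from fields::which.max.matrix shifted to 0-based:
    #   col = ind // m ;  row = ind - col * m
    m = z.shape[0]
    ind = int(np.argmax(z.flatten(order="F")))
    col = ind // m
    row = ind - col * m
    return row, col

z = np.array([[1.0, 9.0],
              [3.0, 2.0]])
assert which_max_matrix(z) == (0, 1)   # maximum 9.0 sits at row 0, col 1
```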
diff --git a/R/world.R b/R/world.R
new file mode 100644
index 0000000..1d7409f
--- /dev/null
+++ b/R/world.R
@@ -0,0 +1,39 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"world" <- function(...) {
+ map("world", ...)
+ invisible()
+}
+world.color <- function(...) {
+    cat("world.color has been deprecated. Please use fill options in\nthe world/map function.", 
+ fill = TRUE)
+}
+world.land <- function(...) {
+    cat("world.land has been deprecated. Please use fill options in\nthe world/map function.", 
+ fill = TRUE)
+}
+
+in.land.grid <- function(...) {
+    cat("in.land.grid has been deprecated. Please refer to fields 6.7.1 or earlier to access this function.", 
+ fill = TRUE)
+}
+
+
diff --git a/R/xline.R b/R/xline.R
new file mode 100644
index 0000000..961e623
--- /dev/null
+++ b/R/xline.R
@@ -0,0 +1,23 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"xline" <- function(x, ...) {
+ abline(v = x, ...)
+}
diff --git a/R/yline.R b/R/yline.R
new file mode 100644
index 0000000..b871600
--- /dev/null
+++ b/R/yline.R
@@ -0,0 +1,23 @@
+# fields is a package for analysis of spatial data written for
+# the R software environment .
+# Copyright (C) 2017
+# University Corporation for Atmospheric Research (UCAR)
+# Contact: Douglas Nychka, nychka at ucar.edu,
+# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with the R software environment if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+# or see http://www.r-project.org/Licenses/GPL-2
+"yline" <- function(y, ...) {
+ abline(h = y, ...)
+}
diff --git a/data/CO2.rda b/data/CO2.rda
new file mode 100644
index 0000000..949589d
Binary files /dev/null and b/data/CO2.rda differ
diff --git a/data/COmonthlyMet.rda b/data/COmonthlyMet.rda
new file mode 100644
index 0000000..fa4cb9a
Binary files /dev/null and b/data/COmonthlyMet.rda differ
diff --git a/data/NorthAmericanRainfall.rda b/data/NorthAmericanRainfall.rda
new file mode 100644
index 0000000..be5cfe0
Binary files /dev/null and b/data/NorthAmericanRainfall.rda differ
diff --git a/data/PRISMelevation.rda b/data/PRISMelevation.rda
new file mode 100644
index 0000000..4005686
Binary files /dev/null and b/data/PRISMelevation.rda differ
diff --git a/data/RCMexample.rda b/data/RCMexample.rda
new file mode 100644
index 0000000..2a94314
Binary files /dev/null and b/data/RCMexample.rda differ
diff --git a/data/RMelevation.rda b/data/RMelevation.rda
new file mode 100644
index 0000000..ad90f87
Binary files /dev/null and b/data/RMelevation.rda differ
diff --git a/data/US.dat.rda b/data/US.dat.rda
new file mode 100644
index 0000000..48393aa
Binary files /dev/null and b/data/US.dat.rda differ
diff --git a/data/WorldBankCO2.rda b/data/WorldBankCO2.rda
new file mode 100644
index 0000000..dc6015f
Binary files /dev/null and b/data/WorldBankCO2.rda differ
diff --git a/data/datalist b/data/datalist
new file mode 100644
index 0000000..5fbcd9e
--- /dev/null
+++ b/data/datalist
@@ -0,0 +1,11 @@
+CO2: CO2 CO2.true
+COmonthlyMet: CO.elev CO.id CO.loc CO.names CO.ppt CO.ppt.MAM CO.ppt.MAM.climate CO.tmax CO.tmax.MAM CO.tmax.MAM.climate CO.tmean.MAM.climate CO.tmin CO.tmin.MAM CO.tmin.MAM.climate CO.years
+PRISMelevation
+RCMexample
+RMelevation
+US.dat
+WorldBankCO2
+lennon
+ozone2
+rat.diet
+world.dat
diff --git a/data/lennon.rda b/data/lennon.rda
new file mode 100644
index 0000000..11f395c
Binary files /dev/null and b/data/lennon.rda differ
diff --git a/data/ozone2.rda b/data/ozone2.rda
new file mode 100644
index 0000000..928ae49
Binary files /dev/null and b/data/ozone2.rda differ
diff --git a/data/rat.diet.rda b/data/rat.diet.rda
new file mode 100644
index 0000000..d8a31c6
Binary files /dev/null and b/data/rat.diet.rda differ
diff --git a/data/world.dat.rda b/data/world.dat.rda
new file mode 100644
index 0000000..be48ea0
Binary files /dev/null and b/data/world.dat.rda differ
diff --git a/debian/README.source b/debian/README.source
deleted file mode 100644
index c8e5bab..0000000
--- a/debian/README.source
+++ /dev/null
@@ -1,51 +0,0 @@
-Explanation for binary files inside source package according to
- http://lists.debian.org/debian-devel/2013/09/msg00332.html
-
-Files: data/CO2.rda
-Documented: man/CO2.Rd
- Simulated global CO2 observations
-
-Files: data/COmonthlyMet.rda
-Documented: man/CO.Rd
- Monthly surface meterology for Colorado 1895-1997
-
-Files: data/lennon.rda
-Documented: man/lennon.Rd
- Gray image of John Lennon
-
-Files: data/NorthAmericanRainfall.rda
-Documented: man/NorthAmericanRainfall.Rd
- Observed North American summer precipitation from the
- historical climate network.
-
-Files: data/ozone2.rda
-Documented: man/ozone2.Rd
- Daily 8-hour ozone averages for sites in the Midwest
-
-Files: data/PRISMelevation.rda
- data/RMelevation.rda
-Documented: man/RMprecip.Rd
- Monthly total precipitation (mm) for August 1997 in the Rocky Mountain
- Region and some gridded 4km elevation data sets (m).
-
-Files: data/rat.diet.rda
-Documented: man/rat.diet.Rd
- Experiment studying an appetite supressant in rats
-
-Files: data/RCMexample.rda
-Documented: man/RCMexample.Rd
- 3-hour precipitation fields from a regional climate model
-
-Files: data/US.dat.rda
-Documented: man/US.dat.Rd
- Outline of coterminous US and states.
-
-Files: data/WorldBankCO2.rda
-Documented: man/WorldBank.Rd
- Carbon emissions and demographic covariables by country for 1999
-
-Files: data/world.dat.rda
-Documented: man/fields-internal.Rd
- Fields internal and secondary functions
-
- -- Andreas Tille <tille at debian.org> Fri, 08 Sep 2017 19:34:06 +0200
diff --git a/debian/README.test b/debian/README.test
deleted file mode 100644
index 55a9142..0000000
--- a/debian/README.test
+++ /dev/null
@@ -1,8 +0,0 @@
-Notes on how this package can be tested.
-────────────────────────────────────────
-
-To run the unit tests provided by the package you can do
-
- sh run-unit-test
-
-in this directory.
diff --git a/debian/changelog b/debian/changelog
deleted file mode 100644
index 690cb1e..0000000
--- a/debian/changelog
+++ /dev/null
@@ -1,36 +0,0 @@
-r-cran-fields (9.0-2) unstable; urgency=medium
-
- * Team upload
- * Fix autopkgtests by ignoring difference in loading required package
-
- -- Graham Inggs <ginggs at debian.org> Mon, 11 Sep 2017 09:53:46 +0000
-
-r-cran-fields (9.0-1) unstable; urgency=medium
-
- * New upstream version
- * Standards-Version: 4.1.0 (no changes needed)
- * Add README.source to document binary files
-
- -- Andreas Tille <tille at debian.org> Fri, 08 Sep 2017 19:34:06 +0200
-
-r-cran-fields (8.10-1) unstable; urgency=medium
-
- * New upstream version
-
- -- Andreas Tille <tille at debian.org> Thu, 22 Dec 2016 20:01:59 +0100
-
-r-cran-fields (8.7-1) unstable; urgency=medium
-
- * New upstream version
- * Convert to dh-r
- * Canonical homepage for CRAN
- * debhelper 10
- * d/watch: version=4
-
- -- Andreas Tille <tille at debian.org> Fri, 16 Dec 2016 08:23:51 +0100
-
-r-cran-fields (8.4-1-1) unstable; urgency=low
-
- * Initial release (closes: #828859)
-
- -- Andreas Tille <tille at debian.org> Tue, 28 Jun 2016 17:09:05 +0200
diff --git a/debian/compat b/debian/compat
deleted file mode 100644
index f599e28..0000000
--- a/debian/compat
+++ /dev/null
@@ -1 +0,0 @@
-10
diff --git a/debian/control b/debian/control
deleted file mode 100644
index f6ed4de..0000000
--- a/debian/control
+++ /dev/null
@@ -1,36 +0,0 @@
-Source: r-cran-fields
-Maintainer: Debian Med Packaging Team <debian-med-packaging at lists.alioth.debian.org>
-Uploaders: Andreas Tille <tille at debian.org>
-Section: gnu-r
-Priority: optional
-Build-Depends: debhelper (>= 10),
- dh-r,
- r-base-dev,
- r-cran-spam,
- r-cran-maps
-Standards-Version: 4.1.0
-Vcs-Browser: https://anonscm.debian.org/viewvc/debian-med/trunk/packages/R/r-cran-fields/trunk/
-Vcs-Svn: svn://anonscm.debian.org/debian-med/trunk/packages/R/r-cran-fields/trunk/
-Homepage: https://cran.r-project.org/package=fields
-
-Package: r-cran-fields
-Architecture: any
-Depends: ${shlibs:Depends},
- ${misc:Depends},
- ${R:Depends}
-Recommends: ${R:Recommends}
-Suggests: ${R:Suggests}
-Description: GNU R tools for spatial data
- For curve, surface and function fitting with an emphasis on splines,
- spatial data and spatial statistics. The major methods include cubic,
- and thin plate splines, Kriging and compact covariances for large data
- sets. The splines and Kriging methods are supported by functions that
- can determine the smoothing parameter (nugget and sill variance) and
- other covariance parameters by cross validation and also by restricted
- maximum likelihood. For Kriging there is an easy to use function that
- also estimates the correlation scale (range). A major feature is that
- any covariance function implemented in R and following a simple fields
- format can be used for spatial prediction. There are also many useful
- functions for plotting and working with spatial data as images. This
- package also contains an implementation of sparse matrix methods for
- large spatial data sets.
diff --git a/debian/copyright b/debian/copyright
deleted file mode 100644
index ef39197..0000000
--- a/debian/copyright
+++ /dev/null
@@ -1,33 +0,0 @@
-Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
-Upstream-Name: fields
-Upstream-Contact: Douglas Nychka <nychka at ucar.edu>
-Source: https://cran.r-project.org/package=fields
-
-Files: *
-Copyright: 2009-2016, Douglas Nychka, Reinhard Furrer, John Paige, Stephan Sain
- 2004-2007, Institute for Mathematics Applied Geosciences
- 2016, University Corporation for Atmospheric Research
- 2015, Institute for Mathematics Applied Geosciences
-License: GPL-2+
-
-Files: debian/*
-Copyright: 2016 Andreas Tille <tille at debian.org>
-License: GPL-2+
-
-License: GPL-2+
- This program is free software; you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation; either version 2 of the License, or
- (at your option) any later version.
- .
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
- .
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
- .
- On Debian systems, the complete text of the GNU General Public
- License can be found in `/usr/share/common-licenses/GPL-2'.
diff --git a/debian/docs b/debian/docs
deleted file mode 100644
index 3adf0d6..0000000
--- a/debian/docs
+++ /dev/null
@@ -1,3 +0,0 @@
-debian/README.test
-debian/tests/run-unit-test
-tests
diff --git a/debian/rules b/debian/rules
deleted file mode 100755
index 68d9a36..0000000
--- a/debian/rules
+++ /dev/null
@@ -1,4 +0,0 @@
-#!/usr/bin/make -f
-
-%:
- dh $@ --buildsystem R
diff --git a/debian/source/format b/debian/source/format
deleted file mode 100644
index 163aaf8..0000000
--- a/debian/source/format
+++ /dev/null
@@ -1 +0,0 @@
-3.0 (quilt)
diff --git a/debian/tests/control b/debian/tests/control
deleted file mode 100644
index 25377fc..0000000
--- a/debian/tests/control
+++ /dev/null
@@ -1,3 +0,0 @@
-Tests: run-unit-test
-Depends: @, r-cran-runit
-Restrictions: allow-stderr
diff --git a/debian/tests/run-unit-test b/debian/tests/run-unit-test
deleted file mode 100644
index e46657d..0000000
--- a/debian/tests/run-unit-test
+++ /dev/null
@@ -1,39 +0,0 @@
-#!/bin/sh -e
-
-pkg=r-cran-fields
-
-# The saved result files do contain some differences in metadata and we also
-# need to ignore version differences of R
-filter() {
- grep -v -e '^R version' \
- -e '^Copyright (C)' \
- -e '^R : Copyright 20' \
- -e '^Version 2.0' \
- -e '^Platform:' \
- -e '^Spam version .* is loaded.' \
- -e '^Loading required package:' \
- $1 | \
- sed -e '/^> *proc\.time()$/,$d'
-}
-
-if [ "$ADTTMP" = "" ] ; then
- ADTTMP=`mktemp -d /tmp/${pkg}-test.XXXXXX`
-fi
-cd $ADTTMP
-cp /usr/share/doc/${pkg}/tests/* $ADTTMP
-gunzip *.gz
-for htest in `ls *.R | sed 's/\.R$//'` ; do
- LC_ALL=C R --no-save < ${htest}.R 2>&1 | tee > ${htest}.Rout
- filter ${htest}.Rout.save > ${htest}.Rout.save_
- filter ${htest}.Rout > ${htest}.Rout_
- diff -u --ignore-all-space ${htest}.Rout.save_ ${htest}.Rout_
- if [ ! $? ] ; then
- echo "Test ${htest} failed"
- exit 1
- else
- echo "Test ${htest} passed"
- fi
-done
-rm -f $ADTTMP/*
-
-exit 0
diff --git a/debian/watch b/debian/watch
deleted file mode 100644
index 65a1998..0000000
--- a/debian/watch
+++ /dev/null
@@ -1,2 +0,0 @@
-version=4
-http://cran.r-project.org/src/contrib/fields_([-\d.]*)\.tar\.gz
diff --git a/inst/CITATION b/inst/CITATION
new file mode 100644
index 0000000..ce30e44
--- /dev/null
+++ b/inst/CITATION
@@ -0,0 +1,19 @@
+
+NOTE<- paste( "R package version", meta$Version )
+bibentry(bibtype="Misc",
+ mheader = "Please cite fields including its version and DOI as",
+ title = "fields: Tools for spatial data",
+ author = c(
+ person("Douglas Nychka"),
+ person("Reinhard Furrer"),
+ person("John Paige"),
+ person("Stephan Sain")
+ ),
+ note = NOTE,
+ organization = "University Corporation for Atmospheric Research",
+ address = "Boulder, CO, USA",
+ year = 2015,
+ url = "www.image.ucar.edu/fields",
+ doi ="10.5065/D6W957CT"
+ )
+
diff --git a/man/BD.Rd b/man/BD.Rd
new file mode 100644
index 0000000..49dbe73
--- /dev/null
+++ b/man/BD.Rd
@@ -0,0 +1,76 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{BD}
+\alias{BD}
+\title{
+ Data frame of the effect of buffer compositions on DNA strand displacement
+amplification. A 4-d regression data set with
+replication. This is a useful test data set for exercising function
+fitting methods.
+}
+\description{
+The \code{BD} data frame has 89 rows and 5 columns. There are 89 runs with
+four buffer components (KCl, MgCl2, KPO4, dNTP) systematically varied in
+a space-filling design. The response is the DNA amplification rate.
+}
+
+\format{
+This data frame contains the following columns:
+
+\describe{
+\item{KCl}{
+Buffer component.
+}
+\item{MgCl2}{
+Buffer component.
+}
+\item{KPO4}{
+Buffer component.
+}
+\item{dNTP}{
+Buffer component, deoxyribonucleotides.
+}
+\item{lnya}{
+Exponential amplification rate on a log scale, i.e. the actual amplification
+rate.
+}
+}
+}
+
+\source{
+Thanks to Perry Haaland and Michael OConnell.
+
+Becton Dickinson Research Center
+Research Triangle Park, NC
+}
+\seealso{
+Tps
+}
+\examples{
+# fitting a DNA strand
+# displacement amplification surface to various buffer compositions
+fit<- Tps(BD[,1:4],BD$lnya,scale.type="range")
+surface(fit) # plots fitted surface and contours
+}
+\keyword{datasets}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/CO.Rd b/man/CO.Rd
new file mode 100644
index 0000000..011772e
--- /dev/null
+++ b/man/CO.Rd
@@ -0,0 +1,265 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{Colorado Monthly Meteorological Data}
+\alias{COmonthlyMet}
+\alias{CO.elev}
+\alias{CO.id}
+\alias{CO.loc}
+\alias{CO.names}
+\alias{CO.ppt}
+\alias{CO.ppt.MAM}
+\alias{CO.tmax}
+\alias{CO.tmax.MAM}
+\alias{CO.tmin}
+\alias{CO.tmin.MAM}
+\alias{CO.years}
+\alias{CO.ppt.MAM.climate}
+\alias{CO.tmax.MAM.climate}
+\alias{CO.tmean.MAM.climate}
+\alias{CO.tmin.MAM.climate}
+\alias{CO.elevGrid}
+\alias{CO.Grid}
+
+\title{Monthly surface meteorology for Colorado 1895-1997}
+
+\description{
+
+Source:
+ This is a group of R data sets for monthly min/max temperatures and
+precipitation over the period 1895-1997. It is a subset extracted from
+the more extensive US data record described at
+\url{http://www.image.ucar.edu/Data/US.monthly.met}: observed monthly
+precipitation, min and max temperatures for the conterminous US,
+1895-1997. See also
+ \url{http://www.image.ucar.edu/Data/US.monthly.met/CO.shtml} for an
+online document of this Colorado subset. Temperature is in degrees C and
+precipitation is total monthly accumulation in millimeters. Note that
+minimum (maximum) monthly temperature is the mean of the daily minimum
+(maximum) temperatures.
+
+Data domain:
+
+A rectangular lon/lat region [-109.5,-101] x [36.5,41.5], larger than the
+boundary of Colorado, comprises approximately 400 stations. Although
+there are additional stations reported in this domain, stations that
+only report precipitation or only report temperatures have been
+excluded. In addition, stations that have mismatches between locations
+and elevations in the two metadata files have also been excluded. The
+net result is 367 stations that have colocated temperatures and
+precipitation.
+
+}
+
+\format{
+This group of data sets is organized with the following objects:
+\describe{
+\item{CO.info}{A data frame with columns: station id, elev, lon, lat, station name}
+
+
+\item{CO.elev}{elevation in meters}
+
+\item{CO.elevGrid}{An image object being elevation in meters on a 4 km grid covering Colorado. }
+
+\item{CO.id}{ alphanumeric station id codes}
+
+\item{CO.loc}{locations in lon/lat}
+
+\item{CO.Grid}{Just the grid.list used in the CO.elevGrid.}
+
+\item{CO.ppt CO.tmax CO.tmin}{Monthly means as three-dimensional arrays (Year, Month, Station).
+Temperature is in degrees C and precipitation is total monthly
+accumulation in millimeters.}
+
+\item{CO.ppt.MAM CO.tmax.MAM CO.tmin.MAM}{Spring seasonal means
+(March, April,May) as two dimensional arrays
+ (Year, Station).}
+
+ \item{CO.ppt.MAM.climate CO.tmax.MAM.climate CO.tmin.MAM.climate}{Spring seasonal
+(March, April, May) means by station for the period 1960-1990. If fewer than 15 years are present over this period an NA is recorded.
+No detrending or other adjustments have been made for these mean estimates.
+}
+}
+
+
+}
+
+\examples{
+
+data(COmonthlyMet)
+
+#Spatial plot of 1997 Spring average daily maximum temps
+ quilt.plot( CO.loc,CO.tmax.MAM[103,] )
+ US( add=TRUE)
+ title( "Recorded MAM max temperatures (1997)")
+
+# min and max temperatures against elevation
+
+matplot( CO.elev, cbind( CO.tmax.MAM[103,], CO.tmin.MAM[103,]),
+ pch="o", type="p",
+ col=c("red", "blue"), xlab="Elevation (m)", ylab="Temperature (C)")
+title("Recorded MAM max (red) and min (blue) temperatures 1997")
+
+#Fitting a spatial model:
+obj<- Tps(CO.loc,CO.tmax.MAM.climate, Z= CO.elev )
+good<- !is.na(CO.tmax.MAM.climate )
+out<- MLE.Matern(CO.loc[good,],CO.tmax.MAM.climate[good],
+ smoothness=1.0, Z= CO.elev[good] )
+#MLE search on range suggests Tps model
+
+
+}
+\section{Creation of data subset}{
+Here is the precise R script used to create this data subset from the
+larger US monthly data set. This parent data set is available from
+\url{http://www.image.ucar.edu/public/Data} with a general description at
+\url{http://www.image.ucar.edu/Data/US.monthly.met}.
+
+ These technical details are not needed for casual use of the data --
+skip down to examples for some R code that summarizes these data.
+
+\preformatted{
+
+attach("RData.USmonthlyMet.bin")
+
+#To find a subset that covers Colorado (with a bit extra):
+
+
+indt<- UStinfo$lon< -101 & UStinfo$lon > -109.5
+indt<- indt & UStinfo$lat<41.5 & UStinfo$lat>36.5
+
+# check US(); points( UStinfo[indt,3:4])
+
+#find common names restricting choices to the temperature names
+tn<- match( UStinfo$station.id, USpinfo$station.id)
+indt<- !is.na(tn) & indt
+
+# compare metadata locations and elevations.
+# initial matches to precip stations
+CO.id<- UStinfo[indt,1]
+CO.names<- as.character(UStinfo[indt,5])
+pn<- match( CO.id, USpinfo$station.id)
+
+loc1<- cbind( UStinfo$lon[indt], UStinfo$lat[indt], UStinfo$elev[indt])
+loc2<- cbind( USpinfo$lon[pn], USpinfo$lat[pn], USpinfo$elev[pn])
+
+abs(loc1- loc2) -> temp
+indbad<- temp[,1] > .02 | temp[,2]> .02 | temp[,3] > 100
+
+# tolerance at 100 meters set mainly to include the CLIMAX station
+# a high altitude station.
+
+data.frame(CO.names[ indbad], loc1[indbad,], loc2[indbad,], temp[indbad,] )
+
+
+# CO.names.indbad. X1 X2 X3 X1.1 X2.1 X3.1 X1.2 X2.2 X3.2
+#1 ALTENBERN -108.38 39.50 1734 -108.53 39.58 2074 0.15 0.08 340
+#2 CAMPO 7 S -102.57 37.02 1311 -102.68 37.08 1312 0.11 0.06 1
+#3 FLAGLER 2 NW -103.08 39.32 1519 -103.07 39.28 1525 0.01 0.04 6
+#4 GATEWAY 1 SE -108.98 38.68 1391 -108.93 38.70 1495 0.05 0.02 104
+#5 IDALIA -102.27 39.77 1211 -102.28 39.70 1208 0.01 0.07 3
+#6 KARVAL -103.53 38.73 1549 -103.52 38.80 1559 0.01 0.07 10
+#7 NEW RAYMER -103.85 40.60 1458 -103.83 40.58 1510 0.02 0.02 52
+
+# modify the indt list to exclude these mismatches (there are 7 here)
+
+badones<- match( CO.id[indbad], UStinfo$station.id)
+indt[ badones] <- FALSE
+
+###### now have working set of CO stations have both temp and precip
+##### and are reasonably close to each other.
+
+N<- sum( indt)
+# put data in time series order instead of table of year by month.
+CO.tmax<- UStmax[,,indt]
+CO.tmin<- UStmin[,,indt]
+
+CO.id<- as.character(UStinfo[indt,1])
+CO.elev<- UStinfo[indt,2]
+CO.loc <- UStinfo[indt,3:4]
+CO.names<- as.character(UStinfo[indt,5])
+
+CO.years<- 1895:1997
+
+# now find precip stations that match temp stations
+pn<- match( CO.id, USpinfo$station.id)
+# number of orphans
+sum( is.na( pn))
+
+pn<- pn[ !is.na( pn)]
+CO.ppt<- USppt[,,pn]
+
+# checks --- all should be zero
+
+ind<- match( CO.id[45], USpinfo$station.id)
+mean( abs( c(USppt[,,ind]) - c(CO.ppt[,,45]) ) , na.rm=TRUE)
+
+ind<- match( CO.id[45], UStinfo$station.id)
+mean( abs(c((UStmax[,,ind])) - c(CO.tmax[,,45])), na.rm=TRUE)
+
+mean( abs(c((UStmin[,,ind])) - c(CO.tmin[,,45])), na.rm=TRUE)
+
+
+# check order
+ind<- match( CO.id, USpinfo$station.id)
+ sum( CO.id != USpinfo$station.id[ind])
+ind<- match( CO.id, UStinfo$station.id)
+ sum( CO.id != UStinfo$station.id[ind])
+
+
+# (3 4 5) (6 7 8) (9 10 11) (12 1 2)
+N<- ncol( CO.tmax)
+
+CO.tmax.MAM<- apply( CO.tmax[,3:5,],c(1,3), "mean")
+
+CO.tmin.MAM<- apply( CO.tmin[,3:5,],c(1,3), "mean")
+
+CO.ppt.MAM<- apply( CO.ppt[,3:5,],c(1,3), "sum")
+
+# Now average over 1961-1990
+ind<- CO.years>=1960 & CO.years < 1990
+
+ temp<- stats( CO.tmax.MAM[ind,])
+ CO.tmax.MAM.climate<- ifelse( temp[1,] >= 15, temp[2,], NA)
+
+ temp<- stats( CO.tmin.MAM[ind,])
+ CO.tmin.MAM.climate<- ifelse( temp[1,] >= 15, temp[2,], NA)
+
+ CO.tmean.MAM.climate<- (CO.tmin.MAM.climate + CO.tmax.MAM.climate)/2
+
+ temp<- stats( CO.ppt.MAM[ind,])
+ CO.ppt.MAM.climate<- ifelse( temp[1,] >= 15, temp[2,], NA)
+
+
+save( list=c( "CO.tmax", "CO.tmin", "CO.ppt",
+ "CO.id", "CO.loc","CO.years",
+ "CO.names","CO.elev",
+ "CO.tmin.MAM", "CO.tmax.MAM", "CO.ppt.MAM",
+ "CO.tmin.MAM.climate", "CO.tmax.MAM.climate",
+ "CO.ppt.MAM.climate", "CO.tmean.MAM.climate"),
+ file="COmonthlyMet.rda")
+}
+
+}
+
+\keyword{datasets}
+% docclass is data
+% Converted by Sd2Rd version 1.21.
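The climate summaries in the creation script above keep a station mean only when at least 15 of the years in the window are observed (`ifelse(temp[1,] >= 15, temp[2,], NA)`). A hedged NumPy sketch of that thresholded mean (illustrative only, not the fields `stats` helper):

```python
import numpy as np

def thresholded_mean(x, min_count=15):
    """Column means of a (years x stations) array containing NaNs;
    a station's mean is set to NaN when fewer than min_count years
    are observed, mirroring the ifelse() rule in the script."""
    counts = np.sum(~np.isnan(x), axis=0)
    means = np.nanmean(x, axis=0)
    return np.where(counts >= min_count, means, np.nan)

rng = np.random.default_rng(1)
x = rng.normal(size=(30, 2))   # 30 years x 2 stations
x[5:, 1] = np.nan              # second station: only 5 years observed

out = thresholded_mean(x)
assert np.isfinite(out[0]) and np.isnan(out[1])
```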
diff --git a/man/CO2.Rd b/man/CO2.Rd
new file mode 100644
index 0000000..e940fc4
--- /dev/null
+++ b/man/CO2.Rd
@@ -0,0 +1,100 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{CO2}
+\alias{CO2}
+\alias{CO2.true}
+\docType{data}
+\title{Simulated global CO2 observations}
+\description{
+ This is an example of a moderately large spatial data set and consists of
+simulated CO2 concentrations that are irregularly sampled from a lon/lat
+grid. Also included is the complete CO2 field (CO2.true) used to generate the
+synthetic observations.}
+ \usage{data(CO2) }
+\format{
+ The format of \code{CO2} is a list with two components:
+\itemize{
+\item lon.lat: 26633x2 matrix of the longitude/latitude locations.
+ These are a subset of a larger lon/lat grid (see example below).
+\item y: 26633 CO2 concentrations in parts per million.
+}
+ The format of \code{CO2.true}
+ is a list in "image" format with components:
+\itemize{
+ \item x longitude grid values.
+ \item y latitude grid values.
+ \item z an image matrix with CO2 concentration in parts per million
+ \item mask a logical image that indicates which grid locations were
+selected for the synthetic data set \code{CO2}.
+}
+
+}
+\details{
+This data was generously provided by Dorit Hammerling and Randy Kawa as a
+test example for the spatial analysis of remotely sensed (i.e.
+satellite) and irregular observations. The synthetic data is based on a
+true CO2 field simulated from a geophysical, numerical model.
+}
+\examples{
+\dontrun{
+
+data(CO2)
+#
+# A quick look at the observations with world map
+quilt.plot( CO2$lon.lat, CO2$y)
+world( add=TRUE)
+
+# Note high concentrations in Borneo (biomass burning), Amazonia and
+# ... Michigan (???).
+
+# spatial smoothing using the Wendland compactly supported covariance
+# see help( fastTps) for details
+# First smooth using locations and Euclidean distances
+# note taper is in units of degrees
+out<-fastTps( CO2$lon.lat, CO2$y, theta=4, lambda=2.0)
+# summary of fit; note about 7300 degrees of freedom
+# associated with the fitted surface
+ print( out)
+# image plot on a grid (this takes a while)
+surface( out, type="I", nx=300, ny=150)
+# smooth with respect to great circle distance
+out2<-fastTps( CO2$lon.lat, CO2$y, lon.lat=TRUE,lambda=1.5, theta=4*68)
+print(out2)
+#surface( out2, type="I", nx=300, ny=150)
+
+# these data are actually subsampled from a grid.
+# create the image object that holds the data
+#
+
+temp<- matrix( NA, ncol=ncol(CO2.true$z), nrow=nrow(CO2.true$z))
+temp[ CO2.true$mask] <- CO2$y
+
+# look at gridded object.
+ image.plot(CO2.true$x,CO2.true$y, temp)
+
+# to predict _exactly_ on this grid for the second fit
+# (this takes a while)
+look<- predictSurface( out2, grid.list=list( x=CO2.true$x, y=CO2.true$y))
+image.plot(look)
+
+}
+}
+\keyword{datasets}
diff --git a/man/CovarianceUpper.Rd b/man/CovarianceUpper.Rd
new file mode 100644
index 0000000..5267965
--- /dev/null
+++ b/man/CovarianceUpper.Rd
@@ -0,0 +1,70 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{CovarianceUpper}
+\alias{ExponentialUpper}
+\title{
+Evaluate covariance over upper triangle of distance matrix
+}
+\description{
+Evaluates the covariance over the upper triangle of a distance matrix
+ rather than over the entire matrix to reduce computation time. Note
+ that the \code{chol} function only requires the upper triangle of
+ the covariance matrix to perform the Cholesky decomposition.
+}
+\usage{
+ExponentialUpper(distMat, range = 1, alpha = 1/range)
+}
+\arguments{
+ \item{distMat}{
+The distance matrix to evaluate the covariance over.
+}
+ \item{range}{
+Range parameter; default is one. (Note that the scale can also
+ be specified through the "theta" scaling argument used in
+ fields covariance functions.)
+}
+ \item{alpha}{
+1/range
+}
+}
+\value{
+The covariance matrix, where only the upper triangle is calculated.
+}
+\author{
+John Paige
+}
+\seealso{
+\code{\link[fields]{Exponential}}
+}
+\examples{
+set.seed(123)
+
+#make distance matrix using the random locations
+coords = matrix(runif(10), ncol=2)
+distMat = rdist(coords)
+
+#compute covariance matrix, but only over the upper triangle
+upperCov = ExponentialUpper(distMat, range=.1)
+
+print(distMat)
+print(upperCov)
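+
+# a small added illustration (not in the original example): the computed
+# upper triangle should agree with the full exponential covariance
+# exp( -distMat/range) on the strict upper triangle
+fullCov = exp( -distMat/0.1)
+print( all.equal( upperCov[upper.tri(upperCov)],
+                  fullCov[upper.tri(fullCov)]))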
+}
+\keyword{ covariance }
diff --git a/man/Exponential.Rd b/man/Exponential.Rd
new file mode 100644
index 0000000..84fc6fb
--- /dev/null
+++ b/man/Exponential.Rd
@@ -0,0 +1,156 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{Exponential, Matern, Radial Basis}
+\alias{Exponential}
+\alias{Matern}
+\alias{Matern.cor.to.range}
+\alias{RadialBasis}
+\title{Covariance functions}
+\description{
+Functional form of covariance function assuming the argument is a
+distance between locations. As they are defined here, they are in
+fact correlation functions. To set the marginal variance (sill)
+parameter, use the \code{rho} argument in \code{mKrig} or \code{Krig}.
+To set the nugget variance, use the \code{sigma2} argument in
+\code{mKrig} or \code{Krig}.
+}
+\usage{
+Exponential(d, range = 1, alpha = 1/range, phi=1.0)
+Matern(d , range = 1,alpha=1/range, smoothness = 0.5,
+ nu= smoothness, phi=1.0)
+Matern.cor.to.range(d, nu, cor.target=.5, guess=NULL,...)
+RadialBasis(d,M,dimension, derivative = 0)
+
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+
+ \item{d}{ Vector of distances or for \code{Matern.cor.to.range} just a single distance. }
+
+ \item{range}{ Range parameter; default is one. (Note
+ that the scale can also be specified through the "theta"
+ scaling argument used in fields covariance functions.) }
+
+ \item{alpha}{1/range }
+
+ \item{phi}{This parameter option is included to be compatible with older
+ versions of fields and refers to the marginal variance of the process.
+ e.g. \code{phi* exp( -d/theta)} is the exponential covariance for points
+ separated by distance d and range theta. Throughout fields this parameter
+ is equivalent to rho, and it is recommended that rho be used, particularly
+ if one is simulating random fields. See the help on \code{\link{sim.rf}} for
+ more details. }
+
+ \item{smoothness}{ Smoothness parameter in Matern. Controls the number
+of derivatives in the process. Default is 1/2 corresponding to an exponential
+covariance.}
+
+ \item{nu}{ Same as smoothness}
+ \item{M}{Interpreted as a spline, M is the order of the derivatives in the
+ penalty.}
+ \item{dimension}{Dimension of function}
+ \item{cor.target}{Correlation used to match the range parameter. Default is .5.}
+ \item{guess}{An optional starting guess for solution. This should not be needed.}
+ \item{derivative}{If greater than zero finds the first derivative of this function.}
+ \item{\dots}{Additional arguments to pass to the bisection search function.}
+}
+
+\details{
+Exponential:
+
+exp( -d/range)
+
+Matern:
+
+ con*(d\^nu) * besselK(d , nu )
+
+This is the Matern covariance function transcribed from page 31 of Stein's
+book, with the distance d scaled by alpha (nu == smoothness,
+alpha == 1/range). The geoR package parameters map to kappa == smoothness
+and phi == range.
+
+\code{con} is a constant that normalizes the expression to be 1.0 when d=0.
+
+Matern.cor.to.range:
+ This function is useful to find Matern covariance parameters that are
+comparable for different smoothness parameters. Given a distance \code{d},
+smoothness \code{nu}, target correlation \code{cor.target} and
+range \code{theta}, this function determines numerically the value of
+theta so that
+
+\code{Matern( d, range=theta, nu=nu) == cor.target}
+
+See the example for how this might be used.
+
+Radial basis functions:
+\preformatted{
+ C.m,d r**(2m-d)          d odd
+
+ C.m,d r**(2m-d) ln(r)    d even
+}
+where C.m,d is a constant based on spline theory and r is the radial distance
+between points. See \code{radbas.constant} for the computation of the constant.
+NOTE: Earlier versions of fields used ln(r^2) instead of ln(r) and so differ by a factor of 2.
+}
+\value{
+
+For the covariance functions: a vector of covariances.
+
+For Matern.cor.to.range: the value of the range parameter.
+
+}
+\references{ Stein, M.L. (1999) Statistical Interpolation of Spatial Data: Some Theory for Kriging. Springer, New York.}
+\author{Doug Nychka}
+\seealso{stationary.cov, stationary.image.cov, Wendland, stationary.taper.cov,
+ rad.cov}
+
+\examples{
+# a Matern correlation function
+ d<- seq( 0,10,,200)
+ y<- Matern( d, range=1.5, smoothness=1.0)
+ plot( d,y, type="l")
+
+# Several Materns of different smoothness with a similar correlation
+# range
+
+# find ranges for nu = .5, 1.0 and 2.0
+# where the correlation drops to .1 at a distance of 10 units.
+
+ r1<- Matern.cor.to.range( 10, nu=.5, cor.target=.1)
+ r2<- Matern.cor.to.range( 10, nu=1.0, cor.target=.1)
+ r3<- Matern.cor.to.range( 10, nu=2.0, cor.target=.1)
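+
+# a quick added check of the numerical solution: by construction each
+# range should give a correlation close to .1 at distance 10
+ print( c( Matern( 10, range=r1, nu=.5),
+           Matern( 10, range=r2, nu=1.0),
+           Matern( 10, range=r3, nu=2.0)))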
+
+# note that these equivalent ranges
+# with respect to this correlation length are quite different
+# due to the different smoothness parameters.
+
+ d<- seq( 0, 15,,200)
+ y<- cbind( Matern( d, range=r1, nu=.5),
+ Matern( d, range=r2, nu=1.0),
+ Matern( d, range=r3, nu=2.0))
+
+ matplot( d, y, type="l", lty=1, lwd=2)
+ xline( 10)
+ yline( .1)
+}
+\keyword{spatial}% at least one, from doc/KEYWORDS
diff --git a/man/FORTRAN.internal.Rd b/man/FORTRAN.internal.Rd
new file mode 100644
index 0000000..19ca0b2
--- /dev/null
+++ b/man/FORTRAN.internal.Rd
@@ -0,0 +1,58 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{fields exported FORTRAN}
+ \alias{css}
+ \alias{ddfind}
+ \alias{dmaket}
+ \alias{evlpoly}
+ \alias{evlpoly2}
+ \alias{igpoly}
+ \alias{inpoly}
+ \alias{multeb}
+ \alias{multrb}
+ \alias{radbas}
+ \alias{rcss}
+\title{
+FORTRAN subroutines used in fields functions
+}
+\description{
+These functions implement cubic smoothing splines and also provide some basic
+computations for radial basis functions. All are called using the \code{.Fortran}
+interface.
+}
+\details{
+For these low level FORTRAN subroutines refer to the R functions for the calling
+sequence and to the src subdirectory of the fields package for the source code.
+\describe{
+\item{css}{Cubic smoothing spline see \code{sreg} and \code{splint}}
+\item{ddfind}{Finds nearest neighbor points within a fixed distance. See \code{fields.rdist.near}}
+\item{dmaket}{Creates matrix of all polynomial terms up to fixed order. See \code{fields.mkpoly}}
+\item{evlpoly}{Evaluates a univariate polynomial. See \code{fields.evlpoly}}
+\item{evlpoly2}{Evaluates a multivariate polynomial. See \code{fields.evlpoly2}}
+ \item{inpoly}{Determines which 2-d locations are within a polygon. See \code{in.poly}}
+ \item{igpoly}{Determines which 2-d grid locations are within a polygon. See \code{in.poly.grid}}
+ \item{multeb}{Multiply an exponential cross covariance matrix by another matrix. See \code{exp.cov}}
+ \item{multrb}{Multiply a radial basis function matrix by another matrix. See \code{rad.cov}}
+ \item{radbas}{Evaluates radial basis functions. See \code{rdist.R}}
+ \item{rcss}{Robust cubic smoothing spline. See \code{qsreg}}
+}
+}
+\keyword{internal}
diff --git a/man/Krig.Amatrix.Rd b/man/Krig.Amatrix.Rd
new file mode 100644
index 0000000..bc6ac07
--- /dev/null
+++ b/man/Krig.Amatrix.Rd
@@ -0,0 +1,110 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{Krig.Amatrix}
+\alias{Krig.Amatrix}
+\title{
+ Smoother (or "hat") matrix relating predicted values to the dependent
+(Y) values.
+}
+\description{
+For a fixed value of the smoothing parameter or the covariance
+function some nonparametric curve estimates are linear functions of
+the observed data. This is an intermediate level function that
+computes the linear weights to be applied to the observations to
+estimate the curve at a particular point. For example the predicted
+values can be represented as Ay where A is an N X N matrix of
+coefficients and Y is the vector of observed dependent variables. For
+linear smoothers the matrix A may depend on the smoothing parameter (or
+covariance function) and the independent variables (X) but NOT on Y.
+}
+\usage{
+Krig.Amatrix(object, x0 = object$x, lambda=NULL,
+ eval.correlation.model = FALSE,...)
+}
+\arguments{
+Output object from fitting a data set using a fields regression
+method.
+Currently this is supported only for the Krig (and Tps) functions.
+
+\item{object}{
+A Krig object produced by the Krig ( or Tps) function.
+}
+\item{x0}{
+Locations for prediction default is the observation locations.
+}
+\item{lambda}{
+Value of the smoothing parameter.
+}
+\item{eval.correlation.model}{This applies to a correlation model
+where the observations have been standardized -- e.g.
+ y standardized = (yraw - mean) / (standard deviation). If TRUE the
+prediction in the correlation scale is transformed by the standard
+deviation and mean to give a prediction in the raw scale.
+If FALSE predictions are left in the correlation scale.}
+
+\item{\dots}{ Other arguments that can used by predict.Krig.}
+
+}
+\value{
+A matrix where the number of rows is equal to the number of predicted points
+and the number of columns is equal to the length of the Y vector.
+}
+\details{
+The main use of this function is in finding prediction standard errors.
+
+For the Krig ( and Tps) functions the A matrix is constructed based on the
+representation of the estimate as a generalized ridge regression. The
+matrix expressions are explained in the references from the FIELDS manual.
+For linear regression the matrix that gives predicted values is often
+referred to as the "hat" matrix and is useful for regression diagnostics.
+For smoothing problems the effective number of parameters in the fit is
+usually taken to be the trace of the A matrix. Note that while the A
+matrix is usually constructed to predict the estimated curve at the data
+points, Krig.Amatrix does not have such restrictions. This
+is possible
+because any value of the estimated curve will be a linear function of Y.
+
+The actual calculation in this function is simple. It involves
+looping through the unit vectors at each observation and computing the
+prediction for each of these delta functions. This approach makes it easy to
+handle different options such as including covariates.
+
+}
+\section{References}{
+Nychka (2000) "Spatial process estimates as smoothers."
+}
+\seealso{
+Krig, Tps, predict.Krig
+}
+\examples{
+# Compute the A matrix or "hat" matrix for a thin plate spline
+# check that this gives the same predicted values
+tps.out<-Tps( ChicagoO3$x, ChicagoO3$y)
+A<-Krig.Amatrix( tps.out, ChicagoO3$x)
+test<- A\%*\%ChicagoO3$y
+# now compare this to predict( tps.out) or tps.out$fitted.values
+# they should be the same
+stats( test- tps.out$fitted.values)
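+
+# as noted in the details above, the trace of A is the effective number
+# of parameters in the fit; it should match the eff.df component of the
+# Krig object
+print( sum( diag( A)))
+print( tps.out$eff.df)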
+}
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/Krig.Rd b/man/Krig.Rd
new file mode 100644
index 0000000..4d3568d
--- /dev/null
+++ b/man/Krig.Rd
@@ -0,0 +1,690 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{Krig}
+\alias{Krig}
+\alias{resid.Krig}
+\alias{fitted.Krig}
+\alias{coef.Krig}
+
+\title{
+ Kriging surface estimate
+}
+\description{
+Fits a surface to irregularly spaced data. The Kriging model assumes
+that the unknown function is a realization of a Gaussian
+random spatial processes. The assumed model is additive
+Y = P(x) + Z(X) + e, where P is a low order polynomial and Z is a
+mean zero,
+Gaussian stochastic process with a
+covariance that is unknown up to a scale constant. The main advantages of
+this function are the flexibility in specifying the covariance as an
+R language function and also the supporting functions plot, predict,
+predictSE, surface for
+subsequent analysis. Krig also supports a correlation model where the mean
+and marginal variances are supplied.
+}
+\usage{
+Krig(x, Y, cov.function = "stationary.cov", lambda = NA, df
+ = NA, GCV = FALSE, Z = NULL, cost = 1, knots = NA,
+ weights = NULL, m = 2, nstep.cv = 200, scale.type =
+ "user", x.center = rep(0, ncol(x)), x.scale = rep(1,
+ ncol(x)), rho = NA, sigma2 = NA, method = "REML",
+ verbose = FALSE, mean.obj = NA, sd.obj = NA,
+ null.function = "Krig.null.function", wght.function =
+ NULL, offset = 0, na.rm = TRUE, cov.args = NULL,
+ chol.args = NULL, null.args = NULL, wght.args = NULL,
+ W = NULL, give.warnings = TRUE, ...)
+
+\method{fitted}{Krig}(object,...)
+
+\method{coef}{Krig}(object,...)
+
+resid.Krig(object,...)
+
+}
+\arguments{
+
+
+
+
+
+\item{chol.args}{
+Arguments to be passed to the Cholesky decomposition in Krig.engine.fixed.
+The default if NULL, assigned at the top level of this function, is
+list( pivot=FALSE). This argument is useful when working with
+the sparse matrix package. }
+
+
+\item{cov.args}{
+A list with the arguments to call the covariance function. (in addition to the locations)
+}
+\item{cov.function}{
+Covariance function for data in the form of an R function (see
+Exp.simple.cov as an example).
+Default assumes that correlation is an exponential function of distance.
+See also \code{stationary.cov} for more general choice of covariance
+shapes. \code{exponential.cov} will be faster if only the exponential
+covariance form is needed.
+}
+\item{cost}{
+Cost value used in GCV criterion. Corresponds to a penalty for
+increased number of parameters. The default is 1.0 and corresponds to the
+usual GCV function.
+}
+\item{df}{
+The effective number of parameters for the fitted surface. Conversely,
+N - df, where N is the total number of observations, is the degrees of
+freedom associated with the residuals.
+This is an alternative to specifying lambda and much more interpretable.
+NOTE: GCV argument defaults to TRUE if this argument is used.
+}
+
+\item{GCV}{ If TRUE matrix decompositions are done to allow estimating
+lambda by GCV or REML and specifying smoothness by the effective degrees of
+freedom. So the GCV switch does more than just supply a GCV estimate. Also if
+lambda or df are passed the estimate will be evaluated at those values,
+not at the GCV/REML estimates of lambda.
+If FALSE Kriging estimate is found under a fixed lambda model. }
+
+\item{give.warnings}{ If TRUE warnings are given in gcv grid search limits.
+If FALSE warnings are not given. Best to leave this TRUE!
+This argument is set to FALSE if warn is less than zero in the
+top level R options function. See options()$warn.}
+
+\item{knots}{
+A matrix of locations similar to x. These can define an alternative set of
+basis functions for representing the estimate. One choice may be a
+space-filling subset of the original x locations, thinning out the
+design where locations cluster. The
+default is to put a "knot" at all unique locations. (See details.)
+}
+\item{lambda}{
+Smoothing parameter that is the ratio of the error variance (sigma**2)
+to the scale parameter of the
+covariance function (rho). If omitted this is estimated by GCV ( see
+method below).
+}
+
+
+
+
+
+
+
+\item{method}{
+Determines what "smoothing" parameter should be used. The default
+is the standard GCV estimate.
+Other choices are: GCV.model, GCV.one, RMSE, pure error and REML. The
+differences are explained below.
+}
+
+
+
+\item{mean.obj}{
+Object to predict the mean of the spatial process. This is used when
+fitting a correlation model with varying spatial means and varying
+marginal variances. (See details.)
+}
+
+
+
+\item{m}{
+A polynomial function of degree (m-1) will be
+included in the model as the drift (or spatial trend) component.
+The "m" notation is from thin-plate splines where m is the
+derivative in the penalty function. With m=2 as the default, a linear
+model in the locations will be fit as the fixed part of the model.
+}
+
+\item{na.rm}{If TRUE NAs will be removed from the \code{y} vector and the
+corresponding rows of \code{x} -- with a warning.
+If FALSE Krig will just stop with a message. Once NAs are removed all
+subsequent analysis in fields does not use those data. }
+
+\item{nstep.cv}{ Number of grid points for the coarse grid search to
+minimize the GCV RMLE and other related criteria for finding lambda,
+the smoothing parameter. Default is 200, fairly large to avoid some
+cases of closely spaced local minima. Evaluations of the GCV and
+related objective functions are cheap given the matrix decompositions
+described below. }
+
+\item{null.args}{ Extra arguments for the null space function
+\code{null.function}. If \code{fields.mkpoly} is passed as
+\code{null.function} then this is set to a list with the value of
+\code{m}. So the default is to use a polynomial of degree m-1 for the
+null space (fixed part) of the model. }
+
+\item{null.function}{
+An R function that creates the matrices for the null space model.
+The default is fields.mkpoly, an R function that creates a polynomial
+regression matrix with all terms up to degree m-1. (See Details)
+}
+
+\item{offset}{ The offset to be used in the GCV criterion. Default is
+0. This would be used when Krig is part of a backfitting algorithm and
+the offset is other model degrees of freedom from other regression
+components. }
+
+\item{rho}{
+Scale factor for covariance.
+}
+\item{scale.type}{
+This is a character string among: "range", "unit.sd", "user", "unscaled".
+The independent variables and knots are scaled to the specified scale.type.
+By default no scaling is done. This usually makes sense for spatial locations.
+Scale type of "range" scales the data to the interval (0,1) by forming
+(x-min(x))/range(x) for each x. Scale type of "unit.sd" centers and scales the
+data to have mean 0 and standard deviation 1.
+Scale type of "user" allows specification of an x.center and x.scale by the
+user. The default for "user" is mean 0 and standard deviation 1. Scale
+type of "unscaled" does not scale the data.
+}
+
+\item{sd.obj}{
+Object to predict the marginal standard deviation of the spatial process.
+}
+
+\item{sigma2}{
+Variance of the errors, often called the nugget variance. If weights are
+specified then the error variance is sigma2 divided by weights.
+Note that lambda is defined as the ratio sigma2/rho.
+}
+
+\item{verbose}{
+If true will print out all kinds of intermediate stuff. Default is false,
+of course as this is used mainly for debugging.
+}
+
+\item{weights}{
+Weights are proportional to the reciprocal variance of the measurement
+error. The default is equal weighting i.e. vector of unit weights.
+}
+
+\item{wght.function}{
+An R function that creates a weight matrix for the observations.
+This is only needed if the weight matrix has off diagonal elements.
+The default is NULL, indicating that the weight matrix is diagonal and based
+on the weights argument. (See details.)
+}
+
+\item{W}{The observation weight matrix.}
+
+\item{wght.args}{
+Optional arguments to be passed to the weight function (wght.function)
+used to create the observation weight matrix.}
+
+\item{x}{
+Matrix of independent variables. These could be the locations for spatial
+data or the independent variables in a regression.
+}
+
+\item{x.center}{
+Centering values to be subtracted from each column of the x matrix.
+}
+
+\item{x.scale}{
+Scale values that are divided into each column after centering.
+}
+
+\item{Y}{
+Vector of dependent variables. These are the values of the surface
+(perhaps with measurement error) at the locations or the dependent
+response in a regression.
+}
+\item{Z}{
+A vector or matrix of covariates to be included in the fixed part of the
+model. If NULL (default) no additional covariates are included.}
+
+\item{\dots}{
+ Optional arguments that appear are assumed to be additional arguments
+to the covariance function, or are included in the methods functions (resid,
+fitted, coef) as a
+required argument.}
+
+\item{object}{ A Krig object}
+
+
+
+}
+\value{
+An object of class Krig. This includes the predicted values in
+fitted.values and the residuals in residuals. The results of the grid
+search to minimize the generalized cross validation function are
+returned in gcv.grid.
+
+The coef.Krig function only returns the coefficients, "d", associated with the
+fixed part of the model (also known as the null space or spatial drift).
+
+\item{call}{
+Call to the function
+}
+\item{y}{
+Vector of dependent variables.
+}
+\item{x}{
+Matrix of independent variables.
+}
+\item{weights}{
+Vector of weights.
+}
+\item{knots}{
+Locations used to define the basis functions.
+}
+\item{transform}{
+List of components used in centering and scaling data.
+}
+\item{np}{
+Total number of parameters in the model.
+}
+\item{nt}{
+Number of parameters in the null space.
+}
+\item{matrices}{
+List of matrices from the decompositions (D, G, u, X, qr.T).
+}
+\item{gcv.grid}{
+Matrix of values from the GCV grid search. The first column
+is the grid of lambda values used in the search, the second column
+is the trace of the A matrix, the third column is the GCV values and
+the fourth column is the estimated value of sigma conditional on the value
+of lambda.
+}
+\item{lambda.est}{
+A table of estimated smoothing parameters with corresponding degrees
+of freedom and estimates of sigma found by different methods.
+}
+\item{cost}{
+Cost value used in GCV criterion.
+}
+\item{m}{
+Order of the polynomial space: highest degree polynomial is (m-1).
+This is a fixed part of the surface often referred to as the drift
+or spatial trend.
+}
+\item{eff.df}{
+Effective degrees of freedom of the model.
+}
+\item{fitted.values}{
+Predicted values from the fit.
+}
+\item{residuals}{
+Residuals from the fit.
+}
+\item{lambda}{
+Value of the smoothing parameter used in the fit.
+Lambda is defined as sigma**2/rho. See discussion in details.
+}
+\item{yname}{
+Name of the response.
+}
+\item{cov.function}{
+Covariance function of the model.
+}
+\item{beta}{
+Estimated coefficients in the ridge regression format
+}
+\item{d}{
+Estimated coefficients for the polynomial basis functions that span the
+null space
+}
+\item{fitted.values.null}{
+Fitted values for just the polynomial part of the estimate
+}
+\item{trace}{
+Effective number of parameters in model.
+}
+\item{c}{
+Estimated coefficients for the basis functions derived from the
+covariance.
+}
+\item{coefficients}{
+Same as the beta vector.
+}
+\item{just.solve}{
+Logical describing if the data has been interpolated using the basis
+functions.
+}
+\item{shat}{
+Estimated standard deviation of the measurement error (nugget effect).
+}
+\item{sigma2}{
+Estimated variance of the measurement error (shat**2).
+}
+\item{rho}{
+Scale factor for covariance. COV(h(x),h(x')) = rho*cov.function(x,x').
+If the covariance is actually a
+correlation function then rho is also the "sill".
+}
+\item{mean.var}{
+Normalization of the covariance function used to find rho.
+}
+\item{best.model}{
+Vector containing the value of lambda, the estimated variance of the
+measurement error and the scale factor for covariance used in the fit.
+}
+}
+\details{
+
+This function produces a object of class Krig. With this object it is
+easy to subsequently predict with this fitted surface, find standard
+errors, alter the y data ( but not x), etc.
+
+The Kriging model is: Y.k= f(x.k) = P(x.k) + Z(x.k) + e.k
+
+where ".k" means subscripted by k, Y is the dependent variable observed
+at location x.k, P is a low order polynomial, Z is a mean zero, Gaussian
+field with covariance function K and e is assumed to be independent
+normal errors. The estimated surface is the best linear unbiased
+estimate (BLUE) of f(x)= P(x) + Z(x) given the observed data. For this
+estimate K, is taken to be rho*cov.function and the errors have variance
+sigma**2. In more conventional geostatistical terms rho is the "sill" if
+the covariance function is actually a correlation function and sigma**2
+is the nugget variance or measurement error variance (the two are confounded
+in this model). If the weights are given then the variance of e.k is
+sigma**2/weights.k. In the case that the weights are specified as a
+matrix, W, using the wght.function option then the assumed covariance
+matrix for the errors is sigma**2 Wi, where Wi is the inverse of W. It
+is straightforward to show that the estimate of f only depends on sigma
+and rho through the ratio lambda = sigma**2/ rho. This parameter, termed
+the smoothing parameter plays a central role in the statistical
+computations within \code{Krig}. See also the help for thin plate
+splines, (\code{Tps}) to get another perspective on the smoothing
+parameter.
+
+This function also supports a modest extension of the Generalized
+Kriging model to include other covariates as fixed regression type
+components. In matrix form Y = Zb + F + E where Z is a matrix of
+covariates and b a fixed parameter vector, F the vector of function values at
+the observations and E a vector of errors. The \code{Z} argument in
+the function is the way to specify this additional component.
+
+If the parameters rho and sigma2 are omitted in the call, then they are
+estimated in the following way. If lambda is given, then sigma2 is
+estimated from the residual sum of squares divided by the degrees of
+freedom associated with the residuals. Rho is found as the difference
+between the sums of squares of the predicted values having subtracted off
+the polynomial part and sigma2. These estimates are the MLE's under Gaussian
+assumptions on the process and errors.
+ If lambda is also omitted, it is estimated
+from the data using a variety of approaches and then the values for sigma
+and rho are found in the same way from the estimated lambda.
+
+A useful extension of a stationary correlation to a nonstationary
+covariance is what we term a correlation model.
+If mean and marginal standard deviation objects are included in the call,
+then the observed data are standardized based on these functions. The
+spatial process is then estimated with respect to the standardized scale.
+However for predictions and standard errors the mean and standard
+deviation surfaces are used to produce results in the original scale of
+the observations.
+
+The GCV function has several alternative definitions when replicate
+observations are present or if one uses a reduced set of knots. Here are the
+choices based on the method argument:
+
+GCV: leave-one-out GCV. But if
+there are replicates it is leave one group out. (Wendy and Doug prefer
+this one.)
+
+GCV.one: Really leave-one-out GCV even if there are replicate
+points. This is what the old tps function used in FUNFITS.
+
+rmse: Match the estimate of sigma**2 to an external value (called rmse).
+
+pure error: Match the estimate of sigma**2 to the estimate based on
+replicated data (pure error estimate in ANOVA language).
+
+GCV.model:
+Only considers the residual sums of squares explained by the basis
+functions.
+
+REML:
+The process and errors are assumed to be Gaussian and the likelihood is
+concentrated (or profiled) with respect to lambda. The MLE of lambda is
+found from this criterion. Restricted means that the likelihood is formed from a linear transformation of the observations that is orthogonal to the
+column space of P(x).
+
+
+WARNING: The covariance functions often have nonlinear parameter(s) that
+control the strength of the correlations as a function of separation,
+usually referred to as the range parameter. This parameter must be
+specified in the call to Krig and will not be estimated.
+
+}
+\section{References}{
+See "Additive Models" by Hastie and Tibshirani, "Spatial Statistics" by
+Cressie and the FIELDS manual.
+}
+\seealso{
+summary.Krig, predict.Krig, predictSE.Krig, predictSurfaceSE,
+predictSurface, plot.Krig,
+surface.Krig
+}
+\examples{
+
+# a 2-d example
+# fitting a surface to ozone
+# measurements. Exponential covariance, range parameter is 20 (in miles)
+
+fit <- Krig(ChicagoO3$x, ChicagoO3$y, theta=20)
+
+summary( fit) # summary of fit
+set.panel( 2,2)
+plot(fit) # four diagnostic plots of fit
+set.panel()
+surface( fit, type="C") # look at the surface
+
+# predict at data
+predict( fit)
+
+# predict using 7.5 effective degrees of freedom:
+predict( fit, df=7.5)
+
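+# As noted in Details, the estimate depends on sigma and rho only through
+# the ratio lambda = sigma**2/rho. A sketch (not part of the original
+# example): fixing lambda directly in the call skips the GCV search.
+ fit.fixed<- Krig( ChicagoO3$x, ChicagoO3$y, theta=20, lambda=.1)
+ predict( fit.fixed)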
+
+# predict on a grid ( grid chosen here by defaults)
+ out<- predictSurface( fit)
+ surface( out, type="C") # option "C" our favorite
+
+# predict at arbitrary points (10,-10) and (20, 15)
+ xnew<- rbind( c( 10, -10), c( 20, 15))
+ predict( fit, xnew)
+
+# standard errors of prediction based on covariance model.
+ predictSE( fit, xnew)
+
+# surface of standard errors on a default grid
+ predictSurfaceSE( fit)-> out.p # this takes some time!
+ surface( out.p, type="C")
+ points( fit$x)
+
+\dontrun{
+# Using another stationary covariance.
+# smoothness is the shape parameter for the Matern.
+
+fit <- Krig(ChicagoO3$x, ChicagoO3$y, Covariance="Matern", theta=10, smoothness=1.0)
+summary( fit)
+
+#
+# Roll your own: creating very simple user defined Gaussian covariance
+#
+
+test.cov <- function(x1,x2,theta,marginal=FALSE,C=NA){
+ # return marginal variance
+ if( marginal) { return(rep( 1, nrow( x1)))}
+
+ # find cross covariance matrix
+ temp<- exp(-(rdist(x1,x2)/theta)**2)
+ if( is.na(C[1])){
+ return( temp)}
+ else{
+ return( temp\%*\%C)}
+ }
+#
+# use this and put in quadratic polynomial fixed function
+
+
+ fit.flame<- Krig(flame$x, flame$y, cov.function="test.cov", m=3, theta=.5)
+
+#
+# note how range parameter is passed to Krig.
+# BTW: GCV indicates an interpolating model (nugget variance is zero)
+# This is the content of the warning message.
+
+# take a look ...
+ surface(fit.flame, type="I")
+}
+
+#
+# Thin plate spline fit to ozone data using the radial
+# basis function as a generalized covariance function
+#
+# p=2 is the power in the radial basis function (with a log term added for
+# even dimensions)
+# If m is the degree of derivative in penalty then p=2m-d
+# where d is the dimension of x. p must be greater than 0.
+# In the example below p = 2*2 - 2 = 2
+#
+
+ out<- Krig( ChicagoO3$x, ChicagoO3$y,cov.function="Rad.cov",
+ m=2,p=2,scale.type="range")
+
+# See also the Fields function Tps
+# out should be identical to Tps( ChicagoO3$x, ChicagoO3$y)
+#
+
+# A Knot example
+
+ data(ozone2)
+ y16<- ozone2$y[16,]
+
+# there are some missing values -- remove them
+ good<- !is.na( y16)
+ y<- y16[good]
+ x<- ozone2$lon.lat[ good,]
+
+#
+# the knots can be arbitrary but just for fun find them with a space
+# filling design. Here we select 50 from the full set of 147 points
+#
+ xknots<- cover.design( x, 50, num.nn= 75)$design # select 50 knot points
+
+ out<- Krig( x, y, knots=xknots, cov.function="Exp.cov", theta=300)
+ summary( out)
+# note that the trA found by GCV is around 17 so 50>17 knots may be a
+# reasonable approximation to the full estimator.
+#
+\dontrun{
+# the plot
+ surface( out, type="C")
+ US( add=TRUE)
+ points( x, col=2)
+ points( xknots, cex=2, pch="O")
+}
+\dontrun{
+## A quick way to deal with too much data if you intend to smooth it anyway
+## Discretize the locations to a grid, then apply Krig
+## to the discretized locations:
+##
+RM.approx<- as.image(RMprecip$y, x=RMprecip$x, nx=20, ny=20)
+
+# take a look:
+image.plot( RM.approx)
+# discretized data (observations averaged if in the same grid box)
+# 336 locations -- down from the full 806
+
+# convert the image format to locations, obs and weight vectors
+yd<- RM.approx$z[RM.approx$ind]
+weights<- RM.approx$weights[RM.approx$ind] # takes into account averaging
+xd<- RM.approx$xd
+
+obj<- Krig( xd, yd, weights=weights, theta=4)
+
+# compare to the full fit:
+# Krig( RMprecip$x, RMprecip$y, theta=4)
+}
+
+\dontrun{
+# A correlation model example
+# fit krig surface using a mean and sd function to standardize
+# first get stats from 1987 summer Midwest O3 data set
+ data(ozone2)
+ stats.o3<- stats( ozone2$y)
+ mean.o3<- Tps( ozone2$lon.lat, c( stats.o3[2,]))
+ sd.o3<- Tps( ozone2$lon.lat, c( stats.o3[3,]))
+
+#
+# Now use these to fit particular day ( day 16)
+# and use great circle distance
+
+
+ fit<- Krig( ozone2$lon.lat, ozone2$y[16,],
+ theta=350, mean.obj=mean.o3, sd.obj=sd.o3,
+ Covariance="Matern", Distance="rdist.earth",
+ smoothness=1.0,
+ na.rm=TRUE) #
+
+
+# the finale
+ surface( fit, type="I")
+ US( add=TRUE)
+ points( fit$x)
+ title("Estimated ozone surface")
+}
+\dontrun{
+#
+#
+# explore some different values for the range and lambda using REML
+ theta <- seq( 100,500,,40)
+ PLL<- matrix( NA, 40,80)
+# the loop
+ for( k in 1:40){
+# call to Krig with different ranges
+# also turn off warnings for GCV search
+# to avoid lots of messages. (not recommended in general!)
+ PLL[k,]<- Krig( ozone2$lon.lat,ozone2$y[16,],
+ cov.function="stationary.cov",
+ theta=theta[k], mean.obj=mean.o3, sd.obj=sd.o3,
+ Covariance="Matern",smoothness=.5,
+ Distance="rdist.earth", nstep.cv=80,
+ give.warnings=FALSE, na.rm=TRUE)$gcv.grid[,7]
+#
+# gcv.grid is the grid search output from
+# the optimization for estimating different estimates for lambda including
+# REML
+# default grid is equally spaced in eff.df scale (and should be the same across theta)
+# here
+ }
+# get lambda grid from looping
+ k<- 1
+ lam<- Krig( ozone2$lon.lat,ozone2$y[16,],
+ cov.function="stationary.cov",
+ theta=theta[k], mean.obj=mean.o3, sd.obj=sd.o3,
+ Covariance="Matern",smoothness=.5,
+ Distance="rdist.earth", nstep.cv=80,
+ give.warnings=FALSE, na.rm=TRUE)$gcv.grid[,1]
+# see the 2nd column of $gcv.grid to get the effective degrees of freedom.
+ contour( theta,log(lam) , PLL)
+
+}
+}
+
+\keyword{spatial}
+% docclass is function
diff --git a/man/Krig.engine.default.Rd b/man/Krig.engine.default.Rd
new file mode 100644
index 0000000..c98b9d1
--- /dev/null
+++ b/man/Krig.engine.default.Rd
@@ -0,0 +1,291 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{The Engines:}
+\alias{Krig.engine.default}
+\alias{Krig.engine.knots}
+\alias{Krig.engine.fixed}
+\alias{Krig.coef}
+\alias{Krig.check.xY}
+\alias{Krig.cor.Y}
+\alias{Krig.transform.xY}
+\alias{Krig.make.u}
+\alias{Krig.make.W}
+\alias{Krig.make.Wi}
+\alias{\%d*\%}
+\title{ Basic linear algebra utilities and other
+ computations supporting the Krig function. }
+\description{
+These are internal functions to Krig that compute the basic matrix
+decompositions or solve the linear systems needed to evaluate the
+Krig/Tps estimate. Others listed below do some simple housekeeping and
+formatting. Typically they are called from within Krig but can also be
+used directly if passed a Krig object list.
+}
+\usage{
+Krig.engine.default(out, verbose = FALSE)
+Krig.engine.knots(out, verbose = FALSE)
+Krig.engine.fixed( out, verbose=FALSE, lambda=NA)
+
+Krig.coef(out, lambda = out$lambda, y = NULL, yM = NULL, verbose = FALSE)
+Krig.make.u(out, y = NULL, yM = NULL, verbose = FALSE)
+Krig.check.xY(x, Y,Z, weights, na.rm, verbose = FALSE)
+Krig.cor.Y(obj, verbose = FALSE)
+Krig.transform.xY(obj, knots, verbose = FALSE)
+
+Krig.make.W( out, verbose=FALSE)
+Krig.make.Wi ( out, verbose=FALSE)
+
+}
+\arguments{
+
+ \item{out}{ A complete or partial Krig object. If partial it must have
+all the information accumulated to this calling point within the Krig
+function. }
+
+\item{obj}{Same as \code{out}. }
+
+\item{verbose}{If TRUE prints out intermediate results for
+debugging.}
+
+\item{lambda}{Value of smoothing parameter "hard wired" into decompositions.
+Default is NA, i.e. use the value in \code{out\$lambda}. }
+
+
+\item{y}{New y vector for recomputing coefficients. OR for \%d*\% a
+vector or matrix. }
+
+\item{yM}{New y vector for recomputing coefficients but
+the values have already been collapsed into replicate group means.}
+
+\item{Y}{raw data Y vector}
+
+\item{x}{raw x matrix of spatial locations
+OR
+In the case of \%d*\%, x is either a matrix or a vector. As a vector, x is
+interpreted to be the elements of a diagonal matrix.
+}
+
+\item{weights}{ Raw \code{weights} vector passed to Krig}
+\item{Z}{ Raw vector or matrix of additional covariates.}
+
+\item{na.rm}{ NA action logical values passed to Krig}
+
+\item{knots}{Raw \code{knots} matrix passed to Krig}
+
+}
+
+\details{
+
+ENGINES:
+
+The engines are the
+code modules that handle the basic linear algebra needed to
+computed the estimated curve or surface coefficients.
+All the engine work on the data that has been reduced to unique
+locations and possibly replicate group means with the weights
+adjusted accordingly. All information needed for the decomposition are
+components in the Krig object passed to these functions.
+
+ \code{Krig.engine.default} finds the decompositions for a Universal
+Kriging estimator by simultaneously diagonalizing the linear
+system for the coefficients of the estimator. The main advantage of this
+form is that it is fairly stable numerically, even with ill-conditioned
+covariance matrices, when lambda > 0 (i.e. provided there is a "nugget"
+or measurement error). Also the eigendecomposition allows for
+rapid evaluation of the likelihood, GCV and coefficients for new data
+vectors under different values of the smoothing parameter, lambda.
+
+
+ \code{Krig.engine.knots} finds the decompositions in the case that the
+covariance is evaluated at arbitrary locations possibly different than
+the data locations (called knots). The intent of these decompositions is
+to facilitate the evaluation at different values for lambda. There will
+be computational savings when the number of knots is less than the
+number of unique locations. (But the knots should be distributed densely
+enough to resolve the structure in the underlying spatial process.) This function calls
+fields.diagonalize, a function that computes the matrix and eigenvalues
+that simultaneously diagonalize a nonnegative definite and a positive
+definite matrix. These decompositions also facilitate multiple
+evaluations of the likelihood and GCV functions in estimating a
+smoothing parameter and also multiple solutions for different y vectors.
+
+ \code{Krig.engine.fixed} is a specific decomposition based on the Cholesky
+factorization assuming that the smoothing parameter is fixed. This
+is the only case that works with sparse matrices.
+Both knots and the full set of locations can be handled by this case.
+The difference from the "knots" engine above is that only a single value
+of lambda is considered in the fixed engine.
+
+
+OTHER FUNCTIONS:
+
+\code{Krig.coef} Computes the "c" and "d" coefficients to represent the
+estimated curve. These coefficients are used by the predict functions for
+evaluations. Krig.coef can be used outside of the call to Krig to
+recompute the fit with different Y values and possibly with different
+lambda values. If new y values are not passed to this function then the yM
+vector in the Krig object is used. The internal function
+\code{Krig.ynew} sorts out the logic of what to do and use based on the
+passed arguments.
+
+\code{Krig.make.u} Computes the "u" vector, a transformation of the collapsed
+observations that allows for rapid evaluation of the GCV function and
+prediction. This only makes sense when the decomposition is WBW or DR, i.e.
+an eigen decomposition. If the decomposition is Cholesky based then this
+function returns NA for the u component in the list.
+
+\code{Krig.check.xY} Checks for and removes missing values (NAs).
+
+\code{Krig.cor.Y} Standardizes the data vector Y based on a correlation model.
+
+ \code{Krig.transform.xY} Finds all replicates and collapses to unique
+locations with mean responses and pooled variances and weights. These are
+the xM, yM and weightsM used in the engines. Also scales the x locations
+and the knots according to the transformation.
+
+
+ \code{Krig.make.W} and \code{Krig.make.Wi} These functions create a
+(possibly non-diagonal) weight matrix and its symmetric square root or the inverse
+of the weight matrix based on the information passed to Krig. If
+\code{out$nondiag} is TRUE W is constructed based on a call to the passed
+function wght.function along with additional arguments. If this flag is
+FALSE then W is just \code{diag(out$weightsM)} and the square root and inverse
+are computed directly.
+
+
+\code{\%d*\%} Is a simple way to implement efficient diagonal
+multiplications. x\%d*\%y is interpreted to mean diag(x)\%*\% y
+if x is a vector. If x is a matrix then this becomes the same as the usual
+matrix multiplication.
+}
+
+\section{Returned Values}{
+
+ENGINES:
+
+The returned value is a list with the matrix decompositions and
+other information. These are incorporated into the complete Krig object.
+
+Common to all engines:
+\describe{
+ \item{decomp}{Type of decomposition}
+ \item{nt}{dimension of T matrix}
+ \item{np}{number of knots}
+}
+
+\code{Krig.engine.default}:
+
+\describe{
+\item{u}{Transformed data using eigenvectors.}
+\item{D}{Eigenvalues}
+\item{G}{Reduced and weighted matrix of the eigenvectors}
+\item{qr.T}{QR decomposition of fixed regression matrix}
+\item{V}{The eigenvectors}
+}
+
+\code{Krig.engine.knots}:
+
+\describe{
+ \item{u}{A transformed vector that is based on the data vector.}
+ \item{D}{Eigenvalues of decomposition}
+ \item{G}{Matrix from diagonalization}
+ \item{qr.T}{QR decomposition of the matrix for the fixed component.
+ i.e. sqrt( Wm)\%*\%T}
+ \item{pure.ss}{pure error sums of squares including both the
+ variance from replicates and also the sums of squared residuals
+ from fitting the full knot model with lambda=0 to the replicate means. }
+}
+
+\code{Krig.engine.fixed}:
+
+\describe{
+\item{d}{estimated coefficients for the fixed part of model}
+\item{c}{estimated coefficients for the basis functions derived from the
+ covariance function.}
+}
+
+Using all data locations
+
+\describe{
+\item{qr.VT}{QR decomposition of the inverse Cholesky factor times the
+T matrix. }
+\item{MC}{Cholesky factor}
+}
+
+Using knot locations
+\describe{
+\item{qr.Treg}{QR decomposition of regression matrix modified by the
+estimate of the nonparametric ( or spatial) component.}
+\item{lambda.fixed}{Value of lambda used in the decompositions}
+}
+
+OTHER FUNCTIONS:
+
+\code{Krig.coef}
+\describe{
+\item{yM}{Y values as replicate group means}
+\item{shat.rep}{Sample standard deviation of replicates}
+\item{shat.pure.error}{Same as shat.rep}
+\item{pure.ss}{Pure error sums of squares based on replicates}
+\item{c}{The "c" basis coefficients associated with the covariance
+or radial basis functions.}
+\item{d}{The "d" regression type coefficients that are from the fixed part of the model
+or the linear null space.}
+\item{u}{When the default decomposition is used the data vector transformed by the orthogonal matrices. This facilitates evaluating the GCV function
+at different values of the smoothing parameter.}
+}
+\code{Krig.make.W}
+\describe{
+\item{W}{The weight matrix}
+\item{W2}{ Symmetric square root of weight matrix}
+}
+
+\code{Krig.make.Wi}
+\describe{
+\item{ Wi}{The inverse weight matrix}
+\item{W2i}{ Symmetric square root of inverse weight matrix}
+}
+
+
+}
+
+\author{Doug Nychka }
+
+\seealso{ \code{\link{Krig}}, \code{\link{Tps}} }
+\examples{
+
+Krig( ChicagoO3$x, ChicagoO3$y, theta=100)-> out
+
+Krig.engine.default( out)-> stuff
+
+# compare "stuff" to components in out$matrices
+
+look1<- Krig.coef( out)
+look1$c
+# compare to out$c
+
+look2<- Krig.coef( out, yM = ChicagoO3$y)
+look2$c
+# better be the same even though we pass as new data!
+
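+# a small sketch of the \%d*\% operator described in Details:
+# for a vector x, x \%d*\% y is the same as diag(x) \%*\% y
+ x0<- c( 1, 2, 3)
+ y0<- matrix( 1, 3, 2)
+ x0 \%d*\% y0
+# compare to diag( x0) \%*\% y0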
+}
+\keyword{ spatial }
diff --git a/man/Krig.null.function.Rd b/man/Krig.null.function.Rd
new file mode 100644
index 0000000..95eea45
--- /dev/null
+++ b/man/Krig.null.function.Rd
@@ -0,0 +1,51 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{Krig.null.function}
+\alias{Krig.null.function}
+\title{Default function to create fixed matrix part of spatial process model.}
+\description{
+Constructs a matrix of terms representing a low order polynomial
+and binds additional columns due to covariates (the Z matrix).
+}
+\usage{
+Krig.null.function(x, Z = NULL, drop.Z = FALSE, m)
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+ \item{x}{Spatial locations }
+ \item{Z}{ Other covariates to be associated with each location.}
+ \item{drop.Z}{If TRUE only the low order polynomial part is created. }
+ \item{m}{ The polynomial order is (m-1). }
+}
+\value{
+A matrix where the first columns are the polynomial terms and the
+following columns are from Z.
+
+}
+\details{
+This function can be modified to produce a different fixed part of the
+spatial model. The arguments x, Z and drop.Z are required but other arguments
+can be passed as part of a list in null.args in the call to Krig.
+}
+\author{Doug Nychka }
+\seealso{Krig}
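+\examples{
+# a sketch of the default fixed part: with m=3 (quadratic polynomial)
+# and 2-d locations there are choose(m-1+2, 2) = 6 polynomial columns
+ set.seed( 111)
+ x<- matrix( runif( 10), 5, 2)
+ T.matrix<- Krig.null.function( x, m=3)
+ dim( T.matrix)
+}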
+\keyword{ spatial}% at least one, from doc/KEYWORDS
diff --git a/man/Krig.replicates.Rd b/man/Krig.replicates.Rd
new file mode 100644
index 0000000..d4b24fe
--- /dev/null
+++ b/man/Krig.replicates.Rd
@@ -0,0 +1,90 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{Krig.replicates}
+\alias{Krig.replicates}
+%- Also NEED an '\alias' for EACH other topic documented here.
+\title{
+Collapse repeated spatial locations into unique locations
+}
+\description{
+In the case that several observations are available for a single spatial
+location, find the group means and replicate variability.
+}
+\usage{
+Krig.replicates(out, x, y, Z, weights=rep( 1, length(y)), verbose = FALSE)
+}
+
+\arguments{
+ \item{out}{ A list with components \code{x}, \code{y}, \code{weights}, and possibly
+\code{Z}.}
+\item{x}{Spatial locations.}
+\item{y}{Spatial observations}
+\item{Z}{Spatial covariates.}
+\item{weights}{Weights proportional to reciprocal variances of observations.}
+
+\item{verbose}{ If TRUE print out details for debugging. }
+}
+\details{
+This function figures out which locations are the same and, within the function fast.1way,
+uses \code{tapply} to find replicate group means and standard deviations.
+NOTE: it is assumed the Z covariates are unique at the locations. Currently these functions
+cannot handle a model with common spatial locations but different values for the Z covariates.
+
+}
+\value{
+A list with components:
+ \item{yM }{Data at unique locations; where more than one observation is
+ available this is the mean of the replicates.}
+
+ \item{xM }{Unique spatial locations.}
+\item{weightsM}{Weights matching the unique locations, proportional to reciprocal variances.
+This is found as a combination of the original weights at each location.}
+\item{ZM}{Values of the covariates at the unique locations.}
+\item{uniquerows}{Index for unique rows of \code{x}.}
+\item{shat.rep, shat.pure.error}{Standard deviation of pure error estimate based on replicate groups
+ (and adjusting for possibly different weights.)}
+\item{rep.info}{Integer tags indicating replicate groups.}
+
+}
+
+\author{
+Douglas Nychka
+}
+
+\examples{
+
+#create some spatial replicates
+ set.seed( 123)
+ x0<- matrix( runif(10*2), 10,2)
+ x<- x0[ c(rep(1,3), 2:8, rep( 9,5),10) , ]
+ y<- rnorm( 16)
+
+ out<- Krig.replicates( x=x, y=y)
+# compare
+# out$yM[1] ; mean( y[1:3])
+# out$yM[9] ; mean( y[11:15])
+# mean( y[ out$rep.info==9])
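+
+# a sketch: the collapsed weights reflect the pooling of replicates
+# (with unit weights a group mean of several replicates is more precise)
+# out$weightsM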
+
+}
+\keyword{spatial}
diff --git a/man/MLESpatialProcess.Rd b/man/MLESpatialProcess.Rd
new file mode 100644
index 0000000..13f6816
--- /dev/null
+++ b/man/MLESpatialProcess.Rd
@@ -0,0 +1,222 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{MLESpatialProcess}
+\alias{MLESpatialProcess}
+%- Also NEED an '\alias' for EACH other topic documented here.
+\title{
+ Estimates key covariance parameters for a spatial process.
+}
+\description{
+Maximizes the likelihood to determine the nugget variance (sigma^2), the sill
+(rho) and the range (theta) for a spatial process.
+}
+\usage{
+MLESpatialProcess(x, y, weights = rep(1, nrow(x)), Z = NULL, mKrig.args
+ = NULL, cov.function = "stationary.cov", cov.args =
+ list(Covariance = "Matern", smoothness = 1),
+ lambda.start = 0.5, theta.start = NULL, theta.range =
+ NULL, gridN = 20, optim.args = NULL, na.rm = TRUE,
+ verbose = FALSE, abstol = 1e-04, REML = FALSE, ...)
+
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+\item{x}{
+A matrix of spatial locations with rows indexing location
+ and columns the dimension (e.g. longitude/latitude)
+}
+\item{y}{
+Spatial observations
+}
+ \item{weights}{
+Precision ( 1/variance) of each observation
+}
+ \item{Z}{
+Linear covariates to be included in fixed part of the
+ model that are distinct from the default low order
+ polynomial in \code{x}
+}
+\item{mKrig.args}{A list containing other objects to pass to mKrig.}
+\item{lambda.start}{
+The initial guess for lambda, the nugget to sill ratio.
+}
+\item{theta.start}{
+The initial guess for theta, the correlation range parameter.
+}
+\item{theta.range}{Range of range parameters (aka theta) to search over. The default is the range between the 2nd and 97th percent quantiles of the pairwise distances among locations.}
+
+\item{gridN}{Number of points to use in grid search over theta.}
+\item{cov.function}{
+The name of the covariance function (see help on Krig for details).
+}
+\item{cov.args}{
+A list with arguments for the covariance functions. These are
+ usually parameters and other options such as the type of
+ distance function.
+}
+
+\item{optim.args}{
+Additional arguments passed to the optim function for likelihood
+ maximization. The default value is:
+ \code{optim.args = list(method = "BFGS",
+ control = list(fnscale = -1, parscale = c(0.5, 0.5),
+ ndeps = c(0.05,0.05)))}
+}
+\item{na.rm}{If TRUE remove missing values in y and corresponding locations in x.}
+
+\item{verbose}{
+If TRUE print out intermediate information for debugging.
+}
+
+\item{abstol}{Absolute tolerance used to judge convergence in optim.}
+
+\item{REML}{If TRUE maximize the restricted likelihood instead of the concentrated likelihood. (Preliminary experience suggests this does not make much difference.) }
+
+\item{\dots}{
+Additional arguments to pass to the mKrig function.
+}
+}
+\details{
+ MLESpatialProcess is designed to be a simple and easy to use function for
+ maximizing the likelihood for a Gaussian spatial process. With the other
+ covariance parameters held fixed, the likelihood is maximized over the nugget and sill
+ parameters using the \code{mKrig} function. \code{lambda} and \code{theta}
+ are optimized using the \code{mKrigMLEJoint} function on a log scale.
+
+ MLESpatialProcess.fast is an older fields function also using the \code{optim}
+ function to maximize the likelihood computed from the \code{mKrig} function. It will
+ eventually be removed from later versions of fields but is still useful as a cross
+ check on newer functions.
+
+ Note the likelihood can be maximized analytically over the parameters of the fixed
+ part of the spatial model and with the nugget (sigma) and sill (rho) reduced to the
+ single parameter lambda = sigma^2/rho. The
+likelihood is then maximized numerically over lambda and theta. If there are additional
+covariance parameters (such as smoothness for the Matern) these need to be fixed,
+and so the MLE is found for the covariance conditional on these additional
+parameter values. From a practical point of view it is often difficult to estimate even
+these three parameters from a moderate spatial data set, and the user is encouraged to try
+different combinations of fixing covariance parameters with ML for the remaining
+ones.
+
+}
+\value{
+ \code{MLESpatialProcess}:
+ A list that includes components:
+\code{theta.MLE, rho.MLE, sigma.MLE, lambda.MLE} being the maximum
+ likelihood estimates of these
+parameters. The component \code{REML.grid} is a two column matrix
+ with the
+first column being the theta grid and the second column being the
+ profiled and restricted likelihood for that value of theta. Here profile means that
+ the likelihood has already been evaluated at the maximum over sigma
+ and rho for this value of theta.
+ \code{eval.grid} is a more complete "capture" of the
+ evaluations being a
+ 6 column matrix with the parameters theta, lambda, sigma,
+ rho, profile likelihood and the effective degrees of
+ freedom.
+
+ \code{MLESpatialProcess.fast} has been deprecated and is included for backward compatibility.
+
+ }
+
+\author{
+Doug Nychka, John Paige
+}
+
+\seealso{
+\code{\link{Krig}}, \code{\link{mKrigMLEGrid}}, \code{\link{mKrigMLEJoint}}, \code{\link{optim}}, \code{\link{fastTps.MLE}}, \code{\link{spatialProcess}}
+}
+\examples{
+#
+#
+#generate observation locations (100 is small just to make this run quickly)
+n=100
+set.seed(124)
+x = matrix(runif(2*n), nrow=n)
+#generate observations at the locations
+trueTheta = .1
+trueSigma = .01
+Sigma = exp( -rdist(x,x) /trueTheta )
+y = t(chol(Sigma))\%*\% (rnorm(n)) + trueSigma * rnorm( n)
+# Use exponential covariance estimate constant function for mean
+out = MLESpatialProcess(x, y,
+ smoothness=.5,
+ mKrig.args = list( m = 1)
+ )
+# Use exponential covariance, use a range to determine MLE of range parameter
+\dontrun{
+#Use Matern covariance, compute joint MLE of range, smoothness, and lambda.
+#This may take a few seconds
+testSmoothness = c(.5, 1, 2)
+for( nu in testSmoothness){
+ out = MLESpatialProcess(x, y, cov.args=list(Covariance="Matern"), smoothness=nu)
+ print( out$MLEJoint$summary)
+}
+
+}
+
+# example with a covariate
+\dontrun{
+data(COmonthlyMet)
+ind<- !is.na( CO.tmean.MAM.climate)
+x<- CO.loc[ind,]
+y<- CO.tmean.MAM.climate[ind]
+elev<- CO.elev[ind]
+obj2<- MLESpatialProcess( x,y)
+obj3<- MLESpatialProcess( x,y, Z=elev)
+
+# elevation makes a difference
+obj2$MLEJoint$summary
+obj3$MLEJoint$summary
+
+ }
+ \dontrun{
+# fits for first 10 days from ozone data
+data( ozone2)
+NDays<- 10
+O3MLE<- matrix( NA, nrow= NDays, ncol=7)
+for( day in 1: NDays){
+ cat( day, " ")
+ ind<- !is.na(ozone2$y[day,] )
+ x<- ozone2$lon.lat[ind,]
+ y<- ozone2$y[day,ind]
+ print( length( y))
+ O3MLE[day,]<- MLESpatialProcess( x,y,
+ Distance="rdist.earth")$MLEJoint$summary
+}
+# NOTE: names of summary:
+#[1] "lnProfileLike.FULL" "lambda"
+#[3] "theta" "sigmaMLE"
+#[5] "rhoMLE" "funEval"
+#[7] "gradEval"
+plot( log(O3MLE[,2]), log(O3MLE[,3]))
+}
+}
+% Add one or more standard keywords, see file 'KEYWORDS' in the
+% R documentation directory.
+\keyword{spatial}
diff --git a/man/NorthAmericanRainfall.Rd b/man/NorthAmericanRainfall.Rd
new file mode 100644
index 0000000..02e453a
--- /dev/null
+++ b/man/NorthAmericanRainfall.Rd
@@ -0,0 +1,76 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{NorthAmericanRainfall}
+\alias{NorthAmericanRainfall}
+\docType{data}
+\title{
+Observed North American summer precipitation from the
+historical climate network.
+}
+\description{
+Average rainfall in tenths of millimeters for the months
+of June, July and August for the period 1950-2010. Data is based on
+1720 stations located in North America.
+}
+%\usage{data(NorthAmericanRainfall)}
+\format{
+ The format is a list with components:
+"longitude" "latitude" "precip" "elevation" "precipSE" "trend" "trendSE" "type" "x.s" "sProjection"
+with elevation in meters, longitude as (-180,180), latitude as (-90, 90) and precipitation in 1/10 mm
+( precip/254 converts to inches of rainfall).
+
+\code{precip} is the intercept for 1980.5 when a straight line least squares regression is fit to
+each station's record. \code{precipSE} is the companion standard error from the least squares fit.
+If the station is complete, then \code{precip} and \code{precipSE} will just be the mean and standard deviation adjusted for a linear trend. The estimated trend \code{trend} and its standard error \code{trendSE} are also included.
+ Also, due to the centering, for complete data the intercept and trend estimates will be uncorrelated. The component \code{type} indicates whether the station has been "adjusted" (see below) or is still in
+ "unadjusted" form.
+
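A sketch of the per-station fit described above, with a made-up noiseless station record; the centering at 1980.5 makes the intercept the fitted value at mid-record:

```r
# hypothetical single-station record, 1950-2010, with known intercept and trend
yr <- 1950:2010
record <- 5 + 2 * (yr - 1980.5)      # stand-in for one station's JJA totals
fit <- lm(record ~ I(yr - 1980.5))   # centered straight line regression
coef(fit)  # intercept plays the role of "precip", slope the role of "trend"
```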
+\code{x.s} is a useful transformation of locations into stereographic coordinates that reduces the
+inflation of northern Canada seen with the usual lon/lat coordinates. Specifically it is found by:
+\preformatted{
+ library(mapproj)
+ xStereo<- mapproject( NorthAmericanRainfall$lon,NorthAmericanRainfall$lat, projection="stereographic")
+ NorthAmericanRainfall$x.s<- cbind( xStereo$x, xStereo$y)
+ NorthAmericanRainfall$projection<- .Last.projection
+}
+Use \code{NorthAmericanRainfall$orientation} to access the stereographic projection orientation.
+
+}
+
+\source{
+The monthly data used to construct this summary were generously provided by Xuebin Zhang; however,
+the original source is freely available as the Global Historical Climate Network Version 2 Precipitation
+data set, quality controlled, curated and served by the US National Climatic Data Center (NCDC).
+The adjusted data from this archive have been modified from their raw form to make the record more homogeneous. Heterogeneities can come from a variety of sources such as moving the station a short distance or changes in instruments. See \url{http://www.ncdc.noaa.gov/ghcnm}
+}
+
+\examples{
+data(NorthAmericanRainfall)
+x<- cbind(NorthAmericanRainfall$longitude, NorthAmericanRainfall$latitude)
+y<- NorthAmericanRainfall$precip
+quilt.plot( x,y)
+world( add=TRUE)
+
+Zstat<- NorthAmericanRainfall$trend / NorthAmericanRainfall$trendSE
+quilt.plot( x, Zstat)
+
+}
+\keyword{datasets}
diff --git a/man/QTps.Rd b/man/QTps.Rd
new file mode 100644
index 0000000..6745cee
--- /dev/null
+++ b/man/QTps.Rd
@@ -0,0 +1,263 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{QTps}
+\alias{QTps}
+%- Also NEED an '\alias' for EACH other topic documented here.
+\title{
+%% ~~function to do ... ~~
+Robust and Quantile smoothing using a thin-plate spline
+}
+\description{
+%% ~~ A concise (1-5 lines) description of what the function does. ~~
+This function uses the standard thin plate spline function \code{Tps} and an algorithm based on
+pseudo data to compute robust smoothers based on the Huber weight function. By modifying the
+symmetry of the Huber function and changing the scale one can also approximate a quantile
+smoother. This function is experimental in that it is not clear how efficient the pseudo-data
+algorithm is at achieving convergence to a solution.
+}
+\usage{
+QTps(x, Y, ..., f.start = NULL, psi.scale = NULL, C = 1, alpha = 0.5, Niterations = 100,
+ tolerance = 0.001, verbose = FALSE)
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+ \item{x}{
+Locations of observations.
+}
+ \item{Y}{
+Observations
+}
+ \item{\dots}{
+Any other arguments to pass to the Tps function.
+}
+ \item{C}{Scaling for the Huber robust weighting function. (See below.) Usually it is better to leave this at 1 and
+just modify the scale \code{psi.scale} according to the size of the residuals. }
+ \item{f.start}{
+The initial value for the estimated function. If NULL then the constant function at the
+median of \code{Y} will be used. NOTE: This may not be a very good starting vector and a more robust
+method would be to use a local robust smoother.
+}
+ \item{psi.scale}{
+The scale value for the Huber function. When C=1, this is the point where the Huber weight function will
+change from quadratic to linear. Default is to use the scale \code{.05*mad(Y)} and \code{C=1}. Very small scales relative to the
+size of the residuals will cause the estimate to approximate a quantile spline. Very large scales will yield the
+ordinary least squares spline.
+}
+ \item{alpha}{
+The quantile that is estimated by the spline. Default is .5 giving a median. Equivalently this parameter controls the slope of the linear wings in the Huber function: \code{2*alpha} for the positive wing and \code{2*(1-alpha)}
+for the negative wing.
+
+}
+ \item{Niterations}{
+Maximum number of iterations of the pseudo data algorithm.
+}
+ \item{tolerance}{
+Convergence criterion based on the relative change in the predicted values of the function estimate. Specifically, if the criterion \code{mean(abs(f.hat.new - f.hat))/mean(abs(f.hat))} is less than \code{tolerance} the iterations are stopped.
+}
+ \item{verbose}{
+If TRUE intermediate results are printed out.
+}
+}
+\details{
+This is an experimental function that uses the pseudo-value algorithm to compute a class of robust and quantile smoothing problems.
+
+The Thin Plate Spline/Kriging model through fields is: Y.k = f(x.k) + e.k = P(x.k) + Z(x.k) + e.k
+
+ with the goal of estimating the smooth function: f(x) = P(x) + Z(x)
+
+The extension in this function is that e.k can be heavy tailed or have outliers and one would still like a
+robust estimate of f(x). In the quantile approximation (very small scale parameter) f(x) is an estimate of the
+alpha quantile of the conditional distribution of Y given x.
+
+The algorithm is iterative and involves at each step tapering the residuals in a nonlinear way.
+Let psi.wght be this tapering function; then given an initial estimate of f, f.hat, the new data for smoothing are
+
+\code{ Y.pseudo<- f.hat + psi.scale* psi.wght( Y - f.hat, psi.scale=psi.scale, alpha=alpha)}
+A thin plate spline is now estimated for these data and a new prediction for f is found. This new vector is
+used to define new pseudo values. Convergence is achieved when the subsequent estimates of f.hat do not
+change between iterations. The advantage of this algorithm is that at every step a standard "least squares" thin
+plate spline is fit to the pseudo data. Because only the observation vector is changing at each iteration,
+some matrix decompositions need only be found once and the computations at each subsequent iteration are efficient.
+At convergence there is some asymptotic theory to suggest that the pseudo data can be fit using the least
+squares spline and the standard smoothing techniques are valid. For example, one can consider looking at the
+cross-validation function for the pseudo data as a robust criterion to select a smoothing parameter. This approach
+is different from the weighted least squares algorithm used in the \code{qsreg} function. Also, \code{qsreg} is only
+designed to work with 1-d cubic smoothing splines.
+
+The "rho" function indicating the departure from a pure quadratic loss function has the definition
+\preformatted{
+qsreg.rho<-function(r, alpha = 0.5, C = 1){
+ temp<- ifelse( r< 0, ((1 - alpha) * r^2)/C , (alpha * r^2)/C)
+ temp<- ifelse( r >C, 2 * alpha * r - alpha * C, temp)
+ temp<- ifelse( r < -C, -2 * (1 - alpha) * r - (1 - alpha) * C, temp)
+ temp
+}
+}
+
+The derivative of this function "psi" is
+
+\preformatted{
+ qsreg.psi<- function(r, alpha = 0.5, C = 1){
+ temp <- ifelse( r < 0, 2*(1-alpha)* r/C, 2*alpha * r/C )
+ temp <- ifelse( temp > 2*alpha, 2*alpha, temp)
+ temp <- ifelse( temp < -2*(1-alpha), -2*(1-alpha), temp)
+ temp
+ }
+}
+
+Note that if C is very small and if alpha = .5 then psi will essentially be 1 for r > 0 and -1 for r < 0.
+The key feature here is that outside a certain range the residual is truncated to a constant value. This is similar
+ to the Winsorizing operation in classical robust statistics.
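A quick check of the truncation just described, restating the psi function from above with its defaults (alpha = .5, C = 1), under which psi simply clamps residuals to the interval [-1, 1]:

```r
# psi function as given above, with defaults alpha = .5 and C = 1
qsreg.psi <- function(r, alpha = 0.5, C = 1) {
    temp <- ifelse(r < 0, 2 * (1 - alpha) * r/C, 2 * alpha * r/C)
    temp <- ifelse(temp > 2 * alpha, 2 * alpha, temp)
    temp <- ifelse(temp < -2 * (1 - alpha), -2 * (1 - alpha), temp)
    temp
}
# small residuals pass through unchanged; large ones are truncated (Winsorized)
qsreg.psi(c(-3, -0.4, 0.4, 3))   # -1.0 -0.4  0.4  1.0
```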
+
+Another advantage of the pseudo data algorithm is that at convergence one can just apply all the usual
+generic functions from Tps to the pseudo data fit. For example, predict, surface, print, etc. Some additional
+components are added to the Krig/Tps object, however, for information about the iterations and original data.
+Note that currently these are not reported in the summaries and printing of the output object.
+
+
+}
+\value{
+A \code{Krig} object with additional components:
+
+ \item{yraw}{ Original Y values}
+ \item{conv.info}{A vector giving the convergence criterion at each iteration.}
+ \item{conv.flag}{If TRUE then convergence criterion was less than the tolerance value.}
+ \item{psi.scale}{Scaling factor used for the psi.wght function.}
+ \item{value}{Value of alpha.}
+}
+\references{
+Oh, Hee-Seok, Thomas CM Lee, and Douglas W. Nychka. "Fast nonparametric quantile regression with arbitrary smoothing methods." Journal of Computational and Graphical Statistics 20.2 (2011): 510-526.
+}
+\author{
+Doug Nychka
+}
+
+\seealso{
+qsreg
+}
+\examples{
+
+data(ozone2)
+x<- ozone2$lon.lat
+y<- ozone2$y[16,]
+
+
+
+# Smoothing fixed at 50 df
+ look1<- QTps( x,y, psi.scale= 15, df= 50)
+
+\dontrun{
+# Least squares spline (because scale is so large)
+ look2<- QTps( x,y, psi.scale= 100, df= 50)
+#
+ y.outlier<- y
+# add in a huge outlier.
+ y.outlier[58]<- 1e5
+ look.outlier1<- QTps( x,y.outlier, psi.scale= 15, df= 50)
+# least squares spline.
+ look.outlier2<- QTps( x,y.outlier, psi.scale=100 , df= 50)
+#
+ set.panel(2,2)
+ surface( look1)
+ title("robust spline")
+ surface( look2)
+ title("least squares spline")
+ surface( look.outlier1, zlim=c(0,250))
+ title("robust spline w/outlier")
+ points( rbind(x[58,]), pch="+")
+ surface( look.outlier2, zlim=c(0,250))
+ title("least squares spline w/outlier")
+ points( rbind(x[58,]), pch="+")
+ set.panel()
+}
+# some quantiles
+look50 <- QTps( x,y, psi.scale=.5)
+look75 <- QTps( x,y,f.start= look50$fitted.values, alpha=.75)
+
+
+# a simulated example that finds some different quantiles.
+\dontrun{
+set.seed(123)
+N<- 400
+x<- matrix(runif( N), ncol=1)
+true.g<- x *(1-x)*2
+true.g<- true.g/ mean( abs( true.g))
+y<- true.g + .2*rnorm( N )
+
+look0 <- QTps( x,y, psi.scale=10, df= 15)
+look50 <- QTps( x,y, df=15)
+look75 <- QTps( x,y,f.start= look50$fitted.values, df=15, alpha=.75)
+}
+
+\dontrun{
+# this example tests the quantile estimate by Monte Carlo
+# by creating many replicate point to increase the sample size.
+# Replicate points are used because the computations for the
+# spline are dominated by the number of unique locations not the
+# total number of points.
+set.seed(123)
+N<- 80
+M<- 200
+x<- matrix( sort(runif( N)), ncol=1)
+x<- matrix( rep( x[,1],M), ncol=1)
+
+true.g<- x *(1-x)*2
+true.g<- true.g/ mean( abs( true.g))
+errors<- .2*(rexp( N*M) -1)
+y<- c(matrix(true.g, ncol=M, nrow=N) + .2 * matrix( errors, ncol=M, nrow=N))
+
+look0 <- QTps( x,y, psi.scale=10, df= 15)
+look50 <- QTps( x,y, df=15)
+look75 <- QTps( x,y, df=15, alpha=.75)
+
+
+bplot.xy(x,y, N=25)
+xg<- seq(0,1,,200)
+lines( xg, predict( look0, x=xg), col="red")
+lines( xg, predict( look50, x=xg), col="blue")
+lines( xg, predict( look75, x=xg), col="green")
+}
+\dontrun{
+# A comparison with qsreg
+ qsreg.fit50<- qsreg(rat.diet$t,rat.diet$con, sc=.5)
+ lam<- qsreg.fit50$cv.grid[,1]
+ df<- qsreg.fit50$cv.grid[,2]
+ M<- length(lam)
+ CV<-rep( NA, M)
+ M<- length( df)
+ fhat.old<- NULL
+ for ( k in M:1){
+ temp.obj<- QTps(rat.diet$t,rat.diet$con, f.start=fhat.old, psi.scale=.5, tolerance=1e-6,
+ verbose=FALSE, df= df[k])
+ cat(k, " ")
+ CV[k] <- temp.obj$Qinfo$CV.psuedo
+ fhat.old<- temp.obj$fitted.values
+ }
+ plot( df, CV, type="l", lwd=2)
+# psuedo data estimate
+ points( qsreg.fit50$cv.grid[,c(5,6)], col="blue")
+# alternative CV estimate via reweighted LS
+ points( qsreg.fit50$cv.grid[,c(2,3)], col="red")
+}
+
+
+}
+% Add one or more standard keywords, see file 'KEYWORDS' in the
+% R documentation directory.
+\keyword{spatial}
diff --git a/man/RCMexample.Rd b/man/RCMexample.Rd
new file mode 100644
index 0000000..50a3e57
--- /dev/null
+++ b/man/RCMexample.Rd
@@ -0,0 +1,76 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{RCMexample}
+\alias{RCMexample}
+\docType{data}
+\title{3-hour precipitation fields from a regional climate model}
+\description{
+These are a few model output fields from the North American Regional Climate
+ Change and Assessment Program (NARCCAP).
+ The images are transformed surface precipitation fields simulated by the WRFP
+regional climate model (RCM) over North America forced by observational
+data. The fields are 3 hour precipitation for 8 time periods on January
+1, 1979. The grid is unequally spaced in longitude and latitude because of a projection centered on the model
+domain. The grid points are nearly equally spaced in great circle distance due to this projection. Precipitation is on a log 10 scale where values smaller than
+4.39e-5 ( the .87 quantile) have been set to this value. Longitudes have
+ been shifted from the original coordinates (0-360) to the range (-180,180) that
+is assumed by the R \code{map} function.
+}
+\usage{data(RCMexample)}
+\format{
+ The format is a list of three arrays:
+\itemize{
+
+\item x: 123x101 matrix of the longitude locations
+\item y: 123x101 matrix of the latitude locations
+\item z: 123x101x8 array of transformed precipitation
+}
+
+Spatial units are degrees with longitude being -180,180 with the prime meridian at 0.
+Precipitation is log 10 of cm / 3 hour period.
+}
+\details{
+ This is primarily an example of a regular grid that is not equally
+spaced and is due to transforming an equally spaced grid from one map
+projection into longitude latitude coordinates. This model run is one small
+part of an extensive series of numerical experiments, the North American
+Regional Climate Change and Assessment Program (NARCCAP). NARCCAP has
+used 4 global climate models and observational data to supply the
+atmospheric boundary conditions for 6 different regional climate
+models. In the current data the forcing is the observations derived
+from the NCEP reanalysis data and is for January 1, 1979. The full
+simulation runs for 20 years from this starting date. See
+\url{http://www.image.ucar.edu/Data} for more information about these data.
+
+To facilitate an animation of these fields the raw precipitation values
+have been transformed to the log scale with all values below 4.39E-5 cm/3 hours
+set to this lower bound.
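The transformation amounts to the following one-liner (the rawPrecip values here are invented; 4.39e-5 is the lower bound stated above):

```r
# sketch of the thresholded log10 transform described above
rawPrecip <- c(0, 1e-6, 0.01, 0.5)  # made-up precipitation in cm per 3 hours
lowerBound <- 4.39e-5               # the stated .87 quantile cutoff
z <- log10(pmax(rawPrecip, lowerBound))
```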
+
+}
+\examples{
+data(RCMexample)
+# second time period
+
+image.plot( RCMexample$x, RCMexample$y, RCMexample$z[,,2])
+world( add=TRUE, lwd=2, col="grey")
+
+}
+\keyword{datasets}
diff --git a/man/REML.test.Rd b/man/REML.test.Rd
new file mode 100644
index 0000000..7405d80
--- /dev/null
+++ b/man/REML.test.Rd
@@ -0,0 +1,252 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{REML.test}
+\Rdversion{1.1}
+\alias{REML.test}
+\alias{MLE.Matern}
+\alias{MLE.Matern.fast}
+\alias{MLE.objective.fn}
+\alias{MaternGLS.test}
+\alias{MaternGLSProfile.test}
+\alias{MaternQR.test}
+\alias{MaternQRProfile.test}
+\title{
+Maximum Likelihood estimates for some Matern covariance parameters.
+}
+\description{
+For a fixed smoothness (shape) parameter these functions provide
+different ways of estimating and testing restricted and profile
+likelihoods for the Matern covariance parameters. \code{MLE.Matern}
+is a simple function that finds the restricted maximum likelihood
+(REML) estimates of the sill, nugget and range parameters (\code{rho,
+sigma2 and theta}) of the Matern covariance functions. The remaining
+functions are primarily for testing.
+}
+\usage{
+
+
+MLE.Matern(x, y, smoothness, theta.grid = NULL, ngrid = 20,
+ verbose = FALSE, niter = 25, tol = 1e-05,
+ Distance = "rdist", m = 2, Dmax = NULL, ...)
+
+MLE.Matern.fast(x, y, smoothness, theta.grid = NULL, ngrid=20, verbose=FALSE,
+ m=2, ...)
+MLE.objective.fn( ltheta,info, value=TRUE)
+
+MaternGLSProfile.test(x, y, smoothness = 1.5, init = log(c(0.05,1)))
+MaternGLS.test(x, y, smoothness = 1.5, init = log(c(1, 0.2, 0.1)))
+MaternQR.test (x, y, smoothness = 1.5, init = log(c(1, 0.2, 0.1)))
+MaternQRProfile.test (x, y, smoothness = 1.5, init = log(c(1)))
+
+REML.test(x, y, rho, sigma2, theta, nu = 1.5)
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+
+ \item{Dmax}{ Maximum distance for grid used to evaluate the fitted
+ covariance function.}
+
+ \item{Distance}{Distance function used in finding covariance.}
+
+ \item{x}{ A matrix of spatial locations with rows indexing location
+ and columns the dimension (e.g. longitude/latitude)}
+
+\item{y}{ Spatial observations}
+
+ \item{smoothness}{Value of the Matern shape parameter.}
+
+ \item{theta.grid}{ Grid of theta parameter values to use for the grid
+ search in maximizing the likelihood. The default is to do an initial
+ grid search on ngrid points with the range at the .03 and .97
+ quantiles of the pairwise distances. If only two points are passed
+ then this is used as the range for a sequence of ngrid points.}
+
+ \item{ngrid}{Number of points in grid search.}
+
+\item{init}{Initial values of the parameters for optimization. For the
+first three functions these are in the order rho, theta, sigma2 and on
+a log scale. For MaternQRProfile.test the initial value is just
+log(theta). }
+
+\item{verbose}{If TRUE prints more information.}
+
+\item{rho}{ Marginal variance of Matern process (the "sill") }
+
+\item{sigma2}{Variance of measurement error (the "nugget")}
+
+\item{theta}{Scale parameter (the "range")}
+
+\item{nu}{Smoothness parameter}
+
+
+\item{ltheta}{ log of range parameter}
+
+\item{info}{A list with components \code{x,y, smoothness, ngrid} that
+pass the information to the optimizer. See details below.}
+
+\item{value}{If TRUE only reports minus log Profile likelihood with
+profile on the range parameter. If FALSE returns a list of
+information.}
+
+\item{m}{Polynomial of degree (m-1) will be included in model as a
+fixed part.}
+
+\item{niter}{Maximum number of iterations in the golden section search.}
+
+\item{tol}{Tolerance for convergence in golden section search.}
+
+\item{\dots}{Additional arguments that are passed to the Krig function
+in evaluating the profile likelihood.}
+
+}
+
+\details{
+ \code{MLE.Matern} is a simple function to find the maximum likelihood
+estimates using the restricted and profiled likelihood that is
+intrinsic to the computations in \code{Krig}. The idea is that the
+likelihood is concentrated to the parameters lambda and theta (where
+lambda = sigma2/rho). For fixed theta this is maximized over
+lambda using \code{Krig}, and thus concentrates the likelihood on
+theta. The final maximization over theta is implemented as a golden
+section search and assumes a convex function. All that is needed is
+three theta grid points where the middle point has a larger
+likelihood than the endpoints. In practice the theta grid defaults to
+20 points equally spaced between the .03 and .97 quantiles of the
+distribution of the pairwise distances. The likelihood is evaluated at
+these points and a possible triple is identified. If none exists from
+the grid search the function returns with NAs for the parameter
+estimates. Note that due to the setup of the golden section search
+the computation actually minimizes minus the log likelihood.
+\code{MLE.Matern.fast} is a similar function but replaces the
+optimization step computed by Krig with a tighter set of code in the
+function \code{MLE.objective.fn}. See also \code{mKrigMLEGrid} for an
+alternative and streamlined function using \code{mKrig} rather than
+\code{Krig}.
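The golden section step can be sketched as follows; this is a generic minimizer (not the internal fields code) applied to minus a made-up convex "log likelihood" with its minimum at theta = 2:

```r
# generic golden section search for the minimum of a convex 1-d function
goldenSection <- function(f, lower, upper, tol = 1e-8) {
    g <- (sqrt(5) - 1)/2                  # golden ratio conjugate
    x1 <- upper - g * (upper - lower)
    x2 <- lower + g * (upper - lower)
    while (upper - lower > tol) {
        if (f(x1) < f(x2)) {              # minimum lies in [lower, x2]
            upper <- x2; x2 <- x1
            x1 <- upper - g * (upper - lower)
        } else {                          # minimum lies in [x1, upper]
            lower <- x1; x1 <- x2
            x2 <- lower + g * (upper - lower)
        }
    }
    (lower + upper)/2
}
# e.g. minus a concave log likelihood, minimized at theta = 2:
goldenSection(function(theta) (theta - 2)^2, 0, 5)   # approximately 2
```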
+}
+\value{
+For MLE.Matern (and MLE.Matern.fast)
+
+\item{smoothness}{Value of the smoothness parameter}
+
+\item{pars}{MLE for rho, theta and sigma}
+
+\item{REML}{Value of minus the log restricted profile likelihood at
+the maximum}
+
+\item{trA}{Effective degrees of freedom in the predicted surface based
+on the MLE parameters.}
+
+\item{REML.grid}{Matrix with values of theta and the log likelihood
+from the initial grid search.}
+
+}
+\author{
+Doug Nychka
+}
+\note{
+See the scripts REMLest.test.R and Likelihood.test.R in the tests directory to
+see how these functions are used to check the likelihood expressions.
+}
+\examples{
+# Just look at one day from the ozone2
+data(ozone2)
+
+out<- MLE.Matern( ozone2$lon.lat,ozone2$y[16,],1.5, ngrid=8)
+plot( out$REML.grid)
+points( out$pars[2], out$REML, cex=2)
+xline( out$pars[2], col="blue", lwd=2)
+\dontrun{
+# to get a finer grid on initial search:
+out<- MLE.Matern( ozone2$lon.lat,ozone2$y[16,],1.5,
+ theta.grid=c(.3,2), ngrid=40)
+
+# simulated data 200 points uniformly distributed
+set.seed( 123)
+x<- matrix( runif( 2*200), ncol=2)
+n<- nrow(x)
+rho= 2.0
+sigma= .05
+theta=.5
+
+Cov.mat<- rho* Matern( rdist(x,x), smoothness=1.0, range=theta)
+A<- chol( Cov.mat)
+gtrue<- t(A) \%*\% rnorm(n)
+gtrue<- c( gtrue)
+err<- rnorm(n)*sigma
+y<- gtrue + err
+out0<- MLE.Matern( x,y,smoothness=1.0) # the bullet proof version
+# the MLEs and -log likelihood at maximum
+print( out0$pars)
+print( out0$REML)
+
+out<- MLE.Matern.fast( x,y, smoothness=1.0) # for the impatient
+# the MLEs:
+print( out$pars)
+print( out$REML)
+
+
+# MLE for fixed theta (actually the MLE from out0)
+# that uses MLE.objective.fn directly
+info<- list( x=x,y=y,smoothness=1.0, ngrid=80)
+# the MLEs:
+out2<- MLE.objective.fn(log(out0$pars[2]), info, value=FALSE)
+print( out2$pars)
+}
+
+\dontrun{
+# Now back to Midwest ozone pollution ...
+# Find the MLEs for ozone data and evaluate the Kriging surface.
+ data(ozone2)
+ out<- MLE.Matern.fast( ozone2$lon.lat,ozone2$y[16,],1.5)
+#use these parameters to fit surface ....
+ lambda.MLE<- out$pars[3]/out$pars[1]
+ out2<- Krig( ozone2$lon.lat,ozone2$y[16,] , Covariance="Matern",
+ theta=out$pars[2], smoothness=1.5, lambda= lambda.MLE)
+ surface( out2) # uses default lambda -- which is the right one.
+
+# here is another way to do this where the new lambda is given in
+# the predict step
+ out2<- Krig( ozone2$lon.lat,ozone2$y[16,] , Covariance="Matern",
+ theta=out$pars[2], smoothness=1.5)
+# The default lambda is that found by GCV
+# predict on a grid but use the MLE value for lambda:
+ out.p<- predictSurface(out2, lambda= lambda.MLE)
+ surface(out.p) # same surface!
+}
+
+# One could also use mKrig with a fixed lambda to compute the surface.
+
+\dontrun{
+# looping through all the days of the ozone data set.
+ data( ozone2)
+ x<- ozone2$lon.lat
+ y<- ozone2$y
+ out.pars<- matrix( NA, ncol=3, nrow=89)
+
+ for ( k in 1:89){
+ hold<- MLE.Matern.fast( x,c(y[k,]), 1.5)$pars
+ cat( "day", k," :", hold, fill=TRUE)
+ out.pars[k,]<- hold }
+}
+
+}
+\keyword{spatial}
+
diff --git a/man/RMprecip.Rd b/man/RMprecip.Rd
new file mode 100644
index 0000000..1cf5886
--- /dev/null
+++ b/man/RMprecip.Rd
@@ -0,0 +1,165 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{RMprecip}
+\alias{RMprecip}
+\alias{RMelevation}
+\alias{PRISMelevation}
+\title{
+Monthly total precipitation (mm) for August 1997 in the Rocky Mountain
+Region and some gridded 4km elevation data sets (m).
+}
+\description{
+\code{RMprecip} is a useful spatial data set of moderate size consisting of 806
+locations. See www.image.ucar.edu/Data for the source of these data.
+\code{PRISMelevation} and \code{RMelevation} are gridded elevations for the
+continental US and Rocky Mountain region at 4km resolution.
+Note that the gridded elevations from the PRISM data product are
+different from the exact station elevations. (See example below.)
+}
+
+\format{
+The data set \code{RMprecip} is a list containing the following components:
+
+\describe{
+\item{x}{
+Longitude-latitude positions of monitoring stations. Row names are station id codes consistent
+with the US Cooperative observer network.
+The ranges for these coordinates are [-111, -99] for longitude and [35,45] for latitude.
+}
+\item{elev}{
+Station elevation in meters.
+}
+\item{y}{
+Monthly total precipitation in millimeters for August 1997.
+}
+}
+
+The data sets
+ \code{PRISMelevation} and
+ \code{RMelevation} are lists
+in the usual R grid format for images and contouring.
+
+They have the following components:
+\describe{
+\item{x}{
+Longitude grid at approximately 4km resolution}
+\item{y}{
+Latitude grid at approximately 4km resolution}
+\item{z}{
+Average elevation for grid cell in meters
+}
+
+}
+
+These elevations and the companion grid formed the basis for the
+103-Year High-Resolution Precipitation Climate Data Set for the
+Conterminous United States
+\url{ftp://ftp.ncdc.noaa.gov/pub/data/prism100} archived at the National
+Climate Data Center. This work was primarily authored by Chris Daly
+\url{www.prism.oregonstate.edu} and his PRISM group, with some
+contribution from the Geophysical Statistics Project at NCAR.
+The product is an interpolation of the observational data to a 4km grid
+that takes into account topography such as elevation and aspect.
+
+}
+\details{
+The binary file \code{RData.USmonthlyMet.bin}
+can be downloaded from
+\url{http://www.image.ucar.edu/Data/US.monthly.met}
+and also includes information on its source.
+
+\preformatted{
+# explicit source code to create the RMprecip data
+dir <- "" # include path to data file
+load(paste(dir, "RData.USmonthlyMet.bin", sep="/"))
+#year.id<- 1963- 1895
+year.id<- 103
+#pptAUG63<- USppt[ year.id,8,]
+loc<- cbind(USpinfo$lon, USpinfo$lat)
+xr<- c(-111, -99)
+yr<- c( 35, 45)
+station.subset<- (loc[,1]>= xr[1]) & (loc[,1] <= xr[2]) & (loc[,2]>= yr[1]) & (loc[,2]<= yr[2])
+ydata<- USppt[ year.id,8,station.subset]
+ydata <- ydata*10 # cm -> mm conversion
+xdata<- loc[station.subset,]
+dimnames(xdata)<- list( USpinfo$station.id[station.subset], c( "lon", "lat"))
+xdata<- data.frame( xdata)
+good<- !is.na(ydata)
+ydata<- ydata[good]
+xdata<- xdata[good,]
+
+test.for.zero.flag<- 1
+test.for.zero( unlist(RMprecip$x), unlist(xdata), tag="locations")
+test.for.zero( ydata, RMprecip$y, "values")
+}
+}
+\examples{
+# this data set was created from the
+# historical data taken from
+# Observed monthly precipitation, min and max temperatures for the coterminous US
+# 1895-1997
+# NCAR_pinfill
+# see the Geophysical Statistics Project datasets page for the supporting functions
+# and details.
+
+# plot
+quilt.plot(RMprecip$x, RMprecip$y)
+US( add=TRUE, col=2, lty=2)
+
+# comparison of station elevations with PRISM gridded values
+
+data(RMelevation)
+
+interp.surface( RMelevation, RMprecip$x)-> test.elev
+
+plot( RMprecip$elev, test.elev, xlab="Station elevation",
+ylab="Interpolation from PRISM grid")
+abline( 0,1,col="blue")
+
+# some differences with high elevations probably due to complex
+# topography!
+
+#
+# view of the Rockies looking from the Southeast
+
+save.par<- par(no.readonly=TRUE)
+
+par( mar=c(0,0,0,0))
+
+# fancy use of persp with shading and lighting.
+persp( RMelevation, theta=75, phi= 15,
+ box=FALSE, axes=FALSE, xlab="", ylab="",
+ border=NA,
+ shade=.95, lphi= 10, ltheta=80,
+ col= "wheat4",
+ scale=FALSE, expand=.00025)
+
+# reset graphics parameters and a more conventional image plot.
+par( save.par)
+image.plot(RMelevation, col=topo.colors(256))
+US( add=TRUE, col="grey", lwd=2)
+title("PRISM elevations (m)")
+}
+\keyword{datasets}
+% docclass is data
+% Converted by Sd2Rd version 1.21.
diff --git a/man/Tps.Rd b/man/Tps.Rd
new file mode 100644
index 0000000..a48b6e3
--- /dev/null
+++ b/man/Tps.Rd
@@ -0,0 +1,489 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{Tps}
+\alias{Tps}
+\alias{fastTps}
+\title{
+ Thin plate spline regression
+}
+\description{
+Fits a thin plate spline surface to irregularly spaced data. The
+smoothing parameter is chosen by generalized cross-validation. The assumed
+model is additive, Y = f(X) + e, where f(X) is a d dimensional surface.
+This function also works for just a single dimension and is a special
+case of a spatial process estimate (Kriging). A "fast" version of this
+function uses a compactly supported Wendland covariance and computes the
+estimate for a fixed smoothing parameter.
+}
+\usage{
+Tps(x, Y, m = NULL, p = NULL, scale.type = "range", lon.lat = FALSE,
+ miles = TRUE, method = "GCV", GCV = TRUE, ...)
+
+fastTps(x, Y, m = NULL, p = NULL, theta, lon.lat=FALSE,
+ find.trA = TRUE, lambda=0, ...)
+
+}
+\arguments{
+%To be helpful, a more complete list of arguments are described that are the
+%same as those for the Krig function.
+\item{x}{
+Matrix of independent variables. Each row is a location or a set of
+independent covariates.
+}
+\item{Y}{
+Vector of dependent variables.
+}
+\item{m}{
+A polynomial function of degree (m-1) will be
+included in the model as the drift (or spatial trend) component.
+Default is the value such that 2m-d is greater than zero where d is the
+dimension of x.
+}
+
+\item{p}{
+Polynomial power for Wendland radial basis functions. Default is 2m-d
+where d is the dimension of x.
+}
+
+\item{scale.type}{
+The independent variables and knots are scaled to the specified
+scale.type.
+By default the scale type is "range", whereby
+the locations are transformed
+ to the interval (0,1) by forming (x-min(x))/range(x) for each x.
+Scale type of "user" allows specification of an x.center and x.scale by
+the
+user. The default for "user" is mean 0 and standard deviation 1. Scale
+type of "unscaled" does not scale the data.
+}
+\item{theta}{The tapering range that is passed to the Wendland compactly
+supported covariance. The covariance (i.e. the radial basis function) is
+zero beyond range theta. The larger theta the closer this model will approximate the
+standard thin plate spline.}
+
+\item{lon.lat}{If TRUE locations are interpreted as longitude and
+latitude and great circle distance is used to find distances among
+locations. The theta scale parameter for \code{fast.Tps} (setting the
+compact support of the Wendland function) in this case is in units of
+miles (see example and caution below). }
+
+ \item{method}{ Determines what "smoothing" parameter should be
+ used. The default is standard GCV. Other choices are:
+ GCV.model, GCV.one, RMSE, pure error, and REML. The differences
+ are explained in the Krig help file.}
+ \item{GCV}{If TRUE the decompositions are done to efficiently evaluate the estimate, GCV function and likelihood at multiple values of lambda. }
+\item{miles}{If TRUE great circle distances are in miles if FALSE
+distances are in kilometers}
+
+
+\item{lambda}{Smoothing parameter, the ratio of error variance to
+ process variance; the default is zero, which corresponds to
+ interpolation. See fastTpsMLE to estimate this parameter
+ from the data.}
+
+\item{find.trA}{If TRUE will estimate the effective degrees of freedom
+using a simple Monte Carlo method. This will add to the computational
+burden by approximately \code{NtrA} solutions of the linear system but
+the Cholesky decomposition is reused.}
+
+\item{\dots}{For \code{Tps} any argument that is valid for the
+\code{Krig} function. Some of the main ones are listed below.
+
+For \code{fastTps} any argument that is suitable for the \code{mKrig}
+function see help on mKrig for these choices.
+
+Arguments for Tps:
+\describe{
+ \item{lambda}{ Smoothing parameter that is the ratio of the error
+ variance (sigma**2) to the scale parameter of the covariance
+ function. If omitted this is estimated by GCV. }
+
+ \item{Z}{Linear covariates to be included in fixed part of the model
+ that are distinct from the default low order polynomial in
+ \code{x}}
+
+ \item{df}{ The effective number of parameters for the fitted
+ surface. Conversely, N- df, where N is the total number of
+ observations is the degrees of freedom associated with the
+ residuals. This is an alternative to specifying lambda and much
+ more interpretable.}
+
+ \item{cost}{ Cost value used in GCV criterion. Corresponds to a
+ penalty for increased number of parameters. The default is 1.0 and
+ corresponds to the usual GCV.}
+
+ \item{weights}{ Weights are proportional to the reciprocal variance
+ of the measurement error. The default is no weighting i.e. vector
+ of unit weights. }
+
+ \item{nstep.cv}{ Number of grid points for minimum GCV search. }
+
+ \item{x.center}{ Centering values are subtracted from each column of
+ the x matrix. Must have scale.type="user".}
+
+ \item{x.scale}{ Scale values that divided into each column after
+ centering. Must have scale.type="user".}
+
+ \item{rho}{Scale factor for covariance. }
+
+ \item{sigma2}{ Variance of errors or if weights are not equal to 1
+ the variance is sigma**2/weight.}
+
+
+
+ \item{verbose}{ If true will print out all kinds of intermediate
+ stuff. }
+
+ \item{mean.obj}{ Object to predict the mean of the spatial
+ process. }
+
+ \item{sd.obj}{ Object to predict the marginal standard deviation of
+ the spatial process. }
+
+ \item{null.function}{An R function that creates the matrices for
+ the null space model. The default is fields.mkpoly, an R
+ function that creates a polynomial regression matrix with all
+ terms up to degree m-1. (See Details) }
+
+ \item{offset}{ The offset to be used in the GCV
+ criterion. Default is 0. This would be used when Krig/Tps is
+ part of a backfitting algorithm and the offset has to be
+ included to reflect other model degrees of freedom. }
+}
+}
+
+
+
+}
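The "range" scaling described in the \code{scale.type} argument is simple to sketch in base R. The function below is an illustrative re-implementation of the stated formula (x - min(x))/range(x), not the actual fields scaling code:

```r
# Sketch of scale.type = "range": map each coordinate to [0,1]
# using (x - min(x)) / (max(x) - min(x)). Illustrative only.
rangeScale <- function(x) (x - min(x)) / diff(range(x))

rangeScale(c(2, 5, 11))  # 0.0000000 0.3333333 1.0000000
```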
+
+\value{
+A list of class Krig. This includes the
+fitted values, the predicted surface evaluated at the
+observation locations, and the residuals. The results of the grid
+search minimizing the generalized cross validation function are
+returned in gcv.grid. Note that the GCV/REML optimization is
+done even if lambda or df is given.
+Please see the documentation on Krig for details of the returned
+arguments.
+}
+\details{
+Both of these functions are special cases of using the
+\code{Krig} and \code{mKrig} functions. See the help on each of these
+for more information on the calling arguments and what is returned.
+
+A thin plate spline is the result of minimizing the residual sum of
+squares subject to a constraint that the function have a certain
+level of smoothness (or roughness penalty). Roughness is
+quantified by the integral of squared m-th order derivatives. For one
+dimension and m=2 the roughness penalty is the integrated square of
+the second derivative of the function. For two dimensions the
+roughness penalty is the integral of
+
+ (Dxx(f))**2 + 2(Dxy(f))**2 + (Dyy(f))**2
+
+(where Duv denotes the second partial derivative with respect to u
+and v.) Besides controlling the order of the derivatives, the value of
+m also determines the base polynomial that is fit to the data.
+The degree of this polynomial is (m-1).
+
+The smoothing parameter controls the amount that the data is
+smoothed. In the usual form this is denoted by lambda, the Lagrange
+multiplier of the minimization problem. Although this is an awkward
+scale, lambda =0 corresponds to no smoothness constraints and the data
+is interpolated. lambda=infinity corresponds to just fitting the
+polynomial base model by ordinary least squares.
+
+This estimator is implemented by passing the right generalized covariance
+function based on radial basis functions to the more general function
+Krig. One advantage of this implementation is that once a Tps/Krig object
+is created the estimator can be found rapidly for other data and smoothing
+parameters provided the locations remain unchanged. This makes simulation
+within R efficient (see example below). Tps does not currently support the
+knots argument where one can use a reduced set of basis functions. This is
+mainly to simplify the code and a good alternative using knots would be to use a
+valid covariance from the Matern family and a large range parameter.
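Since the estimator is built on radial basis functions, a tiny base R sketch may help fix ideas. The r^2 log(r) form below is the textbook generalized covariance for the 2-d, m = 2 case and is assumed here for illustration rather than copied from the fields internals:

```r
# Classical 2-d thin plate spline radial basis for m = 2:
# proportional to r^2 * log(r), taken to be 0 at r = 0 by continuity.
# An illustrative sketch, not the exact fields generalized covariance code.
tpsBasis <- function(r) ifelse(r > 0, r^2 * log(r), 0)

tpsBasis(0)  # 0 by the continuity convention
tpsBasis(1)  # 0, since log(1) = 0
```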
+
+CAUTION about \code{lon.lat=TRUE}: The option to use great circle distance
+ to define the radial basis functions (\code{lon.lat=TRUE}) is very useful
+ for small geographic domains where the spherical geometry is well
+ approximated by a plane. However, for large domains the spherical
+ distortion may be large enough that the basis functions no longer define
+ a positive definite system and Tps will report a numerical error. An
+ alternative is to switch to a three dimensional thin plate spline with
+ the locations being the direction cosines. This will give approximate
+ great circle distances for locations that are close together and the
+ numerical methods will always involve positive definite matrices.
+
+Here is an example using this idea for \code{RMprecip} and also some
+examples of building grids and evaluating the Tps results on them:
+\preformatted{
+# a useful function:
+ dircos<- function(x1){
+ coslat1 <- cos((x1[, 2] * pi)/180)
+ sinlat1 <- sin((x1[, 2] * pi)/180)
+ coslon1 <- cos((x1[, 1] * pi)/180)
+ sinlon1 <- sin((x1[, 1] * pi)/180)
+ cbind(coslon1*coslat1, sinlon1*coslat1, sinlat1)}
+# fit in 3-d to direction cosines
+ out<- Tps(dircos(RMprecip$x),RMprecip$y)
+ xg<-make.surface.grid(fields.x.to.grid(RMprecip$x))
+ fhat<- predict( out, dircos(xg))
+# coerce to image format from prediction vector and grid points.
+ out.p<- as.surface( xg, fhat)
+ surface( out.p)
+# compare to the automatic
+ out0<- Tps(RMprecip$x,RMprecip$y, lon.lat=TRUE)
+ surface(out0)
+}
+
+The function \code{fastTps} is really a convenient wrapper function that
+calls \code{mKrig} with the Wendland covariance function. This is
+experimental and some care needs to be exercised in specifying the taper
+range and power (\code{p}), which describes the polynomial behavior of
+the Wendland at the origin. Note that unlike Tps the locations are not
+scaled to unit range and this can cause havoc in smoothing problems with
+variables in very different units. So rescaling the locations \code{ x<- scale(x)}
+is a good idea for putting the variables on a common scale for smoothing.
+This function does have the potential to approximate estimates of Tps
+for very large spatial data sets. See \code{wendland.cov} and help on
+the SPAM package for more background.
+Also, the function \code{predictSurface.fastTps} has been made more efficient for the
+case of k=2 and m=2.
+
+See also the mKrig function for handling larger data sets and also for an example
+of combining Tps and mKrig for evaluation on a huge grid.
+
+ }
+
+\section{References}{
+See "Nonparametric Regression and Generalized Linear Models"
+by Green and Silverman.
+See "Additive Models" by Hastie and Tibshirani.
+}
+\seealso{
+Krig, summary.Krig, predict.Krig, predictSE.Krig, predictSurface, predictSurface.fastTps, plot.Krig, mKrig
+\code{\link{surface.Krig}},
+\code{\link{sreg}}
+}
+\examples{
+#2-d example
+
+fit<- Tps(ChicagoO3$x, ChicagoO3$y) # fits a surface to ozone measurements.
+
+set.panel(2,2)
+plot(fit) # four diagnostic plots of fit and residuals.
+set.panel()
+
+# summary of fit and estimates of lambda the smoothing parameter
+summary(fit)
+
+surface( fit) # Quick image/contour plot of GCV surface.
+
+# NOTE: the predict function is quite flexible:
+
+ look<- predict( fit, lambda=2.0)
+# evaluates the estimate at lambda =2.0 _not_ the GCV estimate
+# it does so very efficiently from the Krig fit object.
+
+ look<- predict( fit, df=7.5)
+# evaluates the estimate at the lambda values such that
+# the effective degrees of freedom is 7.5
+
+
+# compare this to fitting a thin plate spline with
+# lambda chosen so that there are 7.5 effective
+# degrees of freedom in estimate
+# Note that the GCV function is still computed and minimized
+# but the lambda value used corresponds to 7.5 df.
+
+fit1<- Tps(ChicagoO3$x, ChicagoO3$y,df=7.5)
+
+set.panel(2,2)
+plot(fit1) # four diagnostic plots of fit and residuals.
+ # GCV function (lower left) has vertical line at 7.5 df.
+set.panel()
+
+# The basic matrix decompositions are the same for
+# both fit and fit1 objects.
+
+# predict( fit1) is the same as predict( fit, df=7.5)
+# predict( fit1, lambda= fit$lambda) is the same as predict(fit)
+
+
+# predict onto a grid that matches the ranges of the data.
+
+out.p<-predictSurface( fit)
+image( out.p)
+
+# the surface function (e.g. surface( fit)) essentially combines
+# the two steps above
+
+# predict at different effective
+# number of parameters
+out.p<-predictSurface( fit,df=10)
+
+\dontrun{
+# predicting on a grid along with a covariate
+ data( COmonthlyMet)
+# predicting average daily minimum temps for spring in Colorado
+# NOTE: to create a 4km elevation grid:
+# data(PRISMelevation); CO.elev1 <- crop.image(PRISMelevation, CO.loc )
+# then use same grid for the predictions: CO.Grid1<- CO.elev1[c("x","y")]
+ obj<- Tps( CO.loc, CO.tmin.MAM.climate, Z= CO.elev)
+ out.p<-predictSurface( obj,
+ grid.list=CO.Grid, ZGrid= CO.elevGrid)
+ image.plot( out.p)
+ US(add=TRUE, col="grey")
+ contour( CO.elevGrid, add=TRUE, levels=c(2000), col="black")
+}
+\dontrun{
+#A 1-d example with confidence intervals
+ out<-Tps( rat.diet$t, rat.diet$trt) # lambda found by GCV
+ out
+ plot( out$x, out$y)
+ xgrid<- seq( min( out$x), max( out$x),,100)
+ fhat<- predict( out,xgrid)
+ lines( xgrid, fhat)
+ SE<- predictSE( out, xgrid)
+ lines( xgrid,fhat + 1.96* SE, col="red", lty=2)
+ lines(xgrid, fhat - 1.96*SE, col="red", lty=2)
+
+#
+# compare to the ( much faster) B spline algorithm
+# sreg(rat.diet$t, rat.diet$trt)
+
+# Here is a 1-d example with 95 percent CIs where sreg would not
+# work:
+# sreg would give the right estimate here but not the right CI's
+ x<- seq( 0,1,,8)
+ y<- sin(3*x)
+ out<-Tps( x, y) # lambda found by GCV
+ plot( out$x, out$y)
+ xgrid<- seq( min( out$x), max( out$x),,100)
+ fhat<- predict( out,xgrid)
+ lines( xgrid, fhat, lwd=2)
+ SE<- predictSE( out, xgrid)
+ lines( xgrid,fhat + 1.96* SE, col="red", lty=2)
+ lines(xgrid, fhat - 1.96*SE, col="red", lty=2)
+}
+
+# More involved example adding a covariate to the fixed part of model
+\dontrun{
+set.panel( 1,3)
+# without elevation covariate
+ out0<-Tps( RMprecip$x,RMprecip$y)
+ surface( out0)
+ US( add=TRUE, col="grey")
+
+# with elevation covariate
+ out<- Tps( RMprecip$x,RMprecip$y, Z=RMprecip$elev)
+# NOTE: out$d[4] is the estimated elevation coefficient
+# it is easy to get the smooth surface separate from the elevation.
+ out.p<-predictSurface( out, drop.Z=TRUE)
+ surface( out.p)
+ US( add=TRUE, col="grey")
+# if the estimate is of high resolution you might get by with
+# a simple discretization -- but it does not work in this case!
+ quilt.plot( out$x, out$fitted.values)
+#
+# the exact way to do this is evaluate the estimate
+# on a grid where you also have elevations
+# An elevation DEM from the PRISM climate data product (4km resolution)
+ data(RMelevation)
+ grid.list<- list( x=RMelevation$x, y= RMelevation$y)
+ fit.full<- predictSurface( out, grid.list, ZGrid= RMelevation)
+# this is the linear fixed part of the second spatial model:
+# lon,lat and elevation
+ fit.fixed<- predictSurface( out, grid.list, just.fixed=TRUE, ZGrid= RMelevation)
+# This is the smooth part but also with the linear lon lat terms.
+ fit.smooth<-predictSurface( out, grid.list, drop.Z=TRUE)
+#
+ set.panel( 3,1)
+
+ fit0<- predictSurface( out0, grid.list)
+ image.plot( fit0)
+ title(" first spatial model (w/o elevation)")
+ image.plot( fit.fixed)
+ title(" fixed part of second model (lon,lat,elev linear model)")
+ US( add=TRUE)
+ image.plot( fit.full)
+ title("full prediction second model")
+ set.panel()
+}
+###
+### fast Tps
+# m=2 p= 2m-d= 2
+#
+# Note: theta =3 degrees is a very generous taper range.
+# Use some trial theta values with fields.rdist.near to determine
+# a useful taper. Some empirical studies suggest that in the
+# interpolation case in 2 d the taper should be large enough to
+# include about 20 nonzero nearest neighbors for every location.
+
+ fastTps( RMprecip$x,RMprecip$y,m=2,lambda= 1e-2, theta=3.0) -> out2
+
+# note that fastTps produces an mKrig object so one can use all the
+# the overloaded functions that are defined for the mKrig class.
+# summary of what happened note estimate of effective degrees of
+# freedom
+ print( out2)
+
+\dontrun{
+set.panel( 1,2)
+surface( out2)
+
+#
+# now use great circle distance for this smooth
+# note the different "theta" for the taper support ( there are
+# about 70 miles in one degree of latitude).
+#
+fastTps( RMprecip$x,RMprecip$y,m=2,lambda= 1e-2,lon.lat=TRUE, theta=210) -> out3
+print( out3) # note the effective degrees of freedom is different.
+surface(out3)
+
+set.panel()
+}
+
+\dontrun{
+#
+# simulation reusing Tps/Krig object
+#
+fit<- Tps( rat.diet$t, rat.diet$trt)
+true<- fit$fitted.values
+N<- length( fit$y)
+temp<- matrix( NA, ncol=50, nrow=N)
+sigma<- fit$shat.GCV
+for ( k in 1:50){
+ysim<- true + sigma* rnorm(N)
+temp[,k]<- predict(fit, y= ysim)
+}
+matplot( fit$x, temp, type="l")
+
+}
+#
+#4-d example
+fit<- Tps(BD[,1:4],BD$lnya,scale.type="range")
+
+# plots fitted surface and contours
+# default is to hold 3rd and 4th fixed at median values
+
+surface(fit)
+
+
+
+}
+\keyword{smooth}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/US.Rd b/man/US.Rd
new file mode 100644
index 0000000..530a872
--- /dev/null
+++ b/man/US.Rd
@@ -0,0 +1,53 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{US}
+\alias{US}
+\title{
+ Plot of the US with state boundaries
+}
+\description{
+Quickly plots medium resolution outlines of the US with the states and
+bodies of water. A simple wrapper for the map function from the maps package.
+}
+\usage{
+US( ...)
+}
+\arguments{
+\item{\dots}{
+These are the arguments that are passed to the map function
+from the maps package.
+}
+}
+\details{
+The older version of this function (fields < 6.7.2) used the FIELDS dataset US.dat for the
+ coordinates. Currently this has been switched to use the maps package.
+}
+\seealso{
+world
+}
+\examples{
+# Draw map in device color # 3
+US( col=3)
+}
+\keyword{hplot}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/US.dat.Rd b/man/US.dat.Rd
new file mode 100644
index 0000000..13388be
--- /dev/null
+++ b/man/US.dat.Rd
@@ -0,0 +1,34 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{US.dat}
+\alias{US.dat}
+\title{
+Outline of coterminous US and states.
+}
+\description{
+This data set is used by the fields function US to draw a map. It is the
+medium resolution outline that is produced by drawing the US from the maps
+package.
+}
+\keyword{datasets}
+% docclass is data
+% Converted by Sd2Rd version 1.21.
diff --git a/man/Wendland.Rd b/man/Wendland.Rd
new file mode 100644
index 0000000..02f6b2c
--- /dev/null
+++ b/man/Wendland.Rd
@@ -0,0 +1,160 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{Wendland}
+\alias{Wendland}
+\alias{Wendland.beta}
+\alias{Wendland2.2}
+\alias{fields.D}
+\alias{fields.pochdown}
+\alias{fields.pochup}
+\alias{wendland.eval}
+%\alias{Wendland.father}
+%\alias{Wendland.mother}
+%\alias{wendland.basis}
+
+
+\title{Wendland family of covariance functions and supporting
+numerical functions}
+
+\description{
+ Computes the compactly supported, stationary Wendland covariance
+function as a function of distance. This family is useful for creating
+sparse covariance matrices.
+
+}
+
+\usage{
+Wendland(d, theta = 1, dimension, k,derivative=0, phi=NA)
+
+Wendland2.2(d, theta=1)
+Wendland.beta(n,k)
+wendland.eval(r, n, k, derivative = 0)
+fields.pochup(q, k)
+fields.pochdown(q, k)
+fields.D(f,name,order = 1)
+
+%Wendland.father(x, theta = 1, dimension = 1, k=3)
+%Wendland.mother(x, theta = 1, dimension = 1, k=3)
+%wendland.basis(x1, x2, theta = 1, V=NULL,
+% k = 3, C = NA, Dist.args = list(method = "euclidean"),
+% spam.format = TRUE, verbose = FALSE, flavor=0)
+
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+\item{d}{Distances between locations. Or for wendland.coef the dimension of
+the locations.}
+\item{theta}{Scale for distances. This is the same as the
+range parameter.}
+\item{dimension}{Dimension of the locations}
+\item{n}{Dimension for computing Wendland polynomial coefficients}
+\item{k}{Order of covariance function.}
+\item{derivative}{Indicates derivative of covariance function}
+\item{phi}{Deprecated argument; will stop with an error if not NA.
+ (Formerly the scale factor to multiply the function. Equivalent to the
+marginal variance or sill if viewed as a covariance function.) }
+\item{r}{ Real value in [0,1] to evaluate Wendland function.}
+\item{q}{Order of Pochhammer symbol}
+\item{f}{Numerical expression to differentiate.}
+\item{name}{Variable with which to take derivative.}
+\item{order}{Order of derivative.}
+
+%\item{x}{Argument for one dimensional basis function}
+%\item{x1}{Two dimensional locations to evaluate the basis functions}
+%\item{x2}{Two dimensional centers that define the basis}
+%\item{C}{Vector or matrix of coefficients to multiply with basis functions}
+%\item{Dist.args}{Arguments to distance function}
+%\item{V}{See explantion in help for \code{Exp.cov}}
+%\item{spam.format}{If TRUE return result in sparse format}
+%\item{verbose}{If TRUE prints out intermediate steps for debugging.}
+%\item{flavor}{Takes values 0:3. Controls type of tensor product:
+%father*father=0, father*mother =1, mother*father = 2, mother*mother =3 }
+
+}
+
+\details{
+ This is the basic function applied to distances and called by the
+\code{wendland.cov} function. It can also be used as the Covariance or
+Taper specifications in the more general
+stationary.cov and stationary.taper.cov functions.
+The proofs and construction of the Wendland family of positive definite functions can be found in the work of Wendland(1995).
+(H. Wendland. Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree. AICM 4 (1995), pp. 389-396.)
+
+The Wendland covariance function is a positive
+polynomial on [0,theta] and zero beyond theta. It is further normalized
+in these fields functions to be 1 at 0. The parameter \code{k} determines
+the smoothness of the covariance at zero. The additional parameter
+\code{n} or \code{dimension} is needed because the property of
+positive definiteness for radial functions depends on the dimension being considered.
+
+The polynomial terms of the Wendland function
+are computed recursively based on the values of \code{k}
+and \code{dimension} in the function \code{wendland.eval}. The matrix of
+coefficients found by \code{Wendland.beta} is used to weight each polynomial term and follows Wendland's original construction of these functions. The recursive definition of the Wendland coefficients depends on Pochhammer symbols akin to binomial coefficients:
+
+\code{fields.pochup(q, k)}
+calculates the Pochhammer symbol for rising factorial q(q+1)(q+2)...(q+k-1)
+
+and
+
+\code{fields.pochdown(q, k)}
+calculates the Pochhammer symbol for falling factorial q(q-1)(q-2)...(q-k+1).
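The two Pochhammer products are easy to sanity check; the functions below are hypothetical base R stand-ins for illustration, not the package implementations (they assume k >= 1):

```r
# Hypothetical base-R versions of the rising and falling factorials
# described above; assumes k >= 1.
pochUp   <- function(q, k) prod(q + 0:(k - 1))  # q(q+1)...(q+k-1)
pochDown <- function(q, k) prod(q - 0:(k - 1))  # q(q-1)...(q-k+1)

pochUp(3, 3)    # 3*4*5 = 60
pochDown(5, 3)  # 5*4*3 = 60
```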
+
+Derivatives are found symbolically using a recursive modification of the base function \code{D} (\code{fields.D}) and then evaluated numerically based on the polynomial form.
+
+
+A specific example of the Wendland family is \code{Wendland2.2} (k=2, dimension=2). This is included mainly for testing but the explicit formula may also be enlightening.
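For reference, the commonly cited closed form for the k = 2, dimension = 2 member is (1-d)^6 (35 d^2 + 18 d + 3)/3 on [0,1] and 0 beyond. The sketch below assumes this matches \code{Wendland2.2} (note the normalization gives 1 at d = 0) and should be checked against the package source:

```r
# Assumed closed form of the k = 2, dimension = 2 Wendland function:
# (1-d)^6 * (35 d^2 + 18 d + 3)/3 on [0,1], zero beyond the support.
wendland22 <- function(d) ifelse(d < 1, (1 - d)^6 * (35*d^2 + 18*d + 3)/3, 0)

wendland22(0)    # 1 (normalized at distance zero)
wendland22(1.5)  # 0 (beyond the support)
```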
+}
+
+\value{
+A vector of the covariances or its derivative.
+
+}
+\author{Doug Nychka, Ling Shen}
+\seealso{ wendland.cov, stationary.taper.cov}
+\examples{
+
+dt<- seq( 0,1.5,, 200)
+
+y<- Wendland( dt, k=2, dimension=2)
+
+plot( dt, y, type="l")
+
+# should agree with
+
+y.test<- Wendland2.2( dt)
+points( dt, y.test)
+
+# second derivative
+plot( dt, Wendland( dt, k=4, dimension=2, derivative=2), type="l")
+
+# a radial basis function using the Wendland; the "knot" is at (.25,.25)
+gl<- list( x= seq( -1,1,,60), y = seq( -1,1,,60) )
+
+bigD<- rdist( make.surface.grid( gl), matrix( c(.25,.25), nrow=1))
+RBF<- matrix(Wendland( bigD, k=2, dimension=2), 60,60)
+
+# perspective with some useful settings for shading.
+persp( gl$x, gl$y, RBF, theta=30, phi=20, shade=.3, border=NA, col="grey90")
+
+
+}
+% Add one or more standard keywords, see file 'KEYWORDS' in the
+% R documentation directory.
+\keyword{spatial}
diff --git a/man/WorldBank.Rd b/man/WorldBank.Rd
new file mode 100644
index 0000000..53f94b1
--- /dev/null
+++ b/man/WorldBank.Rd
@@ -0,0 +1,103 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{WorldBankCO2}
+\alias{WorldBankCO2}
+\docType{data}
+\title{Carbon emissions and demographic covariables by country for 1999.}
+\description{
+These data are a small subset of the demographic data compiled by the World Bank. The
+data has been restricted to 1999 and to countries with a population larger than 1 million. Also, only countries reporting all the covariables are included.
+}
+ \usage{ data(WorldBankCO2)}
+\format{
+ This is a 75X5 matrix with the row names identifying countries and
+columns the covariables:
+\code{ "GDP.cap" "Pop.mid" "Pop.urb" "CO2.cap" "Pop"}
+\itemize{
+\item GDP.cap: Gross domestic product (in US dollars) per capita.
+\item Pop.mid: Percentage of the population between the ages of 15 and 65.
+\item Pop.urb: Percentage of the population living in an urban environment.
+\item CO2.cap: Equivalent CO2 emissions per capita.
+\item Pop: Population.
+}
+
+}
+\section{Reference}{
+Romero-Lankao, P., J. L. Tribbia and D. Nychka (2008) Development and greenhouse
+gas emissions deviate from the modernization theory and convergence hypothesis.
+Climate Research 38, 17-29.
+}
+\examples{
+data(WorldBankCO2)
+plot( WorldBankCO2[,"GDP.cap"], WorldBankCO2[,"CO2.cap"], log="xy")
+}
+\section{Creating dataset}{
+Listed below are scripts to create this data set from the spreadsheet on the
+World Bank CDs:
+\preformatted{
+## read in comma delimited spread sheet
+ read.csv("climatedemo.csv", stringsAsFactors=FALSE)->hold
+## convert numbers to matrix of data
+ Ddata<- as.matrix( hold[,5:51] )
+ Ddata[Ddata==".."] <- NA
+## still in character form parse as numeric
+ Ddata<- matrix( as.numeric( Ddata), nrow=1248, ncol=ncol( Ddata),
+ dimnames=list( NULL, format( 1960:2006) ))
+## these are the factors indicating the different variables
+### unique( Fac) gives the names of factors
+ Fac<- as.character( hold[,1])
+ years<- 1960:2006
+# create separate tables of data for each factor
+ temp<- unique( Fac)
+## also subset Country id and name
+ Country.id<- as.character( hold[Fac== temp[1],3])
+ Country<- as.character( hold[Fac== temp[1],4])
+ Pop<- Ddata[ Fac== temp[2],]
+ CO2<- Ddata[ Fac== temp[1],]
+ Pop.mid<- Ddata[ Fac== temp[3],]
+ GDP.cap<- Ddata[ Fac== temp[4],]
+ Pop.urb<- Ddata[ Fac== temp[5],]
+ CO2.cap<- CO2/Pop
+ dimnames( Pop)<- list( Country.id,format(years))
+ dimnames( CO2)<- list( Country.id,format(years))
+ dimnames( Pop.mid)<- list( Country.id,format(years))
+ dimnames( Pop.urb)<- list( Country.id,format(years))
+ dimnames( CO2.cap)<- list( Country.id,format(years))
+# delete temp data sets
+ rm( temp)
+ rm( hold)
+ rm( Fac)
+# define year to do clustering.
+ yr<- "1999"
+# variables for clustering combined as columns in a matrix
+ temp<-cbind( GDP.cap[,yr], Pop.mid[,yr], Pop.urb[,yr],CO2[,yr],Pop[,yr])
+# add column names and figure how many good data rows there are.
+ dimnames( temp)<-list( Country, c("GDP.cap","Pop.mid","Pop.urb",
+ "CO2.cap", "Pop"))
+ good<-complete.cases(temp)
+ good<- good & Pop[,yr] > 10e6
+# subset with only the complete data rows
+ WorldBankCO2<- temp[good,]
+ save(WorldBankCO2, file="WorldBankCO2.rda")
+}
+
+}
+\keyword{datasets}
diff --git a/man/add.image.Rd b/man/add.image.Rd
new file mode 100644
index 0000000..e30bc82
--- /dev/null
+++ b/man/add.image.Rd
@@ -0,0 +1,76 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{add.image}
+\alias{add.image}
+%- Also NEED an '\alias' for EACH other topic documented here.
+\title{ Adds an image to an existing plot.}
+\description{
+Adds an image to an existing plot. Simple arguments control the
+location and size.
+}
+\usage{
+add.image(xpos, ypos, z, adj.x = 0.5, adj.y = 0.5,
+image.width = 0.15, image.height = NULL, col = tim.colors(256), ...)
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+ \item{xpos}{X position of image in user coordinates }
+ \item{ypos}{ Y position of image in user coordinates }
+ \item{z}{ Matrix of intensities comprising the image. }
+ \item{adj.x}{ Location of image relative to x coordinate.
+ Most
+common values are .5 (centered), 0 (right side of image at x) and 1
+(left side of
+image at x). These are the same conventions that are used for \code{adj}
+in
+positioning text.}
+ \item{adj.y}{Location of image relative to y coordinate.
+Same rules as \code{adj.x}
+}
+ \item{image.width}{ Width of image as a fraction of the
+plotting region in horizontal direction. }
+ \item{image.height}{ Height of image as a fraction of the
+plotting region in the vertical direction. If NULL the height is scaled to
+make the image pixels square.}
+ \item{col}{ Color table for image. Default is tim.colors.}
+ \item{\dots}{Any other plotting arguments that are passed to the image
+function }
+}
+\seealso{ image.plot, colorbar.plot, image, tim.colors }
+\examples{
+plot( 1:10, 1:10, type="n")
+data( lennon)
+
+add.image( 5,4,lennon, col=grey( (0:256)/256))
+# reference lines
+xline( 5, col=2)
+yline( 4,col=2)
+
+#
+# add lennon right in the corner beyond the plotting region
+#
+
+par(new=TRUE, plt=c(0,1,0,1), mar=c(0,0,0,0), usr=c(0,1,0,1))
+add.image( 0,0, lennon, adj.x=0, adj.y=0)
+
+}
+\keyword{ hplot }% at least one, from doc/KEYWORDS
diff --git a/man/arrow.plot.Rd b/man/arrow.plot.Rd
new file mode 100644
index 0000000..a7bb240
--- /dev/null
+++ b/man/arrow.plot.Rd
@@ -0,0 +1,117 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{arrow.plot}
+\alias{arrow.plot}
+\title{
+ Adds arrows to a plot
+}
+\description{
+Adds arrows at specified points where the arrow lengths are scaled to
+fit on
+the plot in a reasonable manner. A classic use of this function is to
+depict a vector field. At each point (x,y) we have a vector with
+components (u,v). Like the arrows function this adds arrows to an
+existing plot.
+}
+\usage{
+arrow.plot(a1, a2, u = NA, v = NA, arrow.ex = 0.05,
+ xpd = TRUE, true.angle = FALSE, arrowfun=arrows,...)
+}
+\arguments{
+\item{a1}{
+The x locations of the tails of the arrows or a 2 column matrix giving
+the x and y coordinates of the arrow tails.
+}
+\item{a2}{
+The y locations of the tails of the arrows or a 2 column matrix giving
+the u and v coordinates of the arrows.
+}
+\item{u}{
+The u components of the direction vectors if they are not specified in
+the a1 argument
+}
+\item{v}{
+The v components of the direction vectors if they are not specified in
+the a2 argument
+}
+\item{arrow.ex}{
+Controls the length of the arrows. The length is in terms of the fraction
+of the shorter axis in the plot. So with the default of .05, 20 arrows
+of maximum length can line up end to end along the shorter axis.
+}
+\item{xpd}{
+If TRUE arrows are not clipped to fit inside the plot region; the default
+is not to clip.
+}
+\item{true.angle}{
+If TRUE preserves the true angle of the (u,v) pair on the plot. E.g. if
+(u,v)=(1,1) then the arrow will be drawn at 45 degrees.
+}
+\item{arrowfun}{
+The actual arrow function to use. The default is standard R \code{arrows}.
+However, Tamas K Papp suggests \code{p.arrows} from sfsmisc which makes prettier
+arrows.
+}
+
+
+\item{\dots}{
+Graphics arguments passed to the arrows function that can change the
+color or arrow sizes. See help on this for
+details.
+}
+}
+\details{
+This function is useful because (u,v) may be in very different scales
+from the locations (x,y). So some careful scaling is needed to plot the
+arrows.
+The only tricky thing about this function is whether you want the true
+angles on the plot. For overlaying a vector field on top of contours
+that are the streamlines, true.angle should be FALSE. In this case you
+want u and v to be scaled in the same way as the x and y variables.
+If the scaling is not the same then the arrows will not look like tangent
+vectors to the streamlines.
+An application where the absolute angles are meaningful might be the hands of a
+clock showing different time zones on a world map. Here true.angle=TRUE is
+appropriate; the clock hands should preserve the right angles.
+}
+\seealso{arrows}
+
+\examples{
+#
+# 20 random directions at 20 random points
+
+x<- runif( 20)
+y<- runif( 20)
+u<- rnorm( 20)
+v<- rnorm( 20)
+plot( x,y)
+arrow.plot( x,y,u,v) # a default that is unattractive
+
+plot( x,y, type="n")
+arrow.plot( x,y,u,v, arrow.ex=.2, length=.1, col='green', lwd=2)
+# thicker lines in green, smaller heads and longer tails. Note length, col and lwd are
+# options that the arrows function itself knows about.
+
+}
+\keyword{aplot}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/as.image.Rd b/man/as.image.Rd
new file mode 100644
index 0000000..abe95fa
--- /dev/null
+++ b/man/as.image.Rd
@@ -0,0 +1,116 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{as.image}
+\alias{as.image}
+\title{
+ Creates image from irregular x,y,z
+}
+\description{
+Discretizes a set of 2-d locations to a grid and produces an image object
+with the z values in the right cells. For cells with more than one Z
+value the average is used.
+}
+\usage{
+as.image(Z, ind=NULL, grid=NULL, x=NULL,weights=rep(1, length(Z)),
+ na.rm=FALSE, nx=64, ny=64, boundary.grid=FALSE, nrow=NULL, ncol=NULL,
+ FUN = NULL)
+}
+\arguments{
+\item{Z}{
+Values of image.
+}
+\item{ind}{
+A matrix giving the row and column subscripts for each image
+value in Z. (Not needed if x is specified.)
+}
+\item{grid}{
+A list with components x and y of equally spaced values describing the
+centers of the grid points. The default is to use nrow and ncol and the
+ranges of the data locations (x) to construct a grid.
+}
+\item{x}{
+Locations of image values. Not needed if ind is specified.
+}
+\item{nrow}{
+Same as nx; this argument is deprecated.
+}
+\item{ncol}{
+Same as ny; this argument is deprecated.
+}
+\item{weights}{
+If two or more values fall into the same
+pixel a weighted average is used to represent the pixel value. Default is
+equal weights.
+}
+\item{na.rm}{
+If TRUE NA's are removed from the Z vector.}
+\item{nx}{Number of grid points in the X coordinate.}
+\item{ny}{Number of grid points in Y coordinate.}
+\item{boundary.grid}{If FALSE grid points are assumed to be the
+grid midpoints. If TRUE they are the grid box boundaries.}
+\item{FUN}{The function to apply to common values in a grid box.
+ The default is a mean (or weighted mean). If FUN is specified the
+ weights are not used. }
+}
+
+\value{
+A list in image format with a few more components. Components x and y are
+the grid values, z is an nx by ny matrix
+with the Z values. NA's are placed at cell locations where Z data has
+not been supplied.
+Component ind is a 2 column matrix with subscripts for the locations of
+the values in the image matrix.
+Component weights is an image matrix with the sum of the
+individual weights for each cell. If no weights are specified the
+default for each observation is one and so the weights will be the
+number of observations in each bin.
+
+}
+\details{
+The discretization is straightforward once the grid is determined.
+If two or more Z values have locations in the same cell the weighted
+average value is taken as the value. The weights component that is
+returned can be used to account for means that have different numbers
+(or precisions) of observations contributing to the grid point averages.
+The default weights are taken to be one for each observation.
+See the source code to modify this to get more
+information about coincident locations. (See the call to fast.1way)
+
+}
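The weighted-average step described above can be sketched in isolation (a hypothetical 1-d mini-version for illustration only; as.image itself handles 2-d locations via fast.1way):

```r
# Hypothetical 1-d version of the binning/averaging step: assign each
# location to a grid cell, then take the weighted mean of coincident values.
x <- c(0.10, 0.12, 0.90)   # locations
z <- c(1, 3, 5)            # values
w <- c(1, 1, 1)            # default equal weights
cell <- cut(x, breaks = seq(0, 1, length.out = 5), labels = FALSE)
zbar <- tapply(w * z, cell, sum) / tapply(w, cell, sum)
zbar  # first occupied cell averages 1 and 3 -> 2; last holds 5
```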
+\seealso{
+image.smooth, image.plot, Krig.discretize, Krig.replicates
+}
+\examples{
+# convert precip data to 50X50 image
+look<- as.image( RMprecip$y, x= RMprecip$x, nx=50, ny=50)
+image.plot( look)
+
+# number of obs in each cell -- in this case equal to the
+# aggregated weights because each obs had equal weight in the call
+
+image.plot( look$x ,look$y, look$weights, col=terrain.colors(50))
+# hot spot is around Denver
+}
+\keyword{manip}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/as.surface.Rd b/man/as.surface.Rd
new file mode 100644
index 0000000..9672c4b
--- /dev/null
+++ b/man/as.surface.Rd
@@ -0,0 +1,107 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{as.surface}
+\alias{as.surface}
+\title{
+ Creates a "surface" object from grid values.
+}
+\description{
+Reformats the vector from evaluating a function on a grid of points into
+a list for use with surface plotting function. The list has the
+usual components x,y and z and is suitable for use with persp, contour,
+image and image.plot.
+
+}
+\usage{
+as.surface(obj, z, order.variables="xy")
+}
+\arguments{
+\item{obj}{
+A description of the grid used to evaluate the function. This can
+either be in the form of a grid.list
+( see help file for grid.list) or the matrix of grid points produced
+by make.surface.grid. In the latter case obj is a matrix with
+the grid.list as an attribute.
+}
+\item{z}{
+The value of the function evaluated at the gridded points.
+}
+\item{order.variables}{
+Either "xy" or "yx" specifies how the x and y variables used to
+evaluate the function are matched with the x and y grids in the surface
+object.
+}
+}
+\value{
+A list of class surface. This object is a modest generalization of the
+list input format (x, y, z) for the S functions contour, image or persp.
+
+\item{x}{
+The grid values in the X-axis
+}
+\item{y}{
+The grid values in the Y-axis
+}
+\item{z}{
+A matrix of dimensions nrow= length of x and ncol= length of y with
+entries being the grid point value reformatted from z.
+}
+}
+\details{
+This function was written simply to go back and forth between a
+matrix of gridded values and the stacked vector obtained by stacking
+columns. The main application is evaluating a function at each grid point
+and then reforming the results for plotting. (See example below.)
+
+If zimage is a matrix of values then the input vector is c( zimage).
+To go from the stacked vector back to the matrix one needs the nrow and ncol,
+which explains why the grid information must also be specified.
+
+Note that the z input argument must be ordered as the
+values obtained by stacking the columns of the image. This is also the
+order of the grid points generated by make.surface.grid.
+
+To convert irregular 2-d data to a surface object where there are missing
+cells see the function as.image.
+}
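The column-stacking convention can be checked with a tiny round trip in base R (no fields functions assumed):

```r
# c() stacks an R matrix by columns; matrix() with the original dimensions
# folds the stacked vector back, so the round trip is lossless.
zimage <- matrix(1:6, nrow = 2, ncol = 3)
zvec   <- c(zimage)                   # 1 2 3 4 5 6, columns first
zback  <- matrix(zvec, nrow = 2, ncol = 3)
identical(zimage, zback)              # TRUE
```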
+\seealso{
+ grid.list, make.surface.grid, surface, contour,
+image.plot, as.image
+}
+\examples{
+
+
+# Make a perspective of the surface Z= X**2 -Y**2
+# Do this by evaluating the quadratic function on a 15 X 20 grid
+
+grid.l<-list( abcissa= seq( -2,2,,15), ordinate= seq( -2,2,,20))
+xg<-make.surface.grid( grid.l)
+# xg is a 300X2 matrix that has all pairs of X and Y grid values
+z<- xg[,1]**2 - xg[,2]**2
+# now fold z in the matrix format needed for persp
+out.p<-as.surface( xg, z)
+persp( out.p)
+# also try plot( out.p) to see the default plot for a surface object
+}
+\keyword{manip}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/bplot.Rd b/man/bplot.Rd
new file mode 100644
index 0000000..cc10e5d
--- /dev/null
+++ b/man/bplot.Rd
@@ -0,0 +1,117 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{bplot}
+\alias{bplot}
+\title{
+ boxplot
+}
+\description{
+Plots boxplots of several groups of data
+and allows for placement at different horizontal or vertical positions or
+colors. It
+is also flexible in the input object, accepting either a list or matrix.
+}
+\usage{
+bplot(x, by, pos=NULL, at = pos, add = FALSE, boxwex =
+ 0.8,xlim=NULL, ...)
+}
+\arguments{
+ \item{x}{
+Vector, matrix, list or data frame. A vector may be divided according
+to the by argument. Matrices and data frames are separated by columns and
+lists by components.
+}
+\item{by}{
+If x is a vector, an optional vector (either character or numerical)
+specifying the categories to divide x into separate data sets. Boxplots are then made
+for each group.
+}
+ \item{pos}{
+ The boxplots will be plotted vertically (horizontally) and pos gives the x (y) locations for their centers. If omitted the boxes are equally spaced at integer
+ values. This is the same as \code{at} in the \code{boxplot} function
+ }
+\item{at}{Same as \code{pos} this is the name for this argument in the standard \code{boxplot} function.}
+
+\item{add}{
+ If true, do not create a new plot, just add the boxplots to the current
+ plot. Note that the pos argument may be useful in this case and should
+ be in the user coordinates of the parent plot.}
+\item{boxwex}{A boxplot argument to control the width of the boxplots.
+It behaves a little differently than when passed directly to \code{boxplot}.
+To keep this function general it is useful to scale the width according to the spacing of the positions. Within bplot this happens as \code{boxwex <- boxwex * min(diff( sort( at)))},
+and the scaled version of \code{boxwex} is then passed to \code{boxplot}.}
+\item{xlim}{ Same as the usual argument used in plotting. The plotting limits for the
+x axis. }
+\item{\dots}{
+Other arguments to be passed to the boxplot function; some handy favorites are:
+\code{names} Labels for each boxplot.
+\code{horizontal} If TRUE draw boxplots horizontally; the default is FALSE,
+producing vertical box plots.
+\code{lwd} Width(s) of lines in box plots.
+\code{col} Color(s) of boxplots. See \code{colors()} for some choices.}
+}
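The \code{boxwex} rescaling described above can be reproduced standalone (this follows the documented formula only; the actual bplot source may differ in detail):

```r
# Rescale boxwex by the smallest gap between box positions so boxes never
# overlap, per the documented formula boxwex * min(diff(sort(at))).
at <- c(1, 2, 4, 8) * 100
boxwex <- 0.8
boxwex.scaled <- boxwex * min(diff(sort(at)))
boxwex.scaled  # 0.8 * 100 = 80
```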
+
+\details{
+ This function was created as a complement to the usual S/R function for
+boxplots. The current function makes it possible to put the boxplots
+at unequal x or y positions in a rational way using the \code{at} or
+\code{pos} arguments. This is useful for visually grouping a large set
+of boxplots into several groups. Also placement of the boxplots with
+respect to the axis can add information to the plot. Another aspect
+is the emphasis on data structures for groups of data. One useful
+feature is the by option to break up the x vector into distinct
+groups.
+
+Use \code{axis(3)} (\code{axis(4)}) to add an axis along the top (right side) or omit the category names and draw on the
+bottom \code{axis(1)} (left side \code{axis(2)}).
+
+
+The older \code{bplot} function drew the boxplots from scratch and if
+one needs to do this refer to the old functions: \code{
+describe.bplot, draw.bplot.obj, bplot.xy, bplot.obj}
+
+Finally to bin data into groups based on a continuous variable and to
+make bplots of each group see \code{bplot.xy}.
+}
+\seealso{ bplot.xy }
+\examples{
+#
+set.seed(123)
+temp<- matrix( rnorm(12*8), ncol=12)
+pos<- c(1:6,9, 12:16)*100
+bplot(temp)
+#
+par(las=2)
+bplot( temp, pos=pos, names=paste( "Data",1:12, sep=""))
+# add an axis along top for reference
+axis(3)
+
+#
+# Xmas boxplots in pleasing red and green
+bplot( temp, pos=pos, col=c("red4", "green4"))
+# add an axis on top
+axis( 3)
+}
+\keyword{hplot}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/bplot.xy.Rd b/man/bplot.xy.Rd
new file mode 100644
index 0000000..ba934ef
--- /dev/null
+++ b/man/bplot.xy.Rd
@@ -0,0 +1,78 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{bplot.xy}
+\alias{bplot.xy}
+\title{
+ Boxplots for conditional distribution
+}
+\description{
+Draws boxplots for y by binning on x. This gives a coarse, but quick,
+representation
+of the conditional distribution of [Y|X] in terms of boxplots.
+}
+\usage{
+bplot.xy(x, y, N = 10, breaks = pretty(x, N, eps.correct = 1), plot=TRUE,
+ ...)
+}
+\arguments{
+\item{x}{
+Vector to use for bin membership
+}
+\item{y}{
+Vector to use for constructing boxplot statistics.
+}
+\item{N}{
+Number of bins on x. Default is 10.
+}
+\item{breaks}{
+Break points defining bin boundaries. These can be unequally spaced.
+}
+
+\item{plot}{
+If FALSE just returns a list with the statistics used for plotting the
+box plots, bin centers, etc. -- More stuff than you can imagine!
+}
+\item{\dots }{
+Any other optional arguments passed to the standard \code{boxplot} function.
+}
+}
+\seealso{
+bplot, draw.bplot
+}
+\examples{
+# condition on swim times to see how run times vary
+bplot.xy( minitri$swim, minitri$run, N=5)
+
+# bivariate normal corr= .8
+set.seed( 123)
+x<-rnorm( 2000)
+y<- .8*x + sqrt( 1- .8**2)*rnorm( 2000)
+#
+bplot.xy(x,y)
+#
+bplot.xy( x,y, breaks=seq( -3, 3,,25) ,
+ xlim =c(-4,4), ylim =c(-4,4), col="grey80", lwd=2)
+points( x,y,col=3, cex=.5)
+}
+\keyword{hplot}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/colorbar.plot.Rd b/man/colorbar.plot.Rd
new file mode 100644
index 0000000..c302eff
--- /dev/null
+++ b/man/colorbar.plot.Rd
@@ -0,0 +1,131 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{colorbar.plot}
+\alias{colorbar.plot}
+%- Also NEED an '\alias' for EACH other topic documented here.
+\title{ Adds color scale strips to an existing plot.}
+\description{
+Adds one or more color scales, in either a horizontal or a vertical
+orientation, to an existing plot.
+}
+\usage{
+
+colorbar.plot(x, y, strip, strip.width = 0.1, strip.length = 4 * strip.width,
+zrange = NULL, adj.x = 0.5, adj.y = 0.5, col = tim.colors(256),
+horizontal = TRUE, ...)
+
+
+}
+\arguments{
+ \item{x}{x position of strip in user coordinates }
+ \item{y}{y position of strip in user coordinates}
+ \item{strip}{ Either a vector or matrix giving the values of the color
+strip(s). If a matrix then strips are assumed to be the columns. }
+ \item{strip.width}{ Width of strip as a fraction of the plotting
+region. }
+ \item{strip.length}{ Length of strip as a fraction of the plotting
+region. Default is a pleasing 4 times the width. }
+ \item{zrange}{If a vector these are the common limits used for
+ assigning the color scale. Default is to use the range of values in
+strip. If a two column matrix, rows are used as the limits for each
+strip.}
+ \item{adj.x}{ Location of strip relative to x coordinate. Most
+common values are .5 (centered), 0 (right end at x) and 1 (left end
+at x). These are the same conventions that are used for \code{adj} in
+positioning text.}
+ \item{adj.y}{Location of strip relative to y coordinate.
+Same rules as \code{adj.x}
+}
+ \item{col}{ Color table used for strip. Default is our favorite
+tim.colors being a scale from a dark blue to dark red.}
+ \item{horizontal}{ If TRUE draws strips horizontally. If FALSE strips
+are drawn vertically }
+ \item{\dots}{ optional graphical arguments that are passed to
+the \code{image} function. }
+}
+\details{
+This function draws the strips as a sequence of image plots added to the
+existing plot. The main work is in creating a grid ( x,y) for the image
+that makes sense when superimposed on the plot.
+Note that although the columns of strip are considered as
+separate strips these can be oriented either horizontally or vertically
+based on the value of \code{horizontal}. The rows of zrange are
+essentially the \code{zlim} argument passed to the \code{image} function
+when each strip is drawn.
+
+Don't forget to use \code{locator} to interactively determine positions.
+\code{text} can be used to label points neatly in conjunction with
+setting adj.x and adj.y. Although this function is inefficient for
+placing images at arbitrary locations on a plot the code can be easily
+adapted to do this.
+
+This function was created to depict univariate posterior
+distributions on a map. The values are quantiles of the distribution and
+the strips, when added under a common color scale, give an overall
+impression of location and scale for several distributions.
+ }
+\author{Doug Nychka}
+
+\seealso{ image.plot, arrow.plot, add.image}
+
+\examples{
+# set up a plot but don't plot points and no "box"
+plot( 1:10, (1:10)*10, type="n", bty="n")
+# of course this could be anything
+
+y<- cbind( 1:15, (1:15)+25)
+
+colorbar.plot( 2.5, 30, y)
+points( 2.5,30, pch="+", cex=2, adj=.5)
+# note that strip is still in 1:8 aspect even though plot has very
+# different ranges for x and y.
+
+# adding legend using image.plot
+zr<- range( c( y))
+image.plot( legend.only=TRUE, zlim= zr)
+# see help(image.plot) to create more room in margin etc.
+
+zr<- rbind( c(1,20), c(1,100)) # separate ranges for columns of y.
+colorbar.plot( 5, 70, y, adj.x=0, zrange= zr)
+# some reference lines to show placement
+xline( 5, lty=2) # strip starts at x=5
+yline(70, lty=2) # strip is centered around y=70 (because adj.y=.5 by default)
+
+# many strips on common scale.
+
+y<- matrix( 1:200, ncol=10)
+colorbar.plot( 2, 75, y, horizontal=FALSE, col=rainbow(256))
+
+# Xmas strip
+y<- cbind( rep( c(1,2),10))
+y[15] <- NA # NA's should work
+colorbar.plot( 6, 45, y, adj.y=1,col=c("red", "green"))
+text(6,48,"Christmas strip", cex=2)
+
+# lennon thumbnail
+# there are better ways to do this ... see add.image for example.
+data( lennon)
+colorbar.plot( 7.5,22, lennon,
+ strip.width=.25, strip.length=.25, col=grey(seq( 0,1,,256)))
+
+}
+\keyword{ hplot }% at least one, from doc/KEYWORDS
diff --git a/man/compactToMat.Rd b/man/compactToMat.Rd
new file mode 100644
index 0000000..cf75ba3
--- /dev/null
+++ b/man/compactToMat.Rd
@@ -0,0 +1,123 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{compactToMat}
+\alias{compactToMat}
+%- Also NEED an '\alias' for EACH other topic documented here.
+\title{
+Convert Matrix from Compact Vector to Standard Form
+}
+\description{
+\code{compactToMat} transforms a matrix from compact, vector form to
+ a standard matrix. Only symmetric matrices can be stored in this
+ form, since a compact matrix is stored as a vector with elements
+ representing the lower triangle of the matrix. This function assumes
+ the vector does not contain diagonal elements of the matrix.
+
+ An example of a matrix stored in compact form is any matrix
+ generated from the \code{rdist} function with \code{compact=TRUE}.
+}
+\usage{
+compactToMat(compactMat, diagVal=0, lower.tri=FALSE, upper.tri=TRUE)
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+ \item{compactMat}{
+A symmetric matrix stored as a vector containing elements for the lower-triangular
+ portion of the true matrix (and none of the diagonal elements), as returned by
+ \code{rdist} with \code{compact=TRUE}.
+}
+ \item{diagVal}{
+A number to put in the diagonal entries of the output matrix.
+}
+ \item{lower.tri}{
+Whether or not to fill in the lower triangle of the output matrix
+}
+ \item{upper.tri}{
+Whether or not to fill in the upper triangle of the output matrix
+}
+}
+\value{
+The standard form matrix represented by the input compact matrix
+}
+\author{
+John Paige
+}
+
+
+\seealso{
+\code{\link{rdist}}, \code{\link{dist}}
+}
+\examples{
+################
+#Calculate distance matrix from compact form:
+################
+
+#make a distance matrix
+distOut = rdist(1:5, compact=TRUE)
+print(distOut)
+
+#note that distOut is in compact form:
+print(c(distOut))
+
+#convert to standard matrix form:
+distMat = compactToMat(distOut)
+
+################
+#fast computation of covariance matrix:
+################
+
+#generate 5 random points on [0,1]x[0,1] square
+x = matrix(runif(10), nrow=5)
+
+#get compact distance matrix
+distOut = rdist(x, compact=TRUE)
+
+#evaluate Exponential covariance with range=1. Note that
+#Covariance function is only evaluated over upper triangle
+#so time is saved.
+diagVal = Exponential(0, range=1)
+compactCovMat = Exponential(distOut, range=1)
+upperCovMat = compactToMat(compactCovMat, diagVal)
+lowerCovMat = compactToMat(compactCovMat, diagVal, lower.tri=TRUE, upper.tri=FALSE)
+fullCovMat = compactToMat(compactCovMat, diagVal, lower.tri=TRUE, upper.tri=TRUE)
+compactCovMat
+lowerCovMat
+upperCovMat
+fullCovMat
+}
+% Add one or more standard keywords, see file 'KEYWORDS' in the
+% R documentation directory.
+\keyword{ compact }
+\keyword{ matrix }% __ONLY ONE__ keyword per line
diff --git a/man/cover.design.Rd b/man/cover.design.Rd
new file mode 100644
index 0000000..e76c187
--- /dev/null
+++ b/man/cover.design.Rd
@@ -0,0 +1,415 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{cover.design}
+\alias{cover.design}
+\title{
+ Computes Space-Filling "Coverage" designs using a Swapping Algorithm
+}
+\description{
+Finds the set of points on a discrete grid (Candidate Set) which
+minimize a geometric space-filling criterion. The strength of this method
+is that the candidate set can satisfy whatever constraints are important
+for the problem.
+}
+\usage{
+cover.design(R, nd, nruns = 1, nn = TRUE, num.nn = 100, fixed = NULL,
+ scale.type = "unscaled", R.center, R.scale, P = -20, Q = 20,
+ start = NULL, DIST = NULL, return.grid = TRUE, return.transform =
+TRUE, max.loop=20, verbose=FALSE)
+}
+\arguments{
+\item{R}{
+A matrix of candidate points to be considered in the design.
+Each row is a separate point.
+}
+\item{nd}{
+Number of points to add to the design. If points exist and are to
+remain in the design (see "fixed" option), nd is the number of points
+to add. If no points are fixed, nd is the design size.
+}
+\item{nruns}{
+The number of random starts to be optimized. Uses random starts unless
+"start" is specified. If nruns is greater than 1, the design returned
+is the one with the minimum criterion value over all starts.
+}
+\item{nn}{
+Logical value specifying whether or not to consider only nearest
+neighbors in the swapping algorithm. When nn=FALSE, then the swapping
+algorithm will consider all points in the candidate space. When nn=TRUE,
+then the swapping algorithm will consider only the num.nn closest
+points for possible swapping. The default is to use nearest neighbors
+only (nn=TRUE).
+}
+\item{num.nn}{
+Number of nearest-neighbors to search over. The default number is 100.
+If nn=FALSE then this argument is ignored.
+}
+\item{fixed}{
+A matrix or vector specifying points to be forced into the
+experimental design. If fixed is a matrix, it gives coordinates of the
+fixed points in the design. In this case fixed must be a subset of
+R. If fixed is a vector, then fixed gives the row numbers from the
+candidate matrix R that identify the fixed points. The number of
+points to be generated, nd, is in addition to the number of points
+specified by fixed.
+}
+\item{scale.type}{
+A character string that tells how to scale the candidate matrix, R,
+before calculating distances. The default is "unscaled", no
+transformation is done.
+Another option is "range" in which case
+variables are scaled to a [0,1] range before applying any distance
+functions. Use "unscaled" when all of the columns of R
+are commensurate; for example, when R gives x and y in spatial
+coordinates. When the columns of R are not in the same units, then it is
+generally thought that an appropriate choice of scaling will provide a
+better design. This would be the case, for example, for a typical
+process optimization. Other choices for scale.type are
+"unit.sd", which scales
+all columns of R to have 0 mean and unit standard deviation, and
+"user", which allows a user specified scaling (see R.center and R.scale
+arguments).
+}
+\item{R.center}{
+A vector giving the centering values if
+scale.type=\code{user}.
+}
+\item{R.scale}{
+A vector giving the scale values if scale.type=\code{user}.
+}
+\item{P}{
+The "p" exponent of the coverage criterion (see below). It
+affects how the distance from a point x to a set of design points D is
+calculated. P=1 gives average distance. P=-1 gives harmonic mean distance.
+P=-Inf would give minimum distance (not available as a value). As P gets
+large and negative, points will tend to be more spread out.
+}
+\item{Q}{
+The "q" exponent of the coverage criterion (see below). It
+affects how distances from all points not in the design to points in the
+design are averaged. When Q=1, simple averaging of the distances is employed.
+Q=Inf (not available as a value) in combination with P=-Inf would give a
+classical minimax design.
+}
+\item{start}{
+A matrix or vector giving the initial design from which to start
+optimization. If start is a matrix, it gives the coordinates of the
+design points. In this case start must be a subset of the candidate
+set, the rows of R. If start is a vector, then start gives the row
+numbers of the initial design from the candidate matrix R. The
+default is to use a random starting design.
+}
+\item{DIST}{
+This argument is only for cover.design.S.
+A distance metric in the form of an S function. Default is Euclidean
+distance (FIELDS rdist function)
+See details
+and example below for the correct form.
+}
+\item{return.grid}{
+Logical value that tells whether or not to return the candidate matrix
+as an attribute of the computed design. The default is return.grid=TRUE.
+If FALSE this just reduces the returned object size.
+The candidate matrix is used by plot.spatial.design if it is available.
+}
+\item{return.transform}{
+Logical value that tells whether or not to return the transformation
+attributes of the candidate set. The default is return.transform=TRUE.
+}
+\item{max.loop}{
+Maximum number of outer loops in algorithm. This is the maximum number of
+passes through the design testing for swaps.
+}
+\item{verbose}{
+If TRUE prints out debugging information.
+}
+
+}
+\value{
+Returns a design object of class \code{spatialDesign}.
+Subscripting this object has the same effect as subscripting the first
+component (the design). The returned list has the following
+components:
+
+\item{design}{
+The best design in the form of a matrix.
+}
+\item{best.id}{
+Row numbers of the final design from the original candidate matrix, R.
+}
+\item{fixed}{
+Row numbers of the fixed points from the original candidate matrix, R.
+}
+\item{opt.crit}{
+Value of the optimality criterion for the final design.
+}
+\item{start.design}{
+Row numbers of the starting design from the original candidate matrix, R.
+}
+\item{start.crit}{
+Value of the optimality criterion for the starting design.
+}
+\item{history}{
+The swapping history and corresponding values of the optimality
+criterion for the best design.
+}
+\item{other.designs}{
+The designs other than the best design generated when nruns is
+greater than 1.
+}
+\item{other.crit}{
+The optimality criteria for the other designs when nruns is greater
+than 1.
+}
+\item{DIST}{
+The distance function used in calculating the design criterion.
+}
+\item{nn}{
+Logical value for nearest-neighbor search or not.
+}
+\item{num.nn}{
+The number of nearest neighbors used in the search.
+}
+\item{grid}{
+The matrix R is returned if the argument return.grid=TRUE.
+}
+\item{transform}{
+The type of transformation used in scaling the data and the values
+of the centering and scaling constants if the argument return.transform=TRUE.
+}
+\item{call}{
+The calling sequence.
+}
+\item{P}{
+The parameter value for calculating criterion.
+}
+\item{Q}{
+The parameter value for calculating criterion.
+}
+\item{nhist}{
+The number of swaps performed.
+}
+\item{nloop}{
+The number of outer loops required to reach convergence if nloop is less
+than max.loop.
+}
+\item{minimax.crit}{
+The minimax design criterion using DIST.
+}
+\item{max.loop}{
+The maximum number of outer loops.
+}
+}
+\details{
+
+OTHER DISTANCE FUNCTIONS:
+You can supply an R/S-function to be used as the
+distance metric. The expected calling sequence for this distance function
+is
+function( X1,X2)\{....\} where X1 and X2 are matrices with coordinates as
+the rows. The returned value of this function should be the pairwise
+distance matrix. If nrow( X1)=m and nrow( X2)=n then the function should
+return an m by n matrix of all distances between these two sets of points.
+See the example for Manhattan distance below.
+
+The candidate set and DIST function can be flexible and the last
+example
+below using sample correlation matrices is an example.
+
+COVERAGE CRITERION:
+For nd design points in the set D and nc candidate points ci in the
+set C,
+the coverage criteria is defined as:
+
+M(D,C) = [sum(ci in C) [sum(di in D) dist(di,ci)**P]**(Q/P)]**(1/Q)
+
+
+Where P, less than 0, and Q, greater than 0, are parameters.
+The algorithm used in
+"cover.design" to find the set of nd points in C that minimize this
+criterion is an iterative swapping algorithm which will be described
+briefly. The resulting design is referred to as a "coverage design"
+from among the class of space-filling designs. If fixed points are
+specified they are simply fixed in the design set and are not allowed to be
+swapped out.
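+As a small illustration, the criterion can be computed directly in R for
+modest point sets. This is a sketch only, assuming Euclidean distance via
+the fields \code{rdist} function; it is not the package's internal,
+optimized code.

```r
library(fields)
# Direct evaluation of the coverage criterion M(D,C) defined above.
# D: matrix of design points, C: matrix of candidate points.
cover.criterion <- function(D, C, P = -20, Q = 20) {
  dmat <- rdist(C, D)        # nc x nd matrix of pairwise distances
  rs   <- rowSums(dmat^P)    # inner sum over design points, per candidate
  sum(rs^(Q/P))^(1/Q)        # outer sum over candidates
}
```

+Evaluated at a design produced by cover.design, this should agree with the
+\code{opt.crit} component of the returned object.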
+
+ALGORITHM:
+ An initial set of nd points is chosen randomly
+if no starting
+configuration is provided. The nc x nd distance matrix between the
+points in C and the points in D is computed, and raised to the power P.
+The "row sums" of this matrix are computed. Denote these as rs.i and
+the vector of row sums as rs. Using rs, M(D,C) is computed as:
+
+ [sum i (rs.i)**(Q/P)]**(1/Q)
+
+Note that if point d.i is "swapped" for point c.j, one must
+only recompute 1 column of the original distance matrix, and 1 row.
+The row elements not in the ith column will be the same for all j and
+so only need computing when the first swapping occurs for each d.i.
+Denote the sum of these off-i elements as "newrow(i)". The index is
+i here since this is the same for all rows (j=1,...,nc).
+Thus, for each swap, the row sums vector is updated as
+
+ rs(new) = rs(old) - column(i,old) + column(i,new)
+
+And the jth element of rs(new) is replaced by:
+
+ rs(new)[j] = column(i,new)[j] + newrow(i)
+
+Finally, M(D,C) is computed for this swap of the ith design point
+for the jth candidate point using the criterion formula above. The point
+in C that when swapped produces the minimum value of M(D,C) replaces d.i.
+This is done for all nd points in the design, and is iterated until
+M(D,C) does not change.
+When the nearest neighbor option is selected, then the points
+considered for swapping are limited to the num.nn nearest neighbors
+of the current design point.
+
+STABILITY
+
+The algorithm described above is guaranteed to converge. However, upon
+convergence, the solution is sensitive to the initial configuration of
+points. Thus, it is recommended that multiple optimizations be done (i.e.
+set nruns greater than 1 ). Also, the quality of the solution depends on
+the density of the points on the region. At the same time, for large
+regions, optimization can be computationally prohibitive unless the
+nearest neighbor option is employed.
+
+}
+\section{References}{
+Johnson, M.E., Moore, L.M., and Ylvisaker, D. (1990). Minimax and
+maximin distance designs. Journal of Statistical Planning and
+Inference 26, 131-148.
+SAS/QC Software. Volume 2: Usage and Reference. Version 6. First
+Edition (1995). "Proc Optex". SAS Institute Inc. SAS Campus Drive,
+}
+\seealso{
+rdist, rdist.earth
+}
+\examples{
+##
+##
+# first generate candidate set
+set.seed(123) # setting seed so that you get the same thing I do!
+test.df <- matrix( runif( 600), ncol=3)
+
+test1.des<-cover.design(R=test.df,nd=10)
+
+summary( test1.des)
+plot( test1.des)
+
+#
+candidates<- make.surface.grid( list( seq( 0,5,,20), seq(0,5,,20)))
+out<- cover.design( candidates, 15)
+
+# find 10 more points keeping this original design fixed
+
+out3<-cover.design( candidates, 10,fixed=out$best.id)
+# see what happened
+
+plot( candidates[,1:2], pch=".")
+points( out$design, pch="x")
+points( out3$design, pch="o")
+
+# here is a strange graph illustrating the swapping history for the
+# the first design. Arrows show the swap done
+# at each pass through the design.
+
+h<- out$history
+cd<- candidates
+plot( cd[,1:2], pch=".")
+points( out$design, pch="O", col=2)
+points( out$start.design, pch="x", col=5)
+
+arrows(
+cd[h[,2],1],
+cd[h[,2],2],
+cd[h[,3],1],
+cd[h[,3],2],length=.1)
+text( cd[h[,2],1],
+cd[h[,2],2], h[,1], cex=1.0 )
+
+
+#
+# try this out using "Manhattan distance"
+# ( distance following a grid of city streets)
+
+dist.man<- function(x1,x2) {
+ d<- ncol( x1)
+ temp<- abs(outer( x1[,1], x2[,1],'-'))
+ for ( k in 2:d){
+ temp<- temp+abs(outer( x1[,k], x2[,k],'-'))
+ }
+ temp }
+
+# use the design from the Euclidean distance as the starting
+#configuration.
+
+cover.design( candidates, 15, DIST=dist.man, start= out3$best.id)-> out2
+# this takes a while ...
+plot( out2$design)
+points( out3$design, col=2)
+
+# find a design on the sphere
+#
+
+candidates<- make.surface.grid( list( x=seq( -180,180,,20), y= seq( -85,
+85,,20)))
+
+out4<-cover.design( candidates, 15, DIST=rdist.earth)
+# this takes a while
+
+plot( candidates, pch="+", cex=2)
+points(out4$design, pch="o", cex=2, col="blue")
+
+# covering based on correlation for 153 ozone stations
+#
+data( ozone2)
+
+cor.mat<-cor( ozone2$y, use="pairwise")
+
+cor.dist<- function( x1,x2)
+{matrix( 1-cor.mat[ x1,x2], ncol=length(x2))}
+
+#
+# find 25 points out of the 153
+# here the "locations" are just the index but the distance is
+# determined by the correlation function.
+#
+out5<-cover.design(cbind(1:153),25, DIST= cor.dist, scale.type="unscaled")
+
+plot( ozone2$lon.lat, pch=".")
+points( ozone2$lon.lat[out5$best.id,],pch="O", col=4)
+#
+# this seems a bit strange, probably due to some unusual correlation values
+#
+
+# reset panel
+set.panel(1,1)
+
+}
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/drape.plot.Rd b/man/drape.plot.Rd
new file mode 100644
index 0000000..fc9734f
--- /dev/null
+++ b/man/drape.plot.Rd
@@ -0,0 +1,171 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{drape.plot}
+\alias{drape.color}
+\alias{drape.plot}
+%- Also NEED an '\alias' for EACH other topic documented here.
+\title{Perspective plot draped with colors in the facets.}
+\description{
+Function to produce the
+usual wireframe perspective plot with the facets being filled
+with different colors. By default the colors are assigned from a
+color bar based on the z values. \code{drape.color} can be used to create
+a color matrix different from the z matrix used for the wireframe.}
+\usage{
+drape.plot(x, y, z, z2=NULL, col = tim.colors(64), zlim = range(z, na.rm=TRUE),
+ zlim2 = NULL, add.legend = TRUE, horizontal = TRUE, theta = 30, phi = 20,
+ breaks=NA, ...)
+
+drape.color(z, col = tim.colors(64), zlim = NULL,breaks,
+ transparent.color = "white", midpoint=TRUE, eps=1e-8)
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+ \item{x}{ grid values for x coordinate (or if x is a list the
+components x, y and z are used.)}
+ \item{y}{grid values for y coordinate}
+ \item{z}{A matrix of z heights }
+ \item{z2}{ A matrix of z values to use for coloring facets. If NULL
+then z is used for this purpose}
+ \item{col}{ A color table for the z values that will be used for draping}
+ \item{zlim}{the z limits for \code{z}; these are used to set up
+ the scale of the persp plot. This defaults to range(z, na.rm=TRUE) as
+ in persp.}
+ \item{zlim2}{the z limits for \code{z2}; these are used to set up
+ the color scale. This defaults to the range of \code{z2}.}
+ \item{add.legend}{ If true a color strip is added as a legend.}
+ \item{horizontal}{ If true color strip is put at bottom of the
+plot, if FALSE it is placed vertically on the right side. }
+ \item{theta}{ x-y rotation angle for perspective. }
+ \item{phi}{ z-angle for perspective. }
+ \item{transparent.color}{ Color to use when given an NA in z}
+ \item{midpoint}{ If TRUE color scale is formed for midpoints of z
+ obtained by averaging 4 corners.}
+ \item{breaks}{Numerical divisions for the color scale. The default (NA)
+ is N+1 equally spaced points in the range \code{zlim} where N is the number of
+colors in \code{col}. This argument has the same effect as in the
+\code{image} and \code{image.plot} functions.}
+ \item{eps}{Amount to inflate the range (1 +/- eps) to include points on
+ break endpoints.}
+ \item{\dots}{ Other arguments that will be passed to the
+ persp function. The most common is zlim the z limits for the
+ 3-d plot and also the limits to set up the color scale. The
+ default for zlim is the range of z.}
+}
+\value{
+\code{drape.plot}
+If an assignment is made the projection matrix from persp is returned.
+This information can be used to add additional 3-d features to the plot.
+See the \code{persp} help file for an example of how to add additional points
+and lines using the \code{trans3d} function and also the example below.
+
+
+\code{drape.color} If dim(z) is c(M,N) this function returns a list with components:
+\item{color.index}{An (M-1)X(N-1) matrix (midpoint=TRUE) or MXN matrix (midpoint=FALSE) where each element is a text string specifying the color. }
+\item{breaks}{The breaks used to assign the numerical values in z to color categories.}
+
+}
+\details{
+
+The legend strip may obscure part of the plot. If so, add this as
+another step using image.plot.
+
+When using \code{drape.color} just drop the results into the
+\code{col} argument of \code{persp}. Given this function there are no
+surprises in how the higher level \code{drape.plot} works: it calls
+\code{drape.color} followed by \code{persp} and finally the legend
+strip is added with \code{image.plot}.
+
+The color scales essentially default to the ranges of the z
+values. However, by specifying zlim and/or zlim2 one has more control
+of how the perspective plot is scaled and the limits of the color
+scale used to fill the facets. The color assignments are done by
+dividing up the zlim2 interval into equally spaced bins and adding a
+very small inflation to these limits. The mean z2 values, comprising
+an (M-1)X(N-1) matrix, for each facet are discretized to the bins. The
+bin numbers then become the indices used for the color scale. If zlim2
+is not specified, the range of the z2 matrix is used to generate
+the range of the color bar. Note that this may be different than the
+range of the facet means. If z2 is not passed then z is used in its
+place and in this case the zlim2 or zlim argument can be used to define
+the color scale.
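+The binning just described can be sketched in a few lines of R (an
+illustration of the idea only, not the actual \code{drape.color}
+implementation):

```r
# Facet means of a z matrix cut into equally spaced bins that index
# a color table -- the color assignment described above.
z <- volcano                       # example surface from datasets
M <- nrow(z); N <- ncol(z)
zmean  <- (z[-M, -N] + z[-1, -N] + z[-M, -1] + z[-1, -1]) / 4
col    <- terrain.colors(64)
breaks <- seq(min(zmean), max(zmean), length.out = length(col) + 1)
facet.col <- col[cut(zmean, breaks, include.lowest = TRUE)]
# facet.col holds one color string per facet, (M-1)*(N-1) values in all
```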
+
+This kind of plot is also supported through the wireframe function in the
+\code{lattice} package. The advantage of the fields version is that it uses the
+standard R graphics functions -- and is written in R code.
+
+The drape plot is also drawn by the fields \code{surface} function with
+\code{type="P"}.
+
+}
+\author{D. Nychka }
+
+\seealso{ image.plot, quilt.plot, persp, plot.surface, surface, lattice, trans3d}
+\examples{
+
+# an obvious choice:
+# Dr. R's favorite New Zealand Volcano!
+data( volcano)
+M<- nrow( volcano)
+N<- ncol( volcano)
+x<- seq( 0,1,,M)
+y<- seq( 0,1,,N)
+
+drape.plot( x,y,volcano, col=terrain.colors(128))-> pm
+
+# use different range for color scale and persp plot
+# setting of border omits the mesh lines
+
+ drape.plot( x,y,volcano, col=terrain.colors(128),zlim=c(0,300),
+ zlim2=c( 120,165), border=NA)
+
+# note transparent color for facets outside the zlim2 range
+
+
+#The projection has been saved in pm
+# add a point marking the summit
+max( volcano)-> zsummit
+ix<- row( volcano)[volcano==zsummit]
+iy<- col( volcano)[volcano==zsummit]
+trans3d( x[ix], y[iy],zsummit,pm)-> uv
+points( uv, col="magenta", pch="+", cex=2)
+
+# overlay volcano wireframe with gradient in x direction.
+
+dz<- (
+ volcano[1:(M-1), 1:(N-1)] - volcano[2:(M), 1:(N-1)] +
+ volcano[1:(M-1), 2:(N)] - volcano[2:(M), 2:(N)]
+ )/2
+
+# convert dz to a color scale:
+ zlim<- range( c( dz), na.rm=TRUE)
+ zcol<-drape.color( dz, zlim =zlim)$color.index
+
+# wireframe with these colors
+ persp( volcano, col=zcol, theta=30, phi=20)
+
+# add legend using image.plot function
+ image.plot( zlim=zlim, legend.only =TRUE, horizontal =TRUE, col=zcol)
+
+
+}
+\keyword{hplot}% at least one, from doc/KEYWORDS
diff --git a/man/exp.cov.Rd b/man/exp.cov.Rd
new file mode 100644
index 0000000..95fc167
--- /dev/null
+++ b/man/exp.cov.Rd
@@ -0,0 +1,405 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{Covariance functions}
+\alias{Exp.cov}
+\alias{Exp.simple.cov}
+\alias{Rad.cov}
+\alias{Rad.simple.cov}
+\alias{stationary.cov}
+\alias{stationary.taper.cov}
+\alias{wendland.cov}
+\alias{cubic.cov}
+\title{
+ Exponential family, radial basis
+functions, cubic spline, compactly supported Wendland family and
+stationary covariances. }
+\description{
+Given two sets of locations these functions compute the cross covariance matrix for
+some covariance families. In addition these functions can take advantage
+of sparseness, implement more efficient multiplication of the
+cross covariance by a vector or matrix, and also return a marginal
+variance to be consistent with calls by the Krig function.
+
+\code{stationary.cov} and \code{Exp.cov} have additional arguments for
+precomputed distance matrices and for calculating only the upper triangle
+and diagonal of the output covariance matrix to save time. Also, they
+support using the \code{rdist} function with \code{compact=TRUE} or input
+distance matrices in compact form, where only the upper triangle of the
+distance matrix is used to save time.
+
+Note: These functions have been renamed from the previous fields functions
+using 'Exp' in place of 'exp' to avoid conflict with the generic exponential
+function (\code{exp(...)}) in R.
+}
+\usage{
+Exp.cov(x1, x2=NULL, theta = 1, p=1, distMat = NA,
+ C = NA, marginal = FALSE, onlyUpper=FALSE)
+
+Exp.simple.cov(x1, x2, theta =1, C=NA,marginal=FALSE)
+
+Rad.cov(x1, x2, p = 1, m=NA, with.log = TRUE, with.constant = TRUE,
+ C=NA,marginal=FALSE, derivative=0)
+
+cubic.cov(x1, x2, theta = 1, C=NA, marginal=FALSE)
+
+Rad.simple.cov(x1, x2, p=1, with.log = TRUE, with.constant = TRUE,
+ C = NA, marginal=FALSE)
+
+stationary.cov(x1, x2=NULL, Covariance = "Exponential", Distance = "rdist",
+ Dist.args = NULL, theta = 1, V = NULL, C = NA, marginal = FALSE,
+ derivative = 0, distMat = NA, onlyUpper = FALSE, ...)
+
+stationary.taper.cov(x1, x2, Covariance="Exponential",
+ Taper="Wendland",
+ Dist.args=NULL, Taper.args=NULL,
+ theta=1.0,V=NULL, C=NA, marginal=FALSE,
+ spam.format=TRUE,verbose=FALSE,...)
+
+wendland.cov(x1, x2, theta = 1, V=NULL, k = 2, C = NA,
+ marginal =FALSE,Dist.args = list(method = "euclidean"),
+ spam.format = TRUE, derivative = 0, verbose=FALSE)
+}
+\arguments{
+\item{x1}{ Matrix of first set of locations where each row gives the
+coordinates of a particular point.}
+
+\item{x2}{ Matrix of second set of locations where each row gives the
+coordinates of a particular point. If this is missing x1 is used. }
+
+\item{theta}{ Range (or scale) parameter. This should be a scalar (use
+the V argument for other scaling options). Any distance calculated for
+a covariance function is divided by theta before the covariance function
+is evaluated.}
+
+\item{V}{ A matrix that describes the inverse linear transformation of
+the coordinates before distances are found. In R code this
+transformation is: \code{x1 \%*\% t(solve(V))} Default is NULL, that
+is the transformation is just dividing distance by the scalar value
+\code{theta}. See Details below. If one has a vector of "theta's"
+that are the scaling for each coordinate then just express this as
+\code{V = diag(theta)} in the call to this function.}
+
+\item{C}{ A vector whose length equals the number of rows of x2.
+If specified the covariance matrix will be multiplied by this vector.}
+
+\item{marginal}{If TRUE returns just the diagonal elements of the
+covariance matrix using the \code{x1} locations. In this case this is
+just 1.0. Although trivial for this function, the marginal argument is
+required of all covariance functions used with Krig.}
+
+\item{p}{ Exponent in the exponential covariance family. p=1 gives an
+exponential and p=2 gives a Gaussian. Default is the exponential
+form. For the radial basis function this is the exponent applied to
+the distance between locations.}
+\item{m}{For the radial basis function, p = 2m-d, with d the dimension
+of the locations, is the exponent applied to the distance between
+locations. (m is a common way of parametrizing this exponent.)}
+
+\item{with.constant}{ If TRUE includes complicated constant for radial
+ basis functions. See the function \code{radbas.constant} for more
+ details. The default is TRUE, include the constant. Without the usual
+ constant the lambda used here will differ by a constant from spline
+ estimators ( e.g. cubic smoothing splines) that use the
+ constant. Also a negative value for the constant may be necessary to
+ make the radial basis positive definite as opposed to negative
+ definite. }
+
+\item{with.log}{If TRUE include a log term for even dimensions. This
+is needed for the radial basis function to be a thin plate spline of
+integer order. }
+
+\item{Covariance}{Character string that is the name of the covariance
+shape function for the distance between locations. Choices in fields
+are \code{Exponential}, \code{Matern}}
+
+\item{Distance}{Character string that is the name of the distance
+function to use. Choices in fields are \code{rdist},
+\code{rdist.earth}}
+
+\item{Taper}{Character string that is the name of the taper function
+to use. Choices in fields are listed in help(taper).}
+
+\item{Dist.args}{ A list of optional arguments to pass to the Distance
+function.}
+
+\item{Taper.args}{ A list of optional arguments to pass to the Taper
+function. \code{theta} should always be the name for the range (or
+scale) parameter.}
+
+\item{spam.format}{If TRUE returns matrix in sparse matrix format
+implemented in the spam package. If FALSE just returns a full
+matrix. }
+
+\item{k}{The order of the Wendland covariance function. See help on
+ Wendland.}
+
+ \item{derivative}{ If nonzero evaluates the partials of the
+covariance function at locations x1. This must be used with the "C" option
+and is mainly called from within a predict function. The partial
+derivative is taken with respect to \code{x1}. }
+
+\item{verbose}{If TRUE prints out some useful information for
+debugging.}
+
+\item{distMat}{
+If the distance matrix between \code{x1} and \code{x2} has already been
+computed, it can be passed via this argument so it won't need to be
+recomputed.
+}
+\item{onlyUpper}{
+For internal use only, not meant to be set by the user. Automatically
+set to \code{TRUE} by \code{mKrigMLEJoint} or \code{mKrigMLEGrid} if
+\code{lambda.profile} is set to \code{TRUE}, but set to \code{FALSE}
+for the final parameter fit so output is compatible with rest of
+\code{fields}.
+
+If \code{TRUE}, only the upper triangle and diagonal of the covariance
+matrix is computed to save time (although if a non-compact distance
+matrix is used, the onlyUpper argument is set to FALSE). If \code{FALSE},
+the entire covariance matrix is computed. In general, it should
+only be set to \code{TRUE} for \code{mKrigMLEJoint} and \code{mKrigMLEGrid},
+and the default is set to \code{FALSE} so it is compatible with all of
+\code{fields}.
+}
+
+\item{\dots}{ Any other arguments that will be passed to the
+covariance function. e.g. \code{smoothness} for the Matern.} }
+
+\value{ If the argument C is NA the cross covariance matrix is
+returned. In general if nrow(x1)=m and nrow(x2)=n then the returned
+matrix will be mXn. Moreover, if x1 is equal to x2 then this is the
+covariance matrix for this set of locations.
+
+
+If C is a vector of length n, then returned value is the
+multiplication of the cross covariance matrix with this vector.
+
+} \details{ For purposes of illustration, the function
+\code{Exp.simple.cov} is provided in fields as a simple example and
+implements the R code discussed below. List this function out as a
+way to see the standard set of arguments that fields uses to define a
+covariance function. It can also serve as a template for creating new
+covariance functions for the \code{Krig} and \code{mKrig}
+functions. Also see the higher level function \code{stationary.cov} to
+mix and match different covariance shapes and distance functions.
+
+A common scaling for stationary covariances: If \code{x1} and
+ \code{x2} are matrices where \code{nrow(x1)=m} and \code{nrow(x2)=n}
+ then this function will return a mXn matrix where the (i,j) element
+ is the covariance between the locations \code{x1[i,]} and
+ \code{x2[j,]}. The exponential covariance function is computed as
+ exp( -(D.ij)) where D.ij is a distance between \code{x1[i,]} and
+ \code{x2[j,]} but having first been scaled by theta. Specifically if
+ \code{theta} is a matrix to represent a linear transformation of the
+ coordinates, then let \code{u= x1\%*\% t(solve( theta))} and \code{v=
+ x2\%*\% t(solve(theta))}. Form the mXn distance matrix with
+ elements:
+
+\code{D[i,j] = sqrt( sum( ( u[i,] - v[j,])**2 ) )}.
+
+and the cross covariance matrix is found by \code{exp(-D)}. The
+tapered form (ignoring scaling parameters) is a matrix with i,j entry
+\code{exp(-D[i,j])*T(D[i,j])}. With T being a positive definite
+tapering function that is also assumed to be zero beyond 1.
+
+Note that if theta is a scalar then this defines an isotropic
+covariance function and the functional form is essentially
+\code{exp(-D/theta)}.
+
+Implementation: The function \code{rdist} is a useful fields function
+that finds the cross Euclidean distance matrix (D defined above) for
+two sets of locations. Thus in compact R code we have
+
+ exp(-rdist(u, v))
+
+Note that this function must also support two other kinds of calls:
+
+If marginal is TRUE then just the diagonal elements are returned (in R
+code \code{diag( exp(-rdist(u,u)) )}).
+
+If C is passed then the returned value is \code{ exp(-rdist(u, v))
+\%*\% C}.
+
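The three calling modes above (full matrix, marginal, and C multiply) can be sketched in a few lines of base R. This is an illustration only: \code{expCovSketch} is a hypothetical name, not a fields function, and the production \code{Exp.cov} calls FORTRAN.

```r
# A minimal base R sketch of the standard fields covariance calling
# convention (no fields dependency). expCovSketch is a hypothetical name.
expCovSketch <- function(x1, x2 = x1, theta = 1, C = NA, marginal = FALSE) {
  if (marginal) {
    # diagonal of the covariance at the x1 locations: exp(0) = 1
    return(rep(1, nrow(x1)))
  }
  # cross Euclidean distance matrix, scaled by theta
  D <- as.matrix(dist(rbind(x1, x2)))[1:nrow(x1), nrow(x1) + 1:nrow(x2)] / theta
  K <- exp(-D)
  if (!all(is.na(C))) {
    return(K %*% C)  # cross covariance matrix times the vector C
  }
  K
}

x1 <- matrix(runif(10), 5, 2)
K <- expCovSketch(x1, theta = 2)  # 5 x 5, with 1s on the diagonal
```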
+
+Some details on particular covariance functions:
+
+\describe{ \item{Radial basis functions (\code{Rad.cov}):}{The
+functional form is Constant * rdist(u,v)**p for odd dimensions and
+Constant * rdist(u,v)**p * log(rdist(u,v)) for even dimensions. For an
+m-th order thin plate spline in d dimensions p = 2*m-d and must be
+positive. The constant, depending on m and d, is coded in the fields
+function \code{radbas.constant}. This form is only a generalized
+covariance function -- it is positive definite only when restricted to
+a linear subspace. See \code{Rad.simple.cov} for a coding of the radial
+basis functions in R code.}
+
+\item{Stationary covariance \code{stationary.cov}:}{Here the
+computation is to apply the function Covariance to the distances found
+by the Distance function. For example
+
+\code{Exp.cov(x1,x2, theta=MyTheta)}
+
+and
+
+\code{stationary.cov( x1,x2, theta=MyTheta, Distance= "rdist",
+Covariance="Exponential")}
+
+are the same. This is also the same as
+
+\code{stationary.cov( x1,x2, theta=MyTheta, Distance= "rdist",
+Covariance="Matern",smoothness=.5)}. }
+
+\item{Stationary tapered covariance \code{stationary.taper.cov}:}{The
+resulting cross covariance is the direct or Schur product of the
+tapering function and the covariance. In R code, given location
+matrices \code{x1} and \code{x2} and using Euclidean distance, this is
+
+\code{Covariance(rdist( x1, x2)/theta)*Taper( rdist( x1,
+x2)/Taper.args$theta)}
+
+By convention, the \code{Taper} function is assumed to be identically
+zero outside the interval [0,1]. Some efficiency is introduced within
+the function to search for pairs of locations that are nonzero with
+respect to the Taper. This is done by the SPAM function
+\code{nearest.dist}. This search may find more nonzero pairs than
+dimensioned internally and SPAM will try to increase the space. One
+can also reset the SPAM options to avoid these warnings. For
+spam.format TRUE the multiplication with the \code{C} argument is done
+with the spam sparse multiplication routines through the "overloading"
+of the \code{\%*\%} operator. }
+
+}
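Ignoring the sparse spam representation, the tapered covariance above can be sketched densely in base R. The simple Askey taper (1-d)^2 restricted to [0,1] stands in here for the Wendland family; this is a hedged illustration, not the fields implementation.

```r
# Dense sketch of the Schur (element-wise) product of an exponential
# covariance and a compactly supported taper. taperSketch is a stand-in
# for the Wendland taper; it is identically zero beyond distance 1.
taperSketch <- function(d) pmax(1 - d, 0)^2

taperedCovSketch <- function(x1, x2, theta = 1, taper.theta = 1) {
  D <- as.matrix(dist(rbind(x1, x2)))[1:nrow(x1), nrow(x1) + 1:nrow(x2)]
  exp(-D / theta) * taperSketch(D / taper.theta)  # exact zeros beyond the taper range
}

xa <- matrix(runif(6, 0, 5), 3, 2)
xb <- matrix(runif(8, 0, 5), 4, 2)
K <- taperedCovSketch(xa, xb, theta = 2, taper.theta = 1)
```

A sparse implementation would store only the entries with distance below taper.theta, which is what the spam format exploits.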
+
+About the FORTRAN: The actual functions \code{Exp.cov} and
+\code{Rad.cov} call FORTRAN to make the evaluation more efficient;
+this is especially important when the C argument is supplied. So
+unfortunately the actual production code in Exp.cov is not as crisp as
+the R code sketched above. See \code{Rad.simple.cov} for an R coding
+of the radial basis functions.
+
+}
+
+\seealso{
+ Krig, rdist, rdist.earth, gauss.cov, Exp.image.cov, Exponential, Matern,
+Wendland.cov, mKrig}
+
+\examples{
+# exponential covariance matrix ( marginal variance =1) for the ozone
+#locations
+out<- Exp.cov( ChicagoO3$x, theta=100)
+
+# out is a 20X20 matrix
+
+out2<- Exp.cov( ChicagoO3$x[6:20,],ChicagoO3$x[1:2,], theta=100)
+# out2 is 15X2 matrix
+
+# Kriging fit where the nugget variance is found by GCV
+# Matern covariance shape with range of 100.
+#
+
+fit<- Krig( ChicagoO3$x, ChicagoO3$y, Covariance="Matern", theta=100,smoothness=2)
+
+data( ozone2)
+x<- ozone2$lon.lat
+y<- ozone2$y[16,]
+# Omit the NAs
+good<- !is.na( y)
+x<- x[good,]
+y<- y[good]
+
+
+# example of calling the taper version directly
+# Note that default covariance is exponential and default taper is
+# Wendland (k=2).
+
+stationary.taper.cov( x[1:3,],x[1:10,] , theta=1.5, Taper.args= list(k=2,theta=2.0,
+ dimension=2) )-> temp
+# temp is now a tapered 3X10 cross covariance matrix in sparse format.
+
+ is.spam( temp) # evaluates to TRUE
+
+# should be identical to
+# the direct matrix product
+
+ temp2<- Exp.cov( x[1:3,],x[1:10,], theta=1.5) * Wendland(rdist(x[1:3,],x[1:10,]),
+ theta= 2.0, k=2, dimension=2)
+ test.for.zero( as.matrix(temp), temp2)
+
+# Testing that the "V" option works as advertized ...
+x1<- x[1:20,]
+x2<- x[1:10,]
+
+V<- matrix( c(2,1,0,4), 2,2)
+Vi<- solve( V)
+
+u1<- t(Vi\%*\% t(x1))
+u2<- t(Vi\%*\% t(x2))
+
+look<- exp(-1*rdist(u1,u2))
+look2<- stationary.cov( x1,x2, V= V)
+test.for.zero( look, look2)
+
+
+# Here is an example of how the cross covariance multiply works
+# and lots of options on the arguments
+
+
+ Ctest<- rnorm(10)
+
+ temp<- stationary.cov( x,x[1:10,], C= Ctest,
+ Covariance= "Wendland",
+ k=2, dimension=2, theta=1.5 )
+
+# do multiply explicitly
+
+ temp2<- stationary.cov( x,x[1:10,],
+ Covariance= "Wendland",
+ k=2, dimension=2, theta=1.5 )\%*\% Ctest
+
+ test.for.zero( temp, temp2)
+
+
+# use the tapered stationary version
+# cov.args is part of the argument list passed to stationary.taper.cov
+# within Krig.
+# This example needs the spam package.
+#
+
+\dontrun{
+
+Krig(x,y, cov.function = "stationary.taper.cov", theta=1.5,
+ cov.args= list(Taper.args= list(k=2, dimension=2,theta=2.0) )
+ ) -> out2
+# NOTE: Wendland is the default taper here.
+}
+
+# BTW this is very similar to
+\dontrun{
+ Krig(x,y, theta= 1.5)-> out
+}
+
+
+
+}
+\keyword{spatial}
+% docclass is function
+
diff --git a/man/fields-internal.Rd b/man/fields-internal.Rd
new file mode 100644
index 0000000..a8707be
--- /dev/null
+++ b/man/fields-internal.Rd
@@ -0,0 +1,264 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{fields internal }
+\alias{[.spatialDesign}
+\alias{COR}
+\alias{D4transform.image}
+\alias{Krig.df.to.lambda}
+\alias{Krig.fdf}
+\alias{Krig.fgcv}
+\alias{Krig.fgcv.model}
+\alias{Krig.fgcv.one}
+
+\alias{Krig.flplike}
+\alias{Krig.fs2hat}
+\alias{Krig.ftrace}
+\alias{Krig.parameters}
+\alias{Krig.updateY}
+\alias{Krig.which.lambda}
+\alias{Krig.ynew}
+\alias{bisection.search}
+\alias{cat.matrix}
+\alias{cat.to.list}
+\alias{ceiling2}
+\alias{describe}
+\alias{dyadic.2check}
+\alias{dyadic.check}
+\alias{double.exp}
+\alias{Exp.earth.cov}
+\alias{fast.1way}
+\alias{find.upcross}
+\alias{gauss.cov}
+\alias{golden.section.search}
+\alias{imageplot.info}
+\alias{imagePlotInfo}
+\alias{imageplot.setup}
+\alias{krig.image.parameters}
+
+\alias{makeSimulationGrid}
+\alias{makeSimulationGrid2}
+\alias{makePredictionPoints}
+\alias{multWendlandGrid}
+
+\alias{minimax.crit}
+\alias{plot.krig.image}
+\alias{plot.sim.krig.image}
+\alias{plot.spatialDesign}
+\alias{predict.interp.surface}
+\alias{predict.krig.image}
+
+\alias{predict.surface.default}
+
+\alias{print.krig.image}
+\alias{print.spatialDesign}
+\alias{print.sreg}
+\alias{print.summary.Krig}
+\alias{print.summary.spatialProcess}
+\alias{print.summary.krig.image}
+\alias{print.summarySpatialDesign}
+\alias{print.summary.sreg}
+\alias{printGCVWarnings}
+\alias{qr.q2ty}
+\alias{qr.yq2}
+\alias{plot.qsreg}
+\alias{predict.qsreg}
+\alias{print.qsreg}
+\alias{qsreg.fit}
+\alias{qsreg.psi}
+\alias{qsreg.rho}
+\alias{qsreg.psi.OLD}
+\alias{qsreg.rho.OLD}
+\alias{qsreg.trace}
+\alias{quickPrint}
+\alias{summary.qsreg}
+\alias{radbas.constant}
+\alias{sim.krig.image}
+\alias{sreg.df.to.lambda}
+\alias{sreg.fdf}
+\alias{sreg.fgcv}
+\alias{sreg.fgcv.model}
+\alias{sreg.fgcv.one}
+\alias{sreg.fit}
+\alias{sreg.fs2hat}
+\alias{sreg.trace}
+\alias{stats.sim.krig.image}
+\alias{summaryGCV.Krig}
+\alias{summaryGCV.sreg}
+\alias{summary.krig.image}
+\alias{summary.spatialDesign}
+\alias{summary.sreg}
+\alias{surface}
+\alias{surface.default}
+\alias{surface.krig.image}
+\alias{unscale}
+\alias{world.dat}
+\alias{compactTOMatOLD}
+
+\alias{MLESpatialProcess.fast}
+
+\title{
+ Fields internal and secondary functions
+}
+\description{
+Listed below are supporting functions for the major methods in fields.
+}
+\usage{
+\method{[}{spatialDesign}(x, ...)
+
+Krig.df.to.lambda(df, D, guess = 1, tol = 1e-05)
+Krig.fdf (llam, info)
+Krig.fgcv (lam, obj)
+Krig.fgcv.model (lam, obj)
+Krig.fgcv.one (lam, obj)
+Krig.flplike (lambda, obj)
+Krig.fs2hat (lam, obj)
+Krig.ftrace (lam, D)
+Krig.parameters (obj, mle.calc=obj$mle.calc)
+Krig.updateY (out, Y, verbose = FALSE, yM=NA)
+Krig.which.lambda(out)
+Krig.ynew (out, y=NULL, yM=NULL )
+
+
+
+
+bisection.search (x1, x2, f, tol = 1e-07, niter = 25, f.extra =
+ NA, upcross.level = 0)
+
+cat.matrix (mat, digits = 8)
+
+cat.to.list (x, a)
+
+ceiling2 (m)
+
+describe (x)
+
+double.exp(x)
+
+dyadic.2check( m,n,cut.p=2)
+dyadic.check( n,cut.p=2)
+
+Exp.earth.cov (x1, x2, theta = 1)
+
+fast.1way (lev, y, w = rep(1, length(y)))
+
+find.upcross (fun, fun.info, upcross.level = 0, guess = 1, tol =
+1e-05)
+
+gauss.cov (...)
+
+
+golden.section.search (ax, bx, cx, f, niter = 25, f.extra = NA,
+ tol = 1e-05, gridx=NA)
+
+imagePlotInfo (...,breaks, nlevel)
+imageplot.info(...)
+imageplot.setup(x, add=FALSE, legend.shrink = 0.9, legend.width = 1,
+horizontal = FALSE, legend.mar=NULL, bigplot = NULL, smallplot = NULL,...)
+
+
+makeSimulationGrid(mKrigObject, predictionPoints, nx, ny, nxSimulation,
+ nySimulation, gridRefinement, gridExpansion)
+makeSimulationGrid2 (fastTpsObject, predictionPointsList,
+ gridRefinement, gridExpansion)
+
+
+
+minimax.crit (obj, des = TRUE, R)
+
+
+\method{plot}{spatialDesign}(x,...)
+
+\method{predict}{interp.surface}(object, loc,...)
+
+\method{predict}{surface}(object, ...)
+\method{predict}{surface.default}(object, ...)
+
+
+\method{print}{spatialDesign} (x,...)
+\method{print}{sreg}(x, ...)
+\method{print}{summary.Krig} (x, ...)
+\method{print}{summary.spatialProcess} (x, ...)
+\method{print}{summarySpatialDesign} (x, digits = 4,...)
+\method{print}{summary.sreg} (x, ...)
+
+printGCVWarnings( Table, method = "all")
+
+makePredictionPoints(mKrigObject, nx, ny, predictionPointsList)
+
+multWendlandGrid( grid.list,center, delta, coef, xy = c(1, 2))
+
+qr.q2ty (qr, y)
+
+qr.yq2 (qr, y)
+\method{plot}{qsreg}(x, pch = "*", main = NA,...)
+\method{predict}{qsreg}(object, x, derivative = 0, model = object$ind.cv.ps,...)
+\method{print}{qsreg} (x, ...)
+qsreg.fit (x, y, lam, maxit = 50, maxit.cv = 10,
+ tol = 1e-04, offset = 0, sc = sqrt(var(y)) * 1e-07, alpha = 0.5,
+ wt = rep(1, length(x)), cost = 1)
+qsreg.psi( r,alpha=.5,C=1)
+qsreg.rho( r,alpha=.5,C=1)
+qsreg.trace(x, y, lam, maxit = 50, maxit.cv = 10, tol = 1e-04,
+ offset = 0, sc = sqrt(var(y)) * 1e-07, alpha = 0.5,
+ wt = rep(1, length(x)), cost = 1)
+qsreg.rho.OLD(r, alpha = 0.5, C = 1)
+qsreg.psi.OLD(r, alpha = 0.5, C = 1)
+
+quickPrint(obj, max.values = 10)
+
+
+
+radbas.constant (m, d)
+
+sreg.df.to.lambda (df, x, wt, guess = 1, tol = 1e-05)
+sreg.fdf (h, info)
+sreg.fgcv (lam, obj)
+sreg.fgcv.model (lam, obj)
+sreg.fgcv.one (lam, obj)
+sreg.fit (lam, obj, verbose=FALSE)
+sreg.fs2hat (lam, obj)
+sreg.trace (h, info)
+
+summaryGCV.Krig(object, lambda, cost = 1, verbose = FALSE,
+ offset = 0, y = NULL, ...)
+summaryGCV.sreg (object, lambda, cost = 1, nstep.cv = 20,
+ offset = 0, verbose = TRUE,...)
+
+\method{summary}{qsreg} (object, ...)
+\method{summary}{spatialDesign} (object, digits = 4, ...)
+\method{summary}{sreg} (object, digits = 4, ...)
+
+surface(object , ...)
+\method{surface}{default} (object, ...)
+
+
+unscale (x, x.center, x.scale)
+
+MLESpatialProcess.fast(x, y, lambda.start=.5, theta.start = NULL,
+ cov.function = "stationary.cov",
+ cov.args = list(Covariance = "Matern", smoothness = 1),
+ Distance = "rdist", verbose=FALSE, optim.args=NULL, ...)
+
+}
+\keyword{internal}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/fields-stuff.Rd b/man/fields-stuff.Rd
new file mode 100644
index 0000000..5b40c74
--- /dev/null
+++ b/man/fields-stuff.Rd
@@ -0,0 +1,148 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{fields-stuff}
+\alias{fields.diagonalize2}
+\alias{fields.diagonalize}
+\alias{fields.duplicated.matrix}
+\alias{fields.mkpoly}
+\alias{fields.derivative.poly}
+\alias{fields.evlpoly}
+\alias{fields.evlpoly2}
+
+\title{Fields supporting functions}
+\description{
+Some supporting functions that are internal to fields top level
+methods. Variants of these might be found in the R base but these
+have been written for cleaner code or efficiency.
+}
+\usage{
+fields.diagonalize2(A,B, verbose=FALSE)
+fields.diagonalize(A,B)
+fields.duplicated.matrix(mat, digits = 8)
+
+fields.mkpoly(x, m = 2)
+
+fields.derivative.poly(x, m,dcoef)
+
+fields.evlpoly( x, coef)
+
+fields.evlpoly2( x, coef, ptab)
+
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+\item{A}{
+A positive definite matrix}
+\item{B}{
+A nonnegative definite matrix}
+\item{mat}{
+Arbitrary matrix for examining rows}
+\item{digits}{Number of significant digits to use for comparing
+elements to determine duplicate values. }
+
+\item{x}{
+Arbitrary matrix where rows are components of a multidimensional
+vector}
+\item{m}{ The null space degree -- results in a polynomial of degree
+(m-1) }
+\item{dcoef}{ Coefficients of a multidimensional polynomial}
+
+\item{coef}{Polynomial coefficients.}
+\item{ptab}{Table of powers of different polnomial terms.}
+\item{verbose}{If TRUE prints condition number of A+B}
+}
+
+\details{
+Given two matrices A (positive definite) and B (nonnegative definite),
+\code{fields.diagonalize} and \code{fields.diagonalize2} find the
+matrix transformation G that will convert A to an identity matrix and
+B to a diagonal matrix:
+
+ G\^T A G= I G\^T B G= D.
+
+
+\code{fields.diagonalize2} is not as easy to follow as
+\code{fields.diagonalize} but may be more stable and is the version
+used in the Krig engine.
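One standard construction of G (an assumed sketch; the fields versions differ in numerical detail, and \code{diagonalizeSketch} is a hypothetical name) forms A^{-1/2} from the eigen decomposition of A and then diagonalizes A^{-1/2} B A^{-1/2}:

```r
# Base R sketch of simultaneous diagonalization: t(G) %*% A %*% G = I and
# t(G) %*% B %*% G = diag(D).
diagonalizeSketch <- function(A, B) {
  eA <- eigen(A, symmetric = TRUE)
  Ainvhalf <- eA$vectors %*% diag(1 / sqrt(eA$values)) %*% t(eA$vectors)
  eM <- eigen(Ainvhalf %*% B %*% Ainvhalf, symmetric = TRUE)
  G <- Ainvhalf %*% eM$vectors
  list(G = G, D = eM$values)
}

set.seed(2)
A <- crossprod(matrix(rnorm(16), 4, 4)) + diag(4)  # positive definite
B <- crossprod(matrix(rnorm(16), 4, 4))            # nonnegative definite
res <- diagonalizeSketch(A, B)
```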
+
+\code{fields.duplicated.matrix} finds duplicate rows in a matrix. The
+digits argument is the number of digits that are considered in the
+comparison.
+The returned value is an array of integers from 1:M where M is the
+number of unique rows and duplicate rows are referenced in the same
+order that they appear as the rows of \code{mat}.
+
+\code{fields.mkpoly} computes the complete matrix of all monomial
+terms up to degree (m-1). Each row of \code{x} gives the components of
+a vector. (The fields function mkpoly returns the number of these
+terms.) In 2 dimensions with m=3 there are 6 polynomial terms up to
+quadratic (3-1=2) order and they will be returned as the matrix:
+
+cbind( 1 , x[,1], x[,2], x[,1]**2, x[,1]*x[,2], x[,2]**2 )
+
+This function is used for the fixed effects polynomial or spatial
+drift used in spatial estimating functions Krig, Tps and mKrig.
+The matrix ptab is a table of the powers in each term for each
+variable and is included as an attribute to the matrix returned by
+this function.
+See the \code{attr} function for extracting an attribute from an
+object.
+
+\code{ptab} for the example above is
+\preformatted{
+ [,1] [,2]
+[1,] 0 0
+[2,] 1 0
+[3,] 0 1
+[4,] 2 0
+[5,] 1 1
+[6,] 0 2
+}
+
+This information is used in finding derivatives of the polynomial.
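The monomial matrix and its power table can be reproduced in a few lines of base R. This is a sketch only: \code{mkpolySketch} is a hypothetical name, and fields.mkpoly attaches ptab as an attribute rather than taking it as input.

```r
# Build the quadratic (m = 3) monomial matrix for 2-d locations from the
# power table shown above: column j is x[,1]^ptab[j,1] * x[,2]^ptab[j,2].
ptab <- rbind(c(0, 0), c(1, 0), c(0, 1), c(2, 0), c(1, 1), c(0, 2))

mkpolySketch <- function(x, ptab) {
  apply(ptab, 1, function(pw) x[, 1]^pw[1] * x[, 2]^pw[2])
}

x <- matrix(1:6, 3, 2)
P <- mkpolySketch(x, ptab)  # same as cbind(1, x[,1], x[,2], x[,1]^2, x[,1]*x[,2], x[,2]^2)
```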
+
+\code{fields.derivative.poly} finds the partial derivative matrix of
+a multidimensional polynomial of degree (m-1) at different vector
+values and with coefficients \code{dcoef}.
+This function has been organized to be a clean utility for
+predicting the derivative of the estimated function from Krig or
+mKrig.
+Within the fields context
+the polynomial itself would be evaluated as
+ fields.mkpoly( x,m)\%*\%dcoef.
+If x has d columns (also the dimension of the polynomial) and n rows,
+the partial derivatives of this polynomial at the locations x can be
+organized in a nXd matrix. This is the object returned by this
+function.
+
+\code{evlpoly} and \code{evlpoly2} are FORTRAN based functions for
+evaluating univariate polynomials and multivariate polynomials. The
+table of powers (ptab) needed for evlpoly2 is the same format as that
+returned by the fields.mkpoly function.
+
+
+}
+\author{Doug Nychka}
+\seealso{Krig, Tps, as.image, predict.Krig, predict.mKrig,
+Krig.engine.default, Wendland}
+\keyword{spatial}
+% at least one, from doc/KEYWORDS
diff --git a/man/fields.Rd b/man/fields.Rd
new file mode 100644
index 0000000..3a93f6b
--- /dev/null
+++ b/man/fields.Rd
@@ -0,0 +1,182 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{fields}
+\alias{fields-package}
+\alias{fields}
+\title{
+fields - tools for spatial data
+}
+\description{
+ Fields is a collection of programs for curve and function
+ fitting with an emphasis on spatial data and spatial statistics. The
+ major methods implemented include cubic and thin plate splines,
+ universal Kriging and Kriging for large data sets. One main feature
+ is that any covariance function implemented in R code can be used for
+ spatial prediction. Another important feature is that fields will
+ take advantage of compactly supported covariance functions in a
+ seamless way through the spam package. See \code{library(help=fields)}
+ for a listing of all the fields contents.
+
+fields strives to have readable and tutorial code. Take a look at the
+source code for \code{Krig} and \code{mKrig} to see how things work
+"under the hood".
+To load fields with the comments retained in the source
+use \code{ keep.source = TRUE} in the \code{library} command.
+We also keep the source on-line:
+browse the directory
+\url{http://www.image.ucar.edu/~nychka/Fields/Source} for commented source.
+\url{http://www.image.ucar.edu/~nychka/Fields/Help/00Index.html} is a
+page for html formatted help files. (If you obtain the source version of the
+package (file ends in .gz) the commented source code is in the R subdirectory.)
+
+\strong{Major methods}
+\itemize{
+
+\item \code{\link{Tps}} Thin Plate spline
+regression including GCV and REML estimates for the smoothing parameter.
+
+\item \code{\link{spatialProcess}} An easy to use method that fits a spatial process model
+( e.g. Kriging) but also estimates the key spatial parameters: nugget variance, sill variance and range by maximum likelihood. Default covariance model is a Matern covariance function.
+
+\item \code{\link{Krig}} Spatial process estimation that is the core function of fields.
+
+The Krig function allows you to supply a covariance function that is
+written in native R code. See (\code{\link{stationary.cov}}) that includes
+several families of covariances and distance metrics including the
+Matern and great circle distance.
+
+\item \code{\link{mKrig}} (micro Krig) and \code{\link{fastTps}} are
+fast, efficient Universal Kriging and spline-like functions that can
+take advantage of sparse covariance functions and thus handle very
+large numbers of spatial locations.
+
+\item \code{\link{QTps}} An easy to use extension of thin plate splines for quantile and robust surface fitting.
+
+\item \code{\link{mKrigMLEGrid}} for maximum likelihood estimates of covariance parameters. This function also
+handles replicate fields assumed to be independent realizations at the same locations.
+
+}
+
+\strong{Other noteworthy functions}
+\itemize{
+
+\item \code{\link{vgram}} and \code{\link{vgram.matrix}} find
+variograms for spatial data (and with temporal replications).
+
+\item \code{\link{cover.design}} Generates space-filling designs where the distance
+function is expressed in R code.
+
+\item \code{as.image}, \code{image.plot}, \code{drape.plot}, \code{quilt.plot}
+\code{add.image}, \code{crop.image}, \code{half.image}, \code{average.image},
+\code{\link{designer.colors}}, \code{\link{color.scale}}, \code{\link{in.poly}} Many
+convenient functions for working with image data and rationally (well,
+maybe reasonably) creating and placing a color scale on an image plot.
+See also \code{\link{grid.list}} for how fields works with grids and \code{\link{US}}
+and \code{\link{world}} for adding a map quickly.
+
+\item \code{\link{sreg}} \code{\link{splint}} Fast 1-D smoothing
+splines and interpolating cubic splines.
+
+}
+
+
+\strong{ Generic functions that support the methods}
+
+\code{plot} - diagnostic plots of fit \cr
+\code{summary}- statistical summary of fit \cr
+\code{print}- shorter version of summary \cr
+\code{\link{surface}}- graphical display of fitted surface \cr
+\code{predict}- evaluation of the fit at arbitrary points \cr
+\code{\link{predictSE}}- prediction standard errors at arbitrary points. \cr
+\code{\link{sim.rf}}- Simulate a random field on a 2-d grid.
+
+\strong{Getting Started}
+
+ Try some of the examples from help files for \code{Tps} or
+\code{spatialProcess}.
+
+\strong{Graphics tips}
+
+\code{help( fields.hints)}
+ gives some R code tricks for setting up common legends and axes.
+And has little to do with this package!
+
+\strong{Testing}
+See \code{help(fields.tests)} for testing fields.
+
+\strong{Some fields datasets}
+\itemize{
+\item \code{\link{CO2}} Global satellite CO2 concentrations (simulated field)
+\item \code{\link{RCMexample}} Regional climate model output
+\item \code{\link{lennon}} Image of John Lennon
+\item \code{\link{COmonthlyMet}} Monthly mean temperatures and precip for Colorado
+\item \code{\link{RMelevation}} Digital elevations for the Rocky Mountain Empire
+\item \code{\link{ozone2}} Daily max 8 hour ozone concentrations for the US midwest
+for summer 1987.
+\item \code{\link{PRISMelevation}} Digital elevations for the
+ continental US at approximately 4km resolution
+\item \code{\link{NorthAmericanRainfall}} 50+ year average and trend for summer rainfall at
+1700+ stations.
+\item \code{\link{rat.diet}} Small paired study on rat food intake over time.
+\item \code{\link{WorldBankCO2}} Demographic and carbon emission data
+ for 75 countries and for 1999.
+}
+
+\strong{DISCLAIMER:}
+ The
+authors can not guarantee the correctness of any function or program in
+this package.
+
+}
+
+\examples{
+
+# some air quality data, daily surface ozone measurements for the Midwest:
+data(ozone2)
+x<-ozone2$lon.lat
+y<- ozone2$y[16,] # June 18, 1987
+
+# pixel plot of spatial data
+quilt.plot( x,y)
+US( add=TRUE) # add US map
+
+fit<- Tps(x,y)
+# fits a GCV thin plate smoothing spline surface to ozone measurements.
+# Hey, it does not get any easier than this!
+
+summary(fit) #diagnostic summary of the fit
+set.panel(2,2)
+plot(fit) # four diagnostic plots of fit and residuals.
+
+# quick plot of predicted surface
+set.panel()
+surface(fit) # contour/image plot of the fitted surface
+US( add=TRUE, col="magenta", lwd=2) # US map overlaid
+title("Daily max 8 hour ozone in PPB, June 18th, 1987")
+
+
+fit2<- spatialProcess( x,y)
+# a "Kriging" model. The covariance defaults to a Matern with smoothness 1.0.
+# the nugget, sill and range parameters are found by maximum likelihood
+# summary, plot, and surface also work for fit2 !
+
+
+}
+\keyword{datasets}
diff --git a/man/fields.grid.Rd b/man/fields.grid.Rd
new file mode 100644
index 0000000..98a324e
--- /dev/null
+++ b/man/fields.grid.Rd
@@ -0,0 +1,84 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{fields.grid}
+\alias{mKrig.grid}
+\title{
+Using mKrig for predicting on a grid.
+}
+
+\description{
+This is an extended example for using the sparse/fast interpolation
+methods in mKrig to evaluate a Kriging estimate on a large grid.
+}
+
+\details{
+\code{mKrig} is a flexible function for surface fitting using
+a spatial process model. It can also exploit sparse matrix methods for large data sets by using a compactly supported covariance.
+The example below shows how to evaluate a solution on a big grid. (Thanks to Jan Klennin for this example.)
+}
+\examples{
+x<- RMprecip$x
+y<- RMprecip$y
+
+Tps( x,y)-> obj
+
+# make up an 80X80 grid that has ranges of observations
+# use same coordinate names as the x matrix
+
+glist<- fields.x.to.grid(x, nx=80, ny=80) # this is a cute way to get a default grid that covers x
+
+# convert grid list to actual x and y values ( try plot( Bigx, pch="."))
+ make.surface.grid(glist)-> Bigx
+
+# include actual x locations along with grid.
+ Bigx<- rbind( x, Bigx)
+
+# evaluate the surface on this set of points (exactly)
+
+ predict(obj, x= Bigx)-> Bigy
+
+# set the range for the compact covariance function
+# this will involve less than 20 nearest neighbors that have
+# nonzero covariance
+#
+
+ V<- diag(c( 2.5*(glist$lon[2]-glist$lon[1]),
+ 2.5*(glist$lat[2]-glist$lat[1])))
+\dontrun{
+# this is an interpolation of the values using a compact
+# but thin plate spline like covariance.
+ mKrig( Bigx,Bigy, cov.function="wendland.cov",k=4, V=V,
+ lambda=0)->out2
+# the big evaluation: this takes about 45 seconds on a Mac G4 laptop
+ predictSurface( out2, nx=400, ny=400)-> look
+}
+
+# the nice surface
+\dontrun{
+ surface( look)
+ US( add=TRUE, col="white")
+}
+
+}
+
+
+\keyword{hplot}
diff --git a/man/fields.hints.Rd b/man/fields.hints.Rd
new file mode 100644
index 0000000..fbfae9f
--- /dev/null
+++ b/man/fields.hints.Rd
@@ -0,0 +1,180 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{fields.hints}
+\alias{fields.hints}
+\alias{fields.style}
+\alias{fields.color.picker}
+\title{
+fields - graphics hints
+}
+
+\description{
+ Here are some technical hints for assembling multiple plots with common
+legends or axes and setting the graphics parameters to make more
+readable figures. We also give an index to the default colors in R graphics
+and show how to set their definitions in LaTeX.
+All these hints use the standard graphics environment.
+}
+\usage{
+fields.style()
+fields.color.picker()
+}
+\details{
+\code{fields.style} is a simple
+function to enlarge the characters in a plot and set the colors. List this out to modify the choices.
+
+\preformatted{
+
+## Two examples of packing a panel of plots together
+## to conserve white space.
+## see also the example in image.plot using split.screen.
+## The basic trick is to use the oma option to reserve some space around the
+## plots. Then unset the outer margins to use that room.
+
+library( fields)
+
+# some hokey image data
+x<- 1:20
+y<- 1:15
+z<- outer( x,y,"+")
+zr<- range( c(z))
+
+# add common legend to 3X2 panel
+
+par( oma=c(4,0,0,0))
+set.panel( 3,2)
+par( mar=c(1,1,0,0))
+
+# squish plots together with just 1 space between
+for ( k in 1:6){
+image( x,y,z, axes=FALSE, xlab="", ylab="", zlim=zr)
+}
+
+par( oma=c(0,0,0,0))
+image.plot( zlim=zr, legend.only=TRUE, horizontal=TRUE, legend.mar=5)
+
+# you may have to play around with legend.mar and the oma settings to
+# get enough space.
+
+
+
+##
+### also add some axes on the sides. in a lattice style
+## note oma adds some more room at bottom.
+
+par( oma=c(8,6,1,1))
+set.panel( 3,2)
+par( mar=c(1,1,0,0))
+##
+for ( k in 1:6){
+ image( x,y,z, axes=FALSE, xlab="", ylab="", zlim=zr)
+ box() # box around figure
+
+# maybe draw an x axis
+ if( k \%in\% c(5,6) ){
+ axis( 1, cex.axis=1.5)
+ mtext( line=4, side=1, "Xstuff")}
+
+# maybe draw a y axis
+ if( k \%in\% c(1,3,5) ){
+ axis( 2, cex.axis=1.5)
+ mtext( line=4, side=2, "Ystuff")}
+}
+
+# same trick of adding a legend strip.
+par( oma=c(0,0,0,0))
+image.plot( zlim=zr, legend.only=TRUE, horizontal=TRUE, legend.mar=5)
+
+# reset panel
+set.panel()
+
+
+####
+# show colors
+## the factory colors:
+
+clab<- colors()
+n<- length( clab)
+N<- ceiling( sqrt(n) )
+M<- N
+temp<- rep( NA,M*N)
+temp[1:n] <- 1:n
+z<- matrix(temp, M,N)
+
+image(seq(.5,M+.5,,M+1), seq(.5,N+.5,,N+1)
+ , z, col=clab, axes=FALSE, xlab="", ylab="")
+
+
+# see the function fields.color.picker() to locate colors
+
+
+
+# dumping out colors by name for a latex document
+# this creates text strings that are the LaTeX color definitions
+# using the definecolor function.
+
+# grab all of the R default colors
+clab<- colors()
+
+for( nn in clab){
+ temp<- signif(col2rgb(nn)/256, 3)
+ cat(
+ "\\definecolor{",
+ nn, "}",
+ "{rgb}{", temp[1],
+ ",", temp[2],
+ ",", temp[3],
+ "}", fill=TRUE , sep="")
+ }
+
+# this loop prints out definitions such as
+# \definecolor{yellowgreen}{rgb}{0.602,0.801,0.195}
+# having loaded the color package in LaTeX
+# defining this color
+# use the construction {\color{yellowgreen} THIS IS A COLOR}
+# to use this color in a talk or document.
+
+# this loop prints out all the colors in LaTeX language
+# as their names and can be converted to a pdf for handy reference.
+
+sink( "showcolors.tex")
+
+clab<- colors()
+for( nn in clab){
+ temp<- signif(col2rgb(nn)/256, 3)
+ cat(
+ "\\definecolor{",
+ nn, "}",
+ "{rgb}{", temp[1],
+ ",", temp[2],
+ ",", temp[3],
+ "}", fill=TRUE , sep="")
+ cat( paste("{ \\color{",nn,"} ", nn," $\\bullet$ \\\\ }", sep=""),
+ fill=TRUE)
+}
+sink()
+
+} %end preformatted
+
+}
+
+\keyword{hplot}
diff --git a/man/fields.tests.Rd b/man/fields.tests.Rd
new file mode 100644
index 0000000..c3d5c48
--- /dev/null
+++ b/man/fields.tests.Rd
@@ -0,0 +1,150 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{fields testing scripts}
+\alias{fields.tests}
+\alias{test.for.zero}
+
+\title{
+Testing fields functions
+}
+\description{
+
+Some of the basic methods in fields can be tested by directly
+implementing the linear algebra using matrix expressions and
+other functions can be cross checked within fields. These
+comparisons are done in the R source code test files in the tests
+subdirectory of fields. The function \code{test.for.zero} is useful for
+making these comparisons in a meaningful and documented way.
+}
+
+\usage{
+test.for.zero( xtest, xtrue, tol= 1.0e-8, relative=TRUE, tag=NULL)
+}
+
+\arguments{
+
+\item{xtest}{Vector of target values}
+
+\item{xtrue}{Vector of reference values}
+
+\item{tol}{Tolerance to judge whether the test passes.}
+
+\item{relative}{If true a relative error comparison is used. (See
+ details below.)}
+
+\item{tag}{ A text string to be printed out with the test results as a
+ reference}
+
+}
+
+\details{
+IMPORTANT: If the R object \code{test.for.zero.flag} exists with any
+value ( e.g. \code{test.for.zero.flag <- 1} ) then when the test fails
+this function will also generate an error in addition to printing a
+message. This option is added to ensure
+that any test scripts will generate an error when any individual test
+fails.
+
+An example:
+\preformatted{
+> test.for.zero( 1:10, 1:10 + 1e-10, tag="First test")
+Testing: First test
+PASSED test at tolerance 1e-08
+}
+
+\preformatted{
+> test.for.zero( 1:10, 1:10 + 1e-10, tag="First test", tol=1e-12)
+Testing: First test
+FAILED test value = 1.818182e-10 at tolerance 1e-12
+}
+\preformatted{
+> test.for.zero.flag <- 1
+Testing: First test
+FAILED test value = 1.818182e-10 at tolerance 1e-12
+Error in test.for.zero(1:10, 1:10 + 1e-10, tag = "First test", tol = 1e-12) :
+}
+
+The scripts in the \code{tests} subdirectory are
+
+\describe{
+\item{Krig.test.R:}{Tests basic parts of the Krig and Tps functions including replicated
+and weighted observations. }
+
+\item{Krig.se.test.R:}{Tests computations of standard errors for the
+Kriging estimate.}
+
+\item{Krig.se.grid.test.R:}{Tests approximate standard errors for the
+Krig function found by Monte Carlo conditional simulation.}
+
+\item{Krig.test.W.R:}{Tests predictions and A matrix when an off diagonal
+observation weight matrix is used.}
+
+\item{Krig.se.W.R:}{Tests standard errors when an off diagonal
+observation weight matrix is used.}
+
+\item{spam.test.R:}{Tests sparse matrix formats and linear algebra.}
+
+\item{Wend.test.R:}{Tests the form of the Wendland covariance family and its
+use of sparse matrix formats.}
+
+\item{diag.multiply.test.R:}{Tests a special (efficient) version of matrix
+multiplication for diagonal matrices.}
+
+\item{evlpoly.test.R:}{Tests evaluation of univariate and multivariate
+polynomials.}
+
+\item{mKrig.test.R:}{Tests the micro Krig function with and without sparse
+matrix methods.}
+
+
+}
+
+To run the tests just attach the fields library and source the testing
+file. In the fields source code these are in a subdirectory
+"tests". Compare the output to the "XXX.Rout.save" text file.
+
+ \code{test.for.zero} is used to print out the result for each
+individual comparison.
+Failed tests are potentially bad and are reported with a
+string beginning
+
+"FAILED test value = ... "
+
+If the object test.for.zero.flag exists then an error is also generated
+when the test fails.
+
+
+FORM OF COMPARISON:
+The actual test done is the sum of absolute differences:
+
+test value = \code{
+sum( abs(c(xtest) - c( xtrue) ) ) /denom}
+
+Where \code{denom} is either \code{ mean( abs(c(xtrue)))} for relative error
+or 1.0 otherwise.
+
+Note the use of "c" here to stack any structure in xtest and xtrue into
+a vector.
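The comparison above can be written out directly. Here is a minimal base R sketch of just the test statistic (a hypothetical helper named test_value, not the fields implementation, which also handles printing and the test.for.zero.flag error behavior):

```r
# Minimal sketch of the test.for.zero comparison.
# test_value is a hypothetical name; fields' own function does more.
test_value <- function(xtest, xtrue, relative = TRUE) {
  # "c" stacks any structure (matrix, list-free array) into a vector
  denom <- if (relative) mean(abs(c(xtrue))) else 1.0
  sum(abs(c(xtest) - c(xtrue))) / denom
}

# a 1e-10 perturbation of 1:10 gives 1e-9 / mean(abs(1:10)) = 1e-9 / 5.5,
# about 1.818e-10, which passes at the default tolerance of 1e-8
test_value(1:10, 1:10 + 1e-10) < 1e-8
```

Note this reproduces the value 1.818182e-10 printed in the FAILED example above, where the tighter tolerance 1e-12 is used.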
+
+}
+
+\keyword{misc}
diff --git a/man/flame.Rd b/man/flame.Rd
new file mode 100644
index 0000000..8fb1527
--- /dev/null
+++ b/man/flame.Rd
@@ -0,0 +1,49 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{flame}
+\alias{flame}
+\title{
+ Response surface experiment ionizing a reagent
+}
+\description{
+The characteristics of an ionizing flame are varied with the intent of
+maximizing the intensity of emitted light for lithium in
+solution. Areas outside of the measurements are where the mixture may
+explode! Note that the optimum is close to the boundary. Source of data is
+from a master's level lab experiment in
+analytical chemistry from Chuck Boss's course at NCSU.
+
+This is a list with the following components:
+}
+\arguments{
+\item{x}{
+x is a 2 column matrix with the different Fuel and oxygen flow rates for
+the burner.
+}
+\item{y}{
+y is the response. The intensity of light at a particular
+wavelength indicative of Lithium ions.
+}
+}
+\keyword{datasets}
+% docclass is data
+% Converted by Sd2Rd version 1.21.
diff --git a/man/gcv.Krig.Rd b/man/gcv.Krig.Rd
new file mode 100644
index 0000000..0276430
--- /dev/null
+++ b/man/gcv.Krig.Rd
@@ -0,0 +1,152 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{gcv.Krig}
+\alias{gcv.Krig}
+\alias{gcv.sreg}
+
+\title{Finds profile likelihood and GCV estimates of
+smoothing parameters for splines and Kriging.}
+\description{
+ This is a secondary function that will use the computed Krig object and
+find various estimates of the smoothing parameter lambda. These are
+several different flavors of cross-validation, a moment matching
+strategy and the profile likelihood. This function can also be used
+independently with different data sets (the y's) if the covariates ( the
+x's) are the same and thus reduce the computation.
+}
+\usage{
+gcv.Krig(
+out, lambda.grid = NA, cost = 1, nstep.cv = 200, rmse
+ = NA, verbose = FALSE, tol = 1e-05, offset = 0, y =
+ NULL, give.warnings = TRUE)
+
+gcv.sreg (
+out, lambda.grid = NA, cost = 1, nstep.cv = 80, rmse =
+ NA, offset = 0, trmin = NA, trmax = NA, verbose =
+ FALSE, tol = 1e-05, give.warnings = TRUE)
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+ \item{out}{ A Krig or sreg object.}
+ \item{lambda.grid}{ Grid of lambdas for coarse search. The default is
+equally spaced on effective degree of freedom scale. }
+ \item{cost}{ Cost used in GCV denominator }
+ \item{nstep.cv}{ Number of grid points in coarse search. }
+ \item{rmse}{ Target root mean squared error to match with
+ the estimate of sigma**2 }
+ \item{verbose}{ If true prints intermediate results. }
+ \item{tol}{ Tolerance in declaring convergence of the golden section search or bisection search. }
+ \item{offset}{ Additional degrees of freedom to be added into the GCV denominator.
+}
+ \item{y}{A new data vector to be used in place of the one associated with the
+Krig object (obj) }
+ \item{give.warnings}{ If FALSE will suppress warnings about grid search being out of
+range for various estimates based on GCV and REML.}
+
+\item{trmin}{Minimum value of lambda for grid search specified in terms
+of effective degrees of freedom.}
+\item{trmax}{Maximum value for grid search.}
+
+}
+\details{
+
+ This function finds several estimates of the smoothing parameter using
+first a coarse grid search followed by a refinement using a minimization
+(in the case of GCV or maximum likelihood) or bisection in the case of
+matching the rmse. Details of the estimators can be found in the help file
+for the Krig function.
+
+The Krig object passed to this function has some matrix decompositions that
+facilitate rapid computation of the GCV and ML functions and do not depend
+on the independent variable. This makes it possible to compute the Krig
+object once and to reuse the decompositions for multiple data sets. (But
+keep in mind if the x values change then the object must be recalculated.)
+The example below shows how this can be used for a simulation study on the
+variability of estimating the smoothing parameter.
+
+
+}
+\value{A list giving a summary of estimates and diagnostic details with the
+following components:
+ \item{gcv.grid }{ A matrix describing the results of the
+coarse search. Rows index values of lambda and the columns are:
+lambda= value of smoothing parameter,
+trA= effective degrees of freedom,
+GCV= usual GCV criterion,
+GCV.one= leave-one-out GCV criterion,
+GCV.model= GCV based on average response in the case of replicates,
+shat= implied estimate of sigma,
+-Log Profile= negative log of the profile likelihood for the lambda.
+}
+ \item{lambda.est}{Summary table of all estimates.
+Rows index the different types of estimates:
+GCV, GCV.model, GCV.one, RMSE, pure error, -Log Profile
+and the columns are the estimated values for lambda, trA, GCV, shat.
+}
+}
+\author{Doug Nychka}
+
+\seealso{
+\code{\link{Krig}},
+\code{\link{Tps}},
+\code{\link{predict.Krig}} }
+\examples{
+
+#
+Tps( ChicagoO3$x, ChicagoO3$y)-> obj # default is to find lambda by GCV
+summary( obj)
+
+gcv.Krig( obj)-> out
+print( out$lambda.est) # results agree with Tps summary
+
+sreg( rat.diet$t, rat.diet$trt)-> out
+gcv.sreg( out, tol=1e-10) # higher tolerance search for minimum
+\dontrun{
+# a simulation example
+x<- seq( 0,1,,150)
+f<- x**2*( 1-x)
+f<- f/sqrt( var( f))
+
+set.seed(123) # let's all use the same seed
+sigma<- .1
+y<- f + rnorm( 150)*sigma
+
+Tps( x,y)-> obj # create Krig object
+
+hold<- hold2<- matrix( NA, ncol=6, nrow=200)
+
+for( k in 1:200){
+# look at GCV estimates of lambda
+# new data simulated
+ y<- f + rnorm(150)*sigma
+# save GCV estimates
+lambdaTable<- gcv.Krig(obj, y=y, give.warnings=FALSE)$lambda.est
+hold[k,]<- lambdaTable[1,]
+hold2[k,]<- lambdaTable[6,]
+}
+matplot( cbind(hold[,2], hold2[,2]),cbind( hold[,4],hold2[,4]),
+ xlab="estimated eff. df", ylab="sigma hat", pch=16, col=c("orange3", "green2"), type="p")
+yline( sigma, col="grey", lwd=2)
+
+}
+}
+\keyword{spatial}
diff --git a/man/grid.list.Rd b/man/grid.list.Rd
new file mode 100644
index 0000000..5a1db10
--- /dev/null
+++ b/man/grid.list.Rd
@@ -0,0 +1,226 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{grid list}
+\alias{grid list}
+\alias{grid.list}
+\alias{fields.x.to.grid}
+\alias{parse.grid.list}
+\alias{fields.convert.grid}
+\alias{discretize.image}
+\alias{make.surface.grid}
+\alias{unrollZGrid}
+
+\title{
+Some simple functions for working with gridded data and
+the grid format (grid.list) used in fields.
+}
+\description{
+
+The object grid.list refers to a list that contains
+information for evaluating a function on a 2-dimensional
+grid of points. If a function has more than two
+independent variables then one also needs to specify the
+constant levels for the variables that are not being
+varied. This format is used in several places in fields
+for functions that evaluate function estimates and plot
+surfaces. These functions provide some default conversions
+between other representations and the grid.list. The function
+\code{discretize.image} is a useful tool for "registering"
+irregular 2-d points to a grid.
+}
+
+\usage{
+parse.grid.list( grid.list, order.variables="xy")
+fields.x.to.grid(x,nx=80, ny=80, xy=c(1,2))
+fields.convert.grid( midpoint.grid )
+discretize.image(x, m = 64, n = 64, grid = NULL, expand = c(1, 1),
+ boundary.grid = FALSE, na.rm = TRUE)
+make.surface.grid( grid.list)
+unrollZGrid( grid.list, ZGrid)
+}
+\arguments{
+\item{grid.list}{ No surprises here -- a grid list! These can be unequally
+spaced.}
+\item{order.variables}{ If "xy" the x variable will be subsequently plotted
+as the horizontal variable. If "yx" the x variable will be on the vertical
+axis.}
+\item{x}{ A matrix of independent variables such as the locations of
+observations given to Krig.}
+\item{nx}{Number of grid points for x variable.}
+\item{ny}{Number of grid points for y variable.}
+\item{m}{Number of grid points for x variable.}
+\item{n}{Number of grid points for y variable.}
+\item{na.rm}{Remove missing values if TRUE}
+\item{xy}{The column positions that locate the x and y variables for the
+grid.}
+\item{grid}{ A grid list!}
+\item{expand}{
+A scalar or two column vector that will expand the
+grid beyond the range of the observations.}
+
+\item{midpoint.grid}{ Grid midpoints to convert to grid boundaries.}
+
+\item{boundary.grid}{ If TRUE interpret grid points as boundaries of
+grid boxes. If FALSE interpret as the midpoints of the boxes. }
+
+\item{ZGrid}{An array or list form of covariates to use for
+ prediction. This must match the
+\code{grid.list} argument. e.g. ZGrid and grid.list describe the same
+ grid.
+If ZGrid is an array then the first two indices are the x and y
+ locations in the
+grid. The third index, if present, indexes the covariates. e.g. For
+ evaluation on
+a 10X15 grid and with 2 covariates. \code{ dim( ZGrid) == c(10,15, 2)}.
+If ZGrid is a list then the components x and y shold match those of
+ grid.list and
+the z component follows the shape described above for the no list
+case.
+}
+}
+\details{The form of a grid.list is
+
+\code{list( var.name1= what1 , var.name2=what2 , ... var.nameN=what3)}
+
+Here var.names are the names of the independent variables.
+The what options describe what should be done with this variable when
+generating the grid. These should be either an increasing sequence of points
+or a single value. Obviously there should only be two variables with
+sequences to define a grid for a surface.
+
+Most of time the gridding sequences are equally
+spaced and are easily generated using the \code{seq} function. Also throughout fields
+the grid points are typically the midpoints of the grid rather than the grid box
+boundaries. However, these functions can handle unequally spaced grids and the
+logical boundary.grid can indicate a grid being the box boundaries.
+
+The variables in the list components are assumed
+to be in the same order as they appear in the data matrix.
+
+A useful function that expands the grid from the grid.list description into
+a full set of locations is \code{make.surface.grid} and is
+just a wrapper around the R base function \code{expand.grid}. A typical operation is to go from a grid.list to the set of grid locations, evaluate a
+function at these locations, and then reformat the result as an image for plotting.
+Here is how to do this cleanly:
+\preformatted{
+grid.list<- list( x= 1:10, y=1:15)
+xg<- make.surface.grid(grid.list)
+# look at a surface depending on the xg locations
+z<- xg[,1] + 2*xg[,2]
+out<- list( x=grid.list$x, y= grid.list$y, z=matrix( z, nrow=10, ncol=15))
+# now for example
+image.plot( out)
+}
+
+The key here is that \code{xg} and \code{matrix} both organize the grid in the
+same order.
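Since make.surface.grid is essentially a wrapper for expand.grid, this ordering claim can be checked with base R alone (a sketch, no fields functions needed):

```r
# expand.grid varies the first variable fastest, which matches R's
# column-major filling of matrices -- so a vector of surface values
# computed at the stacked grid locations reshapes correctly into an image.
grid.list <- list(x = 1:10, y = 1:15)
xg <- as.matrix(expand.grid(grid.list))
z  <- xg[, 1] + 2 * xg[, 2]
zmat <- matrix(z, nrow = 10, ncol = 15)

# the value at grid point (x = 3, y = 4) lands in zmat[3, 4]
zmat[3, 4] == 3 + 2 * 4
```

If the reshape used the wrong order (nrow and ncol swapped, say), this check would fail for any non-symmetric surface.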
+
+Some fields internal functions that support interpreting grid list format are:
+
+\code{fields.x.to.grid}:
+ Takes an "x" matrix of locations or independent variables and creates a
+reasonable grid list. This is used to evaluate predicted surfaces when a
+grid list is not explicitly given to predictSurface. The variables
+(i.e. columns of x) that are not part of the grid are set to the median
+values. The x grid values are \code{nx} equally spaced points in the
+range \code{x[, xy[1]]}. The y grid values are \code{ny} equally spaced
+points in the range \code{x[, xy[2]]}.
+
+
+\code{parse.grid.list}:
+ Takes a grid list and returns the information in a more expanded list
+form that is easy to use. This is used, for example, by predictSurface
+to figure out what to do!
+
+
+\code{fields.convert.grid}:
+ Takes a vector of n values assumed to be midpoints of a grid and
+returns the n+1 boundaries. See how this is used in discretize.image
+with the cut function. This function will handle unequally spaced
+grid values.
+
+ \code{discretize.image}: Takes a vector of locations and a 2-d grid and
+figures out to which boxes they belong. The output matrix ind has the
+grid locations. If boundary.grid is FALSE then the grid list (grid) is
+assumed to be grid midpoints. The grid boundaries are taken to be the
+point half way between these midpoints. The first and last boundary
+points are determined by extrapolating so that the first and last box
+has the midpoint in its center. (See the code in fields.convert.grid for
+details.) If grid is NULL then midpoints are found from m and n and the
+range of the x matrix.
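For an equally spaced grid the midpoint-to-boundary recipe can be sketched in a few lines of base R (hypothetical code mirroring the fields.convert.grid plus cut idea; the fields function also handles unequal spacing):

```r
# convert n grid midpoints to n + 1 box boundaries: interior boundaries
# are halfway between adjacent midpoints; the two end boundaries are
# extrapolated so the first and last midpoints are centered in their boxes.
midpoints <- c(1, 2, 3, 4)
n <- length(midpoints)
inner <- (midpoints[-1] + midpoints[-n]) / 2        # 1.5 2.5 3.5
boundaries <- c(2 * midpoints[1] - inner[1],        # 0.5
                inner,
                2 * midpoints[n] - inner[n - 1])    # 4.5

# register irregular locations to grid boxes, as discretize.image does
locations <- c(0.7, 2.2, 3.9)
cut(locations, boundaries, labels = FALSE)          # bin indices 1 2 4
```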
+
+\code{unrollZGrid} Checks that the ZGrid object is compatible with the grid.list and concatenates the grid arrays into vectors. This version of the covariates is used by the usual predict function.
+
+}
+
+\seealso{
+as.surface, predictSurface, plot.surface, surface,
+expand.grid, as.image
+}
+
+\examples{
+
+#Given below are some examples of grid.list objects and the results
+#when they are used with make.surface.grid. Note that
+#make.surface.grid returns a matrix that retains the grid.list
+#information as an attribute.
+
+grid.l<- list( 1:3, 2:5)
+make.surface.grid(grid.l)
+
+
+grid.l <- list( 1:3, 10, 1:3)
+make.surface.grid(grid.l)
+
+#The next example shows how the grid.list can be used to
+#control surface plotting and evaluation of an estimated function.
+# first create a test function
+
+set.seed( 124)
+
+X<- 2*cbind( runif(30), runif(30), runif(30)) -1
+
+dimnames( X)<- list(NULL, c("X1","X2","X3"))
+y<- X[,1]**2 + X[,2]**2 + exp(X[,3])
+
+# fit an interpolating thin plate spline
+out<- Tps( X,y)
+
+grid.l<- list( X1= seq( 0,1,,20), X2=.5, X3=seq(0,1,,25))
+surface( out, grid.list=grid.l)
+# surface plot based on a 20X25 grid in X1 and X3
+# over the square [0,1] by [0,1]
+# holding X2 equal to 0.5.
+#
+
+# indicator image of discretized locations
+look<- discretize.image( RMprecip$x, m=15, n=15)
+image.plot( look$grid$x, look$grid$y,look$hist )
+# actual locations
+points( RMprecip$x,col="magenta", pch=".")
+
+
+
+}
+\keyword{misc}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/image.cov.Rd b/man/image.cov.Rd
new file mode 100644
index 0000000..c2d4b40
--- /dev/null
+++ b/man/image.cov.Rd
@@ -0,0 +1,242 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{image.cov}
+\alias{stationary.image.cov}
+\alias{Exp.image.cov}
+\alias{Rad.image.cov}
+\alias{wendland.image.cov}
+\alias{matern.image.cov}
+\title{
+Exponential, Matern and general covariance functions for 2-d
+gridded locations.
+}
+\description{
+Given two sets of locations defined on a 2-d grid, efficiently multiplies a
+cross covariance matrix with a vector. The intermediate computations (the setup)
+can also be used for fast simulation of the processes on a grid using the circulant
+embedding technique.
+}
+\usage{
+stationary.image.cov(ind1, ind2, Y, cov.obj = NULL, setup = FALSE,
+grid, M=NULL,N=NULL,cov.function="stationary.cov", delta = NULL, cov.args = NULL, ...)
+
+Exp.image.cov(ind1, ind2, Y, cov.obj = NULL, setup = FALSE, grid, ...)
+
+Rad.image.cov(ind1, ind2, Y, cov.obj = NULL, setup = FALSE, grid, ...)
+
+matern.image.cov(ind1, ind2, Y, cov.obj = NULL, setup = FALSE, grid,
+M=NULL,N=NULL,theta= 1.0, smoothness=.5)
+
+wendland.image.cov(ind1, ind2, Y, cov.obj = NULL,
+ setup = FALSE, grid, M = NULL, N = NULL, cov.args=NULL, ...)
+
+}
+\arguments{
+\item{ind1}{
+Matrix of indices for the first set of locations. This is a two column matrix
+where each row gives the row/column index of the image element. If missing
+the default is to use all grid locations.
+}
+\item{ind2}{
+Matrix of indices for the second set of locations. If missing this is taken to
+be ind1. If ind1 is missing ind2 is coerced to be all grid locations.
+}
+\item{Y}{
+Vector to multiply by the cross covariance matrix. Y must correspond to the
+same locations as those referred to by ind2.
+}
+\item{cov.args}{Any additional arguments or parameters to the covariance function.}
+\item{cov.obj}{
+A list with the information needed to do the multiplication by
+convolutions. This is usually found by using the returned list when
+setup = TRUE.
+}
+\item{cov.function}{Name of the (stationary) covariance function.}
+\item{setup}{
+If TRUE do not do the multiplication but just return the covariance object
+required by this function.
+}
+\item{delta}{A distance that indicates the range of the covariance when it has compact support.
+For example this is the theta parameter in the Wendland covariance.}
+\item{grid}{
+A grid list giving the X and Y grids for the image. (See example below.)
+This is only required if setup is true.
+}
+\item{M}{
+Size of the x-grid used to compute the multiplication by the FFT (see notes
+on image.smooth for details). If NULL, the default for M is the smallest
+power of 2 greater than or equal to 2*m where m = length( grid\$x).
+This will give an exact result but smaller values of M will yield an
+approximate, faster result.
+}
+
+\item{N}{Size of y-grid used to compute multiplication by the FFT.}
+\item{theta}{Scale parameter for Matern.}
+\item{smoothness}{Smoothness parameter for Matern (.5=Exponential)}
+\item{\dots}{
+Any arguments to pass to the covariance function in setting up the
+covariance object. This is only required if setup is TRUE.
+For \code{stationary.image.cov} one can include \code{V} a matrix reflecting a rotation and scaling of
+coordinates. See stationary.cov for details. }
+}
+\value{
+A vector that is the multiplication of the cross covariance matrix with
+the vector Y.
+}
+\details{
+This function was provided to do fast computations for large numbers of
+spatial locations and supports the conjugate gradient solution in
+krig.image. In doing so the observations can be irregularly spaced
+but their coordinates must be 2-dimensional and restricted to grid
+points.
+(The function as.image will take irregular, continuous coordinates and
+overlay a grid on them.)
+
+Returned value: If ind1 and ind2 are matrices where nrow(ind1)=m and
+nrow(ind2)=n then the cross covariance matrix, Sigma, is an mXn matrix whose
+(i,j) element is the covariance between the grid locations indexed at
+ind1[i,] and ind2[j,]. The returned result is Sigma\%*\%Y. Note that one can
+always recover the coordinates themselves by evaluating the grid list at
+the indices, e.g. cbind( grid\$x[ ind1[,1]], grid\$y[ ind1[,2]] ) will give
+the coordinates associated with ind1. Clearly it is better just to work
+with ind1!
+
+Functional Form: Following the same form as Exp.cov and stationary.cov for
+irregular locations, the covariance is defined as phi( D.ij) where D.ij is
+the Euclidean distance between x1[i,] and x2[j,] but having first been
+scaled by theta. Specifically,
+
+D.ij = sqrt( sum.k (( x1[i,k] - x2[j,k]) /theta[k])**2 ).
+
+See \code{Matern} for the version of phi for the Matern family.
+
+Note that if theta is a scalar then this defines an isotropic covariance
+function.
+
+Implementation: This function does the multiplication on the full
+grid efficiently by a 2-d FFT. The irregular pattern in Y is handled by
+padding with zeroes and once that multiplication is done only the
+appropriate subset is returned.
+
+As an example, assume that the grid is 100X100 and let big.Sigma denote the big
+covariance matrix among all grid points. (If the parent grid is 100X100
+then big.Sigma is 10K by 10K!) Here are the computing steps:
+
+temp<- matrix( 0, 100,100)
+
+temp[ ind2] <- Y
+
+temp2<- big.Sigma\%*\% temp
+
+temp2[ind1]
+
+Notice how much we pad with zeroes or at the end throw away!
+Here the matrix multiplication is effected through convolution/FFT tricks
+to avoid creating and multiplying big.Sigma explicitly. It is often faster
+to multiply on the regular grid and throw away the parts we do not need than
+to deal directly with the irregular set of locations.
+
+Note: In this entire discussion Y is treated as a vector. However, if
+one has complete data then Y can also be interpreted as an image matrix
+conformed to correspond to spatial locations. See the last example for
+this distinction.
+
+}
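The pad, multiply, and subset steps above can be verified with plain base R on a tiny grid. This is a minimal sketch, not how fields actually computes the product: the explicit big.Sigma multiply below is exactly what the FFT/convolution tricks avoid, and the names grid, flat, and ind are illustrative.

```r
# 4x4 grid; exponential covariance among all 16 grid points
grid <- expand.grid(x = 1:4, y = 1:4)
theta <- 2
big.Sigma <- exp(-as.matrix(dist(grid)) / theta)

# an irregular subset of grid points given as row/column indices (like ind1/ind2)
ind  <- rbind(c(1, 1), c(2, 3), c(4, 4))
flat <- (ind[, 2] - 1) * 4 + ind[, 1]   # column-major linear index into the image
Y    <- c(1, -2, 0.5)

# brute force: covariance among just the irregular points, times Y
direct <- big.Sigma[flat, flat] %*% Y

# padded version: scatter Y onto the full grid, multiply, then subset
temp <- rep(0, 16)
temp[flat] <- Y
padded <- (big.Sigma %*% temp)[flat]

all.equal(c(direct), c(padded))   # TRUE: padding with zeroes changes nothing
```

The equivalence holds because the zero entries of temp contribute nothing to the product, which is why fields can work on the full regular grid and discard the unused rows.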
+\seealso{
+ smooth.2d, as.image, krig.image, stationary.cov
+}
+\examples{
+# multiply 2-d isotropic exponential with theta=8 by a random vector
+
+junk<- matrix(rnorm(100*100), 100,100)
+
+cov.obj<- stationary.image.cov( setup=TRUE,
+ grid=list(x=1:100,y=1:100),theta=8)
+result<- stationary.image.cov(Y=junk,cov.obj=cov.obj)
+
+image( matrix( result, 100,100)) # NOTE this is also a smoother!
+
+# to do it again, no setup is needed
+# e.g.
+# junk2<- matrix(rnorm(100**2), 100,100)
+# result2<- stationary.image.cov(Y=junk2, cov.obj=cov.obj)
+
+# generate a grid and set of indices based on discretizing the locations
+# in the precip dataset
+
+ out<-as.image( RMprecip$y, x= RMprecip$x)
+ ind1<- out$ind
+ grid<- list( x= out$x, y=out$y)
+
+#
+# discretized x locations to use for comparison
+ xd<- cbind( out$x[ out$ind[,1]], out$y[ out$ind[,2]] )
+
+# setup to create cov.obj for exponential covariance with range= 1.25
+
+ cov.obj<- stationary.image.cov( setup=TRUE, grid=grid, theta=1.25)
+
+# multiply covariance matrix by an arbitrary vector
+ junk<- rnorm(nrow( ind1))
+ result<- stationary.image.cov( ind1, ind1, Y= junk,cov.obj=cov.obj)
+
+# The brute force way would be
+# result<- stationary.cov( xd, xd, theta=1.25, C=junk)
+# or
+# result<- stationary.cov( xd, xd, theta=1.25) %*% junk
+# both of these take much longer
+
+
+# evaluate the covariance between all grid points and the center grid point
+ Y<- matrix(0,cov.obj$m, cov.obj$n)
+ Y[32,32]<- 1
+ result<- stationary.image.cov( Y= Y,cov.obj=cov.obj)
+# covariance surface with respect to the grid point at (32,32)
+#
+# reshape "vector" as an image
+ temp<- matrix( result, cov.obj$m,cov.obj$n)
+ image.plot(cov.obj$grid$x,cov.obj$grid$y, temp)
+# or persp( cov.obj$grid$x,cov.obj$grid$y, temp)
+
+# check out the Matern
+ grid<- list( x= seq(-105,-99,,64), y= seq( 40,45,,64))
+ cov.obj<- matern.image.cov(
+ setup=TRUE, grid=grid, theta=.55, smoothness=1.0)
+ Y<- matrix(0,64,64)
+ Y[16,16]<- 1
+
+ result<- matern.image.cov( Y= Y,cov.obj=cov.obj)
+ temp<- matrix( result, cov.obj$m,cov.obj$n)
+ image.plot( cov.obj$grid$x,cov.obj$grid$y, temp)
+
+# Note we have centered at the location (grid$x[16],grid$y[16]) for this case
+# using sim.rf to simulate an Matern field
+ look<- sim.rf( cov.obj)
+ image.plot( grid$x, grid$y, look)
+
+
+}
+
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/image.plot.Rd b/man/image.plot.Rd
new file mode 100644
index 0000000..47425de
--- /dev/null
+++ b/man/image.plot.Rd
@@ -0,0 +1,482 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{image.plot}
+\alias{image.plot}
+\title{
+ Draws an image plot with a legend strip for the color scale based on either
+a regular grid or a grid of quadrilaterals.
+}
+\description{
+This function combines the R image function with some automatic
+placement of a legend. This is done by automatically splitting the plotting region
+into two parts, putting the image in one and the legend in the other. After the
+legend is added the plot region is reset to the image plot.
+This function also allows for plotting quadrilateral cells in the image format that
+often arise from regular grids transformed with a map projection.
+
+}
+\usage{
+\method{image}{plot}(...,
+ add = FALSE,
+ breaks= NULL, nlevel = 64, col = NULL,
+ horizontal = FALSE, legend.shrink = 0.9, legend.width = 1.2,
+ legend.mar = ifelse(horizontal, 3.1, 5.1), legend.lab = NULL,
+ legend.line= 2,
+ graphics.reset = FALSE, bigplot = NULL, smallplot = NULL,
+ legend.only = FALSE, lab.breaks = NULL,
+ axis.args = NULL, legend.args = NULL, legend.cex=1.0, midpoint = FALSE, border = NA,
+ lwd = 1,verbose = FALSE )
+}
+
+\arguments{
+
+\item{\dots}{
+ The usual arguments to the \code{image} function as x, y, or z or as a
+list with x, y, z as components. One can also include a \code{breaks}
+argument for an unequally spaced color scale with color scale boundaries
+at the breaks (see example below). For a "quadrilateral grid", the
+arguments must be explicitly x, y and z with x and y being matrices of
+dimensions equal to, or one more than, z giving the grid
+locations. The basic concept is that the coordinates of x and y still
+define a grid but the image cells are quadrilaterals rather than being
+restricted to rectangles. See details below as to how one handles
+whether the quads are specified by their vertices or by their
+midpoints. NOTE: graphical arguments passed here will only have an impact
+on the image plot. To change the graphical defaults for the legend use
+the \code{par} function beforehand e.g. par( lab.cex=2.0) to increase
+colorbar labels.
+}
+
+\item{add}{
+If true add image and a legend strip to the existing plot.
+}
+
+\item{bigplot}{
+Plot coordinates for image plot. If not passed
+these will
+be determined within the function.
+}
+
+\item{border}{This only works if x and y are matrices -- if NA the quadrilaterals will
+have a border color that is
+the same as the interior color. Otherwise this specifies the color to use.}
+
+\item{breaks}{Break points in sorted order to indicate the intervals for assigning
+ the colors. Note that if there are nlevel colors there should be (nlevel+1)
+ breakpoints. If \code{breaks} is not specified (nlevel+1) equally spaced breaks are created where the first and last bin have their midpoints at the minimum and maximum values in \code{z} or at \code{zlim}. }
+
+
+
+
+\item{col}{
+Color table to use for image (See help file on image for details.).
+Default is a pleasing range of 64 divisions suggested by Tim Hoar and is similar to
+the MATLAB (TM) jet color scheme. Note that if \code{breaks} is specified there must be one fewer color specified than the number of breaks.
+}
+
+\item{graphics.reset}{
+ If FALSE (default) the plotting region ( plt in par) will not be reset
+and one can add more information onto the image plot. (e.g. using functions
+such as points or lines.) If TRUE will reset plot parameters to the
+values before entering the function.
+}
+
+\item{horizontal}{
+If false (default) legend will be a vertical strip on the right side. If
+true the legend strip will be along the bottom.
+}
+\item{lab.breaks}{ If breaks are supplied these are text string labels
+to put at each break value. This is intended to label axis on a
+transformed scale such as logs.}
+\item{axis.args}{Additional arguments for the axis function used to
+create the legend axis. (See example below adding a log scaling.)}
+
+\item{legend.only}{
+If TRUE just add the
+legend to the plot in the plot region defined by the coordinates in
+smallplot. In the absence of other information the range for the legend
+is determined from the \code{zlim} argument.
+}
+
+\item{legend.args}{Arguments for a complete specification of the
+legend label. This is in the form of list and is just passed to the
+mtext function. Usually this will not be needed.
+(See example below.)}
+
+\item{legend.cex}{Character expansion to change size of the legend label.}
+
+\item{legend.line}{Distance in units of character height (as in \code{mtext})
+ of the legend label from the
+color bar. Make this larger if the label collides with the color axis labels.}
+
+\item{legend.mar}{
+Width in characters of legend margin that has the axis. Default is 5.1
+for a vertical legend and 3.1 for a horizontal legend.}
+
+\item{legend.lab}{ Label for the axis of the color legend. Default is no
+label as this is usually evident from the plot title.}
+
+\item{legend.shrink}{
+ Amount to shrink the size of legend relative to the full height or width
+of the plot.
+}
+
+\item{legend.width}{
+Width in characters of the legend strip. Default is 1.2, a little bigger
+than the width of a character. }
+
+\item{lwd}{Line width of bordering lines around pixels.
+ This might need to be set less than 1.0
+to avoid visible rounding of the pixel corners.}
+
+\item{midpoint}{
+This option applies to the case of unequally spaced grids with x and y being
+matrices of grid point locations.
+ If FALSE (default) the quadrilaterals will be extended
+to surround the z locations as midpoints. If TRUE z values will be averaged
+to yield a midpoint value and the original grid points will be used to define the
+quadrilaterals. (See help on poly.image for details.) In most cases
+midpoint should be FALSE to preserve exact values for z and
+let the grid polygons be modified.}
+
+\item{nlevel}{
+Number of color levels used in legend strip
+}
+
+\item{smallplot}{
+Plot coordinates for legend strip. If not passed these will be determined within
+the function. Be sure to leave room for the axis labels. For example, if the legend is on the
+right side \code{smallplot= c(.85,.9,0,1) } will leave (.1 in plot coordinates) for the
+axis labels to the right of the color strip. This argument is useful for drawing a
+plot with the legend that is the same size as the plots without legends.
+}
+\item{verbose}{If TRUE prints intermediate information about setting up plots (for debugging). }
+}
+\section{Side Effects}{
+After exiting, the
+plotting region may be changed to make it possible to add more features to
+the plot. To be explicit, \code{par()\$plt} may be changed to reflect a
+smaller plotting region that has accommodated room for the legend subplot.
+
+If \code{xlim} and \code{ylim} are specified the pixels may overplot the axis lines.
+Just use the \code{box} function to redraw them.
+}
+\details{
+
+
+
+\strong{How this function works:}
+The strategy for \code{image.plot} is simple, divide the plotting region
+into two smaller regions \code{bigplot} and \code{smallplot}. The image
+goes in one and the legend in the other. This way there is always room for
+the legend. Some adjustments are made to this rule by not shrinking the
+\code{bigplot} if there is already room for the legend strip and also
+sticking the legend strip close to the image plot. One can specify the
+plot regions explicitly by \code{bigplot} and \code{smallplot} if the
+default choices do not work. There may be problems with small plotting
+regions in fitting both of these elements in the plot region and one may
+have to change the default character sizes or margins to make things fit.
+Sometimes this function will not reset the type of margins correctly and the
+"null" call \code{par(mar = par("mar"))} may help to fix this issue.
+
+\strong{Why image.plot and not image?} The R Base function \code{image} is very
+useful but it is awkward to place a legend quickly. However, that said, if you are
+drawing several image plots and want a common legend use the \code{image} function
+and just use \code{image.plot} to add the legend. See the example in the
+help file.
+
+\strong{Almost choropleths too:} It should be noted that this image function is slightly
+different from a choropleth map because
+the legend assumes that a continuous scale has been discretized into a series of colors.
+To make image.plot function as a choropleth graphic one would of course use the
+ \code{breaks} option
+and for clarity perhaps code the different regions as different integer values.
+In addition, for publication quality one would want to use the \code{legend.args} to
+add more descriptive labels at the midpoints in the color strip.
+
+\strong{Relationship of x, y and z:}
+ If the z component is a matrix then the user should be aware that
+this function locates the matrix element z[i,j] at the grid location
+(x[i], y[j]). This is very different from simply listing out the
+matrix in the usual row/column tabular form. See the example below
+for details on the difference in formatting. What does one do
+if you do not really have the "z" values on a regular grid? See the
+functions \code{quilt.plot} and \code{as.image} to discretize
+irregular observations to a grid. If the values make sense as points on
+a smooth surface see \code{Tps} and \code{fastTps} for surface interpolation.
+
+\strong{Grids with unequal spacing:}
+If x and y are matrices that are a smooth transformation of a regular grid then
+z[i,j] is rendered at a quadrilateral that is centered at x[i,j] and
+y[i,j] (\code{midpoint} TRUE). The details of how this cell is found
+are buried in \code{poly.image} but it is essentially found using midpoints between the centers. If \code{midpoint} is FALSE then x
+and y are interpreted as the corners of the quadrilateral cells. But
+what about z? The four values of z are now averaged to represent a
+value at the midpoint of the cell and this is what is used for
+plotting. Quadrilateral grids were added to help with plotting
+the gridded output of geophysical models where the regular grid is
+defined according to one map projection but the image plotting is required
+in another projection. Typically the regular grid becomes distorted in
+a smooth way when this happens. See the regional climate example for
+an illustration of this application. One can add border colors in this case
+easily because these choices are just passed on to the polygon function.
+
+Adding the pixel grid for rectangular images:
+For adding the grid of pixel borders to a rectangular image try this example
+after calling \code{image.plot}
+
+%\preformatted{
+\code{ dx<- x[2] - x[1]} \cr
+\code{ dy <- y[2]-y[1]} \cr
+\code{ xtemp<- seq( min( x)- dx/2, max(x)+ dx/2,, length(x) +1) } \cr
+\code{ ytemp<- seq( min( y)- dy/2, max(y)+ dy/2,, length(y) +1)} \cr
+\code{ xline( xtemp, col="grey50", lwd=2); yline( ytemp, col="grey50", lwd=2)}
+%}
+
+Here \code{x} and \code{y} here are the x and y grid values from the image list.
+
+\strong{Fine tuning color scales:} This function gives some flexibility in
+tuning the color scale to fit the rendering of z values. This can
+either be a specially designed color scale with specific colors (see
+help on \code{designer.colors}), positioning the colors at specific
+points on the [0,1] scale, or mapping distinct colors to intervals of
+z. The examples below show how to do each of these. In addition, by
+supplying \code{lab.breaks} strings or axis parameters one can
+annotate the legend axis in an informative manner.
+
+
+\strong{The details of placing the legend and
+dividing up the plotting real estate:}
+It is surprising how hard it is to automatically add the
+legend! All "plotting coordinates" mentioned here are in device
+coordinates. The plot region is assumed to be [0,1]X[0,1] and plotting
+regions are defined as rectangles within this square. We found these
+easier to work with than user coordinates.
+
+\code{legend.width} and \code{legend.mar} are in units of character
+spaces. These units are helpful in thinking about axis labels that
+will be put into these areas. To add more or less space between the
+legend and the image plot alter the mar parameters. The default mar
+settings (5.1,4.1,4.1,2.1) leave 2.1 spaces for vertical legends and
+5.1 spaces for horizontal legends.
+
+There are always problems with
+default solutions to placing information on graphs but the choices made
+here may be useful for most cases. The most annoying thing is that after
+using image.plot and adding information the next plot that is made may
+have the slightly smaller plotting region set by the image plotting.
+The user should set \code{graphics.reset=TRUE} to avoid the plotting size
+from changing. The disadvantage, however, of resetting the graphics
+is that one can no longer add additional graphics elements to the image
+plot. Note that filled.contour always resets the graphics but provides
+another mechanism to pass through plotting commands. Apparently
+\code{filled.contour}, while very pretty, does not work for multiple plots.
+\code{levelplot}, part of the lattice package, is a very
+similar function to image.plot and has a formula syntax in the call.
+
+
+By keeping the \code{zlim} argument the same across images one can generate the
+same color scale. (See the \code{image} help file.) One useful technique for a
+panel of images is to just draw the images with \code{image}
+and then use image.plot to add a legend to the last plot. (See example
+below for messing with the outer margins to make this work.)
+Usually a square plot (\code{pty="s"}) done in a rectangular plot region will
+have room for the legend stuck to the right side without any other
+adjustments. See the examples below for more complicated arrangements
+ of multiple image plots and a summary
+legend.
+
+\strong{Adding just the legend strip:}
+Note that to add just the legend strip all the numerical information one
+needs is the \code{zlim} argument and the color table!
+
+\strong{About color tables:}
+We like \code{tim.colors}
+as a default color scale and so if this is what you use it can be omitted. The
+topographic color scale (\code{topo.colors}) is
+also a close second showing our geophysical bias.
+Users may find \code{larry.colors} useful for coding distinct regions
+in the style of a choropleth map. See also
+\code{terrain.colors} for a subset of the topo ones and \code{designer.colors} to "roll
+your own" color table. One nice option in this last function is to fix color transitions at
+particular quantiles of the data rather than at equally spaced
+intervals. For color choices see how the \code{nlevel} argument figures
+into the legend and main plot number of colors. Also see the \code{colors} function
+for a listing of all the colors that come with the R base environment.
+
+}
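As a quick check of the breaks/col length rule noted in the arguments above (nlevel colors need nlevel + 1 break points), here is a small base R sketch with illustrative values and no plotting:

```r
z <- matrix(runif(100, 0, 25), 10, 10)   # fake image values
nlevel <- 4
brk <- seq(min(z), max(z), length.out = nlevel + 1)  # nlevel + 1 break points
col <- rainbow(nlevel)                               # one color per interval
length(brk) == length(col) + 1           # TRUE: the rule image.plot enforces

# cut() mimics the color assignment: every z value falls in exactly one interval
bins <- cut(c(z), breaks = brk, include.lowest = TRUE)
any(is.na(bins))                         # FALSE: no value misses a color bin
```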
+
+\seealso{ image, poly.image, filled.contour, quilt.plot, plot.surface,
+add.image, colorbar.plot, tim.colors, designer.colors }
+
+\examples{
+ x<- 1:10
+ y<- 1:15
+ z<- outer( x,y,"+")
+ image.plot(x,y,z)
+
+# or
+ obj<- list( x=x,y=y,z=z)
+ image.plot(obj, legend.lab="Sverdrups")
+
+# add some points on diagonal using standard plot function
+#(with some clipping beyond 10 anticipated)
+ points( 5:12, 5:12, pch="X", cex=3)
+
+# adding breaks and distinct colors for intervals of z
+# with and without lab.breaks
+ brk<- quantile( c(z))
+ image.plot(x,y,z, breaks=brk, col=rainbow(4))
+# annotate legend strip at break values and add a label
+ image.plot(x,y,z, breaks=brk, col=rainbow(4),
+ lab.breaks=names(brk))
+#
+# compare to
+ zp <-quantile(c(z), c( .05, .1,.5, .9,.95))
+ image.plot(x,y,z,
+ axis.args=list( at=zp, labels=names(zp) ) )
+# a log scaling for the colors
+ ticks<- c( 1, 2,4,8,16,32)
+ image.plot(x,y,log(z), axis.args=list( at=log(ticks), labels=ticks))
+
+# see help file for designer.colors to generate a color scale that adapts to
+# quantiles of z.
+# To add some color scales together, here is an example of 5 blues to white to 5 reds
+# with white being a specific size.
+ colorTable<- designer.colors(11, c( "blue","white", "red") )
+# breaks with a gap of 10 to 17 assigned the white color
+ brks<- c(seq( 1, 10,,6), seq( 17, 25,,6))
+ image.plot( x,y,z,breaks=brks, col=colorTable)
+#
+#fat (5 characters wide) and short (50\% of figure) color bar on the bottom
+ image.plot( x,y,z,legend.width=5, legend.shrink=.5, horizontal=TRUE)
+
+# adding a label with all kinds of additional arguments.
+# use side=4 for vertical legend and side= 1 for horizontal legend
+# to be parallel to axes. See help(mtext).
+
+image.plot(x,y,z,
+ legend.args=list( text="unknown units",
+ col="magenta", cex=1.5, side=4, line=2))
+
+#### example using an irregular quadrilateral grid
+data( RCMexample)
+
+image.plot( RCMexample$x, RCMexample$y, RCMexample$z[,,1])
+ind<- 50:75 # make a smaller image to show bordering lines
+image.plot( RCMexample$x[ind,ind], RCMexample$y[ind,ind], RCMexample$z[ind,ind,1],
+ border="grey50", lwd=2)
+
+
+#### multiple images with a common legend
+
+set.panel()
+
+# Here is a quick but quirky way to add a common legend to several plots.
+# The idea is to leave some room in the margin and then overplot in this margin
+
+par(oma=c( 0,0,0,4)) # margin of 4 spaces width at right hand side
+set.panel( 2,2) # 2X2 matrix of plots
+
+# now draw all your plots using usual image command
+for ( k in 1:4){
+ data<- matrix( rnorm(150), 10,15)
+ image( data, zlim=c(-4,4), col=tim.colors())
+# and just for fun add a contour plot
+ contour( data, add=TRUE)
+}
+
+par(oma=c( 0,0,0,1))# reset margin to be much smaller.
+image.plot( legend.only=TRUE, zlim=c(-4,4))
+
+# image.plot tricked into plotting in margin of old setting
+
+set.panel() # reset plotting device
+
+#
+# Here is a more learned strategy for adding a common legend to a panel of
+# plots; consult the split.screen help file for more explanations.
+# For this example we draw two
+# images top and bottom and add a single legend color bar on the right side
+
+# first divide screen into the figure region (left) and legend region (right)
+ split.screen( rbind(c(0, .8,0,1), c(.8,1,0,1)))
+
+# now subdivide up the figure region into two parts
+ split.screen(c(2,1), screen=1)-> ind
+ zr<- range( 2,35)
+# first image
+ screen( ind[1])
+ image( x,y,z, col=tim.colors(), zlim=zr)
+
+# second image
+ screen( ind[2])
+ image( x,y,z+10, col=tim.colors(), zlim =zr)
+
+# move to skinny region on right and draw the legend strip
+ screen( 2)
+ image.plot( zlim=zr,legend.only=TRUE, smallplot=c(.1,.2, .3,.7),
+ col=tim.colors())
+
+ close.screen( all=TRUE)
+
+
+# you can always add a legend arbitrarily to any plot;
+# note that here the plot is too big for the vertical strip but the
+# horizontal fits nicely.
+plot( 1:10, 1:10)
+image.plot( zlim=c(0,25), legend.only=TRUE)
+image.plot( zlim=c(0,25), legend.only=TRUE, horizontal =TRUE)
+
+# combining the usual image function and adding a legend
+# first change margin for some more room
+\dontrun{
+par( mar=c(10,5,5,5))
+image( x,y,z, col=topo.colors(64))
+image.plot( zlim=c(0,25), nlevel=64,legend.only=TRUE, horizontal=TRUE,
+col=topo.colors(64))
+}
+#
+#
+# sorting out the difference in formatting between matrix storage
+# and the image plot depiction
+# this really has not much to do with image.plot but I hope it is useful
+
+A<- matrix( 1:48, ncol=6, nrow=8)
+# first column of A will be 1:8
+# ... second is 9:16
+
+image.plot(1:8, 1:6, A)
+# add labels to each box
+text( c( row(A)), c( col(A)), A)
+# and the indices ...
+text( c( row(A)), c( col(A))-.25,
+ paste( "(", c(row(A)), ",",c(col(A)),")", sep=""), col="grey")
+
+# "columns" of A are horizontal and rows are ordered from bottom to top!
+#
+# matrix in its usual tabular form where the rows are y and columns are x
+image.plot( t( A[8:1,]), axes=FALSE)
+
+}
+\keyword{hplot}
+% docclass is function
diff --git a/man/image.smooth.Rd b/man/image.smooth.Rd
new file mode 100644
index 0000000..41f5212
--- /dev/null
+++ b/man/image.smooth.Rd
@@ -0,0 +1,173 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{image.smooth}
+\alias{image.smooth}
+\alias{setup.image.smooth}
+\title{
+ Kernel smoother for irregular 2-d data
+}
+\description{
+Takes an image matrix and applies a kernel smoother to it. Missing values
+are handled using the Nadaraya/Watson normalization of the kernel.
+}
+\usage{
+\method{image}{smooth}(x, wght = NULL, dx = 1, dy = 1,
+ kernel.function = double.exp,
+ theta = 1, grid = NULL, tol = 1e-08, xwidth = NULL, ywidth = NULL,
+ weights = NULL,...)
+
+setup.image.smooth(nrow = 64, ncol = 64, dx = 1, dy = 1,
+ kernel.function = double.exp,
+ theta = 1, xwidth = nrow * dx, ywidth = ncol * dx, lambda=NULL, ...)}
+\arguments{
+\item{x}{
+A matrix image. Missing values can be indicated by NAs. }
+\item{wght}{
+FFT of smoothing kernel. If this is NULL the default is to compute this
+object. }
+\item{grid}{
+A list with x and y components. Each is equally spaced and together they define the rectangular grid. (See grid.list.)}
+\item{dx}{
+Grid spacing in x direction }
+\item{dy}{
+Grid spacing in y direction }
+\item{kernel.function}{
+ An R function that takes as its argument the \emph{squared} distance
+between two points divided by the bandwidth. The default is exp(
+-abs(x)) yielding a normal kernel}
+\item{theta}{the bandwidth or scale parameter.}
+\item{xwidth}{
+ Amount of zero padding in the horizontal dimension, in units of the grid
+spacing. If NULL the default value is equal to the width of the image, the
+most conservative value but possibly inefficient for computation.
+Set this equal to zero to get periodic wrapping of the smoother. This is
+useful to smooth a Mercator map projection. }
+\item{ywidth}{
+ Same as xwidth but for the vertical dimension. }
+\item{weights}{
+Weights to apply when smoothing.}
+\item{tol}{
+Tolerance for the weights of the N-W kernel. This avoids kernel
+estimates that are "far" away from data. Grid points with weights
+less than tol are set to NA.}
+\item{nrow}{X dimension of image in setting up smoother weights}
+\item{ncol}{Y dimension of image}
+\item{lambda}{Smoothing parameter if smoother is interpreted in a spline-like
+way.}
+\item{\dots}{
+ Other arguments to be passed to the kernel function}
+}
+
+\value{
+The smoothed image in R image format. (A list with components x, y
+and z.) \code{setup.image.smooth} returns a list with components W, a
+matrix that is the FFT of the kernel, and dx, dy, xwidth and ywidth.}
+
+\details{
+ The function works by taking convolutions using an FFT. The missing
+pixels are taken into account and the kernel smoothing is correctly
+normalized for edge effects following the classical Nadaraya-Watson
+estimator. For this reason the kernel does not have to be a density, as it
+is automatically normalized when the kernel weight function is found for
+the data. If the kernel has limited support then the width arguments
+can be set to reduce the amount of computation. (See example below.)
+For multiple smooths, compute the FFT of the kernel just once using
+\code{setup.image.smooth} and pass this as the wght argument to
+image.smooth. This saves an FFT in each subsequent computation.
+}
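The normalization described above can be sketched outside of R. Below is a minimal pure-Python illustration (the function name `nw_smooth_1d`, the 1-D setting, and the direct convolution standing in for the FFT are all hypothetical simplifications, not the fields implementation): the kernel is applied to the zero-filled data and divided by the kernel applied to the indicator of observed pixels, so the kernel need not integrate to one.

```python
import math

def nw_smooth_1d(z, theta=1.0, tol=1e-8):
    """Nadaraya-Watson kernel smoothing of a 1-D 'image' with missing values.

    Sketch of the normalization described above: smooth the zero-filled
    data, then divide by the smoothed indicator of observed pixels, so the
    kernel need not be a density. (image.smooth does this in 2-D via FFTs.)
    """
    n = len(z)
    obs = [0.0 if v is None else 1.0 for v in z]      # indicator of observed pixels
    zf = [0.0 if v is None else v for v in z]         # data with NAs zero-filled
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            w = math.exp(-((i - j) / theta) ** 2)     # Gaussian-shaped kernel
            num += w * zf[j]
            den += w * obs[j]
        out.append(num / den if den > tol else None)  # NA when far from all data
    return out

smoothed = nw_smooth_1d([1.0, None, 3.0, 4.0, None], theta=1.0)
```

A quick sanity check of the normalization: for a constant field (with some pixels missing) the estimate reproduces the constant exactly.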
+\seealso{ as.image, sim.rf, image.plot}
+
+\examples{
+# first convert precip data to the 128X128 discretized image format ( with
+# missing values to indicate where data is not observed)
+#
+out<- as.image( RMprecip$y, x= RMprecip$x, nx=128, ny=128)
+# out$z is the image matrix
+
+dx<- out$x[2]- out$x[1]
+dy<- out$y[2] - out$y[1]
+
+#
+# grid scale in degrees and choose kernel bandwidth to be .25 degrees.
+
+look<- image.smooth( out, theta= .25)
+
+image.plot(look)
+points( RMprecip$x)
+US( add=TRUE, col="grey", lwd=2)
+
+# to save on computation, decrease the padding with zeroes
+# only pad 32 grid points around the margins of the image.
+
+look<- image.smooth(out$z, dx=dx, dy=dy, theta= .25, xwidth=32*dx,ywidth=32*dy)
+
+# the range of these data is ~ 10 degrees, so a padding of 32 grid
+# points is 32*(10/128) = 2.5 degrees, about 10 standard deviations
+# of the normal kernel; there is still lots of room for padding.
+# a minimal choice might be xwidth = 4*(.25) = 1 degree, i.e. 4 SDs
+# of the normal kernel.
+# creating weighting object outside the call
+# this is useful when one wants to smooth different data sets but on the
+# same grid with the same kernel function
+#
+
+#
+# random fields from smoothing white noise with this filter.
+#
+set.seed(123)
+test.image<- matrix( rnorm(128**2),128,128)
+dx<- .1
+dy<- .8
+
+wght<- setup.image.smooth( nrow=128, ncol=128, dx=dx, dy=dy,
+ theta=.25, xwidth=2.5, ywidth=2.5)
+#
+look<- image.smooth( test.image, dx=dx, dy=dy, wght)
+
+# NOTE: this is the same as using
+#
+# image.smooth( test.image, xwidth=2.5,
+# ywidth=2.5, dx=dx, dy=dy, theta=.25)
+#
+# but the call to image.smooth is faster because fft of kernel
+# has been precomputed.
+
+
+
+# periodic smoothing in the horizontal dimension
+
+look<- image.smooth( test.image , xwidth=1.5,
+ ywidth=2.5, dx=dx,dy=dy, theta=1.5)
+look2<- image.smooth( test.image , xwidth=0,
+ ywidth=2.5, dx=dx,dy=dy, theta=1.5)
+# compare these two
+set.panel( 1,2)
+image.plot( look, legend.mar=7.1)
+title("free boundaries")
+image.plot( look2, legend.mar=7.1) # look for periodic continuity at edges!
+title("periodic boundary in horizontal")
+set.panel(1,1)
+
+}
+\keyword{smooth}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/image2lz.Rd b/man/image2lz.Rd
new file mode 100644
index 0000000..d937e6a
--- /dev/null
+++ b/man/image2lz.Rd
@@ -0,0 +1,182 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{image2lz}
+\alias{image2lz}
+\alias{crop.image}
+\alias{in.poly}
+\alias{in.poly.grid}
+\alias{half.image}
+\alias{get.rectangle}
+\alias{average.image}
+\alias{which.max.matrix}
+\alias{which.max.image}
+ \title{Some simple functions for subsetting images}
+\description{These functions help in subsetting images or reducing their
+size by averaging adjacent cells.}
+\usage{
+
+crop.image(obj, loc=NULL,...)
+which.max.matrix(z)
+which.max.image(obj)
+get.rectangle()
+average.image(obj, Q=2)
+half.image(obj)
+in.poly( xd, xp, convex.hull=FALSE, inflation=1e-07)
+in.poly.grid( grid.list,xp, convex.hull=FALSE, inflation=1e-07)
+
+}
+
+\arguments{
+ \item{obj}{A list in image format with the usual x,y defining the
+grid and z a matrix of image values.}
+
+ \item{loc}{A 2 column matrix of locations within the image region
+that define the subset. If not specified then the image is plotted and
+the rectangle can be specified interactively.}
+\item{Q}{Number of pixels to average.}
+ \item{xd}{ A 2 column matrix of locations that are the points to check
+for being inside a polygon.}
+ \item{xp}{ A 2 column matrix of locations that are vertices of a
+polygon. The last point is assumed to be connected to the first.}
+ \item{convex.hull}{If TRUE then the convex hull of \code{xp} is used
+instead of the polygon.}
+\item{grid.list}{A list with components x and y specifying the 2-d grid values.
+(See help( grid.list) for more details.)}
+ \item{inflation}{A small expansion factor to ensure that points
+precisely on the boundaries and vertices of the convex hull are included
+as members.}
+\item{z}{ A matrix of numerical values}
+
+ \item{\dots}{ Graphics arguments passed to image.plot. This
+is only relevant when loc is NULL and the locator function is called via
+\code{get.rectangle}. }
+
+
+}
+\details{ If \code{loc} has more than 2 rows then the largest rectangle
+containing the locations is used.
+
+\describe{
+
+\item{crop.image}{Creates a subset of the image \code{obj} using the
+largest rectangle containing the locations \code{loc}. This is useful
+if one needs to extract an image that is no bigger in extent than some
+data locations. If locations are omitted the parent image is plotted
+and the locations are taken from two mouse clicks on the image. The
+returned value is an image with appropriate \code{x,y} and \code{z} components.}
+
+\item{get.rectangle}{Plots the given image and waits for two mouse
+clicks, which are returned.}
+
+\item{which.max.image}{Returns a list with components \code{x, y, z},
+and \code{ind} giving the location of the maximum and the value of the
+maximum in the image, based on the grid values and also on the indices
+of the image matrix.}
+
+\item{average.image, half.image}{Takes the passed image, averages the
+pixel values, and adjusts the grid to create an image that has a smaller
+number of elements. If \code{Q=2} in \code{average.image} it has the
+same effect as \code{half.image} but might be slower; if the original
+image is m X n then the half image will be (m/2) X (n/2). This raises
+the question of what happens when m or n is odd or when (m/Q) or (n/Q)
+are not integers. In either case the excess rows or columns are dropped. (For
+large \code{Q} the function might be modified to drop about half the
+pixels at both edges.) }
+
+\item{in.poly, in.poly.grid}{Determines whether the points \code{xd} are
+inside or outside a polygon. The return value is a logical vector with TRUE
+indicating inside or on the boundary of the polygon. The test expands the polygon
+slightly in size (on the order of single precision zero) to include
+points that are at the vertices. \code{in.poly} does not really depend
+on an image format; however, the grid version \code{in.poly.grid} is more
+efficient for considering the locations on a regular grid.
+See also \code{in.land.grid}, which is hard coded to work with the
+fields world map.}
+
+}
+
+}
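As a point of comparison for the `in.poly` description above, here is the standard even-odd (ray casting) point-in-polygon test, sketched in Python. The helper name is hypothetical and the small `inflation` expansion of the polygon is omitted; this is the textbook algorithm, not necessarily the exact fields implementation.

```python
def point_in_poly(px, py, xp):
    """Even-odd (ray casting) test: is (px, py) inside polygon xp?

    xp is a list of (x, y) vertices; the last vertex is implicitly
    joined to the first, as with in.poly.
    """
    inside = False
    n = len(xp)
    for i in range(n):
        x1, y1 = xp[i]
        x2, y2 = xp[(i + 1) % n]
        # does a horizontal ray from (px, py) cross edge (x1,y1)-(x2,y2)?
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

Applying the test over every point of a regular grid is what `in.poly.grid` does more efficiently by exploiting the grid structure.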
+
+\author{Doug Nychka}
+
+\seealso{ drape.plot, image.plot,
+ interp.surface, interp.surface.grid, in.land.grid}
+
+\examples{
+data(RMelevation)
+
+# region defining Colorado Front Range
+
+ loc<- rbind( c(-106.5, 40.8),
+ c(-103.9, 37.5))
+
+# extract elevations for just CO frontrange.
+ FR<- crop.image(RMelevation, loc)
+ image.plot( FR, col=terrain.colors(256))
+
+ which.max.image( FR)
+
+# average cells 4 to 1 by doing this twice!
+ temp<- half.image( RMelevation)
+ temp<- half.image( temp)
+
+# or in one step
+ temp<- average.image( RMelevation, Q=4)
+ image.plot( temp, col=terrain.colors(256))
+
+# a polygon (no special meaning entered with just locator)
+x1p<- c(
+ -106.2017, -104.2418, -102.9182, -102.8163, -102.8927, -103.3254, -104.7763,
+ -106.5581, -108.2889, -109.1035, -109.3325, -108.7980)
+
+x2p<- c(
+ 43.02978, 42.80732, 41.89727, 40.84566, 39.81427, 38.17618, 36.53810, 36.29542,
+ 36.90211, 38.29752, 39.45025, 41.02767)
+xp<- cbind( x1p,x2p)
+
+ image.plot( temp)
+ polygon( xp[,1], xp[,2], lwd=2)
+
+# find all grid points inside poly
+ fullset<- make.surface.grid( list( x= temp$x, y= temp$y))
+ ind<- in.poly( fullset,xp)
+
+# take a look
+ plot( fullset, pch=".")
+ polygon( xp[,1], xp[,2], lwd=2)
+ points( fullset[ind,], pch="o", col="red", cex=.5)
+
+# masking out the image NA == white in the image plot
+ temp$z[!ind] <- NA
+ image.plot( temp)
+ polygon( xp[,1], xp[,2], lwd=2)
+
+# This is more efficient for large grids:
+# because the large number of grid location ( xg above) is
+# never explicitly created.
+
+ ind<- in.poly.grid( list( x= temp$x, y= temp$y), xp)
+
+# now use ind in the same way as above to mask points outside of polygon
+
+}
+
+\keyword{ hplot }% at least one, from doc/KEYWORDS
diff --git a/man/interp.surface.Rd b/man/interp.surface.Rd
new file mode 100644
index 0000000..b072bc6
--- /dev/null
+++ b/man/interp.surface.Rd
@@ -0,0 +1,119 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{interp.surface}
+\alias{interp.surface}
+\alias{interp.surface.grid}
+\title{
+ Fast bilinear interpolator from a grid.
+}
+\description{
+Uses bilinear weights to interpolate values on a rectangular
+grid to arbitrary locations or to another grid.
+}
+\usage{
+interp.surface(obj, loc)
+interp.surface.grid(obj, grid.list)
+
+}
+\arguments{
+\item{obj}{
+A list with components x,y, and z in the same style as used by contour,
+persp, image etc. x and y are the X and Y grid values and z is a matrix
+with the corresponding values of the surface
+}
+\item{loc}{
+A matrix of (irregular) locations to interpolate. The first column of loc
+is the X coordinates and the second the Y coordinates.
+}
+\item{grid.list}{ A list with components x and y
+describing the grid to interpolate. The grids do not need to be equally spaced.}
+
+}
+\value{
+ A vector of interpolated values. NAs are returned for
+regions of obj\$z that are NA and also for locations outside of the
+range of the parent grid.
+}
+\details{
+Here is a brief explanation of
+the interpolation: Suppose that the location (locx, locy) lies
+between the first two grid points in both x and y. That is, locx is
+between x1 and x2 and locy is between y1 and y2. Let
+ex = (locx-x1)/(x2-x1) and ey = (locy-y1)/(y2-y1). The interpolant is
+
+(1-ex)(1-ey)*z11 + (1-ex)(ey)*z12 + (ex)(1-ey)*z21 + (ex)(ey)*z22
+
+where the z's are the corresponding elements of the Z matrix.
+
+Note that bilinear interpolation can produce some artifacts related to
+the grid and does not reproduce higher-order behavior in the surface. For example,
+the extrema of the interpolated surface will always be at the parent
+grid locations. There is nothing special about interpolating to
+another grid; this function just includes a \code{for} loop over one
+dimension and a call to the function for irregular locations. It was
+included in fields for convenience since the grid format is so common.
+
+See also the akima package for fast interpolation from irregular locations.
+Many thanks to Jean-Olivier Irisson for making this code more efficient and
+concise.
+
+}
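The bilinear weights in the details above can be written out directly. Here is a small Python sketch, in which the `bilinear` helper and the dict-based image format are hypothetical stand-ins for the R image list; it assumes the equally spaced grid that `interp.surface` requires.

```python
def bilinear(obj, locx, locy):
    """Bilinear interpolation from a grid, following the weights above.

    obj has equally spaced 'x', 'y' and a matrix 'z' (rows indexed so
    z[i][j] sits at (x_i, y_j)); returns None outside the grid,
    mimicking interp.surface's NA.
    """
    x, y, z = obj["x"], obj["y"], obj["z"]
    if not (x[0] <= locx <= x[-1] and y[0] <= locy <= y[-1]):
        return None
    # index of the lower-left grid point of the enclosing cell
    i = min(int((locx - x[0]) / (x[1] - x[0])), len(x) - 2)
    j = min(int((locy - y[0]) / (y[1] - y[0])), len(y) - 2)
    ex = (locx - x[i]) / (x[i + 1] - x[i])
    ey = (locy - y[j]) / (y[j + 1] - y[j])
    return ((1 - ex) * (1 - ey) * z[i][j] + (1 - ex) * ey * z[i][j + 1]
            + ex * (1 - ey) * z[i + 1][j] + ex * ey * z[i + 1][j + 1])

obj = {"x": [0.0, 1.0], "y": [0.0, 1.0], "z": [[0.0, 1.0], [1.0, 2.0]]}
```

Note that at a grid point the weights collapse to a single z value, which is why the extrema of the interpolated surface always occur at parent grid locations.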
+\seealso{
+image.smooth, as.surface, as.image, image.plot, krig.image,Tps
+}
+\examples{
+#
+# evaluate an image at a finer grid
+#
+
+data( lennon)
+# create an example in the right list format like image or contour
+obj<- list( x= 1:20, y=1:20, z= lennon[ 201:220, 201:220])
+
+set.seed( 123)
+# lots of random points
+N<- 500
+loc<- cbind( runif(N)*20, runif(N)*20)
+z.new<- interp.surface( obj, loc)
+# compare the image with bilinear interpolation at scattered points
+set.panel(2,2)
+image.plot( obj)
+quilt.plot( loc, z.new)
+
+
+# sample at 100X100 equally spaced points on a grid
+
+grid.list<- list( x= seq( 1,20,,100), y= seq( 1,20,,100))
+
+interp.surface.grid( obj, grid.list)-> look
+
+# take a look
+set.panel(2,2)
+image.plot( obj)
+image.plot( look)
+
+}
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/lennon.Rd b/man/lennon.Rd
new file mode 100644
index 0000000..442a95e
--- /dev/null
+++ b/man/lennon.Rd
@@ -0,0 +1,36 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{lennon}
+\alias{lennon}
+\title{
+ Gray image of John Lennon.
+}
+\description{
+A 256X256 image of John Lennon.
+Try:
+
+\code{image(lennon, col=grey(seq(0,1,,256)) )}
+
+}
+\keyword{datasets}
+% docclass is data
+% Converted by Sd2Rd version 1.21.
diff --git a/man/mKrig.MLE.Rd b/man/mKrig.MLE.Rd
new file mode 100644
index 0000000..d736ca2
--- /dev/null
+++ b/man/mKrig.MLE.Rd
@@ -0,0 +1,266 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{mKrig.MLE}
+\alias{mKrig.MLE}
+\alias{mKrig.MLE.joint}
+\alias{fastTps.MLE}
+%- Also NEED an '\alias' for EACH other topic documented here.
+\title{
+Maximizes likelihood for the process marginal variance (rho) and
+ nugget standard deviation (sigma) parameters (equivalently lambda) over
+ many covariance models or covariance parameter values.
+}
+\description{
+These functions are designed to explore the likelihood surface for
+different covariance parameters with the option of maximizing over
+sigma and rho. They are deprecated and may be omitted in later versions of
+fields, with their roles being taken over by other functions. See details below.
+}
+\usage{
+mKrig.MLE(x, y, weights = rep(1, nrow(x)), cov.fun="stationary.cov",
+cov.args = NULL,
+ Z = NULL, par.grid = NULL, lambda = NULL, lambda.profile = TRUE,
+ verbose = FALSE, relative.tolerance = 1e-04, ...)
+
+mKrig.MLE.joint(x, y, weights = rep(1, nrow(x)),
+ lambda.guess = 1, cov.params.guess=NULL,
+ cov.fun="stationary.cov", cov.args=NULL,
+ Z = NULL, optim.args=NULL, find.trA.MLE = FALSE,
+ ..., verbose = FALSE)
+
+fastTps.MLE(x, y, weights = rep(1, nrow(x)), Z = NULL, ...,
+ par.grid=NULL, theta, lambda = NULL, lambda.profile = TRUE,
+ verbose = FALSE, relative.tolerance = 1e-04)
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+
+\item{cov.args}{ Additional arguments that would also be included in calls
+ to the covariance function to specify the fixed part of
+ the covariance model.}
+
+\item{cov.fun}{The name, a text string, of the covariance function.}
+
+\item{cov.params.guess}{A list of initial guesses for covariance parameters
+over which the user wishes to perform likelihood maximization. The list
+ contains the names of the parameters as well as the values.}
+
+ \item{find.trA.MLE}{If TRUE will estimate the effective degrees of freedom using
+ a simple Monte Carlo method throughout joint likelihood maximization.
+ Either way, the trace of the mKrig object with the best
+ log-likelihood is calculated depending on \code{find.trA}. Computing
+ the trace will add to the computational burden by approximately NtrA
+ solutions of the linear system, but the Cholesky decomposition is
+ reused.}
+
+\item{lambda}{If \code{lambda.profile=FALSE}, the values of lambda at which to
+ evaluate the likelihood; if TRUE, the starting values for the
+ optimization. If lambda is NA then the optimum value from the
+ previous search is used as the starting value. If lambda is
+ NA and it is the first value, the starting value defaults to 1.0.}
+
+\item{lambda.guess}{The initial guess for lambda in the joint log-likelihood
+ maximization process.}
+
+\item{lambda.profile}{ If \code{TRUE} maximize likelihood over lambda.}
+
+\item{optim.args}{Additional arguments that would also be included in calls
+ to the optim function in joint likelihood maximization. If
+ \code{NULL}, this will be set to use the "BFGS"
+ optimization method. See \code{\link{optim}} for more
+ details. The default value is:
+ \code{optim.args = list(method = "BFGS",
+ control=list(fnscale = -1,
+ ndeps = rep(log(1.1), length(cov.params.guess)+1),
+ reltol=1e-04, maxit=10))}
+ Note that the first parameter is lambda and the others are
+ the covariance parameters in the order they are given in
+ \code{cov.params.guess}. Also note that the optimization
+ is performed on a log-scale, and this should be taken into
+ consideration when passing arguments to \code{optim}.}
+
+\item{par.grid}{A list or data frame with components being parameters for
+ different covariance models. A typical component is "theta"
+ comprising a vector of scale parameters to try. If par.grid
+ is "NULL" then the covariance model is fixed at values that
+ are given in \dots.}
+
+ \item{relative.tolerance}{Relative tolerance used to declare convergence when
+ maximizing likelihood over lambda.}
+
+
+\item{theta}{Range parameter for the compact Wendland covariance. (See fastTps.)}
+
+\item{verbose}{If \code{TRUE} print out interesting intermediate results.}
+
+\item{weights}{Precision ( 1/variance) of each observation}
+
+\item{x}{
+Matrix of unique spatial locations (or in print or surface
+ the returned mKrig object.)}
+
+\item{y}{
+Vector or matrix of observations at spatial locations,
+ missing values are not allowed! Or in mKrig.coef a new
+ vector of observations. If y is a matrix the columns are
+ assumed to be independent observations vectors generated
+ from the same covariance and measurement error model.
+}
+
+\item{Z}{Linear covariates to be included in fixed part of the
+ model that are distinct from the default low order
+ polynomial in \code{x}}
+
+\item{\dots}{Additional arguments that would also be included in a call to
+ \code{mKrig} to specify the covariance model and fixed model
+ covariables.}
+
+}
+\details{
+The "mKrig" prefixed functions are deprecated and are replaced in functionality
+by \code{\link{mKrigMLEJoint}} and \code{\link{mKrigMLEGrid}}.
+
+The observational model follows the same form as that described in the
+\code{Krig} function, and thus the two primary covariance parameters
+for a stationary model are the nugget standard deviation (sigma) and
+the marginal variance of the process (rho). It is useful to
+reparametrize as rho and lambda = sigma^2/rho. The likelihood can be
+maximized analytically over rho and the parameters in the fixed part
+of the model; the estimate of rho can then be substituted back into the
+likelihood to give an expression that is just a function of lambda and
+the remaining covariance parameters. It is this expression that is
+then maximized numerically over lambda when \code{ lambda.profile =
+TRUE}.
+
+Note that \code{fastTps.MLE} is a convenient variant of this more general
+version to use directly with fastTps, and \code{mKrig.MLE.joint} is
+similar to \code{mKrig.MLE}, except it uses the \code{optim} function
+to optimize over the specified covariance parameters and lambda jointly
+rather than optimizing on a grid. Unlike \code{mKrig.MLE}, it returns
+an mKrig object.
+}
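The numerical step above is a one-dimensional maximization of the profiled likelihood over lambda (on a log scale), so any line search for a unimodal function will do. Here is a generic Python sketch using golden-section search; the helper name and the toy profile log-likelihood are purely illustrative, since fields uses its own grid/refinement scheme here and `optim` for the joint case.

```python
import math

def maximize_1d(f, lo, hi, tol=1e-6):
    """Golden-section search for the maximizer of a unimodal f on [lo, hi].

    Stand-in for the numerical profile-likelihood maximization over
    log(lambda) described above.
    """
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) > f(d):        # maximum lies in [a, d]: shrink from the right
            b, d = d, c
            c = b - g * (b - a)
        else:                  # maximum lies in [c, b]: shrink from the left
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

# toy concave profile log-likelihood in log(lambda), peaked at log(lambda) = -1
log_lam_hat = maximize_1d(lambda u: -(u + 1.0) ** 2, -10.0, 5.0)
```

Working on the log scale keeps the search well behaved because lambda is a positive ratio of variances.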
+\value{
+\code{mKrig.MLE} returns a list with the components:
+
+\item{summary}{A matrix giving the results for evaluating the
+ likelihood for each covariance model.}
+
+\item{par.grid}{The par.grid argument used.}
+
+\item{cov.args.MLE}{The list of covariance arguments (except for
+lambda) that have the largest likelihood over the list of covariance
+models. To fit the surface at the largest likelihood among those tried, use
+
+\code{ do.call( "mKrig", c(obj$mKrig.args,
+obj$cov.args.MLE,list(lambda=obj$lambda.opt)) )} where \code{obj} is
+the list returned by this function.}
+
+\item{call}{The calling arguments to this function.}
+
+\code{mKrig.MLE.joint} returns an mKrig object with the best
+computed log-likelihood computed in the maximization process
+with the addition of the summary table for the mKrig object
+log-likelihood and:
+
+\item{lnLike.eval}{
+A table containing information on all likelihood evaluations
+performed in the maximization process.
+}
+}
+\references{
+%% ~put references to the literature/web site here ~
+http://cran.r-project.org/web/packages/fields/fields.pdf
+http://www.image.ucar.edu/~nychka/Fields/
+}
+\author{
+%% ~~who you are~~
+Douglas W. Nychka, John Paige
+}
+
+%% ~Make other sections like Warning with \section{Warning }{....} ~
+
+\seealso{
+%% ~~objects to See Also as \code{\link{help}}, ~~~
+\code{\link{mKrig}}
+\code{\link{Krig}}
+\code{\link{stationary.cov}}
+\code{\link{optim}}
+}
+\examples{
+# some synthetic data
+ N<- 100
+ set.seed(123)
+ x<- matrix(runif(2*N), N,2)
+ theta<- .2
+ Sigma<- Matern( rdist(x,x)/theta , smoothness=1.0)
+ Sigma.5<- chol( Sigma)
+ sigma<- .1
+ M<-5 # Five (5) independent spatial data sets
+ F.true<- t( Sigma.5)\%*\% matrix( rnorm(N*M), N,M)
+ Y<- F.true + sigma* matrix( rnorm(N*M), N,M)
+# find MLE for lambda with range and smoothness fixed in Matern for first
+# data set
+ obj<- mKrig.MLE( x,Y[,1], Covariance="Matern", theta=.2, smoothness=1.0)
+ obj$summary # take a look
+ fit<- mKrig( x,Y[,1], Covariance="Matern", theta=.2,
+ smoothness=1.0, lambda= obj$lambda.best)
+#
+# search over the range parameter and use all 5 replications for combined
+# likelihood
+\dontrun{
+ par.grid<- list( theta= seq(.1,.25,,6))
+# default starting value for lambda is .02 subsequent ones use previous optimum.
+ obj<- mKrig.MLE( x,Y, Covariance="Matern",lambda=c(.02,rep(NA,4)),
+ smoothness=1.0, par.grid=par.grid)
+}
+
+#perform joint likelihood maximization over lambda and theta.
+#optim can get a bad answer with poor initial guesses.
+set.seed(123)
+obj<- mKrig.MLE.joint(x,Y[,1],
+ cov.args=list(Covariance="Matern", smoothness=1.0),
+ cov.params.guess=list(theta=.2), lambda.guess=.1)
+
+#look at lnLik evaluations
+obj$lnLik.eval
+
+\dontrun{
+#perform joint likelihood maximization over lambda, theta, and smoothness.
+#optim can get a bad answer with poor initial guesses.
+set.seed(123)
+obj<- mKrig.MLE.joint(x,Y[,1],
+ cov.args=list(Covariance="Matern"),
+ cov.params.guess=list(theta=.2, smoothness=1), lambda.guess=.1)
+
+#look at lnLik evaluations
+obj$lnLik.eval
+
+#generate surface plot of results of joint likelihood maximization
+#NOTE: mKrig.MLE.joint returns mKrig object while mKrig.MLE doesn't,
+#so this won't work for mKrig.MLE.
+surface(obj)
+}
+}
+% Add one or more standard keywords, see file 'KEYWORDS' in the
+% R documentation directory.
+\keyword{ Kriging }
+\keyword{ MLE }
+\keyword{ spatial }
diff --git a/man/mKrig.Rd b/man/mKrig.Rd
new file mode 100644
index 0000000..7751c75
--- /dev/null
+++ b/man/mKrig.Rd
@@ -0,0 +1,525 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{mKrig}
+\alias{mKrig}
+\alias{predict.mKrig}
+\alias{mKrig.coef}
+\alias{mKrig.trace}
+\alias{print.mKrig}
+\alias{summary.mKrig}
+\alias{mKrigCheckXY}
+
+\title{"micro Krig" Spatial process estimate of a curve or surface,
+"kriging" with a known covariance function. }
+\description{
+This is a simple version of the Krig function that is
+optimized for large data sets, sparse linear algebra, and a clear exposition of the
+computations. Lambda, the smoothing parameter, must be fixed.
+This function is called by higher-level functions for maximum likelihood estimates of
+covariance parameters.
+}
+\usage{
+mKrig(x, y, weights = rep(1, nrow(x)), Z = NULL,
+ cov.function = "stationary.cov", cov.args = NULL,
+ lambda = 0, m = 2, chol.args = NULL, find.trA = TRUE,
+ NtrA = 20, iseed = 123, llambda = NULL, na.rm = FALSE,
+ collapseFixedEffect = TRUE,
+ ...)
+
+\method{predict}{mKrig}( object, xnew=NULL,ynew=NULL, grid.list = NULL,
+derivative=0,
+Z=NULL,drop.Z=FALSE,just.fixed=FALSE,
+collapseFixedEffect = object$collapseFixedEffect, ...)
+
+\method{summary}{mKrig}(object, ...)
+
+\method{print}{mKrig}( x, digits=4,... )
+
+mKrig.coef(object, y, collapseFixedEffect=TRUE)
+
+mKrig.trace( object, iseed, NtrA)
+
+mKrigCheckXY(x, y, weights, Z, na.rm)
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+\item{collapseFixedEffect}{ If replicated fields are given to mKrig (i.e.
+\code{y} has more than one column) there is the choice of estimating the
+fixed effect coefficients (\code{d} in the returned object) separately
+for each replicate or pooling across replicates and deriving a single
+estimate. If \code{collapseFixedEffect} is TRUE (default) the estimates are
+pooled. }
+
+\item{chol.args}{A list of optional arguments (pivot, nnzR) that will
+be used with the call to the cholesky decomposition. Pivoting is done
+by default to make use of sparse matrices when they are
+generated. This argument is useful in some cases for sparse covariance
+functions to reset the memory parameter nnzR. (See example below.)}
+
+\item{cov.args}{A list of optional arguments that will be used in
+ calls to the covariance function.}
+
+\item{cov.function}{The name, a text string, of the covariance function.}
+
+\item{derivative}{If zero the surface will be evaluated. If not zero
+ the matrix of partial derivatives will be computed.}
+
+\item{digits}{Number of significant digits used in printed output.}
+
+\item{drop.Z}{If true the fixed part will only be evaluated at the
+polynomial part of the fixed model. The contribution from the other
+covariates will be omitted.}
+
+\item{find.trA}{
+If TRUE will estimate the effective degrees of freedom using
+ a simple Monte Carlo method. This will add to the computational
+ burden by approximately NtrA solutions of the linear system but
+ the Cholesky decomposition is reused.}
+
+\item{grid.list}{A grid.list to evaluate the surface in place of specifying
+ arbitrary locations.}
+
+\item{iseed}{Random seed ( using \code{set.seed(iseed)}) used to generate
+ iid normals for Monte Carlo estimate of the trace.}
+
+\item{just.fixed}{If TRUE only the predictions for the fixed part of
+ the model will be evaluated.}
+
+\item{lambda}{ Smoothing parameter or equivalently the ratio between
+ the nugget and process variances.}
+
+\item{llambda}{If not \code{NULL} then \code{lambda = exp( llambda)}}
+
+\item{m}{ The degree of the polynomial used in the fixed part is (m-1).}
+
+\item{na.rm}{If TRUE NAs in y are omitted along with corresponding rows of x.}
+
+\item{NtrA}{Number of Monte Carlo samples for the trace. But if NtrA is
+ greater than or equal to the number of observations the trace
+ is computed exactly.}
+
+\item{object}{Object returned by mKrig. (Same as "x"
+ in the print function.)}
+
+\item{weights}{Precision ( 1/variance) of each observation}
+
+\item{x}{Matrix of unique spatial locations (or in print or surface
+the returned mKrig object.)}
+
+\item{xnew}{Locations for predictions.}
+
+\item{y}{ Vector or matrix of observations at spatial locations,
+ missing values are not allowed! Or in mKrig.coef a new
+ vector of observations. If y is a matrix the columns are
+ assumed to be independent replicates of the spatial field. I.e.
+ observation vectors generated
+ from the same covariance and measurement error model but
+ independent from each other.
+}
+
+\item{ynew}{New observation vector. \code{mKrig} will reuse matrix
+ decompositions and find the new fit to these data.}
+
+\item{Z}{ Linear covariates to be included in fixed part of the
+ model that are distinct from the default low order
+ polynomial in \code{x}. (NOTE the order of the polynomial
+ determined by \code{m})}
+
+\item{\dots}{ In \code{mKrig} and \code{predict} additional arguments
+that will be passed to the covariance function.}
+
+}
+\details{
+ This function is an abridged version of Krig. The "m" stands for micro
+and this function focuses on the computations in Krig.engine.fixed
+done for a fixed lambda parameter, for unique spatial locations and
+for data without missing values.
+
+These restrictions simplify the code for reading. Note also that
+little checking is done and the spatial locations are not transformed
+before the estimation. Because most of the operations are linear
+algebra this code has been written to handle multiple data
+sets. Specifically, if the spatial model is the same except for
+different observed values (the y's), one can pass \code{y} as a matrix
+and the computations are done efficiently for each set. Note that
+this is not a multivariate spatial model, just an efficient computation
+over several data vectors without explicit looping. A big difference in
+the computations is that an exact expression for the trace of the
+smoothing matrix (trace A(lambda)) is computationally expensive and
+a Monte Carlo approximation is supplied instead.
+
+See \code{predictSE.mKrig} for prediction standard errors and
+\code{sim.mKrig.approx} to quantify the uncertainty in the estimated function using conditional
+simulation.
+
+\code{predict.mKrig} will evaluate the derivatives of the estimated
+function if derivatives are supported in the covariance function. For
+example the wendland.cov function supports derivatives.
+
+\code{print.mKrig} is a simple summary function for the object.
+
+\code{mKrig.coef} finds the "d" and "c" coefficients that represent the
+solution using the previous cholesky decomposition but for a new data
+vector. This is used in computing the prediction standard error in
+predictSE.mKrig and can also be used to evaluate the estimate
+efficiently at new vectors of observations, provided the locations and
+covariance remain fixed.
+
+Sparse matrix methods are handled through overloading the usual linear
+algebra functions with sparse versions. But to take advantage of some
+additional options in the sparse methods the list argument chol.args
+is a device for changing some default values. The most important of
+these is \code{nnzR}, the number of nonzero elements anticipated in
+the Cholesky factorization of the positive definite linear system used
+to solve for the basis coefficients. The sparsity pattern of this system is
+essentially the same as that of the covariance matrix evaluated at the
+observed locations. As an example of resetting \code{nnzR} to 450000
+one would use the following argument for chol.args in mKrig:
+
+\code{ chol.args=list(pivot=TRUE,memory=list(nnzR= 450000))}
+
+\code{mKrig.trace} This is an internal function called by \code{mKrig}
+to estimate the effective degrees of freedom. The Kriging surface
+estimate at the data locations is a linear function of the data and
+can be represented as A(lambda)y. The trace of A is one useful
+measure of the effective degrees of freedom used in the surface
+representation. In particular this figures into the GCV estimate of
+the smoothing parameter. It is computationally intensive to find the
+trace explicitly but there is a simple Monte Carlo estimate that is
+often very useful. If E is a vector of iid N(0,1) random variables
+then the trace of A is the expected value of t(E)AE. Note that AE is
+simply predicting a surface at the data location using the synthetic
+observation vector E. This is done for \code{NtrA} independent N(0,1)
+vectors and the mean and standard deviation are reported in the
+\code{mKrig} summary. Typically as the number of observations is
+increased this estimate becomes more accurate. If NtrA is as large as
+the number of observations (\code{np}) then the algorithm switches to
+finding the trace exactly based on applying A to \code{np} unit
+vectors.
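The Monte Carlo trace estimate described above can be sketched in a few lines of stand-alone R. Here a toy smoother (scaling every observation by 0.3) stands in for A(lambda); the sizes and the 0.3 factor are arbitrary choices for illustration only:

```r
set.seed(1)
n    <- 200  # number of observations (arbitrary, for illustration)
NtrA <- 50   # number of Monte Carlo test vectors

# A hypothetical smoother: A v = 0.3 * v, so trace(A) = 0.3 * n = 60.
Afun <- function(v) 0.3 * v

# iid N(0,1) test vectors E; each column gives one draw of t(E) A E.
E <- matrix(rnorm(n * NtrA), n, NtrA)
trSamples <- colSums(E * Afun(E))

trEst <- mean(trSamples)             # Monte Carlo estimate of trace(A)
seEst <- sd(trSamples) / sqrt(NtrA)  # its standard error
```

With NtrA equal to n one could instead apply the smoother to the n unit vectors and recover the trace exactly, mirroring the exact branch described above.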
+}
+\value{
+ \item{d}{Coefficients of the polynomial fixed part and if present
+ the covariates (Z). To determine which is which the logical vector
+ ind.drift also part of this object is TRUE for the polynomial
+ part. }
+
+ \item{c}{ Coefficients of the nonparametric part.}
+
+ \item{nt}{ Dimension of fixed part.}
+
+ \item{np}{ Dimension of c.}
+
+ \item{nZ}{Number of columns of Z covariate matrix (can be zero).}
+
+ \item{ind.drift}{Logical vector that indicates polynomial
+ coefficients in the \code{d} coefficients vector. This is helpful
+ to distinguish between the polynomial part and the extra covariate
+ coefficients associated with Z. }
+
+ \item{lambda.fixed}{The fixed lambda value}
+
+ \item{x}{Spatial locations used for fitting.}
+
+ \item{knots}{The same as x}
+
+ \item{cov.function.name}{Name of covariance function used.}
+
+ \item{args}{ A list with all the covariance arguments that were
+ specified in the call.}
+
+ \item{m}{Order of fixed part polynomial.}
+
+ \item{chol.args}{ A list with all the cholesky arguments that were
+ specified in the call.}
+
+ \item{call}{ A copy of the call to mKrig.}
+
+ \item{non.zero.entries}{ Number of nonzero entries in the covariance
+matrix for the process at the observation locations.}
+
+ \item{shat.MLE}{MLE of sigma.}
+
+ \item{rho.MLE}{MLE of rho.}
+
+ \item{rhohat}{Estimate for rho adjusted for fixed model degrees of
+ freedom (ala REML).}
+
+ \item{lnProfileLike}{log Profile likelihood for lambda}
+
+ \item{lnDetCov}{Log determinant of the covariance matrix for the
+ observations having factored out rho.}
+
+ \item{Omega}{GLS covariance for the estimated parameters in the fixed
+part of the model (d coefficients).}
+
+ \item{qr.VT, Mc}{QR and cholesky matrix decompositions needed to
+recompute the estimate for new observation vectors.}
+
+ \item{fitted.values, residuals}{Usual predictions from fit.}
+
+ \item{eff.df}{Estimate of effective degrees of freedom. Either the
+mean of the Monte Carlo sample or the exact value. }
+
+ \item{trA.info}{If NtrA is less than \code{np} then the individual
+members of the Monte Carlo sample and \code{sd(trA.info)/ sqrt(NtrA)}
+is an estimate of the standard error. If NtrA is greater than or equal
+to \code{np} then these are the diagonal elements of A(lambda).}
+
+ \item{GCV}{Estimated value of the GCV function.}
+
+ \item{GCV.info}{Monte Carlo sample of the GCV function values.}
+
+}
+\author{Doug Nychka, Reinhard Furrer, John Paige}
+\seealso{ Krig, surface.mKrig, Tps, fastTps, predictSurface, predictSE.mKrig, sim.mKrig.approx,
+ \code{ \link{mKrig.grid}}}
+\examples{
+#
+# Midwest ozone data 'day 16' stripped of missings
+ data( ozone2)
+ y<- ozone2$y[16,]
+ good<- !is.na( y)
+ y<-y[good]
+ x<- ozone2$lon.lat[good,]
+# nearly interpolate using defaults (Exponential covariance range = 2.0)
+# see also mKrigMLEGrid to choose lambda by maximum likelihood
+ out<- mKrig( x,y, theta = 2.0, lambda=.01)
+ out.p<- predictSurface( out)
+ surface( out.p)
+#
+# NOTE this should be identical to
+# Krig( x,y, theta=2.0, lambda=.01)
+
+##############################################################################
+# an example using a "Z" covariate and the Matern family
+# again see mKrigMLEGrid to choose parameters by MLE.
+data(COmonthlyMet)
+yCO<- CO.tmin.MAM.climate
+good<- !is.na( yCO)
+yCO<-yCO[good]
+xCO<- CO.loc[good,]
+Z<- CO.elev[good]
+out<- mKrig( xCO,yCO, Z=Z, cov.function="stationary.cov", Covariance="Matern",
+ theta=4.0, smoothness=1.0, lambda=.1)
+set.panel(2,1)
+# quilt.plot with elevations
+quilt.plot( xCO, predict(out))
+# Smooth surface without elevation linear term included
+surface( out)
+set.panel()
+
+#########################################################################
+# Interpolate using tapered version of the exponential,
+# the taper scale is set to 1.5 default taper covariance is the Wendland.
+# Tapering will be done at a scale of 1.5 relative to the scaling
+# done through the theta passed to the covariance function.
+data( ozone2)
+ y<- ozone2$y[16,]
+ good<- !is.na( y)
+ y<-y[good]
+ x<- ozone2$lon.lat[good,]
+ mKrig( x,y,cov.function="stationary.taper.cov",
+ theta = 2.0, lambda=.01,
+ Taper="Wendland", Taper.args=list(theta = 1.5, k=2, dimension=2)
+ ) -> out2
+
+# Try out GCV on a grid of lambda's.
+# For this small data set
+# one should really just use Krig or Tps but this is an example of
+# approximate GCV that will work for much larger data sets using sparse
+# covariances and the Monte Carlo trace estimate
+#
+# a grid of lambdas:
+ lgrid<- 10**seq(-1,1,,15)
+ GCV<- matrix( NA, 15,20)
+ trA<- matrix( NA, 15,20)
+ GCV.est<- rep( NA, 15)
+ eff.df<- rep( NA, 15)
+ logPL<- rep( NA, 15)
+# loop over lambda's
+ for( k in 1:15){
+ out<- mKrig( x,y,cov.function="stationary.taper.cov",
+ theta = 2.0, lambda=lgrid[k],
+ Taper="Wendland", Taper.args=list(theta = 1.5, k=2, dimension=2) )
+ GCV[k,]<- out$GCV.info
+ trA[k,]<- out$trA.info
+ eff.df[k]<- out$eff.df
+ GCV.est[k]<- out$GCV
+ logPL[k]<- out$lnProfileLike
+ }
+#
+# plot the results different curves are for individual estimates
+# the two lines are whether one averages first the traces or the GCV criterion.
+#
+ par( mar=c(5,4,4,6))
+ matplot( trA, GCV, type="l", col=1, lty=2,
+ xlab="effective degrees of freedom", ylab="GCV")
+ lines( eff.df, GCV.est, lwd=2, col=2)
+ lines( eff.df, rowMeans(GCV), lwd=2)
+# add exact GCV computed by Krig
+ out0<- Krig( x,y,cov.function="stationary.taper.cov",
+ theta = 2.0,
+ Taper="Wendland", Taper.args=list(theta = 1.5, k=2, dimension=2),
+ spam.format=FALSE)
+ lines( out0$gcv.grid[,2:3], lwd=4, col="darkgreen")
+
+# add profile likelihood
+ utemp<- par()$usr
+ utemp[3:4] <- range( -logPL)
+ par( usr=utemp)
+ lines( eff.df, -logPL, lwd=2, col="blue", lty=2)
+ axis( 4)
+ mtext( side=4,line=3, "-ln profile likelihood", col="blue")
+ title( "GCV ( green = exact) and -ln profile likelihood", cex=2)
+
+#########################################################################
+# here is a series of examples with bigger datasets
+# using a compactly supported covariance directly
+
+set.seed( 334)
+N<- 1000
+x<- matrix( 2*(runif(2*N)-.5),ncol=2)
+y<- sin( 1.8*pi*x[,1])*sin( 2.5*pi*x[,2]) + rnorm( 1000)*.1
+
+look2<-mKrig( x,y, cov.function="wendland.cov",k=2, theta=.2,
+ lambda=.1)
+
+# take a look at fitted surface
+predictSurface(look2)-> out.p
+surface( out.p)
+
+# this works because the number of nonzero elements within distance theta
+# are less than the default maximum allocated size of the
+# sparse covariance matrix.
+# see spam.options() for the default values
+
+# The following will give a warning for theta=.9 because
+# allocation for the covariance matrix storage is too small.
+# Here theta controls the support of the covariance and so
+# indirectly the number of nonzero elements in the sparse matrix
+
+\dontrun{
+ look2<- mKrig( x,y, cov.function="wendland.cov",k=2, theta=.9, lambda=.1)
+}
+
+# The warning resets the memory allocation for the covariance matrix
+# according to the values 'spam.options(nearestdistnnz=c(416052,400))'
+# this is inefficient because the preliminary pass failed.
+
+# the following call completes the computation in "one pass"
+# without a warning and without having to reallocate more memory.
+
+spam.options(nearestdistnnz=c(416052,400))
+ look2<- mKrig( x,y, cov.function="wendland.cov",k=2,
+ theta=.9, lambda=1e-2)
+# as a check notice that
+# print( look2)
+# reports the number of nonzero elements consistent with the specific allocation
+# increase in spam.options
+
+
+# new data set of 1500 locations
+ set.seed( 234)
+ N<- 1500
+ x<- matrix( 2*(runif(2*N)-.5),ncol=2)
+ y<- sin( 1.8*pi*x[,1])*sin( 2.5*pi*x[,2]) + rnorm( N)*.01
+
+\dontrun{
+# the following is an example of where the allocation (for nnzR)
+# for the cholesky factor is too small. A warning is issued and
+# the allocation is increased by 25% in this example
+#
+ look2<- mKrig( x,y,
+ cov.function="wendland.cov",k=2, theta=.1, lambda=1e2 )
+}
+# to avoid the warning
+ look2<-mKrig( x,y,
+ cov.function="wendland.cov", k=2, theta=.1,
+ lambda=1e2, chol.args=list(pivot=TRUE, memory=list(nnzR= 450000)))
+
+###############################################################################
+# fitting multiple data sets
+#
+#\dontrun{
+ y1<- sin( 1.8*pi*x[,1])*sin( 2.5*pi*x[,2]) + rnorm( N)*.01
+ y2<- sin( 1.8*pi*x[,1])*sin( 2.5*pi*x[,2]) + rnorm( N)*.01
+ Y<- cbind(y1,y2)
+ look3<- mKrig( x,Y,cov.function="wendland.cov",k=2, theta=.1,
+ lambda=1e2 )
+# note slight difference in summary because two data sets have been fit.
+ print( look3)
+#}
+
+##################################################################
+# finding a good choice for theta as a taper
+
+# Suppose the target is a spatial prediction using roughly 50 nearest neighbors
+# (tapering covariances is effective for roughly 20 or more in the situation of
+# interpolation) see Furrer, Genton and Nychka (2006).
+
+# take a look at a random set of 100 points to get idea of scale
+
+ set.seed(223)
+ ind<- sample( 1:N,100)
+ hold<- rdist( x[ind,], x)
+ dd<- (apply( hold, 1, sort))[65,]
+ dguess<- max(dd)
+# dguess is now a reasonable guess at finding cutoff distance for
+# 50 or so neighbors
+
+# full distance matrix excluding distances greater than dguess
+# but omit the diagonal elements -- we know these are zero!
+ hold<- nearest.dist( x, delta= dguess,upper=TRUE)
+# exploit spam format to get a quick count of the nonzero elements in each row
+ hold2<- diff( hold@rowpointers)
+# min( hold2) = 55 which we declare close enough
+# now the following will use no less than 55 nearest neighbors
+# due to the tapering.
+\dontrun{
+ mKrig( x,y, cov.function="wendland.cov",k=2, theta=dguess,
+ lambda=1e2) -> look2
+}
+
+###############################################################################
+# use precomputed distance matrix
+#
+\dontrun{
+ y1<- sin( 1.8*pi*x[,1])*sin( 2.5*pi*x[,2]) + rnorm( N)*.01
+ y2<- sin( 1.8*pi*x[,1])*sin( 2.5*pi*x[,2]) + rnorm( N)*.01
+ Y<- cbind(y1,y2)
+ #precompute distance matrix in compact form
+ distMat = rdist(x, compact=TRUE)
+ look3<- mKrig( x,Y,cov.function="stationary.cov", theta=.1,
+ lambda=1e2, distMat=distMat )
+ #precompute distance matrix in standard form
+ distMat = rdist(x)
+ look3<- mKrig( x,Y,cov.function="stationary.cov", theta=.1,
+ lambda=1e2, distMat=distMat )
+}
+}
+\references{
+%% ~put references to the literature/web site here ~
+http://cran.r-project.org/web/packages/fields/fields.pdf
+http://www.image.ucar.edu/~nychka/Fields/
+}
+% Add one or more standard keywords, see file 'KEYWORDS' in the
+% R documentation directory.
+\keyword{spatial }
diff --git a/man/mKrigMLE.Rd b/man/mKrigMLE.Rd
new file mode 100644
index 0000000..2a3c8b3
--- /dev/null
+++ b/man/mKrigMLE.Rd
@@ -0,0 +1,356 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka@ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{mKrigMLE}
+\alias{mKrigMLEJoint}
+\alias{mKrigMLEGrid}
+\alias{fastTpsMLE}
+\alias{mKrigJointTemp.fn}
+%- Also NEED an '\alias' for EACH other topic documented here.
+\title{
+Maximizes likelihood for the process marginal variance (rho) and
+ nugget standard deviation (sigma) parameters (e.g. lambda) over
+ many covariance models or covariance parameter values.
+}
+\description{
+These functions are designed to explore the likelihood surface for
+different covariance parameters with the option of maximizing over
+sigma and rho. They use the \code{mKrig} base function and are
+designed for computational efficiency.
+}
+\usage{
+mKrigMLEGrid(x, y, weights = rep(1, nrow(x)), Z = NULL, mKrig.args = NULL,
+ cov.fun = "stationary.cov", cov.args = NULL, na.rm = TRUE,
+ par.grid = NULL, lambda = NULL, lambda.profile = TRUE,
+ relative.tolerance = 1e-04,
+ REML = FALSE, verbose = FALSE)
+
+mKrigMLEJoint(x, y, weights = rep(1, nrow(x)), Z = NULL, mKrig.args
+ = NULL, na.rm = TRUE, cov.fun = "stationary.cov",
+ cov.args = NULL, lambda.start = 0.5, cov.params.start
+ = NULL, optim.args = NULL, abstol = 1e-04,
+ parTransform = NULL, REML = FALSE, verbose = FALSE)
+
+
+fastTpsMLE(x, y, weights = rep(1, nrow(x)), Z = NULL, ...,
+ par.grid=NULL, theta, lambda = NULL, lambda.profile = TRUE,
+ verbose = FALSE, relative.tolerance = 1e-04)
+
+mKrigJointTemp.fn(parameters, mKrig.args, cov.args, parTransform,
+ parNames, REML = FALSE, capture.env)
+
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+ \item{abstol}{Absolute convergence tolerance used in optim.}
+
+ \item{capture.env}{For the ML objective function the frame to save the results of the evaluation. This should be the environment of the function calling optim.}
+
+\item{cov.fun}{
+The name, a text string, of the covariance function.
+}
+\item{cov.args}{
+Additional arguments that would also be included in calls
+ to the covariance function to specify the fixed part of
+ the covariance model.
+}
+
+\item{cov.params.start}{
+A list of initial starts for covariance parameters over which
+ the user wishes to perform likelihood maximization. The list
+ contains the names of the parameters as well as the values.
+}
+ \item{lambda}{
+If \code{lambda.profile=FALSE} the values of lambda at which to evaluate
+ the likelihood; if \code{TRUE}, the starting values for the
+ optimization. If lambda is NA then the optimum value from the
+ previous search is used as the starting value. If lambda is
+ NA and it is the first value, the starting value defaults to
+ 1.0.
+}
+ \item{lambda.start}{
+The initial guess for lambda in the joint log-likelihood
+ maximization process
+}
+
+ \item{lambda.profile}{
+ If \code{TRUE} maximize likelihood over lambda.
+}
+
+\item{mKrig.args}{A list of additional parameters to supply to the base
+\code{mKrig} function that are distinct from the covariance model.
+For example \code{mKrig.args= list( m=1 )} will set the polynomial to be
+just a constant term (degree = m -1 = 0).
+}
+
+\item{na.rm}{Remove NAs from data.}
+
+\item{optim.args}{
+Additional arguments that would also be included in calls
+ to the optim function in joint likelihood maximization. If
+ \code{NULL}, this will be set to use the "BFGS"
+ optimization method. See \code{\link{optim}} for more
+ details. The default value is:
+ \code{optim.args = list(method = "BFGS",
+ control=list(fnscale = -1,
+ ndeps = rep(log(1.1), length(cov.params.start)+1),
+ abstol=1e-04, maxit=20))}
+ Note that the first parameter is lambda and the others are
+ the covariance parameters in the order they are given in
+ \code{cov.params.start}. Also note that the optimization
+ is performed on a transformed scale (based on the function
+ \code{parTransform} ), and this should be taken into
+ consideration when passing arguments to \code{optim}.
+}
+\item{parameters}{The parameter values at which to evaluate the likelihood.}
+
+ \item{par.grid}{
+A list or data frame with components being parameters for
+ different covariance models. A typical component is "theta"
+ comprising a vector of scale parameters to try. If par.grid
+ is "NULL" then the covariance model is fixed at values that
+ are given in \dots.
+}
+
+\item{parNames}{Names of the parameters to optimize over.}
+
+\item{parTransform}{A function that maps the parameters to a scale
+for optimization or
+performs the inverse map from the transformed scale back to the original values. See below for more details.
+}
+
+
+\item{relative.tolerance}{
+Tolerance used to declare convergence when
+ maximizing likelihood over lambda.
+}
+
+\item{REML}{Currently using REML is not implemented.}
+
+\item{theta}{Range parameter for compact Wendland covariance. (see
+ fastTps)}
+
+\item{verbose}{ If \code{TRUE} print out interesting intermediate results.
+}
+
+\item{weights}{
+Precision ( 1/variance) of each observation
+}
+
+ \item{x}{
+
+Matrix of unique spatial locations (or in print or surface
+ the returned mKrig object.)
+}
+
+ \item{y}{
+Vector or matrix of observations at spatial locations,
+ missing values are not allowed! Or in mKrig.coef a new
+ vector of observations. If y is a matrix the columns are
+ assumed to be independent observation vectors generated
+ from the same covariance and measurement error model.
+}
+
+\item{Z}{
+Linear covariates to be included in fixed part of the
+ model that are distinct from the default low order
+ polynomial in \code{x}
+}
+
+\item{\dots}{Other arguments to pass to the mKrig function. }
+}
+\details{
+The observational model is the same as that described in the
+\code{Krig} function and thus the two primary covariance parameters
+for a stationary model are the nugget standard deviation (sigma) and
+the marginal variance of the process (rho). It is useful to
+reparametrize as rho and lambda = sigma^2/rho. The likelihood can be
+maximized analytically over rho and the parameters in the fixed part
+of the model, and the estimate of rho can be substituted back into the
+likelihood to give an expression that is just a function of lambda and
+the remaining covariance parameters. It is this expression that is
+then maximized numerically over lambda when \code{lambda.profile =
+TRUE}.
+
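As a small numeric check of this reparametrization (the sigma and rho values below are arbitrary, chosen only for illustration):

```r
sigma  <- 0.1            # hypothetical nugget standard deviation
rho    <- 4.0            # hypothetical process marginal variance
lambda <- sigma^2 / rho  # implied smoothing parameter, 0.0025
```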
+Note that \code{fastTpsMLE} is a convenient variant of this more general
+version to use directly with fastTps, and \code{mKrigMLEJoint} is
+similar to \code{mKrigMLEGrid}, except it uses the \code{optim} function
+to optimize over the specified covariance parameters and lambda jointly
+rather than optimizing on a grid. Unlike \code{mKrigMLEJoint},
+\code{fastTpsMLE} returns an mKrig object.
+
+For \code{mKrigMLEJoint} the
+ default transformation of the parameters is set up for a log/exp transformation:
+\preformatted{
+ parTransform <- function(ptemp, inv = FALSE) {
+ if (!inv) {
+ log(ptemp)
+ }
+ else {
+ exp(ptemp)
+ }
+ }
+}
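Since log and exp are inverses, transforming and then back-transforming recovers the original parameters; a quick self-contained check of this default transform:

```r
parTransform <- function(ptemp, inv = FALSE) {
  if (!inv) log(ptemp) else exp(ptemp)
}

p     <- c(lambda = 0.5, theta = 2.0)    # example parameter vector
pOpt  <- parTransform(p)                 # scale used inside optim
pBack <- parTransform(pOpt, inv = TRUE)  # map back to original scale
all.equal(pBack, p)                      # TRUE
```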
+}
+
+\value{
+\strong{\code{mKrigMLEGrid}} returns a list with the components:
+
+\item{summary}{A matrix giving the results for evaluating the
+ likelihood for each covariance model.}
+
+\item{par.grid}{The par.grid argument used.}
+
+\item{cov.args.MLE}{The list of covariance arguments (except for
+lambda) that have the largest likelihood over the list covariance
+models. NOTE: To fit the surface at the largest likelihood among those tried
+\code{ do.call( "mKrig", c(obj$mKrig.args,
+obj$cov.args.MLE,list(lambda=obj$lambda.opt)) )} where \code{obj} is
+the list returned by this function.}
+
+\item{call}{The calling arguments to this function.}
+
+\strong{\code{mKrigMLEJoint}} returns a list with components:
+
+\item{summary}{A vector giving the MLEs and the log likelihood at the maximum}
+
+\item{lnLike.eval}{
+A table containing information on all likelihood evaluations
+performed in the maximization process.
+}
+\item{optimResults}{The list returned from the optim function.}
+
+\item{par.MLE}{The maximum likelihood estimates.}
+
+\item{parTransform}{The transformation of the parameters used in the optimization.}
+
+}
+\references{
+%% ~put references to the literature/web site here ~
+http://cran.r-project.org/web/packages/fields/fields.pdf
+http://www.image.ucar.edu/~nychka/Fields/
+}
+\author{
+%% ~~who you are~~
+Douglas W. Nychka, John Paige
+}
+
+\seealso{
+%% ~~objects to See Also as \code{\link{help}}, ~~~
+\code{\link{mKrig}}
+\code{\link{Krig}}
+\code{\link{stationary.cov}}
+\code{\link{optim}}
+}
+
+\examples{
+#perform joint likelihood maximization over lambda and theta.
+#optim can get a bad answer with poor initial starts.
+data(ozone2)
+x<- ozone2$lon.lat
+y<- ozone2$y[16,]
+obj<- mKrigMLEJoint(x,y,
+ cov.args=list(Covariance="Matern", smoothness=1.0),
+ cov.params.start=list(theta=.2), lambda.start=.1)
+#
+# check lnLikelihood evaluations that were culled from optim
+# these are in obj$lnLike.eval
+# funny ranges are set to avoid very low likelihood values
+
+quilt.plot( log10(cbind(obj$lnLike.eval[,1:2])), obj$lnLike.eval[,5],
+ xlim=c(-1.2,-.40), ylim=c( -1,1), zlim=c( -625, -610))
+ points( log10(obj$pars.MLE[1]), log10(obj$pars.MLE[2]),
+ pch=16, col="grey" )
+
+# some synthetic data with replicates
+ N<- 50
+ set.seed(123)
+ x<- matrix(runif(2*N), N,2)
+ theta<- .2
+ Sigma<- Matern( rdist(x,x)/theta , smoothness=1.0)
+ Sigma.5<- chol( Sigma)
+ sigma<- .1
+ # 250 independent spatial data sets but a common covariance function
+ # -- there is little overhead in
+ # MLE across independent realizations and a good test of code validity.
+ M<-250
+ F.true<- t( Sigma.5)\%*\% matrix( rnorm(N*M), N,M)
+ Y<- F.true + sigma* matrix( rnorm(N*M), N,M)
+
+# find MLE for lambda with grid of ranges
+# and smoothness fixed in Matern
+ par.grid<- list( theta= seq( .1,.35,,8))
+ obj1b<- mKrigMLEGrid( x,Y,
+ cov.args = list(Covariance="Matern", smoothness=1.0),
+ par.grid = par.grid
+ )
+ obj1b$summary # take a look
+# profile over theta
+ plot( par.grid$theta, obj1b$summary[,"lnProfileLike.FULL"],
+ type="b", log="x")
+
+ \dontrun{
+# m=0 is a simple switch to indicate _no_ fixed spatial drift
+# (the default and highly recommended is linear drift, m=2).
+# this results in MLEs that are less biased -- in fact it nails it !
+ obj1a<- mKrigMLEJoint(x,Y,
+ cov.args=list(Covariance="Matern", smoothness=1.0),
+ cov.params.start=list(theta=.5), lambda.start=.5,
+ mKrig.args= list( m=0))
+
+ test.for.zero( obj1a$summary["sigmaMLE"], sigma, tol=.0075)
+ test.for.zero( obj1a$summary["theta"], theta, tol=.05)
+}
+
+\dontrun{
+#perform joint likelihood maximization over lambda, theta, and smoothness.
+#optim can get a bad answer with poor initial guesses.
+obj2<- mKrigMLEJoint(x,Y,
+ cov.args=list(Covariance="Matern", smoothness=1),
+ cov.params.start=list(theta=.2),
+ lambda.start=.1)
+
+#look at lnLikelihood evaluations
+obj2$summary
+#compare to REML
+obj3<- mKrigMLEJoint(x,Y,
+ cov.args=list(Covariance="Matern", smoothness=1),
+ cov.params.start=list(theta=.2),
+ lambda.start=.1, REML=TRUE)
+}
+\dontrun{
+#look at lnLikelihood evaluations
+obj3$summary
+# check convergence of MLE to true fit with no fixed part
+#
+obj4<- mKrigMLEJoint(x,Y,
+ mKrig.args= list( m=0),
+ cov.args=list(Covariance="Matern", smoothness=1),
+ cov.params.start=list(theta=.2),
+ lambda.start=.1, REML=TRUE)
+#look at lnLikelihood evaluations
+obj4$summary
+# nails it!
+}
+}
+% Add one or more standard keywords, see file 'KEYWORDS' in the
+% R documentation directory.
+
+\keyword{spatial}
diff --git a/man/minitri.Rd b/man/minitri.Rd
new file mode 100644
index 0000000..254a185
--- /dev/null
+++ b/man/minitri.Rd
@@ -0,0 +1,55 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka@ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{minitri}
+\alias{minitri}
+\title{
+ Mini triathlon results
+}
+\description{
+Results from a mini triathlon sponsored by Bud Lite, held in
+Cary, NC, June 1990. Times are in minutes for the male 30-34
+group. Man was it hot and humid! (DN)
+
+The events in order were
+swim: (1/2 mile)
+bike: (15 miles)
+run: (4 miles)
+
+This is a data frame. Row names are the place within this age group based
+on total time.
+}
+\arguments{
+\item{swim}{
+swim times
+}
+\item{bike}{
+bike times
+}
+\item{run}{
+run times
+}
+}
+\keyword{datasets}
+% docclass is data
+% Converted by Sd2Rd version 1.21.
diff --git a/man/ozone.Rd b/man/ozone.Rd
new file mode 100644
index 0000000..31e1235
--- /dev/null
+++ b/man/ozone.Rd
@@ -0,0 +1,66 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka@ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{Chicago ozone test data}
+\alias{ChicagoO3}
+\alias{ozone}
+\title{
+ Data set of ozone measurements at 20 Chicago monitoring stations.
+}
+\description{
+This data set used to be named \code{ozone} but was
+changed to avoid conflict with other packages.
+The \code{ChicagoO3} data is a list with
+components x and y. The x
+component gives the locations of the 20
+Chicago monitoring stations, and y is the average
+daily ozone value over the time period 6/3/87-8/30/87.
+These data are used extensively for the test scripts and
+simple examples. The lasting scientific value is probably
+minimal.
+
+}
+
+\format{
+This data set is a list containing the following components:
+
+\describe{
+ \item{lon.lat}{ Longitude-latitude positions of monitoring stations. }
+ \item{x}{An approximate Cartesian set of coordinates for the locations
+where the units are in miles. The origin is in the center of the
+locations. }
+ \item{y}{ Average daily ozone values over 1987 summer. }
+ }
+}
+\source{
+AIRS, the EPA air quality data base.
+}
+\seealso{
+Tps, Krig
+}
+\examples{
+fit<- Tps(ChicagoO3$x, ChicagoO3$y)
+# fitting a surface to ozone measurements.
+surface( fit, type="I")
+}
+\keyword{datasets}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/ozone2.Rd b/man/ozone2.Rd
new file mode 100644
index 0000000..e46cf34
--- /dev/null
+++ b/man/ozone2.Rd
@@ -0,0 +1,71 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{ozone2}
+\alias{ozone2}
+\title{
+ Daily 8-hour ozone averages for sites in the Midwest
+}
+\description{
+The response is 8-hour average (surface) ozone (from 9AM-4PM) measured in
+parts per billion (PPB) for 153 sites in the midwestern US over the period
+June 3, 1987 through August 31, 1987 (89 days). This season of high
+ozone corresponds with a large modeling experiment using the EPA Regional
+Oxidant Model.
+}
+\usage{
+data(ozone2)
+}
+\format{
+The data list has components:
+\describe{
+\item{y}{An 89X153 matrix of ozone values. Rows are days and columns are the
+sites.}
+\item{lon.lat}{Site locations in longitude and latitude as a 153X2 table.}
+\item{chicago.subset}{Logical vector indicating stations that form the smaller
+Chicagoland subset. (See the FIELDS ChicagoO3 data set.)}
+}
+}
+\references{
+Nychka, D., Cox, L., Piegorsch, W. (1998) Case Studies in Environmental
+Statistics. Lecture Notes in Statistics, Springer-Verlag, New York.
+}
+\examples{
+data( ozone2)
+
+# pairwise correlation among all stations
+# ( See cover.design to continue this example)
+cor.mat<- cor( ozone2$y, use="pairwise")
+
+#raw data image for day number 16
+good<- !is.na( ozone2$y[16,])
+out<- as.image( ozone2$y[16,good], x=ozone2$lon.lat[good,])
+image.plot( out)
+}
+\keyword{datasets}
+% docclass is data
+% Converted by Sd2Rd version 1.21.
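The example above uses `cor(ozone2$y, use="pairwise")`, which computes each station-pair correlation from only the days where both stations report. A small Python sketch of the same pairwise-complete rule (the data values here are invented for illustration):

```python
import math

def pairwise_cor(a, b):
    """Correlation of two sequences using only positions where
    both values are present (None marks a missing observation)."""
    pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / math.sqrt(sxx * syy)

# Missing days (None) are dropped pair-by-pair, as with use="pairwise" in R:
r = pairwise_cor([40, None, 55, 60, 80], [38, 50, None, 61, 79])
```

Note that with many missing days, different entries of the correlation matrix are based on different subsets of days, which is the trade-off the pairwise option accepts.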
diff --git a/man/plot.Krig.Rd b/man/plot.Krig.Rd
new file mode 100644
index 0000000..310c24d
--- /dev/null
+++ b/man/plot.Krig.Rd
@@ -0,0 +1,113 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{plot.Krig}
+\alias{plot.Krig}
+\alias{plot.sreg}
+
+\title{
+ Diagnostic and summary plots of a Kriging, spatialProcess or spline object.
+}
+\description{
+Plots a series of four diagnostic plots that summarize the fit.
+}
+\usage{
+\method{plot}{Krig}(x, digits=4, which= 1:4,...)
+\method{plot}{sreg}(x, digits = 4, which = 1:4, ...)
+
+
+}
+\arguments{
+\item{x}{ A Krig or an sreg object}
+
+\item{digits}{
+Number of significant digits for the RMSE label.
+}
+\item{which}{
+A vector specifying by number which of the four plots to draw.
+1:4 plots all four.
+}
+\item{\dots}{
+Optional graphics arguments to pass to each plot.
+}
+}
+\details{
+ This function creates four summary plots of the Krig or sreg object. The
+default is to put these on separate pages. However if the screen is
+already divided in some other fashion the plots will just be added
+according to that scheme. This option is useful to compare to compare
+several different model fits.
+
+The first is a scatterplot of predicted value against observed.
+
+The second plot shows "standardized" residuals against predicted value.
+Here we mean that the residuals are divided by the GCV estimate for sigma
+and multiplied by the square root of any weights that have been specified.
+In the case of a "correlation model" the residuals are also divided by the
+marginal standard deviation from this model.
+
+The third plot shows the values of the GCV function against the effective
+degrees of freedom. When there are replicate points several versions of
+the GCV function may be plotted. The GCV function is with respect to the
+standardized data if a correlation model is specified. A vertical line
+indicates the minimum found.
+
+For \code{Krig} and \code{sreg} objects the fourth plot is a histogram of the standardized residuals.
+For sreg, if multiple lambdas are given, boxplots of the
+residuals for each fit are plotted.
+
+For a \code{spatialProcess} object the fourth plot is the profile likelihood for the
+theta parameter. Points are the actual evaluated log likelihoods and the dashed line is
+just a spline interpolation to help with visualization.
+
+}
+\seealso{
+Krig, spatialProcess, summary.Krig, Tps, set.panel
+}
+\examples{
+
+data( ozone2)
+x<- ozone2$lon.lat
+y<- ozone2$y[16,]
+fit1<-Krig(x,y, theta=200)
+# fitting a surface to ozone
+# measurements
+set.panel( 2,2)
+plot(fit1)
+
+fit2<-spatialProcess(x,y)
+# fitting a spatial process model to ozone
+# measurements
+# Although an example does not make too much sense for only 20 observations!
+set.panel( 2,2)
+plot(fit2)
+
+# fit rat data
+fit3<-sreg(rat.diet$t,rat.diet$con)
+set.panel(2,2)
+plot(fit3)
+
+set.panel(1,1) # reset graphics window.
+
+}
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
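The residual standardization described in the plot.Krig details above (divide by the GCV estimate of sigma, multiply by the square root of any specified weights) can be sketched directly. The sigma value and weights below are illustrative stand-ins, not quantities computed by Krig:

```python
import math

def standardized_residuals(y, fitted, weights, sigma_gcv):
    """r_i = (y_i - yhat_i) * sqrt(w_i) / sigma_gcv,
    as used in the second diagnostic plot."""
    return [(yi - fi) * math.sqrt(wi) / sigma_gcv
            for yi, fi, wi in zip(y, fitted, weights)]

# three observations, made-up fit and weights, sigma_gcv = 0.5
r = standardized_residuals([10.0, 12.0, 9.0], [9.5, 12.5, 9.0],
                           [1.0, 4.0, 1.0], 0.5)
```

Under a well-specified model these standardized residuals should look roughly like unit-variance noise, which is what the histogram in the fourth panel checks informally.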
diff --git a/man/plot.surface.Rd b/man/plot.surface.Rd
new file mode 100644
index 0000000..245ba36
--- /dev/null
+++ b/man/plot.surface.Rd
@@ -0,0 +1,105 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{plot.surface}
+\alias{plot.surface}
+\title{
+ Plots a surface
+}
+\description{
+Plots a surface object in several different ways to give 3-d
+information, e.g. contour plots and perspective plots.
+}
+\usage{
+\method{plot}{surface}(x, main = NULL, type = "C", zlab = NULL, xlab = NULL,
+ ylab = NULL, levels = NULL, zlim = NULL, graphics.reset = NULL,
+ labcex = 0.6, add.legend=TRUE, ...)
+}
+\arguments{
+\item{x}{
+A surface object. At the minimum a list with components x,y and z
+in the same form as the input list for the standard contour, persp
+or image functions. This can also be an object from predictSurface.
+}
+\item{main}{
+Title for plot.
+
+}
+\item{type}{
+type="p" for a perspective/drape plot (see drape.plot),
+type="I" for an image plot with a legend
+strip (see image.plot), type="c" draws a contour plot, type="C" is the "I" option but with contour lines
+added, and type="b" gives both "p" and "C" as a 2X1 panel. }
+\item{zlab}{
+z-axis label
+}
+\item{xlab}{
+x-axis label
+}
+\item{ylab}{
+y-axis label
+}
+\item{levels}{
+Vector of levels to be passed to contour function.
+}
+\item{graphics.reset}{
+Reset to original graphics parameters after function plotting.
+Default is to reset if type ="b" but not for the single plot options.
+}
+\item{zlim}{
+Sets z limits on perspective plot. }
+\item{labcex}{
+Label sizes for axis labeling etc.
+}
+\item{add.legend}{ If TRUE adds a legend to the draped perspective plot}
+\item{\dots}{
+Other graphical parameters that are passed along to either drape.persp
+or image.plot
+}
+
+}
+\seealso{
+surface, predictSurface, as.surface, drape.plot, image.plot
+}
+\examples{
+x<- seq( -2,2,,80)
+y<- seq( -2,2,,80)
+# a lazy way to create some test image
+z<- outer( x,y, "+")
+
+# create basic image/surface object
+obj<- list(x=x, y=y,z=z)
+
+# basic contour plot
+# note how graphical parameters appropriate to contour are passed
+plot.surface( obj, type="c", col="red")
+
+# using a fields function to fit a surface and evaluate as surface object.
+fit<- Tps( BD[,1:4], BD$lnya) # fit surface to data
+# surface of variables 2 and 3 holding 1 and 4 fixed at their median levels
+ out.p<-predictSurface(fit, xy=c(2,3))
+
+ plot.surface(out.p) # surface plot
+
+}
+\keyword{hplot}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/poly.image.Rd b/man/poly.image.Rd
new file mode 100644
index 0000000..5efd5bf
--- /dev/null
+++ b/man/poly.image.Rd
@@ -0,0 +1,130 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{poly.image}
+\alias{poly.image}
+\alias{poly.image.regrid}
+\title{Image plot for cells that are irregular quadrilaterals.}
+\description{
+ Creates an image using polygon filling based on a grid of irregular
+quadrilaterals. This function is useful for a regular grid that has
+been transformed to another nonlinear or rotated coordinate system. This
+situation comes up in lon-lat grids created under different map projections.
+Unlike the usual image format this function requires the grid to be
+specified as two matrices x and y that give the grid x and y coordinates
+explicitly for every grid point.
+}
+\usage{
+poly.image(x, y, z, col = tim.colors(64), breaks, transparent.color = "white",
+ midpoint = FALSE, zlim = range(z, na.rm = TRUE),
+ xlim = range(x), ylim = range(y), add = FALSE, border=NA,lwd.poly=1,...)
+
+poly.image.regrid(x)
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+ \item{x}{A matrix of the x locations of the grid. }
+ \item{y}{A matrix of the y locations of the grid. }
+ \item{z}{Values for each grid cell. Can either be the value at the
+grid points or interpreted as the midpoint of the grid cell. }
+ \item{col}{ Color scale for plotting. }
+ \item{breaks}{Numerical breaks to match to the colors. If missing breaks are
+ equally spaced on the range \code{zlim}.}
+ \item{transparent.color}{ Color to plot cells that are outside the
+range specified in the function call. }
+ \item{midpoint}{
+ Only relevant if the dimensions of x,y, and z are the same. If TRUE
+the z values will be averaged and then used as the cell midpoints. If
+FALSE the x/y grid will be expanded and shifted to represent grid cell
+corners. (See poly.image.regrid.) }
+
+ \item{zlim}{ Plotting limits for z. }
+ \item{xlim}{Plotting limits for x. }
+ \item{ylim}{Plotting limits for y.}
+ \item{add}{ If TRUE will add image onto current plot. }
+ \item{border}{Color of the edges of the quadrilaterals, the default is
+no color.}
+ \item{lwd.poly}{Line width for the mesh surface. i.e. the outlines of the quadrilateral
+facets. This might have to be set smaller than one if rounded corners on the facets are visible. }
+ \item{\dots}{ If add is FALSE, additional graphical arguments that
+ will be supplied to the plot function. }
+}
+\details{
+ This function is straightforward except in the case when the dimensions
+of x, y, and z are equal. In this case the relationship of the values to
+the grid cells is ambiguous and the switch midpoint gives two possible
+solutions. The z values at 4 neighboring grid cells can be averaged to
+estimate a new value interpreted to be at the center of the grid. This
+is done when midpoint is TRUE. Alternatively the full set of z values
+can be retained by redefining the grid. This is accomplished by finding
+the midpoints of x and y grid points and adding two outside rows and
+columns to complete the grid. The result is a new grid that is
+(M+1)X(N+1) if z is MXN. These new grid points define cells that
+contain each of the original grid points as their midpoints. Of course
+the advantage of this alternative is that the values of z are preserved
+in the image plot, a feature that may be important for some uses.
+
+The function image.plot uses this function internally when image
+information is passed in this format and can add a legend. In most cases
+just use image.plot.
+
+The function \code{poly.image.regrid} does a simple averaging and
+extrapolation of the grid locations to shift from midpoints to
+corners. In the interior, grid corners are found as the average of the
+4 closest midpoints. For the edges the corners are extrapolated
+based on the separation of neighboring grid cells.
+}
+\author{Doug Nychka }
+\seealso{image.plot}
+\examples{
+data(RCMexample)
+set.panel( 1,2)
+par(pty="s")
+# plot with grid modified
+poly.image( RCMexample$x, RCMexample$y, RCMexample$z[,,1])
+
+# use midpoints of z
+poly.image( RCMexample$x, RCMexample$y, RCMexample$z[,,1],midpoint=TRUE)
+
+ set.panel()
+# an example with quantile breaks
+
+ brk<- quantile( RCMexample$z[,,1], c( 0, .9,.95,.99,1.0) )
+ poly.image( RCMexample$x, RCMexample$y, RCMexample$z[,,1], breaks=brk, col=
+ rainbow(4))
+
+
+# images are very similar.
+ set.panel()
+# Regridding of x and y
+ l1<- poly.image.regrid( RCMexample$x)
+ l2<- poly.image.regrid( RCMexample$y)
+
+# test that this works
+ i<- 1:10
+ plot( l1[i,i], l2[i,i])
+ points( RCMexample$x[i,i], RCMexample$y[i,i],col="red")
+
+
+
+ }
+% Add one or more standard keywords, see file 'KEYWORDS' in the
+% R documentation directory.
+\keyword{spatial}
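The midpoint-to-corner regridding described in the poly.image details (interior corners averaged from neighboring midpoints, edges extrapolated by half the neighboring separation, giving an (M+1)X(N+1) grid from MXN midpoints) can be sketched for a single 1-D coordinate; `poly.image.regrid` applies the same idea to the full matrices of x and y locations:

```python
def midpoints_to_corners(m):
    """Given M increasing cell midpoints, return M+1 cell corners:
    interior corners are averages of adjacent midpoints; the two end
    corners are extrapolated by half the neighboring separation."""
    interior = [(a + b) / 2.0 for a, b in zip(m, m[1:])]
    left = m[0] - (m[1] - m[0]) / 2.0
    right = m[-1] + (m[-1] - m[-2]) / 2.0
    return [left] + interior + [right]

corners = midpoints_to_corners([1.0, 2.0, 3.0, 4.0])
```

Each original midpoint now sits at the center of the cell defined by its two surrounding corners, which is why the z values are preserved exactly in the image plot.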
diff --git a/man/predict.Krig.Rd b/man/predict.Krig.Rd
new file mode 100644
index 0000000..c679c61
--- /dev/null
+++ b/man/predict.Krig.Rd
@@ -0,0 +1,189 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{predict.Krig}
+\alias{predict.Krig}
+\alias{predict.Tps}
+\alias{predictDerivative.Krig}
+\alias{predict.fastTps}
+\title{
+ Evaluation of Krig spatial process estimate.
+}
+\description{
+Provides predictions from the Krig spatial process estimate at arbitrary
+points, new data (Y) or other values of the smoothing parameter (lambda)
+including a GCV estimate.
+}
+\usage{
+\method{predict}{Krig}(
+object, x = NULL, Z = NULL, drop.Z = FALSE, just.fixed
+ = FALSE, lambda = NA, df = NA, model = NA,
+ eval.correlation.model = TRUE, y = NULL, yM = NULL,
+ verbose = FALSE, ...)
+predictDerivative.Krig(object, x = NULL, verbose = FALSE,...)
+
+\method{predict}{Tps}(object, ... )
+
+\method{predict}{fastTps}(object, xnew = NULL, grid.list = NULL, ynew = NULL,
+ derivative = 0, Z = NULL, drop.Z = FALSE, just.fixed =
+ FALSE, xy = c(1, 2), ...)
+
+}
+\arguments{
+
+\item{derivative}{The degree of the derivative to be evaluated. Default is
+0 (evaluate the function itself); 1 is supported by some covariance functions.
+Higher derivatives are not supported in this version or for mKrig.}
+
+\item{df}{
+Effective degrees of freedom for the predicted surface. This can be used
+in place of lambda ( see the function Krig.df.to.lambda)
+}
+
+\item{eval.correlation.model}{
+If TRUE (the default) the predicted function is multiplied by the marginal
+sd's
+and the mean function is added. This is usually what one wants. If FALSE the
+predicted surface is returned in the standardized scale. The main use of this
+option is a call from Krig to find MLE's of rho and sigma2.
+}
+
+\item{grid.list}{A \code{grid.list} specifying a grid of locations to
+evaluate the fitted surface.}
+
+
+\item{just.fixed}{ If TRUE only the fixed part of the model is evaluated.}
+
+
+\item{lambda}{ Smoothing parameter. If omitted, out\$lambda will be
+used. (See also the df and gcv arguments.) }
+\item{model}{ Generic
+argument that may be used to pass a different lambda. }
+\item{object}{ Fit object from the Krig, Tps, mKrig, or fastTps
+functions. }
+
+ \item{verbose}{ Print out all kinds of intermediate stuff for
+debugging }
+
+\item{xy}{The column positions that locate the x and y variables for
+evaluating on a grid. This is mainly useful if the surface has more
+than 2 dimensions.}
+
+\item{y}{ Evaluate the estimate using the new data vector y (in the
+same order as the old data). This is equivalent to recomputing the
+Krig object with this new data but is more efficient because many
+pieces can be reused. Note that the x values are assumed to be the
+same. }
+\item{x}{ Matrix of x values on which to evaluate the kriging
+surface. If omitted, the data x values, i.e. out\$x, will be used. }
+
+\item{xnew}{Same as x above.}
+
+\item{ynew}{Same as y above.}
+
+\item{yM}{ If not NULL evaluate the
+estimate using this vector as the replicate mean data. That is, assume
+the full data has been collapsed into replicate means in the same
+order as xM. The replicate weights are assumed to be the same as the
+original data. (weightsM) }
+
+
+\item{Z}{ Vector/matrix of additional covariates to be included in the
+fixed part of the spatial model. }
+\item{drop.Z}{ If TRUE only the spatial fixed
+part of the model is evaluated, i.e. the Z covariates are not used. }
+
+\item{\dots}{Other arguments passed to covariance function. In the case of
+\code{fastTps} these are the same arguments as \code{predict.mKrig}.
+This argument is usually not needed.
+}
+
+}
+\value{
+Vector of predicted responses or a matrix of the partial derivatives.
+}
+\details{
+ The main goal in this function is to reuse the Krig object to rapidly
+evaluate different estimates. Thus there is flexibility in changing the
+value of lambda and also the independent data without having to
+recompute the matrices associated with the Krig object. The reason this
+is possible is that most of the calculations depend on the observed
+locations, not on lambda or the observed data. Note that the version for
+evaluating partial derivatives does not provide the same flexibility as
+\code{predict.Krig}: it makes some assumptions about the null model
+(a low order polynomial) and cannot handle the correlation model form.
+
+}
+\seealso{
+Krig, predictSurface, gcv.Krig
+}
+
+\examples{
+ Krig(ChicagoO3$x,ChicagoO3$y, theta=50) ->fit
+ predict( fit) # gives predicted values at data points; should agree with fitted.values
+ # in fit object
+
+# predict at the coordinate (-5,10)
+ x0<- cbind( -5,10) # has to be a 1X2 matrix
+ predict( fit,x= x0)
+
+# redoing predictions at data locations:
+ predict( fit, x=ChicagoO3$x)
+
+# only the fixed part of the model
+ predict( fit, just.fixed=TRUE)
+
+# evaluating estimate at a grid of points
+ grid<- make.surface.grid( list( seq( -40,40,,15), seq( -40,40,,15)))
+ look<- predict(fit,grid) # evaluate on a grid of points
+
+# some useful graphing functions for these gridded predicted values
+ out.p<- as.surface( grid, look) # reformat into $x $y $z image-type object
+ contour( out.p)
+
+# see also the functions predictSurface and surface
+# for functions that combine these steps
+
+
+# refit with 15 degrees of freedom in surface
+ look<- predict(fit,grid, df=15)
+# refit with random data
+ look<- predict( fit, grid, y= rnorm( 20))
+
+
+# finding partial derivatives of the estimate
+#
+# find the partial derivatives at observation locations
+# returned object is a two column matrix.
+# this does not make sense for the exponential covariance
+# but can illustrate this with a thin plate spline with
+# a high enough order ( i.e. need m=3 or greater)
+#
+ data(ozone2)
+# the 16th day of this ozone spatial dataset
+ fit0<- Tps( ozone2$lon.lat, ozone2$y[16,], m=3)
+ look1<- predictDerivative.Krig( fit0)
+# for extra credit compare this to
+ look2<- predictDerivative.Krig( fit0, x=ozone2$lon.lat)
+# (why are there more values in look2)
+
+
+}
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
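The predict.Krig details above note that most of the calculations depend only on the observed locations, so the fit can be re-evaluated for a new lambda (or new y data) without redoing the expensive work. A schematic numpy sketch of that idea for a ridge-type smoother: factor the location-only covariance matrix once, then any lambda is a cheap diagonal rescaling. This is only an illustration of the principle, not the actual Krig decomposition, and the covariance and locations are invented:

```python
import numpy as np

def factor_once(K):
    """Eigendecompose the (location-only) covariance matrix; done once."""
    d, U = np.linalg.eigh(K)
    return d, U

def fitted_values(d, U, y, lam):
    """Fitted values y_hat = K (K + lam I)^{-1} y, cheap for any lambda."""
    z = U.T @ y
    return U @ (d / (d + lam) * z)

# toy 1-D locations and exponential covariance (illustrative)
x = np.linspace(0, 1, 6)
K = np.exp(-np.abs(x[:, None] - x[None, :]))
d, U = factor_once(K)
y = np.sin(2 * np.pi * x)
yhat_small = fitted_values(d, U, y, 1e-8)  # lambda -> 0: interpolates the data
yhat_big   = fitted_values(d, U, y, 1e6)   # large lambda: shrinks toward zero
```

Re-running `fitted_values` for a grid of lambda values costs only O(n^2) each after the one-time O(n^3) factorization, which is the efficiency the help text describes.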
diff --git a/man/predictSE.Krig.Rd b/man/predictSE.Krig.Rd
new file mode 100644
index 0000000..4779223
--- /dev/null
+++ b/man/predictSE.Krig.Rd
@@ -0,0 +1,139 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{predictSE}
+\alias{predictSE}
+\alias{predictSE.Krig}
+\alias{predictSE.mKrig}
+\alias{predictSEUsingKrigA}
+\title{
+ Standard errors of predictions for Krig spatial process estimate
+}
+\description{
+Finds the standard error ( or covariance) of prediction based on a linear
+combination of
+the observed data. The linear combination is usually the "Best Linear
+Unbiased Estimate" (BLUE) found from the Kriging equations.
+This statistical computation is done under the assumption that the
+covariance function is known.
+}
+\usage{
+predictSE(object, ...)
+\method{predictSE}{Krig}(object, x = NULL, cov = FALSE, verbose = FALSE,...)
+\method{predictSE}{mKrig}(object, xnew = NULL, Z = NULL, verbose = FALSE, drop.Z
+ = FALSE, ...)
+ }
+\arguments{
+\item{drop.Z}{If FALSE find standard error without including the additional spatial covariates
+described by \code{Z}. If TRUE find full standard error with spatial covariates if they are part of the model.}
+\item{object}{ A fitted object that can be used to find prediction standard errors. This is usually from fitting a spatial model to data. e.g. a Krig or mKrig object.
+}
+\item{xnew}{
+Points at which to compute the prediction standard error or the prediction
+cross covariance matrix.
+}
+\item{x}{
+Same as \code{xnew} -- points at which to compute the prediction standard error or the prediction
+cross covariance matrix.
+}
+\item{cov}{
+If TRUE the full covariance matrix for the predicted values is returned.
+Make sure this will not be big if this option is used. ( e.g. 50X50 grid
+will return a matrix that is 2500X2500!) If FALSE just the marginal
+standard deviations of the predicted values are returned. Default is
+FALSE -- of course.
+}
+
+\item{verbose}{If TRUE will print out various information for debugging.}
+
+\item{\dots}{
+Additional arguments passed to the predictSE function.
+}
+\item{Z}{Additional matrix of spatial covariates used for prediction. These are used to
+determine the additional covariance contributed by the fixed part of the model.}
+
+}
+
+\value{
+A vector of standard errors for the predicted values of the Kriging fit.
+}
+\details{
+The predictions are represented as a linear combination of the dependent
+variable, Y. Call this LY. Based on this representation the conditional
+variance is the same as the expected value of (P(x) + Z(x) - LY)**2,
+where
+P(x)+Z(x) is the value of the surface at x and LY is the linear
+combination that estimates this point. Finding this expected value is
+straightforward given the unbiasedness of LY for P(x) and the covariance
+for Z and Y.
+
+In these calculations it is assumed that the covariance parameters are fixed.
+This is an approximation since in most cases they have been estimated from the
+data. It should also be noted that if one assumes a Gaussian field and known
+parameters in the covariance, the usual Kriging estimate is the
+conditional mean of the field given the data. This function finds the
+conditional standard deviations (or full covariance matrix) of the
+fields given the data.
+
+ There are two useful extensions supported by this function: adding the
+variance to the estimate of the spatial mean if this is a correlation
+model (see the help file for Krig) and calculating the variances under
+covariance misspecification. The function \code{predictSE.KrigA} uses
+the smoother matrix ( A(lambda) ) to find the standard errors or
+covariances directly from the linear combination of the spatial
+predictor. Currently this is also the calculation in
+\code{predictSE.Krig}, although a shortcut is used in
+\code{predictSE.mKrig} for mKrig objects.
+
+}
+\seealso{
+Krig, predict.Krig, predictSurfaceSE
+}
+\examples{
+#
+# Note: in these examples predictSE will default to predictSE.Krig using
+# a Krig object
+
+ fit<- Krig(ChicagoO3$x,ChicagoO3$y,cov.function="Exp.cov", theta=10) # Krig fit
+ predictSE.Krig(fit) # std errors of predictions at obs.
+
+# make a grid of X's
+ xg<-make.surface.grid(
+ list(East.West=seq(-27,34,,20),North.South=seq(-20,35,,20)))
+ out<- predictSE(fit,xg) # std errors of predictions
+
+# at the grid points; out is a vector of length 400
+#reshape the grid points into a 20X20 matrix etc.
+
+ out.p<-as.surface( xg, out)
+ surface( out.p, type="C")
+
+# this is equivalent to the single step function
+# (but the default is not to extrapolate beyond the data)
+# out<- predictSurfaceSE( fit)
+# image.plot( out)
+
+
+
+}
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
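The conditional variance described in the predictSE details, E(P(x) + Z(x) - LY)**2 with L the BLUE weights, reduces in the simplest case (zero mean, known stationary covariance, no nugget or fixed part) to the familiar kriging variance C(0) - c' C^{-1} c. A minimal numpy sketch under those simplifying assumptions; the exponential covariance and 1-D locations are invented for illustration:

```python
import numpy as np

def simple_kriging_se(obs_x, x0, cov):
    """Prediction standard error at x0 given observations at obs_x,
    for a known stationary covariance function cov(distance):
    se^2 = C(0) - c' C^{-1} c."""
    C = cov(np.abs(obs_x[:, None] - obs_x[None, :]))  # obs covariance matrix
    c = cov(np.abs(obs_x - x0))                       # cross covariances
    w = np.linalg.solve(C, c)                         # BLUE weights L
    var = cov(0.0) - c @ w
    return np.sqrt(max(var, 0.0))

cov = lambda d: np.exp(-d)          # exponential covariance (illustrative)
obs = np.array([0.0, 1.0, 2.0])
se_at_obs = simple_kriging_se(obs, 1.0, cov)   # zero at an observed location
se_far    = simple_kriging_se(obs, 10.0, cov)  # approaches sqrt(C(0)) far away
```

The two evaluations illustrate the qualitative behavior the help text implies: the standard error vanishes at observed locations and grows toward the marginal standard deviation far from the data.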
diff --git a/man/predictSurface.Rd b/man/predictSurface.Rd
new file mode 100644
index 0000000..c146c6a
--- /dev/null
+++ b/man/predictSurface.Rd
@@ -0,0 +1,203 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{predictSurface}
+\alias{predictSurface}
+\alias{predictSurface.default}
+\alias{predictSurface.mKrig}
+\alias{predictSurface.Krig}
+\alias{predictSurface.fastTps}
+\alias{predictSurfaceSE}
+\alias{predictSurfaceSE.default}
+\alias{predict.surface}
+
+\title{
+ Evaluates a fitted function or the prediction error as a surface that is suitable for plotting with
+the image, persp, or contour functions.
+}
+\description{
+Evaluates a fitted model or the prediction error on a 2-D grid keeping any other variables constant.
+The resulting object is suitable for use with functions for viewing 3-d
+surfaces.
+}
+\usage{
+\method{predictSurface}{default}(object, grid.list = NULL,
+ extrap = FALSE, chull.mask = NA, nx = 80, ny = 80,
+ xy = c(1,2), verbose = FALSE, ...)
+
+\method{predictSurface}{fastTps}(object, grid.list = NULL,
+ extrap = FALSE, chull.mask = NA, nx = 80, ny = 80,
+ xy = c(1,2), verbose = FALSE, ...)
+
+\method{predictSurface}{Krig}(object, grid.list = NULL, extrap = FALSE, chull.mask = NA,
+nx = 80, ny = 80, xy = c(1, 2), verbose = FALSE, ZGrid = NULL,
+ drop.Z = FALSE, just.fixed=FALSE, ...)
+
+\method{predictSurface}{mKrig}(object, ...)
+
+\method{predictSurfaceSE}{default}( object, grid.list = NULL, extrap =
+FALSE, chull.mask = NA, nx = 80, ny = 80, xy = c(1,2), verbose =
+FALSE, ...)
+
+\method{predict}{surface}(object,...)
+}
+
+\arguments{
+\item{object}{
+An object from fitting a function to data. In fields this is usually a
+Krig, mKrig, or fastTps object.
+}
+\item{grid.list}{
+A list with as many components as variables describing the surface.
+All components should have a single value except the two that give the
+grid points for evaluation. If the matrix or data frame has column names,
+these must appear in the grid list. See the grid.list help file for more
+details. If this is omitted and the fit just depends on two variables the
+grid will be made from the ranges of the observed variables.
+(See the function \code{fields.x.to.grid}.)
+
+}
+\item{extrap}{
+If TRUE, extrapolation beyond the range of the data is allowed. If \code{FALSE} (the
+default) the predictions will be restricted to the convex hull of the observed
+data or the convex hull defined from the points from the argument chull.mask.
+This function may be slightly faster if this logical is set to
+\code{TRUE}, avoiding the check of grid points for membership in the
+convex hull. For more complicated masking a low level creation of a bounding
+polygon and testing for membership with \code{in.poly} may be useful.
+
+}
+
+\item{chull.mask}{
+Whether to restrict the fitted surface to a convex hull; NAs
+are assigned to values outside the
+convex hull. chull.mask should be a sequence of points defining a convex
+hull. The default is to form the convex hull from the observations if this
+argument is missing (and extrap is FALSE).
+}
+
+\item{nx}{
+Number of grid points in X axis. }
+\item{ny}{
+Number of grid points in Y axis. }
+
+\item{xy}{ A two element vector giving the positions for the "X" and "Y"
+variables for the surface. The positions refer to the columns of the x
+matrix used to define the multidimensional surface. This argument is
+provided in lieu of generating the grid list. If a 4 dimensional surface
+is fit to data then \code{ xy= c(2,4)} will evaluate a surface using the
+second and fourth variables with variables 1 and 3 fixed at their median
+values. NOTE: this argument is ignored if a grid.list argument is
+passed. }
+\item{drop.Z}{If TRUE the fixed part of the model depending on covariates is omitted.}
+\item{just.fixed}{If TRUE the nonparametric surface is omitted.}
+ \item{\dots}{
+Any other arguments to pass to the predict function associated with the fit
+object.
+Some of the usual arguments for several of the fields fitted objects include:
+\describe{
+
+\item{ynew}{ New values of y used to reestimate the surface.}
+
+\item{Z}{A matrix of covariates for the fixed part of model.}
+}
+}
+
+\item{ZGrid}{An array or list form of covariates to use for
+ prediction. This must match the
+\code{grid.list} argument. e.g. ZGrid and grid.list describe the same
+ grid.
+If ZGrid is an array then the first two indices are the x and y
+locations in the grid. The third index, if present, indexes the
+covariates. E.g. for evaluation on a 10X15 grid with 2 covariates,
+\code{ dim( ZGrid) == c(10,15, 2)}.
+If ZGrid is a list then the components x and y should match those of
+grid.list and the z component follows the shape described above for the
+array case.
+}
+
+ \item{verbose}{If TRUE prints out some intermediate results for debugging.}
+
+}
+
+\value{
+The usual list components for making contour and perspective plots
+(x,y,z) along with labels for the x and y variables. For
+\code{predictSurface.derivative} the component \code{z} is a three
+dimensional array with dimensions \code{nx}, \code{ny}, and 2.
+ }
+\details{ This
+function creates the right grid using the grid.list information or the
+attribute in xg, calls the predict function for the object at these
+points, adding any extra arguments passed in the ... section,
+and then reforms the results as a surface object (as.surface). To
+determine which parts of the prediction grid are in the convex hull
+of the data the function \code{in.poly} is used. The argument
+inflation in this function is used to include a small margin around
+the outside of the polygon so that points on the convex hull are
+included. This potentially confusing modification is to prevent
+excluding grid points that fall exactly on the ranges of the
+data. Also note that as written there is no computational savings for
+evaluating only the convex subset compared to the full grid.
+
+\code{predictSurface.fastTps} is a specific version ( m=2, and k=2)
+that can be much more efficient because it takes advantage of a low
+level FORTRAN call to evaluate the Wendland covariance function. Use
+\code{predictSurface} or \code{predict} for other choices of m and k.
+
+\code{predictSurface.Krig} is designed to also include covariates for the fixed part of the model in the form of grids. Because the output and the model are similar, \code{predictSurface.mKrig} just uses the Krig method.
+
+NOTE: \code{predict.surface} has been deprecated and just prints out
+a warning when called.
+
+ }
+\seealso{
+Tps, Krig, predict, grid.list, make.surface.grid, as.surface, surface,
+in.poly
+}
+\examples{
+fit<- Tps( BD[,1:4], BD$lnya) # fit surface to data
+
+# evaluate fitted surface for first two
+# variables holding other two fixed at median values
+
+out.p<- predictSurface(fit)
+surface(out.p, type="C")
+
+#
+# plot surface for second and fourth variables
+# on specific grid.
+
+glist<- list( KCL=29.77, MgCl2= seq(3,7,,25), KPO4=32.13,
+ dNTP=seq( 250,1500,,25))
+
+out.p<- predictSurface(fit, glist)
+surface(out.p, type="C")
+
+out.p<- predictSurfaceSE(fit, glist)
+surface(out.p, type="C")
+
+}
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
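The sequence described in the Details section (build the grid, call predict, reform with as.surface) can be sketched in a few lines of R. This is an illustrative outline only, not the internal fields implementation, and it omits the convex hull masking step done with `in.poly`:

```r
# Minimal sketch of what predictSurface does internally (illustrative only):
library(fields)
fit  <- Tps(ChicagoO3$x, ChicagoO3$y)              # a 2-d thin plate spline fit
xg   <- fields.x.to.grid(fit$x, nx = 40, ny = 40)  # grid list from data ranges
grid <- make.surface.grid(xg)                      # matrix of grid locations
zhat <- predict(fit, grid)                         # evaluate the fitted surface
out  <- as.surface(grid, zhat)                     # reform as an (x, y, z) surface
surface(out)                                       # same kind of plot as the example
```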
diff --git a/man/print.Krig.Rd b/man/print.Krig.Rd
new file mode 100644
index 0000000..b0a333f
--- /dev/null
+++ b/man/print.Krig.Rd
@@ -0,0 +1,55 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{print.Krig}
+\alias{print.Krig}
+\title{
+ Print kriging fit results.
+}
+\description{
+Prints the results from fitting a spatial process estimate (Krig).
+}
+\usage{
+\method{print}{Krig}(x,digits=4,...)
+}
+\arguments{
+\item{x}{
+Object from Krig function.
+}
+\item{digits}{
+Number of significant digits in printed output. Default is 4.
+}
+\item{\dots}{ Other arguments to print.}
+}
+\value{
+Selected summary results from Krig.
+}
+\seealso{
+print, summary.Krig, Krig
+}
+\examples{
+fit<- Krig(ChicagoO3$x,ChicagoO3$y, theta=100)
+print(fit) # print the summary
+fit # this will work too
+}
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/pushpin.Rd b/man/pushpin.Rd
new file mode 100644
index 0000000..cd048f5
--- /dev/null
+++ b/man/pushpin.Rd
@@ -0,0 +1,72 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{pushpin}
+\alias{pushpin}
+\title{ Adds a "push pin" to an existing 3-d plot}
+\description{Adds to an existing 3-d perspective plot
+a push pin to locate a specific point.}
+\usage{
+pushpin( x,y,z,p.out, height=.05,col="black",text=NULL,adj=-.1,cex=1.0,...)
+}
+
+\arguments{
+ \item{x}{x location}
+ \item{y}{y location}
+ \item{z}{z location}
+ \item{p.out}{Projection information returned by persp}
+ \item{height}{Height of pin in device coordinates (default is about
+ 5\% of the vertical distance). }
+ \item{col}{Color of pin head.}
+ \item{text}{Optional text to go next to pin head.}
+ \item{adj}{Position of text relative to pin head.}
+ \item{cex}{Character size for pin head and/or text}
+ \item{\dots}{Additional graphics arguments that are passed to the text
+ function.}
+}
+
+\details{
+See help(text) for the conventions on
+the \code{adj} argument and other options for placing text.
+
+ }
+\author{Doug Nychka}
+
+\seealso{drape.plot,persp}
+
+\examples{
+# Dr. R's favorite New Zealand Volcano!
+ data( volcano)
+ M<- nrow( volcano)
+ N<- ncol( volcano)
+ x<- seq( 0,1,,M)
+ y<- seq( 0,1,,N)
+
+ drape.plot( x,y,volcano, col=terrain.colors(128))-> pm
+
+ max( volcano)-> zsummit
+ xsummit<- x[ row( volcano)[volcano==zsummit]]
+ ysummit<- y[ col( volcano)[volcano==zsummit]]
+
+pushpin( xsummit,ysummit,zsummit,pm, text="Summit")
+
+}
+
+\keyword{hplot}
diff --git a/man/qsreg.Rd b/man/qsreg.Rd
new file mode 100644
index 0000000..fad4bf8
--- /dev/null
+++ b/man/qsreg.Rd
@@ -0,0 +1,181 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{qsreg}
+\alias{qsreg}
+\title{
+ Quantile or Robust spline regression
+}
+\description{
+Uses a penalized likelihood approach to estimate the conditional
+quantile function for regression data. This method is only implemented
+for univariate data. For the pairs (X,Y) the
+conditional quantile, f(x), satisfies P( Y<f(x)| X=x) = alpha. This estimate
+is useful for determining the envelope of a scatterplot or assessing
+departures from a constant variance with respect to the independent
+variable.
+}
+\usage{
+qsreg(x, y, lam = NA, maxit = 50, maxit.cv = 10, tol =
+ 1e-07, offset = 0, sc = sqrt(var(y)) * 1e-05, alpha =
+ 0.5, wt = rep(1, length(x)), cost = 1, nstep.cv = 80,
+ hmin = NA, hmax = NA, trmin = 2 * 1.05, trmax = 0.95
+ * length(unique(x)))
+}
+\arguments{
+\item{x}{
+Vector of the independent variable in y = f(x) + e}
+\item{y}{
+Vector of the dependent variable}
+\item{lam}{
+Values of the smoothing parameter. If omitted it is found by GCV based on
+the quantile criterion.
+}
+\item{maxit}{
+Maximum number of iterations used to estimate each quantile spline.
+}
+\item{maxit.cv}{
+Maximum number of iterations to find GCV minimum.
+}
+\item{tol}{
+Tolerance for convergence when computing quantile spline.
+}
+\item{cost}{
+Cost value used in the GCV criterion. Cost=1 is the usual GCV
+denominator.
+}
+\item{offset}{
+Constant added to the effective degrees of freedom in the GCV function.
+}
+\item{sc}{
+Scale factor for rounding out the absolute value function at zero to a
+quadratic. Default is a small scale to produce something more like
+quantiles. Scales on the order of the residuals will result in a robust
+regression fit using the Huber weight function. The default is 1e-5 of the
+variance of the Y's. The larger this value the better behaved the problem
+is numerically, requiring fewer iterations for convergence at each new
+value of lambda.
+}
+\item{alpha}{
+Quantile to be estimated. The default (0.5) estimates the median.
+}
+\item{wt}{
+Weight vector; the default is constant weights. Passing nonconstant weights is a
+pretty strange thing to do.
+}
+\item{nstep.cv}{
+Number of points used in CV grid search
+}
+\item{hmin}{
+Minimum value of log( lambda) used for GCV grid search.
+}
+\item{hmax}{
+Maximum value of log( lambda) used for GCV grid search.
+}
+\item{trmin}{
+Minimum value of effective degrees of freedom in model used
+for specifying the range of lambda in the GCV grid search.
+}
+\item{trmax}{
+Maximum value of effective degrees of freedom in model used
+for specifying the range of lambda in the GCV grid search.
+}
+
+}
+\value{
+An object of class qsreg with many components similar to a sreg object.
+One difference is that cv.grid has five columns, the last being
+the number of iterations for convergence at each value of lambda.
+}
+\details{
+This is an experimental function to find the smoothing parameter for a
+quantile or robust spline using a more appropriate criterion than mean squared
+error prediction.
+The quantile spline is found by an iterative algorithm using weighted
+least squares cubic splines. At convergence the estimate will also be a
+weighted natural cubic spline but the weights will depend on the
+estimate.
+Alternatively, at convergence the estimate will be a least squares spline applied to the
+empirical pseudo data. The user is referred to the paper by Oh and Nychka (2002) for the
+details and properties of the robust cross-validation using empirical pseudo data.
+Of course these weights are crafted so that the resulting spline is an
+estimate of the alpha quantile instead of the mean. CV as a function of
+lambda can be strange so it should be plotted.
+
+}
+\seealso{
+\code{\link{sreg}}
+}
+\examples{
+
+ # fit a CV quantile spline
+ fit50<- qsreg(rat.diet$t,rat.diet$con)
+ # (default is .5 so this is an estimate of the conditional median)
+ # control group of rats.
+ plot( fit50)
+ predict( fit50)
+ # predicted values at data points
+ xg<- seq(0,110,,50)
+ plot( fit50$x, fit50$y)
+ lines( xg, predict( fit50, xg))
+
+ # A robust fit to rat diet data
+ #
+ SC<- .5* median(abs((rat.diet$con- median(rat.diet$con))))
+ fit.robust<- qsreg(rat.diet$t,rat.diet$con, sc= SC)
+ plot( fit.robust)
+
+ # The global GCV function suggests little smoothing so
+ # try the local
+ # minima with largest lambda instead of this default value.
+ # one should consider redoing the three quantile fits in this
+ # example after looking at the cv functions and choosing a good value for
+ #lambda
+ # for example
+ lam<- fit50$cv.grid[,1]
+ tr<- fit50$cv.grid[,2]
+ # lambda close to df=6
+ lambda.good<- max(lam[tr>=6])
+ fit50.subjective<-qsreg(rat.diet$t,rat.diet$con, lam= lambda.good)
+ fit10<-qsreg(rat.diet$t,rat.diet$con, alpha=.1, nstep.cv=200)
+ fit90<-qsreg(rat.diet$t,rat.diet$con, alpha=.9, nstep.cv=200)
+ # spline fits at 50 equally spaced points
+ sm<- cbind(
+
+ predict( fit10, xg),
+ predict( fit50.subjective, xg),predict( fit50, xg),
+ predict( fit90, xg))
+
+ # and now zee data ...
+ plot( fit50$x, fit50$y)
+ # and now zee quantile splines at 10% 50% and 90%.
+ #
+ matlines( xg, sm, col=c( 3,3,2,3), lty=1) # the spline
+
+}
+\keyword{smooth}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
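The effect of the sc argument (rounding the absolute value criterion to a quadratic near zero, as described above) can be visualized with a few lines of plain R. The function rho here is a local Huber-type illustration, not a fields function:

```r
# Rounded absolute value: quadratic within +/- sc of zero, linear outside.
# rho() is a local sketch of the criterion the sc argument controls,
# not part of the fields API.
rho <- function(r, sc) ifelse(abs(r) <= sc, r^2 / (2 * sc), abs(r) - sc / 2)
r <- seq(-2, 2, length.out = 200)
plot(r, rho(r, sc = 0.1), type = "l", ylab = "criterion")  # close to |r|
lines(r, rho(r, sc = 1), lty = 2)                          # more quadratic near 0
```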
diff --git a/man/quilt.plot.Rd b/man/quilt.plot.Rd
new file mode 100644
index 0000000..7ac3312
--- /dev/null
+++ b/man/quilt.plot.Rd
@@ -0,0 +1,117 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{quilt.plot}
+\alias{quilt.plot}
+%- Also NEED an '\alias' for EACH other topic documented here.
+\title{Image plot for irregular spatial data. }
+\description{
+Given a vector of z values associated with 2-d locations this
+function produces an
+image-like plot where the locations are discretized to a grid and the z
+values are coded as a color level from a color scale. }
+\usage{
+quilt.plot(x, y, z, nx = 64, ny = 64, grid = NULL,
+ add.legend=TRUE,add=FALSE, nlevel=64,
+ col = tim.colors(nlevel),
+ nrow=NULL, ncol=NULL,FUN =
+ NULL, plot=TRUE, na.rm=FALSE, ...)
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+ \item{x}{A vector of the x coordinates of the locations -or-
+ a 2 column matrix of the x-y coordinates. }
+ \item{y}{A vector of the y coordinates -or-, if the locations are passed
+in x, the z vector. }
+ \item{z}{Values of the variable to be plotted.}
+ \item{nlevel}{Number of color levels.}
+ \item{nx}{Number of grid boxes in x if a grid is not specified.}
+ \item{ny}{Number of grid boxes in y. }
+ \item{nrow}{Deprecated; same as nx.}
+ \item{ncol}{Deprecated; same as ny. }
+ \item{grid}{A grid in the form of a \code{grid list}. }
+ \item{add.legend}{If TRUE a legend color strip is added.}
+ \item{add}{If TRUE the image is added to an existing plot.}
+ \item{col}{Color scale for the image, the default is tim.colors --
+ a pleasing spectrum.}
+ \item{plot}{If FALSE just returns the image object instead of plotting it.}
+ \item{FUN}{The function to apply to values that are common to a grid box. The default is to find the mean. (see \code{as.image}).}
+ \item{na.rm}{If FALSE NAs are not removed from z, and so a grid box
+ containing even one of these values may be an NA. (See details below.)}
+ \item{\dots}{ arguments to be passed to the image.plot function }
+}
+\details{
+This function discretizes the data to an image using the function
+\code{as.image} and then graphs it with \code{image.plot}.
+By default, locations that fall into the same grid box will have their
+z values averaged. This also means that observations that are NA will
+result in the grid box average also being NA and can produce unexpected
+results because the NA patterns can dominate the figure. If you are
+unsure of the effect try \code{na.rm = TRUE} for a comparison.
+
+A similar function exists in the lattice package and produces good
+looking plots. The advantage of this fields version is that it uses the
+standard R graphics functions and is written in R code. Also, the
+aggregation to average values for z values in the same grid box allows
+for different choices of grids. If two locations are very close,
+separating them could result in very small boxes.
+
+As always, legend placement is never completely automatic. Place the
+legend independently for more control, perhaps using \code{image.plot}
+in tandem with \code{split.screen} or enlarging the plot margin.
+See \code{help(image.plot)} for examples of this function and these
+strategies. }
+\author{D.Nychka}
+\seealso{ as.image, image.plot, lattice, persp, drape.plot }
+\examples{
+
+data( ozone2)
+# plot day 16 of the ozone data set
+
+quilt.plot( ozone2$lon.lat, ozone2$y[16,])
+US( add=TRUE, col="grey", lwd=2)
+
+#
+# and ... if you are fussy
+# do it again
+# quilt.plot( ozone2$lon.lat, ozone2$y[16,],add=TRUE)
+# to draw over the state boundaries.
+#
+
+### adding a common legend strip "by hand"
+## and a custom color table
+
+coltab<- two.colors( 256, middle="grey50" )
+
+par( oma=c( 0,0,0,5)) # save some room for the legend
+set.panel(2,2)
+zr<- range( ozone2$y, na.rm=TRUE)
+
+for( k in 1:4){
+quilt.plot( ozone2$lon.lat, ozone2$y[15+k,], add.legend=FALSE,
+ zlim=zr, col=coltab, nx=40, ny=40)
+US( add=TRUE)
+}
+par( oma=c(0,0,0,1))
+image.plot(zlim=zr,legend.only=TRUE, col=coltab)
+# may have to adjust number of spaces in oma to make this work.
+ }
+\keyword{hplot}% at least one, from doc/KEYWORDS
diff --git a/man/rat.diet.Rd b/man/rat.diet.Rd
new file mode 100644
index 0000000..ea2aa34
--- /dev/null
+++ b/man/rat.diet.Rd
@@ -0,0 +1,52 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{rat.diet}
+\alias{rat.diet}
+\title{
+ Experiment studying an appetite suppressant in rats.
+}
+\description{
+The `rat.diet' data frame has 39 rows and 3 columns.
+These are data from a study of an appetite suppressant given to young rats.
+The suppressant was removed from the treatment group at around 60 days.
+The responses are the median food intake and each group had approximately
+10 animals.
+}
+\usage{
+data(rat.diet)
+}
+\format{
+This data frame contains the following columns:
+
+\describe{
+\item{t}{
+Time in days}
+\item{con}{
+Median food intake of the control group }
+\item{trt}{
+Median food intake of the treatment group}
+}
+
+}
+\keyword{datasets}
+% docclass is data
+% Converted by Sd2Rd version 1.21.
diff --git a/man/rdist.Rd b/man/rdist.Rd
new file mode 100644
index 0000000..7d59f29
--- /dev/null
+++ b/man/rdist.Rd
@@ -0,0 +1,157 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{rdist}
+\alias{rdist}
+\alias{fields.rdist.near}
+\alias{rdist.vec}
+
+\title{
+ Euclidean distance matrix or vector
+}
+\description{
+Given two sets of locations \code{rdist} and \code{fields.rdist.near} compute the full
+Euclidean distance matrix among all pairings or a sparse version for points within a
+fixed threshold distance. \code{rdist.vec} computes a vector of pairwise distances
+between corresponding elements of the input locations and is used in empirical
+variogram calculations.
+}
+\usage{
+rdist(x1, x2 = NULL, compact = FALSE)
+
+fields.rdist.near(x1,x2, delta, max.points= NULL, mean.neighbor = 50)
+
+rdist.vec(x1, x2)
+}
+\arguments{
+\item{x1}{
+Matrix of first set of locations where each row gives the coordinates of a
+particular
+point.
+}
+\item{x2}{
+Matrix of second set of locations where each row gives the coordinates of
+a particular point. If this is not passed or given as NULL x1 is used.
+}
+\item{compact}{
+Whether or not to return a distance matrix in compact form inheriting class ``dist'' (as returned by the \code{dist} function in base R). Only values for one triangle of the symmetric distance matrix are returned. This saves time evaluating the returned matrix and the covariance. Note that this option is ignored when \code{x2} is not NULL.
+}
+\item{delta}{ Threshold distance. All pairs of points separated by more than
+delta in distance are ignored. }
+
+\item{max.points}{Expected number of pairs at distance less than or equal to
+delta. The default is nrow(x1)*mean.neighbor. }
+
+\item{mean.neighbor}{ Average number of neighboring points expected within
+delta; used to size the temporary space (the default for max.points). }
+
+}
+\section{Returned values}{
+Let D be the mXn distance matrix,
+with m= nrow(x1) and n=nrow( x2). The elements are
+the Euclidean distances between all locations x1[i,] and x2[j,].
+That is,
+
+D.ij = sqrt( sum.k ( x1[i,k] - x2[j,k])^2 ).
+
+\code{rdist}
+The distance matrix D is returned.
+
+\code{fields.rdist.near}
+The elements of D that are less than or equal to delta are returned in
+the form of a list.
+
+List components:
+\describe{
+\item{ind}{ Row and column indices of elements }
+\item{ra}{ Distances ( D.ij). }
+\item{da}{ Dimensions of full distance matrix. }
+}
+
+This is a simple sparse format that can be manipulated by several fields
+functions. E.g. ind2spam will convert this list to the format used by
+the spam sparse matrix package. ind2full will convert this to an
+ordinary matrix with zeroes.
+
+}
+
+\details{
+
+More about fields.rdist.near:
+
+The sparse version is designed to work with the sparse covariance
+functions in fields and anticipates that the full matrix, D, is too large
+to store. The argument max.points is set by default to nrow( x1)*mean.neighbor
+and allocates the space to hold the sparse elements. In case there
+are more points within delta the function stops with an error
+but lists the offending rows. Just rerun the function with a larger
+choice for max.points.
+
+It is possible that for certain x1 points there are no x2 points within a distance
+delta. This situation will cause an error if the list is converted to spam format.
+
+}
+\author{Doug Nychka, John Paige}
+\seealso{ \link{stationary.cov}, \link{Exp.cov}, \link{rdist.earth}, \link{dist}, ind2spam, ind2full }
+\examples{
+
+out<- rdist( ChicagoO3$x)
+# out is a 20X20 matrix.
+
+out2<- rdist( ChicagoO3$x[1:5,], ChicagoO3$x[11:20,])
+#out2 is a 5X10 matrix
+
+set.seed(123)
+x1<- matrix( runif( 20*2), 20,2)
+x2<- matrix( runif( 15*2), 15,2)
+
+out3<- fields.rdist.near( x1,x2, delta=.5)
+# out3 is a sparse structure in list format
+
+# or to "save" work space decrease size of temp array
+
+ out3<- fields.rdist.near( x1,x2, delta=.5,max.points=20*15)
+
+# explicitly reforming as a full matrix
+temp<- matrix( NA, nrow=out3$da[1], ncol= out3$da[2])
+temp[ out3$ind] <- out3$ra
+
+# or just use
+
+ temp<- spind2full( out3)
+ image( temp)
+
+# this is identical to
+ temp2<- rdist( x1,x2)
+ temp2[ temp2<= .5] <- NA
+
+#compute pairwise distance vector
+x1 = 1:10
+x2 = seq(from=10, to=1)
+rdist.vec(x1, x2)
+
+#calculate output matrix in compact form:
+distOut = rdist(1:10, compact=TRUE)
+distOut
+as.vector(distOut)
+}
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
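The distance formula in the Returned values section is easy to check directly against rdist with a double loop. This is a sanity check written for clarity, not an efficient computation:

```r
# Verify that rdist matches the elementwise Euclidean distance formula.
library(fields)
set.seed(1)
x1 <- matrix(runif(10), 5, 2)
x2 <- matrix(runif(8), 4, 2)
D <- rdist(x1, x2)
Dcheck <- matrix(NA, nrow(x1), nrow(x2))
for (i in 1:nrow(x1)) for (j in 1:nrow(x2)) {
  Dcheck[i, j] <- sqrt(sum((x1[i, ] - x2[j, ])^2))   # D.ij from the formula
}
all.equal(D, Dcheck)   # TRUE up to numerical tolerance
```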
diff --git a/man/rdist.earth.Rd b/man/rdist.earth.Rd
new file mode 100644
index 0000000..345202e
--- /dev/null
+++ b/man/rdist.earth.Rd
@@ -0,0 +1,88 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{rdist.earth}
+\alias{rdist.earth}
+\alias{rdist.earth.vec}
+
+\title{
+ Great circle distance matrix or vector
+}
+\description{
+Given two sets of longitude/latitude locations, \code{rdist.earth} computes
+the Great circle (geographic) distance matrix among all pairings and
+\code{rdist.earth.vec} computes a vector of pairwise great circle distances
+between corresponding elements of the input locations using the Haversine
+method and is used in empirical variogram calculations.
+}
+\usage{
+rdist.earth(x1, x2, miles = TRUE, R = NULL)
+rdist.earth.vec(x1, x2, miles = TRUE, R = NULL)
+}
+\arguments{
+\item{x1}{
+Matrix of first set of lon/lat coordinates; the first column is the
+longitudes
+and the second is the latitudes.
+}
+\item{x2}{
+Matrix of second set of lon/lat coordinates; the first column is the
+longitudes
+and the second is the latitudes. If missing, x1 is used.
+}
+\item{miles}{
+If TRUE distances are in statute miles; if FALSE, in kilometers.
+}
+\item{R}{
+Radius of the sphere used to find spherical distances. If NULL the radius
+is either in miles or kilometers depending on the value of the miles
+argument. If R=1 then distances are of course in radians.
+}
+}
+\value{
+The great circle distance matrix. If nrow(x1)=m and nrow(
+x2)=n then the returned matrix will be mXn.
+}
+\details{
+Surprisingly the distance matrix is computed efficiently in R by dot products of the
+direction cosines. Thanks to Qing Yang for pointing this out a long time
+ago. }
+\author{Doug Nychka, John Paige}
+\seealso{
+ \link{rdist}, \link{stationary.cov}, \link{fields.rdist.near}
+}
+\examples{
+data(ozone2)
+out<- rdist.earth ( ozone2$lon.lat)
+#out is a 153X153 distance matrix
+upper<- col(out)> row( out)
+# histogram of all pairwise distances.
+hist( out[upper])
+
+#get pairwise distances between first 10 and second 10 lon/lat points
+x1 = ozone2$lon.lat[1:10,]
+x2 = ozone2$lon.lat[11:20,]
+dists = rdist.earth.vec(x1, x2)
+print(dists)
+}
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/registeredC.Rd b/man/registeredC.Rd
new file mode 100644
index 0000000..125b4b2
--- /dev/null
+++ b/man/registeredC.Rd
@@ -0,0 +1,85 @@
+\name{registeringCode}
+\alias{addToDiagC}
+\alias{ExponentialUpperC}
+\alias{compactToMatC}
+\alias{multebC}
+\alias{multwendlandg}
+\alias{mltdrb}
+\alias{RdistC}
+%
+\docType{data}
+\title{Information objects that register
+C and FORTRAN functions.
+}
+\description{
+These are objects of class \code{CallRoutine} or \code{FortranRoutine} and also \code{\link{NativeSymbolInfo}}.
+They provide information for compiled functions called with \code{.Call}
+or \code{.Fortran}.
+Ordinarily one would not need to consult these; they are used to make
+the search among dynamically loaded libraries (in particular the fields library) less ambiguous and
+also faster. These are created when the package/library is loaded
+and have their definitions from the compilation of \code{init.c} in the
+package source (src) directory.
+%ExponentialUpperC.Rd compactToMatC.Rd multebC.Rd
+%RdistC.Rd mltdrb.Rd multwendlandg.Rd
+
+%%\usage{
+%data(addToDiagC)
+
+%data(ExponentialUpperC)
+
+%data(compactToMatC)
+
+%data(multebC)
+
+%data(multwendlandg)
+
+%data(mltdrb)
+
+%data(RdistC)
+%}
+\format{
+ The format is a list with components:
+ \describe{
+ \item{name}{The (registration ?) name of the C function.}
+ \item{address}{See \link{NativeSymbolInfo}. }
+ \item{dll}{Dynamically linked library information.}
+ \item{numParameters}{Number of calling arguments in function.}
+ }
+ }
+\details{
+ \describe{
+\item{addToDiagC}{ adds diagonal elements to a matrix. See \code{mKrig}.}
+
+\item{ExponentialUpperC}{Fills in upper triangle of a matrix with the exponential covariance function. See \code{ExponentialUpper}}
+
+\item{compactToMatC}{ Converts compact format to full matrix format. See \code{compactToMat}.}
+
+\item{multebC}{Multiplies a vector/matrix with an exponential covariance function. See \code{exp.cov}}
+
+\item{multwendlandg}{This has been mysteriously included but it is not a function! }
+
+\item{mltdrb}{Evaluates the derivatives of thin plate spline radial basis functions. See \code{rad.cov}. }
+
+\item{RdistC}{ Euclidean distance function between sets of coordinates.
+See \code{rdist}.}
+}
+
+See \code{package_native_routine_registration_skeleton} for the
+utility used to create these data objects.
+
+It is not clear why these routines have been flagged as needing
+documentation while other routines have not.
+
+}
+\references{
+For background on registering C, C++ and Fortran functions see Section 5.4 of
+Writing R Extensions.
+See \url{http://r.789695.n4.nabble.com/Registration-of-native-routines-td4728874.html} for additional discussion of code registration.
+}
+\examples{
+print(addToDiagC)
+
+}
+\keyword{datasets}
diff --git a/man/ribbon.plot.Rd b/man/ribbon.plot.Rd
new file mode 100644
index 0000000..9ef3fb7
--- /dev/null
+++ b/man/ribbon.plot.Rd
@@ -0,0 +1,89 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{ribbon.plot}
+\alias{ribbon.plot}
+%- Also NEED an '\alias' for EACH other topic documented here.
+ \title{ Adds a ribbon of color to an existing plot along a sequence of
+line segments, with colors taken from a color scale.}
+ \description{
+Given a series of 2-d points and values associated with the connecting segments,
+the function colors the segments according to a color scale and the
+segment values. This is essentially an image plot restricted to line segments.
+}
+\usage{
+
+ribbon.plot(x,y,z,zlim=NULL, col=tim.colors(256),
+ transparent.color="white",...)
+
+}
+
+\arguments{
+ \item{x}{x locations of line segments}
+ \item{y}{y locations of line segments}
+ \item{z}{ Values associated with each segment.}
+
+ \item{zlim}{Range for z values to determine color scale. }
+
+ \item{col}{Color table used for the strip. Default is tim.colors,
+a scale from dark blue to dark red.}
+
+\item{transparent.color}{Color used for missing values. Default is that
+missing values make the ribbon transparent.}
+
+ \item{\dots}{Optional graphical arguments that are passed to the
+ \code{segments} plotting function. A favorite is lwd to make a broad
+ ribbon. }
+
+}
+
+\details{
+Besides possible 2-d applications,
+this function is useful to annotate a curve on a surface using colors.
+The values mapped to a color scheme could indicate a feature
+other than the height of the surface.
+For example, the colors could indicate the slope of the surface.
+
+ }
+\author{Doug Nychka}
+
+\seealso{ image.plot, arrow.plot, add.image, colorbar.plot}
+
+\examples{
+plot( c(-1.5,1.5),c(-1.5,1.5), type="n")
+temp<- list( x= seq( -1,1,,40), y= seq( -1,1,,40))
+temp$z <- outer( temp$x, temp$y, "+")
+contour( temp, add=TRUE)
+
+t<- seq( 0,.5,,50)
+y<- sin( 2*pi*t)
+x<- cos( pi*t)
+z<- x + y
+
+ribbon.plot( x,y,z, lwd=10)
+
+persp( temp, phi=15, shade=.8, col="grey")-> pm
+trans3d( x,y,z,pm)-> uv
+ribbon.plot( uv$x, uv$y, z**2,lwd=5)
+
+}
+
+\keyword{ hplot }% at least one, from doc/KEYWORDS
diff --git a/man/set.panel.Rd b/man/set.panel.Rd
new file mode 100644
index 0000000..e1afd65
--- /dev/null
+++ b/man/set.panel.Rd
@@ -0,0 +1,78 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{set.panel}
+\alias{set.panel}
+\title{
+ Specify a panel of plots
+}
+\description{
+Divides up the graphics window into a matrix of plots.
+}
+\usage{
+set.panel(m=1, n=1, relax=FALSE)
+}
+\arguments{
+\item{m}{
+Number of rows in the panel of plots
+}
+\item{n}{
+Number of columns in the panel.
+}
+\item{relax}{
+If TRUE and par is already set for multiple plots,
+then the set.panel command is ignored. The default is
+FALSE.
+}
+}
+\details{
+After set.panel is called, the graphics screen is reset to
+put plots according to an m x n table. Plotting starts in
+the upper left hand corner and proceeds row by row. After
+m x n plots have been drawn, the next plot will erase the
+window and start in the 1,1 position again. This function
+is just a repackaging for specifying the mfrow argument to
+par.
+Setting up a panel of plots is a quick way to change the
+aspect ratio of the graph (ratio of height to width) or
+the size. For example, plotting 2 plots to a page produces
+a usefully sized graph for including in a report. You can
+print out the graphs at any stage without having to fill
+up the entire window with plots. This function, except for the "relax"
+option, is equivalent to the S sequence: par( mfrow=c(m,n)).
+}
+\section{Side Effects}{
+The function will echo your choice of m and n to the terminal.
+}
+\seealso{
+\code{par}
+}
+\examples{
+set.panel(5,2) #divide screen to hold 10 plots where there are 5 rows
+ #and 2 columns
+plot( 1:10)
+plot( 2:8)
+
+set.panel() #reset screen to one plot per screen
+}
+\keyword{hplot}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/sim.Krig.Rd b/man/sim.Krig.Rd
new file mode 100644
index 0000000..930edf7
--- /dev/null
+++ b/man/sim.Krig.Rd
@@ -0,0 +1,347 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{sim.spatialProcess}
+\alias{sim.Krig}
+\alias{sim.spatialProcess}
+\alias{sim.Krig.approx}
+\alias{sim.mKrig.approx}
+\alias{sim.fastTps.approx}
+
+\title{Conditional simulation of a spatial process}
+\description{
+Generates exact (or approximate) random draws from the conditional
+distribution of a spatial process given specific observations. This is a
+useful way to characterize the uncertainty in the predicted process from
+data. This is known as conditional simulation in geostatistics or
+generating an ensemble prediction in the geosciences. sim.Krig.approx can
+generate a conditional sample on a large regular grid but is restricted
+to stationary correlation functions.
+ }
+\usage{
+sim.spatialProcess(object, xp, M = 1, verbose = FALSE, ...)
+
+sim.Krig(object, xp, M = 1, verbose = FALSE, ...)
+
+sim.Krig.approx(object, grid.list = NULL, M = 1, nx = 40, ny = 40,
+ verbose = FALSE, extrap = FALSE,...)
+
+sim.mKrig.approx(mKrigObject, predictionPoints = NULL,
+ predictionPointsList = NULL, simulationGridList =
+ NULL, gridRefinement = 5, gridExpansion = 1 + 1e-07, M
+ = 1, nx = 40, ny = 40, nxSimulation = NULL,
+ nySimulation = NULL, delta = NULL, verbose = FALSE,...)
+
+sim.fastTps.approx(fastTpsObject,
+ predictionPointsList, simulationGridList =
+ NULL, gridRefinement = 5, gridExpansion = 1 + 1e-07, M
+ = 1, delta = NULL, verbose=FALSE,...)
+
+
+
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+
+ \item{delta}{If the covariance has compact support the simulation method can
+take advantage of this. This is the amount of buffer added for the simulation domain in the circulant embedding method.
+A minimum size would be \code{theta} for the Wendland but a multiple of this may be needed to obtain a positive definite
+circulant covariance function. }
+
+ \item{extrap}{ If FALSE the conditional process is not evaluated outside
+ the convex hull of observations. }
+
+ \item{fastTpsObject}{The output object returned by fastTps}
+
+ \item{grid.list}{Grid information for evaluating the conditional
+ surface as a grid.list.}
+ \item{gridRefinement}{Amount to increase the number of grid points
+ for the simulation grid.}
+
+ \item{gridExpansion}{Amount to increase the size of the simulation
+grid. This is used to increase the simulation domain so that the
+circulant embedding algorithm works.}
+
+ \item{mKrigObject}{An mKrig Object}
+
+
+ \item{M}{Number of draws from conditional distribution.}
+
+ \item{nx}{ Number of grid points in prediction locations for x coordinate.}
+  \item{ny}{ Number of grid points in prediction locations for y coordinate.}
+
+ \item{nxSimulation}{ Number of grid points in the circulant embedding simulation x coordinate.}
+  \item{nySimulation}{ Number of grid points in the circulant embedding simulation y coordinate.}
+
+ \item{object}{A Krig object.}
+
+ \item{predictionPoints}{A matrix of locations defining the
+ points for evaluating the predictions.}
+
+ \item{predictionPointsList}{ A \code{grid.list} defining the
+rectangular grid for evaluating the predictions.}
+
+ \item{simulationGridList}{ A \code{grid.list} describing the grid for
+simulation. If missing this is created from the range of the
+locations, \code{nx}, \code{ny}, \code{gridRefinement}, and \code{gridExpansion}
+or from the range and \code{nxSimulation}, \code{nySimulation}.}
+
+
+ \item{xp}{Same as predictionPoints above.}
+% \item{Zp}{The covariate vector or matrix for predicting at the locations xp}
+
+ \item{\dots}{Any other arguments to be passed to the predict function.
+ Usually this is the \code{Z} or \code{drop.Z} argument when there are
+ additional covariates in the fixed part of the model.
+ (See example below.) }
+
+ \item{verbose}{If true prints out intermediate information. }
+}
+
+\details{
+These functions generate samples from a conditional multivariate
+distribution, or an approximate one, that describes the uncertainty in the estimated spatial
+process under Gaussian assumptions. An important assumption throughout
+these functions is that all covariance parameters are fixed at their
+estimated or prescribed values from the passed object.
+
+Given a spatial process h(x)= P(x) + g(x) observed at
+
+Y.k = Z(x.k)d + P(x.k) + g(x.k) + e.k
+
+where P(x) is a low order, fixed polynomial and g(x) a Gaussian spatial
+process. Z(x.k) is a vector of covariates that are also indexed by space (such as elevation), and Z(x.k)d is a linear combination of the covariates with the parameter vector d being a component of the fixed part of the
+model, estimated in the usual way by generalized least squares.
+
+With Y= Y.1, ..., Y.N,
+the goal is to sample the conditional distribution of the process.
+
+[h(x) | Y ] or the full prediction Z(x)d + h(x)
+
+For a fixed covariance this is just a multivariate normal sampling
+problem. \code{sim.spatialProcess} and \code{sim.Krig} sample this conditional process at
+the points \code{xp} and are exact for fixed covariance parameters.
+\code{sim.Krig.approx} also assumes fixed covariance parameters and does
+approximate sampling on a grid.
+
+The outline of the algorithm is
+
+0) Find the spatial prediction at the unobserved locations based on
+ the actual data. Call this h.hat(x) and this is the conditional mean.
+
+1) Generate an unconditional spatial process and from this process
+simulate synthetic observations. At this point the approximation is
+introduced: the field at the observation locations is
+approximated using interpolation from the nearest grid points.
+
+2) Use the spatial prediction model ( using the true covariance) to
+estimate the spatial process at unobserved locations.
+
+3) Find the difference between the simulated process and its
+prediction based on synthetic observations. Call this e(x).
+
+4) h.hat(x) + e(x) is a draw from [h(x) | Y ].
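The four steps above can be sketched in a compact, language-neutral way. The following pure-Python illustration uses a 1-d process, an exponential covariance, simple kriging, and an exact Cholesky draw in place of circulant embedding; all function names are hypothetical and this is not the fields code.

```python
import math, random

def expcov(a, b, theta=1.0):
    # exponential covariance between 1-d locations a and b
    return math.exp(-abs(a - b) / theta)

def cholesky(A):
    # lower-triangular Cholesky factor of a symmetric positive definite matrix
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

def chol_solve(L, b):
    # solve (L L^T) x = b by forward then backward substitution
    n = len(L)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

def krig_predict(xobs, yobs, xp, theta=1.0):
    # simple kriging mean (zero mean, no nugget): weights w = K^{-1} y
    K = [[expcov(a, b, theta) for b in xobs] for a in xobs]
    w = chol_solve(cholesky(K), yobs)
    return [sum(expcov(x, b, theta) * wi for b, wi in zip(xobs, w)) for x in xp]

def sim_gp(xs, theta=1.0, rng=random):
    # unconditional Gaussian draw at locations xs via Cholesky
    L = cholesky([[expcov(a, b, theta) for b in xs] for a in xs])
    z = [rng.gauss(0, 1) for _ in xs]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(len(xs))]

def conditional_draw(xobs, yobs, xp, theta=1.0, rng=random):
    hhat = krig_predict(xobs, yobs, xp, theta)         # step 0: conditional mean
    sim = sim_gp(list(xobs) + list(xp), theta, rng)    # step 1: unconditional field
    ysyn, simp = sim[:len(xobs)], sim[len(xobs):]      #         synthetic observations
    pred = krig_predict(xobs, ysyn, xp, theta)         # step 2: predict from them
    err = [s - p for s, p in zip(simp, pred)]          # step 3: simulated error
    return [h + e for h, e in zip(hhat, err)]          # step 4: mean + error
```

Because kriging interpolates exactly when there is no nugget, draws at (or very near) an observation location reproduce the observed value, which is the defining property of a conditional simulation.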
+
+
+\code{sim.spatialProcess} follows this algorithm exactly. For the case of an
+additional covariate this of course needs to be included. For a model
+with covariates use \code{drop.Z=TRUE} for the function to ignore prediction
+using the covariate and generate conditional samples for just the spatial
+process and any low order polynomial. Finally, it should be noted that this
+function will also work with an \code{mKrig} object because the essential
+prediction information in the mKrig and spatialProcess objects is the same.
+The naming is simply a convenience.
+
+\code{sim.Krig} also follows this algorithm exactly but for the older \code{Krig} object. Note that the inclusion of
+drop.Z=TRUE or FALSE will determine whether the conditional simulation
+includes the covariates Z or not. (See example below.)
+
+
+\code{sim.Krig.approx} and \code{sim.mKrig.approx} evaluate the
+conditional surface on a grid and simulate the values of h(x) off the
+grid using bilinear interpolation of the four nearest grid
+points. Because of this approximation it is important to choose the
+grid to be fine relative to the spacing of the observations. The
+advantage of this approximation is that one can consider conditional
+simulation for large grids -- beyond the size possible with exact
+methods. Here the method for simulation is circulant embedding and so
+is restricted to stationary fields. The circulant embedding method is
+known to fail if the domain is small relative to the correlation
+range. The argument \code{gridExpansion} can be used to increase the
+size of the domain to make the algorithm work.
+
+\code{sim.fastTps.approx} is optimized for the approximate thin plate
+spline estimator in two dimensions and \code{k=2}. For efficiency the
+ensemble prediction locations must be on a grid.
+
+}
+\value{
+\code{sim.Krig} and \code{sim.spatialProcess}:
+ a matrix with rows indexed by the locations in \code{xp} and columns being the
+ \code{M} independent draws.
+
+\code{sim.Krig.approx} a list with components \code{x}, \code{y} and \code{z}.
+x and y define the grid for the simulated field and z is a three dimensional array
+ with dimensions \code{c(nx, ny, M)} where the
+first two dimensions index the field and the last dimension indexes the draws.
+
+\code{sim.mKrig.approx} a list with \code{predictionPoints} being the
+locations where the field has been simulated. If these have been created from a
+grid list that information is stored in the \code{attributes} of \code{predictionPoints}.
+ \code{Ensemble} is a
+matrix where rows index the simulated values of the field and columns
+are the different draws; \code{call} is the calling sequence. Note that if \code{predictionPoints}
+has been omitted in the call or is created beforehand using \code{make.surface.grid} it is
+easy to reformat the results into an image format for plotting using \code{as.surface}.
+e.g. if \code{simOut} is the output object then to plot the 3rd draw:
+
+\preformatted{
+ imageObject<- as.surface(simOut$PredictionGrid, simOut$Ensemble[,3] )
+ image.plot( imageObject)
+}
+
+\code{sim.fastTps.approx} is a wrapper function that calls \code{sim.mKrig.approx}.
+
+}
+
+\author{Doug Nychka}
+\seealso{ sim.rf, Krig, spatialProcess}
+\examples{
+
+\dontrun{
+# conditional simulation with covariates
+# colorado climate example
+ data(COmonthlyMet)
+ fit1E<- spatialProcess(CO.loc,CO.tmin.MAM.climate, Z=CO.elev )
+# conditional simulation at missing data
+ good<- !is.na(CO.tmin.MAM.climate )
+ infill<- sim.spatialProcess( fit1E, xp=CO.loc[!good,],
+ Z= CO.elev[!good], M= 10)
+# get an elevation grid ... NGRID<- 50 gives a nicer image but takes longer
+ NGRID <- 25
+ # get elevations on a grid
+ COGrid<- list( x=seq( -109.5, -101, ,NGRID), y= seq(39, 41.5,,NGRID) )
+ COGridPoints<- make.surface.grid( COGrid)
+ # elevations are a bilinear interpolation from the 4km
+ # Rocky Mountain elevation fields data set.
+ data( RMelevation)
+ COElevGrid<- interp.surface( RMelevation, COGridPoints )
+# NOTE call to sim.Krig treats the grid points as just a matrix
+# of locations the plot has to "reshape" these into a grid
+# to use with image.plot
+ SEout<- sim.spatialProcess( fit1E, xp=COGridPoints, Z= COElevGrid, M= 30)
+# for just the smooth surface in lon/lat
+# SEout<- sim.spatialProcess( fit1E, xp=COGridPoints, drop.Z=TRUE, M= 30)
+# in practice M should be larger to reduce Monte Carlo error.
+ surSE<- apply( SEout, 2, sd )
+ image.plot( as.surface( COGridPoints, surSE))
+ points( fit1E$x, col="magenta", pch=16)
+
+}
+
+data( ozone2)
+set.seed( 399)
+# fit to day 16 from Midwest ozone data set.
+ out<- Krig( ozone2$lon.lat, ozone2$y[16,], Covariance="Matern",
+ theta=1.0,smoothness=1.0, na.rm=TRUE)
+
+# NOTE theta =1.0 is not the best choice but
+# allows the sim.rf circulant embedding algorithm to
+# work without increasing the domain.
+
+#six missing data locations
+ xp<- ozone2$lon.lat[ is.na(ozone2$y[16,]),]
+
+# 5 draws from process at xp given the data
+# this is an exact calculation
+ sim.Krig( out,xp, M=5)-> sim.out
+
+# Compare: stats(sim.out)[3,] to Exact: predictSE( out, xp)
+# simulations on a grid
+# NOTE this is approximate due to the bilinear interpolation
+# for simulating the unconditional random field.
+# also more grids points ( nx and ny) should be used
+
+sim.Krig.approx(out,M=5, nx=20,ny=20)-> sim.out
+
+# take a look at the ensemble members.
+
+predictSurface( out, grid= list( x=sim.out$x, y=sim.out$y))-> look
+
+zr<- c( 40, 200)
+
+set.panel( 3,2)
+image.plot( look, zlim=zr)
+title("mean surface")
+for ( k in 1:5){
+image( sim.out$x, sim.out$y, sim.out$z[,,k], col=tim.colors(), zlim =zr)
+}
+
+
+
+\dontrun{
+data( ozone2)
+y<- ozone2$y[16,]
+good<- !is.na( y)
+y<-y[good]
+x<- ozone2$lon.lat[good,]
+O3.fit<- mKrig( x,y, Covariance="Matern", theta=.5,smoothness=1.0, lambda= .01 )
+set.seed(122)
+O3.sim<- sim.mKrig.approx( O3.fit, nx=100, ny=100, gridRefinement=3, M=5 )
+set.panel(3,2)
+surface( O3.fit)
+for ( k in 1:5){
+image.plot( as.surface( O3.sim$predictionPoints, O3.sim$Ensemble[,k]) )
+}
+# conditional simulation at missing data
+xMissing<- ozone2$lon.lat[!good,]
+O3.sim2<- sim.mKrig.approx( O3.fit, xMissing, nx=80, ny=80,
+ gridRefinement=3, M=4 )
+}
+\dontrun{
+#An example for fastTps:
+ data(ozone2)
+ y<- ozone2$y[16,]
+ good<- !is.na( y)
+ y<-y[good]
+ x<- ozone2$lon.lat[good,]
+ O3FitMLE<- fastTpsMLE( x,y, theta=1.5 )
+ O3Obj<- fastTps( x,y, theta=1.5, lambda=O3FitMLE$lambda.MLE)
+# creating a quick grid list based on ranges of locations
+ grid.list<- fields.x.to.grid( O3Obj$x, nx=100, ny=100)
+ O3Sim<- sim.fastTps.approx( O3Obj,predictionPointsList=grid.list,M=5)
+# controlling the grids
+ xR<- range( x[,1], na.rm=TRUE)
+ yR<- range( x[,2], na.rm=TRUE)
+ simulationGridList<- list( x= seq(xR[1],xR[2],,400), y= seq( yR[1],yR[2], ,400))
+# very fine localized prediction grid
+ O3GridList<- list( x= seq( -90.5,-88.5,,200), y= seq( 38,40,,200))
+ O3Sim<- sim.fastTps.approx( O3Obj, M=5, predictionPointsList=O3GridList,
+ simulationGridList = simulationGridList)
+# check
+ plot( O3Obj$x)
+ US( add=TRUE)
+ image.plot( as.surface( O3GridList,O3Sim$Ensemble[,1] ), add=TRUE)
+ points( O3Obj$x, pch=16, col="magenta")
+}
+}
+\keyword{spatial}
+% at least one, from doc/KEYWORDS
diff --git a/man/sim.rf.Rd b/man/sim.rf.Rd
new file mode 100644
index 0000000..8b21342
--- /dev/null
+++ b/man/sim.rf.Rd
@@ -0,0 +1,120 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{sim.rf}
+\alias{sim.rf}
+\title{
+ Simulates a Stationary Gaussian random field
+}
+\description{
+Simulates a stationary Gaussian random field on a regular grid with
+unit marginal variance.
+}
+\usage{
+sim.rf(obj)
+}
+\arguments{
+
+\item{obj}{
+A covariance object that includes information about the covariance
+function and the grid for evaluation. Usually this is created by a
+setup call to Exp.image.cov, stationary.image.cov, matern.image.cov or
+other related covariance functions. (See details below.)
+}
+
+\item{\dots}{
+Additional arguments passed to a particular method.}
+
+}
+\value{
+A matrix with the random field values
+}
+\details{
+The simulated field has the marginal variance that is determined by
+the covariance function at zero distance. Within fields the
+exponential and Matern set this equal to one (e.g. Matern(0) == 1) so
+that one simulates a random field with a marginal variance of one. For
+stationary.cov the marginal variance is \code{cov.function(0)} and we
+recommend that alternative covariance functions also be normalized so
+that this is one.
+
+Of course if one requires a Gaussian field with different marginal
+variance one can simply scale the result of this function. See the
+third example below.
+
+This function takes an object that includes some preliminary
+calculations and so is more efficient for simulating more than one
+field from the same covariance. However, the algorithm using a 2-d FFT
+(known as circulant embedding) may not always work if the correlation
+range is large. The simple fix is to increase the size of the domain
+so that the correlation scale becomes smaller relative to the extent
+of the domain. Increasing the size can be computationally expensive,
+however, and so this method has some limitations. But when it works it is
+an exact simulation of the random field.
+
+For a stationary model the covariance object should have the components:
+
+names( obj)
+ "m" "n" "grid" "N" "M" "wght",
+
+where m and n are the number of grid points in x and y, and grid is a list
+with components x and y giving the grid points in each coordinate. N
+and M give the size of the larger grid that is used for
+simulation. Usually M = 2*m and N = 2*n, which results in an exact
+simulation of the stationary Gaussian field. wght is a matrix from
+the FFT of the covariance function. The easiest way to create this
+object is to use, for example, Exp.image.cov with setup=TRUE (see below).
+
+The classic reference for this algorithm is
+Wood, A.T.A. and Chan, G. (1994).
+ Simulation of Stationary Gaussian Processes in [0,1]^d. Journal of
+Computational and Graphical Statistics, 3, 409-432. Michael Stein and
+Tilman Gneiting have also made additional contributions to the
+algorithms and theory.
+
+}
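The circulant embedding idea described above can be sketched in 1-d with a naive DFT. This is an illustrative pure-Python sketch of the algorithm, not the fields code (which works in 2-d with a real FFT); function names are hypothetical. The key facts used: a circulant covariance matrix is diagonalized by the DFT, so its eigenvalues are the DFT of its first row, and shaping complex white noise by the square roots of those eigenvalues yields draws with exactly that covariance, provided the eigenvalues are nonnegative.

```python
import cmath, math, random

def dft(x):
    # naive O(n^2) discrete Fourier transform (illustration only)
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * math.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def circulant_embed_draw(c_row, rng=random):
    # c_row: first row of the circulant covariance matrix, i.e. the
    # covariance to each wrapped lag on the enlarged periodic grid.
    n = len(c_row)
    lam = [z.real for z in dft(c_row)]        # eigenvalues of the circulant
    if min(lam) < -1e-8 * max(lam):
        raise ValueError("embedding not positive definite: enlarge the domain")
    # shape complex white noise by sqrt of the eigenvalues; the real and
    # imaginary parts of the result are two independent draws of the field
    xi = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
    y = idft([math.sqrt(max(l, 0.0) * n) * z for l, z in zip(lam, xi)])
    return [v.real for v in y]

# exponential covariance, wrapped onto a circle of n points with spacing h
n, h, theta = 32, 0.25, 1.0
c_row = [math.exp(-min(k, n - k) * h / theta) for k in range(n)]
field = circulant_embed_draw(c_row, random.Random(1))
```

If the correlation range is large relative to the domain, some eigenvalues go negative and the check fails; enlarging the grid (as gridExpansion does for sim.mKrig.approx) is the standard fix.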
+\seealso{
+Exp.image.cov, matern.image.cov, stationary.image.cov
+}
+\examples{
+#Simulate a Gaussian random field with an exponential covariance function,
+#range parameter theta = .5, on the domain [0,5]X[0,5], evaluating the
+#field at a 100X100 grid.
+grid<- list( x= seq( 0,5,,100), y= seq(0,5,,100))
+obj<-Exp.image.cov( grid=grid, theta=.5, setup=TRUE)
+look<- sim.rf( obj)
+# Now simulate another ...
+look2<- sim.rf( obj)
+
+# Suppose one requires the same exponential covariance
+# but marginal variance = 10 ( rho in fields notation)
+look3<- sqrt( 10)* sim.rf( obj)
+
+# take a look at first two
+set.panel(2,1)
+ image.plot( grid$x, grid$y, look)
+ title("simulated Gaussian field")
+ image.plot( grid$x, grid$y, look2)
+ title("another realization ...")
+}
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/smooth.2d.Rd b/man/smooth.2d.Rd
new file mode 100644
index 0000000..eb4bbc1
--- /dev/null
+++ b/man/smooth.2d.Rd
@@ -0,0 +1,163 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{smooth.2d}
+\alias{smooth.2d}
+\title{
+ Kernel smoother for irregular 2-d data
+}
+\description{
+An approximate Nadaraya-Watson kernel smoother is obtained by first
+discretizing the locations to a grid and then using convolutions to find
+and to apply the kernel weights. The main advantage of this function
+is that the smoothing avoids explicit looping.
+\usage{
+smooth.2d(Y, ind = NULL, weight.obj = NULL, setup = FALSE, grid = NULL,
+ x = NULL, nrow = 64, ncol = 64, surface = TRUE, cov.function =
+gauss.cov, Mwidth = NULL, Nwidth = NULL, ...)
+}
+\arguments{
+\item{Y}{
+A vector of
+data to be smoothed
+}
+\item{ind}{
+Row and column indices that correspond to
+the locations of the data on regular grid. This is most useful when
+smoothing the same locations many times. (See also the x argument.)
+}
+\item{weight.obj }{
+An object that
+has the FFT of the convolution kernel and other information ( i.e. the
+result from calling this with setup=TRUE).
+}
+\item{setup}{
+If true creates a list that includes the FFT of the
+convolution kernel. In this case the function will return this
+list. Default is false.
+}
+\item{grid}{
+A list with components x and y
+being equally spaced values that define the grid. Default are integers
+1:nrow, 1:ncol. If x is given the ranges will be used to define the grid.
+}
+\item{x}{
+Actual locations
+of the Y values. Not needed if ind is specified.
+}
+\item{nrow}{
+ Number of
+points in the horizontal (x) axis of the grid. Not needed if grid is
+specified; the default is 64.
+}
+\item{ncol}{
+ Number of points in the vertical (y)
+axis of the grid. Not needed if the grid list is specified; the default is 64.
+}
+\item{surface}{
+If true (the default) a surface object is returned suitable for use by
+image, persp or
+contour functions. If false then just the nrowXncol matrix of smoothed
+values is returned.
+}
+\item{cov.function}{
+S function describing the kernel function. To be consistent with the other
+spatial functions this is in the form of a covariance function. The only
+assumption is that it be stationary. Default is the (isotropic) Gaussian.
+}
+\item{Nwidth}{
+The size of the padding regions of zeroes when computing the
+(exact) convolution of the kernel with the data. The most conservative
+values are 2*nrow and 2*ncol, the default. If the kernel has support of
+say 2L+1 grid points then the padding region need only be of size L+1.
+}
+\item{Mwidth}{
+See Nwidth.
+}
+\item{\dots}{
+Parameters that are passed to the smoothing kernel (e.g. the scale
+parameter theta for the
+exponential or Gaussian).
+}
+}
+\value{
+Either a matrix of smoothed values or a surface object.
+The surface object also has a component 'ind' that gives the subscripts of the image
+matrix where the data is present.
+}
+\details{
+The irregular locations are first discretized to a regular grid (using
+as.image) and then a 2-d FFT is used to compute a
+Nadaraya-Watson type kernel estimator. Here we take advantage of two
+features. The kernel estimator is a convolution and by padding the regular
+grid with zeroes where data is not observed one can sum the kernel over
+irregular sets of locations.
+A second convolution is used to find the normalization of the kernel
+weights.
+
+The kernel function is specified by a function that evaluates
+the kernel for two matrices of locations. Assume that the kernel has the
+form K(u-v) for two locations u and v. The function given as the
+argument to cov.function should
+have the call myfun(x1, x2) where x1 and x2 are matrices of 2-d locations.
+If nrow(x1)=m and nrow(x2)=n then this function should return an mXn
+matrix where the (i,j) element is K(x1[i,] - x2[j,]). Optional arguments
+that are included in the ... arguments are passed to this function when it
+is used. The default kernel is the Gaussian and the argument theta is the
+bandwidth. It is easy to write other kernels; just use
+Exp.cov.simple as
+a template.
+}
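The calling convention described above can be sketched in a few lines of base R. This is only an illustration of the interface; the name myKernel and its theta argument are made up for the example, and Exp.cov.simple in fields is the real template.

```r
# Hypothetical kernel following the convention above: given an m x 2
# matrix x1 and an n x 2 matrix x2 of locations, return the m x n matrix
# with (i,j) element K(x1[i,] - x2[j,]). Here K is an exponential kernel.
myKernel <- function(x1, x2, theta = 1) {
  # squared pairwise distances via |u - v|^2 = |u|^2 + |v|^2 - 2 u.v
  d2 <- outer(rowSums(x1^2), rowSums(x2^2), "+") - 2 * x1 %*% t(x2)
  exp(-sqrt(pmax(d2, 0)) / theta)
}
```

Passing such a function as the cov.function argument, with theta supplied through the ... arguments, should then behave the same way as the default Gaussian.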
+\examples{
+# Normal kernel smooth of the precip data with bandwidth of .25 ( degree)
+#
+look<- smooth.2d( RMprecip$y, x=RMprecip$x, theta=.25)
+
+# finer resolution used in computing the smooth
+look3<-smooth.2d( RMprecip$y, x=RMprecip$x, theta=.25, nrow=256,
+ncol=256,Nwidth=32,
+Mwidth=32)
+# if the width arguments were omitted the padding would create a
+# 512X 512 matrix with the data filled in the upper 256X256 part.
+# with a bandwidth of .25 degrees the normal kernel is essentially zero
+# beyond 32 grid points from its center ( about 6 standard deviations)
+#
+# take a look:
+
+#set.panel(2,1)
+#image( look3, zlim=c(-8,12))
+#points( RMprecip$x, pch=".")
+#image( look, zlim =c(-8,12))
+#points( RMprecip$x, pch=".")
+
+
+# exponential kernel with the same .25 bandwidth
+look2<- smooth.2d( RMprecip$y, x=RMprecip$x, cov.function=Exp.cov,theta=.25)
+#
+
+
+
+}
+\keyword{smooth}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/spam2lz.Rd b/man/spam2lz.Rd
new file mode 100644
index 0000000..f860769
--- /dev/null
+++ b/man/spam2lz.Rd
@@ -0,0 +1,123 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{spam2lz}
+\alias{spind2spam}
+\alias{spam2spind}
+\alias{spind2full}
+\alias{spam2full}
+
+\title{Conversion of formats for sparse matrices}
+\description{
+Some supporting functions that are internal to fields top level
+methods. These are used to convert between the efficient but
+opaque format used by spam and a more easily checked format based directly
+on the row and column indices of non zero elements.
+}
+\usage{
+spind2full(obj)
+
+spam2full(obj)
+
+spind2spam(obj, add.zero.rows=TRUE)
+
+spam2spind(obj)
+
+
+}
+%- maybe also 'usage' for other objects documented here.
+\arguments{
+\item{obj}{
+Either a list with the sparse index components (spind) or an
+obj of class spam.}
+\item{add.zero.rows}{If TRUE and an entire row is zero, add a hard zero value to the element in the first column for each such row. The spam format requires at least one element in each row to have an explicit value. It is OK if this value is zero but one must be specified. }
+}
+
+
+\details{
+The difference in formats is best illustrated by an example:
+
+A 4X5 sparse matrix:
+\preformatted{
+ [,1] [,2] [,3] [,4] [,5]
+[1,] 1 9 0 0 33
+[2,] 0 0 0 26 34
+[3,] 3 11 0 27 35
+[4,] 0 12 20 0 36
+}
+
+spind format is a list with components "ind", "ra" and "da"
+here is how the matrix above would be encoded:
+
+\preformatted{
+ind
+ I
+ [1,] 1 1
+ [2,] 1 2
+ [3,] 1 5
+ [4,] 2 4
+ [5,] 2 5
+ [6,] 3 1
+ [7,] 3 2
+ [8,] 3 4
+ [9,] 3 5
+[10,] 4 2
+[11,] 4 3
+[12,] 4 5
+
+da
+[1] 4 5
+
+ra
+ [1] 1 9 33 26 34 3 11 27 35 12 20 36
+}
+
+spam format is an S4 class with slot names
+"entries", "colindices", "rowpointers" and "dimension".
+
+entries
+
+ [1] 1 9 33 26 34 3 11 27 35 12 20 36
+
+colindices
+
+ [1] 1 2 5 4 5 1 2 4 5 2 3 5
+
+rowpointers
+
+ [1] 1 4 6 10 13
+
+dimension
+
+ [1] 4 5
+
+The row pointers give the positions in the entries array where each row
+starts, with a final element one past the last entry.
+
+NOTE: It is possible for the spind format to have a missing row of all
+zeroes but this is not allowed in the spam format and produces an error message.
+
+
+}
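The correspondence between the two encodings can be checked in base R without spam: the rowpointers slot is just a cumulative count of the nonzero entries in each row (the standard compressed sparse row layout, 1-indexed). A sketch using the row indices from the example above:

```r
# Row indices (first column of 'ind') for the 4X5 example matrix
I <- c(1, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4)
nrows <- 4
# rowpointers[k] is where row k starts in 'entries';
# the final element is one past the last entry.
rowpointers <- cumsum(c(1, tabulate(I, nbins = nrows)))
rowpointers  # 1 4 6 10 13, matching the spam slot shown above
```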
+\author{Doug Nychka}
+\seealso{as.spam}
+\keyword{spatial}
+% at least one, from doc/KEYWORDS
diff --git a/man/spatialProcess.Rd b/man/spatialProcess.Rd
new file mode 100644
index 0000000..ca67f5a
--- /dev/null
+++ b/man/spatialProcess.Rd
@@ -0,0 +1,376 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{spatialProcess}
+\alias{spatialProcess}
+\alias{plot.spatialProcess}
+\alias{print.spatialProcess}
+%- Also NEED an '\alias' for EACH other topic documented here.
+\title{
+ Estimates a spatial process model.
+}
+\description{
+ For a given covariance function this function estimates the nugget variance (sigma^2),
+ the process variance (rho), and the
+ range parameter (theta) by restricted maximum likelihood and then computes the
+ spatial model with these estimated parameters.
+ Other parameters of the covariance
+ are kept fixed and need to be specified.
+}
+\usage{
+spatialProcess(x, y, weights = rep(1, nrow(x)), Z = NULL,
+mKrig.args = list(m = 2),
+cov.function = "stationary.cov", cov.args = list(Covariance = "Matern",
+ smoothness = 1), theta = NULL, theta.start = NULL, lambda.start = 0.5,
+ theta.range = NULL,
+ abstol = 1e-04, na.rm = TRUE, verbose = FALSE, REML = FALSE, ...)
+
+\method{print}{spatialProcess}(x, digits = 4, ...)
+\method{plot}{spatialProcess}(x, digits = 4, which = 1:4, ...)
+
+}
+
+\arguments{
+\item{x}{Observation locations}
+
+\item{y}{Observation values}
+
+\item{weights}{Weights for the error term (nugget) in units of reciprocal variance.}
+
+\item{Z}{A matrix of extra covariates for the fixed part of spatial model.
+E.g. elevation for fitting climate data over space. }
+
+\item{mKrig.args}{Arguments passed to the mKrig function.}
+
+\item{cov.function}{A character string giving the name of the covariance function
+ for the spatial component.}
+
+ \item{cov.args}{ A list specifying parameters and other components of the covariance function.}
+
+ \item{theta}{If not NULL the range parameter for the covariance is fixed at this value. }
+
+ \item{theta.start}{Starting value for MLE fitting of the scale (aka range) parameter. If omitted the starting value is taken from a grid search over theta.}
+
+ \item{lambda.start}{Starting value for MLE fitting of the lambda parameter. Note lambda
+ is the ratio of the nugget variance to the process variance. In code variables this is
+ \code{sigma^2} divided by \code{rho}. }
+ \item{ theta.range }{A range for the ML search to estimate theta. Default is based on quantiles of the location pairwise distances.}
+
+ \item{na.rm}{If TRUE NAs are removed from the data. }
+
+% \item{gridN}{Number of grid points for evaluating profile likelihood over
+% theta and
+% also over lambda (using MLE for theta).}
+
+ %\item{optim.args}{Arguements to be used in optim for finding joint
+ %MLEs for theta
+ %and lambda. }
+
+ \item{REML}{ If TRUE the parameters are found by restricted maximum likelihood.}
+
+ \item{verbose}{If TRUE print out intermediate information for debugging.}
+
+ \item{\dots}{
+ Any other arguments that will be passed to the \code{mKrig} function and interpreted
+ as additional arguments to the covariance function. E.g. \code{smoothness} for the
+ Matern covariance.
+ }
+
+ \item{abstol}{The absolute tolerance bound used to judge convergence. This is applied
+ to the difference in log likelihood values. }
+
+ \item{digits}{Number of significant digits in printed summary}
+
+ \item{which}{The vector 1:4 or any subset of 1:4, giving the plots to draw.
+ See the description of these plots below.}
+
+}
+\details{
+ This function makes many choices for the user in terms of defaults and it is
+ important to be aware of these.
+ The spatial model is
+
+ Y.k= P(x.k) + Z(x.k)\%*\%d2 + g(x.k) + e.k
+
+where ".k" means subscripted by k, and Y.k is the dependent variable
+observed at location x.k. P is a low degree polynomial (the default is a
+linear function in the spatial coordinates) and Z is an optional matrix of
+covariates that enter linearly in the fixed part of the model. g is a mean zero,
+Gaussian stochastic process with a marginal variance of rho and a
+scale (or range) parameter, theta. The measurement errors, e.k, are
+assumed to be uncorrelated, normally distributed with mean zero and
+standard deviation sigma. If weights are supplied then the variance of e.k is assumed to be \code{sigma^2/ weights}.
+
+
+Perhaps the most important aspect of this function is that
+ the range (theta), nugget (sigma**2) and process variance (rho) parameters
+ for the covariance are estimated by restricted maximum
+ likelihood and this is the model that is then used for spatial
+ prediction. Geostatistics usually refers to sigma**2 + rho as the
+ "sill" and often these parameters are estimated by variogram fitting rather
+ than maximum likelihood. To be consistent with spline models and to focus
+ on the key part of the model we reparametrize as lambda= sigma**2/
+ rho and rho. Thinking of g as the spatial signal and e as the noise, lambda
+ can be interpreted as the noise to signal variance ratio in this spatial
+ context. (See the comparison with fitting the geoR model in the
+examples section.)
+
+ The likelihood and the cross validation function
+ can be concentrated to depend only on lambda and theta and so
+ in reporting the optimization of these two criteria we focus
+ on this form of the parameters. Once lambda and theta are
+ found, the MLE for rho has a closed form and
+ sigma is then determined from lambda and rho.
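A toy numerical illustration of this reparametrization (the parameter values here are invented, not from any fit):

```r
# lambda = sigma^2 / rho, so once lambda and rho are estimated the
# nugget standard deviation sigma follows directly.
lambda <- 0.2   # hypothetical noise to signal variance ratio
rho    <- 500   # hypothetical process (marginal) variance
sigma  <- sqrt(lambda * rho)
sigma   # nugget standard deviation, here 10
```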
+
+ Often the lambda
+ parameter is difficult to interpret when covariates and a
+ linear function of the coordinates are included and also when
+ the range becomes large relative to the size of the spatial
+ domain. For this reason it is convenient to report the
+ effective degrees of freedom (also referred to as trA in R code and
+ the output summaries) associated with the predicted
+ surface or curve. This measure has a one to one relationship
+ with lambda and is easier to interpret. For example an effective
+ degrees of freedom that is very small suggests that the
+ surface is well represented by a low order
+ polynomial. Degrees of freedom close to the number of
+ locations indicates a surface that is close to interpolating
+ the observations and suggests a small or zero value for the
+ nugget variance.
+
+ The default covariance model is a Matern
+ with smoothness set to 1.0. This is implemented using the
+ \code{stationary.cov} covariance, which takes an argument for
+ the form of the covariance, sill and range parameters, and
+ possibly additional parameters that control the shape.
+
+ See the example below how to switch to another model. (Note
+ that the exponential is also part of the Matern family with
+ smoothness set to .5. )
+
+ The parameter estimation is done by \code{MLESpatialProcess}
+ and the returned list from this function is added to the Krig
+ output object that is returned by this function. The estimate
+ is a version of maximum likelihood where the observations are
+ transformed to remove the fixed linear part of the model. If
+ the user just wants to fix the range parameter theta then
+ \code{Krig} can be used.
+
+ NOTE: The defaults for the \code{optim} function used in MLESpatialProcess are:
+
+ \preformatted{
+ list(method = "BFGS",
+ control=list(fnscale = -1,
+ ndeps = rep(log(1.1),length(cov.params.start)+1),
+ abstol = abstol,
+ maxit = 20))
+}
+
+
+There is always a hazard in providing a simple to use method that
+makes many default choices for the spatial model. As in any analysis
+be aware of these choices and try alternative models and parameter
+values to assess the robustness of your conclusions. Also examine the
+residuals to check the adequacy of the fit. See the examples below for
+some help in how to do this easily in fields. Also see quilt.plot to
+get a quick plot to discern spatial patterns.
+
+The \strong{plot} method provides a panel of 4 diagnostic plots of the fit.
+Use \code{set.panel(2,2)} to see all 4 at once. The third plot gives the likelihood and
+GCV functions as a function of lambda evaluated at the global MLE for theta.
+This is based on the grid evaluations in the component MLEInfo$MLEProfileLambda.
+The fourth
+plot is a profile likelihood trace for theta having maximized over lambda and is based on the component MLEInfo$MLEGrid.
+
+The \strong{print} method gives a summary of the fit.
+
+}
+\value{
+ An object of classes \code{mKrig} and \code{spatialProcess}. The main difference
+ from mKrig is an extra component, \code{MLEInfo}, that has the results of the grid
+ evaluation over theta (maximizing over lambda), the joint maximization over theta and lambda,
+ and a grid evaluation over lambda with theta fixed at its MLE.
+}
+
+\author{
+Doug Nychka
+}
+
+\seealso{
+Tps, MLESpatialProcess, mKrigMLEGrid, mKrigMLEJoint, plot.Krig, predict.mKrig, predictSE.mKrig
+}
+\examples{
+data( ozone2)
+# x is a two column matrix where each row is a location in lon/lat
+# coordinates
+ x<- ozone2$lon.lat
+# y is a vector of ozone measurements on day 16 at these locations.
+ y<- ozone2$y[16,]
+ obj<- spatialProcess( x, y)
+# summary of model
+ summary( obj)
+# diagnostic plots
+ set.panel(2,2)
+ plot(obj)
+# plot 1 data vs. predicted values
+# plot 2 residuals vs. predicted
+# plot 3 criteria to select the smoothing
+# parameter lambda = sigma^2 / rho
+# the x axis has transformed lambda
+# in terms of effective degrees of freedom
+# to make it easier to interpret
+# Note that here the GCV function is minimized
+# while the REML is maximized.
+# plot 4 the log profile likelihood used to
+# determine theta.
+#
+# predictions on a grid
+ surface( obj)
+#(see also predictSurface for more control on evaluation grid
+# and plotting)
+#
+
+\dontrun{
+# working with covariates and filling in missing station data
+# using an ensemble method
+# see the example under help(sim.spatialProcess) to see how to
+# handle a conditional simulation on a grid of predictions with
+# covariates.
+data(COmonthlyMet)
+ fit1E<- spatialProcess(CO.loc,CO.tmin.MAM.climate, Z=CO.elev,
+ theta.range= c(.25, 2.0) )
+ set.panel( 2,2)
+ plot( fit1E)
+
+# conditional simulation at missing data
+ notThere<- is.na(CO.tmin.MAM.climate )
+ xp <- CO.loc[notThere,]
+ Zp <- CO.elev[notThere]
+ infill<- sim.spatialProcess( fit1E, xp=xp,
+ Z= Zp, M= 10)
+#
+# interpretation is that these infilled values are all equally plausible
+# given the observations and also given the estimated covariance model
+#
+# for extra credit one could now standardize the infilled values to have
+# conditional mean and variance from the exact computations
+# e.g. predict( fit1E, xp=CO.loc[!good,], Z= CO.elev[!good])
+# and predictSE(fit1E, xp=CO.loc[!good,], Z= CO.elev[!good])
+# with these standardization one would still preserve the correlations
+# among the infilled values that is also important for considering them as a
+# multivariate prediction.
+# conditional simulation on a grid but not using the covariate of elevation
+ fit2<- spatialProcess(CO.loc,CO.tmin.MAM.climate,
+ theta.range= c(.25, 2.0) )
+# note larger range parameter
+# create 2500 grids using handy function
+gridList <- fields.x.to.grid( fit2$x, nx=50,ny=50)
+xGrid<- make.surface.grid( gridList)
+ensemble<- sim.spatialProcess( fit2, xp=xGrid, M= 5)
+# this is an "n^3" computation so increasing the grid size
+# can slow things down for computation
+image.plot( as.surface( xGrid, ensemble[1,]))
+set.panel()
+}
+
+\dontrun{
+data( ozone2)
+# x is a two column matrix where each row is a location in lon/lat
+# coordinates
+ x<- ozone2$lon.lat
+# y is a vector of ozone measurements on day 16 at these locations.
+ y<- ozone2$y[16,]
+# a comparison to using an exponential and Wendland covariance function
+# and great circle distance -- just to make range easier to interpret.
+ obj <- spatialProcess( x, y,
+ Distance = "rdist.earth")
+ obj2<- spatialProcess( x, y,
+ cov.args = list(Covariance = "Exponential"),
+ Distance = "rdist.earth" )
+ obj3<- spatialProcess( x, y,
+ cov.args = list(Covariance = "Wendland",
+ dimension = 2,
+ k = 2),
+ Distance = "rdist.earth")
+# obj2 could be also be fit using the argument:
+# cov.args = list(Covariance = "Matern", smoothness=.5)
+#
+# Note very different range parameters - BTW these are in miles
+# but similar nugget variances.
+obj$pars
+obj2$pars
+obj3$pars
+# since the exponential is Matern with smoothness == .5 the first two
+# fits can be compared in terms of their likelihoods
+# the REML value is slightly higher for obj versus obj2 (598.4 > 596.7)
+# these are the _negative_ log likelihoods so this suggests a preference for the
+# exponential model
+#
+# does it really matter in terms of spatial prediction?
+set.panel( 3,1)
+surface( obj)
+US( add=TRUE)
+title("Matern sm= 1.0")
+surface( obj2)
+US( add=TRUE)
+title("Matern sm= .5")
+surface( obj3)
+US( add=TRUE)
+title("Wendland k =2")
+# prediction standard errors
+# these take a while because prediction errors are based
+# directly on the Kriging weight matrix
+# see mKrig for an alternative.
+set.panel( 2,1)
+out.p<- predictSurfaceSE( obj, nx=40,ny=40)
+surface( out.p)
+US( add=TRUE)
+title("Matern sm= 1.0")
+points( x, col="magenta")
+#
+out.p<- predictSurfaceSE( obj2, nx=40,ny=40)
+surface( out.p)
+US( add=TRUE)
+points( x, col="magenta")
+title("Matern sm= .5")
+}
+set.panel(1,1)
+
+\dontrun{
+### comparison with GeoR
+ data(ozone2)
+ x<- ozone2$lon.lat
+ y<- ozone2$y[16,]
+ good<-!is.na(y)
+ x1<- x[good,]
+ y1<- y[good]
+
+ obj<- spatialProcess( x, y, mKrig.args= list(m=1), smoothness = .5 )
+
+ library( geoR)
+ ml.n <- likfit(coords= x1, data=y1, ini = c(570, 3), nug = 50)
+ # compare to
+ stuffFields<- obj$MLEInfo$MLEJoint$summary[c(1,3,4,5)]
+ stuffGeoR<- c( ml.n$loglik, ml.n$phi, sqrt(ml.n$nugget),ml.n$sigmasq)
+ test.for.zero( stuffFields, stuffGeoR, tol=.005)
+}
+}
+\keyword{ spatial}% __ONLY ONE__ keyword per line
diff --git a/man/splint.Rd b/man/splint.Rd
new file mode 100644
index 0000000..e655ded
--- /dev/null
+++ b/man/splint.Rd
@@ -0,0 +1,112 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{splint}
+\alias{splint}
+\title{
+ Cubic spline interpolation
+}
+\description{
+A fast, FORTRAN based function for cubic spline interpolation.
+}
+\usage{
+splint(x, y, xgrid, wt=NULL, derivative=0,lam=0, df=NA, lambda=NULL, nx=NULL)
+}
+\arguments{
+\item{x}{
+The x values that define the curve or a two column matrix of
+x and y values.
+}
+\item{y}{
+The y values that are paired with the x's.
+}
+\item{xgrid}{
+The grid to evaluate the fitted cubic interpolating curve.
+}
+\item{derivative}{
+Indicates whether the function or a first or second derivative
+should be evaluated.
+}
+\item{wt}{Weights for different observations in the scale of reciprocal
+variance.}
+\item{lam}{ Value for smoothing parameter. Default value is zero giving
+interpolation.}
+\item{lambda}{Same as \code{lam} just to make this easier to remember.}
+\item{df}{ Effective degrees of freedom. Default is to use lambda =0 or a
+df equal to the number of observations.}
+\item{nx}{If not NULL this should be the number of points
+to evaluate on an equally spaced grid in the
+range of \code{x}}
+
+}
+\value{
+A vector consisting of the spline evaluated at the grid values in \code{xgrid}.
+}
+\details{
+Fits a piecewise interpolating or smoothing cubic
+polynomial to the x and y values.
+This code is designed to be fast but does not have the many options found in
+\code{sreg} or other more statistical implementations.
+To make the solution well posed
+the second and third derivatives are set to zero at the limits of the x
+values. Extrapolation outside the range of the x
+values will be a linear function.
+
+It is assumed that there are no repeated x values; use sreg followed by
+predict if you do have replicated data.
+
+}
+\section{References}{
+See Generalized Additive Models by Hastie and Tibshirani.
+}
+\seealso{
+sreg, Tps
+}
+\examples{
+x<- seq( 0, 120,,200)
+
+# an interpolation
+splint(rat.diet$t, rat.diet$trt,x )-> y
+
+plot( rat.diet$t, rat.diet$trt)
+lines( x,y)
+#( this is weird and not appropriate!)
+
+# the following two smooths should be the same
+
+splint( rat.diet$t, rat.diet$con,x, df= 7)-> y1
+
+# sreg function has more flexibility than splint but will
+# be slower for larger data sets.
+
+sreg( rat.diet$t, rat.diet$con, df= 7)-> obj
+predict(obj, x)-> y2
+
+# in fact predict.sreg interpolates the predicted values using splint!
+
+# the two predicted lines (should) coincide
+lines( x,y1, col="red",lwd=2)
+lines(x,y2, col="blue", lty=2,lwd=2)
+
+}
+\keyword{smooth}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/sreg.Rd b/man/sreg.Rd
new file mode 100644
index 0000000..ccd4f01
--- /dev/null
+++ b/man/sreg.Rd
@@ -0,0 +1,339 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{sreg}
+\alias{sreg}
+\alias{predict.sreg}
+\title{
+ Cubic smoothing spline regression
+}
+\description{
+Fits a cubic smoothing spline to univariate data. The amount of
+smoothness can be specified or estimated from the data by GCV.
+}
+\usage{
+sreg(x, y, lambda = NA, df = NA, offset = 0,
+ weights = rep(1, length(x)), cost = 1,
+ nstep.cv = 80, tol=1e-5,find.diagA = TRUE, trmin = 2.01,
+ trmax = NA, lammin = NA,
+ lammax = NA, verbose = FALSE,
+ do.cv = TRUE, method = "GCV", rmse = NA,
+ na.rm = TRUE)
+
+\method{predict}{sreg}(object, x, derivative = 0, model = 1,...)
+}
+
+\arguments{
+\item{x}{
+Vector of x values}
+
+\item{y}{
+Vector of y values}
+
+\item{lambda}{
+Single smoothing parameter or a vector of values. If omitted the
+smoothing parameter is estimated by GCV. NOTE: lambda here is equivalent to
+the value lambda*N in Tps/Krig where N is the number of unique observations.
+See example below.}
+
+\item{object}{An sreg object.}
+\item{derivative}{Order of derivative to evaluate. Must be 0, 1, or 2.}
+\item{df}{
+Amount of smoothing in terms of effective degrees of freedom for the
+spline}
+
+\item{offset}{
+an offset added to the term cost*degrees of freedom in the denominator of
+the GCV function. (This would be used for adjusting the df from fitting
+other models such as in back-fitting additive models.)}
+
+\item{model}{Specifies which model parameters to use.}
+
+\item{weights}{
+A vector that is proportional to the reciprocal variances of the
+errors.}
+
+\item{cost}{
+Cost value to be used in the GCV criterion.}
+
+\item{nstep.cv }{
+Number of grid points of smoothing parameter for GCV grid search.}
+
+\item{tol}{Tolerance for convergence in minimizing the GCV or other
+criteria to estimate the smoothing parameter.}
+\item{find.diagA}{
+If TRUE calculates the diagonal elements of the smoothing matrix. The
+effective
+number of degrees of freedom is the sum of these diagonal elements.
+Default is TRUE. This requires more storage if a grid of smoothing
+parameters is passed. (See returned values below.)}
+
+\item{trmin}{
+Sets the minimum of the smoothing parameter range for the GCV grid
+search in terms of effective degrees of freedom.}
+
+\item{trmax}{
+Sets the maximum of the smoothing parameter range for the GCV grid
+search in terms of effective degrees of freedom. If NA the range is set
+to .99 of number of unique locations.}
+
+\item{lammin}{
+Same function as trmin but in the lambda scale.}
+
+\item{lammax}{
+Same function as trmax but in the lambda scale.}
+
+\item{verbose}{
+Print out all sorts of debugging info. Default is FALSE, of course!}
+
+\item{do.cv}{
+Evaluate the spline at the GCV minimum. Default is true.}
+
+\item{method}{
+A character string giving the
+method for determining the smoothing
+parameter. Choices are
+"GCV", "GCV.one", "GCV.model", "pure error", "RMSE". Default is "GCV". }
+
+\item{rmse}{
+Value of the root mean square error to match by varying lambda.}
+
+\item{na.rm}{If TRUE NA's are removed from y before analysis.}
+
+\item{\dots}{Other optional arguments to pass to the predict function.}
+
+}
+\value{
+Returns a list of class sreg.
+Some of the returned components are
+
+\item{call}{
+Call to the function
+}
+\item{yM}{
+Vector of dependent variables. If replicated data is given these are the
+replicate group means. }
+
+\item{xM}{
+Unique x values matching the y's. }
+\item{weights}{
+Proportional to reciprocal variance of each data point.
+}
+\item{weightsM}{
+Proportional to reciprocal pooled variance of each
+replicated mean data value (xM).}
+
+\item{x}{
+Original x data. }
+
+\item{y}{
+Original y data. }
+
+\item{method}{
+Method used to find the smoothing parameter.}
+
+\item{pure.ss}{
+Pure error sum of squares from replicate groups. }
+
+\item{shat.pure.error}{
+Estimate of sigma from replicate groups.}
+
+\item{shat.GCV}{
+Estimate of sigma using estimated lambda from GCV minimization }
+
+\item{trace}{
+Effective degrees of freedom for the spline estimate(s)}
+
+\item{gcv.grid}{
+Values of trace, GCV, shat, etc. for a grid of smoothing parameters.
+If lambda ( or df) is specified those values are used. }
+
+\item{lambda.est}{
+Summary of various estimates of the smoothing parameter}
+
+\item{lambda}{
+If lambda is specified the passed vector, if missing the estimated value.}
+
+\item{residuals}{
+Residuals from spline(s). If lambda or df is specified the residuals from
+these values. If lambda and df are omitted then the spline having
+estimated lambda. This will be a matrix with as many columns as the values
+of lambda. }
+
+\item{fitted.values}{
+Matrix of fitted values. See notes on residuals. }
+
+\item{predicted}{
+A list with components x and y. x is the unique values of xraw in sorted
+order. y is a matrix of the spline estimates at these values. }
+
+\item{eff.df}{
+Same as trace.}
+
+\item{diagA}{
+Matrix containing diagonal elements of the smoothing matrix. Number of
+columns is the number of lambda values.
+WARNING: If there is replicated data the
+diagonal elements are those for the smoothing the group means at the
+unique x locations. }
+}
+
+\details{
+MODEL: The assumed model is Y.k=f(x.k) +e.k where e.k should be
+approximately
+normal and independent errors with variances sigma**2/w.k
+
+ESTIMATE: A smoothing spline is a locally weighted average of the y's
+based
+on the relative locations of the x values. Formally the estimate is
+the curve that minimizes the criterion:
+
+
+(1/n) sum(k=1,n) w.k( Y.k - f( X.k))**2 + lambda R(f)
+
+where R(f) is the integral of the squared second derivative of f over
+the range of the X values. Because of the inclusion of the (1/n) in the
+sum of squares the lambda parameter in sreg corresponds to a value of
+lambda*n in the Tps function and in the Krig function.
+
+ The solution to this minimization is a piecewise cubic polynomial with
+the join points at the unique set of X values. The polynomial segments
+are constructed so that the entire curve has continuous first and second
+derivatives and the second and third derivatives are zero at the
+boundaries. The smoothing parameter lambda has the range [0, infinity]. Lambda equal to
+zero gives a cubic spline interpolation of the data. As lambda diverges
+to infinity (e.g. lambda = 1e20) the estimate will converge to the
+straight line estimated by least squares.
+
+ The values of the estimated function at the data points can be
+expressed in the matrix form:
+
+ predicted values= A(lambda)Y
+
+where A is an n X n symmetric matrix that does NOT depend on Y.
+The diagonal elements are the leverage values for the estimate and
+their sum, trace(A(lambda)), can be interpreted as the effective
+number of parameters used to define the spline function.
+IF there are replicate points the A matrix is the result of finding group
+averages and applying a weighted spline to the means.
+The A matrix is also used to find "Bayesian" confidence intervals for the
+estimate, see the example below.
+
+CROSS-VALIDATION: The GCV criterion with no replicate points for a fixed
+value of lambda is
+
+ (1/n)(Residual sum of squares)/(1 - ((tr(A)-offset)*cost + offset)/n)**2,
+
+Usually offset = 0 and cost = 1. Variations on GCV with replicate points are
+described in the documentation help file for Krig. With an appropriate
+choice for the smoothing parameter, the estimate of sigma**2 is
+(Residual sum of squares)/(n - tr(A)).
+
+COMPUTATIONS: The computations for 1-d splines exploit the banded
+structure of the matrices needed to solve for the spline coefficients.
+Banded structure also makes it possible to get the diagonal elements of A
+quickly. This approach is different from the algorithms in Tps and is
+tremendously more efficient for larger numbers of unique x values (say >
+200). The advantage of Tps is the availability of "Bayesian" standard
+errors at predictions different from the observed x values. This function
+is similar to the S-Plus smooth.spline; the main advantages are more
+information and control over the choice of lambda, and the FORTRAN source
+code is available (css.f).
+
+See also the function \code{splint} which is designed to be a bare bones
+but fast smoothing spline.
+
+}
+\seealso{
+Krig, Tps, splint
+}
+\examples{
+# fit a GCV spline to
+# control group of rats.
+fit<- sreg(rat.diet$t,rat.diet$con)
+summary( fit)
+
+set.panel(2,2)
+plot(fit) # four diagnostic plots of fit
+set.panel()
+
+predict( fit) # predicted values at data points
+
+xg<- seq(0,110,,50)
+sm<-predict( fit, xg) # spline fit at 50 equally spaced points
+der.sm<- predict( fit, xg, deriv=1) # derivative of spline fit
+set.panel( 2,1)
+plot( fit$x, fit$y) # the data
+lines( xg, sm) # the spline
+plot( xg,der.sm, type="l") # plot of estimated derivative
+set.panel() # reset panel to 1 plot
+
+
+# the same fit using the thin plate spline numerical algorithms
+# sreg does not scale the obs so instruct Tps not to scale either
+# this will make lambda comparable within a factor of n.
+
+ fit.tps<-Tps( rat.diet$t,rat.diet$con, scale="unscaled")
+ summary( fit.tps)
+
+# compare sreg and Tps results to show the adjustment to lambda.
+
+ predict( fit)-> look
+ predict( fit.tps, lambda=fit$lambda*fit$N)-> look2
+ test.for.zero( look, look2) # silence means it checks to 1e-8
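+
+# a sketch (not part of the original examples): with offset=0 and cost=1
+# the GCV criterion at the estimated lambda can be evaluated by hand
+# from the components of the fit
+RSS<- sum( fit$residuals^2)
+GCV.byhand<- (RSS/fit$N) / ( 1 - fit$trace/fit$N)^2
+GCV.byhand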
+
+# finding approximate standard errors at observations
+
+SE<- fit$shat.GCV*sqrt(fit$diagA)
+
+# compare to predictSE( fit.tps); differences are due to
+# slightly different lambda values and using shat.MLE instead of shat.GCV
+#
+
+# 95% pointwise prediction intervals
+Zvalue<- qnorm(.975)
+upper<- fit$fitted.values + Zvalue* SE
+lower<- fit$fitted.values - Zvalue* SE
+#
+# conservative, simultaneous Bonferroni bounds
+#
+ZBvalue<- qnorm(1- .025/fit$N)
+upperB<- fit$fitted.values + ZBvalue* SE
+lowerB<- fit$fitted.values - ZBvalue* SE
+#
+# take a look
+
+plot( fit$x, fit$y)
+lines( fit$predicted, lwd=2)
+matlines( fit$x,
+cbind( lower, upper, lowerB, upperB), type="l", col=c( 2,2,4,4), lty=1)
+title( "95 pct pointwise and simultaneous intervals")
+# or try the more visually honest:
+plot( fit$x, fit$y)
+lines( fit$predicted, lwd=2)
+segments( fit$x, lowerB, fit$x, upperB, col=4)
+segments( fit$x, lower, fit$x, upper, col=2, lwd=2)
+title( "95 pct pointwise and simultaneous intervals")
+
+set.panel( 1,1)
+}
+\keyword{smooth}
diff --git a/man/stats.Rd b/man/stats.Rd
new file mode 100644
index 0000000..82bc666
--- /dev/null
+++ b/man/stats.Rd
@@ -0,0 +1,75 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{stats}
+\alias{stats}
+\title{
+ Calculate summary statistics
+}
+\description{
+Various summary statistics are calculated for different
+types of data.
+}
+\usage{
+stats(x, by)
+}
+\arguments{
+\item{x}{
+The data structure for which to compute the statistics. This can either be a
+vector, matrix (data sets are the columns), or a list (data sets are
+the components).
+}
+\item{by}{
+If x is a vector, an optional vector (either character or numerical)
+specifying the categories to divide x into separate data sets.
+}
+}
+\value{
+A matrix where rows index the summary statistics and the columns
+index the separate data sets.
+}
+\details{
+Stats breaks x up into separate data sets and then calls describe
+to calculate the statistics.
+Statistics are found by columns for matrices, by components for a list, and
+by the relevant groups when a numeric vector and a by vector are given.
+The default set of statistics are the number of
+(nonmissing) observations, mean, standard deviation, minimum, lower quartile,
+median, upper quartile, maximum, and number of missing observations. If any
+data set is nonnumeric, missing values are returned for the statistics.
+The by argument is a useful way to calculate statistics on parts of a
+data set according to different cases.
+}
+\seealso{
+stats.bin, stats.bplot, describe
+}
+\examples{
+#Statistics for 8 normal random samples:
+zork<- matrix( rnorm(200), ncol=8)
+stats(zork)
+
+zork<- rnorm( 200)
+id<- sample( 1:8, 200, replace=TRUE)
+stats( zork, by=id)
+}
+\keyword{univar}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/stats.bin.Rd b/man/stats.bin.Rd
new file mode 100644
index 0000000..e342689
--- /dev/null
+++ b/man/stats.bin.Rd
@@ -0,0 +1,86 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{stats.bin}
+\alias{stats.bin}
+\title{
+ Bins data and finds some summary statistics.
+}
+\description{
+Cuts up a numeric vector based on binning by a covariate and applies the
+fields stats function to each group.
+}
+\usage{
+stats.bin(x, y, N = 10, breaks = NULL)
+}
+\arguments{
+\item{x}{
+Values to use to decide bin membership
+}
+\item{y}{
+A vector of data
+}
+\item{N}{
+Number of bins. If breaks is missing there are N bins equally spaced
+on the range of x.
+}
+\item{breaks}{
+The bin boundaries. If there are N+1 of these there will be N bins.
+The bin widths can be unequal.
+}
+}
+\value{
+A list with several components. The component stats is a matrix with
+columns indexing the bins and rows giving the summary statistics found by
+the stats function: number of obs, mean, sd, min, quartiles, max, and
+number of NA's. (If there is no data for a given bin, NA's are filled in.)
+The component breaks gives the breaks passed to the function and centers
+gives the bin centers.
+\seealso{
+bplot, stats
+}
+\examples{
+u<- rnorm( 2000)
+v<- rnorm( 2000)
+x<- u
+y<- .7*u + sqrt(1-.7**2)*v
+
+look<- stats.bin( x,y)
+look$stats["Std.Dev.",]
+
+data( ozone2)
+# compute a variogram for day 16 of the Midwest daily ozone data ...
+look<- vgram( ozone2$lon.lat, c(ozone2$y[16,]), lon.lat=TRUE)
+
+# break points
+brk<- seq( 0, 250,,40)
+
+out<-stats.bin( look$d, look$vgram, breaks=brk)
+# plot bin means, and some quantiles Q1, median, Q3
+matplot( out$centers, t(out$stats[ c("mean", "median","Q1", "Q3"),]),
+type="l",lty=c(1,2,2,2), col=c(3,4,3,4), ylab="ozone PPB")
+}
+\keyword{univar}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/summary.Krig.Rd b/man/summary.Krig.Rd
new file mode 100644
index 0000000..623eada
--- /dev/null
+++ b/man/summary.Krig.Rd
@@ -0,0 +1,68 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{summary.Krig}
+\alias{summary.Krig}
+\alias{summary.spatialProcess}
+\title{
+ Summary for Krig or spatialProcess estimated models.
+}
+\description{
+Creates a list of summary results including estimates for the nugget
+variance (sigma) and the smoothing
+parameter (lambda). This list is usually printed using a "print.summary"
+function for nice formatting.
+}
+\usage{
+\method{summary}{Krig}(object, digits=4,...)
+\method{summary}{spatialProcess}(object, digits=4,...)
+
+}
+\arguments{
+\item{object}{
+A Krig or spatialProcess object.
+}
+\item{digits}{
+Number of significant digits in summary.
+}
+\item{\dots}{Other arguments to summary}
+}
+\value{
+Gives a summary of the Krig object. The components
+include the function call, number of observations, effective degrees
+of freedom, residual degrees of freedom, root mean squared error,
+R-squared and adjusted R-squared, log10(lambda), cost, GCV minimum and
+a summary of the residuals.
+}
+\details{
+This function is a method for the generic function summary for class
+Krig. The results are formatted and printed using print.summary.Krig.
+}
+\seealso{
+Krig, summary, print.summary.Krig
+}
+\examples{
+fit<- Krig(ChicagoO3$x, ChicagoO3$y, theta=100)
+summary(fit) # summary of fit
+}
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/summary.ncdf.Rd b/man/summary.ncdf.Rd
new file mode 100644
index 0000000..c593d7b
--- /dev/null
+++ b/man/summary.ncdf.Rd
@@ -0,0 +1,54 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{summary.ncdf}
+\alias{summary.ncdf}
+\title{Summarizes a netCDF file handle}
+\description{
+Provides a summary of the variable names and sizes from the handle
+returned from a netCDF file.
+}
+\usage{
+\method{summary}{ncdf}(object,...)
+}
+\arguments{
+ \item{object}{
+The "handle" returned by the \code{read.ncdf} function from the
+ncdf package.
+ }
+\item{\dots}{ Other arguments to pass to this function. Currently, no
+other arguments are used.
+}
+}
+\details{
+This function is out of place in fields but is included because large
+geophysical data sets are often in netCDF format and the ncdf R package is
+then also needed. To date the summary capability in the ncdf package is
+limited and this function is used to supplement its use. The function is
+also a useful device to see how the ncdf object is structured.
+}
+
+\author{ D. Nychka }
+
+\seealso{ ncdf}
+\keyword{ IO }% at least one, from doc/KEYWORDS
diff --git a/man/supportsArg.Rd b/man/supportsArg.Rd
new file mode 100644
index 0000000..dc7f63d
--- /dev/null
+++ b/man/supportsArg.Rd
@@ -0,0 +1,85 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+\name{supportsArg}
+\alias{supportsArg}
+\title{
+Tests if function supports a given argument
+}
+\description{
+Tests if the given function supports the given argument. Commonly
+ used in fields code for determining if a covariance function
+ supports precomputation of the distance matrix and evaluation of
+ the covariance matrix over only the upper triangle.
+}
+\usage{
+supportsArg(fun=stationary.cov, arg)
+}
+\arguments{
+ \item{fun}{
+The function to be tested for whether it supports the argument
+ \code{arg} as input
+}
+ \item{arg}{
+The argument to check whether \code{fun} supports as input
+}
+}
+\details{
+Currently only \code{stationary.cov} and \code{Exp.cov} support
+ evaluation of the covariance matrix over the upper triangle
+ (and diagonal) only via the onlyUpper argument and distance
+ matrix precomputation via the distMat argument.
+}
+
+\value{
+A logical indicating whether the given function supports use of the
+ given argument
+}
+\author{
+John Paige
+}
+
+
+\seealso{
+\code{\link{stationary.cov}}, \code{\link{Exp.cov}}
+These covariance functions have the \code{onlyUpper} option allowing
+ the user to evaluate the covariance matrix over the upper triangle and
+ diagonal only and to pass a precomputed distance matrix
+}
+\examples{
+################
+#Test covariance function to see if it supports evaluation of
+#covariance matrix over upper triangle only
+################
+
+supportsArg(Rad.cov, "distMat")
+supportsArg(Rad.cov, "onlyUpper")
+supportsArg(stationary.cov, "distMat")
+supportsArg(stationary.cov, "onlyUpper")
+supportsArg(Exp.cov, "distMat")
+supportsArg(Exp.cov, "onlyUpper")
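+
+# a sketch of conditional use: only pass a precomputed distance
+# matrix when the covariance function accepts one
+cov.fun<- stationary.cov
+if( supportsArg( cov.fun, "distMat")){
+ # a distance matrix could be precomputed here and passed via distMat
+ cat("distMat is supported\n")
+}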
+}
+
diff --git a/man/surface.Krig.Rd b/man/surface.Krig.Rd
new file mode 100644
index 0000000..db288b9
--- /dev/null
+++ b/man/surface.Krig.Rd
@@ -0,0 +1,123 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{surface.Krig}
+\alias{surface.Krig}
+\alias{surface.mKrig}
+
+\title{
+ Plots a surface and contours
+}
+\description{
+Creates different plots of the fitted surface of a Krig object. This is a
+quick way to look at the fitted function over reasonable default
+ranges.
+}
+\usage{
+\method{surface}{Krig}(
+object, grid.list = NULL, extrap = FALSE,
+ graphics.reset = NULL, xlab = NULL, ylab = NULL, main
+ = NULL, zlab = NULL, zlim = NULL, levels = NULL, type
+ = "C", nx = 80, ny = 80, ...)
+
+\method{surface}{mKrig}(
+object, grid.list = NULL, extrap = FALSE,
+ graphics.reset = NULL, xlab = NULL, ylab = NULL, main
+ = NULL, zlab = NULL, zlim = NULL, levels = NULL, type
+ = "C", nx = 80, ny = 80, ...)
+}
+\arguments{
+\item{object}{
+A Krig object or an mKrig object.
+}
+\item{grid.list}{
+A list with as many components as variables describing the surface.
+All components should have a single value except the two that give the
+grid points for evaluation. If the matrix or data frame has column names,
+these must appear in the grid list. If grid.list is missing and the
+surface has just two dimensions, the grid is based on the ranges of the
+observed data.
+
+}
+\item{extrap}{
+If TRUE the surface is extrapolated beyond the range of the data. If
+FALSE (the default) only the convex hull of the observations is plotted.
+}
+\item{graphics.reset}{
+Reset to original graphics parameters after function plotting.
+}
+\item{type}{
+Type of plot as a character. "p" perspective plot (persp). "c" contour
+plot (contour). "b" a two panel figure with perspective and contour
+plots. "I" image plot with legend strip (image.plot). "C" image plot
+with contours overlaid. Image with contour is the default.
+
+}
+\item{main}{
+Title of plot}
+\item{xlab}{
+x axis label}
+\item{ylab}{
+y axis label}
+\item{zlab}{
+z axis label if "p" or "b" type is used.}
+\item{zlim}{
+Z limits passed to persp}
+\item{levels}{
+Contour levels passed to contour. }
+\item{nx}{
+ Number of grid points to evaluate surface on the horizontal
+axis (the x-axis). }
+
+\item{ny}{
+ Number of grid points to evaluate surface on the vertical
+axis (the y-axis). }
+
+\item{\dots}{
+Any other plotting options.
+}
+}
+\details{
+This function is essentially a combination of predictSurface and
+plot.surface. It may not always give a great rendition but is easy to use
+for checking the fitted surface. The default of extrap=FALSE is designed to
+discourage looking at the estimated surface outside the range of
+the observations.
+
+NOTE: any Z covariates will be dropped and only the spatial part of
+the model will be evaluated.
+
+}
+\seealso{
+\code{\link{Krig}}
+predictSurface, plot.surface, image.plot
+}
+\examples{
+fit<- Krig(ChicagoO3$x,ChicagoO3$y, theta=30) # krig fit
+
+#Image plot of surface with nice, smooth contours and shading
+
+surface(fit, type="C", nx=128, ny=128)
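+
+# the "b" type gives a two panel figure: perspective and contour plots
+surface( fit, type="b")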
+
+}
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/tim.colors.Rd b/man/tim.colors.Rd
new file mode 100644
index 0000000..80b17d7
--- /dev/null
+++ b/man/tim.colors.Rd
@@ -0,0 +1,182 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{tim.colors}
+\alias{tim.colors}
+\alias{larry.colors}
+\alias{two.colors}
+\alias{designer.colors}
+\alias{color.scale}
+\alias{fieldsPlotColors}
+\title{ Some useful color tables for images and tools to handle them. }
+\description{
+Several color scales useful for image plots: a pleasing rainbow style
+color table patterned after that used in Matlab by Tim Hoar and also
+some simple color interpolation schemes between two or more
+colors. There is also a function that converts between colors and a
+real valued vector.
+}
+\usage{
+tim.colors(n = 64, alpha=1.0)
+
+larry.colors()
+
+two.colors(n=256, start="darkgreen", end="red", middle="white",
+alpha=1.0)
+
+designer.colors( n=256, col= c("darkgreen", "white", "darkred"), x=
+ seq(0,1,, length(col)) ,alpha=1.0)
+
+color.scale( z, col=tim.colors(256), zlim =NULL,
+transparent.color="white", eps= 1e-8)
+
+fieldsPlotColors( col,...)
+
+}
+\arguments{
+\item{alpha}{The transparency of the color -- 1.0 is opaque and 0 is
+ transparent. This is useful for overlays of color and
+ still being able to view the graphics that is covered. }
+
+\item{n}{ Number of color levels. The setting \code{n}=64 is the
+original definition.}
+
+\item{start}{Starting color for lowest values in color scale}
+
+\item{end}{ Ending color.}
+
+\item{middle}{Color scale passes through this color at halfway}
+
+\item{col}{A list of colors (names or hex values) to interpolate}
+
+\item{x}{Positions of colors on a [0,1] scale. Default is to assume
+that the x values are equally spaced from 0 to 1.}
+
+\item{z}{Real vector to encode in a color table.}
+
+\item{zlim}{Range to use for color scale. Default is the
+\code{range(z)} inflated by 1- eps and 1+eps.}
+
+\item{transparent.color}{Color value to use for NA's or values outside
+\code{zlim}}
+
+\item{eps}{A small inflation of the range to avoid boundary values of
+\code{z} being coded as NAs}
+\item{\dots}{Additional plotting arguments to \code{image.plot}}
+
+}
+
+\details{
+A color in R can be represented as three vectors in RGB coordinates
+and these coordinates are interpolated separately using a cubic spline
+to give color values intermediate to the specified colors.
+
+ Ask Tim Hoar about \code{tim.colors}! He is a matlab black belt and
+this is his favorite scale in that system. \code{two.colors} is
+really about three different colors. For other colors try
+\code{fields.color.picker} to view possible choices.
+\code{start="darkgreen", end="azure4"} are the options used to get a
+nice color scale for rendering aerial photos of ski trails. (See
+\url{http://www.image.ucar.edu/Data/MJProject}.) \code{larry.colors}
+is a 13 color palette used by Larry McDaniel and is particularly
+useful for visualizing fields of climate variables.
+
+ \code{designer.colors} is the master function for two.colors and
+tim.colors. It can be useful if one wants to customize the color
+table to match quantiles of a distribution. e.g. if the median of the
+data is at .3 with respect to the range then set \code{x} equal to
+c(0,.3,1) and specify three colors to provide a transition that matches
+the median value. In fields language this function interpolates
+between a set of colors at locations x. While you can be creative
+about these colors, just using another color scale as the basis is
+easy. For example
+
+\code{designer.colors( 256, rainbow(4), x= c( 0,.2,.8,1.0))}
+
+leaves the choice of the colors to Dr. R after a thunderstorm.
+
+\code{color.scale} assigns colors to a numerical vector in the same way as
+the \code{image} function. This is useful to keep the assignment of colors
+consistent across several vectors by specifying a common \code{zlim} range.
+
+\code{fieldsPlotColors} is a simple function to plot a vector of colors to
+examine their values.
+}
+
+\value{
+
+A vector giving the colors in a hexadecimal format, two extra hex
+digits are added for the alpha channel.
+
+}
+\seealso{ topo.colors, terrain.colors, image.plot, quilt.plot, grey.scale,
+fields.color.picker }
+\examples{
+
+tim.colors(10)
+# returns an array of 10 character strings encoding colors in hex format
+
+# e.g. (red, green, blue) values of (16,255, 239)
+# translates to "#10FFEF"
+# rgb( 16/255, 255/255, 239/255, alpha=.5)
+# gives "#10FFEF80" note extra "alpha channel"
+
+# view some color table choices
+set.panel( 2,3)
+z<- outer( 1:20,1:20, "+")
+obj<- list( x=1:20,y=1:20,z=z )
+
+image( obj, col=tim.colors( 200)) # 200 levels
+
+image( obj, col=two.colors() )
+
+# using transparency; without alpha the image plot would cover the points
+plot( 1:20,1:20)
+image(obj, col=two.colors(alpha=.5), add=TRUE)
+
+coltab<- designer.colors(col=c("blue", "grey", "green"),
+ x= c( 0,.3,1) )
+image( obj, col= coltab )
+
+# peg colors at some desired quantiles of data.
+# NOTE need 0 and 1 for the color scale to make sense
+x<- quantile( c(z), c(0,.25,.5,.75,1.0) )
+# scale these to [0,1]
+zr<- range( c(z))
+x<- (x-zr[1])/ (zr[2] - zr[1])
+
+coltab<- designer.colors(256,rainbow(5), x)
+image( z, col= coltab )
+# see image.plot for adding all kinds of legends
+
+# some random color values
+set.seed(123)
+z<- rnorm(100)
+hex.codes<- color.scale(z, col=two.colors())
+N<-length( hex.codes)
+# take a look at the coded values
+# or equivalently create some Xmas wrapping paper!
+image( 1:N, N, matrix(1:N, N,1) , col=hex.codes, axes=FALSE,
+ xlab="", ylab="")
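+
+# a sketch: a common zlim keeps the color assignment consistent
+# across two different vectors
+z2<- rnorm(100, mean=1)
+zr<- range( c(z,z2))
+hex1<- color.scale( z, zlim=zr)
+hex2<- color.scale( z2, zlim=zr)
+# equal values in z and z2 now map to equal colors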
+
+set.panel()
+
+}
+\keyword{ aplot}
diff --git a/man/transformx.Rd b/man/transformx.Rd
new file mode 100644
index 0000000..d4104a5
--- /dev/null
+++ b/man/transformx.Rd
@@ -0,0 +1,72 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{transformx}
+\alias{transformx}
+\title{
+ Linear transformation
+}
+\description{
+Linear transformation of each column of a matrix. There are several
+choices of the type of centering and scaling.
+}
+\usage{
+transformx (x, scale.type = "unit.sd", x.center, x.scale)
+}
+\arguments{
+\item{x}{
+Matrix with columns to be transformed.
+}
+\item{scale.type}{
+Type of transformation. The default is "unit.sd": subtract the mean and
+divide by the standard deviation.
+Other choices are "unscaled" (do nothing), "range" (transform to
+[0,1]), and "user" (subtract a supplied location and divide by a scale).
+}
+\item{x.center}{
+A vector of centering values to subtract from each column.
+}
+\item{x.scale}{
+A vector of scaling values to divide each column by.
+}
+}
+\value{
+A matrix whose columns have been transformed.
+This matrix also has the attributes scale.type, x.center, and x.scale
+with the transformation information.
+}
+\details{
+After deciding what the centering and scaling values should be for each
+column of x, this function just calls the standard utility scale.
+This function was created partly to attach the transformation information
+as attributes to the transformed matrix. It is used in Krig, cover.design,
+krig.image etc. to transform the independent variables.
+}
+\seealso{
+scale
+}
+\examples{
+#
+newx<-transformx( ChicagoO3$x, scale.type="range")
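+
+# a sketch of the "user" option with explicitly supplied center and scale
+newx2<- transformx( ChicagoO3$x, scale.type="user",
+ x.center= colMeans( ChicagoO3$x),
+ x.scale= rep( 1, ncol( ChicagoO3$x)))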
+}
+\keyword{manip}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/vgram.Rd b/man/vgram.Rd
new file mode 100644
index 0000000..b03a71c
--- /dev/null
+++ b/man/vgram.Rd
@@ -0,0 +1,195 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{vgram}
+\alias{vgram}
+\alias{crossCoVGram}
+\alias{plot.vgram}
+\alias{boxplotVGram}
+\alias{getVGMean}
+\title{
+ Traditional or robust variogram methods for spatial data
+}
+\description{
+\code{vgram} computes pairwise squared differences as a function of distance.
+Returns an S3 object of class "vgram" with either raw values or statistics from
+binning. \code{crossCoVGram} is the same as \code{vgram} but differences are
+taken across different variables rather than the same variable.
+
+\code{plot.vgram} and \code{boxplotVGram} create lineplots and boxplots of
+vgram objects output by the \code{vgram} function. \code{boxplotVGram} uses
+the base R boxplot function and overlays estimates of the mean on the boxplots.
+
+The \code{getVGMean} function returns the bin centers and means of the \code{vgram}
+object based on the bin breaks provided by the user.
+}
+\usage{
+vgram(loc, y, id = NULL, d = NULL, lon.lat = FALSE,
+ dmax = NULL, N = NULL, breaks = NULL,
+ type=c("variogram", "covariogram", "correlogram"))
+
+crossCoVGram(loc1, loc2, y1, y2, id = NULL, d = NULL, lon.lat = FALSE,
+ dmax = NULL, N = NULL, breaks = NULL,
+ type=c("cross-covariogram", "cross-correlogram"))
+
+boxplotVGram(x, N=10, breaks = pretty(x$d, N, eps.correct = 1), plot=TRUE, plot.args, ...)
+
+\method{plot}{vgram}(x, N=10, breaks = pretty(x$d, N, eps.correct = 1), add=FALSE, ...)
+
+getVGMean(x, N = 10, breaks = pretty(x$d, N, eps.correct = 1))
+}
+\arguments{
+\item{loc}{
+Matrix where each row is the coordinates of an observed point
+of the field
+}
+\item{y}{
+Value of the field at locations
+}
+\item{loc1}{
+Matrix where each row is the coordinates of an observed point
+of field 1
+}
+\item{loc2}{
+Matrix where each row is the coordinates of an observed point
+of field 2
+}
+\item{y1}{
+Value of field 1 at locations
+}
+\item{y2}{
+Value of field 2 at locations
+}
+\item{id}{
+A 2 column matrix that specifies which variogram differences to find.
+If omitted all possible pairings are found.
+This can be used if the data has an additional covariate that determines
+proximity, for example a time window.
+}
+\item{d}{
+Distances among pairs indexed by id. If not included, distances are found
+directly from loc.
+}
+\item{lon.lat }{
+If TRUE, locations are assumed to be longitudes and latitudes
+and distances found are great circle distances (in miles; see
+\link{rdist.earth}). Default is FALSE.
+}
+\item{dmax}{
+Maximum distance to compute variogram.
+}
+\item{N}{
+Number of bins to use. The break points are found by the \code{pretty} function and so there may not be exactly N bins. Specify the breaks explicitly if you want exactly N bins.
+}
+\item{breaks}{
+Bin boundaries for binning variogram values. Need not be equally spaced
+but must be ordered.
+}
+\item{x}{
+An object of class "vgram" (an object returned by \code{vgram})
+}
+\item{add}{
+If \code{TRUE}, adds empirical variogram lineplot to current plot. Otherwise
+creates new plot with empirical variogram lineplot.
+}
+\item{plot}{
+If \code{TRUE}, creates a plot, otherwise returns variogram statistics output by
+\code{bplot.xy}.
+}
+\item{plot.args}{
+Additional arguments to be passed to \code{plot.vgram}.
+}
+\item{type}{
+One of "variogram", "covariogram", "correlogram", "cross-covariogram", and
+"cross-correlogram". \code{vgram} supports the first three of these and
+\code{crossCoVGram} supports the last two.
+}
+\item{...}{
+Additional argument passed to \code{plot} for \code{plot.vgram} or to
+\code{bplot.xy} for \code{boxplotVGram}.
+}
+}
+\value{
+\code{vgram} and \code{crossCoVGram} return a "vgram" object containing the
+following values:
+\item{vgram}{Variogram or covariogram values}
+\item{d}{Pairwise distances}
+\item{call}{Calling string}
+\item{stats}{Matrix of statistics for values in each bin.
+ Rows are the summaries returned by the \code{stats} function or \code{describe}.
+ If neither the breaks nor N argument is supplied then this
+ component is not computed.}
+\item{centers}{Bin centers.}
+
+If \code{boxplotVGram} is called with \code{plot=FALSE}, it returns a
+list with the same components as returned by \code{bplot.xy}
+}
+
+\section{References}{
+See any standard reference on spatial statistics. For example
+Cressie, Spatial Statistics
+}
+\author{John Paige, Doug Nychka}
+\seealso{
+\link{vgram.matrix}, \link{bplot.xy}, \link{bplot}
+}
+\examples{
+#
+# compute variogram for the midwest ozone field day 16
+# (BTW this looks a bit strange!)
+#
+data( ozone2)
+good<- !is.na(ozone2$y[16,])
+x<- ozone2$lon.lat[good,]
+y<- ozone2$y[16,good]
+
+look<-vgram( x,y, N=15, lon.lat=TRUE) # locations are in lon/lat so use
+# the right (great circle) distance
+# take a look:
+plot(look, pch=19)
+#lines(look$centers, look$stats["mean",], col=4)
+
+brk<- seq( 0, 250,, (25 + 1) ) # will give 25 bins.
+
+## or some boxplot bin summaries
+
+boxplotVGram(look, breaks=brk, plot.args=list(type="o"))
+plot(look, add=TRUE, breaks=brk, col=4)
+
+#
+# compute equivalent covariogram, but leave out the boxplots
+#
+look<-vgram( x,y, N=15, lon.lat=TRUE, type="covariogram")
+plot(look, breaks=brk, col=4)
+
+#
+# compute equivalent cross-covariogram of the data with itself
+#(it should look almost exactly the same as the covariogram of
+#the original data, except with a few more points in the
+#smallest distance boxplot and points are double counted)
+#
+look = crossCoVGram(x, x, y, y, N=15, lon.lat=TRUE, type="cross-covariogram")
+plot(look, breaks=brk, col=4)
+
+}
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/vgram.matrix.Rd b/man/vgram.matrix.Rd
new file mode 100644
index 0000000..9f09d1b
--- /dev/null
+++ b/man/vgram.matrix.Rd
@@ -0,0 +1,123 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{vgram.matrix}
+\alias{vgram.matrix}
+\alias{plot.vgram.matrix}
+\title{
+ Computes a variogram from an image
+}
+\description{
+Computes a variogram for an image taking into account different directions
+and returning summary information about the differences in each of these
+directions.
+}
+\usage{
+vgram.matrix(dat, R=5, dx = 1,dy = 1 )
+
+\method{plot}{vgram.matrix}(x,...)
+}
+\arguments{
+\item{dat}{
+A matrix. The spacing of rows and columns is assumed to be the same
+distance.
+}
+\item{R}{
+Maximum radius for finding variogram differences assuming that the grid
+points are spaced one unit apart. The default is to go out to a
+radius of 5.
+}
+
+\item{dx}{
+The spacing of grid points on the X axis. This is used to calculate the
+correct distance between grid points. If dx is not equal to dy then the
+collapse argument must be FALSE.
+}
+\item{dy}{ The spacing of grid points on the Y axis.
+See additional notes for dx.}
+
+\item{x}{ Returned list from vgram.matrix}
+
+\item{\dots}{ Arguments for image.plot}
+
+}
+\value{
+
+An object of class vgram.matrix with the following components: d, a vector of distances for
+the differences,
+and vgram, the variogram values. This is the traditional variogram
+ignoring direction.
+
+d.full, a vector of distances for all possible shifts up to distance R,
+ ind, a two column matrix giving the x and y increments used to compute
+the shifts, and vgram.full, the variogram at each of these
+separations. Also computed is vgram.robust, Cressie's version of a robust
+variogram statistic.
+
+Also returned is the component N, the number of differences found for each
+separation case.
+
+}
+
+\details{
+For the "full" case
+the statistics can summarize departures from
+isotropy by separating the variogram differences according to
+orientation. For small R this runs efficiently because the differences are
+found by sub-setting the image matrix.
+
+For example, suppose that a row of the ind matrix is
+(2,3). The variogram value associated with this row is the mean of
+the differences (1/2)*(X(i,j)- X( i+2,j+3))**2 for all i and j.
+(Here X(.,.) are the values for the spatial field.) In this example
+d= sqrt(13) and there will be another entry with the same distance
+but
+corresponding to the direction (3,2).
+plot.vgram.matrix attempts to organize all the different directions into a
+coherent image plot.
+}
+\seealso{
+\code{\link{vgram}}
+}
+\examples{
+# variogram for Lennon image.
+data(lennon)
+out<-vgram.matrix( lennon)
+
+plot( out$d, out$vgram, xlab="separation distance", ylab="variogram")
+# image plot of vgram values by direction.
+
+# look at different directions
+out<-vgram.matrix( lennon, R=8)
+
+plot( out$d, out$vgram)
+# add in different orientations
+points( out$d.full, out$vgram.full, col="red")
+
+#image plot of variogram values for different directions.
+set.panel(1,1)
+plot.vgram.matrix( out)
+# John Lennon appears remarkably isotropic!
+
+}
+\keyword{spatial}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/world.Rd b/man/world.Rd
new file mode 100644
index 0000000..b0e6bc3
--- /dev/null
+++ b/man/world.Rd
@@ -0,0 +1,64 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{world}
+\alias{world}
+\alias{world.color}
+\alias{in.land.grid}
+\alias{world.land}
+\title{Plot of the world}
+\description{
+Plots quickly, medium resolution outlines of large land masses. This
+is a simple wrapper for the map function from the maps package.
+}
+\usage{
+world(...)
+world.land( ...)
+world.color( ... )
+in.land.grid(...)
+
+}
+\arguments{
+\item{\dots}{Same arguments used by the \code{map} function from the
+maps package.}
+}
+\details{
+See the longstanding \code{maps} package for documentation on this
+function. The functions world.land, world.color and in.land.grid have
+been deprecated but can be recovered from versions of fields 6.7.1 or
+older.
+ }
+\seealso{US, in.poly, in.poly.grid}
+\examples{
+\dontrun{
+world()
+# add the US
+US( add=TRUE,col="blue")
+
+world( fill=TRUE) # land filled in black
+
+## Western Europe
+world( xlim=c(-10,18),ylim=c(36,60),fill=TRUE, col="darkgreen",
+border="green1")
+}
+}
+\keyword{hplot}
+% docclass is function
diff --git a/man/xline.Rd b/man/xline.Rd
new file mode 100644
index 0000000..8c9f9cb
--- /dev/null
+++ b/man/xline.Rd
@@ -0,0 +1,56 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{xline}
+\alias{xline}
+\title{
+ Draw a vertical line
+}
+\description{
+Adds vertical lines in the plot region.
+}
+\usage{
+xline(x, ...)
+}
+\arguments{
+\item{x}{
+Values on x axis specifying location of vertical lines.
+}
+\item{\dots}{
+Any plotting options for abline.
+}
+}
+\seealso{
+ yline, abline
+}
+\examples{
+
+plot( 1:10)
+xline( 6.5, col=2)
+
+world( col=3)
+yline( seq( -80,80,10),col=4, lty=2)
+xline( seq( -180,180,10),col=4,lty=2)
+yline( 0, lwd=2, col=4)
+}
+\keyword{aplot}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/man/yline.Rd b/man/yline.Rd
new file mode 100644
index 0000000..142bcbc
--- /dev/null
+++ b/man/yline.Rd
@@ -0,0 +1,52 @@
+%# fields is a package for analysis of spatial data written for
+%# the R software environment .
+%# Copyright (C) 2017
+%# University Corporation for Atmospheric Research (UCAR)
+%# Contact: Douglas Nychka, nychka at ucar.edu,
+%# National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+%#
+%# This program is free software; you can redistribute it and/or modify
+%# it under the terms of the GNU General Public License as published by
+%# the Free Software Foundation; either version 2 of the License, or
+%# (at your option) any later version.
+%# This program is distributed in the hope that it will be useful,
+%# but WITHOUT ANY WARRANTY; without even the implied warranty of
+%# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+%# GNU General Public License for more details.
+%#
+%# You should have received a copy of the GNU General Public License
+%# along with the R software environment if not, write to the Free Software
+%# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+%# or see http://www.r-project.org/Licenses/GPL-2
+
+\name{yline}
+\alias{yline}
+\title{
+ Draw horizontal lines
+}
+\description{
+Adds horizontal lines in the plot region.
+}
+\usage{
+yline(y, ...)
+}
+\arguments{
+\item{y}{
+Values on y axis specifying the location of horizontal lines.
+}
+\item{\dots}{
+Any plotting options for abline.
+}
+}
+\seealso{
+ xline, abline
+}
+\examples{
+world( col=3)
+yline( seq( -80,80,10),col=4, lty=2)
+xline( seq( -180,180,10),col=4,lty=2)
+yline( 0, lwd=2, col=4)
+}
+\keyword{aplot}
+% docclass is function
+% Converted by Sd2Rd version 1.21.
diff --git a/src/ExponentialUpperC.c b/src/ExponentialUpperC.c
new file mode 100644
index 0000000..c23f9ea
--- /dev/null
+++ b/src/ExponentialUpperC.c
@@ -0,0 +1,55 @@
+/*
+c**** # fields is a package for analysis of spatial data written for
+c**** # the R software environment .
+c**** # Copyright (C) 2017
+c**** # University Corporation for Atmospheric Research (UCAR)
+c**** # Contact: Douglas Nychka, nychka at ucar.edu,
+c**** # National Center for Atmospheric Research,
+c**** # PO Box 3000, Boulder, CO 80307-3000
+c**** #
+c**** # This program is free software; you can redistribute it and/or modify
+c**** # it under the terms of the GNU General Public License as published by
+c**** # the Free Software Foundation; either version 2 of the License, or
+c**** # (at your option) any later version.
+c**** # This program is distributed in the hope that it will be useful,
+c**** # but WITHOUT ANY WARRANTY; without even the implied warranty of
+c**** # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+c**** # GNU General Public License for more details.
+*/
+#include <R.h>
+#include <Rinternals.h>
+#include <R_ext/Arith.h>
+#include <Rmath.h>
+#include <float.h>
+
+SEXP ExponentialUpperC(SEXP distMat, SEXP n, SEXP alpha)
+{
+ int In, i, j;
+ double dAlpha;
+ double *dMat, *cans;
+
+ //cast R variables to C variables
+ In = INTEGER(n)[0];
+ dAlpha = REAL(alpha)[0];
+ dMat = REAL(distMat);
+ SEXP ans = PROTECT(allocMatrix(REALSXP, In, In));
+ cans = REAL(ans);
+
+ //initialize entire array to zero DWN May-4-2016
+ for(i = 0; i < (In*In); i++) {
+ cans[i]= 0.0;
+ }
+
+ //set upper triangle of output matrix
+ for(i = 0; i < In; i++) {
+ for(j=0; j<= i; j++) {
+ if(i == j)
+ cans[i*In+j] = 1.0;
+ else
+ cans[i*In+j] = exp(-1*dMat[i*In+j]*dAlpha);
+ }
+ }
+
+ UNPROTECT(1);
+ return ans;
+}
diff --git a/src/addToDiagC.c b/src/addToDiagC.c
new file mode 100644
index 0000000..b83df8b
--- /dev/null
+++ b/src/addToDiagC.c
@@ -0,0 +1,39 @@
+/*
+c**** # fields is a package for analysis of spatial data written for
+c**** # the R software environment .
+c**** # Copyright (C) 2017
+c**** # University Corporation for Atmospheric Research (UCAR)
+c**** # Contact: Douglas Nychka, nychka at ucar.edu,
+c**** # National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+c**** #
+c**** # This program is free software; you can redistribute it and/or modify
+c**** # it under the terms of the GNU General Public License as published by
+c**** # the Free Software Foundation; either version 2 of the License, or
+c**** # (at your option) any later version.
+c**** # This program is distributed in the hope that it will be useful,
+c**** # but WITHOUT ANY WARRANTY; without even the implied warranty of
+c**** # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+c**** # GNU General Public License for more details.
+*/
+#include <R.h>
+#include <Rinternals.h>
+#include <R_ext/Arith.h>
+#include <Rmath.h>
+#include <float.h>
+SEXP addToDiagC(SEXP mat, SEXP numsToAdd, SEXP n)
+{
+ int In, i;
+ double *cMat, *addVals;
+
+ //cast R variables to C variables
+ In = INTEGER(n)[0];
+ cMat = REAL(mat);
+ addVals = REAL(numsToAdd);
+
+ //Add number to diagonal
+ for(i = 0; i < In; i++) {
+ cMat[i*In+i] = cMat[i*In+i] + addVals[i];
+ }
+
+ return R_NilValue;
+}
diff --git a/src/compactToMatC.c b/src/compactToMatC.c
new file mode 100644
index 0000000..97980d7
--- /dev/null
+++ b/src/compactToMatC.c
@@ -0,0 +1,69 @@
+/*
+c**** # fields is a package for analysis of spatial data written for
+c**** # the R software environment .
+c**** # Copyright (C) 2017
+c**** # University Corporation for Atmospheric Research (UCAR)
+c**** # Contact: Douglas Nychka, nychka at ucar.edu,
+c**** # National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+c**** #
+c**** # This program is free software; you can redistribute it and/or modify
+c**** # it under the terms of the GNU General Public License as published by
+c**** # the Free Software Foundation; either version 2 of the License, or
+c**** # (at your option) any later version.
+c**** # This program is distributed in the hope that it will be useful,
+c**** # but WITHOUT ANY WARRANTY; without even the implied warranty of
+c**** # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+c**** # GNU General Public License for more details.
+*/
+#include <R.h>
+#include <Rinternals.h>
+#include <R_ext/Arith.h>
+#include <Rmath.h>
+#include <float.h>
+SEXP compactToMatC(SEXP compactMat, SEXP len, SEXP n, SEXP diagVal, SEXP lowerTri, SEXP upperTri)
+{
+ int In, lTri, uTri, i, j, index;
+ double dVal;
+ double *cMat, *cans;
+
+ //cast R variables to C variables
+ In = INTEGER(n)[0];
+ lTri = INTEGER(lowerTri)[0];
+ uTri = INTEGER(upperTri)[0];
+ dVal = REAL(diagVal)[0];
+ cMat = REAL(compactMat);
+ SEXP ans = PROTECT(allocMatrix(REALSXP, In, In));
+ cans = REAL(ans);
+ //initialize entire array to zero DWN May-1-2016
+ for(i = 0; i < (In*In); i++) {
+ cans[i]= 0.0;
+ }
+
+ //set upper or lower triangle of output matrix
+ index = 0;
+ if(lTri) {
+ for(i = 0; i < In; i++) {
+ for(j=i+1; j < In; j++) {
+ cans[i*In+j] = cMat[index];
+ index++;
+ }
+ }
+ }
+ index = 0;
+ if(uTri) {
+ for(i = 0; i < In; i++) {
+ for(j=i+1; j < In; j++) {
+ cans[j*In+i] = cMat[index];
+ index++;
+ }
+ }
+ }
+
+ //set diagonal values of output matrix
+ for(i = 0; i < In; i++) {
+ cans[i*In + i] = dVal;
+ }
+
+ UNPROTECT(1);
+ return ans;
+}
diff --git a/src/compactToMatCOLD.c b/src/compactToMatCOLD.c
new file mode 100644
index 0000000..4478515
--- /dev/null
+++ b/src/compactToMatCOLD.c
@@ -0,0 +1,65 @@
+/*
+c**** # fields is a package for analysis of spatial data written for
+c**** # the R software environment .
+c**** # Copyright (C) 2017
+c**** # University Corporation for Atmospheric Research (UCAR)
+c**** # Contact: Douglas Nychka, nychka at ucar.edu,
+c**** # National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+c**** #
+c**** # This program is free software; you can redistribute it and/or modify
+c**** # it under the terms of the GNU General Public License as published by
+c**** # the Free Software Foundation; either version 2 of the License, or
+c**** # (at your option) any later version.
+c**** # This program is distributed in the hope that it will be useful,
+c**** # but WITHOUT ANY WARRANTY; without even the implied warranty of
+c**** # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+c**** # GNU General Public License for more details.
+*/
+#include <R.h>
+#include <Rinternals.h>
+#include <R_ext/Arith.h>
+#include <Rmath.h>
+#include <float.h>
+SEXP compactToMatCOLD(SEXP compactMat, SEXP len, SEXP n, SEXP diagVal, SEXP lowerTri, SEXP upperTri)
+{
+ int In, lTri, uTri, i, j, index;
+ double dVal;
+ double *cMat, *cans;
+
+ //cast R variables to C variables
+ In = INTEGER(n)[0];
+ lTri = INTEGER(lowerTri)[0];
+ uTri = INTEGER(upperTri)[0];
+ dVal = REAL(diagVal)[0];
+ cMat = REAL(compactMat);
+ SEXP ans = PROTECT(allocMatrix(REALSXP, In, In));
+ cans = REAL(ans);
+
+ //set upper or lower triangle of output matrix
+ index = 0;
+ if(lTri) {
+ for(i = 0; i < In; i++) {
+ for(j=i+1; j < In; j++) {
+ cans[i*In+j] = cMat[index];
+ index++;
+ }
+ }
+ }
+ index = 0;
+ if(uTri) {
+ for(i = 0; i < In; i++) {
+ for(j=i+1; j < In; j++) {
+ cans[j*In+i] = cMat[index];
+ index++;
+ }
+ }
+ }
+
+ //set diagonal values of output matrix
+ for(i = 0; i < In; i++) {
+ cans[i*In + i] = dVal;
+ }
+
+ UNPROTECT(1);
+ return ans;
+}
diff --git a/src/css.f b/src/css.f
new file mode 100644
index 0000000..902b4cd
--- /dev/null
+++ b/src/css.f
@@ -0,0 +1,360 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+
+
+
+ subroutine css(h,npoint,x,y,wght,sy,trace,diag,vlam,
+ + ngrid,xg,yg,job,ideriv,ierr)
+
+C COMPUTES A CUBIC SMOOTHING SPLINE FIT TO A SET OF DATA GIVEN
+C NPOINT(=NUMBER OF DATA VALUES) AND LAMBDA(=VALUE OF
+C THE SMOOTHING parameter). THE CODE IS ADAPTED FROM A
+C PROGRAM IN DEBOOR,C.(1978).A PRACTICAL GUIDE TO SPLINES.
+C SPRINGER-VERLAG : NEW YORK. AN O(NPOINT) ALGORITHM
+C SUGGESTED BY HUTCHINSON AND DEHOOG IS USED TO COMPUTE
+C LEVERAGE VALUES AND CONSTRUCT CONFIDENCE INTERVALS.
+c Adapted from Randy Eubank Texas A&M
+c
+c
+c this subroutine solves the problem:
+c
+c minimize (1/n) sum(i=1,n) [ (y(i) - f(x(i)))/wght(i) ]**2 + lambda*J(f)
+c over f
+c The solution will always be a piecewise cubic polynomial with join
+c points at the x values. Natural boundary conditions are assumed: at the
+c x(1) and x(npoint) the second and third derivatives of the solution
+c will be zero
+c All matrix calculations are done in double precision
+c
+c Arguments of css:
+c h : natural log of lambda ( more convenient scale when passing a
+c real*4)
+c if h is passed with a value less than or equal -1000 no smoothing will
+c be done and the spline will interpolate the data points
+c npoint: number of observations
+c (x,y) : pairs of data points to be smoothed
+c sy : on return predicted values of f at x
+c wght : weights used in sum of squares. If the y have unequal
+c variance then an appropriate choice for wght is the standard deviation
+c (These weights are not normalized so multiplying by a constant
+c will imply solving the minimization problem with a different smoothing
+c parameter)
+c trace: in matrix form Yhat= A(lambda)Y with A(lambda) an nxn matrix
+c trace= tr(A(lambda)) = " effective number of parameters"
+c diag: diagonal elements of A(lambda) ( this is the most
+c computationally intensive operation in this subroutine.)
+c vlam: value of the generalized cross-validation function (Used to
+c select an appropriate value for lambda based on the data.)
+c ngrid,xg,yg : on return, the ith deriv of the spline estimate is
+c evaluated on a grid, xg, with ngrid points. The
+c values are returned in yg
+c
+c ideriv: 0 = evaluate the function according to the job code
+c 1 = evaluate the first derivative of the function according
+c to the job code
+c 2 = evaluate the second derivative of the function according
+c to the job code
+c
+c job: is a vector of three integers
+c (a,b,c) (a=igcv, b=igrid, c=sorting)
+c a=0 evaluate spline at x values, return predicted values in sy
+c a=1 same as a=0 plus returning values of trace, diag and vlam
+c a=2 do no smoothing, interpolate the data
+c a=3 same as a=1 but use the values passed in the diag vector
+c as the cost and offset factors
+c
+c b=0 do not evaluate the spline at any grid points
+c b=1 evaluate the spline (ideriv=0) or the ith deriv (i=ideriv)
+c of the spline at the (unique) sorted,data points, xg, return yg
+c b=2 evaluate the spline (ideriv=0) or the ith deriv (i=ideriv)
+c of the spline at ngrid equally spaced points between x(1)
+c and x(npoints)
+c b=3 evaluate the spline (ideriv=0) or the ith deriv (i=ideriv)
+c of the spline at ngrid points on grid passed in xg.
+c NOTE: extrapolation of the estimate beyond the range of
+c the x's will just be a linear function.
+c
+c
+c
+c c=0 X's may not be in sorted order
+c c=1 Assume that the X's are in sorted order
+c c=2 Do not sort the X's use the current set of keys
+c Should only be used on a second call to smooth
+c same design
+c
+c ierr : on return ierr>0 indicates all is not well :-(
+c
+ parameter (NMAX=20000)
+ implicit double precision (a-h,o-z)
+ double precision h,trace,vlam
+ double precision wght(npoint),X(npoint),Y(npoint)
+ double precision sy(npoint),diag(npoint)
+ double precision xg(ngrid),yg(ngrid)
+ double precision A(NMAX,4),V(NMAX,7)
+ double precision P,SIXP,SIX1MP,cost
+ double precision ux(NMAX),uy(NMAX), uw(NMAX),ud(NMAX),utr
+ integer imx(NMAX)
+ integer idx(NMAX)
+ integer npoint,igcv,igrid, isort, job(3)
+
+ eps= 1d-7
+ cost=1.0
+ offset= 0
+ igcv= job(1)
+ igrid=job(2)
+ isort=job(3)
+
+c
+ if( npoint.gt.NMAX) then
+ ierr=1
+ return
+ endif
+c initialize unique vector and two pointers
+
+ do 5 k=1,npoint
+ ux(k)=x(k)
+
+ idx(k)=k
+ 5 continue
+
+c sort vector X along with the keys in imx
+c
+c initialize the indices imx
+c
+ if( isort.le.1) then
+
+ do 6 k=1,npoint
+ imx(k)=k
+ 6 continue
+ endif
+c
+c sort the X's permuting the indices
+c
+ if(isort.eq.0) then
+ call sortm( ux, imx,npoint)
+c the rank of the value x( imx(k)) is k
+ endif
+c put y and the weights in the sorted order
+c
+C**** construct vector consisting of unique observations.
+
+ jj=1
+ ind= imx(1)
+ ux(jj)= x(ind)
+ uy(jj)= y(ind)
+ uw(jj)= wght(ind)
+ idx(1)=jj
+c normalize eps by the range of the X values
+ eps= eps/( x( imx(npoint)) - x( imx(1)) )
+c
+c
+ do 10 k=2,npoint
+c we are looping through ranks but this is not how the
+c order of the X are stored. The location of the kth smallest
+c is at idx(k)
+ kshuf= imx(k)
+ if( abs( ux(jj)-x(kshuf)).lt.eps) then
+c**** we have a repeat observation, update weight and the weighted
+c***** average at this point
+ temp1= 1.0d0/uw(jj)**2
+ temp2= 1.0d0/wght(kshuf)**2
+ temp3 = (temp1 + temp2)
+ uy(jj)= ( uy(jj)*temp1 + y(kshuf)*temp2)/temp3
+ uw(jj)= 1.0d0/dsqrt(temp3)
+ else
+ jj= jj+1
+ ux(jj)= x(kshuf)
+ uy(jj)= y(kshuf)
+ uw(jj)= wght(kshuf)
+ endif
+c save the value that indexes unique values to replicated ones.
+c x(k) corresponds to the unique X at idx(k)
+ idx(k)=jj
+ 10 continue
+ nunq= jj
+
+ itp=0
+ if(igcv.eq.2) itp=1
+
+c handle condition for interpolation
+
+ if(itp.eq.0) then
+ P=1.d0/(npoint*dexp(h)+ 1.d0)
+ else
+ P=1.d0
+ endif
+
+
+ call dSETUP(uX,uW,uY,nunq,V,A(1,4),NMAX,itp,ierr)
+C**** check for duplicate X's if so exit
+ if(ierr.gt.0) then
+ return
+ endif
+
+ call dCHOLD(P,V,A(1,4),nunq,A(1,3),A(1,1),NMAX)
+
+c compute predicted values
+
+ SIX1MP=6.d0*(1.d0-P)
+ if(itp.eq.0) then
+ DO 61 I=1,Nunq
+ a(i,1)=uY(I) - SIX1MP*(uW(I)**2)*A(I,1)
+ 61 continue
+
+c fill in predicted values taking into account repeated data
+ do 70 k=1,npoint
+ jj= idx(k)
+ sytemp= a(jj,1)
+c
+c unscramble the smoothed Y's
+ kshuf= imx(k)
+
+ sy(kshuf)=sytemp
+ 70 continue
+ else
+ do 60 i=1,nunq
+ a(i,1)=uy(i)
+ 60 continue
+ endif
+
+c return estimates on unique x's if igrid =1
+ if(igrid.eq.1) then
+ do 65 i=1,nunq
+ xg(i)=ux(i)
+ yg(i)=a(i,1)
+ 65 continue
+ ngrid=nunq
+ endif
+ if(igrid.ge.1) then
+c
+c********* evaluate spline on grid
+C piecewise cubic COEFFICIENTS ARE STORED IN A(.,2-4).
+
+ SIXP=6.d0*P
+ DO 62 I=1,nunq
+ A(I,3)=A(I,3)*SIXP
+ 62 continue
+ NPM1=nunq - 1
+ DO 63 I=1,NPM1
+ A(I,4)=(A(I+1,3)-A(I,3))/V(I,4)
+ A(I,2)=(A(I+1,1)-A(I,1))/V(I,4)
+ * -(A(I,3)+A(I,4)/3.*V(I,4))/2.*V(I,4)
+ 63 continue
+c
+c create equally spaced x's if asked for ( igrid=2)
+c
+ if( igrid.eq.2) then
+ step= (ux(nunq)-ux(1))/(ngrid-1)
+ xg(1)=ux(1)
+ do 190 j=2,ngrid-1
+ xg(j)= xg(j-1)+step
+ 190 continue
+ xg(ngrid)=ux(nunq)
+ endif
+
+
+ uxmin= ux(1)
+ uxmax= ux(nunq)
+
+ a1= a(1,1)
+ an= a(nunq,1)
+
+ b1= a(1,2)
+
+ d= ux(nunq)- ux(nunq-1)
+ ind= nunq-1
+ bn= a(ind,2) + a(ind,3)*d + a(ind,4)*(d**2)/2.d0
+
+c evaluate spline by finding the interval containing the evaluation
+c point and applying the cubic formula for the curve estimate
+c finding the interval such that ux(ind) <=xg(j) < ux( ind+1)
+c is done using a bisection search
+
+ do 195 j=1,ngrid
+ lint= ifind(xg(j),ux,nunq)
+ if( lint.eq.0) then
+ d= xg(j)-uxmin
+
+ if (ideriv .eq. 0)
+ - yg(j)= a1 + b1*d
+ if (ideriv .eq. 1)
+ - yg(j)= b1
+ if (ideriv .eq. 2)
+ - yg(j)= 0.0
+ endif
+ if( lint.eq.nunq) then
+ d= xg(j)-uxmax
+
+ if (ideriv .eq. 0)
+ - yg(j)= an + bn*d
+ if (ideriv .eq. 1)
+ - yg(j)= bn
+ if (ideriv .eq. 2)
+ - yg(j)= 0.0
+ endif
+ if( ((lint.ge.1 ) .and.( lint.lt.nunq))) then
+ ind=lint
+c a1=a(ind,1)
+c a2=a(ind,2)
+c b=a(ind,3)/2.d0
+c c=a(ind,4)/6.d0
+c
+ d= xg(j)-ux(ind)
+
+ if (ideriv .eq. 0)
+ - yg(j)= a(ind,1) + a(ind,2)*d + a(ind,3)*(d**2)/2.d0
+ - + a(ind,4)*(d**3)/6.d0
+ if (ideriv .eq. 1)
+ - yg(j)= a(ind,2) + a(ind,3)*d + a(ind,4)*(d**2)/2.d0
+ if (ideriv .eq. 2)
+ - yg(j)= a(ind,3) + a(ind,4)*d
+ endif
+c
+ 195 continue
+ endif
+c****end of evaluation block
+
+ if((igcv.eq.1).or.( igcv.eq.3)) then
+ if( igcv.eq.3) then
+ cost= diag(1)
+ offset=diag(2)
+ endif
+c***** begin computing gcv and trace
+C COMPUTE LEVERAGE VALUES, THE VARIANCE ESTIMATE
+C SGHAT2 AND CONFIDENCE INTERVALS.
+c
+ call dLV(nunq,V,uw,SIX1MP,utr,ud,NMAX)
+
+ rss=0.d0
+ trace=0.d0
+ vlam=0.d0
+
+ do 100 i=1,nunq
+c rss= rss + ((uy(i)-a(i,1))/uw(i))**2
+ trace= trace +ud(i)
+ 100 continue
+
+ do 110 k=1,npoint
+ kshuf= imx(k)
+ jj= idx(k)
+ diag(kshuf)= ud(jj)
+ rss= rss + ( (y(kshuf)- sy(kshuf))/wght(kshuf) )**2
+ 110 continue
+ ctrace= 2+ cost*( trace-2)
+ if( (npoint -ctrace -offset) .gt. 0.d0) then
+ vlam= (rss/npoint)/( 1- (ctrace-offset)/npoint)**2
+ else
+ vlam=1e20
+ endif
+
+ endif
+
+
+
+ return
+
+ END
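The evaluation block above looks up the interval containing each grid point (via the bisection search in ifind) and applies the piecewise-cubic formula, with linear extrapolation beyond the data range. A minimal Python sketch of the same lookup-and-evaluate logic, assuming the coefficient layout described in the comments (per-knot value, slope, second- and third-derivative coefficients); the function and argument names are illustrative, not part of fields:

```python
import bisect

def eval_css(x, knots, a, deriv=0):
    # a[i] = (ai, bi, ci, di): on [knots[i], knots[i+1]) the spline is
    #   ai + bi*d + ci*d**2/2 + di*d**3/6,  with d = x - knots[i]
    n = len(knots)
    i = bisect.bisect_right(knots, x) - 1
    if i < 0:                               # left of the data: linear
        d = x - knots[0]
        a1, b1 = a[0][0], a[0][1]
        return (a1 + b1 * d, b1, 0.0)[deriv]
    if i >= n - 1:                          # right of the data: linear
        dk = knots[-1] - knots[-2]
        ai, bi, ci, di = a[n - 2]
        bn = bi + ci * dk + di * dk**2 / 2.0   # slope at the right end
        d = x - knots[-1]
        return (a[-1][0] + bn * d, bn, 0.0)[deriv]
    ai, bi, ci, di = a[i]
    d = x - knots[i]
    if deriv == 0:
        return ai + bi * d + ci * d**2 / 2.0 + di * d**3 / 6.0
    if deriv == 1:
        return bi + ci * d + di * d**2 / 2.0
    return ci + di * d
```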
diff --git a/src/csstr.f b/src/csstr.f
new file mode 100644
index 0000000..95ff50d
--- /dev/null
+++ b/src/csstr.f
@@ -0,0 +1,25 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+ subroutine csstr(h,nobs,x,y,wght,c,offset,trace,vlam,work,ierr)
+ parameter(mxM=20000)
+ implicit double precision (a-h,o-z)
+ double precision h,trace, vlam,c,offset
+ double precision x(nobs),y(nobs),wght(nobs)
+ double precision work(nobs),diag(mxM),dumm1(1),dumm2(1)
+ integer job(3),ideriv,ierr, ndum
+ data ideriv/0/
+ job(1)=3
+ job(2)=0
+ job(3)=0
+ diag(1)=c
+ diag(2)=offset
+ ndum=1
+ call css(h,nobs,x,y,wght,work,trace,diag,vlam,ndum,dumm1,dumm2,
+ - job,ideriv,ierr)
+
+ return
+ end
diff --git a/src/cvrf.f b/src/cvrf.f
new file mode 100644
index 0000000..bdf8cd9
--- /dev/null
+++ b/src/cvrf.f
@@ -0,0 +1,27 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+
+c** evaluates the generalized cross-validation function at the value h
+
+ subroutine cvrf
+ * (h,nobs,x,y,wt,sy,trace,diag,din,dout,cv,ierr)
+ implicit double precision (a-h,o-z)
+ double precision h,trace,cv
+ double precision x(nobs),y(nobs),wt(nobs)
+ double precision sy(nobs),diag(nobs),dumm1(1),dumm2(1)
+ double precision din(10), dout(10)
+ integer ngrid, ierr, job(3),ideriv
+ data job/3,0,0/
+ data ideriv,ngrid/0,1/
+ call rcss(h,nobs,x,y,wt,sy,trace,diag,cv,
+ + ngrid,dumm1,dumm2,job,ideriv,din,dout,ierr)
+ nit= int( dout(1))
+ trace=dout(3)
+
+c
+ return
+ end
diff --git a/src/dchold.f b/src/dchold.f
new file mode 100644
index 0000000..2964d28
--- /dev/null
+++ b/src/dchold.f
@@ -0,0 +1,77 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+
+
+
+ SUBROUTINE dCHOLD(P,V,QTY,NPOINT,U,QU,NMAX)
+c CONSTRUCTS THE UPPER THREE DIAGONALS IN V(I,J),I=2,
+C NPOINT-1,J=1,3 OF THE MATRIX 6*(1-P)*Q-TRANSP*
+C (D**2)*Q + P*R . (HERE R IS THE MATRIX PROPORTIONAL
+C TO Q-TRANSP*KSUBN*Q , WHERE KSUBN IS THE MATRIX
+C WITH ELEMENTS K(X(I),X(J)) AND K IS THE USUAL
+C KERNEL ASSOCIATED WITH CUBIC SPLINE SMOOTHING.)
+C THE CHOLESKY DECOMPOSITION OF THIS MATRIX IS COMPUTED
+C AND STORED IN V(.,1-3) AND THE EQUATION SYSTEM FOR
+C THE QUADRATIC COEFFICIENTS OF THE SPLINE IN ITS
+C PIECEWISE POLYNOMIAL REPRESENTATION IS SOLVED . THE
+C SOLUTION IS RETURNED IN U.
+c
+ double precision P,QTY(NMAX),QU(NMAX),U(NMAX),V(NMAX,7)
+ double precision SIX1MP,TWOP,RATIO,PREV
+ INTEGER NPOINT,I,NPM1,NPM2
+c
+ NPM1=NPOINT - 1
+C**** CONSTRUCT 6*(1-P)*Q-TRANSP.*(D**2)*Q + P*R
+ SIX1MP=6.d0*(1.d0 - P)
+ TWOP=2.d0*P
+ DO 2 I=2,NPM1
+ V(I,1)=SIX1MP*V(I,5) + TWOP*(V(I-1,4)+V(I,4))
+ V(I,2)=SIX1MP*V(I,6) + P*V(I,4)
+ V(I,3)=SIX1MP*V(I,7)
+ 2 continue
+ NPM2=NPOINT - 2
+ IF(NPM2 .GE. 2)GO TO 10
+ U(1)=0.d0
+ U(2)=QTY(2)/V(2,1)
+ U(3)=0.d0
+ GO TO 41
+C FACTORIZATION : FACTORIZE THE MATRIX AS L*B-INV*
+C L-TRANSP , WHERE B IS A DIAGONAL MATRIX AND L
+C IS UPPER TRIANGULAR.
+ 10 DO 20 I=2,NPM2
+ RATIO=V(I,2)/V(I,1)
+ V(I+1,1)=V(I+1,1) - RATIO*V(I,2)
+ V(I+1,2)=V(I+1,2) - RATIO*V(I,3)
+ V(I,2)=RATIO
+ RATIO=V(I,3)/V(I,1)
+ V(I+2,1)=V(I+2,1) - RATIO*V(I,3)
+ V(I,3)=RATIO
+ 20 continue
+C FORWARD SUBSTITUTION
+ U(1)=0.d0
+ V(1,3)=0.d0
+ U(2)=QTY(2)
+ DO 30 I=2,NPM2
+ U(I+1)=QTY(I+1) - V(I,2)*U(I) -V(I-1,3)*U(I-1)
+ 30 continue
+C BACK SUBSTITUTION
+ U(NPOINT)=0.d0
+ U(NPM1)=U(NPM1)/V(NPM1,1)
+ DO 40 I=NPM2,2,-1
+ U(I)=U(I)/V(I,1) - U(I+1)*V(I,2) - U(I+2)*V(I,3)
+ 40 continue
+C CONSTRUCT Q*U
+ 41 PREV=0.d0
+ DO 50 I=2,NPOINT
+ QU(I)=(U(I)-U(I-1))/V(I-1,4)
+ QU(I-1)=QU(I) - PREV
+ PREV=QU(I)
+ 50 continue
+ QU(NPOINT)=-QU(NPOINT)
+c
+ RETURN
+ END
diff --git a/src/ddfind.f b/src/ddfind.f
new file mode 100644
index 0000000..1503318
--- /dev/null
+++ b/src/ddfind.f
@@ -0,0 +1,52 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+c
+ subroutine ddfind( nd,x1,n1, x2,n2, D0,ind,rd,Nmax, iflag)
+
+ integer nd,n1,n2, ind(nmax,2)
+ integer kk, i,j, ic
+
+ double precision x1(n1,nd), x2(n2,nd), D0, rd(Nmax), D02, dtemp
+
+c**** counter for accumulating close points
+ kk=0
+ D02= D0**2
+ do 15 i= 1, n1
+ do 10 j =1,n2
+
+c
+c** accumulate squared differences
+c
+ dtemp= 0.0
+ do 5 ic= 1, nd
+ dtemp= dtemp + (x1(i,ic) - x2(j,ic))**2
+ if( dtemp.gt.D02) goto 10
+ 5 continue
+
+c**** dtemp is less than D0 so save it as a close point
+
+ kk=kk+1
+
+c**** check if there is still space
+ if( kk .gt. Nmax) then
+ iflag= -1
+ goto 20
+ else
+ ind(kk,1)= i
+ ind(kk,2)= j
+ rd(kk)= sqrt( dtemp)
+ endif
+
+
+ 10 continue
+ 15 continue
+
+ Nmax=kk
+20 continue
+
+
+ return
+ end
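ddfind collects, for every pair of points closer than D0, the index pair and the distance, abandoning the coordinate loop as soon as the running squared distance exceeds D0**2. The same early-exit pattern in Python (a sketch; the name find_close is illustrative, not part of fields):

```python
import math

def find_close(x1, x2, d0):
    # return (i, j, distance) for every pair x1[i], x2[j] within d0
    d02 = d0 ** 2
    out = []
    for i, p in enumerate(x1):
        for j, q in enumerate(x2):
            s = 0.0
            for a, b in zip(p, q):
                s += (a - b) ** 2
                if s > d02:        # early exit: pair is too far apart
                    break
            else:                  # sum stayed below the cutoff: keep it
                out.append((i, j, math.sqrt(s)))
    return out
```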
diff --git a/src/dlv.f b/src/dlv.f
new file mode 100644
index 0000000..92908f6
--- /dev/null
+++ b/src/dlv.f
@@ -0,0 +1,71 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+
+
+
+ SUBROUTINE dLV(NPOINT,V,WGHT,SIX1MP,TR,LEV,NMAX)
+c CONSTRUCTS THE UPPER THREE DIAGONALS OF (6*(1-P)*
+C Q-TRANSP*(D**2)*Q + P*R)-INV USING THE RECURSION
+C FORMULA IN HUTCHINSON,M.F. AND DEHOOG,F.R.(1985).
+C NUMER. MATH. 47,99-106, AND STORES THEM IN V(.,5-7).
+C THESE ARE USED IN POST AND PRE MULTIPLICATION BY
+C Q-TRANSP AND Q TO OBTAIN THE DIAGONAL ELEMENTS OF
+C THE HAT MATRIX WHICH ARE STORED IN THE VECTOR LEV.
+C THE TRACE OF THE HAT MATRIX IS RETURNED IN TR.
+c
+ double precision V(NMAX,7),TR,W1,W2,W3,SIX1MP
+ double precision wght(NMAX)
+ double precision LEV(npoint)
+ INTEGER NPM1,NPM2,NPM3,NPOINT
+c
+ NPM1=NPOINT - 1
+ NPM2=NPOINT - 2
+ NPM3=NPOINT - 3
+C RECURSION FOR DIAGONALS OF INVERSE MATRIX
+ V(NPM1,5)=1/V(NPM1,1)
+ V(NPM2,6)=-V(NPM2,2)*V(NPM1,5)
+ V(NPM2,5)=(1/V(NPM2,1)) - V(NPM2,6)*V(NPM2,2)
+ DO 10 I=NPM3,2,-1
+ V(I,7)=-V(I,2)*V(I+1,6) - V(I,3)*V(I+2,5)
+ V(I,6)=-V(I,2)*V(I+1,5) - V(I,3)*V(I+1,6)
+ V(I,5)=(1/V(I,1))- V(I,2)*V(I,6) - V(I,3)*V(I,7)
+ 10 CONTINUE
+C POSTMULTIPLY BY (D**2)*Q-TRANSP AND PREMULTIPLY BY Q TO
+C OBTAIN DIAGONALS OF MATRIX PROPORTIONAL TO THE
+C IDENTITY MINUS THE HAT MATRIX.
+ W1=1.d0/V(1,4)
+ W2= -1.d0/V(2,4) - 1.d0/V(1,4)
+ W3=1.d0/V(2,4)
+ V(1,1)=V(2,5)*W1
+ V(2,1)=W2*V(2,5) + W3*V(2,6)
+ V(2,2)=W2*V(2,6) + W3*V(3,5)
+ LEV(1)=1.d0 - (WGHT(1)**2)*SIX1MP*W1*V(1,1)
+ LEV(2)=1.d0 - (WGHT(2)**2)*SIX1MP*(W2*V(2,1) + W3*V(2,2))
+ TR=LEV(1) + LEV(2)
+ DO 20 I=4,NPM1
+ W1=1.d0/V(I-2,4)
+ W2= -1.d0/V(I-1,4) - 1.d0/V(I-2,4)
+ W3=1.d0/V(I-1,4)
+ V(I-1,1)=V(I-2,5)*W1 + V(I-2,6)*W2 + V(I-2,7)*W3
+ V(I-1,2)=V(I-2,6)*W1 + V(I-1,5)*W2 + V(I-1,6)*W3
+ V(I-1,3)=V(I-2,7)*W1 + V(I-1,6)*W2 + V(I,5)*W3
+ LEV(I-1)=1.d0 - (WGHT(I-1)**2)*SIX1MP*(W1*V(I-1,1)
+ . + W2*V(I-1,2) + W3*V(I-1,3))
+ TR= TR + LEV(I-1)
+ 20 CONTINUE
+ W1=1.d0/V(NPM2,4)
+ W2= -1.d0/V(NPM1,4) - 1.d0/V(NPM2,4)
+ W3=1.d0/V(NPM1,4)
+ V(NPOINT,1)=V(NPM1,5)*W3
+ V(NPM1,1)=V(NPM2,5)*W1 + V(NPM2,6)*W2
+ V(NPM1,2)=V(NPM2,6)*W1 + V(NPM1,5)*W2
+ LEV(NPM1)=1.d0 - (WGHT(NPM1)**2)*SIX1MP*(W1*V(NPM1,1)
+ . + W2*V(NPM1,2))
+ LEV(NPOINT)=1.d0 - (WGHT(NPOINT)**2)*SIX1MP*W3*V(NPOINT,1)
+ TR= TR + LEV(NPM1) + LEV(NPOINT)
+ RETURN
+ END
diff --git a/src/dmaket.f b/src/dmaket.f
new file mode 100644
index 0000000..97176af
--- /dev/null
+++ b/src/dmaket.f
@@ -0,0 +1,89 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+ subroutine dmaket(m,n,dim,des,lddes,npoly,t,ldt,
+ * wptr,info,ptab,ldptab)
+ integer m,n,dim,lddes,npoly,ldt,wptr(dim),info,ptab(ldptab,dim)
+ double precision des(lddes,dim),t(ldt,npoly)
+c
+c Purpose: create t matrix and append s1 to it.
+c
+c On Entry:
+c m order of the derivatives in the penalty
+c n number of rows in des
+c dim number of columns in des
+c des(lddes,dim) variables to be splined
+c lddes leading dimension of des as declared in the
+c calling program
+c ldt leading dimension of t as declared in the
+c calling program
+c
+c npoly dimension of polynomial part of spline
+c On Exit:
+c t(ldt,npoly+ncov1) [t:s1]
+c info error indication
+c 0 : successful completion
+c 1 : error in creation of t
+c Work Arrays:
+c wptr(dim) integer work vector
+c
+c Subprograms Called Directly:
+c Other - mkpoly
+c
+c
+ integer i,j,k,kk,tt,nt,bptr,eptr
+c
+ info = 0
+c npoly = mkpoly(m,dim)
+ do 5 j=1,n
+ t(j,1)=1.0
+ 5 continue
+ nt = 1
+ if (npoly .gt. 1) then
+ do 10 j=1,dim
+ nt = j + 1
+ wptr(j) = nt
+ ptab(nt,j)= ptab(nt,j) +1
+ do 15 kk = 1, n
+ t(kk,nt)= des(kk,j)
+ 15 continue
+c call dcopy(n,des(1,j),1,t(1,nt),1)
+ 10 continue
+c
+c get cross products of x's in null space for m>2
+c
+c WARNING: do NOT change next do loop unless you fully understand:
+c This first gets x1*x1, x1*x2, x1*x3, then
+c x2*x2, x2*x3, and finally x3*x3 for dim=3,n=3
+c wptr(1) is always at the beginning of the current
+c level of cross products, hence the end of the
+c previous level which is used for the next.
+c wptr(j) is at the start of xj * (previous level)
+c
+ do 50 k=2,m-1
+ do 40 j=1,dim
+ bptr = wptr(j)
+ wptr(j) = nt + 1
+ eptr = wptr(1) - 1
+ do 30 tt=bptr,eptr
+ nt = nt + 1
+ do 21 jj= 1,dim
+ ptab(nt,jj)= ptab(tt,jj)
+ 21 continue
+ ptab( nt,j)= 1+ ptab( nt,j)
+ do 20 i=1,n
+ t(i,nt) = des(i,j) * t(i,tt)
+ 20 continue
+ 30 continue
+ 40 continue
+ 50 continue
+ if (nt .ne. npoly) then
+ info = 1
+ return
+ endif
+ endif
+c
+ end
diff --git a/src/drdfun.f b/src/drdfun.f
new file mode 100644
index 0000000..e29ab4f
--- /dev/null
+++ b/src/drdfun.f
@@ -0,0 +1,28 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+ subroutine drdfun(n, d2, par)
+ double precision d2(n), par(2), dtemp
+ integer n
+ if( int(par(2)).eq.0) then
+
+ do 5 k =1,n
+ d2(k)= par(1)*(d2(k))**( par(1)-1)
+ 5 continue
+ else
+ do 6 k=1,n
+ dtemp= d2(k)
+ if( dtemp.GE.1e-35) then
+c
+c NOTE factor of 2 adjusts for log being applied to
+c distance rather than squared distance
+ d2(k)= (par(1)*log(dtemp) +1)*(dtemp)**( par(1)-1)/2
+ else
+ d2(k)=0.0
+ endif
+ 6 continue
+ endif
+ return
+ end
diff --git a/src/dsetup.f b/src/dsetup.f
new file mode 100644
index 0000000..4fafecd
--- /dev/null
+++ b/src/dsetup.f
@@ -0,0 +1,73 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+ SUBROUTINE dSETUP(X,WGHT,Y,NPOINT,V,QTY,NMAX,itp,ierr)
+C PUT DELX=X(.+1)-X(.) INTO V(.,4)
+C PUT THE THREE BANDS OF THE MATRIX Q-TRANSP*D INTO
+C V(.,1-3)
+C PUT THE THREE BANDS OF (D*Q)-TRANSP*(D*Q) AT AND
+C ABOVE THE DIAGONAL INTO V(.,5-7)
+C HERE Q IS THE TRIDIAGONAL MATRIX OF ORDER (NPOINT
+C -2,NPOINT) THAT SATISFIES Q-TRANSP*T=0 AND WGHT
+C IS THE DIAGONAL MATRIX WHOSE DIAGONAL ENTRIES
+C ARE THE SQUARE ROOTS OF THE WEIGHTS USED IN THE
+C PENALIZED LEAST-SQUARES CRITERION
+c
+ implicit double precision (a-h,o-z)
+ double precision WGHT(NMAX),X(NMAX),y(NMAX)
+ double precision QTY(NMAX),V(NMAX,7)
+ double precision DIFF,PREV
+ INTEGER NPOINT,I,NPM1
+c
+ NPM1=NPOINT -1
+ V(1,4)=X(2)-X(1)
+ if(v(1,4).eq.0.d0) then
+ ierr=5
+ return
+ endif
+
+ DO 11 I=2,NPM1
+ V(I,4)=X(I+1) - X(I)
+ if(v(I,4).eq.0.d0) then
+ ierr=5
+ return
+ endif
+ if(itp.eq.0) then
+ V(I,1)=WGHT(I-1)/V(I-1,4)
+ V(I,2)=-WGHT(I)/V(I,4) - WGHT(I)/V(I-1,4)
+ V(I,3)=WGHT(I+1)/V(I,4)
+ else
+ V(I,1)=1.d0/V(I-1,4)
+ V(I,2)=-1.d0/V(I,4) - 1.0/V(I-1,4)
+ V(I,3)=1.d0/V(I,4)
+ endif
+ 11 continue
+c
+ V(NPOINT,1)=0.d0
+ DO 12 I=2,NPM1
+ V(I,5)=V(I,1)**2 + V(I,2)**2 + V(I,3)**2
+ 12 continue
+ IF(NPM1 .LT. 3)GO TO 14
+ DO 13 I=3,NPM1
+ V(I-1,6)=V(I-1,2)*V(I,1) + V(I-1,3)*V(I,2)
+ 13 continue
+ 14 V(NPM1,6)=0.d0
+ IF(NPM1 .LT. 4)GO TO 16
+ DO 15 I=4,NPM1
+ V(I-2,7)=V(I-2,3)*V(I,1)
+ 15 continue
+ 16 V(NPM1-1,7)=0.d0
+ V(NPM1,7)=0.d0
+c
+C CONSTRUCT Q-TRANSP. * Y IN QTY
+ PREV=(Y(2) - Y(1))/V(1,4)
+ DO 21 I=2,NPM1
+ DIFF=(Y(I+1)-Y(I))/V(I,4)
+ QTY(I)=DIFF - PREV
+ PREV=DIFF
+ 21 continue
+c
+ RETURN
+ END
diff --git a/src/evlpoly.f b/src/evlpoly.f
new file mode 100644
index 0000000..01243d3
--- /dev/null
+++ b/src/evlpoly.f
@@ -0,0 +1,34 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+
+ subroutine evlpoly(x,n,coef,j,result)
+
+c evaluates a polynomial: coef(1) + sum_i=2,j coef(i)*x**(i-1)
+
+ integer j,n, i
+ double precision x(n), result(n), coef(j)
+ double precision temp, tempx, temp2
+
+ do 10 i = 1,n
+
+ temp= coef(1)
+ tempx= x(i)
+ temp2= tempx
+
+c temp is set to the constant term; now loop over the higher powers
+
+ do 20 kk= 2, j
+ temp= temp + coef(kk)* temp2
+ temp2= temp2*tempx
+ 20 continue
+
+ result(i)= temp
+
+ 10 continue
+
+ return
+ end
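The loop above evaluates the polynomial by carrying a running power of x (temp2) rather than recomputing x**k at every term. A Python equivalent of one evaluation (illustrative sketch):

```python
def evlpoly(x, coef):
    # coef[0] + coef[1]*x + ... + coef[-1]*x**(len(coef)-1)
    out = coef[0]
    p = x                 # running power of x
    for c in coef[1:]:
        out += c * p
        p *= x
    return out
```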
diff --git a/src/evlpoly2.f b/src/evlpoly2.f
new file mode 100644
index 0000000..db5f918
--- /dev/null
+++ b/src/evlpoly2.f
@@ -0,0 +1,41 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+
+ subroutine evlpoly2(x,n,nd,ptab,j,coef,result)
+
+c evaluates a multivariate polynomial: sum_k=1,j coef(k)*prod_l x(.,l)**ptab(k,l)
+
+ integer j,n, i, nd
+ double precision x(n,nd), result(n), coef(j)
+ integer ptab(j,nd)
+ double precision temp, temp2
+
+ do 10 i = 1,n
+
+ temp= 0
+
+c for a given vector accumulate the polynomial terms
+
+ do 20 kk= 1, j
+ temp2 =1.0
+
+ do 30 l=1, nd
+ if( ptab(kk,l).ne.0) then
+ temp2= temp2* (x(i,l)**ptab(kk,l))
+ endif
+ 30 continue
+
+ temp = temp + temp2*coef(kk)
+
+ 20 continue
+
+ result(i)= temp
+
+ 10 continue
+
+ return
+ end
diff --git a/src/expfn.f b/src/expfn.f
new file mode 100644
index 0000000..0020c66
--- /dev/null
+++ b/src/expfn.f
@@ -0,0 +1,16 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+ subroutine expfn(n,d2, par)
+ double precision d2(n), par(1)
+ integer n
+
+ do 5 k =1,n
+ d2(k)= exp(-1*d2(k)**(par( 1)/2))
+ 5 continue
+
+ return
+ end
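expfn receives *squared* distances and overwrites them with exp(-d**p), since exp(-(d**2)**(p/2)) = exp(-d**p) with p = par(1); p = 1 gives the exponential covariance and p = 2 the Gaussian. The same transform in Python (sketch; returns a new list rather than overwriting in place):

```python
import math

def expfn(d2, p):
    # d2 holds squared distances; result is exp(-d**p) for each entry
    return [math.exp(-(s ** (p / 2.0))) for s in d2]
```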
diff --git a/src/expfnC.c b/src/expfnC.c
new file mode 100644
index 0000000..ba9d571
--- /dev/null
+++ b/src/expfnC.c
@@ -0,0 +1,41 @@
+/*
+c**** # fields is a package for analysis of spatial data written for
+c**** # the R software environment .
+c**** # Copyright (C) 2017
+c**** # University Corporation for Atmospheric Research (UCAR)
+c**** # Contact: Douglas Nychka, nychka at ucar.edu,
+c**** # National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+c**** #
+c**** # This program is free software; you can redistribute it and/or modify
+c**** # it under the terms of the GNU General Public License as published by
+c**** # the Free Software Foundation; either version 2 of the License, or
+c**** # (at your option) any later version.
+c**** # This program is distributed in the hope that it will be useful,
+c**** # but WITHOUT ANY WARRANTY; without even the implied warranty of
+c**** # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+c**** # GNU General Public License for more details.
+*/
+
+#include <R.h>
+#include <Rinternals.h>
+#include <R_ext/Arith.h>
+#include <Rmath.h>
+#include <float.h>
+
+SEXP expfnC(SEXP n, SEXP d2, SEXP par) {
+ int In, i;
+ double Dpar, par2;
+ double *Pd2;
+
+  // cast R variables to C variables; d2 is overwritten in place
+ In = INTEGER(n)[0];
+ Dpar = REAL(par)[0];
+ par2 = Dpar/2;
+ Pd2 = REAL(d2);
+
+ for(i = 0; i < In; i++) {
+ Pd2[i] = exp(-1*pow(Pd2[i], par2));
+ }
+
+ return(R_NilValue);
+}
diff --git a/src/ifind.f b/src/ifind.f
new file mode 100644
index 0000000..2ad50aa
--- /dev/null
+++ b/src/ifind.f
@@ -0,0 +1,33 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+ INTEGER FUNCTION IFIND(X,XK,N)
+C FIND I SUCH THAT XK(I) LE X LT XK(I+1)
+C IFIND=0 IF X LT XK(1)
+C IFIND=N IF X GT XK(N)
+C J F MONAHAN JAN 1982 DEPT OF STAT, N C S U, RALEIGH, NC 27650
+ double precision X,XK(N)
+ IFIND=0
+ IF(X.LT.XK(1)) RETURN
+ IFIND=N
+ IF(X.GE.XK(N)) RETURN
+ IL=1
+ IU=N
+ 1 IF(IU-IL.LE.1) GO TO 4
+ I=(IU+IL)/2
+C IF(X-XK(I)) 2,5,3
+ IF( (X-XK(I)).eq.0) go to 5
+ IF( (X-XK(I)).gt.0) go to 3
+C following used to have line number 2
+ IU=I
+ GO TO 1
+ 3 IL=I
+ GO TO 1
+ 4 IFIND=IL
+ RETURN
+ 5 IFIND=I
+ RETURN
+ END
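IFIND is the bisection search used by the spline evaluator: it returns the (1-based) index i with xk(i) <= x < xk(i+1), with 0 and n flagging points left and right of the knot range. The same search in Python (sketch, keeping the 1-based return convention):

```python
def ifind(x, xk):
    # return i (1-based) with xk[i-1] <= x < xk[i];
    # 0 if x < xk[0], len(xk) if x >= xk[-1]
    n = len(xk)
    if x < xk[0]:
        return 0
    if x >= xk[n - 1]:
        return n
    il, iu = 1, n                  # bracketing interval (1-based)
    while iu - il > 1:
        i = (iu + il) // 2
        if x >= xk[i - 1]:
            il = i
        else:
            iu = i
    return il
```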
diff --git a/src/igpoly.f b/src/igpoly.f
new file mode 100644
index 0000000..f24a316
--- /dev/null
+++ b/src/igpoly.f
@@ -0,0 +1,82 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+ subroutine igpoly(nx, xg,ny,yg,np,xp,yp,ind)
+!----------------------------------------------------------------------
+! This subroutine determines whether or not a 2-d point (xg(i),yg(j))
+! is inside a (closed) set of points xp,yp that
+! are assumed to form a polygon.
+! The result is a matrix giving this test for all
+! combinations of xg and yg.
+!----------------------------------------------------------------------
+
+ integer np
+! # of points in polygon
+ integer nx,ny
+! # points to check
+ real xg(nx)
+! x grid values to check
+ real yg(ny)
+! y grid values to check
+ real xp(np)
+! 2d-locations of polygon
+ real yp(np)
+ real x1, x2, y1,y2
+! min and max of x and y
+ real temp, xt, yt
+ integer ind(nx,ny)
+! THE ANSWER : ind(i)=1 if point xd(i),yd(i) is
+! in polygon 0 otherwise
+ integer in
+ x1= xp(1)
+ x2= xp(2)
+ y1= yp(1)
+ y2= yp(1)
+!
+! find the minima and maxima of the polygon coordinates
+! i.e. the smallest rectangle containing the polygon.
+ do j = 1,np
+
+ temp= xp(j)
+ if( temp.lt.x1) then
+ x1 = temp
+ endif
+ if( temp.gt.x2) then
+ x2 = temp
+ endif
+
+ temp= yp(j)
+ if( temp.lt.y1) then
+ y1 = temp
+ endif
+ if( temp.gt.y2) then
+ y2 = temp
+ endif
+ enddo
+
+! loop over all
+! grid points
+
+ do i = 1, nx
+ do j = 1,ny
+ xt= xg(i)
+ yt= yg(j)
+! quick test that point is inside the bounding rectangle
+! of the polygon
+
+ if( (xt.le. x2).and. (xt.ge.x1).and.
+ * (yt.le.y2).and.(yt.ge.y1) ) then
+ call inpoly2(xt,yt,np,xp,yp, in)
+ ind(i,j)=in
+ else
+ ind(i,j)= 0
+ endif
+
+ enddo
+ enddo
+
+ return
+ end
diff --git a/src/init.c b/src/init.c
new file mode 100644
index 0000000..f4e5467
--- /dev/null
+++ b/src/init.c
@@ -0,0 +1,77 @@
+/*
+c**** # fields is a package for analysis of spatial data written for
+c**** # the R software environment .
+c**** # Copyright (C) 2017
+c**** # University Corporation for Atmospheric Research (UCAR)
+c**** # Contact: Douglas Nychka, nychka at ucar.edu,
+c**** # National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+c**** #
+c**** # This program is free software; you can redistribute it and/or modify
+c**** # it under the terms of the GNU General Public License as published by
+c**** # the Free Software Foundation; either version 2 of the License, or
+c**** # (at your option) any later version.
+c**** # This program is distributed in the hope that it will be useful,
+c**** # but WITHOUT ANY WARRANTY; without even the implied warranty of
+c**** # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+c**** # GNU General Public License for more details.
+*/
+#include <R.h>
+#include <Rinternals.h>
+#include <stdlib.h> // for NULL
+#include <R_ext/Rdynload.h>
+
+/* FIXME:
+ Check these declarations against the C/Fortran source code.
+*/
+
+/* .Call calls */
+extern SEXP addToDiagC(SEXP, SEXP, SEXP);
+extern SEXP compactToMatC(SEXP, SEXP, SEXP, SEXP, SEXP, SEXP);
+extern SEXP ExponentialUpperC(SEXP, SEXP, SEXP);
+extern SEXP multebC(SEXP, SEXP, SEXP, SEXP, SEXP, SEXP, SEXP, SEXP);
+extern SEXP RdistC(SEXP, SEXP);
+
+/* .Fortran calls */
+extern void F77_NAME(css)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
+extern void F77_NAME(ddfind)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
+extern void F77_NAME(dmaket)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
+extern void F77_NAME(evlpoly)(void *, void *, void *, void *, void *);
+extern void F77_NAME(evlpoly2)(void *, void *, void *, void *, void *, void *, void *);
+extern void F77_NAME(igpoly)(void *, void *, void *, void *, void *, void *, void *, void *);
+extern void F77_NAME(inpoly)(void *, void *, void *, void *, void *, void *, void *);
+extern void F77_NAME(mltdrb)(void *, void *, void *, void *, void *, void *, void *, void *, void *);
+extern void F77_NAME(multrb)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
+extern void F77_NAME(multwendlandg)(void *, void *, void *, void *, void *, void *, void *, void *, void *);
+extern void F77_NAME(radbas)(void *, void *, void *, void *, void *, void *, void *);
+extern void F77_NAME(rcss)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *);
+
+static const R_CallMethodDef CallEntries[] = {
+ {"addToDiagC", (DL_FUNC) &addToDiagC, 3},
+ {"compactToMatC", (DL_FUNC) &compactToMatC, 6},
+ {"ExponentialUpperC", (DL_FUNC) &ExponentialUpperC, 3},
+ {"multebC", (DL_FUNC) &multebC, 8},
+ {"RdistC", (DL_FUNC) &RdistC, 2},
+ {NULL, NULL, 0}
+};
+
+static const R_FortranMethodDef FortranEntries[] = {
+ {"css", (DL_FUNC) &F77_NAME(css), 15},
+ {"ddfind", (DL_FUNC) &F77_NAME(ddfind), 10},
+ {"dmaket", (DL_FUNC) &F77_NAME(dmaket), 12},
+ {"evlpoly", (DL_FUNC) &F77_NAME(evlpoly), 5},
+ {"evlpoly2", (DL_FUNC) &F77_NAME(evlpoly2), 7},
+ {"igpoly", (DL_FUNC) &F77_NAME(igpoly), 8},
+ {"inpoly", (DL_FUNC) &F77_NAME(inpoly), 7},
+ {"mltdrb", (DL_FUNC) &F77_NAME(mltdrb), 9},
+ {"multrb", (DL_FUNC) &F77_NAME(multrb), 10},
+ {"multwendlandg", (DL_FUNC) &F77_NAME(multwendlandg), 9},
+ {"radbas", (DL_FUNC) &F77_NAME(radbas), 7},
+ {"rcss", (DL_FUNC) &F77_NAME(rcss), 17},
+ {NULL, NULL, 0}
+};
+
+void R_init_fields(DllInfo *dll)
+{
+ R_registerRoutines(dll, NULL, CallEntries, FortranEntries, NULL);
+ R_useDynamicSymbols(dll, FALSE);
+}
diff --git a/src/inpoly.f b/src/inpoly.f
new file mode 100644
index 0000000..e38d128
--- /dev/null
+++ b/src/inpoly.f
@@ -0,0 +1,163 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+ subroutine inpoly(nd, xd,yd,np,xp,yp,ind)
+!----------------------------------------------------------------------
+! This subroutine determines whether or not a 2-d point (xd(j),yd(j))
+! is inside a (closed) set of points xp,yp that
+! are assumed to form a polygon.
+!----------------------------------------------------------------------
+
+ integer np
+! # of points in polygon
+ integer nd
+! # points to check
+ real xd(nd)
+! 2d-locations to check
+ real yd(nd)
+ real xp(np)
+! 2d-locations of polygon
+ real yp(np)
+ real x1, x2, y1,y2
+! min and max of x and y
+ real temp, xt, yt
+ integer ind(nd)
+! THE ANSWER : ind(i)=1 if point xd(i),yd(i) is
+! in polygon 0 otherwise
+ integer in
+ x1= xp(1)
+ x2= xp(2)
+ y1= yp(1)
+ y2= yp(1)
+!
+! find the minima and maxima of the polygon coordinates
+! i.e. the smallest rectangle containing the polygon.
+ do j = 1,np
+
+ temp= xp(j)
+ if( temp.lt.x1) then
+ x1 = temp
+ endif
+ if( temp.gt.x2) then
+ x2 = temp
+ endif
+
+ temp= yp(j)
+ if( temp.lt.y1) then
+ y1 = temp
+ endif
+ if( temp.gt.y2) then
+ y2 = temp
+ endif
+ enddo
+
+ do j = 1,nd
+ xt= xd(j)
+ yt= yd(j)
+! quick test that point is inside the bounding rectangle
+! if not, it is not inside the polygon
+ if( (xt.le. x2).and. (xt.ge.x1).and.
+ * (yt.le.y2).and.(yt.ge.y1) ) then
+ call inpoly2(xt,yt,np,xp,yp, in)
+ ind(j)=in
+ else
+ ind(j)= 0
+ endif
+
+ enddo
+
+ return
+ end
+
+ subroutine inpoly2(xpnt,ypnt,np,xp,yp,in)
+C
+ parameter (pi=3.14159265358979,ttpi=2.*pi)
+C
+ dimension xp(np),yp(np)
+ real xpnt, ypnt
+ integer in
+C
+C----------------------------------------------------------------------
+C
+C THE VALUE OF THIS FUNCTION IS NON-ZERO IF AND ONLY IF (XPNT,YPNT)
+C IS INSIDE OR *ON* THE POLYGON DEFINED BY THE POINTS (XP(I),YP(I)),
+C FOR I FROM 1 TO NP.
+C
+C THE INPUT POLYGON DOES NOT HAVE TO BE A CLOSED POLYGON, THAT IS
+C IT DOES NOT HAVE TO BE THAT (XP(1),YP(1)) = (XP(NP),YP(NP)).
+C
+C----------------------------------------------------------------------
+C
+C DETERMINE THE NUMBER OF POINTS TO LOOK AT (DEPENDING ON WHETHER THE
+C CALLER MADE THE LAST POINT A DUPLICATE OF THE FIRST OR NOT).
+C
+ if (xp(np).eq.xp(1) .and. yp(np).eq.yp(1)) then
+ npts = np-1
+ else
+ npts = np
+ end if
+
+ in = 0
+! ASSUME POINT IS OUTSIDE
+
+C --- ------------------------------------------------------------------
+C --- CHECK TO SEE IF THE POINT IS ON THE POLYGON.
+C --- ------------------------------------------------------------------
+
+ do ipnt = 1,npts
+ if (xpnt .eq. xp(ipnt) .and. ypnt .eq. yp(ipnt) ) then
+ in = 1
+ goto 999
+! EARLY EXIT
+ endif
+ enddo
+
+C --- ------------------------------------------------------------------
+C --- COMPUTE THE TOTAL ANGULAR CHANGE DESCRIBED BY A RAY EMANATING
+C --- FROM THE POINT (XPNT,YPNT) AND PASSING THROUGH A POINT THAT
+C --- MOVES AROUND THE POLYGON.
+C --- ------------------------------------------------------------------
+
+ anch = 0.
+
+ inxt = npts
+ xnxt = xp(npts)
+ ynxt = yp(npts)
+ anxt = atan2(ynxt-ypnt, xnxt-xpnt)
+
+ do 100 ipnt=1,npts
+
+ ilst = inxt
+ xlst = xnxt
+ ylst = ynxt
+ alst = anxt
+
+ inxt = ipnt
+ xnxt = xp(inxt)
+ ynxt = yp(inxt)
+ anxt = atan2(ynxt-ypnt, xnxt-xpnt)
+
+ adif = anxt-alst
+
+ if (abs(adif) .gt. pi) adif = adif - sign(ttpi,adif)
+
+ anch = anch + adif
+
+ 100 continue
+
+C --- ------------------------------------------------------------------
+C --- IF THE POINT IS OUTSIDE THE POLYGON, THE TOTAL ANGULAR CHANGE
+C --- SHOULD BE EXACTLY ZERO, WHILE IF THE POINT IS INSIDE THE POLYGON,
+C --- THE TOTAL ANGULAR CHANGE SHOULD BE EXACTLY PLUS OR MINUS TWO PI.
+C --- WE JUST TEST FOR THE ABSOLUTE VALUE OF THE CHANGE BEING LESS
+C --- THAN OR EQUAL TO PI.
+C --- ------------------------------------------------------------------
+
+ if (abs(anch) .ge. pi) in = 1
+
+ 999 continue
+ return
+ end
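inpoly2 uses the winding-angle test described in its comments: sum the wrapped changes in the angle of a ray from the query point to each polygon vertex; the total is near zero outside and near plus or minus two pi inside. A Python sketch of the same test (names are illustrative; vertices exactly on an edge are not specially handled here, just as in the angle loop above):

```python
import math

def in_polygon(x, y, xp, yp):
    # drop a duplicated closing vertex, as inpoly2 does
    if (xp[0], yp[0]) == (xp[-1], yp[-1]):
        xp, yp = xp[:-1], yp[:-1]
    pts = list(zip(xp, yp))
    if (x, y) in pts:                 # on a vertex counts as inside
        return 1
    total = 0.0
    a_prev = math.atan2(pts[-1][1] - y, pts[-1][0] - x)
    for px, py in pts:
        a = math.atan2(py - y, px - x)
        d = a - a_prev
        if abs(d) > math.pi:          # wrap the jump into (-pi, pi]
            d -= math.copysign(2.0 * math.pi, d)
        total += d
        a_prev = a
    return 1 if abs(total) >= math.pi else 0
```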
diff --git a/src/mltdrb.f b/src/mltdrb.f
new file mode 100644
index 0000000..8df267c
--- /dev/null
+++ b/src/mltdrb.f
@@ -0,0 +1,50 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+
+C** evaluates partial derivatives of radial basis functions with
+c** nodes at x2 and at the points x1
+c
+ subroutine mltdrb( nd,x1,n1, x2, n2, par, c,h,work)
+ implicit double precision (a-h,o-z)
+ integer nd,n1,n2,ic, ivar, ir, j
+ double precision par(2), x1(n1,nd)
+ double precision x2(n2,nd), c(n2), h(n1, nd)
+ double precision sum, sum2
+ double precision work( n2)
+c double precision ddot
+ do 1000 ivar=1, nd
+c****** work array must be dimensioned to size n2
+c **** loop through columns of output matrix K
+c*** outermost loop over columns of x1 and x2 should
+c*** help to access adjacent values in memory.
+ do 5 ir= 1, n1
+c evaluate all basis functions at x1(j,.)
+ do 10 j =1,n2
+c zero out sum accumulator
+ sum=0.0
+ do 15 ic=1,nd
+c** accumulate squared differences
+ sum= sum+ (x1(ir,ic)- x2(j,ic))**2
+ 15 continue
+ work(j)=sum
+ 10 continue
+C**** evaluate squared distances with basis functions.
+ call drdfun( n2,work(1) ,par(1) )
+ do 11 j= 1, n2
+ work( j)= 2.0*work(j)*(x1(ir,ivar)- x2(j,ivar))
+ 11 continue
+c*****now the dot product you have all been waiting for!
+ sum2= 0.0
+ do 12 j = 1, n2
+ sum2 = sum2 + work(j)*c(j)
+ 12 continue
+c h(ir,ivar)= ddot( n2, work(1), 1, c(1),1)
+ h(ir,ivar) = sum2
+ 5 continue
+ 1000 continue
+ return
+ end
diff --git a/src/mltdtd.f b/src/mltdtd.f
new file mode 100644
index 0000000..e2875ba
--- /dev/null
+++ b/src/mltdtd.f
@@ -0,0 +1,61 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+
+ subroutine mltdtd( nd,x1,n1,np,ptab, d,h)
+ implicit double precision (a-h,o-z)
+ integer nd,n1,np, ivar
+
+ double precision x1(n1,nd)
+ double precision d(np), h(n1, nd)
+ double precision work, prod, xp,xx
+ integer ptab(np,nd)
+c outermost loop is over the variable w.r.t. the partial derivative
+
+ do 1000 ivar=1, nd
+c****** work is a scalar accumulator for the dot product
+
+ do 5 ir= 1, n1
+c next loop is over rows of x1
+
+c
+
+c evaluate all partials of polynomials at x1(j,.)
+c take ddot product of this vector with d
+c this is the element to return in h(ir,ivar)
+ work=0.0
+ do 10 j =1,np
+ prod=0.0
+ ipv= ptab( j,ivar)
+ if( ipv.gt.0) then
+ prod=1.0
+ do 11 k= 1, nd
+ ip= ptab(j,k)
+c ip is the power of the kth variable in the jth poly
+ if( ip.eq.0) goto 11
+ xx= x1(ir,k)
+c**
+ if( k.eq.ivar) then
+ if( ip.eq.1) then
+ xp=1.0
+ else
+ xp= (ip)* xx**(ip-1)
+ endif
+ else
+ xp= xx**(ip)
+ endif
+ prod=prod*xp
+ 11 continue
+ endif
+ work= work + prod* d(j)
+10 continue
+
+c
+ h(ir,ivar)=work
+ 5 continue
+ 1000 continue
+ return
+ end
diff --git a/src/multW.f b/src/multW.f
new file mode 100644
index 0000000..5a79273
--- /dev/null
+++ b/src/multW.f
@@ -0,0 +1,41 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+ subroutine multWendlandG( mx, my, deltaX, deltaY,
+ * nc, center, coef, h, flag)
+ integer mx, my, nc, flag
+ double precision deltaX, deltaY, center(nc,2), coef(nc)
+ double precision h(mx,my)
+c
+ integer j, k, l, m1, m2, n1, n2
+ double precision kstar, lstar, d, wendlandFunction
+ do j = 1, nc
+ kStar= center(j,1)
+ lStar= center(j,2)
+ m1 = max( ceiling(-deltaX + kStar), 1)
+ m2 = min( floor( deltaX + kStar), mx)
+ n1 = max( ceiling(-deltaY + lStar), 1)
+ n2 = min( floor( deltaY + lStar), my)
+ do l = n1, n2
+ do k = m1, m2
+ d = dsqrt( ((k-kStar)/deltaX)**2 + ((l- lStar)/deltaY)**2)
+ h(k,l) = h(k,l) + wendlandFunction( d)* coef(j)
+c h(k,l) = h(k,l) + 2
+ enddo
+ enddo
+ enddo
+ flag=0
+ return
+ end
+
+ double precision function wendlandFunction(d)
+ double precision d
+ if( d.GE.1) then
+ wendlandFunction = 0
+ else
+ wendlandFunction = ((1-d)**6) * (35*d**2 + 18*d + 3)/3
+ endif
+ return
+ end
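For reference, the compactly supported kernel computed by wendlandFunction can be written directly. This is a sketch of the same piecewise formula, not code from the package:

```python
def wendland_function(d):
    """Wendland-type kernel: zero for d >= 1, and
    ((1 - d)^6)(35 d^2 + 18 d + 3)/3 inside the unit interval, so w(0) = 1."""
    if d >= 1.0:
        return 0.0
    return ((1.0 - d) ** 6) * (35.0 * d ** 2 + 18.0 * d + 3.0) / 3.0
```

Because the kernel vanishes for d >= 1, multWendlandG only visits the grid cells inside the support box (m1..m2, n1..n2) around each center, rather than the whole mx-by-my grid.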
diff --git a/src/multebC.c b/src/multebC.c
new file mode 100644
index 0000000..edb4806
--- /dev/null
+++ b/src/multebC.c
@@ -0,0 +1,81 @@
+/*
+c**** # fields is a package for analysis of spatial data written for
+c**** # the R software environment .
+c**** # Copyright (C) 2017
+c**** # University Corporation for Atmospheric Research (UCAR)
+c**** # Contact: Douglas Nychka, nychka at ucar.edu,
+c**** # National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+c**** #
+c**** # This program is free software; you can redistribute it and/or modify
+c**** # it under the terms of the GNU General Public License as published by
+c**** # the Free Software Foundation; either version 2 of the License, or
+c**** # (at your option) any later version.
+c**** # This program is distributed in the hope that it will be useful,
+c**** # but WITHOUT ANY WARRANTY; without even the implied warranty of
+c**** # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+c**** # GNU General Public License for more details.
+*/
+#include <R.h>
+#include <Rinternals.h>
+#include <R_ext/Arith.h>
+#include <Rmath.h>
+#include <float.h>
+#include <math.h>
+#include <R_ext/BLAS.h>
+#include <R_ext/Print.h>
+#include <unistd.h>
+
+SEXP multebC(SEXP nd, SEXP x1, SEXP n1, SEXP x2, SEXP n2, SEXP par, SEXP c, SEXP work) {
+ // NOTE: in original multeb function in fields, "h" was passed but not used and returned as an answer.
+ // h is not an argument to this function, but "ans" is equivalent to h and is allocated in C and returned.
+ int Ind, In1, In2, r1, r2, d;
+ double sum;
+ double *Px1, *Px2, *Pc, *Pwork, *cans;
+ SEXP ans;
+ void expfnC( SEXP, SEXP, SEXP);
+ // cast R variables to C variables
+ Ind = INTEGER(nd)[0];
+ In1 = INTEGER(n1)[0];
+ In2 = INTEGER(n2)[0];
+ Px1 = REAL(x1);
+ Px2 = REAL(x2);
+ Pc = REAL(c);
+ Pwork = REAL(work);
+ // const int tmp = 1;
+ const int cn2 = In2;
+
+ // allocate answer vector (corresponds to h in fields' multeb)
+ ans = PROTECT(allocVector(REALSXP, In1));
+ cans = REAL(ans);
+
+  // work array must be dimensioned to size n2
+  // outermost loop over columns of x1 and x2 should reduce paging
+ for(r1 = 0; r1 < In1; r1++) {
+ // evaluate all basis functions at x1[r2,.]
+
+ for(r2 = 0; r2 < In2; r2++) {
+ // zero out sum accumulator
+ sum=0.0;
+
+ for(d = 0; d < Ind; d++) {
+ sum += pow(fabs(Px1[In1*d+r1] - Px2[In2*d+r2]), 2.0);
+ }
+
+ Pwork[r2]=sum;
+ }
+
+ // evaluate squared distances with basis functions.
+ expfnC(n2, work, par);
+
+ // now the dot product you have all been waiting for!
+ sum=0.0;
+ for(d = 0; d < cn2; d++) {
+ sum += Pwork[d]*Pc[d] ;
+ }
+ // cans[r1] = ddot_(&cn2, Pwork, &tmp, Pc, &tmp);
+ cans[r1] = sum;
+ }
+
+ UNPROTECT(1);
+ return(ans);
+}
diff --git a/src/multrb.f b/src/multrb.f
new file mode 100644
index 0000000..5d49da1
--- /dev/null
+++ b/src/multrb.f
@@ -0,0 +1,45 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+C** evaluates radial basis functions
+c**** K_ij= radfun( distance( x1_i, x2_j))
+c**** and does the multiplication h= Kc
+c**** K is n1Xn2
+c**** h is n1Xn3
+c***** c is n2xn3
+
+
+ subroutine multrb( nd,x1,n1, x2,n2, par, c,n3,h,work)
+ implicit double precision (a-h,o-z)
+ integer nd,n1,n2,n3,ic, jc,j
+ double precision par(2),x1(n1,nd), x2(n2,nd), c(n2,n3), h(n1,n3)
+ double precision work( n2)
+c double precision ddot
+ double precision sum, sum2
+ double precision radfun
+
+c****** work array must be dimensioned to size n2
+c **** loop through columns of output matrix K
+ do 5 ir= 1, n1
+ do 10 j =1,n2
+ sum=0.0
+ do 15 ic=1,nd
+c** accumulate squared differences
+ sum= sum+ (x1(ir,ic)- x2(j,ic))**2
+ 15 continue
+ work(j)=radfun( sum, par(1), par(2))
+ 10 continue
+c***** dot product for matrix multiplication
+ do 30 jc=1,n3
+ sum2= 0.0
+ do 12 j = 1, n2
+ sum2 = sum2 + work(j)*c(j, jc)
+ 12 continue
+ h(ir,jc) = sum2
+c h(ir,jc)= ddot( n2, work, 1, c(1,jc),1)
+ 30 continue
+ 5 continue
+ return
+ end
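multrb forms h = Kc without ever storing the n1-by-n2 matrix K: each row of K is built in the length-n2 work vector and immediately dotted with the columns of c. A Python sketch of that pattern, assuming (as in radfun.f) that the basis function takes a squared distance and two parameters:

```python
import numpy as np

def multrb(x1, x2, c, radfun, par):
    """Sketch of multrb: h = K c with K[i, j] = radfun(||x1_i - x2_j||^2, *par),
    built one row at a time so the full n1 x n2 matrix is never stored."""
    n1, n3 = x1.shape[0], c.shape[1]
    h = np.zeros((n1, n3))
    for i in range(n1):
        d2 = np.sum((x1[i] - x2) ** 2, axis=1)          # squared distances to all x2
        work = np.array([radfun(v, *par) for v in d2])  # one row of K
        h[i] = work @ c                                 # dot with each column of c
    return h
```

This trades the O(n1 * n2) storage of an explicit kernel matrix for an O(n2) workspace, at the cost of recomputing distances row by row.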
diff --git a/src/radbas.f b/src/radbas.f
new file mode 100644
index 0000000..75a6809
--- /dev/null
+++ b/src/radbas.f
@@ -0,0 +1,33 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+c**** subroutine to fill in the omega ( or K) matrix for
+c**** ridge regression S function
+c**** K_ij= radfun( distance( x1_i, x2_j))
+c
+ subroutine radbas( nd,x1,n1, x2,n2, par, k)
+ integer nd,n1,n2,ic
+ double precision par(2), x1(n1,nd), x2(n2,nd), k(n1,n2)
+ double precision xtemp, radfun
+c **** loop through columns of output matrix K
+c*** outermost loop over columns of x1 and x2 should reduce memory swaps
+ do ic= 1, nd
+ do j =1,n2
+ xtemp= x2(j,ic)
+ do i= 1, n1
+c** accumulate squared differences
+ k(i,j)= (x1(i,ic)- xtemp)**2 + k(i,j)
+ enddo
+ enddo
+ enddo
+c**** at this point k( i,j) is the squared distance between x1_i and x2_j
+c*** now evaluate radial basis functions
+ do j =1,n2
+ do i= 1, n1
+ k(i,j)= radfun( k(i,j), par(1), par(2) )
+ enddo
+ enddo
+ return
+ end
diff --git a/src/radfun.f b/src/radfun.f
new file mode 100644
index 0000000..f934322
--- /dev/null
+++ b/src/radfun.f
@@ -0,0 +1,22 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+c evaluates thin plate spline radial basis function
+ double precision function radfun(d2, par1, par2)
+ double precision d2, par1, par2
+ if( d2.lt.1e-20) then
+ d2= 1e-20
+ endif
+ if( int(par2).eq.0) then
+ radfun= (d2)**( par1)
+ else
+c note: d2 is squared distance
+c divide by 2 to have log evaluated on distance
+c as opposed to squared distance.
+ radfun= (log(d2)/2) * ((d2)**( par1))
+ endif
+ return
+ end
+
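The radfun basis above, translated into a short Python sketch (same guard against log(0), same halving of log(d2) so the log acts on distance rather than squared distance; names are ours):

```python
import math

def radfun(d2, par1, par2):
    """Sketch of radfun: thin plate spline radial basis on a squared
    distance d2.  par2 == 0 gives d2**par1; otherwise the log form
    (log(d2)/2) * d2**par1, the /2 turning log(d^2) into log(d)."""
    d2 = max(d2, 1e-20)           # guard against log(0)
    if int(par2) == 0:
        return d2 ** par1
    return (math.log(d2) / 2.0) * d2 ** par1
```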
diff --git a/src/rcss.f b/src/rcss.f
new file mode 100644
index 0000000..b4daae8
--- /dev/null
+++ b/src/rcss.f
@@ -0,0 +1,238 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+
+
+
+
+
+ subroutine rcss(h,npoint,x,y,wt,sy,trace,diag,cv,
+ + ngrid,xg,yg,job,ideriv,din,dout,ierr)
+c This a program to compute a robust univariate spline according to the
+c model:
+c minimize (1/n) sum(i=1,n)[rho( y(i)-f(x(i)) )] + lambda*J(f)
+c over f
+c definition of the rho function and its derivative are in rcsswt
+c and rcssr
+c
+c One way of speeding convergence is to use the results from a previous
+c estimate as the starting values for another estimate. This is
+c particularly appropriate when lambda or a parameter in the rho function
+c is being varied. Moderate changes in lambda will often yield similar
+c estimates. The way to take advantage of this is to pass the weights
+c from the previous fit as the starting values for the next estimate.
+c
+c Arguments of rcss:
+c h : natural log of lambda
+c
+c if h is passed with a value less than or equal to -1000 no smoothing will
+c be done and the spline will interpolate the data points
+c npoint: number of observations
+c (x,y) : pairs of data points to be smoothed
+c x(i) are assumed to be increasing. Repeated
+c observations at the same x are not handled by this routine.
+c sy : on return predicted values of f at x
+c wt : weights used in the iteratively reweighted least
+c squares algorithm to compute robust spline. Vector
+c passed are used as the starting values. Subsequent
+c iterations compute the weights by a call to the
+c subroutine rcsswt
+c
+c that is the linear approximation of the estimator at
+c convergence.
+c trace= tr(A(lambda)) = " effective number of parameters"
+c diag: diagonal elements of A(lambda) ( this is the most
+c computationally intensive operation in this subroutine.)
+c cv: approximate cross-validation function
+c using the linear approximation at convergence
+c
+c ngrid,xg,yg : on return, the ith deriv of the spline estimate is
+c evaluated on a grid, xg, with ngrid points. The
+c values are returned in yg
+c
+c ideriv: 0 = evaluate the function according to the job code
+c 1 = evaluate the first derivative of the function according
+c to the job code
+c 2 = evaluate the second derivative of the function according
+c to the job code
+c
+c din: Vector of input parameters
+c din(1)= cost for cv
+c din(2)= offset for cv
+c din(3)= max number of iterations
+c din(4)= tolerance criterion for convergence
+c
+c din(5)= C scale parameter in robust function (transition from
+c quadratic to linear).
+c din(6)= alpha: half the slope of the rho function for large, positive
+c residuals; the slope for residuals <0 is 1/2 (1-alpha)
+c see comments in rcsswt for the definition of the rho and psi
+c functions
+c
+c job: in decimal job=(a,b,c) (a=igcv, b=igrid)
+c a=0 evaluate spline at x values, return predicted values in sy
+c a=1 same as a=0 plus returning values of trace, diag and cv
+c a=2 do no smoothing, interpolate the data
+c a=3 same as a=1 but use the passed values in din(1) din(2)
+c for computing cv function
+c
+c b=0 do not evaluate the spline at any grid points
+c b=1 evaluate the spline (ideriv=0) or the ith deriv (i=ideriv)
+c of the spline at the (unique) sorted,data points, xg, return yg
+c b=2 evaluate the spline (ideriv=0) or the ith deriv (i=ideriv)
+c of the spline at ngrid equally spaced points between x(1)
+c and x(npoint)
+c b=3 evaluate the spline (ideriv=0) or the ith deriv (i=ideriv)
+c of the spline at ngrid points on grid passed in xg.
+c NOTE: extrapolation of the estimate beyond the range of
+c the x's will just be a linear function.
+c
+c
+c
+c c=1 Assume that the X's are in sorted order
+c c=2 Do not sort the X's; use the current set of keys.
+c Should only be used on a second call to smooth the
+c same design.
+
+c Arguments of subroutine:
+c dout(1)= numit
+c dout(2)=tstop
+c dout(3) = trace
+c dout(4)= cv
+c numit: actual number of iterations for convergence
+c tstop: value of convergence criterion at finish.
+c
+c ierr: if ierr>0 something bad has happened
+c ierr>10 indicates problems in the call to the cubic spline
+c routine.
+c
+
+ parameter(NMAX=20000)
+ implicit double precision (a-h,o-z)
+ double precision h,trace,cv
+ double precision wt(npoint),X(npoint),Y(npoint)
+ double precision sy(npoint),diag(npoint)
+ double precision xg(ngrid),yg(ngrid)
+ double precision din(10), dout(10),cost, offset, dum1, dum2
+
+ integer npoint,ngrid ,itj(3), job(3)
+
+
+ if(npoint.gt.NMAX) then
+ ierr=1
+ return
+ endif
+
+ maxit= int(din(3))
+ tstop=din(4)
+ ybar=0.0
+ ysd=0.0
+ do 10 k=1,npoint
+ diag(k) = y(k)
+ ybar= ybar+ y(k)
+ ysd= ysd+ y(k)**2
+ 10 continue
+ ybar= ybar/npoint
+ ysd= sqrt( ysd/npoint - ybar**2)
+c Start iterating
+ test=0.0
+
+ itj(1)= 0
+ itj(2)=0
+ itj(3)=0
+ do 500 it=1,maxit
+ if( it.gt.1) then
+ itj(3)=2
+ endif
+c fit a weighted least squares cubic smoothing spline
+ call css(h,npoint,x,y,wt,sy,
+ * dum1,diag,dum2,ngrid,xg,yg,
+ * itj,ideriv,ierr)
+c check for an error returned by spline routine
+ if(ierr.gt.0) then
+c add 10 so these can be distinguished from errors in this routine
+ ierr= 10 + ierr
+ return
+ endif
+
+c compute convergence criterion
+c The intent of this criterion is to find out when successive iterations
+c produce changes that are small
+
+
+ do 25 i=1,npoint
+ test=(diag(i)-sy(i))**2 + test
+ diag(i)= sy(i)
+ 25 continue
+
+ test=sqrt(test/npoint)/ysd
+ if( test.lt.tstop ) then
+c * exit loop *
+ numit=it
+ goto 1000
+ endif
+
+c
+c make up new set of weights
+c
+ call rcsswt( npoint, y,sy,wt, din(5))
+c reinitialize test criterion for convergence
+c
+ test=0.0
+500 continue
+
+ numit= maxit
+ 1000 continue
+c One last call if job code is not 0
+ if( (job(1).ne.0).or.(job(2).ne.0)) then
+c
+ call css(h,npoint,x,y,wt,sy,
+ * trace,diag,cv,ngrid,xg,yg,
+ * job,ideriv,ierr)
+
+ ia= job(1)
+ ig= job(2)
+c if(ig.gt.0) then
+c endif
+c calculate cv value if asked for
+ if( (ia.eq.1) .or.( ia.eq.3) ) then
+ if(ia.eq.3) then
+ cost= din(1)
+ offset= din(2)/npoint
+ else
+ cost=1
+ offset= 0
+ endif
+
+ cv=0.0
+ do 1500 k=1,npoint
+c compute approx. cross-validated residual
+
+
+c plug cv residual into rho function, din(5) is the beginning of parameter
+c vector for the rho function (scale and alpha)
+c
+c but only do this if the leverage is away from one.
+c this prevents the numerical problems with the quantile spline of a zero
+c residual and
+c a zero denominator.
+ if(1- diag(k).gt.1e-7) then
+ resid= (y(k)- sy(k))/( 1- cost*(diag(k)+offset))
+ cv= cv + rcssr(resid, din(5))
+ endif
+
+ 1500 continue
+ cv= cv/npoint
+ endif
+ endif
+
+ dout(1)=numit
+ dout(2)=test
+ dout(3)=trace
+ dout(4)=cv
+
+ return
+ end
diff --git a/src/rcssr.f b/src/rcssr.f
new file mode 100644
index 0000000..f83ae07
--- /dev/null
+++ b/src/rcssr.f
@@ -0,0 +1,38 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+ double precision function rcssr(r,par)
+c
+c robust rho function:
+c This is a piecewise polynomial with knots at -C , 0 and C
+c the function is quadratic for -C<u<0 and 0<u<C
+c the function is linear for u<-C and u>C
+c rho is continuous for all u and differentiable at all points
+c except u=0 when a != 1/2
+c
+c
+c rho(u) = 2*a*u - a*c for u>C
+c a*u**2/C for 0<u< C
+c (1-a)*u**2/C for -C<u<0
+c 2*(1-a)*u - (1-a)*C for u< -C
+c
+c Note C= par(1), a= par(2)
+ implicit double precision (a-h, o-z)
+ double precision r, par(2),c,a
+ c= par(1)
+ if( r.gt.0 ) then
+ a=par(2)
+ else
+ a =(1-par(2))
+ r= -r
+ endif
+ if( r.le.c) then
+ rcssr= a*r*r/c
+ else
+ rcssr= 2*a*(r) - a*c
+ endif
+ return
+ end
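The piecewise rho function in rcssr, sketched in Python. The code reads C from par(1) and alpha from par(2); the sketch takes explicit c and a arguments instead of a parameter vector:

```python
def rcssr(r, c, a):
    """Sketch of rcssr: asymmetric robust rho.  Quadratic on (-C, C),
    linear outside, with weight a for r > 0 and 1 - a for r <= 0."""
    if r <= 0:
        a, r = 1.0 - a, -r
    if r <= c:
        return a * r * r / c        # quadratic core
    return 2.0 * a * r - a * c      # linear tail, continuous at r = C
```

At r = C the quadratic branch gives a*C and the linear branch gives 2aC - aC = aC, so rho is continuous at the knot; a = 1/2 recovers a symmetric Huber-type loss.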
diff --git a/src/rcsswt.f b/src/rcsswt.f
new file mode 100644
index 0000000..1c87077
--- /dev/null
+++ b/src/rcsswt.f
@@ -0,0 +1,48 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+c**********
+ subroutine rcsswt(n,y, sy, wt, par)
+ implicit double precision (a-h, o-z)
+ double precision y(n), sy(n), wt(n),psi,a,am1,c
+ double precision par(2)
+c
+c psi(u) is the derivative of rho(u) defined in rcssr above
+c
+c It is composed of piecewise linear and piecewise constant segments
+c and will be continuous except at u=0 for a != .5.
+c
+c
+ a= par(2)
+ am1 = (1-par(2))
+ c= par(1)
+ do 100 k=1, n
+c find scaled residual
+ r= (y(k)- sy(k))/c
+ if( (r.gt. 0)) then
+ if( r.lt. 1) then
+ psi= 2*a*r
+
+ else
+ psi= 2*a
+ endif
+ else
+ if( r.gt.-1) then
+ psi= 2*am1*r
+
+ else
+ psi= -2*am1
+ endif
+
+ endif
+c
+c note weights supplied to cubic spline routine follow the convention that
+c they are in terms of standard deviations. ( The more common form is
+c as reciprocal variances.)
+c
+ wt(k) = dsqrt( 2*r/psi)
+ 100 continue
+ return
+ end
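rcsswt turns each scaled residual r = (y - sy)/C into an IRLS weight sqrt(2 r / psi(r)). A Python sketch of psi and the weight computation (an exactly zero residual would divide 0/0, in this sketch as in the Fortran):

```python
import math

def psi(r, a):
    """Derivative of the rho in rcssr, on the scaled residual r:
    slope 2a for 0 < r < 1, 2(1-a) for -1 < r < 0, clipped to
    constants outside (-1, 1)."""
    if r > 0:
        return 2.0 * a * r if r < 1.0 else 2.0 * a
    return 2.0 * (1.0 - a) * r if r > -1.0 else -2.0 * (1.0 - a)

def rcsswt(y, sy, c, a):
    """IRLS weights sqrt(2 r / psi(r)), on the standard-deviation scale:
    constant (sqrt(1/a)) in the quadratic core, growing like sqrt(|r|)
    in the linear tails so large residuals are downweighted."""
    return [math.sqrt(2.0 * r / psi(r, a))
            for r in ((yk - sk) / c for yk, sk in zip(y, sy))]
```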
diff --git a/src/rdist.f b/src/rdist.f
new file mode 100644
index 0000000..c7114b7
--- /dev/null
+++ b/src/rdist.f
@@ -0,0 +1,81 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+ subroutine rdist( nd,x1,n1,x2,n2, k)
+ integer nd,n1,n2,ic
+ double precision x1(n1,nd), x2(n2,nd), k(n1,n2)
+ double precision xtemp
+
+ do j =1,n2
+ xtemp= x2(j,1)
+ do i= 1, n1
+c** accumulate squared differences
+ k(i,j)= (x1(i,1)- xtemp)**2
+ enddo
+ enddo
+c **** loop through columns of output matrix K
+c*** outermost loop over columns of x1 and x2 should reduce memory swaps
+ if( nd.ge.2) then
+ do ic= 2, nd
+ do j =1,n2
+ xtemp= x2(j,ic)
+ do i= 1, n1
+c** accumulate squared differences
+ k(i,j)= (x1(i,ic)- xtemp)**2 + k(i,j)
+ enddo
+ enddo
+ enddo
+ endif
+c**** at this point k( i,j) is the squared distance between x1_i and x2_j
+ do j =1,n2
+ do i= 1, n1
+ k(i,j)= sqrt( k(i,j))
+ enddo
+ enddo
+ return
+ end
+
+
+
+ subroutine rdist1( nd,x1,n1,k)
+ integer nd,n1,ic
+ double precision x1(n1,nd), k(n1,n1)
+ double precision xtemp, dtemp
+
+ do j =1,n1
+ xtemp= x1(j,1)
+ do i= 1, j
+c** accumulate squared differences
+ k(i,j)= (x1(i,1)- xtemp)**2
+ enddo
+ enddo
+c **** loop through columns of output matrix K
+c*** outermost loop over columns of x1 and x2 should reduce memory swaps
+ if( nd.ge.2) then
+ do ic= 2, nd
+ do j =1,n1
+ xtemp= x1(j,ic)
+ do i= 1, j
+c** accumulate squared differences
+ k(i,j)= (x1(i,ic)- xtemp)**2 + k(i,j)
+ enddo
+ enddo
+ enddo
+ endif
+c**** at this point k( i,j) is the squared distance between x1_i and x2_j
+c**** for the upper triangle of matrix
+ do j = 1,n1
+ do i= 1, j
+ dtemp = sqrt( k(i,j))
+ k(i,j)= dtemp
+c
+c filling in lower part takes a substantial time and is omitted
+c This means the returned matrix k has indeterminate values in the
+c lower triangle.
+c k(j,i)= dtemp
+ enddo
+ enddo
+ return
+ end
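rdist fills the distance matrix by accumulating squared coordinate differences one dimension at a time, then taking a single square-root pass at the end. The same computation as a vectorized Python sketch:

```python
import numpy as np

def rdist(x1, x2):
    """Sketch of rdist: n1 x n2 matrix of Euclidean distances between the
    rows of x1 and x2, accumulated one coordinate at a time."""
    k = np.zeros((x1.shape[0], x2.shape[0]))
    for ic in range(x1.shape[1]):
        # broadcast column ic of x1 against column ic of x2
        k += (x1[:, ic:ic + 1] - x2[:, ic][None, :]) ** 2
    return np.sqrt(k)
```

rdist1 is the symmetric one-argument variant; as the comment above notes, it only fills the upper triangle and leaves the lower triangle indeterminate to save time.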
diff --git a/src/rdistC.c b/src/rdistC.c
new file mode 100644
index 0000000..9991e3a
--- /dev/null
+++ b/src/rdistC.c
@@ -0,0 +1,48 @@
+
+/*
+c**** # fields is a package for analysis of spatial data written for
+c**** # the R software environment .
+c**** # Copyright (C) 2017
+c**** # University Corporation for Atmospheric Research (UCAR)
+c**** # Contact: Douglas Nychka, nychka at ucar.edu,
+c**** # National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307-3000
+c**** #
+c**** # This program is free software; you can redistribute it and/or modify
+c**** # it under the terms of the GNU General Public License as published by
+c**** # the Free Software Foundation; either version 2 of the License, or
+c**** # (at your option) any later version.
+c**** # This program is distributed in the hope that it will be useful,
+c**** # but WITHOUT ANY WARRANTY; without even the implied warranty of
+c**** # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+c**** # GNU General Public License for more details.
+*/
+#include <R.h>
+#include <Rinternals.h>
+#include <R_ext/Arith.h>
+#include <Rmath.h>
+#include <float.h>
+SEXP Rdist1C(SEXP x)
+ {
+ int nx = nrows(x);
+ int dim = ncols(x);
+ void F77_CALL(rdist1)(int *, double *, int *, double *);
+ SEXP ans = PROTECT(allocMatrix(REALSXP, nx, nx));
+ double *rx = REAL(x), *rans = REAL(ans);
+/* rdist1_( &dim, rx, &nx, rans); */
+ F77_CALL(rdist1)( &dim, rx, &nx, rans);
+ UNPROTECT(1);
+return ans;
+}
+
+SEXP RdistC(SEXP x1, SEXP x2)
+ {
+ int n1 = nrows(x1);
+ int n2 = nrows(x2);
+ int dim = ncols(x1);
+ void F77_CALL(rdist)(int *, double *, int *, double *, int *, double *);
+ SEXP ans = PROTECT(allocMatrix(REALSXP, n1, n2));
+ double *rx1 = REAL(x1),*rx2 = REAL(x2), *rans = REAL(ans);
+ F77_CALL(rdist)( &dim, rx1, &n1, rx2, &n2, rans);
+ UNPROTECT(1);
+return ans;
+}
diff --git a/src/sortm.f b/src/sortm.f
new file mode 100644
index 0000000..a9b168b
--- /dev/null
+++ b/src/sortm.f
@@ -0,0 +1,93 @@
+c fields, Tools for spatial data
+c Copyright (C) 2017, Institute for Mathematics Applied Geosciences
+c University Corporation for Atmospheric Research
+c Licensed under the GPL -- www.gpl.org/licenses/gpl.html
+
+
+
+c OLD routine with line numbers and computed if
+c SUBROUTINE SORTM0(K,ki,N)
+C HEAPSORT ALGORITHM FOR SORTING ON VECTOR OF KEYS K OF LENGTH N
+C J F MONAHAN TRANSCRIBED FROM KNUTH, VOL 2, PP 146-7.
+C integer array ki is permuted along with K
+
+c double precision K(N),KK
+c integer ki(N),kki
+c INTEGER R
+c IF(N.LE.1) RETURN
+c L=N/2+1
+c R=N
+c 2 IF(L.GT.1) GO TO 1
+c KK=K(R)
+c kki= ki(R)
+c K(R)=K(1)
+c ki(R)=ki(1)
+c R=R-1
+c IF(R.EQ.1) GO TO 9
+c GO TO 3
+c 1 L=L-1
+c KK=K(L)
+c kki=ki(L)
+c 3 J=L
+c 4 I=J
+c J=2*J
+c IF(J-R) 5,6,8
+c 5 IF(K(J).LT.K(J+1)) J=J+1
+c 6 IF(KK.GT.K(J)) GO TO 8
+c 7 K(I)=K(J)
+c ki(I)=ki(J)
+c GO TO 4
+c 8 K(I)=KK
+c ki(I)=kki
+c GO TO 2
+c 9 K(1)=KK
+c ki(1)=kki
+c RETURN
+c END
+
+ SUBROUTINE SORTM(X,ki,N)
+C HEAPSORT ALGORITHM FOR SORTING ON VECTOR OF KEYS X OF LENGTH N
+C J F MONAHAN TRANSCRIBED FROM KNUTH, VOL 2, PP 146-7.
+C integer array ki is permuted along with X
+ double precision X(N),XX
+ integer N
+ integer ki(N),kki
+ INTEGER R, L,I, J
+ IF(N.LE.1) RETURN
+ L=N/2+1
+ R=N
+ 2 IF(L.GT.1) GO TO 1
+ XX=X(R)
+ kki= ki(R)
+ X(R)=X(1)
+ ki(R)=ki(1)
+ R=R-1
+ IF(R.EQ.1) GO TO 9
+ GO TO 3
+ 1 L=L-1
+ XX=X(L)
+ kki=ki(L)
+ 3 J=L
+ 4 I=J
+ J=2*J
+C IF(J-R) 5,6,8
+c < goes here
+ if( (J-R).LT.0) then
+ IF(X(J).LT.X(J+1)) J=J+1
+ endif
+c <= go to here
+ if( (J-R).LE.0) then
+ IF(XX.GT.X(J)) GO TO 8
+ X(I)=X(J)
+ ki(I)=ki(J)
+ GO TO 4
+ endif
+c > goes to here
+ 8 X(I)=XX
+ ki(I)=kki
+ GO TO 2
+ 9 X(1)=XX
+ ki(1)=kki
+ RETURN
+ END
+
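SORTM is Knuth's heapsort with the index array ki carried along through every swap, so after the call ki holds the permutation that sorts the original keys. A GOTO-free Python sketch of the same in-place algorithm:

```python
def sortm(x, ki):
    """Sketch of SORTM: in-place heapsort on keys x, applying the same
    permutation to the index array ki."""
    n = len(x)

    def sift_down(start, end):
        # restore the max-heap property for the subtree rooted at start
        j = start
        while 2 * j + 1 <= end:
            child = 2 * j + 1
            if child + 1 <= end and x[child] < x[child + 1]:
                child += 1          # pick the larger child
            if x[j] >= x[child]:
                return
            x[j], x[child] = x[child], x[j]
            ki[j], ki[child] = ki[child], ki[j]
            j = child

    for start in range(n // 2 - 1, -1, -1):   # build the heap
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):           # repeatedly extract the max
        x[0], x[end] = x[end], x[0]
        ki[0], ki[end] = ki[end], ki[0]
        sift_down(0, end - 1)
```

Calling sortm([3.0, 1.0, 2.0], [0, 1, 2]) leaves the keys as [1.0, 2.0, 3.0] and ki as [1, 2, 0].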
diff --git a/tests/Krig.Z.test.R b/tests/Krig.Z.test.R
new file mode 100644
index 0000000..02d8b28
--- /dev/null
+++ b/tests/Krig.Z.test.R
@@ -0,0 +1,285 @@
+# fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+library(fields)
+#
+#
+# test of fixed lambda case
+# Check against linear algebra
+#
+
+options( echo=FALSE)
+
+test.for.zero.flag<-1
+
+#cat("A very nasty case with knots and weights",fill=TRUE)
+
+set.seed(123)
+x<- matrix( runif( 30), 15,2)
+Z<- matrix( rnorm(30), 15,2)
+y<- rnorm( 15)*.01 + 5*(x[,1]**3 + (x[,2]-.5)**2) + (Z[,1] +Z[,2])*.001
+knots<- x[1:5,]
+#weights<- runif(15)*10
+
+# first without knots compare default to fixed
+
+Krig( x,y,Z=Z, cov.function=Exp.cov, give.warnings=FALSE)-> out.new
+
+Krig( x,y,Z=Z, cov.function=Exp.cov,lambda=1)-> out.new2
+
+
+##########
+## compute test using linear algebra
+
+K<- Exp.cov( x,x)
+lambda<-1
+M<- (lambda* diag(nrow( x)) + K)
+T<- cbind( rep(1,15), x, Z)
+temp.d<- c(solve( t(T) %*% solve( M)%*%T) %*% t(T)%*% solve( M) %*% y)
+temp.c<- solve( M)%*% ( y - T%*% temp.d)
+
+# test for d coefficients
+test.for.zero( out.new2$d, temp.d, tag=" d coef")
+# test for c coefficients
+test.for.zero( out.new2$c, temp.c, tag="c coef" )
+
+
+####### testing predict function
+hold2<- predict( out.new2, x=x, Z=Z, just.fixed=TRUE)
+hold3<- predict( out.new2, x=x, Z=Z, drop.Z=TRUE)
+hold4<- predict( out.new2, x=x, Z=Z, drop.Z=TRUE, just.fixed=TRUE)
+
+hold<-T%*%temp.d
+test.for.zero( hold, hold2, tag="predict for null fixed" )
+
+hold<-T[,1:3]%*%temp.d[1:3] + K %*% temp.c
+test.for.zero( hold, hold3, tag="predict for null spatial" )
+
+hold<-T[,1:3]%*%temp.d[1:3]
+test.for.zero( hold, hold4, tag="predict for null drift" )
+
+######tests where coefficients are recomputed from object
+hold2<- predict( out.new,y=y, lambda=1.0, x=x, Z=Z, just.fixed=TRUE)
+hold3<- predict( out.new,y=y, lambda=1.0, x=x, Z=Z, drop.Z=TRUE)
+hold4<- predict( out.new, y=y, lambda=1.0, x=x, Z=Z,
+ drop.Z=TRUE, just.fixed=TRUE)
+
+hold<-T%*%temp.d
+test.for.zero( hold, hold2, tag="predict for null fixed" )
+
+hold<-T[,1:3]%*%temp.d[1:3] + K %*% temp.c
+test.for.zero( hold, hold3, tag="predict for null spatial" )
+
+hold<-T[,1:3]%*%temp.d[1:3]
+test.for.zero( hold, hold4, tag="predict for null drift " )
+
+
+
+###knots case *****************************
+
+
+
+set.seed(123)
+x<- matrix( runif( 30), 15,2)
+Z<- matrix( rnorm(30), 15,2)
+y<- rnorm( 15)*.01 + 5*(x[,1]**3 +
+ (x[,2]-.5)**2) + (Z[,1] +Z[,2])*.001
+knots<- x[1:5,]
+weights<- runif(15)*10
+y[5] <- y[5] + 3 # avoids GCV warning
+
+# compare to
+Krig( x,y,Z=Z, knots=knots, cov.function=Exp.cov,weights=weights,
+verbose=FALSE, give.warnings=FALSE)-> out.new
+
+Krig( x,y,Z=Z, knots=knots, cov.function=Exp.cov,weights=weights,
+ lambda=1)-> out.new2
+
+# compare to each other
+Krig.coef( out.new, lambda=1)-> look
+# test for d coefficients
+test.for.zero( out.new2$d, look$d, tag=" knots/weights fixed/default d coef")
+# test for c coefficients
+test.for.zero( out.new2$c, look$c, tag="knots/weights fixed/default c coef" )
+
+
+# compute test using linear algebra
+
+K<- Exp.cov( knots, knots)
+
+T<- cbind( rep(1,15), x, Z)
+X<- cbind( T, Exp.cov( x, knots))
+lambda<-1.0
+NN<- ncol( X)
+H<- matrix( 0, NN, NN)
+H[(1:5)+5, (1:5)+5] <- K
+
+c( solve(t(X)%*%(weights*X) + lambda*H)%*% t(X)%*% (weights*y) )-> temp
+temp.c<- temp[6:10]
+temp.d<- temp[1:5]
+
+# test for d coefficients
+test.for.zero( out.new2$d, temp.d, tag=" knots d coef")
+# test for c coefficients
+test.for.zero( out.new2$c, temp.c, tag="knots c coef" )
+
+
+####### testing predict function
+hold1<- predict( out.new2, x=x, Z=Z, y=y)
+hold2<- predict( out.new2, x=x, Z=Z, just.fixed=TRUE,y=y)
+hold3<- predict( out.new2, x=x, Z=Z, drop.Z=TRUE,y=y)
+hold4<- predict( out.new2, x=x, Z=Z, drop.Z=TRUE, just.fixed=TRUE,y=y)
+
+
+hold<- X%*% temp
+# X%*% temp - X[,4:5]%*% temp[c(4,5)]
+
+test.for.zero( hold, hold1, tag="knots predict for null" )
+
+hold<-T%*%temp.d
+test.for.zero( hold, hold2, tag="knots predict for null" )
+
+hold<-X%*%temp - X[,4:5] %*% temp[4:5]
+test.for.zero( hold, hold3, tag="knots predict w/o Z" )
+
+hold<-T[,1:3]%*%temp.d[1:3]
+test.for.zero( hold, hold4, tag="knots predict for drift" )
+
+######tests where coefficients are recomputed from object
+hold1<- predict( out.new,y=y, lambda=1.0, x=x, Z=Z)
+hold2<- predict( out.new,y=y, lambda=1.0, x=x, Z=Z, just.fixed=TRUE)
+hold3<- predict( out.new, y=y, lambda=1.0, x=x, Z=Z, drop.Z=TRUE)
+hold4<- predict( out.new, y=y, lambda=1.0, x=x, Z=Z,
+ drop.Z=TRUE, just.fixed=TRUE)
+
+hold<-X%*%temp
+test.for.zero( hold, hold1, tag="predict for null" )
+
+hold<-T%*%temp.d
+test.for.zero( hold, hold2, tag="predict for null" )
+
+hold<-X[,1:3] %*%temp.d[1:3] + X[,6:10] %*% temp.c
+test.for.zero( hold, hold3, tag="predict for null" )
+
+hold<-T[,1:3]%*%temp.d[1:3]
+test.for.zero( hold, hold4, tag="predict for null" )
+
+
+####### tests using predict.se
+ x<- ChicagoO3$x
+ y<- ChicagoO3$y
+ Zcov<- x[,1]**3 + x[,2]**3
+
+
+tps.fit<-Tps( x,y, scale.type="unscaled", Z= Zcov)
+
+# here is lazy way to get a grid.list
+ fields.x.to.grid( x, nx=20,ny=20)-> gridlist
+
+ xg<- make.surface.grid(gridlist)
+ Zcov.grid<- xg[,1]**3 + xg[,2]**3
+
+########### tests on just predict have been commented out to
+########### indicate that they are redundant given
+########### previous tests however, they could be useful for
+########### future debugging ...
+
+# full surface with covariate
+# curv.mean1 <- predictSurface(tps.fit, grid.list = gridlist, extrap = TRUE,
+## Z =Zcov.grid, drop.Z = FALSE)$z
+
+# just the spline surface
+# curv.mean2 <- predictSurface(tps.fit, grid.list = gridlist,
+# extrap = TRUE,drop.Z=TRUE)$z
+
+# explicitly here is the difference surface of curv.mean1 and curv.mean2
+# curv.mean0<- as.surface( gridlist, Zcov.grid* tps.fit$d[4])$z
+# test.for.zero( curv.mean1- curv.mean2, curv.mean0)
+
+## new tests
+
+ predictSurfaceSE( tps.fit, grid.list=gridlist, extrap=TRUE,
+ drop.Z=TRUE)$z-> curv.var1
+
+ predictSE( tps.fit, xg, drop.Z=TRUE)-> curv.var2
+ test.for.zero( curv.var1, curv.var2)
+
+# SE with covariates included
+ predictSE( tps.fit, xg, Z=Zcov.grid, drop.Z=FALSE)**2-> curv.var1
+# as.surface( gridlist, curv.var1)$z-> curv.var1
+
+# SE for just the spline part
+ predictSE( tps.fit, xg, drop.Z=TRUE)**2-> curv.var2
+# as.surface( gridlist, curv.var2)$z-> curv.var2
+
+# SE for just the fixed part
+ predictSE( tps.fit, xg,Z=Zcov.grid, drop.Z=FALSE,
+ just.fixed=TRUE )**2-> curv.var3
+# as.surface( gridlist, curv.var3)$z-> curv.var3
+
+
+# calculating from more basic functions
+## these tests assume that Krig.Amatrix is working correctly!
+
+out<- tps.fit
+
+A<- Krig.Amatrix( tps.fit,x= xg, drop.Z=TRUE)
+Sigma<- out$rhohat*Rad.cov( out$x, out$x, p=2)
+S0<- out$rhohat*Rad.cov(xg, xg, p=2)
+S1<- out$rhohat*Rad.cov(out$x, xg, p=2)
+
+#yhat= Ay
+#var( f0 - yhat)= var( f0) - 2 cov( f0,yhat)+ cov( yhat)
+
+ look<- S0 - t(S1)%*% t(A) - A%*%S1 +
+ A%*% ( Sigma + diag(out$shat.MLE**2/out$weightsM))%*% t(A)
+ look<- diag( look)
+ test.for.zero(curv.var2 ,look,tag="SE w/o covariate")
+
+
+A<- Krig.Amatrix( tps.fit,x= xg, drop.Z=FALSE,Z=Zcov.grid)
+# see tps.fit$args for value of p
+Sigma<- out$rhohat*Rad.cov( out$x, out$x, p=2)
+S0<- out$rhohat*Rad.cov(xg, xg, p=2)
+S1<- out$rhohat*Rad.cov(out$x, xg, p=2)
+
+#yhat= Ay
+#var( f0 - yhat)= var( f0) - 2 cov( f0,yhat)+ cov( yhat)
+
+ look<- S0 - t(S1)%*% t(A) - A%*%S1 +
+ A%*% ( Sigma + diag(out$shat.MLE**2/out$weightsM))%*% t(A)
+ look<- diag( look)
+ test.for.zero(curv.var1 ,look,tag="SE with covariate")
+
+
+A<- Krig.Amatrix( tps.fit,x= xg, drop.Z=FALSE,Z=Zcov.grid, just.fixed=TRUE)
+# see tps.fit$args for value of p
+Sigma<- out$rhohat*Rad.cov( out$x, out$x, p=2)
+S0<- out$rhohat*Rad.cov(xg, xg, p=2)
+S1<- out$rhohat*Rad.cov(out$x, xg, p=2)
+
+#yhat= Ay
+#var( f0 - yhat)= var( f0) - 2 cov( f0,yhat)+ cov( yhat)
+
+ look<- S0 - t(S1)%*% t(A) - A%*%S1 +
+ A%*% ( Sigma + diag(out$shat.MLE**2/out$weightsM))%*% t(A)
+ look<- diag( look)
+ test.for.zero(curv.var3 ,look, tag="SE for fixed part")
+
+cat("All done with Z tests and Krig/Tps including predict and predictSE !",
+ fill=TRUE)
+options( echo=TRUE)
diff --git a/tests/Krig.Z.test.Rout.save b/tests/Krig.Z.test.Rout.save
new file mode 100644
index 0000000..9b565d5
--- /dev/null
+++ b/tests/Krig.Z.test.Rout.save
@@ -0,0 +1,109 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+> library(fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> #
+> #
+> # test of fixed lambda case
+> # Check against linear algebra
+> #
+>
+> options( echo=FALSE)
+Testing: d coef
+PASSED test at tolerance 1e-08
+Testing: c coef
+PASSED test at tolerance 1e-08
+Testing: predict for null fixed
+PASSED test at tolerance 1e-08
+Testing: predict for null spatial
+PASSED test at tolerance 1e-08
+Testing: predict for null drift
+PASSED test at tolerance 1e-08
+Testing: predict for null fixed
+PASSED test at tolerance 1e-08
+Testing: predict for null spatial
+PASSED test at tolerance 1e-08
+Testing: predict for null drift
+PASSED test at tolerance 1e-08
+Testing: knots/weights fixed/default d coef
+PASSED test at tolerance 1e-08
+Testing: knots/weights fixed/default c coef
+PASSED test at tolerance 1e-08
+Testing: knots d coef
+PASSED test at tolerance 1e-08
+Testing: knots c coef
+PASSED test at tolerance 1e-08
+Testing: knots predict for null
+PASSED test at tolerance 1e-08
+Testing: knots predict for null
+PASSED test at tolerance 1e-08
+Testing: knots predict w/o Z
+PASSED test at tolerance 1e-08
+Testing: knots predict for drift
+PASSED test at tolerance 1e-08
+Testing: predict for null
+PASSED test at tolerance 1e-08
+Testing: predict for null
+PASSED test at tolerance 1e-08
+Testing: predict for null
+PASSED test at tolerance 1e-08
+Testing: predict for null
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+Testing: SE w/o covariate
+PASSED test at tolerance 1e-08
+Testing: SE with covariate
+PASSED test at tolerance 1e-08
+Testing: SE for fixed part
+PASSED test at tolerance 1e-08
+All done with Z tests and Krig/Tps including predict and predictSE !
+>
+> proc.time()
+ user system elapsed
+ 1.508 0.053 1.552
diff --git a/tests/Krig.se.W.R b/tests/Krig.se.W.R
new file mode 100644
index 0000000..002f82b
--- /dev/null
+++ b/tests/Krig.se.W.R
@@ -0,0 +1,88 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+library( fields)
+# tests of predictSE using
+# off diag weight matrix for obs (W)
+
+options( echo=FALSE)
+
+test.for.zero.flag<- 1
+
+# a nasty example with off diagonal weights.
+
+set.seed(123)
+
+N<- 50
+x<- matrix( runif( N*2), N,2)
+y<- rnorm( N)*.2 + 2*x[,1]**2 + x[,2]**2
+
+
+weights<- runif(N)*10
+x0<- cbind( c(.1,.2,.6,.65,.8), c(.05,.5,.73,.9,.95))
+
+
+
+temp.wght<- function(x, alpha=.3){
+ Exp.cov( x, theta=.1) }
+
+Krig( x,y, cov.function=Exp.cov,weights=weights,
+ wght.function= "temp.wght")-> out
+Krig( x,y, cov.function=Exp.cov,weights=weights,W= out$W)-> out2
+
+
+# direct calculation test for A matrix
+#
+
+Krig.Amatrix( out, x=x0)-> A
+test.for.zero( A%*%y, predict( out, x0),tag="Amatrix vs. predict")
+
+# now find se.
+
+W2<-out$W2
+W<- out$W
+
+Sigma<- out$rhohat*Exp.cov( out$x,out$x)
+temp0<- out$rhohat*(Exp.cov( x0, x0))
+S1<- out$rhohat*Exp.cov( out$x, x0)
+
+#yhat= Ay
+#var( f0 - yhat)= var( f0) - 2 cov( f0,yhat)+ cov( yhat)
+
+Sigma.obs<- Krig.make.Wi( out)$Wi
+Sigma.obs <- Sigma.obs* (out$shat.MLE**2)
+
+temp1<- A%*%S1
+temp2<- A%*% ( Sigma.obs+ Sigma)%*% t(A)
+look<- temp0 - t(temp1) - temp1 + temp2
+
+
+#compare to
+# diagonal elements
+
+test<- predictSE( out, x= x0)
+test.for.zero( sqrt(diag( look)), test,tag="Marginal predictSE")
+
+
+test<- predictSE( out, x= x0, cov=TRUE)
+test2<- predictSE( out2, x= x0, cov=TRUE)
+test.for.zero( look, test,tag="Full covariance predictSE")
+test.for.zero( look, test2,tag="Full covariance predictSE explicit W")
+
+cat( "all done", fill=TRUE)
+options( echo=TRUE)
diff --git a/tests/Krig.se.W.Rout.save b/tests/Krig.se.W.Rout.save
new file mode 100644
index 0000000..cb7b35d
--- /dev/null
+++ b/tests/Krig.se.W.Rout.save
@@ -0,0 +1,68 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> # tests of predictSE using
+> # off diag weight matrix for obs (W)
+>
+> options( echo=FALSE)
+Testing: Amatrix vs. predict
+PASSED test at tolerance 1e-08
+Testing: Marginal predictSE
+PASSED test at tolerance 1e-08
+Testing: Full covariance predictSE
+PASSED test at tolerance 1e-08
+Testing: Full covariance predictSE explicit W
+PASSED test at tolerance 1e-08
+all done
+>
+> proc.time()
+ user system elapsed
+ 1.270 0.048 1.311
diff --git a/tests/Krig.se.grid.test.R b/tests/Krig.se.grid.test.R
new file mode 100644
index 0000000..1b9fd21
--- /dev/null
+++ b/tests/Krig.se.grid.test.R
@@ -0,0 +1,94 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+library( fields)
+# tests of predict.se
+# using approximations for conditional simulation on a grid.
+#
+options( echo=FALSE)
+test.for.zero.flag<-1
+
+long.test.flag<- FALSE
+
+data( ozone2)
+as.image(ozone2$y[16,], x= ozone2$lon.lat, ny=24, nx=20,
+ na.rm=TRUE)-> dtemp
+#
+# A useful discretized version of ozone2 data
+
+x<- cbind(dtemp$x[dtemp$ind[,1]], dtemp$y[dtemp$ind[,2]])
+y<- dtemp$z[ dtemp$ind]
+weights<- dtemp$weights[ dtemp$ind]
+
+Krig( x, y, Covariance="Matern",
+ theta=1.0, smoothness=1.0, weights=weights) -> out
+
+
+if(long.test.flag){
+
+# the grid ...
+
+glist<- list( x= dtemp$x, y=dtemp$y)
+
+set.seed( 233)
+sim.Krig.approx( out, grid= glist, M=200, extrap=TRUE)-> look
+
+predict.surface.se( out, grid=glist, extrap=TRUE)-> test
+
+look2<- matrix( NA, 20,24)
+
+for( k in 1:24){
+for ( j in 1:20){
+look2[j,k] <- sqrt(var( look$z[j,k,], na.rm=TRUE))
+}
+}
+
+
+test.for.zero( mean( abs(look2- test$z)/test$z), 0, relative=FALSE,
+tol=.05, tag="Conditional simulation marginal se for grid")
+
+#
+# test for covariances
+
+
+ind0<- expand.grid( c(1,4,5,10), c(3,4,5,10, 15))
+
+x0<- cbind( glist$x[ind0[,1]], glist$y[ind0[,2]])
+look2<- matrix( NA, 200,20)
+for( k in 1:20){
+look2[,k] <- look$z[ ind0[k,1], ind0[k,2],]}
+
+predict.se( out, x0, cov=TRUE)-> test2
+ds<- 1/sqrt(diag(test2))
+test3<- diag(ds)%*% test2 %*% diag(ds)
+
+#check plot( diag( test2), diag( var( look2)))
+
+# Another plot to look at plot( c(test3), c(cor(look2)))
+
+hold<-cor(look2)
+upper<- col(hold)> row( hold)
+dd<- (c(hold)- c(test3))[upper]
+
+test.for.zero( mean( abs(dd)) ,0, relative=FALSE,
+tol=.05, tag="Conditional simulation correlations for grid (RMSE) ")
+
+} # end long test block
+
+cat( "all done with grid based se tests", fill=TRUE)
+options( echo=TRUE)
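The long-test block above compares the sample standard deviation over conditional-simulation draws against the analytic SE, with a loose tolerance. A base-R sketch of that Monte Carlo check using toy draws with hypothetical analytic SEs (no fields code involved):

```r
# If predictive draws have marginal sd equal to the analytic SE,
# the ratio of empirical sd to analytic SE should be close to 1.
set.seed(233)
true.se <- c(1, 2)                       # hypothetical analytic SEs at 2 points
draws <- cbind(rnorm(2000, sd = true.se[1]),
               rnorm(2000, sd = true.se[2]))
empirical.se <- apply(draws, 2, sd)      # sd over the 2000 draws per point
stopifnot(max(abs(empirical.se / true.se - 1)) < 0.1)
```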
diff --git a/tests/Krig.se.grid.test.Rout.save b/tests/Krig.se.grid.test.Rout.save
new file mode 100644
index 0000000..62a6f5e
--- /dev/null
+++ b/tests/Krig.se.grid.test.Rout.save
@@ -0,0 +1,60 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> # tests of predict.se
+> # using approximations for conditional simulation on a grid.
+> #
+> options( echo=FALSE)
+all done with grid based se tests
+>
+> proc.time()
+ user system elapsed
+ 0.764 0.043 0.800
diff --git a/tests/Krig.se.test.R b/tests/Krig.se.test.R
new file mode 100644
index 0000000..90796b2
--- /dev/null
+++ b/tests/Krig.se.test.R
@@ -0,0 +1,194 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+library( fields)
+
+# tests of predictSE
+# against direct linear algebra
+
+#options( echo=FALSE)
+
+test.for.zero.flag<- 1
+
+x0<- expand.grid( c(-8,-4,0,20,30), c(10,8,4,0))
+
+
+Krig( ChicagoO3$x, ChicagoO3$y, cov.function = "Exp.cov", theta=50)-> out
+
+
+# direct calculation
+Krig.Amatrix( out, x=x0)-> A
+test.for.zero( A%*%ChicagoO3$y, predict( out, x0),tag="Amatrix vs. predict")
+
+Sigma<- out$rhohat*Exp.cov( ChicagoO3$x, ChicagoO3$x, theta=50)
+S0<- out$rhohat*c(Exp.cov( x0, x0, theta=50))
+S1<- out$rhohat*Exp.cov( out$x, x0, theta=50)
+
+#yhat= Ay
+#var( f0 - yhat)= var( f0) - 2 cov( f0,yhat)+ cov( yhat)
+
+look<- S0 - t(S1)%*% t(A) - A%*%S1 +
+ A%*% ( Sigma + diag(out$shat.MLE**2/out$weightsM))%*% t(A)
+#
+#compare to
+# diagonal elements
+
+
+test2<- predictSE( out, x= x0)
+test.for.zero( sqrt(diag( look)), test2,tag="Marginal predictSE")
+
+out2<- Krig( ChicagoO3$x, ChicagoO3$y, cov.function = "Exp.cov", theta=50,
+ lambda=out$lambda)
+
+test2<- predictSE( out2, x= x0)
+test.for.zero( sqrt(diag( look)), test2,tag="Marginal predictSE fixed ")
+
+test<- predictSE( out, x= x0, cov=TRUE)
+test.for.zero( look, test,tag="Full covariance predictSE")
+
+
+# simulation based.
+
+set.seed( 333)
+
+sim.Krig( out, x0,M=4e3)-> test
+
+var(test)-> look
+
+predictSE( out, x=x0)-> test2
+mean( diag( look)/ test2**2)-> look2
+test.for.zero(look2, 1.0, tol=1.5e-2, tag="Marginal standard Cond. Sim.")
+
+predictSE( out, x=x0, cov=TRUE)-> test2
+
+# multiply simulated values by inverse square root of covariance
+# to make them white
+
+eigen( test2, symmetric=TRUE)-> hold
+hold$vectors%*% diag( 1/sqrt( hold$values))%*% t( hold$vectors)-> hold
+cor(test%*% hold)-> hold2
+# off diagonal elements of correlations -- expected values are zero.
+
+abs(hold2[ col(hold2)> row( hold2)])-> hold3
+
+test.for.zero( mean(hold3), 0, relative=FALSE, tol=.02,
+ tag="Full covariance standard Cond. Sim.")
+
+
+# test of sim.Krig.approx.R
+#
+# first create and check a gridded test case.
+
+
+data( ozone2)
+as.image(ozone2$y[16,], x= ozone2$lon.lat, ny=24, nx=20,
+ na.rm=TRUE)-> dtemp
+#
+# A useful discretized version of ozone2 data
+
+x<- dtemp$xd
+y<- dtemp$z[ dtemp$ind]
+weights<- dtemp$weights[ dtemp$ind]
+
+Krig( x, y, Covariance="Matern",
+ theta=1.0, smoothness=1.0, weights=weights) -> out
+
+
+
+ set.seed(234)
+ ind0<- cbind( sample( 1:20, 5), sample( 1:24, 5))
+
+ x0<- cbind( dtemp$x[ind0[,1]], dtemp$y[ind0[,2]])
+
+# an inline check plot(out$x, cex=2); points( x0, col="red", pch="+",cex=2)
+
+# direct calculation as backup ( also checks weighted case)
+
+Krig.Amatrix( out, x=x0)-> A
+test.for.zero( A%*%out$yM, predict( out, x0),tag="Amatrix vs. predict")
+
+Sigma<- out$rhohat*stationary.cov(
+out$xM, out$xM, theta=1.0,smoothness=1.0, Covariance="Matern")
+
+S0<- out$rhohat*stationary.cov(
+x0, x0, theta=1.0,smoothness=1.0, Covariance="Matern")
+
+S1<- out$rhohat*stationary.cov(
+out$xM, x0, theta=1.0,smoothness=1.0, Covariance="Matern")
+
+
+
+#yhat= Ay
+#var( f0 - yhat)= var( f0) - 2 cov( f0,yhat)+ cov( yhat)
+
+look<- S0 - t(S1)%*% t(A) - A%*%S1 +
+ A%*% ( Sigma + diag(out$shat.MLE**2/out$weightsM) )%*% t(A)
+
+test<- predictSE( out, x0, cov=TRUE)
+
+test.for.zero( c( look), c( test), tag="Weighted case and exact for ozone2 full
+cov", tol=1e-8)
+
+########################################################################
+######### redo test with smaller grid to speed things up
+#cat("Conditional simulation test -- this takes some time", fill=TRUE)
+
+# redo data set to smaller grid size
+##D N1<-4
+##D N2<-5
+##D as.image(ozone2$y[16,], x= ozone2$lon.lat, ny=N2, nx=N1,
+##D na.rm=TRUE)-> dtemp
+#
+# A useful discretized version of ozone2 data
+
+##D xd<- dtemp$xd
+##D y<- dtemp$z[ dtemp$ind]
+##D weights<- dtemp$weights[ dtemp$ind]
+
+##D Krig( xd, y, Covariance="Matern",
+##D theta=1.0, smoothness=1.0, weights=weights) -> out
+
+
+##D xr<- range( dtemp$x)
+##D yr<- range( dtemp$y)
+##D M1<-N1
+##D M2<- N2
+##D glist<- list( x=seq( xr[1], xr[2],,M1) , y=seq( yr[1], yr[2],,M2))
+
+##D set.seed( 233)
+# with extrap TRUE this finesses problems with
+# how NAs are handled in var below
+
+##D sim.Krig.approx( out, grid= glist, M=3000, extrap=TRUE)-> look
+
+##D predictSE( out, make.surface.grid( glist))-> test
+
+
+##D look2<- matrix( NA, M1,M2)
+
+##D for( k in 1:M2){
+##D for ( j in 1:M1){
+##D look2[j,k] <- sqrt(var( look$z[j,k,], na.rm=TRUE)) }
+##D }
+
+
+##D test.for.zero( 1-mean(c(look2/test), na.rm=TRUE), 0, relative=FALSE,
+##D tol=.001, tag="Conditional simulation marginal se for grid")
+
+cat("all done testing predictSE ", fill=TRUE)
+options( echo=TRUE)
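The whitening step in the test above multiplies simulated draws by the inverse square root of the predictive covariance, built from an eigendecomposition. A base-R sketch on a toy positive definite matrix (hypothetical numbers) checking that the construction really whitens, i.e. V^(-1/2) V V^(-1/2) = I:

```r
# Inverse matrix square root via eigendecomposition, as in the test:
# V^{-1/2} = E diag(1/sqrt(values)) E'.
set.seed(2)
B <- matrix(rnorm(9), 3, 3)
V <- crossprod(B) + diag(3)              # a positive definite "covariance"
e <- eigen(V, symmetric = TRUE)
V.inv.sqrt <- e$vectors %*% diag(1 / sqrt(e$values)) %*% t(e$vectors)
stopifnot(max(abs(V.inv.sqrt %*% V %*% V.inv.sqrt - diag(3))) < 1e-10)
```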
diff --git a/tests/Krig.se.test.Rout.save b/tests/Krig.se.test.Rout.save
new file mode 100644
index 0000000..2a4bfb5
--- /dev/null
+++ b/tests/Krig.se.test.Rout.save
@@ -0,0 +1,248 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+>
+> # tests of predictSE
+> # against direct linear algebra
+>
+> #options( echo=FALSE)
+>
+> test.for.zero.flag<- 1
+>
+> x0<- expand.grid( c(-8,-4,0,20,30), c(10,8,4,0))
+>
+>
+> Krig( ChicagoO3$x, ChicagoO3$y, cov.function = "Exp.cov", theta=50)-> out
+>
+>
+> # direct calculation
+> Krig.Amatrix( out, x=x0)-> A
+> test.for.zero( A%*%ChicagoO3$y, predict( out, x0),tag="Amatrix vs. predict")
+Testing: Amatrix vs. predict
+PASSED test at tolerance 1e-08
+>
+> Sigma<- out$rhohat*Exp.cov( ChicagoO3$x, ChicagoO3$x, theta=50)
+> S0<- out$rhohat*c(Exp.cov( x0, x0, theta=50))
+> S1<- out$rhohat*Exp.cov( out$x, x0, theta=50)
+>
+> #yhat= Ay
+> #var( f0 - yhat)= var( f0) - 2 cov( f0,yhat)+ cov( yhat)
+>
+> look<- S0 - t(S1)%*% t(A) - A%*%S1 +
++ A%*% ( Sigma + diag(out$shat.MLE**2/out$weightsM))%*% t(A)
+> #
+> #compare to
+> # diagonal elements
+>
+>
+> test2<- predictSE( out, x= x0)
+> test.for.zero( sqrt(diag( look)), test2,tag="Marginal predictSE")
+Testing: Marginal predictSE
+PASSED test at tolerance 1e-08
+>
+> out2<- Krig( ChicagoO3$x, ChicagoO3$y, cov.function = "Exp.cov", theta=50,
++ lambda=out$lambda)
+>
+> test2<- predictSE( out2, x= x0)
+> test.for.zero( sqrt(diag( look)), test2,tag="Marginal predictSE fixed ")
+Testing: Marginal predictSE fixed
+PASSED test at tolerance 1e-08
+>
+> test<- predictSE( out, x= x0, cov=TRUE)
+> test.for.zero( look, test,tag="Full covariance predictSE")
+Testing: Full covariance predictSE
+PASSED test at tolerance 1e-08
+>
+>
+> # simulation based.
+>
+> set.seed( 333)
+>
+> sim.Krig( out, x0,M=4e3)-> test
+>
+> var(test)-> look
+>
+> predictSE( out, x=x0)-> test2
+> mean( diag( look)/ test2**2)-> look2
+> test.for.zero(look2, 1.0, tol=1.5e-2, tag="Marginal standard Cond. Sim.")
+Testing: Marginal standard Cond. Sim.
+PASSED test at tolerance 0.015
+>
+> predictSE( out, x=x0, cov=TRUE)-> test2
+>
+> # multiply simulated values by inverse square root of covariance
+> # to make them white
+>
+> eigen( test2, symmetric=TRUE)-> hold
+> hold$vectors%*% diag( 1/sqrt( hold$values))%*% t( hold$vectors)-> hold
+> cor(test%*% hold)-> hold2
+> # off diagonal elements of correlations -- expected values are zero.
+>
+> abs(hold2[ col(hold2)> row( hold2)])-> hold3
+>
+> test.for.zero( mean(hold3), 0, relative=FALSE, tol=.02,
++ tag="Full covariance standard Cond. Sim.")
+Testing: Full covariance standard Cond. Sim.
+PASSED test at tolerance 0.02
+>
+>
+> # test of sim.Krig.approx.R
+> #
+> # first create and check a gridded test case.
+>
+>
+> data( ozone2)
+> as.image(ozone2$y[16,], x= ozone2$lon.lat, ny=24, nx=20,
++ na.rm=TRUE)-> dtemp
+> #
+> # A useful discretized version of ozone2 data
+>
+> x<- dtemp$xd
+> y<- dtemp$z[ dtemp$ind]
+> weights<- dtemp$weights[ dtemp$ind]
+>
+> Krig( x, y, Covariance="Matern",
++ theta=1.0, smoothness=1.0, weights=weights) -> out
+>
+>
+>
+> set.seed(234)
+> ind0<- cbind( sample( 1:20, 5), sample( 1:24, 5))
+>
+> x0<- cbind( dtemp$x[ind0[,1]], dtemp$y[ind0[,2]])
+>
+> # an inline check plot(out$x, cex=2); points( x0, col="red", pch="+",cex=2)
+>
+> # direct calculation as backup ( also checks weighted case)
+>
+> Krig.Amatrix( out, x=x0)-> A
+> test.for.zero( A%*%out$yM, predict( out, x0),tag="Amatrix vs. predict")
+Testing: Amatrix vs. predict
+PASSED test at tolerance 1e-08
+>
+> Sigma<- out$rhohat*stationary.cov(
++ out$xM, out$xM, theta=1.0,smoothness=1.0, Covariance="Matern")
+>
+> S0<- out$rhohat*stationary.cov(
++ x0, x0, theta=1.0,smoothness=1.0, Covariance="Matern")
+>
+> S1<- out$rhohat*stationary.cov(
++ out$xM, x0, theta=1.0,smoothness=1.0, Covariance="Matern")
+>
+>
+>
+> #yhat= Ay
+> #var( f0 - yhat)= var( f0) - 2 cov( f0,yhat)+ cov( yhat)
+>
+> look<- S0 - t(S1)%*% t(A) - A%*%S1 +
++ A%*% ( Sigma + diag(out$shat.MLE**2/out$weightsM) )%*% t(A)
+>
+> test<- predictSE( out, x0, cov=TRUE)
+>
+> test.for.zero( c( look), c( test), tag="Weighted case and exact for ozone2 full
++ cov", tol=1e-8)
+Testing: Weighted case and exact for ozone2 full
+cov
+PASSED test at tolerance 1e-08
+>
+> ########################################################################
+> ######### redo test with smaller grid to speed things up
+> #cat("Conditional simulation test -- this takes some time", fill=TRUE)
+>
+> # redo data set to smaller grid size
+> ##D N1<-4
+> ##D N2<-5
+> ##D as.image(ozone2$y[16,], x= ozone2$lon.lat, ny=N2, nx=N1,
+> ##D na.rm=TRUE)-> dtemp
+> #
+> # A useful discretized version of ozone2 data
+>
+> ##D xd<- dtemp$xd
+> ##D y<- dtemp$z[ dtemp$ind]
+> ##D weights<- dtemp$weights[ dtemp$ind]
+>
+> ##D Krig( xd, y, Covariance="Matern",
+> ##D theta=1.0, smoothness=1.0, weights=weights) -> out
+>
+>
+> ##D xr<- range( dtemp$x)
+> ##D yr<- range( dtemp$y)
+> ##D M1<-N1
+> ##D M2<- N2
+> ##D glist<- list( x=seq( xr[1], xr[2],,M1) , y=seq( yr[1], yr[2],,M2))
+>
+> ##D set.seed( 233)
+> # with extrap TRUE this finesses problems with
+> # how NAs are handled in var below
+>
+> ##D sim.Krig.approx( out, grid= glist, M=3000, extrap=TRUE)-> look
+>
+> ##D predictSE( out, make.surface.grid( glist))-> test
+>
+>
+> ##D look2<- matrix( NA, M1,M2)
+>
+> ##D for( k in 1:M2){
+> ##D for ( j in 1:M1){
+> ##D look2[j,k] <- sqrt(var( look$z[j,k,], na.rm=TRUE)) }
+> ##D }
+>
+>
+> ##D test.for.zero( 1-mean(c(look2/test), na.rm=TRUE), 0, relative=FALSE,
+> ##D tol=.001, tag="Conditional simulation marginal se for grid")
+>
+> cat("all done testing predictSE ", fill=TRUE)
+all done testing predictSE
+> options( echo=TRUE)
+>
+> proc.time()
+ user system elapsed
+ 4.008 0.074 4.080
diff --git a/tests/Krig.test.R b/tests/Krig.test.R
new file mode 100644
index 0000000..b3c962f
--- /dev/null
+++ b/tests/Krig.test.R
@@ -0,0 +1,359 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+library(fields)
+#
+#
+# test of fixed lambda case
+# Check against linear algebra
+#
+
+options( echo=FALSE)
+test.for.zero.flag<-1
+
+Krig( ChicagoO3$x, ChicagoO3$y, theta=50)-> fit
+
+x<- ChicagoO3$x
+K<- Exp.cov(x, x,theta=50)
+T<- fields.mkpoly(x, 2)
+W<- diag( 20)
+ lambda<- fit$lambda
+M<- (lambda* diag(20) + K)
+###########################
+test.d<- c(solve( t(T) %*% solve( M)%*%T) %*% t(T)%*% solve( M) %*% fit$yM)
+test.c<- solve( M)%*% ( fit$yM - T%*% test.d)
+
+#compare to fit$d
+test.for.zero( test.d, fit$d, tag="Compare d coef" )
+#compare to fit$c
+test.for.zero( test.c, fit$c,tag="Compare c coef" )
+
+Krig( ChicagoO3$x, ChicagoO3$y, theta=50,lambda= fit$lambda)-> fit2
+#compare to fit$d
+test.for.zero( test.d, fit2$d, tag="Compare d coef fixed lambda" )
+#compare to fit$c
+test.for.zero( test.c, fit2$c,tag="Compare c coef fixed lambda" )
+
+# test of Krig.coef
+
+Krig.coef( fit)->test
+test.for.zero( test.d, test$d, tag="d coef Krig.coef" )
+test.for.zero( test.c, test$c, tag= "c coef Krig.coef" )
+
+Krig.coef( fit2)->test
+test.for.zero( test.d, test$d,tag="d coef Krig.coef fixed" )
+test.for.zero( test.c, test$c, tag="c coef Krig.coef fixed" )
+# checking A matrix in the case of no reps
+
+set.seed( 222)
+weights<- 10+ runif( length(ChicagoO3$y))
+#weights<- rep( 1, 20)
+test2<- Krig( ChicagoO3$x, ChicagoO3$y, theta=50, weights= weights)
+Atest<- Krig.Amatrix( test2)
+K<-Exp.cov(ChicagoO3$x, ChicagoO3$x,theta=50)
+H<- matrix(0, 23,23)
+H[(1:20)+3 , (1:20)+3]<- K
+X<- cbind( fields.mkpoly( ChicagoO3$x, 2), K)
+lambda<- test2$lambda
+ Alam <- X%*%solve(
+ t(X)%*%diag(weights)%*%X + lambda*H
+ )%*% t(X)%*%diag(weights)
+ test.for.zero( Alam, Atest, tag="Amatrix no reps", tol=5e-8)
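The A-matrix checked above has the penalized weighted least-squares form A = X (X'WX + lambda*H)^(-1) X'W. A base-R sketch with a toy basis and weights (hypothetical numbers, not the ChicagoO3 setup) confirming that A y equals X times the penalized coefficient estimate:

```r
# Hat matrix of penalized weighted least squares:
# yhat = A y must equal X %*% beta.hat by construction.
set.seed(4)
n <- 10
X <- cbind(1, 1:n)                       # toy basis: intercept + linear term
W <- diag(runif(n, 1, 2))                # observation weights
H <- diag(c(0, 1))                       # penalty matrix, none on intercept
lambda <- 0.5
A <- X %*% solve(t(X) %*% W %*% X + lambda * H) %*% t(X) %*% W
y <- rnorm(n)
beta.hat <- solve(t(X) %*% W %*% X + lambda * H, t(X) %*% W %*% y)
stopifnot(max(abs(A %*% y - X %*% beta.hat)) < 1e-10)
```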
+
+# test for new y fixed case
+set.seed( 123)
+ynew<- rnorm( fit2$N)
+
+test.d<- c(solve( t(T) %*% solve( M)%*%T) %*% t(T)%*% solve( M) %*% ynew)
+test.c<- solve( M)%*% ( ynew - T%*% test.d)
+
+Krig.coef( fit, y= ynew)->test
+test.for.zero( test.d, test$d, tag= "d coef new y" )
+test.for.zero( test.c, test$c, tag="c coef new y" )
+
+
+Krig.coef( fit2, y= ynew)->test
+test.for.zero( test.d, test$d, tag= "d coef new y fixed" )
+test.for.zero( test.c, test$c, tag=" c coef new y fixed" )
+
+# test for multiple new y's
+Krig.coef( fit2, y= cbind( ynew+ rnorm(fit2$N), ynew))->test2
+test.for.zero( test.d, test2$d[,2], tag= "d coef several new y fixed" )
+test.for.zero( test.c, test2$c[,2], tag=" c coef several new y fixed" )
+
+
+#cat("done with simple Krig data", fill=TRUE)
+
+
+# These tests are about whether decompositions
+# handle just a fixed lambda or are more general
+
+# checking passing lambda or df to Krig
+
+Tps( ChicagoO3$x, ChicagoO3$y,lambda=.001 )-> out
+predict( out, lambda=.001)-> out2
+test.for.zero( out2, predict( out), tag="Tps with fixed lam")
+
+Tps( ChicagoO3$x, ChicagoO3$y, df=5)-> out
+predict( out, df=5)-> out2
+test.for.zero( out2, predict( out), tag="Tps with fixed df")
+
+# same for Krig
+
+Krig( ChicagoO3$x, ChicagoO3$y, theta=50,lambda=.5)-> out0
+Krig( ChicagoO3$x, ChicagoO3$y, theta=50,lambda=.5,GCV=TRUE)-> out
+test.for.zero(
+ predict(out0), predict( out), tag="Krig with fixed lam argument")
+
+Krig( ChicagoO3$x, ChicagoO3$y, theta=50)-> out0
+Krig( ChicagoO3$x, ChicagoO3$y, theta=50, df=6,GCV=TRUE)-> out
+predict( out0, df=6)-> out2
+test.for.zero( out2, predict( out), tag="Krig with fixed lam argument")
+
+
+#cat("A very nasty case with knots and weights",fill=TRUE)
+
+set.seed(123)
+x<- matrix( runif( 30), 15,2)
+y<- rnorm( 15)*.01 + x[,1]**2 + x[,2]**2
+knots<- x[1:5,]
+weights<- runif(15)*10
+
+# compare to
+Krig( x,y, knots=knots, cov.function=Exp.cov,weights=weights)-> out.new
+Krig( x,y, knots=knots, cov.function=Exp.cov,weights=weights,
+ lambda=1)-> out.new2
+
+# compute test using linear algebra
+
+K<- Exp.cov( knots, knots)
+H<- matrix(0, 8,8)
+H[4:8, 4:8]<- K
+X<- cbind( fields.mkpoly( x, 2), Exp.cov( x, knots))
+lambda<-1
+
+
+c( solve(t(X)%*%(weights*X) + lambda*H)%*% t(X)%*% (weights*y) )-> temp
+temp.c<- temp[4:8]
+temp.d<- temp[1:3]
+
+
+# test for d coefficients
+test.for.zero( out.new2$d, temp.d, tag=" d coef")
+# test for c coefficients
+test.for.zero( out.new2$c, temp.c, tag="c coef" )
+
+
+# compare to
+Krig.coef( out.new, lambda=1)->test
+# and
+
+
+# test for d coefficients
+test.for.zero( temp.d, test$d, tag="d new y Krig.coef")
+# test for c coefficients
+test.for.zero( temp.c, test$c, tag="c new y Krig.coef" )
+
+
+# and
+Krig.coef( out.new2, lambda=1)-> test
+
+# test for d coefficients
+test.for.zero( temp.d, test$d, tag= "d fixed case")
+# test for c coefficients
+test.for.zero( temp.c, test$c, tag=" c fixed case" )
+
+
+
+#cat( "done with knots and weights case", fill=TRUE)
+
+#
+# test with new y
+#
+
+lam.test <- 1.0
+
+ynew<- 1:15
+
+Krig( x,y, knots=knots, cov.function=Exp.cov,weights=weights)-> out.new
+Krig( x,y, knots=knots, cov.function=Exp.cov,weights=weights,
+ lambda=lam.test)-> out.new2
+### compare to
+##Krig( x,ynew, knots=knots, cov.function=Exp.cov,weights=weights)-> out.new
+##Krig( x,ynew, knots=knots, cov.function=Exp.cov,weights=weights,
+## lambda=lam.test)-> out.new2
+
+c( solve(t(X)%*%(weights*X) + lam.test*H)%*% t(X)%*% (weights*ynew) )-> temp
+temp.d<- temp[1:3]
+temp.c<- temp[4:8]
+
+#compare
+Krig.coef( out.new,lambda=lam.test,y=ynew)-> test
+
+# test for d coefficients
+test.for.zero( temp.d, test$d, tag=" d new y")
+# test for c coefficients
+test.for.zero( temp.c, test$c,tag= "c new y" )
+
+
+Krig.coef( out.new2,y=ynew)-> test
+
+# test for d coefficients
+test.for.zero( temp.d, test$d, tag= "d new y fixed")
+# test for c coefficients
+test.for.zero( temp.c, test$c, tag= "c new y fixed" )
+
+
+
+#cat( "done with new y case for nasty data ", fill=TRUE)
+
+
+#
+#cat("test with reps" , fill=TRUE)
+#
+
+set.seed(133)
+x<- matrix( runif( 30), 15,2)*2
+x<- rbind( x,x, x[3:7,])
+y<- rnorm( nrow( x))*.05 + x[,1]**2 + x[,2]**2
+# perturb so that this example does not generate (harmless) warnings in the GCV search
+y[20] <- y[20] + 1
+weights<- runif( nrow( x))*10
+knots<- x[1:10,]
+
+Krig( x,y, knots=knots, weights=weights, cov.function=Exp.cov)-> out.new
+
+
+
+lambda<- 1.0
+NP<- out.new$np
+NK <- nrow( knots)
+K<- Exp.cov( knots, knots)
+H<- matrix(0, NP,NP)
+H[(1:NK)+3 , (1:NK)+3]<- K
+X<- cbind( fields.mkpoly( x, 2), Exp.cov( x, knots))
+
+# compare to
+test<- c( solve(t(X)%*%diag(weights)%*%X + lambda*H)%*%
+t(X)%*%diag(weights)%*% y )
+
+test[1:3]-> temp.d
+test[(1:NK)+3]-> temp.c
+
+Krig( x,y, knots=knots, weights=weights,lambda=lambda,
+ cov.function=Exp.cov)-> out.new
+
+# test for d coefficients
+test.for.zero( temp.d, out.new$d, tag=" d reps")
+# test for c coefficients
+test.for.zero( temp.c, out.new$c, tag="c reps" )
+
+
+Krig( x,y, knots=knots, weights=weights, cov.function=Exp.cov)-> out.new
+
+#compare to
+test<- sum(weights*
+ (y-X%*%solve(t(X)%*%diag(weights)%*%X) %*% t(X)%*%diag(weights)%*% y)**2
+ )
+
+test.for.zero(out.new$pure.ss, test, tag=" pure sums of squares")
+
+
+
+#cat("done with reps case", fill=TRUE)
+
+##################################
+#cat( "test A matrix",fill=TRUE)
+##################################
+
+set.seed(133)
+x<- matrix( runif( 30), 15,2)*2
+x<- rbind( x,x, x[3:7,])
+y<- rnorm( nrow( x))*.05 + x[,1]**2 + x[,2]**2
+# perturb so that this example does not generate (harmless) warnings in the GCV search
+y[20] <- y[20] + 1
+weights<- runif( nrow( x))*10
+knots<- x[1:10,]
+
+Krig( x,y, knots=knots, weights=weights, cov.function=Exp.cov)-> out.new
+
+NP<- out.new$np
+NK <- nrow( knots)
+K<- Exp.cov( knots, knots)
+H<- matrix(0, NP,NP)
+H[(1:NK)+3 , (1:NK)+3]<- K
+X<- cbind( fields.mkpoly( x, 2), Exp.cov( x, knots))
+
+
+
+lambda<- out.new$lambda
+ Alam= X%*%solve(t(X)%*%diag(weights)%*%X + lambda*H)%*% t(X)%*%diag(weights)
+
+test<- c(Alam%*% y)
+# compare to
+test2<-predict( out.new)
+
+test.for.zero( test,test2, tag="Amatrix prediction")
+
+#
+test<- sum( diag( Alam))
+test2<- out.new$eff.df
+
+test.for.zero( test,test2)
+
+Krig.Amatrix( out.new, lambda=lambda)-> Atest
+test.for.zero( sum( diag(Atest)),test2, tag=" trace from A matrix")
+
+test.for.zero( Atest%*%out.new$yM, predict(out.new))
+
+yjunk<- rnorm( 35)
+yMtemp<- Krig.ynew(out.new, yjunk)$yM
+test.for.zero( Atest%*%yMtemp, predict(out.new, y=yjunk),
+tag="A matrix predict with new y")
+
+test.for.zero( Atest%*%yMtemp, predict(out.new, yM= yMtemp),
+tag="A matrix predict compared to collapsed yM")
+
+
+test.pure.ss<- sum(weights*
+ (y-X%*%solve(t(X)%*%diag(weights)%*%X) %*% t(X)%*%diag(weights)%*% y)**2
+ )
+
+
+test.for.zero( out.new$pure.ss, test.pure.ss,tag="pure sums of squares")
+
+#cat("done with A matrix case", fill=TRUE)
+#
+# check of GCV etc.
+
+lambda<- out.new$lambda
+ Alam= X%*%solve(t(X)%*%diag(weights)%*%X + lambda*H)%*% t(X)%*%diag(weights)
+
+test<- c(Alam%*% y)
+# compare to
+test2<-predict( out.new)
+
+#test.for.zero( test,test2, tag="double check A matrix predict")
+
+
+N<- length( y)
+test<- sum( diag( Alam))
+# compare to
+test2<- out.new$eff.df
+
+test.for.zero( test,test2, tag=" check trace")
+
diff --git a/tests/Krig.test.Rout.save b/tests/Krig.test.Rout.save
new file mode 100644
index 0000000..e71d646
--- /dev/null
+++ b/tests/Krig.test.Rout.save
@@ -0,0 +1,158 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+> library(fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> #
+> #
+> # test of fixed lambda case
+> # Check against linear algebra
+> #
+>
+> options( echo=FALSE)
+Testing: Compare d coef
+PASSED test at tolerance 1e-08
+Testing: Compare c coef
+PASSED test at tolerance 1e-08
+Testing: Compare d coef fixed lambda
+PASSED test at tolerance 1e-08
+Testing: Compare c coef fixed lambda
+PASSED test at tolerance 1e-08
+Testing: d coef Krig.coef
+PASSED test at tolerance 1e-08
+Testing: c coef Krig.coef
+PASSED test at tolerance 1e-08
+Testing: d coef Krig.coef fixed
+PASSED test at tolerance 1e-08
+Testing: c coef Krig.coef fixed
+PASSED test at tolerance 1e-08
+Testing: Amatrix no reps
+PASSED test at tolerance 5e-08
+Testing: d coef new y
+PASSED test at tolerance 1e-08
+Testing: c coef new y
+PASSED test at tolerance 1e-08
+Testing: d coef new y fixed
+PASSED test at tolerance 1e-08
+Testing: c coef new y fixed
+PASSED test at tolerance 1e-08
+Testing: d coef several new y fixed
+PASSED test at tolerance 1e-08
+Testing: c coef several new y fixed
+PASSED test at tolerance 1e-08
+Testing: Tps with fixed lam
+PASSED test at tolerance 1e-08
+Testing: Tps with fixed df
+PASSED test at tolerance 1e-08
+Testing: Krig with fixed lam argument
+PASSED test at tolerance 1e-08
+Testing: Krig with fixed lam argument
+PASSED test at tolerance 1e-08
+Warning:
+Grid searches over lambda (nugget and sill variances) with minima at the endpoints:
+ (REML) Restricted maximum likelihood
+ minimum at right endpoint lambda = 2983.87 (eff. df= 3.001004 )
+Testing: d coef
+PASSED test at tolerance 1e-08
+Testing: c coef
+PASSED test at tolerance 1e-08
+Testing: d new y Krig.coef
+PASSED test at tolerance 1e-08
+Testing: c new y Krig.coef
+PASSED test at tolerance 1e-08
+Testing: d fixed case
+PASSED test at tolerance 1e-08
+Testing: c fixed case
+PASSED test at tolerance 1e-08
+Warning:
+Grid searches over lambda (nugget and sill variances) with minima at the endpoints:
+ (REML) Restricted maximum likelihood
+ minimum at right endpoint lambda = 2983.87 (eff. df= 3.001004 )
+Testing: d new y
+PASSED test at tolerance 1e-08
+Testing: c new y
+PASSED test at tolerance 1e-08
+Testing: d new y fixed
+PASSED test at tolerance 1e-08
+Testing: c new y fixed
+PASSED test at tolerance 1e-08
+Warning:
+Grid searches over lambda (nugget and sill variances) with minima at the endpoints:
+ (REML) Restricted maximum likelihood
+ minimum at right endpoint lambda = 42494.84 (eff. df= 3.00092 )
+Testing: d reps
+PASSED test at tolerance 1e-08
+Testing: c reps
+PASSED test at tolerance 1e-08
+Warning:
+Grid searches over lambda (nugget and sill variances) with minima at the endpoints:
+ (REML) Restricted maximum likelihood
+ minimum at right endpoint lambda = 42494.84 (eff. df= 3.00092 )
+Testing: pure sums of squares
+PASSED test at tolerance 1e-08
+Warning:
+Grid searches over lambda (nugget and sill variances) with minima at the endpoints:
+ (REML) Restricted maximum likelihood
+ minimum at right endpoint lambda = 42494.84 (eff. df= 3.00092 )
+Testing: Amatrix prediction
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+Testing: trace from A matrix
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+Testing: A matrix predict with new y
+PASSED test at tolerance 1e-08
+Testing: A matrix predict compared to collapsed yM
+PASSED test at tolerance 1e-08
+Testing: pure sums of squares
+PASSED test at tolerance 1e-08
+Testing: check trace
+PASSED test at tolerance 1e-08
+> proc.time()
+ user system elapsed
+ 1.542 0.050 1.586
diff --git a/tests/Krig.test.W.R b/tests/Krig.test.W.R
new file mode 100644
index 0000000..c8383b0
--- /dev/null
+++ b/tests/Krig.test.W.R
@@ -0,0 +1,124 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+library(fields)
+options( echo=FALSE)
+test.for.zero.flag<- 1
+#
+#
+# test of off diagonal weight matrix for obs
+# Check against linear algebra
+#
+#cat("A very nasty case with off diagonal weights",fill=TRUE)
+
+set.seed(123)
+x<- matrix( runif( 30), 15,2)
+y<- rnorm( 15)*.01 + x[,1]**2 + x[,2]**2
+
+#weights<- rep( 1, 15)
+
+weights<- runif(15)*10
+
+
+# WBW
+# double check that just diagonals work.
+
+lambda.test<- .6
+Krig( x,y,cov.function=Exp.cov,weights=weights)-> out
+Krig( x,y,cov.function=Exp.cov,weights=weights, lambda=lambda.test)-> out2
+Krig.coef( out, lambda=lambda.test)-> test
+
+W<- diag( weights)
+W2<- diag( sqrt(weights))
+
+
+K<- Exp.cov(x,x)
+M<- (lambda.test*solve(W) + K);T<- fields.mkpoly(x, 2)
+temp.d<- c(solve( t(T) %*% solve( M)%*%T) %*% t(T)%*% solve( M) %*%y)
+temp.c<- solve( M)%*% (y - T%*% temp.d)
+#
+
+# test for d coefficients
+test.for.zero( test$d, out2$d, tag=" d coef diag W fixed lam")
+test.for.zero( temp.d, out2$d, tag=" d coef diag W")
+# test for c coefficients
+test.for.zero( test$c, out2$c, tag="c coef diag W fixed lam" )
+test.for.zero( temp.c, out2$c, tag="c coef diag W " )
+
+
+
+# the full monty
+
+temp.wght<- function(x, alpha=.1){
+ Exp.cov( x, theta=alpha) }
+
+Krig( x,y,
+ cov.function=Exp.cov,weights=weights, wght.function= temp.wght
+ )-> out.new
+
+W2<-out.new$W2
+W<- out.new$W
+
+
+
+test.for.zero( c( W2%*%W2), c( W), tag=" sqrt of W")
+
+Krig( x,y, cov.function=Exp.cov,weights=weights, W= out.new$W)-> temp
+
+test.for.zero( predict(temp, y= y), predict(out.new,y=y),
+tag=" Test of passing W explicitly")
+
+
+
+K<- Exp.cov(x,x); lambda.test<- .6;
+M<- (lambda.test*solve(W) + K);T<- fields.mkpoly(x, 2)
+temp.d<- c(solve( t(T) %*% solve( M)%*%T) %*% t(T)%*% solve( M) %*%y)
+temp.c<- solve( M)%*% (y - T%*% temp.d)
+#
+Krig.coef( out.new,lambda=lambda.test )-> out2
+
+test.for.zero( temp.d, out2$d, tag=" d coef full W")
+# test for c coefficients
+test.for.zero( temp.c, out2$c, tag="c coef full W" )
+
+
+####
+### testing the GCV function
+
+lambda<- out.new$lambda
+
+Krig.Amatrix( out.new, lambda=lambda)-> Alam
+
+test.for.zero( Alam%*%y , predict(out.new), tag="A matrix")
+
+N<- length( y)
+test<- sum( diag( Alam))
+# compare to
+test2<- out.new$eff.df
+
+test.for.zero( test,test2, tag=" check trace of A")
+
+Krig.fgcv.one( lam=lambda, out.new)-> test
+# compare to
+test2<- (1/N)*sum(
+ (out.new$W2%*%(y - c(Alam%*% y) ))**2
+ ) / (1- sum(diag( Alam))/N)**2
+
+test.for.zero( test,test2,tol=.5e-7, tag="GCV one" )
+
+cat( "all done testing off diag W case", fill=TRUE)
+options( echo=TRUE)
diff --git a/tests/Krig.test.W.Rout.save b/tests/Krig.test.W.Rout.save
new file mode 100644
index 0000000..ebbbbbe
--- /dev/null
+++ b/tests/Krig.test.W.Rout.save
@@ -0,0 +1,90 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+> library(fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options( echo=FALSE)
+Warning:
+Grid searches over lambda (nugget and sill variances) with minima at the endpoints:
+ (REML) Restricted maximum likelihood
+ minimum at right endpoint lambda = 0.003053154 (eff. df= 14.25 )
+Testing: d coef diag W fixed lam
+PASSED test at tolerance 1e-08
+Testing: d coef diag W
+PASSED test at tolerance 1e-08
+Testing: c coef diag W fixed lam
+PASSED test at tolerance 1e-08
+Testing: c coef diag W
+PASSED test at tolerance 1e-08
+Warning:
+Grid searches over lambda (nugget and sill variances) with minima at the endpoints:
+ (REML) Restricted maximum likelihood
+ minimum at right endpoint lambda = 0.002794267 (eff. df= 14.24999 )
+Testing: sqrt of W
+PASSED test at tolerance 1e-08
+Warning:
+Grid searches over lambda (nugget and sill variances) with minima at the endpoints:
+ (REML) Restricted maximum likelihood
+ minimum at right endpoint lambda = 0.002794267 (eff. df= 14.24999 )
+Testing: Test of passing W explicitly
+PASSED test at tolerance 1e-08
+Testing: d coef full W
+PASSED test at tolerance 1e-08
+Testing: c coef full W
+PASSED test at tolerance 1e-08
+Testing: A matrix
+PASSED test at tolerance 1e-08
+Testing: check trace of A
+PASSED test at tolerance 1e-08
+Testing: GCV one
+PASSED test at tolerance 5e-08
+all done testing off diag W case
+>
+> proc.time()
+ user system elapsed
+ 1.138 0.045 1.178
diff --git a/tests/KrigGCVREML.test.R b/tests/KrigGCVREML.test.R
new file mode 100644
index 0000000..a7f50af
--- /dev/null
+++ b/tests/KrigGCVREML.test.R
@@ -0,0 +1,115 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+library(fields)
+#
+#
+#
+
+options( echo=FALSE)
+test.for.zero.flag<-1
+
+############ various tests of GCV and REML
+set.seed(133)
+x0<- matrix( runif( 10*2), 10,2)*2
+x<- rbind( x0,x0, x0[3:7,])
+y<- rnorm( nrow( x))*.05 + x[,1]**2 + x[,2]**2
+weights<- 8 + runif( nrow( x))
+
+# x0 are the unique values.
+
+
+out.new<- Krig( x,y, weights=weights, cov.function=Exp.cov)
+n<- length(y)
+n0<- nrow( x0)
+NK <- nrow( x0)
+NP<- NK + 3
+K<- Exp.cov( x0, x0)
+H<- matrix(0, NP,NP)
+H[(1:NK)+3 , (1:NK)+3]<- K
+X<- cbind( fields.mkpoly( x, 2), Exp.cov( x, x0) )
+X0<- cbind( fields.mkpoly( x0, 2), Exp.cov( x0, x0) )
+Alam <- X%*%solve(
+ t(X)%*%diag(weights)%*%X + out.new$lambda*H
+ )%*% t(X)%*%diag(weights)
+# predict sanity check using replicates
+set.seed( 123)
+ynew<- rnorm(n)
+test.for.zero( Alam%*%ynew, predict( out.new, y=ynew), tag=" predict sanity check",tol=3e-8)
+
+# predict using unique obs
+ynew<- rnorm(nrow(x0))
+Alam0<- X0%*%solve(
+ t(X0)%*%diag(out.new$weightsM)%*%X0 + out.new$lambda*H
+ )%*% t(X0)%*%diag(out.new$weightsM)
+
+# Alam0 is the A matrix
+test.for.zero( Alam0%*%ynew, predict( out.new,x=x0, yM=ynew), tag="predict using direct linear algebra" )
+
+#
+test<- Krig.fgcv( lam=out.new$lambda, out.new)
+y0<- out.new$yM
+n0<- length(y0)
+# compare to
+#test2<- (1/n0)*sum( (y0 - c(Alam0%*% y0))**2 *out.new$weightsM) / (1- sum(diag( Alam0))/n0)**2
+NUM<- mean( (y0 - c(Alam0%*% y0))**2 *out.new$weightsM) + out.new$pure.ss/( n -n0 )
+DEN<- (1- sum(diag( Alam0))/n0)
+test2<- NUM/ DEN^2
+test.for.zero( test,test2, tag="GCV" )
+
+test<- Krig.fgcv.one( lam=out.new$lambda, out.new)
+N<- length(y)
+test2<- (1/N)*sum( (y - c(Alam%*% y))**2 *weights) /
+ (1- sum(diag( Alam))/N)**2
+test.for.zero( test,test2, tag="GCV one" )
+
+test<- Krig.fgcv.model( lam=out.new$lambda, out.new)
+y0<- out.new$yM
+n0<- length(y0)
+# compare to
+test2<- (1/n0)*sum( (y0 - c(Alam0%*% y0))**2 *out.new$weightsM) / (1- sum(diag( Alam0))/n0)**2 + out.new$shat.pure.error**2
+test.for.zero( test,test2,tag="GCV model")
+
+
+
+
+####### tests with higher level gcv.Krig
+
+data( ozone2)
+x<- ozone2$lon.lat
+y<- ozone2$y[16,]
+Tps( x,y)-> out
+gcv.Krig( out, tol=1e-10)-> out2
+
+test.for.zero(out$lambda.est[1,-6],
+ out2$lambda.est[1,-6],tol=5e-4, tag="Tps/gcv for ozone2")
+
+# try with "new" data (linear transform should give identical
+# results for GCV eff df
+
+gcv.Krig( out, y=(11*out$y + 5), tol=1e-10 )-> out3
+
+test.for.zero(out2$lambda.est[1,2],
+ out3$lambda.est[1,2],tol=1e-6, tag="Tps/gcv for ozone2 new data")
+
+#cat("done with GCV case", fill=TRUE)
+
+
+
+cat("done with GCV and REML tests", fill=TRUE)
+options( echo=TRUE)
diff --git a/tests/KrigGCVREML.test.Rout.save b/tests/KrigGCVREML.test.Rout.save
new file mode 100644
index 0000000..0855ebc
--- /dev/null
+++ b/tests/KrigGCVREML.test.Rout.save
@@ -0,0 +1,79 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> library(fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> #
+> #
+> #
+>
+> options( echo=FALSE)
+Warning:
+Grid searches over lambda (nugget and sill variances) with minima at the endpoints:
+ (REML) Restricted maximum likelihood
+ minimum at right endpoint lambda = 0.2767992 (eff. df= 9.499985 )
+Testing: predict sanity check
+PASSED test at tolerance 3e-08
+Testing: predict using direct linear algebra
+PASSED test at tolerance 1e-08
+Testing: GCV
+PASSED test at tolerance 1e-08
+Testing: GCV one
+PASSED test at tolerance 1e-08
+Testing: GCV model
+PASSED test at tolerance 1e-08
+Testing: Tps/gcv for ozone2
+PASSED test at tolerance 5e-04
+Testing: Tps/gcv for ozone2 new data
+PASSED test at tolerance 1e-06
+done with GCV and REML tests
+>
+> proc.time()
+ user system elapsed
+ 1.253 0.050 1.296
diff --git a/tests/Likelihood.test.R b/tests/Likelihood.test.R
new file mode 100644
index 0000000..eba5a66
--- /dev/null
+++ b/tests/Likelihood.test.R
@@ -0,0 +1,136 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+
+# this is a test script to verify the likelihood computations are
+# correct with the eigen decomposition format used in Krig
+# see Krig.flplike for the concise computation.
+#
+
+library(fields)
+
+options( echo=FALSE)
+test.for.zero.flag<- 1
+
+
+data( ozone2)
+x<- ozone2$lon.lat
+y<- ozone2$y[16,]
+is.good <- !is.na( y)
+x<- x[is.good,]
+y<- y[is.good]
+
+theta<- 2.0
+
+# check log likelihood calculation
+ nu<- 1.5
+ lambda<- .2
+ out<- mKrig( x,y, theta=theta,Covariance="Matern", smoothness=nu, lambda=lambda)
+
+# peg rho and sigma as MLEs from mKrig
+ rho <- out$rho.MLE
+ sigma2<- rho*lambda
+ N<- length( y)
+ dd<- rdist( x,x)
+ M<- rho* Matern( dd, range= theta, smoothness=nu) + sigma2* diag( 1, N)
+ X<- fields.mkpoly( x, 2)
+ Mi<- solve( M)
+ betahat<- solve(t(X)%*%Mi%*%X)%*% t(X)%*% Mi%*% y
+ res<- y - X%*%betahat
+ ccoef<- ( Mi%*% ( res))*rho
+
+# sanity check that estimates are the same
+ test.for.zero( ccoef, out$c, tag="check ccoef")
+
+# find full log likelihood
+ chol(M)-> cM
+ lLike<- -(N/2)*log(2*pi) - (1/2)* (2*sum( log( diag(cM)))) - (1/2)* t(res)%*% Mi %*% res
+# formula for full likelihood using pieces from mKrig
+ lLike.test<- -(N/2)*log(2*pi) - (1/2)* out$lnDetCov - (1/2)*(N)*log( rho) - (1/2)*out$quad.form/rho
+
+ test.for.zero( lLike, lLike.test, tag="llike full verses rhohat")
+ test.for.zero( lLike, out$lnProfileLike, tag="llike profile from mKrig")
+
+# REML check
+ nu<- 1.5
+ theta<- .6
+ obj<- Krig( x,y, theta=theta,Covariance="Matern", smoothness=nu )
+
+# sanity check that c coefficients agree with Krig
+ rho<- 500
+ lambda<- .2
+ sigma2<- lambda*rho
+
+ hold<- REML.test( x,y,rho, sigma2, theta, nu=1.5)
+ ccoef2<- Krig.coef( obj, lambda)$c
+ test.for.zero( hold$ccoef, ccoef2, tag="ccoefs")
+
+# check RSS with Krig decomposition.
+ RSS1<- sum( (lambda*ccoef2)**2)
+ lD <- obj$matrices$D * lambda
+ RSS2 <- sum(((obj$matrices$u * lD)/(1 + lD))^2)
+ test.for.zero( RSS2, RSS1, tag=" RSS using matrices")
+
+# check quadratic form with Krig
+ D.temp<- obj$matrices$D[ obj$matrices$D>0]
+ A3test<- (1/lambda)* obj$matrices$V %*% diag((D.temp*lambda)/ (1 +D.temp*lambda) )%*% t( obj$matrices$V)
+ test.for.zero(solve(A3test), hold$A/rho, tol=5e-8)
+ Quad3<- sum( D.temp*(obj$matrices$u[obj$matrices$D>0])^2/(1+lambda*D.temp))
+
+ test.for.zero( hold$quad.form, Quad3/rho, tag="quad form")
+
+# test determinants
+ N2<- length( D.temp)
+ det4<- -sum( log(D.temp/(1 + D.temp*lambda)) )
+ det1<- sum( log(eigen( hold$A/rho)$values))
+ test.for.zero( det1, det4, tag="det" )
+
+# test REML Likelihood
+ lLikeREML.test<--1*( (N2/2)*log(2*pi) - (1/2)*(sum( log(D.temp/(1 + D.temp*lambda)) ) - N2*log(rho)) +
+ (1/2)*sum( lD*(obj$matrices$u)^2/(1+lD)) /(lambda*rho) )
+
+test.for.zero( hold$REML.like, lLikeREML.test, tag="REML using matrices")
+
+
+# profile likelihood
+
+# lnProfileLike <- (-np/2 - log(2*pi)*(np/2)
+# - (np/2)*log(rho.MLE) - (1/2) * lnDetCov)
+# test using full REML likelihood.
+ nu<- 1.5
+ rho<- 7000
+ lambda<- .02
+ sigma2<- lambda*rho
+ theta<- 2.0
+ obj<- Krig( x,y, theta=theta,Covariance="Matern", smoothness=nu )
+ hold<- REML.test(x,y,rho, sigma2, theta, nu=1.5)
+ np<- hold$N2
+ rho.MLE<- c(hold$rhohat)
+ lnDetCov<-sum( log(eigen( hold$A/rho)$values))
+
+ l0<- REML.test(x,y,rho.MLE, rho.MLE*lambda, theta, nu=1.5)$REML.like
+ l1<- (-np/2 - log(2*pi)*(np/2)- (np/2)*log(rho.MLE) - (1/2) * lnDetCov)
+ l2<- (-1)*Krig.flplike( lambda, obj)
+
+ test.for.zero( l0,l2, tag="REML profile flplike")
+ test.for.zero( l1,l2, tag="REML profile flplike")
+
+
+cat("all done with likelihood tests", fill=TRUE)
+options( echo=TRUE)
+
diff --git a/tests/Likelihood.test.Rout.save b/tests/Likelihood.test.Rout.save
new file mode 100644
index 0000000..a3855be
--- /dev/null
+++ b/tests/Likelihood.test.Rout.save
@@ -0,0 +1,86 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+>
+> # this is a test script to verify the likelihood computations are
+> # correct with the eigen decomposition format used in Krig
+> # see Krig.flplike for the concise computation.
+> #
+>
+> library(fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+>
+> options( echo=FALSE)
+Testing: check ccoef
+PASSED test at tolerance 1e-08
+Testing: llike full verses rhohat
+PASSED test at tolerance 1e-08
+Testing: llike profile from mKrig
+PASSED test at tolerance 1e-08
+Testing: ccoefs
+PASSED test at tolerance 1e-08
+Testing: RSS using matrices
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 5e-08
+Testing: quad form
+PASSED test at tolerance 1e-08
+Testing: det
+PASSED test at tolerance 1e-08
+Testing: REML using matrices
+PASSED test at tolerance 1e-08
+Testing: REML profile flplike
+PASSED test at tolerance 1e-08
+Testing: REML profile flplike
+PASSED test at tolerance 1e-08
+all done with likelihood tests
+>
+>
+> proc.time()
+ user system elapsed
+ 1.337 0.055 1.386
diff --git a/tests/REMLest.test.R b/tests/REMLest.test.R
new file mode 100644
index 0000000..8b846fe
--- /dev/null
+++ b/tests/REMLest.test.R
@@ -0,0 +1,131 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+
+############################################################################
+# Begin tests of Matern covariance parameter estimates
+# Note that in all tests the smoothness (nu) is fixed
+# and only theta (range), sill ( rho) and nugget (sigma2) are considered.
+##########################################################################
+library(fields)
+
+options( echo=FALSE)
+test.for.zero.flag<-1
+
+# ozone data as a test case
+data( ozone2)
+x<- ozone2$lon.lat
+y<- ozone2$y[16,]
+is.good <- !is.na( y)
+x<- x[is.good,]
+y<- y[is.good]
+nu<- 1.5
+
+# reduce data set to speed calculations
+x<-x[1:75,]
+y<- y[1:75]
+
+# testing REML formula as used in gcv.Krig
+
+ loglmvn <- function(pars, nu, x, y) {
+ N <- length(y)
+ Tmatrix <- fields.mkpoly(x, 2)
+ qr.T <- qr(Tmatrix)
+ Q2 <- qr.yq2(qr.T, diag(1, N))
+ ys <- t(Q2) %*% y
+ N2 <- length(ys)
+ lrho = pars[1]
+ ltheta = pars[2]
+ lsig2 = pars[3]
+ d <- rdist(x, x)
+ A <- exp(lrho)*(Matern(d, range = exp(ltheta),
+ smoothness = nu) + exp(lsig2)/exp(lrho) * diag(N))
+ A <- t(Q2) %*% A %*% Q2
+ A <- chol(A)
+ w = backsolve(A, ys, transpose = TRUE)
+ ycept <- (N2/2) * log(2 * pi) + sum(log(diag(A))) + (1/2) *
+ t(w) %*% w
+
+ return( ycept)
+ }
+
+ logProfilemvn <- function(lambda, theta, nu, x, y) {
+ N <- length(y)
+ Tmatrix <- fields.mkpoly(x, 2)
+ qr.T <- qr(Tmatrix)
+ Q2 <- qr.yq2(qr.T, diag(1, N))
+ ys <- t(Q2) %*% y
+ N2 <- length(ys)
+ d <- rdist(x, x)
+ print( dim ( d))
+ print( dim (diag( 1, N) ))
+ A <- (Matern(d, range = theta,
+ smoothness = nu) + diag( 1, N)*lambda )
+ A <- t(Q2) %*% A %*% Q2
+ A <- chol(A)
+ lnDetCov<- sum( log(diag(A)))*2
+ w = backsolve(A, ys, transpose = TRUE)
+ rho.MLE<- sum( w^2)/N2
+ REMLLike<- -1 * (-N2/2 - log(2 * pi) * (N2/2) - (N2/2) * log(rho.MLE) -
+ (1/2) * lnDetCov)
+ return( REMLLike)
+ }
+
+out<- Krig( x,y, Covariance="Matern", smoothness= nu, theta= 2.0, method="REML" )
+pars<- c(log( out$rho.MLE), log( 2.0), log( out$shat.MLE^2) )
+ REML0<- out$lambda.est[6,5]
+ REML1<- loglmvn( pars,nu, x,y)
+ REML2<- logProfilemvn( out$lambda, 2.0, nu, x,y)
+test.for.zero( REML0, REML1, tol=2e-4, tag="sanity check 1 for REML from Krig")
+test.for.zero( REML0, REML2, tag= "sanity check 2 for REML from Krig")
+
+##D hold1<- MaternGLS.test( x,y, nu)
+##D hold2<- MaternGLSProfile.test( x,y,nu)
+##D test.for.zero( hold1$pars[1], hold2$pars[1], tol=2e-5, tag="check REML rho")
+##D test.for.zero( hold1$pars[2], hold2$pars[2], tol=2e-5, tag="check REML theta")
+##D test.for.zero( hold1$pars[3], hold2$pars[3], tol=5e-6, tag=" check REML sigma2")
+
+hold3<- MaternQR.test( x,y,nu)
+hold4<- MaternQRProfile.test( x,y,nu)
+test.for.zero( hold3$pars[1], hold4$pars[1], tol=1e-3, tag="check REML rho")
+test.for.zero( hold3$pars[2], hold4$pars[2], tol=1e-3, tag="check REML theta")
+test.for.zero( hold3$pars[3], hold4$pars[3], tol=.0002, tag=" check REML sigma2")
+
+nu<- hold3$smoothness
+out1<- Krig( x,y, Covariance="Matern", theta= hold3$pars[2],
+ smoothness=nu, method="REML")
+
+# evaluate Profile at full REML MLE
+lam<- hold3$pars[3]/hold3$pars[1]
+l1<-Krig.flplike( lam, out1)
+
+# evaluate Profile at full REML MLE
+out2<- Krig( x,y, Covariance="Matern", theta= hold4$pars[2],
+ smoothness=nu, method="REML")
+lam<- hold4$pars[3]/hold4$pars[1]
+l2<-Krig.flplike( lam, out2)
+
+test.for.zero( l1,l2, tag="Profile likelihoods from Krig and optim")
+
+hold5<- MLE.Matern( x,y,nu)
+test.for.zero( hold5$llike,l2, tag="Profile likelihoods from Krig and golden search")
+
+#hold6<- spatialProcess( x,y, smoothness=nu, theta= hold5$theta.MLE, REML=TRUE)
+
+cat("done with Matern REML estimator tests where smoothness is fixed", fill=TRUE)
+options( echo=TRUE)
diff --git a/tests/REMLest.test.Rout.save b/tests/REMLest.test.Rout.save
new file mode 100644
index 0000000..6ca4c9a
--- /dev/null
+++ b/tests/REMLest.test.Rout.save
@@ -0,0 +1,80 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+>
+> ############################################################################
+> # Begin tests of Matern covariance parameter estimates
+> # Note that in all tests the smoothness (nu) is fixed
+> # and only theta (range), sill ( rho) and nugget (sigma2) are considered.
+> ##########################################################################
+> library(fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+>
+> options( echo=FALSE)
+[1] 75 75
+[1] 75 75
+Testing: sanity check 1 for REML from Krig
+PASSED test at tolerance 2e-04
+Testing: sanity check 2 for REML from Krig
+PASSED test at tolerance 1e-08
+Testing: check REML rho
+PASSED test at tolerance 0.001
+Testing: check REML theta
+PASSED test at tolerance 0.001
+Testing: check REML sigma2
+PASSED test at tolerance 2e-04
+Testing: Profile likelihoods from Krig and optim
+PASSED test at tolerance 1e-08
+Testing: Profile likelihoods from Krig and golden search
+PASSED test at tolerance 1e-08
+done with Matern REML estimator tests where smoothness is fixed
+>
+> proc.time()
+ user system elapsed
+ 3.66 0.12 3.78
diff --git a/tests/Tps.test.R b/tests/Tps.test.R
new file mode 100644
index 0000000..2d5ff52
--- /dev/null
+++ b/tests/Tps.test.R
@@ -0,0 +1,165 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+library( fields)
+options(echo=FALSE)
+test.for.zero.flag<- 1
+
+data(ozone2)
+
+x<- ozone2$lon.lat
+y<- ozone2$y[16,]
+
+temp<- Rad.cov( x,x, p=2)
+temp2<- RadialBasis( rdist( x,x), M=2, dimension=2)
+
+temp3<- rdist( x,x)
+temp3 <- ifelse( abs(temp3) < 1e-14, 0,log( temp3)*(temp3^2) )
+temp3<- radbas.constant( 2,2)*temp3
+
+test.for.zero( temp, temp2, tag="Tps radial basis function 2d")
+test.for.zero( temp, temp3, tag="Tps radial basis function 2d")
+test.for.zero( temp2,temp3, tag="Tps radial basis function 2d")
+
+
+set.seed( 123)
+xtemp<- matrix( runif( 40*3), ncol=3)
+temp<- Rad.cov( xtemp,xtemp, p= 2*4-3)
+temp2<- RadialBasis( rdist( xtemp,xtemp), M=4, dimension=3)
+
+temp3<- rdist( xtemp,xtemp)
+temp3 <- ifelse( abs(temp3) < 1e-14, 0, temp3^(2*4 -3) )
+temp3<- radbas.constant( 4,3)*temp3
+
+test.for.zero( temp, temp2, tag="Tps radial basis function 3d")
+test.for.zero( temp, temp3, tag="Tps radial basis function 3d")
+test.for.zero( temp2,temp3, tag="Tps radial basis function 3d")
+
+#### testing multiplication of a vector
+#### mainly to make sure the FORTRAN has been written correctly
+#### after replacing the ddot call with an explicit do loop
+set.seed( 123)
+C<- matrix( rnorm( 10*5),10,5 )
+x<- matrix( runif( 10*2), 10,2)
+temp3<- rdist( x,x)
+K<- ifelse( abs(temp3) < 1e-14, 0,log( temp3)*(temp3^2) )
+K<- K * radbas.constant( 2,2)
+test.for.zero( Rad.cov( x,x,m=2, C=C) , K%*%C, tol=1e-10)
+
+set.seed( 123)
+C<- matrix( rnorm( 10*5),10,5 )
+x<- matrix( runif( 10*3), 10,3)
+temp3<- rdist( x,x)
+K<- ifelse( abs(temp3) < 1e-14, 0,(temp3^(2*4-3)) )
+K<- K * radbas.constant( 4,3)
+test.for.zero( Rad.cov( x,x,m=4, C=C) , K%*%C,tol=1e-10)
+
+
+##### testing derivative formula
+
+set.seed( 123)
+C<- matrix( rnorm( 10*1),10,1 )
+x<- matrix( runif( 10*2), 10,2)
+temp0<- Rad.cov( x,x, p=4, derivative=1, C=C)
+
+eps<- 1e-6
+temp1<- (
+ Rad.cov( cbind(x[,1]+eps, x[,2]),x, p=4, derivative=0, C=C)
+ - Rad.cov( cbind(x[,1]-eps, x[,2]),x, p=4, derivative=0, C=C) )/ (2*eps)
+temp2<- (
+ Rad.cov( cbind(x[,1], x[,2]+eps),x, p=4, derivative=0, C=C)
+ - Rad.cov( cbind(x[,1], x[,2]-eps),x , p=4,derivative=0,C=C) )/ (2*eps)
+
+test.for.zero( temp0[,1], temp1, tag=" der of Rad.cov", tol=1e-6)
+test.for.zero( temp0[,2], temp2, tag=" der of Rad.cov", tol=1e-6)
+
+
+
+# comparing Rad.cov used by Tps with simpler function called
+# by stationary.cov
+set.seed( 222)
+x<- matrix( runif( 10*2), 10,2)
+C<- matrix( rnorm( 10*3),10,3 )
+temp<- Rad.cov( x,x, p=2, C=C)
+temp2<- RadialBasis( rdist( x,x), M=2, dimension=2)%*%C
+test.for.zero( temp, temp2)
+
+#### Basic matrix form for Tps as sanity check
+x<- ChicagoO3$x
+y<- ChicagoO3$y
+
+obj<-Tps( x,y, scale.type="unscaled", with.constant=FALSE)
+
+# now work out the matrix expressions explicitly
+lam.test<- obj$lambda
+N<-length(y)
+
+Tmatrix<- cbind( rep( 1,N), x)
+D<- rdist( x,x)
+R<- ifelse( D==0, 0, D**2 * log(D))
+A<- rbind(
+ cbind( R+diag(lam.test,N), Tmatrix),
+ cbind( t(Tmatrix), matrix(0,3,3)))
+
+ hold<-solve( A, c( y, rep(0,3)))
+ c.coef<- hold[1:N]
+ d.coef<- hold[ (1:3)+N]
+ zhat<- R%*%c.coef + Tmatrix%*% d.coef
+ test.for.zero( zhat, obj$fitted.values, tag="Tps 2-d m=2 sanity check")
+# out of sample prediction
+xnew<- rbind( c( 0,0),
+ c( 10,10)
+ )
+T1<- cbind( rep( 1,nrow(xnew)), xnew)
+D<- rdist( xnew,x)
+R1<- ifelse( D==0, 0, D**2 * log(D))
+z1<- R1%*%c.coef + T1%*% d.coef
+ test.for.zero( z1, predict( obj, x=xnew), tag="Tps 2-d m=2 sanity predict")
+
+#### test Tps versus Krig; note scaling must be the same
+ out<- Tps( x,y)
+ out2<- Krig( x,y, Covariance="RadialBasis",
+ M=2, dimension=2, scale.type="range", method="GCV")
+ test.for.zero( predict(out), predict(out2), tag="Tps vs. Krig w/ GCV")
+
+# test for fixed lambda
+ test.for.zero(
+ predict(out,lambda=.1), predict(out2, lambda=.1),
+ tag="Tps vs. radial basis w Krig")
+
+#### testing derivative using predict function
+ set.seed( 233)
+ x<- matrix( (rnorm( 1000)*2 -1), ncol=2)
+ y<- (x[,1]**2 + 2*x[,1]*x[,2] - x[,2]**2)/2
+
+ out<- Tps( x, y, scale.type="unscaled")
+
+ xg<- make.surface.grid( list(x=seq(-.7,.7,,10), y=seq(-.7,.7,,10)) )
+ test<- cbind( xg[,1] + xg[,2], xg[,1] - xg[,2])
+# test<- xg
+ look<- predictDerivative.Krig( out, x= xg)
+ test.for.zero( look[,1], test[,1], tol=1e-3)
+ test.for.zero( look[,2], test[,2], tol=1e-3)
+
+# matplot( test, look, pch=1)
+
+options( echo=TRUE)
+cat("all done testing Tps", fill=TRUE)
+
+
+
diff --git a/tests/Tps.test.Rout.save b/tests/Tps.test.Rout.save
new file mode 100644
index 0000000..bd3d392
--- /dev/null
+++ b/tests/Tps.test.Rout.save
@@ -0,0 +1,94 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options(echo=FALSE)
+Testing: Tps radial basis function 2d
+PASSED test at tolerance 1e-08
+Testing: Tps radial basis function 2d
+PASSED test at tolerance 1e-08
+Testing: Tps radial basis function 2d
+PASSED test at tolerance 1e-08
+Testing: Tps radial basis function 3d
+PASSED test at tolerance 1e-08
+Testing: Tps radial basis function 3d
+PASSED test at tolerance 1e-08
+Testing: Tps radial basis function 3d
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-10
+PASSED test at tolerance 1e-10
+Testing: der of Rad.cov
+PASSED test at tolerance 1e-06
+Testing: der of Rad.cov
+PASSED test at tolerance 1e-06
+PASSED test at tolerance 1e-08
+Testing: Tps 2-d m=2 sanity check
+PASSED test at tolerance 1e-08
+Testing: Tps 2-d m=2 sanity predict
+PASSED test at tolerance 1e-08
+Testing: Tps vs. Krig w/ GCV
+PASSED test at tolerance 1e-08
+Testing: Tps vs. radial basis w Krig
+PASSED test at tolerance 1e-08
+Warning:
+Grid searches over lambda (nugget and sill variances) with minima at the endpoints:
+ (GCV) Generalized Cross-Validation
+ minimum at right endpoint lambda = 6.735369e-05 (eff. df= 475 )
+PASSED test at tolerance 0.001
+PASSED test at tolerance 0.001
+> cat("all done testing Tps", fill=TRUE)
+all done testing Tps
+>
+>
+>
+>
+> proc.time()
+ user system elapsed
+ 1.603 0.056 1.653
diff --git a/tests/Wend.test.R b/tests/Wend.test.R
new file mode 100644
index 0000000..5742297
--- /dev/null
+++ b/tests/Wend.test.R
@@ -0,0 +1,97 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+# test of Wendland covariance and stationary.taper.cov
+
+library( fields)
+options( echo=FALSE)
+ test.for.zero.flag<- 1
+
+set.seed(123)
+x1<- matrix( runif(2*20), ncol=2)
+x2<- matrix( runif(2*10), ncol=2)
+
+fields.rdist.near( x1,x2, delta=.75)-> look
+
+temp<- matrix( NA, nrow(x1),nrow(x2))
+temp[ look$ind] <- look$ra
+
+temp2<- rdist( x1, x2)
+temp2[ temp2> .75] <- NA
+#set.panel( 2,1) ; image.plot( temp); image.plot( temp2)
+
+temp[ is.na( temp)]<- 0
+temp2[ is.na( temp2)]<- 0
+test.for.zero( temp, temp2)
+
+
+# test of constructing covariance matrix
+# and also versions of Wendland function
+# default taper is wendland k=2.
+DD<- rdist( x1,x2)
+temp<- Wendland2.2(DD, theta=.8)
+temp2<- Wendland( DD, theta=.8, k=2, dimension=2)
+
+test.for.zero( temp, temp2)
+
+
+
+
+stationary.taper.cov( x1,x2, Taper="Wendland2.2",
+ Taper.args= list( theta=.8), spam.format=FALSE )-> look
+temp0<- look
+
+stationary.taper.cov( x1,x2, Taper="Wendland2.2",
+ Taper.args= list( theta=.8), spam.format=TRUE )-> look
+temp1<- spam2full( look)
+
+test.for.zero( temp1, temp0)
+
+stationary.taper.cov( x1,x2, Taper="Wendland",
+ Taper.args= list( theta=.8, k=2, dimension=2),
+ spam.format=TRUE )-> look
+temp1b<- spam2full( look)
+
+temp2<- Wendland2.2(DD, theta=.8) * Exponential(DD)
+temp3<- wendland.cov(x1,x2, k=2, theta=.8) * Exponential(DD)
+temp4<- Wendland(DD, k=2, dimension=2, theta=.8)* Exponential(DD)
+
+
+test.for.zero( temp1, temp0, rel=FALSE)
+test.for.zero( temp1b, temp0, rel=FALSE)
+test.for.zero( temp2, temp0, rel=FALSE)
+
+test.for.zero( temp2, temp3,rel=FALSE)
+test.for.zero( temp2, temp4,rel=FALSE)
+
+set.seed( 256)
+rv<- runif( nrow(x2))
+
+# test of multiply
+stationary.taper.cov( x1, x2, C= rv)-> look
+temp2<-stationary.taper.cov( x1,x2)
+
+(as.matrix(temp2))%*%(rv)-> look2
+test.for.zero( look, look2)
+
+temp2%*%(rv)-> look2
+test.for.zero( look, look2)
+
+
+cat( "Done with testing Wendland family", fill=TRUE)
+options( echo=TRUE)
diff --git a/tests/Wend.test.Rout.save b/tests/Wend.test.Rout.save
new file mode 100644
index 0000000..4e49e8d
--- /dev/null
+++ b/tests/Wend.test.Rout.save
@@ -0,0 +1,69 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> # test of Wendland covariance and stationary.taper.cov
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options( echo=FALSE)
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+Done with testing Wendland family
+>
+> proc.time()
+ user system elapsed
+ 0.692 0.039 0.723
diff --git a/tests/cov.test.R b/tests/cov.test.R
new file mode 100644
index 0000000..e9412ec
--- /dev/null
+++ b/tests/cov.test.R
@@ -0,0 +1,157 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+library( fields)
+options( echo=FALSE)
+test.for.zero.flag<- 1
+data(ozone2)
+y<- ozone2$y[16,]
+x<- ozone2$lon.lat
+#
+# Omit the NAs
+good<- !is.na( y)
+x<- x[good,]
+y<- y[good]
+x1<- x[1:5,]
+x2<- x[6:11,]
+
+look<- exp(-1*rdist(x1,x2)/4)
+look2<- stationary.cov( x1,x2, theta=4)
+look3<- Exp.cov( x1, x2, theta=4.0)
+test.for.zero( look, look2)
+test.for.zero( look, look3)
+
+set.seed(122)
+C<- rnorm( nrow(x2))
+look<- exp(-1*rdist(x1,x2)/4)%*%C
+look2<- stationary.cov( x1,x2, theta=4, C=C)
+look3<- Exp.cov( x1, x2, theta=4.0, C=C)
+test.for.zero( look, look2)
+test.for.zero( look, look3)
+
+#### check transformation of coordinates
+V<- matrix( c(2,1,0,4), 2,2)
+Vi<- solve( V)
+u1<- t(Vi%*% t(x1))
+u2<- t(Vi%*% t(x2))
+
+look<- exp(-1*rdist(u1,u2))
+look2<- stationary.cov( x1,x2, V= V)
+test.for.zero( look, look2)
+
+look<- Wendland(rdist(u1,u2), k=3, dimension=2)
+look2<- stationary.cov( x1,x2, V= V, Covariance = "Wendland",
+ k=3, dimension=2)
+test.for.zero( look, look2)
+
+### check tapering of covariances
+x1<- x[1:5,]
+x2<- x[2:6,]
+V<- matrix( c(2,1,0,4), 2,2)
+Vi<- solve( V)
+
+u1<- x1
+u2<- x2
+
+look1a<- exp(-1*rdist(u1,u2))
+look1b<- Wendland(rdist(u1,u2),
+ k=3, dimension=2, theta= 1)
+look1<- look1a*look1b
+look2<- stationary.taper.cov( x1,x2, theta=1,
+ Taper.args=list( theta=1,k=3, dimension=2), verbose=FALSE)
+test.for.zero( look1, as.matrix(look2))
+
+
+u1<- t(Vi%*% t(x1))
+u2<- t(Vi%*% t(x2))
+
+
+look1a<- exp(-1*rdist(u1,u2))
+look1b<- Wendland(rdist(u1,u2),
+ k=3, dimension=2, theta= 1.5)
+look1<- look1a*look1b
+look2<- stationary.taper.cov( x1,x2,V=V,
+ Taper.args=list( theta=1.5,k=3, dimension=2), verbose=FALSE)
+test.for.zero( look1, as.matrix(look2))
+
+
+u1<- t(Vi%*% t(x1))
+u2<- t(Vi%*% t(x2))
+
+
+look1a<- Matern(rdist(u1,u2), smoothness=1.5)
+look1b<- Wendland(rdist(u1,u2),
+ k=3, dimension=2, theta= 1.5)
+look1<- look1a*look1b
+look2<- stationary.taper.cov( x1,x2,V=V,Covariance=Matern, smoothness=1.5,
+ Taper.args=list( theta=1.5,k=3, dimension=2), verbose=FALSE)
+test.for.zero( look1, as.matrix(look2))
+
+
+# some tests of great circle distance
+
+
+stationary.taper.cov( x[1:3,],x[1:10,] , theta=200, Taper.args=
+ list(k=2,theta=300, dimension=2),
+ Dist.args=list( method="greatcircle") )-> temp
+
+# temp is now a tapered 3X10 cross covariance matrix in sparse format.
+# should be identical to
+# the direct matrix product
+
+temp2<- Exponential( rdist.earth(x[1:3,],x[1:10,]), range=200) *
+ Wendland(rdist.earth(x[1:3,],x[1:10,]), theta= 300, k=2, dimension=2)
+
+test.for.zero( as.matrix(temp), temp2, tol=1e-6, tag="taper with great circle")
+
+# example of calling the taper version directly
+# Note that default covariance is exponential and default taper is
+# Wendland (k=2).
+
+stationary.taper.cov( x[1:3,],x[1:10,] , theta=1.5, Taper.args=
+ list(k=2,theta=2.0, dimension=2) )-> temp
+# temp is now a tapered 3X10 cross covariance matrix in sparse format.
+# should be identical to
+# the direct matrix product
+
+temp2<- Exp.cov( x[1:3,],x[1:10,], theta=1.5) *
+ Wendland(rdist(x[1:3,],x[1:10,]),
+ theta= 2.0, k=2, dimension=2)
+
+test.for.zero( as.matrix(temp), temp2, tag= "high level test of taper cov")
+
+stationary.taper.cov( x[1:3,],x[1:10,] , range=1.5,
+ Taper.args= list(k=2,theta=2.0,
+ dimension=2) )-> temp
+
+test.for.zero( as.matrix(temp), temp2, tag= "high level test of taper cov")
+
+cat("end tests of V argument in covariances", fill=TRUE)
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/tests/cov.test.Rout.save b/tests/cov.test.Rout.save
new file mode 100644
index 0000000..a0b87ca
--- /dev/null
+++ b/tests/cov.test.Rout.save
@@ -0,0 +1,71 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options( echo=FALSE)
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+Testing: taper with great circle
+PASSED test at tolerance 1e-06
+Testing: high level test of taper cov
+PASSED test at tolerance 1e-08
+Testing: high level test of taper cov
+PASSED test at tolerance 1e-08
+end tests of V argument in covariances
+> proc.time()
+ user system elapsed
+ 0.766 0.051 0.816
diff --git a/tests/cov.test2.R b/tests/cov.test2.R
new file mode 100644
index 0000000..646f691
--- /dev/null
+++ b/tests/cov.test2.R
@@ -0,0 +1,300 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+library( fields)
+options( echo=FALSE)
+test.for.zero.flag<- 1
+data(ozone2)
+y<- ozone2$y[16,]
+x<- ozone2$lon.lat
+#
+# Omit the NAs
+good<- !is.na( y)
+x<- x[good,]
+y<- y[good]
+x1<- x[1:20,]
+x2<- x[1:10,]
+
+look<- exp(-1*rdist(x1,x2)/4)
+look2<- stationary.cov( x1,x2, theta=4)
+test.for.zero( look, look2)
+
+V<- matrix( c(2,1,0,4), 2,2)
+Vi<- solve( V)
+
+u1<- t(Vi%*% t(x1))
+u2<- t(Vi%*% t(x2))
+
+
+look<- exp(-1*rdist(u1,u2))
+look2<- stationary.cov( x1,x2, V= V)
+test.for.zero( look, look2)
+
+look<- Wendland(rdist(u1,u2), k=3, dimension=2)
+look2<- stationary.cov( x1,x2, V= V, Covariance = "Wendland",
+ k=3, dimension=2)
+
+
+test.for.zero( look, look2)
+
+
+x1<- x[1:5,]
+x2<- x[2:6,]
+V<- matrix( c(2,1,0,4), 2,2)
+Vi<- solve( V)
+
+u1<- x1
+u2<- x2
+
+look1a<- exp(-1*rdist(u1,u2))
+look1b<- Wendland(rdist(u1,u2),
+ k=3, dimension=2, theta= 1)
+look1<- look1a*look1b
+look2<- stationary.taper.cov( x1,x2, theta=1,
+ Taper.args=list( theta=1,k=3, dimension=2), verbose=FALSE)
+test.for.zero( look1, as.matrix(look2))
+
+
+u1<- t(Vi%*% t(x1))
+u2<- t(Vi%*% t(x2))
+
+
+look1a<- exp(-1*rdist(u1,u2))
+look1b<- Wendland(rdist(u1,u2),
+ k=3, dimension=2, theta= 1.5)
+look1<- look1a*look1b
+look2<- stationary.taper.cov( x1,x2,V=V,
+ Taper.args=list( theta=1.5,k=3, dimension=2), verbose=FALSE)
+test.for.zero( look1, as.matrix(look2))
+
+
+u1<- t(Vi%*% t(x1))
+u2<- t(Vi%*% t(x2))
+
+
+look1a<- Matern(rdist(u1,u2), smoothness=1.5)
+look1b<- Wendland(rdist(u1,u2),
+ k=3, dimension=2, theta= 1.5)
+look1<- look1a*look1b
+look2<- stationary.taper.cov( x1,x2,V=V,Covariance=Matern, smoothness=1.5,
+ Taper.args=list( theta=1.5,k=3, dimension=2), verbose=FALSE)
+test.for.zero( look1, as.matrix(look2))
+
+
+# some tests of great circle distance
+
+
+stationary.taper.cov( x[1:3,],x[1:10,] , theta=200, Taper.args=
+ list(k=2,theta=300, dimension=2),
+ Dist.args=list( method="greatcircle") )-> temp
+
+# temp is now a tapered 3X10 cross covariance matrix in sparse format.
+# should be identical to
+# the direct matrix product
+
+temp2<- Exponential( rdist.earth(x[1:3,],x[1:10,]), range=200) *
+ Wendland(rdist.earth(x[1:3,],x[1:10,]), theta= 300, k=2, dimension=2)
+
+test.for.zero( as.matrix(temp), temp2, tol=1e-6, tag="taper with great circle")
+
+# example of calling the taper version directly
+# Note that default covariance is exponential and default taper is
+# Wendland (k=2).
+
+stationary.taper.cov( x[1:3,],x[1:10,] , theta=1.5, Taper.args=
+ list(k=2,theta=2.0, dimension=2) )-> temp
+# temp is now a tapered 3X10 cross covariance matrix in sparse format.
+# should be identical to
+# the direct matrix product
+
+temp2<- Exp.cov( x[1:3,],x[1:10,], theta=1.5) *
+ Wendland(rdist(x[1:3,],x[1:10,]),
+ theta= 2.0, k=2, dimension=2)
+
+test.for.zero( as.matrix(temp), temp2, tag= "high level test of taper cov")
+
+stationary.taper.cov( x[1:3,],x[1:10,] , range=1.5,
+ Taper.args= list(k=2,theta=2.0,
+ dimension=2) )-> temp
+
+test.for.zero( as.matrix(temp), temp2, tag= "high level test of taper cov")
+
+
+
+##### Test precomputing distance matrix
+#
+
+y<- ozone2$y[16,]
+x<- ozone2$lon.lat
+
+#
+# Omit the NAs
+
+good<- !is.na( y)
+x<- x[good,]
+y<- y[good]
+
+#####test that stationary.cov returns the same result when passed distance matrix:
+
+#with x1 == x2:
+
+x1<- x[1:20,]
+compactDistMat = rdist(x1, compact=TRUE)
+distMat = rdist(x1)
+look<- stationary.cov(x1, theta=4)
+look2 <- stationary.cov(x1, theta=4, distMat = compactDistMat)
+look3 <- stationary.cov(x1, theta=4, distMat = distMat)
+test.for.zero( look, look2, tag="stationary.cov versus stationary.cov compact distMat")
+test.for.zero( look, look3, tag="stationary.cov versus stationary.cov matrix distMat")
+
+#with x1 != x2:
+
+x2=x[1:10,]
+distMat = rdist(x1, x2)
+look<- stationary.cov(x1, x2, theta=4)
+look2 <- stationary.cov(x1, x2, theta=4, distMat = distMat)
+test.for.zero( look, look2, tag="stationary.cov versus stationary.cov asymmetric distMat")
+
+#####test that Exp.cov returns the same result when passed distance matrix:
+
+#with x1 == x2:
+distMat = rdist(x1, x1)
+compactDistMat = rdist(x1, compact=TRUE)
+
+look<- Exp.cov(x1, theta=4)
+look2 <- Exp.cov(x1, theta=4, distMat = compactDistMat)
+look3 <- Exp.cov(x1, theta=4, distMat = distMat)
+test.for.zero( look, look2, tag="Exp.cov versus Exp.cov compact distMat")
+test.for.zero( look, look3, tag="Exp.cov versus Exp.cov matrix distMat")
+
+#with x1 != x2:
+
+x1<- x[1:20,]
+x2=x[1:10,]
+distMat = rdist(x1, x2)
+look<- Exp.cov(x1, x2, theta=4)
+look2 <- Exp.cov(x1, x2, theta=4, distMat = distMat)
+test.for.zero( look, look2, tag="Exp.cov versus Exp.cov asymmetric distMat")
+
+##### test for correct value when using C argument:
+
+Ctest<- rnorm(10)
+
+#with x1 == x2:
+
+x1 = x[1:10,]
+compactDistMat = rdist(x1, compact=TRUE)
+distMat = rdist(x1, x1)
+
+temp1<- stationary.cov( x1, C= Ctest, theta=4 )
+temp2 = stationary.cov( x1, C= Ctest, theta=4, distMat=compactDistMat )
+temp3 = stationary.cov( x1, C= Ctest, theta=4, distMat=distMat )
+
+exp1<- Exp.cov( x1, C= Ctest, theta=4 )
+exp2 = Exp.cov( x1, C= Ctest, theta=4, distMat=compactDistMat )
+exp3 = Exp.cov( x1, C= Ctest, theta=4, distMat=distMat )
+
+test.for.zero(temp1, temp2, tag="stationary.cov vs stationary.cov with C set, compact distMat")
+test.for.zero(temp1, temp3, tag="stationary.cov vs stationary.cov with C set, matrix distMat")
+test.for.zero(temp1, exp1, tag="stationary.cov vs Exp.cov with C set, no distMat")
+test.for.zero(temp2, exp2, tag="stationary.cov vs Exp.cov with C set, compact distMat")
+test.for.zero(temp3, exp3, tag="stationary.cov vs Exp.cov with C set, matrix distMat")
+
+#with x1 != x2:
+
+x1 = x
+x2 = x[1:10,]
+
+distMat = rdist(x1, x1)
+
+temp1<- stationary.cov( x1, x2, C= Ctest, theta=4 )
+temp2 = stationary.cov( x1, x2, C= Ctest, theta=4, distMat=distMat )
+exp1 <- Exp.cov( x1, x2, C= Ctest, theta=4 )
+exp2 = Exp.cov( x1, x2, C= Ctest, theta=4, distMat=distMat )
+
+test.for.zero(temp1, temp2, tag="stationary.cov vs stationary.cov with C set and asymmetric distMat given")
+test.for.zero(exp1, exp2, tag="Exp.cov vs Exp.cov with C set and asymmetric distMat given")
+
+
+##### test covariance functions for onlyUpper=TRUE
+#
+
+distMat = rdist(x1, x1)
+compactDistMat = rdist(x1, compact=TRUE)
+out1 = stationary.cov(x1, onlyUpper=TRUE)
+exp1 = Exp.cov(x1, onlyUpper=TRUE)
+out2 = stationary.cov(x1, onlyUpper=TRUE, distMat=compactDistMat)
+exp2 = Exp.cov(x1, onlyUpper=TRUE, distMat=compactDistMat)
+out3 = stationary.cov(x1, onlyUpper=TRUE, distMat=distMat)
+exp3 = Exp.cov(x1, onlyUpper=TRUE, distMat=distMat)
+
+test.for.zero( out1[upper.tri(out1)], exp1[upper.tri(exp1)], tag="onlyUpper=TRUE: stationary.cov versus Exp.cov")
+test.for.zero( out1[upper.tri(out1)], out2[upper.tri(out2)], tag="onlyUpper=TRUE: stationary.cov versus stationary.cov with compactDistMat")
+test.for.zero( out1[upper.tri(out1)], exp2[upper.tri(exp2)], tag="onlyUpper=TRUE: stationary.cov versus Exp.cov with compactDistMat")
+test.for.zero( out1[upper.tri(out1)], out3[upper.tri(out3)], tag="onlyUpper=TRUE: stationary.cov versus stationary.cov with matrix distMat")
+test.for.zero( out1[upper.tri(out1)], exp3[upper.tri(exp3)], tag="onlyUpper=TRUE: stationary.cov versus Exp.cov with matrix distMat")
+
+##### test Exp.cov functions for correct use of p
+#
+
+p1 = 1
+p2 = 2
+p3 = 3
+distMat = rdist(x1, x1)
+
+exp1 = Exp.cov(x1, p=p1)
+exp2 = Exp.cov(x1, p=p2)
+exp2Dist = Exp.cov(x1, p=p2, distMat = distMat)
+exp3 = Exp.cov(x1, p=p3)
+test.for.zero(exp1^(rdist(x1, x1)^(p2 - p1)), exp2, tag="Testing p=1 v 2")
+test.for.zero(exp2^(rdist(x1, x1)^(p3 - p2)), exp3, tag="Testing p=2 v 3")
+test.for.zero(exp2, exp2Dist, tag="Testing p=2 v 2 with distMat")
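+# Aside (an illustrative check, not part of the upstream tests): the two
+# comparisons above use the identity exp(-d^p1)^(d^(p2-p1)) == exp(-d^p2),
+# shown here for a single scalar distance.
+dscalar<- 0.7
+test.for.zero( exp(-dscalar^p1)^(dscalar^(p2-p1)), exp(-dscalar^p2),
+               tag="p scaling identity, scalar check")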
+
+##### test Exp.cov functions for correct use of theta
+#
+
+theta1 = 1
+theta2 = 2
+theta3 = 3
+distMat = rdist(x1, x1)
+
+exp1 = Exp.cov(x1, theta=theta1)
+exp2 = Exp.cov(x1, theta=theta2)
+exp2Dist = Exp.cov(x1, theta=theta2, distMat = distMat)
+exp3 = Exp.cov(x1, theta=theta3)
+test.for.zero(exp1^(theta1/theta2), exp2, tag="Testing theta=1 v 2")
+test.for.zero(exp2^(theta2/theta3), exp3, tag="Testing theta=2 v 3")
+test.for.zero(exp2, exp2Dist, tag="Testing theta=2 v 2 with distMat")
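+# Aside (an illustrative check, not part of the upstream tests): the two
+# comparisons above use the identity
+# exp(-d/theta1)^(theta1/theta2) == exp(-d/theta2), shown here for a
+# single scalar distance.
+dscalar<- 0.7
+test.for.zero( exp(-dscalar/theta1)^(theta1/theta2), exp(-dscalar/theta2),
+               tag="theta scaling identity, scalar check")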
+
+
+
+
+cat("end tests of V argument in covariances", fill=TRUE)
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/tests/cov.test2.Rout.save b/tests/cov.test2.Rout.save
new file mode 100644
index 0000000..26e1407
--- /dev/null
+++ b/tests/cov.test2.Rout.save
@@ -0,0 +1,119 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options( echo=FALSE)
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+Testing: taper with great circle
+PASSED test at tolerance 1e-06
+Testing: high level test of taper cov
+PASSED test at tolerance 1e-08
+Testing: high level test of taper cov
+PASSED test at tolerance 1e-08
+Testing: stationary.cov versus stationary.cov compact distMat
+PASSED test at tolerance 1e-08
+Testing: stationary.cov versus stationary.cov matrix distMat
+PASSED test at tolerance 1e-08
+Testing: stationary.cov versus stationary.cov asymmetric distMat
+PASSED test at tolerance 1e-08
+Testing: Exp.cov versus Exp.cov compact distMat
+PASSED test at tolerance 1e-08
+Testing: Exp.cov versus Exp.cov matrix distMat
+PASSED test at tolerance 1e-08
+Testing: Exp.cov versus Exp.cov asymmetric distMat
+PASSED test at tolerance 1e-08
+Testing: stationary.cov vs stationary.cov with C set, compact distMat
+PASSED test at tolerance 1e-08
+Testing: stationary.cov vs stationary.cov with C set, matrix distMat
+PASSED test at tolerance 1e-08
+Testing: stationary.cov vs Exp.cov with C set, no distMat
+PASSED test at tolerance 1e-08
+Testing: stationary.cov vs Exp.cov with C set, compact distMat
+PASSED test at tolerance 1e-08
+Testing: stationary.cov vs Exp.cov with C set, matrix distMat
+PASSED test at tolerance 1e-08
+Testing:
+stationary.cov vs stationary.cov with C set and asymmetric distMat given
+PASSED test at tolerance 1e-08
+Testing: Exp.cov vs Exp.cov with C set and asymmetric distMat given
+PASSED test at tolerance 1e-08
+Testing: onlyUpper=TRUE: stationary.cov versus Exp.cov
+PASSED test at tolerance 1e-08
+Testing:
+onlyUpper=TRUE: stationary.cov versus stationary.cov with compactDistMat
+PASSED test at tolerance 1e-08
+Testing: onlyUpper=TRUE: stationary.cov versus Exp.cov with compactDistMat
+PASSED test at tolerance 1e-08
+Testing:
+onlyUpper=TRUE: stationary.cov versus stationary.cov with matrix distMat
+PASSED test at tolerance 1e-08
+Testing: onlyUpper=TRUE: stationary.cov versus Exp.cov with matrix distMat
+PASSED test at tolerance 1e-08
+Testing: Testing p=1 v 2
+PASSED test at tolerance 1e-08
+Testing: Testing p=2 v 3
+PASSED test at tolerance 1e-08
+Testing: Testing p=2 v 2 with distMat
+PASSED test at tolerance 1e-08
+Testing: Testing theta=1 v 2
+PASSED test at tolerance 1e-08
+Testing: Testing theta=2 v 3
+PASSED test at tolerance 1e-08
+Testing: Testing theta=2 v 2 with distMat
+PASSED test at tolerance 1e-08
+end tests of V argument in covariances
+> proc.time()
+ user system elapsed
+ 0.767 0.046 0.805
diff --git a/tests/derivative.test.R b/tests/derivative.test.R
new file mode 100644
index 0000000..57070c2
--- /dev/null
+++ b/tests/derivative.test.R
@@ -0,0 +1,200 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+#library( fields, lib.loc="lib.test")
+
+library( fields)
+options(echo=FALSE)
+test.for.zero.flag<- 1
+
+DD<- cbind( seq(.01,2,,50))
+look2<- RadialBasis(DD, dimension=2,M=3,derivative=1)
+
+look1<- ( RadialBasis(DD+1e-5, dimension=2,M=3,derivative=0 )
+- RadialBasis(DD-1e-5, dimension=2,M=3,derivative=0))/2e-5
+
+test.for.zero( look1, look2,tol=1e-6, tag="radial basis function exact" )
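+# Aside (an illustrative check, not part of the upstream tests): the
+# centered difference (f(x+h) - f(x-h))/(2h) used above has O(h^2) error,
+# so with h = 1e-5 the 1e-6 tolerance is comfortable; e.g. for f = exp:
+h<- 1e-5
+test.for.zero( (exp(1+h) - exp(1-h))/(2*h), exp(1), tol=1e-9,
+               tag="centered difference on exp, scalar check")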
+
+
+ set.seed( 234)
+ x<- matrix( runif(10), ncol=2)
+ ctest<- rep(0,5)
+ ctest[3]<- 1
+ stationary.cov( x,x, Covariance="RadialBasis", dimension=2,M=3,derivative=0)-> look0
+ RadialBasis( rdist(x,x), dimension=2,M=3,derivative=0)-> sanity.look
+ test.for.zero( look0, sanity.look, tag="sanity test of stationary.cov with RadialBasis")
+
+ Rad.cov(x,x,p= (2*3 -2))-> look1
+ test.for.zero( sanity.look, look1, tag="sanity test of Rad.cov")
+
+ sanity.look%*% ctest->look0
+ stationary.cov( x,x, Covariance="RadialBasis", dimension=2,M=3,
+ derivative=0, C=ctest)-> look
+ test.for.zero( look0, look, tag="stat.cov Radbas C multiply")
+ Rad.cov(x,x,p= (2*3 -2), C=ctest)-> look1
+ test.for.zero( look0, look1, tag="Rad.cov C multiply")
+
+
+############################ end of radial basis
+
+
+DD<- cbind( seq(.01,2,,50))
+look2<- Wendland(DD, theta=1.0, dimension=2,k=3,derivative=1)
+
+look1<- (Wendland(DD+1e-5, theta=1.0, dimension=2,k=3)
+- Wendland(DD-1e-5, theta=1.0, dimension=2,k=3))/2e-5
+
+test.for.zero( look1, look2,tol=1e-6)
+
+
+
+look2<- Wendland(DD, theta=1.5, dimension=2,k=3,derivative=1)
+
+look1<- (Wendland(DD+1e-5, theta=1.5, dimension=2,k=3)
+- Wendland(DD-1e-5, theta=1.5, dimension=2,k=3))/2e-5
+
+test.for.zero( look1, look2,tol=1e-6, tag="Wendland exact")
+
+x<- seq( -1,1,,5)
+
+ctest<- rep(0,5)
+ctest[3]<- 1
+
+wendland.cov( x,x, k=2, theta=.75)-> look0
+Wendland( rdist(x,x)/.75, k=2, dimension=1)-> sanity.look
+test.for.zero( look0, sanity.look)
+
+look0%*% ctest->look0
+
+wendland.cov( x,x, k=2, theta=.75, C=ctest, derivative=0)-> look
+
+test.for.zero( look0, look, tag="Wendland C multiply")
+
+
+wendland.cov( x,x, k=2, theta=1.0, C=ctest, derivative=1)-> look
+
+wendland.cov( x+1e-5, x, k=2, theta=1.0, C=ctest)-
+wendland.cov( x-1e-5, x, k=2, theta=1.0, C=ctest)-> look2
+look2<- look2/2e-5
+
+test.for.zero( look, look2,tol=1e-7, tag="Wendland.cov theta=1.0")
+
+
+wendland.cov( x,x, k=2, theta=.75, C=ctest, derivative=1)-> look
+wendland.cov( x+1e-5, x, k=2, theta=.75, C=ctest)-
+wendland.cov( x-1e-5, x, k=2, theta=.75, C=ctest)-> look2
+look2<- look2/2e-5
+test.for.zero( look, look2,tol=1e-7, tag="Wendland.cov theta=.75")
+
+
+stationary.cov( x,x, Covariance="Wendland", dimension=1,
+ k=2, theta=1.0, C=ctest, derivative=0)-> look
+look0<- Wendland( rdist(x,x), k=2, dimension=1)%*%ctest
+test.for.zero( look0, look, tag="stationary.cov and exact C multiply for Wendland")
+
+wendland.cov( x,x, k=2,C=ctest, theta=1.0)-> look
+look0<- Wendland( rdist(x,x), k=2, dimension=1)%*%ctest
+test.for.zero( look0, look, tag=" Wendland C multiply")
+
+####### 2 -d quadratic surface
+
+x<- make.surface.grid( list(x=seq( -1,1,,20), y=seq( -1,1,,20)))
+y<- (.123*x[,1] + .234*x[,2])
+obj<- mKrig( x,y, lambda=0, cov.function="wendland.cov", k=3, theta=.4)
+
+xp<- make.surface.grid( list(x=seq(-.5,.5,,24),y= seq( -.5,.5,,24)) )
+predict( obj, xp, derivative=1)-> outd
+test.for.zero( outd[,1],.123, tag="2-d derivs from wend.cov/mKrig")
+test.for.zero( outd[,2],.234)
+
+
+#%%%%%%%% repeat to check derivatives in stationary.cov
+
+x<- make.surface.grid( list(x=seq( -1,1,,20), y=seq( -1,1,,20)))
+y<- (.123*x[,1] + .234*x[,2])
+obj<- mKrig( x,y, lambda=0, cov.function="stationary.cov",
+ cov.args=list(k=3, theta=.2, dimension=2, Covariance="Wendland"))
+
+xp<- make.surface.grid( list(x=seq(-.5,.5,,24),y= seq( -.5,.5,,24)) )
+predict( obj, xp, derivative=1)-> outd
+test.for.zero( outd[,1],.123, tag="2-d derivs from stationary-wend/mKrig")
+test.for.zero( outd[,2],.234)
+
+
+############## quadratic surface
+x<- make.surface.grid( list(x=seq( -1,1,,20), y=seq( -1,1,,20)))
+y<- (x[,1]**2 - 2* x[,1]*x[,2] + x[,2]**2)/2
+
+############## wendland.cov
+obj<- mKrig( x,y, lambda=0, cov.function="wendland.cov", k=3, theta=.8)
+xp<- make.surface.grid( list(x=seq(-.5,.5,,24),y= seq( -.5,.5,,24)) )
+true<- cbind( xp[,1] - xp[,2], xp[,2]- xp[,1])
+############## wendland.cov
+predict( obj, xp, derivative=1)-> outd
+rmse<-sqrt(mean((true[,1] - outd[,1])**2))/sqrt(mean(true[,1]**2))
+test.for.zero( rmse,0, tol=5e-3,relative=FALSE, tag="wendland.cov quad 2-d")
+
+############## stationary cov
+x<- make.surface.grid( list(x=seq( -1,1,,20), y=seq( -1,1,,20)))
+y<- (x[,1]**3 + x[,2]**3)
+obj<- mKrig( x,y, lambda=0, cov.function="stationary.cov",
+ cov.args=list(k=3, theta=.8, dimension=2, Covariance="Wendland"))
+
+xp<- make.surface.grid( list(x=seq(-.5,.5,,24),y= seq( -.5,.5,,24)) )
+true<- cbind( 3*xp[,1]**2 , 3*xp[,2]**2)
+predict( obj, xp, derivative=1)-> outd2
+rmse<-sqrt(mean((true[,1] - outd2[,1])**2))/sqrt(mean(true[,1]**2))
+test.for.zero( rmse,0, tol=1e-2,relative=FALSE,
+ tag="stationary.cov/Wendland cubic 2-d")
+
+############## stationary cov with radial basis
+x<- make.surface.grid( list(x=seq( -1,1,,20), y=seq( -1,1,,20)))
+y<- (x[,1]**3 + x[,2]**3)
+obj<- Krig( x,y, cov.function="stationary.cov", m=3,
+ cov.args=list(M=3, dimension=2, Covariance="RadialBasis"))
+
+xp<- make.surface.grid( list(x=seq(-.5,.5,,24),y= seq( -.5,.5,,24)) )
+true<- cbind( 3*xp[,1]**2 , 3*xp[,2]**2)
+predictDerivative.Krig( obj, xp)-> outd2
+look<- as.surface( xp, outd2[,1])
+rmse<-sqrt(mean((true[,1] - outd2[,1])**2))/sqrt(mean(true[,1]**2))
+test.for.zero( rmse,0, tol=5e-3,relative=FALSE,
+               tag="stationary.cov/RadialBasis cubic 2-d")
+
+
+#########################
+ x<- make.surface.grid( list(x=seq( -1,1,,20), y=seq( -1,1,,20)))
+ y<- (x[,1]**3 + x[,2]**3)
+
+ obj<- mKrig( x,y, lambda=0, cov.function="wendland.cov", k=3, V=diag(c( 1.1,1.1) ))
+ xp<- make.surface.grid( list(x=seq(-.5,.5,,24),y= seq( -.5,.5,,24)) )
+ predict( obj, xp, derivative=1)-> outd
+ true<- cbind( 3*xp[,1]**2 , 3*xp[,2]**2)
+ rmse<-sqrt(mean((true[,1] - outd[,1])**2)/mean(true[,1]**2))
+ test.for.zero( rmse,0, tol=5e-3,relative=FALSE)
+
+ obj<- Tps( x,y,lambda=0)
+ predictDerivative.Krig( obj, xp, derivative=1)-> outd
+ look<- as.surface( xp, outd[,1])
+ rmse<-sqrt(mean((true[,1] - outd[,1])**2)/mean(true[,1]**2))
+ test.for.zero( rmse,0, tol=2e-4,relative=FALSE, tag="Tps derivative x1")
+ rmse<-sqrt(mean((true[,2] - outd[,2])**2)/mean(true[,2]**2))
+ test.for.zero( rmse,0, tol=2e-4,relative=FALSE, tag="Tps derivative x2")
+
+cat("done with derivative tests", fill=TRUE)
+options( echo=TRUE)
+
diff --git a/tests/derivative.test.Rout.save b/tests/derivative.test.Rout.save
new file mode 100644
index 0000000..2a66be6
--- /dev/null
+++ b/tests/derivative.test.Rout.save
@@ -0,0 +1,108 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+> #library( fields, lib.loc="lib.test")
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options(echo=FALSE)
+Testing: radial basis function exact
+PASSED test at tolerance 1e-06
+Testing: sanity test of stationary.cov with RadialBasis
+PASSED test at tolerance 1e-08
+Testing: sanity test of Rad.cov
+PASSED test at tolerance 1e-08
+Testing: stat.cov Radbas C multiply
+PASSED test at tolerance 1e-08
+Testing: Rad.cov C multiply
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-06
+Testing: Wendland exact
+PASSED test at tolerance 1e-06
+PASSED test at tolerance 1e-08
+Testing: Wendland C multiply
+PASSED test at tolerance 1e-08
+Testing: Wendland.cov theta=1.0
+PASSED test at tolerance 1e-07
+Testing: Wendland.cov theta=.75
+PASSED test at tolerance 1e-07
+Testing: stationary.cov and exact C multiply for Wendland
+PASSED test at tolerance 1e-08
+Testing: Wendland C multiply
+PASSED test at tolerance 1e-08
+Testing: 2-d derivs from wend.cov/mKrig
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+Testing: 2-d derivs from stationary-wend/mKrig
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+Testing: wendland.cov quad 2-d
+PASSED test at tolerance 0.005
+Testing: stationary.cov/Wendland cubic 2-d
+PASSED test at tolerance 0.01
+Warning:
+Grid searches over lambda (nugget and sill variances) with minima at the endpoints:
+ (REML) Restricted maximum likelihood
+ minimum at right endpoint lambda = 1.938365e-08 (eff. df= 380 )
+Testing: stationary.cov/RadialBasis cubic 2-d
+PASSED test at tolerance 0.005
+PASSED test at tolerance 0.005
+Warning:
+Grid searches over lambda (nugget and sill variances) with minima at the endpoints:
+ (GCV) Generalized Cross-Validation
+ minimum at right endpoint lambda = 4.881835e-06 (eff. df= 380 )
+Testing: Tps derivative x1
+PASSED test at tolerance 2e-04
+Testing: Tps derivative x2
+PASSED test at tolerance 2e-04
+done with derivative tests
+>
+>
+> proc.time()
+ user system elapsed
+ 3.169 0.241 3.408
diff --git a/tests/diag.multiply.test.R b/tests/diag.multiply.test.R
new file mode 100644
index 0000000..c193954
--- /dev/null
+++ b/tests/diag.multiply.test.R
@@ -0,0 +1,36 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+library( fields)
+options( echo=FALSE)
+set.seed( 234)
+test.for.zero.flag<- 1
+n <- 5
+m <- 4
+mat <- array(rnorm(n*m),c(n,m))
+mat2 <- array(rnorm(n*m),c(m,n))
+vec <- rnorm(n)
+vec2 <- rnorm(n)
+
+test.for.zero( mat2 %*% mat, mat2%d*%mat, tol=1e-8 )
+
+test.for.zero( (diag(vec)%*% mat), (vec%d*%mat), tol=1e-8 )
+
+
+test.for.zero( diag(vec)%*% vec2, vec%d*%vec2,tol=1e-8)
+cat("All done with testing diag multiply", fill=TRUE)
+options(echo=TRUE)
diff --git a/tests/diag.multiply.test.Rout.save b/tests/diag.multiply.test.Rout.save
new file mode 100644
index 0000000..d12d8e1
--- /dev/null
+++ b/tests/diag.multiply.test.Rout.save
@@ -0,0 +1,59 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options( echo=FALSE)
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+All done with testing diag multiply
+>
+> proc.time()
+ user system elapsed
+ 0.462 0.032 0.487
diff --git a/tests/diagonal2.test.R b/tests/diagonal2.test.R
new file mode 100644
index 0000000..81d3dae
--- /dev/null
+++ b/tests/diagonal2.test.R
@@ -0,0 +1,47 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+library( fields)
+options(echo=FALSE)
+test.for.zero.flag<- 1
+
+n<- 50
+x<- matrix( runif(n*2), n, 2)
+A<- Exp.cov( x,x,theta=.2)
+B<- Exp.cov( x,x, theta=.5)
+
+fields.diagonalize(A,B)-> look
+fields.diagonalize2(A,B, verbose=FALSE)-> look2
+
+test.for.zero( look$D, look2$D, tol=1E-8,tag="eigenvalues of both versions")
+
+G1<- look$G
+G2<-look2$G
+a1<- sign( G1[1,])
+a2<- sign(G2[1,])
+a<- a1*a2
+
+
+lambda<- .8
+test.for.zero( solve( A + lambda* B), G2%*%diag( 1/(1+ lambda*look2$D))%*%t(G2), tag="inverse A+lambda*B", tol=1e-8 )
+test.for.zero( solve( A + lambda* B), G1%*%diag( 1/(1+ lambda*look$D))%*%t(G1), tag="inverse A+lambda*B", tol=1e-8 )
+test.for.zero( G2%*%diag( 1/(1+ lambda*look2$D))%*%t(G2) ,
+ G1%*%diag( 1/(1+ lambda*look$D))%*%t(G1), tag="inverse A+lambda*B" , tol=1e-8)
+
+options( echo=TRUE)
+cat("all done testing both versions of simultaneous diagonalization ", fill=TRUE)
+
diff --git a/tests/diagonal2.test.Rout.save b/tests/diagonal2.test.Rout.save
new file mode 100644
index 0000000..62c5348
--- /dev/null
+++ b/tests/diagonal2.test.Rout.save
@@ -0,0 +1,66 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options(echo=FALSE)
+Testing: eigenvalues of both versions
+PASSED test at tolerance 1e-08
+Testing: inverse A+lambda*B
+PASSED test at tolerance 1e-08
+Testing: inverse A+lambda*B
+PASSED test at tolerance 1e-08
+Testing: inverse A+lambda*B
+PASSED test at tolerance 1e-08
+> cat("all done testing both versions of simultaneous diagonalization ", fill=TRUE)
+all done testing both versions of simultaneous diagonalization
+>
+>
+> proc.time()
+ user system elapsed
+ 0.506 0.034 0.531
diff --git a/tests/evlpoly.test.R b/tests/evlpoly.test.R
new file mode 100644
index 0000000..38a9c71
--- /dev/null
+++ b/tests/evlpoly.test.R
@@ -0,0 +1,69 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+library( fields)
+options( echo=FALSE)
+test.for.zero.flag<-1
+
+set.seed( 245)
+
+x<- runif(3)
+
+coef<- runif( 5)
+temp<- fields.evlpoly( x, coef)
+
+temp2<- coef[1]
+
+for( k in (2:5) ){
+temp2<- temp2 + coef[k]*x**(k-1)
+}
+
+test.for.zero( temp, temp2)
+
+
+set.seed( 124)
+x<- matrix( runif(12), ncol=3)
+
+fields.mkpoly(x, m=3)-> out
+
+attr( out, "ptab")-> ptab
+
+J<- nrow( ptab)
+
+coef<- runif( J)
+temp<- fields.evlpoly2( x, coef, ptab)
+
+temp2<-out%*% coef
+
+test.for.zero( temp,temp2)
+
+fields.derivative.poly( x, m=3, coef)-> temp
+
+fields.mkpoly( cbind( x[,1:2], x[,3]+1e-6), m=3)%*% coef-> temp2
+fields.mkpoly( cbind( x[,1:2], x[,3]-1e-6), m=3)%*% coef-> temp3
+temp2<- (temp2- temp3)/ 2e-6
+
+test.for.zero( temp[,3], temp2)
+
+cat("Done testing polynomial evaluation",fill=TRUE)
+
+options( echo=FALSE)
+
+
+
+
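The test above checks fields.evlpoly against a term-by-term power sum, and fields.derivative.poly against a central difference. A hypothetical NumPy stand-in for the evaluator using Horner's rule (the name evlpoly here is illustrative, not fields' code):

```python
import numpy as np

def evlpoly(x, coef):
    # Evaluate p(x) = coef[0] + coef[1]*x + ... + coef[k-1]*x^(k-1)
    # via Horner's rule, which avoids forming explicit powers of x.
    out = np.zeros_like(x, dtype=float)
    for c in reversed(coef):
        out = out * x + c
    return out

rng = np.random.default_rng(245)
x = rng.uniform(size=3)
coef = rng.uniform(size=5)
# Reference: the same naive sum the R test uses.
naive = sum(c * x ** k for k, c in enumerate(coef))
assert np.allclose(evlpoly(x, coef), naive)
```

The derivative check in the R test works the same way in spirit: perturb one coordinate by ±1e-6 and compare (p(x+h) - p(x-h)) / 2h against the analytic derivative.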
diff --git a/tests/evlpoly.test.Rout.save b/tests/evlpoly.test.Rout.save
new file mode 100644
index 0000000..5e6df88
--- /dev/null
+++ b/tests/evlpoly.test.Rout.save
@@ -0,0 +1,59 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options( echo=FALSE)
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+Done testing polynomial evaluation
+> proc.time()
+ user system elapsed
+ 0.510 0.041 0.532
diff --git a/tests/fastTpsPredict.test.R b/tests/fastTpsPredict.test.R
new file mode 100644
index 0000000..eee7fa9
--- /dev/null
+++ b/tests/fastTpsPredict.test.R
@@ -0,0 +1,67 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+library( fields)
+options(echo=FALSE)
+test.for.zero.flag<-1
+set.seed(123)
+nc<- 10
+center<- matrix( runif(nc*2), nc,2)
+grid.list<- list( x= seq(0,1,,10), y=seq( 0,1,,15))
+coef<- rnorm( nc)
+delta<- .3
+
+out<- multWendlandGrid( grid.list,center, delta, coef)
+
+
+xg<- make.surface.grid( grid.list)
+test<- Wendland2.2( rdist( xg, center)/delta)%*% coef
+test.for.zero.flag<-1
+test.for.zero( test, out, tag="Comparing FORTRAN grid eval to matrix vector multiplication")
+
+
+
+
+
+# testing predictSurface function
+nc<- 100
+set.seed(12)
+x<- matrix( runif(nc*2), nc,2)
+
+y<- rnorm( nc)
+delta<- .2
+obj<- fastTps( x,y, theta=delta, lambda=.1)
+
+grid.list<- list( x= seq(0,1,,3), y=seq( 0,1,,4))
+xg<- make.surface.grid( grid.list)
+look0<- c(predict( obj, xg))
+look1<- predictSurface( obj, grid.list, extrap=TRUE)
+look2<- predict.mKrig( obj, xg)
+test.for.zero( look0, c(look1$z), tag="testing PredictSurface and predict.fastTps")
+test.for.zero( look0, c(look2), tag="testing PredictSurface with slower mKrig predict")
+
+# new y
+set.seed(123)
+ ynew<- rnorm( nc)
+ look0<- c(predict( obj, xg, ynew=ynew))
+ look1<- predictSurface( obj, grid.list, ynew=ynew, extrap=TRUE)
+ look2<- c(predict(fastTps( x,ynew, theta=delta, lambda=.1) , xg, ynew=ynew))
+ test.for.zero( look0, look2,tag="predict with ynew")
+ test.for.zero( look0, c(look1$z), tag="predictSurface with ynew")
+options( echo=TRUE)
+#
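The first test above compares multWendlandGrid (the FORTRAN grid evaluator) against an explicit kernel-matrix/vector product. A NumPy sketch of that reference computation, assuming the compactly supported Wendland (k=2, dimension 2) kernel normalized to 1 at distance zero; the kernel formula is an assumption here, not copied from fields' source:

```python
import numpy as np

def wendland22(d):
    # Wendland k=2, d=2 kernel (assumed form): support on [0, 1],
    # identically zero beyond the scaled radius.
    d = np.minimum(d, 1.0)
    return (1 - d) ** 6 * (35 * d ** 2 + 18 * d + 3) / 3

def kernel_surface(grid_xy, centers, delta, coef):
    # Pairwise Euclidean distances, scaled by the support radius delta,
    # then a kernel-matrix times coefficient-vector product.
    diff = grid_xy[:, None, :] - centers[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1)) / delta
    return wendland22(d) @ coef

rng = np.random.default_rng(123)
centers = rng.uniform(size=(10, 2))
coef = rng.standard_normal(10)
gx, gy = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 15))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
z = kernel_surface(grid_xy, centers, 0.3, coef)
assert z.shape == (150,)
```

The compact support is the point of the grid code: only centers within delta of a grid point contribute, so a dedicated evaluator can skip most of the kernel matrix.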
diff --git a/tests/fastTpsPredict.test.Rout.save b/tests/fastTpsPredict.test.Rout.save
new file mode 100644
index 0000000..c54915d
--- /dev/null
+++ b/tests/fastTpsPredict.test.Rout.save
@@ -0,0 +1,67 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options(echo=FALSE)
+Testing: Comparing FORTRAN grid eval to matrix vector multiplication
+PASSED test at tolerance 1e-08
+Testing: testing PredictSurface and predict.fastTps
+PASSED test at tolerance 1e-08
+Testing: testing PredictSurface with slower mKrig predict
+PASSED test at tolerance 1e-08
+Testing: predict with ynew
+PASSED test at tolerance 1e-08
+Testing: predictSurface with ynew
+PASSED test at tolerance 1e-08
+> #
+>
+> proc.time()
+ user system elapsed
+ 0.990 0.037 1.019
diff --git a/tests/mKrig.MLE.test.R b/tests/mKrig.MLE.test.R
new file mode 100644
index 0000000..1ee6f77
--- /dev/null
+++ b/tests/mKrig.MLE.test.R
@@ -0,0 +1,101 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+library( fields )
+options( echo=FALSE)
+test.for.zero.flag<- 1
+
+#
+##### generate test data
+#
+
+genCovMat = function(x, theta, lambda) {
+ distanceMatrix<- rdist(x,x)
+ Sigma<- exp( -distanceMatrix/theta ) + diag(x=lambda, nrow=nrow(distanceMatrix))
+ return(Sigma)
+}
+
+#generate observation locations
+n=500
+x = matrix(runif(2*n), nrow=n)
+
+#generate observations at the locations
+trueTheta = .2
+trueLambda = .1
+Sigma = genCovMat(x, trueTheta, trueLambda)
+
+U = chol(Sigma)
+y = t(U)%*%as.vector(rnorm(n))
+
+#
+######set MLE computation parameters
+#
+
+testThetas = seq(from=trueTheta/2, to=2*trueTheta, length=20)
+par.grid=list(theta=testThetas)
+guessLambda = trueLambda
+
+#
+##### test using distance matrix
+#
+
+print("testing using distance matrix")
+
+set.seed(1)
+out1 = mKrig.MLE(x, y, lambda=guessLambda, par.grid=par.grid,
+ cov.args= list(Distance="rdist"))
+lambda.MLE = out1$lambda.MLE
+theta.MLE = out1$cov.args.MLE$theta
+
+#perform mKrig at MLE parameters
+out1 = mKrig(x, y, lambda=lambda.MLE, theta=theta.MLE, cov.args= list(Distance="rdist"))
+print("finished default case")
+
+set.seed(1)
+out2 = mKrig.MLE(x, y, lambda=guessLambda, par.grid=par.grid)
+lambda.MLE = out2$lambda.MLE
+theta.MLE = out2$cov.args.MLE$theta
+
+#perform mKrig at MLE parameters
+out2 = mKrig(x, y, lambda=lambda.MLE, theta=theta.MLE)
+
+print("finished compact distance matrix case")
+
+#
+##### test compatibility with other fields functions
+#
+
+temp1<- predict( out1)
+temp2<- predict( out2)
+test.for.zero( temp1, temp2, tag="predict compatibility: rdist with compact versus normal rdist")
+
+#
+##### test SE
+#
+
+temp1 = predictSE(out1)
+temp2 = predictSE(out2)
+
+test.for.zero( temp1, temp2, tag="predictSE compatibility: rdist with compact versus normal rdist")
+
+
+
+
+
+cat("all done with mKrig.MLE tests", fill=TRUE)
+options( echo=TRUE)
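The data generator in the test above draws y ~ N(0, Sigma) with an exponential covariance plus a nugget, Sigma = exp(-D/theta) + lambda*I, via a Cholesky factor. The same construction in NumPy (synthetic sizes; an illustration, not fields' code):

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta, lam = 200, 0.2, 0.1
x = rng.uniform(size=(n, 2))                 # observation locations
diff = x[:, None, :] - x[None, :, :]
D = np.sqrt((diff ** 2).sum(-1))             # pairwise distance matrix
Sigma = np.exp(-D / theta) + lam * np.eye(n) # exponential cov + nugget
L = np.linalg.cholesky(Sigma)                # lower-triangular, Sigma = L L^T
y = L @ rng.standard_normal(n)               # y has covariance L L^T = Sigma
assert np.allclose(L @ L.T, Sigma)
```

Note R's chol() returns the upper factor, which is why the R test multiplies by t(U); NumPy's cholesky returns the lower factor directly.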
diff --git a/tests/mKrig.MLE.test.Rout.save b/tests/mKrig.MLE.test.Rout.save
new file mode 100644
index 0000000..4386d2f
--- /dev/null
+++ b/tests/mKrig.MLE.test.Rout.save
@@ -0,0 +1,64 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> library( fields )
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options( echo=FALSE)
+[1] "testing using distance matrix"
+[1] "finished default case"
+[1] "finished compact distance matrix case"
+Testing: predict compatibility: rdist with compact versus normal rdist
+PASSED test at tolerance 1e-08
+Testing: predictSE compatibility: rdist with compact versus normal rdist
+PASSED test at tolerance 1e-08
+all done with mKrig.MLE tests
+>
+> proc.time()
+ user system elapsed
+ 9.053 0.397 9.453
diff --git a/tests/mKrig.Z.R b/tests/mKrig.Z.R
new file mode 100644
index 0000000..2664e78
--- /dev/null
+++ b/tests/mKrig.Z.R
@@ -0,0 +1,54 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+library( fields)
+options( echo=FALSE)
+test.for.zero.flag<- 1
+
+data(COmonthlyMet)
+y<- CO.tmin.MAM.climate
+good<- !is.na( y)
+y<-y[good]
+x<- CO.loc[good,]
+Z<- CO.elev[good]
+out<- mKrig( x,y, Z=Z, cov.function="stationary.cov", Covariance="Matern",
+ theta=4.0, smoothness=1.0, lambda=.1)
+
+out2<- Krig( x,y, Z=Z, cov.function="stationary.cov", Covariance="Matern",
+ theta=4.0, smoothness=1.0, lambda=.1, GCV=TRUE)
+
+test.for.zero( predict( out), predict(out2), tag="Full prediction")
+test.for.zero( predict( out, drop.Z=TRUE), predict(out2, drop.Z=TRUE), tag=" prediction dropping Z")
+
+xnew<- CO.loc[!good,]
+Znew<- CO.elev[!good]
+temp1<- predict( out, xnew=xnew, drop.Z=TRUE)
+temp2<- predict( out2, x=xnew, drop.Z=TRUE)
+test.for.zero( temp1,temp2, tag="new x's dropping Z")
+
+temp1<- predict( out, xnew=xnew, Z=Znew)
+temp2<- predict( out2, x=xnew, Z=Znew)
+test.for.zero( temp1,temp2, tag="new x's new Z's")
+
+temp1<- predictSurface( out, nx=20, ny=20, drop.Z=TRUE, extrap=TRUE)
+temp2<- predictSurface( out2, nx=20, ny=20, drop.Z=TRUE, extrap=TRUE)
+test.for.zero( temp1$z,temp2$z, tag="predicting on surface with drop.Z")
+
+
+cat("all done with mKrig Z tests", fill=TRUE)
+options( echo=TRUE)
+
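The drop.Z comparisons above rest on a simple decomposition: the fitted mean is X d + Z b plus the spatial part, and drop.Z reuses the same coefficients while omitting the Z b term, so the two predictions differ exactly by Z times its coefficient. A toy NumPy illustration (all names and numbers here are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
X = np.column_stack([np.ones(n), rng.uniform(size=(n, 2))])  # 1, lon, lat
Z = rng.uniform(size=n)                                      # e.g. elevation
d = np.array([1.0, -0.5, 2.0])   # fixed-effect coefficients (illustrative)
b = 0.8                          # covariate coefficient (illustrative)

full = X @ d + Z * b     # prediction with the covariate term
dropped = X @ d          # prediction with drop.Z
assert np.allclose(full - dropped, Z * b)
```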
diff --git a/tests/mKrig.Z.Rout.save b/tests/mKrig.Z.Rout.save
new file mode 100644
index 0000000..7f1181f
--- /dev/null
+++ b/tests/mKrig.Z.Rout.save
@@ -0,0 +1,67 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options( echo=FALSE)
+Testing: Full prediction
+PASSED test at tolerance 1e-08
+Testing: prediction dropping Z
+PASSED test at tolerance 1e-08
+Testing: new x's dropping Z
+PASSED test at tolerance 1e-08
+Testing: new x's new Z's
+PASSED test at tolerance 1e-08
+Testing: predicting on surface with drop.Z
+PASSED test at tolerance 1e-08
+all done with mKrig Z tests
+>
+>
+> proc.time()
+ user system elapsed
+ 1.140 0.056 1.189
diff --git a/tests/mKrig.parameters.test.R b/tests/mKrig.parameters.test.R
new file mode 100644
index 0000000..3489015
--- /dev/null
+++ b/tests/mKrig.parameters.test.R
@@ -0,0 +1,240 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+library( fields)
+#options( echo=FALSE)
+test.for.zero.flag<- 1
+data(ozone2)
+y<- ozone2$y[16,]
+x<- ozone2$lon.lat
+#
+# Omit the NAs
+good<- !is.na( y)
+x<- x[good,]
+y<- y[good]
+#source("~/Home/Src/fields/R/mKrig.family.R")
+
+# now look at mKrig w/o sparse matrix
+look<- mKrig( x,y, cov.function="stationary.cov", theta=10, lambda=.3,
+ chol.args=list( pivot=FALSE))
+
+
+lookKrig<- Krig( x,y, cov.function="stationary.cov",
+ theta=10)
+
+test.df<-Krig.ftrace(look$lambda,lookKrig$matrices$D)
+
+test<- Krig.coef( lookKrig, lambda=look$lambda)
+
+test.for.zero( look$d, test$d, tag="Krig mKrig d coef")
+test.for.zero( look$c, test$c, tag="Krig mKrig c coef")
+
+# test of trace calculation
+
+look<- mKrig( x,y, cov.function="stationary.cov", theta=10, lambda=.3,
+
+ find.trA=TRUE, NtrA= 1000, iseed=243)
+
+test.for.zero( look$eff.df, test.df,tol=.01, tag="Monte Carlo eff.df")
+
+
+#
+lookKrig<-Krig( x,y, cov.function="stationary.cov",
+ theta=350, Distance="rdist.earth",Covariance="Wendland",
+ cov.args=list( k=2, dimension=2) )
+
+look<- mKrig( x,y, cov.function="stationary.cov",
+ theta=350,
+ Distance="rdist.earth",Covariance="Wendland",
+ cov.args=list( k=2, dimension=2),
+ lambda=lookKrig$lambda,
+ find.trA=TRUE, NtrA= 1000, iseed=243)
+
+test.for.zero( look$c, lookKrig$c, tag="Test of wendland and great circle")
+
+test.for.zero(look$eff.df, Krig.ftrace( lookKrig$lambda, lookKrig$matrices$D)
+ ,tol=.01, tag="eff.df")
+
+# same calculation using sparse matrices.
+
+look4<- mKrig( x,y, cov.function="wendland.cov",
+ theta=350,
+ Dist.args=list( method="greatcircle"),
+ cov.args=list( k=2),
+ lambda=lookKrig$lambda,
+ find.trA=TRUE, NtrA=500, iseed=243)
+
+test.for.zero( look$c, look4$c,tol=8e-7,
+ tag="Test of sparse wendland and great circle")
+test.for.zero(look4$eff.df, Krig.ftrace( lookKrig$lambda, lookKrig$matrices$D),
+ tol=.01, tag="sparse eff.df")
+
+# great circle distance switch has been a big bug -- test some options
+
+look<- mKrig( x,y, cov.function="wendland.cov",
+ theta=350, Dist.args=list( method="greatcircle"),
+ cov.args=list( k=2),lambda=lookKrig$lambda,
+ find.trA=TRUE, NtrA=1000, iseed=243)
+
+test.for.zero(look$eff.df, Krig.ftrace( lookKrig$lambda, lookKrig$matrices$D),
+ tol=1e-2, tag="exact sparse eff.df")
+
+# compare to fast Tps
+look3<- fastTps( x,y,theta=350,lambda=lookKrig$lambda, NtrA=200, iseed=243,
+ lon.lat=TRUE)
+#look3$c<- lookKrig$c
+#look3$d<- lookKrig$d
+object<- look3
+np<- object$np
+Ey <- diag(1, np)
+NtrA <- np
+hold <- predict.mKrig(object, ynew = Ey, collapseFixedEffect=FALSE)
+hold2<- matrix( NA, np,np)
+for( k in 1:np){
+hold2[,k] <- predict.Krig(lookKrig, y = Ey[,k])
+}
+#plot( diag(hold), diag(hold2))
+
+
+test.for.zero( look3$c, lookKrig$c, tol=5e-7)
+test.for.zero( look3$d, lookKrig$d, tol=2e-8)
+test.for.zero( look3$fitted.values, lookKrig$fitted.values, tol=1e-7)
+
+test.for.zero( predict( look3, xnew= look3$x), predict( lookKrig, xnew= lookKrig$x),
+ tol=5e-7)
+
+test.for.zero( hold[,1], hold2[,1], tol=1e-7, relative=FALSE)
+
+test.for.zero(diag(hold),diag(hold2), tol=2E-7,
+ relative=FALSE, tag="exact sparse eff.df by predict -- fastTps")
+#plot( diag(hold), ( 1- diag(hold2)/ diag(hold)) )
+
+test.for.zero(look3$eff.df,sum( diag(hold)) , tag="fastTps ef.df exact" )
+
+test.for.zero(look3$eff.df, Krig.ftrace( lookKrig$lambda, lookKrig$matrices$D),
+ tol=2e-7, tag="exact sparse eff.df through mKrig-- fastTps")
+
+# calculations of likelihood, rho and sigma
+
+lam<-.2
+
+out<- mKrig( x,y, cov.function =Exp.cov, theta=4, lambda=lam)
+out2<- Krig( x,y, cov.function =Exp.cov, theta=4, lambda=lam)
+
+
+Sigma<- Exp.cov( x,x,theta=4)
+X<- cbind( rep(1, nrow(x)), x)
+
+Sinv<- solve( Sigma + lam* diag( 1, nrow( x)))
+
+#checks on likelihoods
+
+# quadratic form:
+dhat<- c(solve( t(X)%*%Sinv%*%(X) ) %*% t(X) %*%Sinv%*%y)
+test.for.zero( dhat, out$d, tag="initial check on d for likelihood")
+r<- y -X%*%dhat
+N<- nrow(x)
+look<- t( r)%*%(Sinv)%*%r/N
+
+
+
+test.for.zero( look, out$rho.MLE, tag="rho hat from likelihood")
+
+test.for.zero( look, out2$rhohat, tag="rho hat from likelihood compared to Krig")
+
+
+
+# check determinant
+lam<- .2
+Sigma<- Exp.cov( x,x,theta=4)
+M<- Sigma + lam * diag( 1, nrow(x))
+chol( M)-> Mc
+look2<- sum( log(diag( Mc)))*2
+
+out<-mKrig( x,y,cov.function =Exp.cov, theta=4, lambda=lam)
+
+test.for.zero( out$lnDetCov, look2)
+test.for.zero( out$lnDetCov, determinant(M, log=TRUE)$modulus)
+
+# weighted version
+lam<- .2
+Sigma<- Exp.cov( x,x,theta=4)
+set.seed( 123)
+weights<- runif(nrow( x))
+M<- Sigma + diag(lam/ weights)
+chol( M)-> Mc
+look2<- sum( log(diag( Mc)))*2
+
+out<-mKrig( x,y,weights=weights, cov.function =Exp.cov, theta=4, lambda=lam)
+
+test.for.zero( out$lnDetCov, look2)
+test.for.zero( look2, determinant(M, log=TRUE)$modulus)
+test.for.zero( out$lnDetCov, determinant(M, log=TRUE)$modulus)
+
+
+
+# check profile likelihood by estimating MLE
+lam.true<- .2
+N<- nrow( x)
+Sigma<- Exp.cov( x,x,theta=4)
+M<- Sigma + lam.true * diag( 1, nrow(x))
+chol( M)-> Mc
+t(Mc)%*%Mc -> test
+
+
+
+
+##D set.seed( 234)
+##D NSIM<- 100
+##D hold2<-rep( NA, NSIM)
+##D temp.fun<- function(lglam){
+##D out<-mKrig( x,ytemp,
+##D cov.function =Exp.cov, theta=4, lambda=exp(lglam))
+##D return(-1* out$lnProfileLike)}
+
+##D hold1<-rep( NA, NSIM)
+##D yt<- rep( 1, N)
+##D obj<- Krig( x,yt, theta=4)
+
+
+##D E<- matrix( rnorm( NSIM*N), ncol=NSIM)
+
+##D for ( j in 1:NSIM){
+##D cat( j, " ")
+##D ytemp <- x%*%c(1,2) + t(Mc)%*%E[,j]
+##D out<- optim( log(.2), temp.fun, method="BFGS")
+##D hold2[j]<- exp(out$par)
+##D hold1[j]<- gcv.Krig(obj, y=ytemp)$lambda.est[6,1]
+
+##D }
+##D test.for.zero( median( hold1), .2, tol=.08)
+##D test.for.zero( median( hold2), .2, tol=.12)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
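The likelihood section of the test above checks three closed forms against mKrig's internals: the GLS fixed effect d_hat = (X^T S^{-1} X)^{-1} X^T S^{-1} y, the profiled variance rho_hat = r^T S^{-1} r / N with r = y - X d_hat, and the log determinant of S = Sigma + lambda*I computed from a Cholesky factor. A NumPy sketch of those formulas on synthetic data (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
N, lam = 50, 0.2
x = rng.uniform(size=(N, 2))
y = rng.standard_normal(N)
D = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(-1))
S = np.exp(-D / 4.0) + lam * np.eye(N)       # exponential cov + lambda*I
X = np.column_stack([np.ones(N), x])          # linear fixed-effect basis

Sinv = np.linalg.inv(S)
# GLS estimate of the fixed effect.
d_hat = np.linalg.solve(X.T @ Sinv @ X, X.T @ Sinv @ y)
r = y - X @ d_hat
# Profiled variance (the test's rho.MLE check).
rho_hat = r @ Sinv @ r / N
# Log determinant via Cholesky, as in the test's lnDetCov check:
# log det S = 2 * sum(log diag(chol(S))).
logdet = 2 * np.log(np.diag(np.linalg.cholesky(S))).sum()
assert np.isclose(logdet, np.linalg.slogdet(S)[1])
```

The Cholesky route is what makes the determinant check cheap: the factor is already available from solving the kriging system, so the log determinant comes for free from its diagonal.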
diff --git a/tests/mKrig.parameters.test.Rout.save b/tests/mKrig.parameters.test.Rout.save
new file mode 100644
index 0000000..8f70dd8
--- /dev/null
+++ b/tests/mKrig.parameters.test.Rout.save
@@ -0,0 +1,314 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> #options( echo=FALSE)
+> test.for.zero.flag<- 1
+> data(ozone2)
+> y<- ozone2$y[16,]
+> x<- ozone2$lon.lat
+> #
+> # Omit the NAs
+> good<- !is.na( y)
+> x<- x[good,]
+> y<- y[good]
+> #source("~/Home/Src/fields/R/mKrig.family.R")
+>
+> # now look at mKrig w/o sparse matrix
+> look<- mKrig( x,y, cov.function="stationary.cov", theta=10, lambda=.3,
++ chol.args=list( pivot=FALSE))
+>
+>
+> lookKrig<- Krig( x,y, cov.function="stationary.cov",
++ theta=10)
+>
+> test.df<-Krig.ftrace(look$lambda,lookKrig$matrices$D)
+>
+> test<- Krig.coef( lookKrig, lambda=look$lambda)
+>
+> test.for.zero( look$d, test$d, tag="Krig mKrig d coef")
+Testing: Krig mKrig d coef
+PASSED test at tolerance 1e-08
+> test.for.zero( look$c, test$c, tag="Krig mKrig c coef")
+Testing: Krig mKrig c coef
+PASSED test at tolerance 1e-08
+>
+> # test of trace calculation
+>
+> look<- mKrig( x,y, cov.function="stationary.cov", theta=10, lambda=.3,
++
++ find.trA=TRUE, NtrA= 1000, iseed=243)
+>
+> test.for.zero( look$eff.df, test.df,tol=.01, tag="Monte Carlo eff.df")
+Testing: Monte Carlo eff.df
+PASSED test at tolerance 0.01
+>
+>
+> #
+> lookKrig<-Krig( x,y, cov.function="stationary.cov",
++ theta=350, Distance="rdist.earth",Covariance="Wendland",
++ cov.args=list( k=2, dimension=2) )
+>
+> look<- mKrig( x,y, cov.function="stationary.cov",
++ theta=350,
++ Distance="rdist.earth",Covariance="Wendland",
++ cov.args=list( k=2, dimension=2),
++ lambda=lookKrig$lambda,
++ find.trA=TRUE, NtrA= 1000, iseed=243)
+>
+> test.for.zero( look$c, lookKrig$c, tag="Test of wendland and great circle")
+Testing: Test of wendland and great circle
+PASSED test at tolerance 1e-08
+>
+> test.for.zero(look$eff.df, Krig.ftrace( lookKrig$lambda, lookKrig$matrices$D)
++ ,tol=.01, tag="eff.df")
+Testing: eff.df
+PASSED test at tolerance 0.01
+>
+> # same calculation using sparse matrices.
+>
+> look4<- mKrig( x,y, cov.function="wendland.cov",
++ theta=350,
++ Dist.args=list( method="greatcircle"),
++ cov.args=list( k=2),
++ lambda=lookKrig$lambda,
++ find.trA=TRUE, NtrA=500, iseed=243)
+>
+> test.for.zero( look$c, look4$c,tol=8e-7,
++ tag="Test of sparse wendland and great circle")
+Testing: Test of sparse wendland and great circle
+PASSED test at tolerance 8e-07
+> test.for.zero(look4$eff.df, Krig.ftrace( lookKrig$lambda, lookKrig$matrices$D),
++ tol=.01, tag="sparse eff.df")
+Testing: sparse eff.df
+PASSED test at tolerance 0.01
+>
+> # great circle distance switch has been a big bug -- test some options
+>
+> look<- mKrig( x,y, cov.function="wendland.cov",
++ theta=350, Dist.args=list( method="greatcircle"),
++ cov.args=list( k=2),lambda=lookKrig$lambda,
++ find.trA=TRUE, NtrA=1000, iseed=243)
+>
+> test.for.zero(look$eff.df, Krig.ftrace( lookKrig$lambda, lookKrig$matrices$D),
++ tol=1e-2, tag="exact sparse eff.df")
+Testing: exact sparse eff.df
+PASSED test at tolerance 0.01
+>
+> # compare to fast Tps
+> look3<- fastTps( x,y,theta=350,lambda=lookKrig$lambda, NtrA=200, iseed=243,
++ lon.lat=TRUE)
+> #look3$c<- lookKrig$c
+> #look3$d<- lookKrig$d
+> object<- look3
+> np<- object$np
+> Ey <- diag(1, np)
+> NtrA <- np
+> hold <- predict.mKrig(object, ynew = Ey, collapseFixedEffect=FALSE)
+> hold2<- matrix( NA, np,np)
+> for( k in 1:np){
++ hold2[,k] <- predict.Krig(lookKrig, y = Ey[,k])
++ }
+> #plot( diag(hold), diag(hold2))
+>
+>
+> test.for.zero( look3$c, lookKrig$c, tol=5e-7)
+PASSED test at tolerance 5e-07
+> test.for.zero( look3$d, lookKrig$d, tol=2e-8)
+PASSED test at tolerance 2e-08
+> test.for.zero( look3$fitted.values, lookKrig$fitted.values, tol=1e-7)
+PASSED test at tolerance 1e-07
+>
+> test.for.zero( predict( look3, xnew= look3$x), predict( lookKrig, xnew= lookKrig$x),
++ tol=5e-7)
+PASSED test at tolerance 5e-07
+>
+> test.for.zero( hold[,1], hold2[,1], tol=1e-7, relative=FALSE)
+PASSED test at tolerance 1e-07
+>
+> test.for.zero(diag(hold),diag(hold2), tol=2E-7,
++ relative=FALSE, tag="exact sparse eff.df by predict -- fastTps")
+Testing: exact sparse eff.df by predict -- fastTps
+PASSED test at tolerance 2e-07
+> #plot( diag(hold), ( 1- diag(hold2)/ diag(hold)) )
+>
+> test.for.zero(look3$eff.df,sum( diag(hold)) , tag="fastTps ef.df exact" )
+Testing: fastTps ef.df exact
+PASSED test at tolerance 1e-08
+>
+> test.for.zero(look3$eff.df, Krig.ftrace( lookKrig$lambda, lookKrig$matrices$D),
++ tol=2e-7, tag="exact sparse eff.df through mKrig-- fastTps")
+Testing: exact sparse eff.df through mKrig-- fastTps
+PASSED test at tolerance 2e-07
+>
+> # calculations of likelihood, rho and sigma
+>
+> lam<-.2
+>
+> out<- mKrig( x,y, cov.function =Exp.cov, theta=4, lambda=lam)
+> out2<- Krig( x,y, cov.function =Exp.cov, theta=4, lambda=lam)
+>
+>
+> Sigma<- Exp.cov( x,x,theta=4)
+> X<- cbind( rep(1, nrow(x)), x)
+>
+> Sinv<- solve( Sigma + lam* diag( 1, nrow( x)))
+>
+> #checks on likelihoods
+>
+> # quadratic form:
+> dhat<- c(solve( t(X)%*%Sinv%*%(X) ) %*% t(X) %*%Sinv%*%y)
+> test.for.zero( dhat, out$d, tag="initial check on d for likelihood")
+Testing: initial check on d for likelihood
+PASSED test at tolerance 1e-08
+> r<- y -X%*%dhat
+> N<- nrow(x)
+> look<- t( r)%*%(Sinv)%*%r/N
+>
+>
+>
+> test.for.zero( look, out$rho.MLE, tag="rho hat from likelihood")
+Testing: rho hat from likelihood
+PASSED test at tolerance 1e-08
+>
+> test.for.zero( look, out2$rhohat, tag="rho hat from likelihood compared to Krig")
+Testing: rho hat from likelihood compared to Krig
+PASSED test at tolerance 1e-08
+>
+>
+>
+> # check determinant
+> lam<- .2
+> Sigma<- Exp.cov( x,x,theta=4)
+> M<- Sigma + lam * diag( 1, nrow(x))
+> chol( M)-> Mc
+> look2<- sum( log(diag( Mc)))*2
+>
+> out<-mKrig( x,y,cov.function =Exp.cov, theta=4, lambda=lam)
+>
+> test.for.zero( out$lnDetCov, look2)
+PASSED test at tolerance 1e-08
+> test.for.zero( out$lnDetCov, determinant(M, log=TRUE)$modulus)
+PASSED test at tolerance 1e-08
+>
+> # weighted version
+> lam<- .2
+> Sigma<- Exp.cov( x,x,theta=4)
+> set.seed( 123)
+> weights<- runif(nrow( x))
+> M<- Sigma + diag(lam/ weights)
+> chol( M)-> Mc
+> look2<- sum( log(diag( Mc)))*2
+>
+> out<-mKrig( x,y,weights=weights, cov.function =Exp.cov, theta=4, lambda=lam)
+>
+> test.for.zero( out$lnDetCov, look2)
+PASSED test at tolerance 1e-08
+> test.for.zero( look2, determinant(M, log=TRUE)$modulus)
+PASSED test at tolerance 1e-08
+> test.for.zero( out$lnDetCov, determinant(M, log=TRUE)$modulus)
+PASSED test at tolerance 1e-08
+>
+>
+>
+> # check profile likelihood by estimating MLE
+> lam.true<- .2
+> N<- nrow( x)
+> Sigma<- Exp.cov( x,x,theta=4)
+> M<- Sigma + lam.true * diag( 1, nrow(x))
+> chol( M)-> Mc
+> t(Mc)%*%Mc -> test
+>
+>
+>
+>
+> ##D set.seed( 234)
+> ##D NSIM<- 100
+> ##D hold2<-rep( NA, NSIM)
+> ##D temp.fun<- function(lglam){
+> ##D out<-mKrig( x,ytemp,
+> ##D cov.function =Exp.cov, theta=4, lambda=exp(lglam))
+> ##D return(-1* out$lnProfileLike)}
+>
+> ##D hold1<-rep( NA, NSIM)
+> ##D yt<- rep( 1, N)
+> ##D obj<- Krig( x,yt, theta=4)
+>
+>
+> ##D E<- matrix( rnorm( NSIM*N), ncol=NSIM)
+>
+> ##D for ( j in 1:NSIM){
+> ##D cat( j, " ")
+> ##D ytemp <- x%*%c(1,2) + t(Mc)%*%E[,j]
+> ##D out<- optim( log(.2), temp.fun, method="BFGS")
+> ##D hold2[j]<- exp(out$par)
+> ##D hold1[j]<- gcv.Krig(obj, y=ytemp)$lambda.est[6,1]
+>
+> ##D }
+> ##D test.for.zero( median( hold1), .2, tol=.08)
+> ##D test.for.zero( median( hold2), .2, tol=.12)
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+> proc.time()
+ user system elapsed
+ 2.995 0.119 3.108
diff --git a/tests/mKrig.se.test.R b/tests/mKrig.se.test.R
new file mode 100644
index 0000000..1ed56fb
--- /dev/null
+++ b/tests/mKrig.se.test.R
@@ -0,0 +1,174 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+
+# tests of predictSE
+# against direct linear algebra
+
+library(fields)
+options( echo=FALSE)
+
+test.for.zero.flag<- TRUE
+
+x0<- cbind( 0,4)
+
+Krig( ChicagoO3$x, ChicagoO3$y, cov.function = "Exp.cov", theta=50,
+ lambda=.06, GCV=FALSE)-> out
+
+# direct calculation
+Krig.Amatrix( out, x=x0)-> A
+test.for.zero( A%*%ChicagoO3$y, predict( out, x0),tag="Amatrix vs. predict")
+
+Sigma0<- out$rhohat*Exp.cov( ChicagoO3$x, ChicagoO3$x, theta=50)
+S0<- out$rhohat*c(Exp.cov( x0, x0, theta=50))
+S1<- out$rhohat*Exp.cov( out$x, x0, theta=50)
+
+#yhat= Ay
+#var( f0 - yhat)= var( f0) - 2 cov( f0,yhat)+ cov( yhat)
+
+look<- S0 - t(S1)%*% t(A) - A%*%S1 +
+ A%*% ( Sigma0 + diag(out$shat.MLE**2/out$weightsM))%*% t(A)
+#
+#compare to
+# diagonal elements
+
+
+test2<- predictSE( out, x= x0)
+test.for.zero( sqrt(diag( look)), test2,tag="Marginal predictSE")
+
+
+# now test shortcut formula that leverages the prediction step for Kriging
+#
+
+Sigma<- Exp.cov( ChicagoO3$x, ChicagoO3$x, theta=50) +
+ diag(out$lambda/out$weightsM)
+
+#Sigma<- ( Sigma0 + diag(out$shat.MLE**2/out$weightsM))
+
+Tmatrix <- do.call(out$null.function.name, c(out$null.args,
+ list(x = out$xM, Z = out$ZM)))
+
+Omega<- solve( t(Tmatrix)%*% solve( Sigma)%*% Tmatrix)
+Id<- diag( 1, nrow( Tmatrix))
+
+ qr.R( out$matrices$qr.VT) -> Rmat
+
+Omega.test<- solve(t(Rmat)%*% (Rmat))
+
+# Omega is the GLS covariance matrix for estimated parameters in fixed part of
+# spatial model (the d coefficients). These are usually the "spatial drift" -- a
+# low order polynomial
+
+test.for.zero( Omega, Omega.test, tag="comparing Omega")
+
+# M1 and M2 are matrices that go from obs to the estimated coefficients (d,c)
+
+M1<- Omega%*% t(Tmatrix)%*% solve( Sigma)
+M2<- solve( Sigma)%*% ( Id - Tmatrix%*% M1)
+
+
+x0<- cbind( 0,4)
+
+k0<- Exp.cov( out$x, x0, theta=50)
+
+#k0<- S1
+
+t0<- c( 1, c(x0))
+
+hold<- t( t0)%*%M1 + t(k0)%*% M2
+test.for.zero( hold, A)
+test.for.zero( M2%*%Sigma%*%t( M2), M2)
+
+# benchmark using standard predictSE function
+
+SE0<- predictSE( out, x=x0)
+
+# shortcut formula explicitly
+MSE<- S0 + out$rhohat*t(t0)%*% Omega %*%t0 -
+ out$rhohat*(t(k0)%*% M2 %*% k0 + t(t0)%*% M1%*% k0) -
+ out$rhohat*t(t0)%*% M1%*% k0
+
+# collecting terms to make this look like two predict steps.
+MSE2<- S0 + out$rhohat*t(t0)%*% Omega %*%t0 -
+ out$rhohat* predict( out, yM= k0, x=x0) -
+ out$rhohat* predict( out, yM= k0, x=x0, just.fixed=TRUE)
+
+hold<- Krig.coef(out, y=k0)
+
+tempc<- t(k0)%*% hold$c
+tempd<- t(t0)%*%hold$d
+
+MSE4<- S0 + out$rhohat*t(t0)%*% Omega %*%t0 -
+ out$rhohat * (tempc +2*tempd)
+
+test.for.zero(SE0, sqrt( MSE4), tag="test of formula with explicit d and c")
+
+
+# test of new function
+
+Krig( ChicagoO3$x, ChicagoO3$y, cov.function = "Exp.cov", theta=50,lambda=.06)-> out0
+SE0<- predictSE.Krig( out0, x=x0)
+mKrig( ChicagoO3$x, ChicagoO3$y, cov.function = "Exp.cov", theta=50, lambda=.06)-> out2
+SE3<- predictSE.mKrig( out2, xnew=x0)
+
+test.for.zero(SE0, sqrt( MSE), tag="Krig function and direct formula")
+
+
+test.for.zero(sqrt(MSE), sqrt( MSE2),
+ tag="new predict formula and direct formula")
+
+test.for.zero( SE3, SE0, tag="New se _function_ and old Krig _function_")
+#
+# test of vectors of locations.
+
+
+# recreate object
+out0<- Krig( ChicagoO3$x, ChicagoO3$y, cov.function = "Exp.cov", theta=50, lambda=.06)
+out<- mKrig( ChicagoO3$x, ChicagoO3$y, cov.function = "Exp.cov", theta=50, lambda=.06)
+
+
+x0<-rep( c( -20, -10,10,20),4)
+
+x0 <- cbind( x0 , sort( x0))
+x0<- rbind( c(0,4), x0)
+
+k0<- Exp.cov( ChicagoO3$x,x0, theta=50)
+t0<- t(fields.mkpoly(x0, m=out$m))
+hold<- Krig.coef(out0, y=k0)
+
+MSE5<- (rep( S0,nrow(x0)) +
+ out0$rhohat * colSums( t0 *(out0$matrices$Omega%*%t0))
+ -out0$rhohat* colSums((k0)*hold$c) -
+ 2*out0$rhohat*colSums(t0*hold$d))
+
+hold<- mKrig.coef(out, y=k0, collapse=FALSE)
+MSE6<- (rep( S0,nrow(x0)) +
+ out$rhohat * colSums( t0 *(out$Omega%*%t0))
+ -out$rhohat* colSums((k0)*hold$c) -
+ 2*out$rhohat*colSums(t0*hold$d))
+
+test.for.zero( predictSE( out0, x0), sqrt(MSE5),
+ tag="Benchmark of formula")
+
+test.for.zero( predictSE( out0, x0), sqrt(MSE6),
+ tag="Benchmark of formula mKrig coefs")
+
+test.for.zero( predictSE( out, x0), predictSE.mKrig(out, x0),
+ tag="test function with several locations Krig mKrig functions" )
+
+
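The benchmark in mKrig.se.test.R above builds on the identity var(f0 - A y) = var(f0) - 2 cov(f0, A y) + var(A y); for the simple kriging weights A = t(S1) %*% solve(Sy) this collapses to S0 - t(S1) %*% solve(Sy) %*% S1. A base-R toy check of that collapse (the exponential covariances and locations here are illustrative assumptions, not the test's objects):

```r
# var(f0 - A y) = S0 - A %*% S1 - t(S1) %*% t(A) + A %*% Sy %*% t(A)
# equals S0 - t(S1) %*% solve(Sy) %*% S1 when A = t(S1) %*% solve(Sy).
set.seed(7)
n  <- 15
xs <- runif(n); x0 <- 0.5
Sy <- exp(-as.matrix(dist(xs)) / 0.5) + diag(0.05, n)  # var(y): signal + nugget
S1 <- matrix(exp(-abs(xs - x0) / 0.5), ncol = 1)       # cov(y, f0)
S0 <- 1                                                # var(f0)
A  <- t(S1) %*% solve(Sy)                              # simple kriging weights
MSE.long  <- S0 - A %*% S1 - t(S1) %*% t(A) + A %*% Sy %*% t(A)
MSE.short <- S0 - t(S1) %*% solve(Sy) %*% S1
stopifnot(all.equal(c(MSE.long), c(MSE.short)))
```

The fields tests add the fixed-part (drift) terms via Omega on top of this, but the cancellation of the cross terms is the same.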
diff --git a/tests/mKrig.se.test.Rout.save b/tests/mKrig.se.test.Rout.save
new file mode 100644
index 0000000..7591914
--- /dev/null
+++ b/tests/mKrig.se.test.Rout.save
@@ -0,0 +1,81 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+>
+> # tests of predictSE
+> # against direct linear algebra
+>
+> library(fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options( echo=FALSE)
+Testing: Amatrix vs. predict
+PASSED test at tolerance 1e-08
+Testing: Marginal predictSE
+PASSED test at tolerance 1e-08
+Testing: comparing Omega
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+Testing: test of formula with explicit d and c
+PASSED test at tolerance 1e-08
+Testing: Krig function and direct formula
+PASSED test at tolerance 1e-08
+Testing: new predict formula and direct formula
+PASSED test at tolerance 1e-08
+Testing: New se _function_ and old Krig _function_
+PASSED test at tolerance 1e-08
+Testing: Benchmark of formula
+PASSED test at tolerance 1e-08
+Testing: Benchmark of formula mKrig coefs
+PASSED test at tolerance 1e-08
+Testing: test function with several locations Krig mKrig functions
+PASSED test at tolerance 1e-08
+> proc.time()
+ user system elapsed
+ 1.118 0.039 1.149
diff --git a/tests/mKrig.test.R b/tests/mKrig.test.R
new file mode 100644
index 0000000..49252fb
--- /dev/null
+++ b/tests/mKrig.test.R
@@ -0,0 +1,287 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+library( fields)
+options( echo=FALSE)
+test.for.zero.flag<- 1
+
+# test data
+data( ozone2)
+x<- ozone2$lon.lat
+y<- ozone2$y[16,]
+
+#first test addToDiagC
+
+I3 = diag(nrow=3)
+twoI3 = I3*2
+.Call("addToDiagC", I3, rep(1.0, 3), as.integer(3))
+test.for.zero(twoI3, I3, tag="addToDiag")
+
+# turning spam on and off
+Krig(x,y, cov.function = "stationary.taper.cov", theta=1.5,
+ cov.args= list( spam.format=FALSE,
+ Taper.args= list( theta=2.0,k=2, dimension=2) )
+) -> out1
+
+Krig(x,y, cov.function = "stationary.taper.cov", lambda=2.0, theta=1.5,
+ cov.args= list( spam.format=TRUE,
+ Taper.args= list( theta=2.0,k=2, dimension=2) )
+) -> out2
+
+temp1<- predict( out1,lambda=2.0)
+temp2<- predict( out2)
+test.for.zero( temp1, temp2, tag="spam vs no spam")
+
+#
+# Omit the NAs
+good<- !is.na( y)
+x<- x[good,]
+y<- y[good]
+
+# now look at mKrig w/o sparse matrix
+mKrig( x,y, cov.function="stationary.cov", theta=10, lambda=.3,
+ chol.args=list( pivot=FALSE))-> look
+
+Krig( x,y, cov.function="stationary.cov", theta=10, lambda=.3) -> look2
+
+test.for.zero( look$d, look2$d, tag="Krig mKrig d coef")
+test.for.zero( look$c, look2$c, tag="Krig mKrig c coef")
+
+
+set.seed(123)
+xnew<- cbind( (runif(20)-.5)*5, (runif(20)-.5)*5)
+temp<- predict( look, xnew)
+temp2<- predict( look2, xnew)
+test.for.zero( temp, temp2, tag="test of predict at new locations")
+
+# test of matrix of obs
+N<- length( y)
+Y<- cbind( runif(N), y,runif(N), y)
+
+# collapse == FALSE means the fixed effect is estimated separately for each column of Y
+lookY<- mKrig( x,Y, cov.function="stationary.cov",
+ theta=10, lambda=.3,collapse=FALSE)
+temp3<- predict( lookY, xnew, collapse=FALSE)[,4]
+
+test.for.zero( temp, temp3, tag="test of matrix Y predicts" )
+
+predictSurface( look)-> temp
+predictSurface( look2)-> temp2
+
+good<- !is.na( temp2$z)
+test.for.zero( temp$z[good], temp2$z[good])
+
+# testing stationary taper covariance
+# and also surface prediction
+
+N<- length( y)
+mKrig( x,y, cov.function="stationary.taper.cov", theta=2,
+ spam.format=FALSE, lambda=.35 )-> look
+
+Krig( x,y, cov.function="stationary.taper.cov", theta=2,
+ spam.format=FALSE, lambda=.35)-> look2
+
+predictSurface( look, nx=50, ny=45)-> temp
+predictSurface( look2, nx=50, ny=45)-> temp2
+
+good<- !is.na( temp2$z)
+test.for.zero( temp$z[good], temp2$z[good], tag="predictSurface with mKrig")
+
+#
+# Use Wendland with sparse off and on.
+Krig( x,y, cov.function="wendland.cov",
+ cov.args=list( k=2, theta=2.8),
+ lambda=.3, spam.format=FALSE)-> look
+
+mKrig( x,y, cov.function="wendland.cov",k=2, theta=2.8,
+ spam.format=FALSE, lambda=.3)-> look2
+
+mKrig( x,y, cov.function="wendland.cov",k=2, theta=2.8,
+ spam.format=TRUE, lambda=.3)-> look3
+
+# final tests for predict.
+set.seed(223)
+xnew<- cbind(runif( N)*.5 + x[,1], runif(N)*.5 + x[,2])
+temp<- predict( look, xnew)
+temp2<- predict( look2, xnew)
+temp3<- predict( look3, xnew)
+test.for.zero( temp, temp2, tag="Wendland/no spam")
+test.for.zero( temp2, temp3, tag="Wendland/spam")
+
+
+### testing coefficients for new data
+mKrig.coef( look2, cbind(y+1,y+2), collapse=FALSE)-> newc
+test.for.zero( look2$c, newc$c[,2], tag="new coef c no spam")
+
+test.for.zero( look2$d,
+ c(newc$d[1,2] -2, newc$d[2:3,2]), tag="new d coef no spam")
+
+mKrig.coef( look3, cbind(y+1,y+2), collapse=FALSE)-> newc
+test.for.zero( look3$c, newc$c[,2], tag="new coef c spam")
+
+test.for.zero( look3$d,
+ c(newc$d[1,2] -2, newc$d[2:3,2]), tag="new d coef spam")
+
+###
+
+
+### bigger sample size
+set.seed( 334)
+N<- 1000
+x<- matrix( runif(2*N),ncol=2)
+y<- rnorm( N)
+nzero <- length( wendland.cov(x,x, k=2,theta=.1)@entries)
+
+
+mKrig( x,y, cov.function="wendland.cov",k=2,
+ theta=.1, lambda=.3)-> look2
+
+
+test.for.zero( look2$non.zero.entires, nzero, tag="nzero in call to mKrig")
+
+######
+### test out passing to chol
+
+data( ozone2)
+y<- ozone2$y[16,]
+good<- !is.na( y)
+y<-y[good]
+x<- ozone2$lon.lat[good,]
+
+# interpolate using defaults (Exponential)
+# stationary covariance
+mKrig( x,y, theta = 1.5, lambda=.2)-> out
+#
+# NOTE this should be identical to
+Krig( x,y, theta=1.5, lambda=.2) -> out2
+temp<- predict( out)
+temp2<- predict( out2)
+
+test.for.zero( temp, temp2, tag="mKrig vs. Krig for ozone2")
+
+# test passing arguments for chol
+
+set.seed( 334)
+N<- 300
+x<- matrix( 2*(runif(2*N)-.5),ncol=2)
+y<- sin( 3*pi*x[,1])*sin( 3.5*pi*x[,2]) + rnorm( N)*.01
+
+
+Krig( x,y, Covariance="Wendland",
+       cov.args= list(k=2, theta=.8, dimension=2),
+ give.warnings=FALSE,
+ lambda=1e2) -> out
+
+mKrig( x,y,
+ cov.function="wendland.cov",k=2, theta=.8,
+ lambda=1e2,
+ chol.args=list( memory=list( nnzR=1e5)),
+)-> out2
+
+temp<- predict( out)
+temp2<- predict( out2)
+
+test.for.zero( temp, temp2, tag="predict Wendland mKrig vs Krig")
+
+
+
+
+# test of fastTps
+nx<- 50
+ny<- 60
+x<- seq( 0,1,,nx)
+y<- seq( 0,1,,ny)
+gl<- list( x=x, y=y)
+xg<- make.surface.grid(gl)
+ztrue<- sin( xg[,1]*pi*3)* cos(xg[,2]*pi*2.5)
+#image.plot(x,y,matrix( ztrue, nx,ny))
+set.seed( 222)
+ind<- sample( 1:(nx*ny), 600)
+xdat<- xg[ind,]
+ydat <- ztrue[ind]
+out<- fastTps(xdat, ydat, theta=.3)
+out.p<-predictSurface( out, grid=gl, extrap=TRUE)
+# perfect agreement at data
+test.for.zero( ydat, c( out.p$z)[ind], tag="fastTps interp1")
+#image.plot(x,y,matrix( ztrue, nx,ny)- out.p$z)
+rmse<- sqrt(mean( (ztrue- c( out.p$z))^2)/ mean( (ztrue)^2))
+test.for.zero( rmse,0,tol=.01, relative=FALSE,tag="fastTps interp2")
+
+
+##### test precomputing distance matrices:
+#
+
+set.seed(1)
+
+# test data
+data( ozone2)
+x<- ozone2$lon.lat
+y<- ozone2$y[16,]
+
+#
+# Omit the NAs
+good<- !is.na( y)
+x<- x[good,]
+y<- y[good]
+compactDistMat = rdist(x, compact=TRUE)
+distMat = rdist(x)
+
+##### test using distance matrix
+print("testing using distance matrix")
+
+mKrig(x,y, cov.function = "stationary.cov", lambda=2.0, theta=1.5) -> out1
+
+mKrig(x,y, cov.args= list(Covariance="Exponential", Distance="rdist", Dist.args=list(compact=TRUE)),
+ lambda=2.0, theta=1.5) -> out2
+
+# NOTE: a compact distance matrix should not be passed directly by the user, for fields compatibility reasons
+mKrig(x,y, cov.args= list(Covariance="Exponential", Dist.args=list(compact=TRUE)),
+ lambda=2.0, theta=1.5, distMat=compactDistMat) -> out3
+
+mKrig(x,y, cov.args= list(Covariance="Exponential"),
+ lambda=2.0, theta=1.5, distMat=distMat) -> out4
+
+temp1<- predict( out1)
+temp2<- predict( out2)
+temp3 = predict( out3)
+temp4 = predict( out4)
+test.for.zero( temp1, temp2, tag="predict: stationary.cov versus Exp.cov")
+test.for.zero( temp2, temp3, tag="predict: no distance matrix versus compact distance matrix")
+test.for.zero( temp2, temp4, tag="predict: no distance matrix versus distance matrix")
+
+##### test SE
+print("testing using predictSE")
+
+temp1 = predictSE(out1)
+temp2 = predictSE(out2)
+temp3 = predictSE(out3)
+temp4 = predictSE(out4)
+
+test.for.zero( temp1, temp2, tag="predictSE: stationary.cov with exponential versus Exp.cov")
+test.for.zero( temp2, temp3, tag="predictSE: no distance matrix versus compact distance matrix")
+test.for.zero( temp2, temp4, tag="predictSE: no distance matrix versus distance matrix")
+
+
+
+
+
+cat("all done with mKrig tests", fill=TRUE)
+options( echo=TRUE)
+
+
+
diff --git a/tests/mKrig.test.Rout.save b/tests/mKrig.test.Rout.save
new file mode 100644
index 0000000..33d9bb4
--- /dev/null
+++ b/tests/mKrig.test.Rout.save
@@ -0,0 +1,114 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options( echo=FALSE)
+NULL
+Testing: addToDiag
+PASSED test at tolerance 1e-08
+Testing: spam vs no spam
+PASSED test at tolerance 1e-08
+Testing: Krig mKrig d coef
+PASSED test at tolerance 1e-08
+Testing: Krig mKrig c coef
+PASSED test at tolerance 1e-08
+Testing: test of predict at new locations
+PASSED test at tolerance 1e-08
+Testing: test of matrix Y predicts
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+Testing: predictSurface with mKrig
+PASSED test at tolerance 1e-08
+Testing: Wendland/no spam
+PASSED test at tolerance 1e-08
+Testing: Wendland/spam
+PASSED test at tolerance 1e-08
+Testing: new coef c no spam
+PASSED test at tolerance 1e-08
+Testing: new d coef no spam
+PASSED test at tolerance 1e-08
+Testing: new coef c spam
+PASSED test at tolerance 1e-08
+Testing: new d coef spam
+PASSED test at tolerance 1e-08
+Testing: nzero in call to mKrig
+PASSED test at tolerance 1e-08
+Testing: mKrig vs. Krig for ozone2
+PASSED test at tolerance 1e-08
+Testing: predict Wendland mKrig vs Krig
+PASSED test at tolerance 1e-08
+Warning message:
+In fastTps(xdat, ydat, theta = 0.3) : fastTps will interpolate observations
+Testing: fastTps interp1
+PASSED test at tolerance 1e-08
+Testing: fastTps interp2
+PASSED test at tolerance 0.01
+[1] "testing using distance matrix"
+Testing: predict: stationary.cov versus Exp.cov
+PASSED test at tolerance 1e-08
+Testing: predict: no distance matrix versus compact distance matrix
+PASSED test at tolerance 1e-08
+Testing: predict: no distance matrix versus distance matrix
+PASSED test at tolerance 1e-08
+[1] "testing using predictSE"
+Testing: predictSE: stationary.cov with exponential versus Exp.cov
+PASSED test at tolerance 1e-08
+Testing: predictSE: no distance matrix versus compact distance matrix
+PASSED test at tolerance 1e-08
+Testing: predictSE: no distance matrix versus distance matrix
+PASSED test at tolerance 1e-08
+all done with mKrig tests
+>
+>
+>
+>
+> proc.time()
+ user system elapsed
+ 2.091 0.107 2.191
diff --git a/tests/mKrigMLETest.R b/tests/mKrigMLETest.R
new file mode 100644
index 0000000..2f1c94e
--- /dev/null
+++ b/tests/mKrigMLETest.R
@@ -0,0 +1,243 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+# Test adapted from fields package, under GPL license
+
+library( fields )
+options( echo=FALSE)
+
+#
+##### generate test data
+#
+
+data( ozone2)
+# x is a two column matrix where each row is a location in lon/lat
+# coordinates
+x<- ozone2$lon.lat
+# y is a vector of ozone measurements on day 16 at these locations.
+y<- ozone2$y[16,]
+#ind<- !is.na( y)
+#x<- x[ind,]
+#y<- y[ind]
+
+################ test that optim results match the model evaluated
+################ at the optimized parameters.
+optim.args = list(method = "BFGS",
+ control = list(fnscale = -1, parscale = c(0.5, 0.5),
+ ndeps = c(0.05,0.05)))
+
+MLEfit0 <- mKrigMLEJoint(x, y, lambda.start=.5,
+ cov.params.start= list(theta=1.2),
+ cov.fun="stationary.cov",
+ optim.args=optim.args,
+ cov.args = list(Covariance = "Matern", smoothness=1.0),
+ na.rm=TRUE,
+ mKrig.args = list( m=1),
+ verbose=FALSE)
+test.for.zero( MLEfit0$summary["lnProfileLike.FULL"], MLEfit0$optimResults$value)
+
+
+obj0<- mKrig( x,y, cov.args = list(Covariance = "Matern",
+ smoothness = 1.0),
+ na.rm=TRUE, m=1,
+ lambda= MLEfit0$pars.MLE[1],
+ theta=MLEfit0$pars.MLE[2])
+test.for.zero( MLEfit0$summary["lnProfileLike.FULL"],
+ obj0$lnProfileLike.FULL)
+
+test.for.zero( MLEfit0$summary["rhoMLE"],obj0$rho.MLE)
+
+par.grid<- list( theta= c(.99, 1.0, 1.01)*MLEfit0$summary["theta"] )
+MLEfit1<- mKrigMLEGrid(x, y,
+ cov.fun = "stationary.cov",
+ cov.args = list(Covariance = "Matern", smoothness = 1.0),
+ par.grid = par.grid,
+ lambda = .5,
+ lambda.profile = TRUE,
+ mKrig.args = list( m=1),
+ na.rm=TRUE,
+ verbose = FALSE)
+
+hold<- (MLEfit1$summary[1,"lnProfileLike.FULL"] < MLEfit1$summary[2,"lnProfileLike.FULL"]) &
+ (MLEfit1$summary[3,"lnProfileLike.FULL"] < MLEfit1$summary[2,"lnProfileLike.FULL"])
+
+test.for.zero(as.numeric(hold), 1, relative=FALSE)
+
+
+
+lambdaGrid<- c(.99, 1.0, 1.01)*MLEfit0$summary["lambda"]
+par.grid<- list( theta= rep(MLEfit0$summary["theta"] ,3 ) )
+MLEfit2 <- mKrigMLEGrid(x, y,
+ cov.fun = "stationary.cov",
+ cov.args = list(Covariance = "Matern", smoothness = 1.0),
+ mKrig.args = list( m=1),
+ par.grid = par.grid,
+ lambda = lambdaGrid,
+ lambda.profile = FALSE,
+ verbose = FALSE)
+hold<- (MLEfit2$summary[1,"lnProfileLike.FULL"] < MLEfit2$summary[2,"lnProfileLike.FULL"]) &
+ (MLEfit2$summary[3,"lnProfileLike.FULL"] < MLEfit2$summary[2,"lnProfileLike.FULL"])
+test.for.zero(as.numeric(hold), 1, relative=FALSE)
+
+MLEfit3<- MLESpatialProcess( x,y,
+ cov.args = list(Covariance = "Matern", smoothness = 1.0),
+ mKrig.args = list( m=1)
+ )
+
+test.for.zero(MLEfit0$summary[1:5],
+ (MLEfit3$MLEJoint$summary[1:5]), tol=2e-3 )
+
+obj<- spatialProcess( x, y, mKrig.args= list(m = 1),
+ theta = MLEfit0$summary[3] )
+
+obj1<- spatialProcess( x, y, mKrig.args= list(m = 1)
+ )
+
+test.for.zero(MLEfit0$summary[1],
+ obj$lnProfileLike.FULL )
+
+test.for.zero(MLEfit0$summary[1],
+ obj1$lnProfileLike.FULL)
+
+
+# testing Krig function
+
+out1<- Krig( x,y, cov.fun="stationary.cov",
+
+ cov.args = list(Covariance = "Matern",
+ smoothness=1.0, theta=.9),
+ na.rm=TRUE,
+ m=2)
+
+genCovMat = function(x, theta, lambda) {
+ distanceMatrix<- rdist(x,x)
+ Sigma<- Matern( distanceMatrix/theta, smoothness=1.0 ) + diag(x=lambda, nrow=nrow(distanceMatrix))
+ return(Sigma)
+}
+
+#generate observation locations
+set.seed( 223)
+n=50
+x = matrix(runif(2*n), nrow=n)
+#generate observations at the locations
+trueTheta = .1
+trueLambda = .1
+Sigma = genCovMat(x, trueTheta, trueLambda)
+
+U = chol(Sigma)
+M<- 1e4
+set.seed( 332)
+y = t(U)%*%matrix( rnorm(n*M), n,M)
+
+optim.args = list(method = "BFGS",
+ control = list(fnscale = -1, parscale = c(0.5, 0.5),
+ ndeps = c(0.05,0.05)))
+
+MLEfitA <- mKrigMLEJoint(x, y, lambda.start=.5,
+ cov.params.start= list(theta=.12),
+ cov.fun="stationary.cov",
+ optim.args=optim.args,
+ cov.args = list(Covariance = "Matern",
+ smoothness=1.0),
+ na.rm=TRUE,
+ mKrig.args = list( m=0),
+ verbose=FALSE)
+test.for.zero( MLEfitA$summary["lambda"],.1, tol=.02)
+test.for.zero( MLEfitA$summary["theta"],.1, tol=.02)
+test.for.zero( MLEfitA$summary["rhoMLE"], 1.0, tol=.002)
+
+### now test REML fitting
+MLEfitB <- mKrigMLEJoint(x, y, lambda.start=.5,
+ cov.params.start= list(theta=.12),
+ cov.fun="stationary.cov",
+ optim.args=optim.args,
+ cov.args = list(Covariance = "Matern",
+ smoothness=1.0),
+ na.rm=TRUE,
+ mKrig.args = list( m=0),
+ REML=TRUE,
+ verbose=FALSE)
+
+test.for.zero( MLEfitB$summary["lambda"],.1, tol=.02)
+test.for.zero( MLEfitB$summary["theta"],.1, tol=.02)
+test.for.zero( MLEfitB$summary["rhoMLE"], 1.0, tol=.002)
+
+
+
+MLEfitC <- mKrigMLEJoint(x, y, lambda.start=.5,
+ cov.params.start= list(theta=.12),
+ cov.fun="stationary.cov",
+ optim.args=optim.args,
+ cov.args = list(Covariance = "Matern",
+ smoothness=1.0),
+ na.rm=TRUE,
+ mKrig.args = list( m=2),
+ REML=FALSE,
+ verbose=FALSE
+ )
+
+test.for.zero( MLEfitC$summary["lambda"], .1, tol=.02)
+test.for.zero( MLEfitC$summary[ "theta"], .1, tol=.02)
+test.for.zero( MLEfitC$summary["rhoMLE"], 1.0, tol=.002)
+
+
+MLEfitA$summary
+MLEfitB$summary
+MLEfitC$summary
+
+
+
+# simple Monte Carlo test
+NS<- 10
+n<-75
+M<- 400
+
+
+set.seed(123)
+x = matrix(runif(2*n), nrow=n)
+trueTheta = .1
+trueLambda = .04
+Sigma = genCovMat(x, trueTheta, trueLambda)
+U = chol(Sigma)
+set.seed( 332)
+hold<- matrix(NA, nrow=NS, ncol=7 )
+for( k in 1:NS){
+cat(k, " ")
+#generate observations at the locations
+
+y = t(U)%*%matrix( rnorm(n*M), n,M)
+
+MLEfitC <- mKrigMLEJoint(x, y, lambda.start=.5,
+ cov.params.start= list(theta=.12),
+ cov.fun="stationary.cov",
+ optim.args=optim.args,
+ cov.args = list(Covariance = "Matern",
+ smoothness=1.0),
+ na.rm=TRUE,
+ mKrig.args = list( m=2),
+ REML=FALSE,
+ verbose=FALSE)
+
+hold[k,]<- MLEfitC$summary
+}
+
+
+cat("all done with mKrigMLEGrid tests", fill=TRUE)
+options( echo=TRUE)
+
+test.for.zero( trueTheta, mean(hold[,3]), tol=2e-3,tag="Monte Carlo theta")
+test.for.zero( trueLambda, mean(hold[,2]), tol=2e-2,tag="Monte Carlo lambda")
+
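The Monte Carlo tests above simulate observations with covariance Sigma by multiplying t(chol(Sigma)) into iid normals. A self-contained base-R sketch of why this works: if Sigma = U'U with U = chol(Sigma) and z ~ N(0, I), then t(U) %*% z has covariance t(U) %*% U = Sigma, so the empirical covariance of many draws approaches Sigma (the matrix and tolerance below are illustrative assumptions):

```r
# Simulate y ~ N(0, Sigma) via the Cholesky factor and check the
# empirical covariance of many draws against Sigma.
set.seed(332)
n <- 4; M <- 2e5
d <- as.matrix(dist(seq(0, 1, length.out = n)))
Sigma <- exp(-d / 0.5) + diag(0.1, n)     # toy covariance plus nugget
U <- chol(Sigma)                          # Sigma = t(U) %*% U
Y <- t(U) %*% matrix(rnorm(n * M), n, M)  # each column is one draw
empCov <- tcrossprod(Y) / M               # Monte Carlo estimate of Sigma
stopifnot(max(abs(empCov - Sigma)) < 0.05)
```

With M this large the entrywise Monte Carlo error is well under the 0.05 tolerance, which mirrors the loose tolerances (2e-3, 2e-2) the tests above use for the estimated theta and lambda.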
diff --git a/tests/mKrigMLETest.Rout.save b/tests/mKrigMLETest.Rout.save
new file mode 100644
index 0000000..d3fc5d4
--- /dev/null
+++ b/tests/mKrigMLETest.Rout.save
@@ -0,0 +1,94 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+> # Test adapted from fields package, under GPL license
+>
+> library( fields )
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options( echo=FALSE)
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 0.002
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 0.02
+PASSED test at tolerance 0.02
+PASSED test at tolerance 0.002
+PASSED test at tolerance 0.02
+PASSED test at tolerance 0.02
+PASSED test at tolerance 0.002
+PASSED test at tolerance 0.02
+PASSED test at tolerance 0.02
+PASSED test at tolerance 0.002
+lnProfileLike.FULL lambda theta sigmaMLE
+ -61.8145417 0.1015556 0.1002546 0.3185669
+ rhoMLE funEval gradEval
+ 0.9993032 8.0000000 6.0000000
+lnProfileREML.FULL lambda theta sigmaMLE
+ -61.8145417 0.1015556 0.1002546 0.3185669
+ rhoMLE funEval gradEval
+ 0.9993032 8.0000000 6.0000000
+lnProfileLike.FULL lambda theta sigmaMLE
+ -61.8143732 0.1015557 0.1002514 0.3185628
+ rhoMLE funEval gradEval
+ 0.9992768 8.0000000 6.0000000
+1 2 3 4 5 6 7 8 9 10 all done with mKrigMLEGrid tests
+>
+> test.for.zero( trueTheta, mean(hold[,3]), tol=2e-3,tag="Monte Carlo theta")
+Testing: Monte Carlo theta
+PASSED test at tolerance 0.002
+> test.for.zero( trueLambda, mean(hold[,2]), tol=2e-2,tag="Monte Carlo lambda")
+Testing: Monte Carlo lambda
+PASSED test at tolerance 0.02
+>
+>
+> proc.time()
+ user system elapsed
+ 17.036 1.064 18.124
diff --git a/tests/mKrigREMLTest.R b/tests/mKrigREMLTest.R
new file mode 100644
index 0000000..089fb35
--- /dev/null
+++ b/tests/mKrigREMLTest.R
@@ -0,0 +1,107 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+library( fields)
+options( echo=FALSE)
+set.seed( 123)
+x<- matrix( runif( 20),10, 2)
+y<- rnorm(10)
+
+
+lambda<- .1
+theta<- .2
+out<- mKrig( x,y, theta= theta, lambda=lambda)
+
+test.for.zero( out$lnDetOmega,
+2*log( prod(diag(chol(out$Omega))))
+)
+
+Mc<- exp( -rdist( x,x)/theta) + lambda* diag( 1,10)
+OmegaTest<- solve(t(out$Tmatrix)%*%solve( Mc)%*% out$Tmatrix)
+
+test.for.zero( OmegaTest, out$Omega,tag= "mKrigOmega")
+test.for.zero( log(det(OmegaTest)), out$lnDetOmega,
+ tag="lnDetOmega")
+test.for.zero( log( det( Mc)), out$lnDetCov, tag="lnDetMc" )
+
+# check that det adjustment really works.
+
+set.seed( 323)
+x<- matrix( runif( 20), 10, 2)
+temp<- matrix( NA, 50,8)
+thetaGrid<- seq( .1,.5, ,50)
+lambdaGrid<- 10**(runif( 50, -2,0))
+Q<- qr.qy( qr( cbind( rep(1,10),x) ), diag( 1,10))
+Q2<- Q[,4:10]
+y<- rnorm(10)
+
+testDet<- function(lambda, obj)
+{
+ D2 <- obj$matrices$D[obj$matrices$D > 0]
+ u2 <- obj$matrices$u[obj$matrices$D > 0]
+ lD <- D2 * lambda
+ N2 <- length(D2)
+ rho.MLE <- (sum((D2 * (u2)^2)/(1 + lD)))/N2
+ lnDetCov <- -sum(log(D2/(1 + lD)))
+# -1 * (-N2/2 - log(2 * pi) * (N2/2) - (N2/2) * log(rho.MLE) -
+# (1/2) * lnDetCov)
+ return( c(lnDetCov, rho.MLE) )
+}
+
+for ( k in 1:50) {
+ out<- mKrig( x,y, theta = thetaGrid[k],
+ lambda = lambdaGrid[k]
+ )
+# turn off warnings for lambda search because all we want are
+# matrix decompositions independent of lambda
+ out2<- Krig( x,y, theta= thetaGrid[k],
+ cov.args=list( Covariance = "Exponential"),
+ give.warnings=FALSE)
+
+ Mc<- exp( -rdist( x,x)/thetaGrid[k] ) + lambdaGrid[k]* diag( 1,10)
+ X<- out$Tmatrix
+ temp[k,]<-c(
+ out$lnDetCov,
+ out$lnDetOmega,
+ log( det( solve(t( Q2)%*%Mc%*%Q2) ) ),
+ log( det(Mc) ),
+ -1*log( det( t(X)%*%solve(Mc)%*%X ) ),
+ testDet( lambdaGrid[k], out2 ),
+ out$rho.MLE
+ )
+}
+
+
+test.for.zero( temp[,2], temp[,5], tag="testing det Omega formula")
+
+resid<- temp[,1] - temp[,2] + temp[,3]
+test.for.zero( mean(resid), resid, relative=FALSE,
+ tag="REML Det shortcut")
+#### testing Krig versus mKrig
+#
+test.for.zero( temp[,3], -temp[,6],
+ tag="Q2 Det and Eigen Det")
+###### testing rho.MLE from mKrig and Krig
+
+test.for.zero( (7/10)*temp[,7], temp[,8],
+                  tag="rho.MLE Krig versus mKrig")
+
+
+#lm.out<-lm( temp[,1]~ temp[,c(2:3)])
+#summary( lm.out)
+
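The "REML Det shortcut" residual above is constant because of the determinant identity det(Q2' M Q2) * det(T' T) = det(M) * det(T' M^{-1} T), where Q2 spans the orthogonal complement of the fixed-effects matrix T. A hypothetical NumPy check of that identity (not fields code), using the same complete-QR construction as the test:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 3
# fixed-effects matrix: intercept plus two coordinate columns, as in cbind(1, x)
T = np.column_stack([np.ones(n), rng.uniform(size=(n, 2))])
Q, _ = np.linalg.qr(T, mode="complete")
Q2 = Q[:, p:]                         # orthonormal basis orthogonal to col(T)

for _ in range(5):
    A = rng.standard_normal((n, n))
    M = A @ A.T + n * np.eye(n)       # random SPD covariance matrix
    lhs = (np.linalg.slogdet(Q2.T @ M @ Q2)[1]
           + np.linalg.slogdet(T.T @ T)[1])
    rhs = (np.linalg.slogdet(M)[1]
           + np.linalg.slogdet(T.T @ np.linalg.solve(M, T))[1])
    assert abs(lhs - rhs) < 1e-8
```

Since T (hence det(T'T)) is fixed while theta and lambda vary, the combination lnDetCov - lnDetOmega - log det(Q2' Mc Q2) is the same constant for every k, which is exactly what the mean(resid) test checks.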
diff --git a/tests/mKrigREMLTest.Rout.save b/tests/mKrigREMLTest.Rout.save
new file mode 100644
index 0000000..4589d8a
--- /dev/null
+++ b/tests/mKrigREMLTest.Rout.save
@@ -0,0 +1,70 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options( echo=FALSE)
+PASSED test at tolerance 1e-08
+Testing: mKrigOmega
+PASSED test at tolerance 1e-08
+Testing: lnDetOmega
+PASSED test at tolerance 1e-08
+Testing: lnDetMc
+PASSED test at tolerance 1e-08
+Testing: testing det Omega formula
+PASSED test at tolerance 1e-08
+Testing: REML Det shortcut
+PASSED test at tolerance 1e-08
+Testing: Q2 Det and Eigen Det
+PASSED test at tolerance 1e-08
+Testing: rho.MLE Krig versus mKrig
+PASSED test at tolerance 1e-08
+> proc.time()
+ user system elapsed
+ 2.699 0.054 2.753
diff --git a/tests/misc.test.R b/tests/misc.test.R
new file mode 100644
index 0000000..95d0b2f
--- /dev/null
+++ b/tests/misc.test.R
@@ -0,0 +1,45 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+library( fields)
+options( echo=FALSE)
+set.seed( 234)
+test.for.zero.flag<-1
+
+y<- runif( 30)
+lev<- sort(sample( 1:5,30, replace=TRUE))
+w<- runif( 30)*.1+1
+y<- as.matrix(y)
+
+# compute by loop
+hold<- rep( NA, 5)
+for( k in 1:5){
+ ind<- lev==k
+ hold[k]<- sum( y[ind,]*w[ind])/ sum( w[ind])}
+
+look<- fast.1way( lev, y, w)
+test.for.zero( look$means, hold, tag="fast.1way means")
+
+# now vectorized case
+
+ytemp<- cbind( y, y-10, y+10)
+look2<- fast.1way( lev, ytemp, w)
+test.for.zero( look2$means[,2], hold-10, tag="fast.1way vectorized means")
+
+
+cat("All done with testing misc functions", fill=TRUE)
+options(echo=TRUE)
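The fast.1way check above compares a per-level loop against a vectorized computation of weighted group means. A hypothetical NumPy analogue of the same check (np.bincount with weights plays the role of the vectorized path; names here are illustrative, not fields code):

```python
import numpy as np

rng = np.random.default_rng(234)
y = rng.uniform(size=30)
lev = np.repeat(np.arange(1, 6), 6)          # group labels 1..5, 6 obs each
w = rng.uniform(size=30) * 0.1 + 1.0         # positive weights

# loop version, mirroring the R loop over levels
loop_means = np.array([np.sum(y[lev == k] * w[lev == k]) / np.sum(w[lev == k])
                       for k in range(1, 6)])

# vectorized version: weighted sums per group via bincount
num = np.bincount(lev - 1, weights=w * y, minlength=5)
den = np.bincount(lev - 1, weights=w, minlength=5)
fast_means = num / den

assert np.allclose(loop_means, fast_means)
```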
diff --git a/tests/misc.test.Rout.save b/tests/misc.test.Rout.save
new file mode 100644
index 0000000..06459c1
--- /dev/null
+++ b/tests/misc.test.Rout.save
@@ -0,0 +1,60 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options( echo=FALSE)
+Testing: fast.1way means
+PASSED test at tolerance 1e-08
+Testing: fast.1way vectorized means
+PASSED test at tolerance 1e-08
+All done with testing misc functions
+>
+> proc.time()
+ user system elapsed
+ 0.507 0.035 0.533
diff --git a/tests/spam.test.R b/tests/spam.test.R
new file mode 100644
index 0000000..217376d
--- /dev/null
+++ b/tests/spam.test.R
@@ -0,0 +1,160 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+# test of rdist.near
+
+
+library( fields)
+options(echo=FALSE)
+test.for.zero.flag<- 1
+
+set.seed(123)
+x1<- matrix( runif(2*20), ncol=2)
+x2<- matrix( runif(2*10), ncol=2)
+
+fields.rdist.near( x1,x2, delta=.75)-> look
+temp<- matrix( NA, nrow(x1),nrow(x2))
+temp[ look$ind] <- look$ra
+temp2<- rdist( x1, x2)
+temp2[ temp2> .75] <- NA
+temp[ is.na( temp)]<- 0
+temp2[ is.na( temp2)]<- 0
+
+test.for.zero( temp, temp2)
+
+
+# test of constructing covariance matrix
+# and also versions of Wendland function
+# default taper is wendland k=2.
+DD<- rdist( x1,x2)
+temp<- Wendland2.2(DD, theta=.8)
+temp2<- Wendland( DD, theta=.8, dimension=2, k=2)
+
+test.for.zero( temp, temp2)
+
+
+
+
+stationary.taper.cov( x1,x2, Taper="Wendland2.2",
+ Taper.args= list( theta=.8), spam.format=FALSE )-> look
+temp0<- look
+
+stationary.taper.cov( x1,x2, Taper="Wendland2.2",
+ Taper.args= list( theta=.8), spam.format=TRUE )-> look
+temp1<- spam2full( look)
+
+test.for.zero( temp1, temp0)
+
+stationary.taper.cov( x1,x2, Taper="Wendland",
+ Taper.args= list( theta=.8, k=2, dimension=2),
+ spam.format=TRUE )-> look
+temp1b<- spam2full( look)
+
+
+temp2<- Wendland2.2(DD, theta=.8) * Exponential(DD)
+temp3<- wendland.cov(x1,x2, k=2, theta=.8) * Exponential(DD)
+temp4<- Wendland(DD, k=2, dimension=2, theta=.8)* Exponential(DD)
+
+
+test.for.zero( temp1, temp0, rel=FALSE)
+test.for.zero( temp1b, temp0, rel=FALSE)
+test.for.zero( temp2, temp0, rel=FALSE)
+
+test.for.zero( temp2, temp3,rel=FALSE)
+test.for.zero( temp2, temp4,rel=FALSE)
+
+
+
+set.seed( 256)
+rv<- runif( nrow(x2))
+
+# test of multiply
+stationary.taper.cov( x1, x2, C= rv)-> look
+temp2<-stationary.taper.cov( x1,x2)
+
+spam2full(temp2)%*%(rv)-> look2
+test.for.zero( look, look2)
+
+#
+
+set.seed( 123)
+temp<- matrix( 1:48, ncol=6, nrow=8)
+temp[ sample( 1:48, 20)] <- 0
+
+as.spam( temp)-> temp2
+test.for.zero( spam2full(temp2), temp )
+
+spam2spind( temp2)-> temp3
+
+test.for.zero( spind2full( temp3), temp)
+
+test.for.zero( spind2spam( temp3),temp2)
+
+# test that ordering works
+MM<- nrow( temp3$ind)
+ix<- sample( 1:MM,MM)
+# shuffle temp3
+temp3$ind<- temp3$ind[ix,]
+temp3$ra<- temp3$ra[ix]
+
+test.for.zero( spind2spam( temp3),temp2)
+
+
+
+# temp<- temp[1:4, 1:5] for help file
+#
+
+set.seed( 234)
+
+CC<- matrix( rnorm( 64), 8,8)
+A<- ( CC)%*% t(CC)
+as.spam( A)-> As
+
+test.for.zero( solve( As), solve( A))
+
+set.seed( 233)
+CC<- diag( 1, 8)
+CC[4,1:8] <- rnorm(8)
+CC[7,1:8] <- rnorm(8)
+A<- ( CC)%*% t(CC)
+as.spam( A)-> As
+
+test.for.zero( solve( As), solve( A))
+
+
+data( ozone2)
+x<- ozone2$lon.lat
+y<- ozone2$y[16,]
+
+
+Krig(x,y, cov.function = "stationary.taper.cov", theta=1.5,
+ give.warnings=FALSE,
+ cov.args= list( spam.format=FALSE,
+ Taper.args= list( dimension=2, theta=2.0,k=3) ) ) -> out1
+
+Krig(x,y, cov.function = "stationary.taper.cov", lambda=2.0, theta=1.5,
+ cov.args= list( spam.format=TRUE,
+ Taper.args= list( theta=2.0,k=3, dimension=2) )
+ ) -> out2
+
+temp1<- predict( out1,lambda=2.0)
+temp2<- predict( out2)
+test.for.zero( temp1, temp2)
+
+cat( "All done with SPAM tests", fill=TRUE)
+options(echo=TRUE)
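The spind round-trip and ordering tests above boil down to: a sparse matrix stored as (index, value) triplets must rebuild the original dense matrix, regardless of triplet order. A hypothetical pure-NumPy sketch of that invariant (not the spam/spind code itself):

```python
import numpy as np

rng = np.random.default_rng(123)
temp = rng.standard_normal((8, 6))
temp[rng.uniform(size=(8, 6)) < 0.4] = 0.0    # make it sparse

rows, cols = np.nonzero(temp)                 # triplet ("spind"-style) form
vals = temp[rows, cols]

# rebuild the dense matrix from the triplets
back = np.zeros_like(temp)
back[rows, cols] = vals
assert np.array_equal(back, temp)

# a shuffled triplet list must rebuild the same matrix (the ordering test)
perm = rng.permutation(len(vals))
back2 = np.zeros_like(temp)
back2[rows[perm], cols[perm]] = vals[perm]
assert np.array_equal(back2, temp)
```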
diff --git a/tests/spam.test.Rout.save b/tests/spam.test.Rout.save
new file mode 100644
index 0000000..4cd63a6
--- /dev/null
+++ b/tests/spam.test.Rout.save
@@ -0,0 +1,76 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> # test of rdist.near
+>
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options(echo=FALSE)
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 1e-08
+All done with SPAM tests
+>
+> proc.time()
+ user system elapsed
+ 1.336 0.047 1.374
diff --git a/tests/sreg.test.R b/tests/sreg.test.R
new file mode 100644
index 0000000..69fbc63
--- /dev/null
+++ b/tests/sreg.test.R
@@ -0,0 +1,81 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+# test of sreg and related functions
+
+library( fields)
+options(echo=FALSE)
+ test.for.zero.flag<- 1
+
+set.seed(123)
+
+# Tps has been tested from scratch using basic linear algebra
+# so test sreg against this
+
+x<- rat.diet$t
+y<- rat.diet$trt
+
+sreg( x,y, lambda= 10)-> out
+Tps( x,y, scale="unscaled", lambda=10*length(y))-> out2
+
+test.for.zero( out$fitted.values, out2$fitted.values, tag="predict at lambda sreg/Tps")
+
+
+#### GCV test
+
+sreg( x,y, tol=1e-12)-> out
+gcv.sreg( out, tol=1e-12)$lambda.est -> look0
+
+test.for.zero( out$lambda.est[1,2], look0[1,2], tol=5e-4)
+
+Tps( x,y)-> out2
+gcv.Krig( out2, tol=1e-12)$lambda.est[1,2]-> look2
+gcv.sreg( out, tol=1e-12)$lambda.est[1,2] -> look
+
+test.for.zero( look, look2, tol=2.1e-6, tag="GCV sreg/Tps")
+
+#### replications
+set.seed( 123)
+x<- rep(rat.diet$t,3)
+y<- rep( rat.diet$trt,3) + rnorm(39*3)*5
+
+sreg( x,y)-> out
+gcv.sreg( out, tol=1e-12)$lambda.est -> look
+
+Tps( x,y, scale="unscaled")-> out2
+gcv.Krig( out2, tol=1e-12)$lambda.est-> look2
+look2[,1]<- look2[,1]/length( out$xM)
+
+test.for.zero( look[1:3,3], look2[1:3,3],
+ tag="GCV sreg/Tps reps case",tol=1e-06)
+
+test.for.zero( look[2,3], look2[2,3], tol=1e-6,
+ tag="GCV sreg/Tps reps case")
+
+
+cat( "All done with sreg tests", fill=TRUE)
+options(echo=TRUE)
+
+
+
+
+
+
+
+
+
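The "Maximum iterations reached" warnings in the saved output for this test come from golden.section.search, the 1-D minimizer behind the GCV lambda search. A minimal self-contained golden-section minimizer, sketched in Python under the usual unimodality assumption (illustrative only, not the fields implementation):

```python
import math

def golden_section_search(f, ax, cx, tol=1e-8, max_iter=200):
    """Minimize a unimodal f on [ax, cx]; returns (xmin, f(xmin))."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0        # inverse golden ratio ~0.618
    a, c = ax, cx
    b = c - phi * (c - a)                      # interior points, b < d
    d = a + phi * (c - a)
    for _ in range(max_iter):
        if c - a < tol:
            break
        if f(b) < f(d):                        # minimum bracketed in [a, d]
            c, d = d, b
            b = c - phi * (c - a)
        else:                                  # minimum bracketed in [b, c]
            a, b = b, d
            d = a + phi * (c - a)
    x = 0.5 * (a + c)
    return x, f(x)

xmin, fmin = golden_section_search(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
assert abs(xmin - 2.0) < 1e-6
```

The interval shrinks by the factor 0.618 per iteration, so the warnings above just mean the bracket had not shrunk below tol within the iteration cap; the returned lambda is still accurate to roughly the final bracket width.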
diff --git a/tests/sreg.test.Rout.save b/tests/sreg.test.Rout.save
new file mode 100644
index 0000000..fe0b6c0
--- /dev/null
+++ b/tests/sreg.test.Rout.save
@@ -0,0 +1,83 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> # test of sreg and related functions
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options(echo=FALSE)
+Testing: predict at lambda sreg/Tps
+PASSED test at tolerance 1e-08
+PASSED test at tolerance 5e-04
+Testing: GCV sreg/Tps
+PASSED test at tolerance 2.1e-06
+Warning message:
+In golden.section.search(ax = starts[1], bx = starts[2], cx = starts[3], :
+ Maximum iterations reached
+Warning message:
+In golden.section.search(ax = starts[1], bx = starts[2], cx = starts[3], :
+ Maximum iterations reached
+Testing: GCV sreg/Tps reps case
+PASSED test at tolerance 1e-06
+Testing: GCV sreg/Tps reps case
+PASSED test at tolerance 1e-06
+All done with sreg tests
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+> proc.time()
+ user system elapsed
+ 1.492 0.051 1.535
diff --git a/tests/vgram.test.R b/tests/vgram.test.R
new file mode 100644
index 0000000..966e3b0
--- /dev/null
+++ b/tests/vgram.test.R
@@ -0,0 +1,99 @@
+ # fields is a package for analysis of spatial data written for
+ # the R software environment .
+ # Copyright (C) 2017
+ # University Corporation for Atmospheric Research (UCAR)
+ # Contact: Douglas Nychka, nychka at ucar.edu,
+ # National Center for Atmospheric Research,
+ # PO Box 3000, Boulder, CO 80307-3000
+ #
+ # This program is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or
+ # (at your option) any later version.
+ # This program is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+
+
+# test of vgram
+
+library( fields)
+options(echo=FALSE)
+
+data( ozone2)
+
+y<- ozone2$y[16,]
+x<- ozone2$lon.lat
+
+vgram( x, y, lon.lat=TRUE)-> out
+
+# compute "by hand"
+
+outer( y, y ,"-")-> hold
+hold<- .5*hold^2
+rdist.earth( x,x)-> hold2
+col( hold2)> row( hold2)-> upper
+
+hold<- hold[upper]
+hold2<- hold2[upper]
+ order( hold2)-> od
+hold2<- hold2[od]
+hold<- hold[od]
+ind<- is.na(hold)
+hold<- hold[!ind]
+hold2<- hold2[!ind]
+
+test.for.zero( hold, out$vgram, tag="vgram single time")
+
+
+# multiple times including NAs at some times
+
+y<- t(ozone2$y[16:18,])
+x<- ozone2$lon.lat[,]
+
+out<- vgram( x, y, lon.lat=TRUE)
+
+
+N<- nrow( y)
+
+hold<- cbind(c(outer( y[,1], y[,1],"-")),
+ c(outer( y[,2], y[,2],"-") ),
+ c(outer(y[,3], y[,3],"-")) )
+hold<- .5*hold^2
+hold<- rowMeans( hold, na.rm=TRUE)
+hold<- matrix( hold, N,N)
+
+rdist.earth( x,x)-> hold2
+
+col( hold2)> row( hold2)-> upper
+hold<- hold[upper]
+hold2<- hold2[upper]
+
+order( hold2)-> od
+hold2<- hold2[od]
+hold<- hold[od]
+
+ind<- is.na(hold)
+hold<- hold[!ind]
+hold2<- hold2[!ind]
+
+test.for.zero( hold, out$vgram, tag="vgram more than one time point")
+
+# test covariogram versus correlogram
+y<- ozone2$y[16,]
+x<- ozone2$lon.lat
+
+sigma2 = var(y, na.rm=TRUE)
+lookCov = vgram(x, y, lon.lat=TRUE, type="covariogram")
+lookCor = vgram(x, y, lon.lat=TRUE, type="correlogram")
+
+test.for.zero(lookCov$vgram*(1/sigma2), lookCor$vgram, tag="correlogram versus covariogram")
+
+# test cross-covariogram versus cross-correlogram
+
+sigma2 = var(y, na.rm=TRUE)
+lookCov = crossCoVGram(x, x, y, y, lon.lat=TRUE, type="cross-covariogram")
+lookCor = crossCoVGram(x, x, y, y, lon.lat=TRUE, type="cross-correlogram")
+
+test.for.zero(lookCov$vgram*(1/sigma2), lookCor$vgram, tag="cross-correlogram versus cross-covariogram")
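The "by hand" variogram computation above (outer differences, keep the upper triangle so each pair appears once, order by distance, drop pairs with missing values) translates directly to NumPy. A hypothetical sketch with made-up locations, using Euclidean rather than great-circle distance:

```python
import numpy as np

rng = np.random.default_rng(16)
n = 25
x = rng.uniform(size=(n, 2))                  # locations
y = rng.standard_normal(n)
y[:3] = np.nan                                # mimic missing stations

diff = np.subtract.outer(y, y)                # all pairwise differences
cloud = 0.5 * diff ** 2                       # semivariogram cloud
dist = np.hypot(np.subtract.outer(x[:, 0], x[:, 0]),
                np.subtract.outer(x[:, 1], x[:, 1]))

upper = np.triu_indices(n, k=1)               # each pair once, no diagonal
cloud, dist = cloud[upper], dist[upper]

od = np.argsort(dist)                         # order by separation distance
cloud, dist = cloud[od], dist[od]

keep = ~np.isnan(cloud)                       # drop pairs with a missing y
cloud, dist = cloud[keep], dist[keep]

assert cloud.shape == dist.shape
assert np.all(np.diff(dist) >= 0)
```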
diff --git a/tests/vgram.test.Rout.save b/tests/vgram.test.Rout.save
new file mode 100644
index 0000000..84728d9
--- /dev/null
+++ b/tests/vgram.test.Rout.save
@@ -0,0 +1,65 @@
+
+R version 3.4.0 (2017-04-21) -- "You Stupid Darkness"
+Copyright (C) 2017 The R Foundation for Statistical Computing
+Platform: x86_64-apple-darwin15.6.0 (64-bit)
+
+R is free software and comes with ABSOLUTELY NO WARRANTY.
+You are welcome to redistribute it under certain conditions.
+Type 'license()' or 'licence()' for distribution details.
+
+R is a collaborative project with many contributors.
+Type 'contributors()' for more information and
+'citation()' on how to cite R or R packages in publications.
+
+Type 'demo()' for some demos, 'help()' for on-line help, or
+'help.start()' for an HTML browser interface to help.
+Type 'q()' to quit R.
+
+> # fields is a package for analysis of spatial data written for
+> # the R software environment .
+> # Copyright (C) 2017
+> # University Corporation for Atmospheric Research (UCAR)
+> # Contact: Douglas Nychka, nychka at ucar.edu,
+> # National Center for Atmospheric Research,
+> # PO Box 3000, Boulder, CO 80307-3000
+> #
+> # This program is free software; you can redistribute it and/or modify
+> # it under the terms of the GNU General Public License as published by
+> # the Free Software Foundation; either version 2 of the License, or
+> # (at your option) any later version.
+> # This program is distributed in the hope that it will be useful,
+> # but WITHOUT ANY WARRANTY; without even the implied warranty of
+> # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+> # GNU General Public License for more details.
+>
+>
+> # test of vgram
+>
+> library( fields)
+Loading required package: spam
+Loading required package: grid
+Spam version 1.4-0 (2016-08-29) is loaded.
+Type 'help( Spam)' or 'demo( spam)' for a short introduction
+and overview of this package.
+Help for individual functions is also obtained by adding the
+suffix '.spam' to the function name, e.g. 'help( chol.spam)'.
+
+Attaching package: 'spam'
+
+The following objects are masked from 'package:base':
+
+ backsolve, forwardsolve
+
+Loading required package: maps
+> options(echo=FALSE)
+Testing: vgram single time
+PASSED test at tolerance 1e-08
+Testing: vgram more than one time point
+PASSED test at tolerance 1e-08
+Testing: correlogram versus covariogram
+PASSED test at tolerance 1e-08
+Testing: cross-correlogram versus cross-covariogram
+PASSED test at tolerance 1e-08
+> proc.time()
+ user system elapsed
+ 0.737 0.044 0.771
--
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-med/r-cran-fields.git