[Git][debian-gis-team/pdal][upstream] New upstream version 1.8~rc2+ds
Bas Couwenberg
gitlab at salsa.debian.org
Fri Nov 2 06:16:58 GMT 2018
Bas Couwenberg pushed to branch upstream at Debian GIS Project / pdal
Commits:
8db461d4 by Bas Couwenberg at 2018-11-01T21:18:42Z
New upstream version 1.8~rc2+ds
- - - - -
11 changed files:
- ChangeLog
- HOWTORELEASE.txt
- pdal/PointTable.hpp
- pdal/Streamable.cpp
- pdal/util/FileUtils.cpp
- plugins/greyhound/io/bounds.cpp
- plugins/greyhound/io/bounds.hpp
- test/unit/FileUtilsTest.cpp
- vendor/arbiter/arbiter.cpp
- − vendor/eigen/Eigen/src/SparseCholesky/SimplicialCholesky.h
- − vendor/eigen/Eigen/src/SparseCholesky/SimplicialCholesky_impl.h
Changes:
=====================================
ChangeLog
=====================================
@@ -1,3 +1,27 @@
+2018-11-01
+ * Andrew Bell <andrew.bell.ia at gmail.com> Only check for tildes as the first character in a filename. (#2264) (19:46:04)
+ * Andrew Bell <andrew.bell.ia at gmail.com> Streamable skips. (#2224) (16:06:33)
+
+2018-10-31
+ * Andrew Bell <andrew.bell.ia at gmail.com> * Any code coming from Entwine is now BSD because Hobu is relicensing it * Two LGPLv2 files from Eigen are removed. We are not using them and we want to keep clean (#2265) (20:02:54)
+ * Andrew Bell <andrew.bell.ia at gmail.com> Fix warning in gcc-8. (19:22:42)
+
+2018-10-29
+ * Andrew Bell <andrew.bell.ia at gmail.com> Update EptReader for EPT formatting changes before EPT release. (#2252) (18:04:34)
+ * Andrew Bell <andrew.bell.ia at gmail.com> Fixes for gcc8. (17:57:32)
+ * Andrew Bell <andrew.bell.ia at gmail.com> Remove private header from AssignFilter.hpp (#2249) (17:57:21)
+ * Andrew Bell <andrew.bell.ia at gmail.com> Don't reference an out-of-scope PointTable. (#2255) (14:36:18)
+ * Andrew Bell <andrew.bell.ia at gmail.com> Clairfy description of reflectance_as_intensity. (#2251) (14:34:12)
+ * Andrew Bell <andrew.bell.ia at gmail.com> Don't use a destructed value in GDALReader::inspect. (#2247) (14:33:45)
+ * Andrew Bell <andrew.bell.ia at gmail.com> add 'memorycopy' option to readers.gdal (#2190) (14:30:35)
+
+2018-10-25
+ * Andrew Bell <andrew.bell.ia at gmail.com> Small updates to package script. (15:57:36)
+ * Andrew Bell <andrew.bell.ia at gmail.com> Fix install issues. (14:56:14)
+ * Andrew Bell <andrew.bell.ia at gmail.com> Treat OCI like everything else. (01:25:05)
+ * Andrew Bell <andrew.bell.ia at gmail.com> Don't build OCI by default. (01:15:55)
+ * Andrew Bell <andrew.bell.ia at gmail.com> Packaging stuff. (01:02:03)
+
2018-10-24
* Andrew Bell <andrew.bell.ia at gmail.com> Remove references to hexbin as a plugin. (#2245) (18:10:36)
* Andrew Bell <andrew.bell.ia at gmail.com> Super-minor clean-ups. (16:34:48)
=====================================
HOWTORELEASE.txt
=====================================
@@ -1,176 +1,1033 @@
+================================================================================
+1.8.0
+================================================================================
+
+Important Issue
+===============
+
+- Those using PDAL to compress to LAZ should be aware that we have
+ found an issue with LASzip that may cause PDAL to create compressed
+ data for point types 6 and greater that can’t be fully read. See
+ https://github.com/LASzip/LASzip/issues/50 for more information or to
+ see if the issue has been resolved in LASzip.
+
+Changes of Note
+===============
+
+- PointTableRef is now publicly accessible from PointView (#1926)
+- Minimum CMake version is now 3.5
+- ``filters.hexbin`` is now a built-in stage, rather than a plugin.
+ (#2001)
+- Removed support for ``ght`` compression in ``writers.pgpointcloud``.
+ (#2148)
+- On OSX, plugins are now installed with ID of ``@rpath`` rather than
+ ``@loader_path/../lib``
+- The API for ``StreamPointTable::StreamPointTable()`` now requires the
+ capacity of the table to be passed as an argument.
+
+Enhancements
+============
+
+- Added ``denoise`` and ``reset`` options to ``pdal ground``. (#1579)
+- ``readers.gdal`` now supports stream mode and provides the ``header``
+ option to map dimensions. It also supports fetching bounds without
+ reading the entire file. (#1819)
+- ``readers.mbio`` added ``datatype`` option to support reading
+ sidescan data. (#1852)
+- ``filters.stats`` was computing expensive kurtosis and skewness
+ statistics by default. These statistics are now available with the
+ ``advanced`` option. (#1878)
+- Added backtrace support for Alpine Linux-based Docker containers.
+ (#1904)
+- Added a ``condition`` option for ``filters.assign`` to limit
+ assignment. (#1956)
+- Add access to artifact manager keys. (#2026)
+- Added support for LAZ compression in ``writers.pgpointcloud`` (#2050)
+- Replaced ``last`` option with ``returns`` to support more flexible
+ segmentation in ``filters.smrf`` and ``filters.pmf``. (#2053)
+- ``writers.text`` now supports stream mode. (#2064)
+- Added ``pdal tile`` kernel with streaming support to conveniently
+ tile data. (#2065)
+- A KD-tree used in one filter will now be reused in subsequent filters
+ when possible. (#2123)
+- ``writers.ply`` now has a ``precision`` option to specify output
+ precision. (#2144)
+- ``filters.smrf`` and ``filters.pmf`` support complete range syntax
+ for the ``ignore`` option. (#2157)
+- ``filters.hexbin`` now supports stream mode. (#2170)
+- ``readers.numpy`` now has the ``order`` option, which replaces the
+ previous ``x``, ``y`` and ``z`` options. It also supports structured
+ numpy arrays and maps values to the X, Y and Z dimensions
+ accordingly.
+- All readers now support setting a spatial reference to override any
+ in the data with the ``spatialreference`` option.
+- Add support for unicode filenames in pipelines on Windows platforms.
+- Added NumpyReader::setArray() to support direct setting of a numpy
+ array into ``readers.numpy``.
+- Added StreamPointTable::setNumPoints() and support in
+ Streamable::execute() allowing custom point tables to know the number
+ of points in each pass through the point table.
+- Added SpatialReference::isProjected() to allow callers to determine
+ if a spatial reference specifies a projection. Also added
+ SpatialReference::identifyHorizontalEPSG() and
+ SpatialReference::identifyVerticalEPSG() to return an EPSG code from
+ a spatial reference if possible.
+- Added support for reading BPF files stored remotely.
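
Many of the enhancements above surface as stage options in a pipeline definition. As an illustrative sketch only (the filenames and option values below are invented; the ``spatialreference``, ``assignment`` and ``condition`` option names come from the notes above), a pipeline using the SRS override and the new ``filters.assign`` condition might look like:

```python
import json

# Hypothetical pipeline sketch: filenames and values are placeholders.
pipeline = {
    "pipeline": [
        {
            "type": "readers.las",
            "filename": "input.las",
            # override any SRS stored in the file (see notes above)
            "spatialreference": "EPSG:26910",
        },
        {
            "type": "filters.assign",
            # apply the assignment only where the condition matches
            "assignment": "Classification[:]=2",
            "condition": "ReturnNumber[1:1]",
        },
        {"type": "writers.las", "filename": "output.las"},
    ]
}
print(json.dumps(pipeline, indent=2))
```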
+
+New stages
+==========
+
+- ``readers.rdb`` - Support reading RIEGL RDB data.
+- ``readers.i3s`` - Support reading of web service-style Esri I3S point
+ clouds.
+- ``readers.slpk`` - Support reading of file-based I3S-style point
+ clouds.
+- ``writers.fbx`` - Experimental Unity engine (SDK required) support.
+ (#2127)
+- ``filters.nndistance`` - Replaces ``filters.kdistance`` and adds
+ average distance support. (#2071)
+- ``filters.dem`` - Filter data based on bounds relative to a raster
+ value. (#2090)
+- ``filters.delaunay`` - Create a Delaunay triangulation of a point
+ cloud. (#1855)
+- ``filters.info`` - Generate metadata about an input point set. Used
+ by ``pdal info``.
+
+Deprecated stages
+=================
+
+- ``filters.kdistance`` - Replaced by ``filters.nndistance``.
+
+Bug fixes
+=========
+
+- Fixed an incorrect error message suggesting there were multiple SRSs
+ in some cases when reading multiple inputs. (#2009)
+- Fixed a problem in ``filters.reprojection`` in stream mode that would
+ improperly reproject when there were multiple input sources with
+ differing SRSs. (#2058)
+- Fixed a problem in stream mode where a stage with no SRS would
+ override the active SRS during processing. (#2069)
+- Fixed a problem in ``writers.gdal`` where output would be aggregated
+ if multiple inputs were provided. (#2074)
+- The ``count`` option was not respected in stream mode. It now
+ properly limits the number of points read. (#2086)
+- Fixed an off-by-one error that could create improper values in
+ ``writers.gdal``. Most data differences were small and not usually
+ problematic. (#2095)
+- Multiple option values can be specified on the command line by
+ repeating the option assignment. (#2114)
+- Added a missing initialization in ``filters.returns`` that could
+ cause more point views to be returned than requested. (#2115)
+- Emit an error if the ``count`` option isn’t set for ``readers.faux``.
+ (#2128)
+- PipelineManager::getStage() now returns a proper leaf node. (#2149)
+- Fixed logic for ``filters.crop`` in streaming mode with multiple crop
+ areas that could return too few points. (#2198)
+- Added the ``minimal`` option for ``readers.rxp`` that was documented
+ but not fully implemented. (#2225)
+- Fixed an error in failing to read all points in ``readers.rxp``
+ exposed with a newer SDK. (#2226)
+- Fixed an error in fetching floating point data from a PointContainer
+ when the value was NaN. (#2239)
+
+================================================================================
+1.7.0
+================================================================================
+
+Changes of Note
+===============
+
+- ``filters.ferry`` now creates output dimensions with the same type as
+ the input dimension. It also takes an arrow ‘=>’ in addition to ‘=’
+ in the ``--dimension`` specification.
+- ``filters.hexbin`` now falls back to slow boundary creation if no
+ bounds information exists to do fast boundary creation.
+- Dimension names can now contain the forward slash (‘/’) character.
+- ``readers.gdal`` and ``filters.colorization`` now attempt to create
+ dimensions with the type of the associated raster.
+- The Python PDAL extension code was removed from the PDAL source tree
+ to its own `repository <https://github.com/PDAL/python>`__.
+- The Java PDAL interface was removed from the PDAL source tree to its
+ own `repository <https://github.com/PDAL/java>`__.
+- ``pdal pipeline`` and ``pdal translate`` now use stream mode if the
+ operation being performed is streamable. An option ``--nostream`` has
+ been added to both commands to prevent the use of stream mode. The
+ ``--stream`` option of ``pdal pipeline`` is now obsolete and ignored.
+- A new interface has been provided for the creation of plugins
+ requiring less boilerplate code. There has been no API change.
+- Stages and pipelines can now be tested to check whether they are
+ streamable.
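
As a sketch of the arrow syntax described above (the filenames and dimension name are invented, and the pipeline option is assumed to be ``dimensions``), a ``filters.ferry`` stage might be written:

```python
import json

# Hypothetical example: copy X into a new dimension using the '=>' arrow.
pipeline = {
    "pipeline": [
        "input.las",
        {"type": "filters.ferry", "dimensions": "X => StashedX"},
        "output.las",
    ]
}
print(json.dumps(pipeline))
```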
+
+Enhancements
+============
+
+- Added options ``--timegap`` and ``--speedmin`` to ``readers.mbio`` to
+ allow configuration of which points should be read.
+- Added support for compression schemes (xz, lzma, zstd) and created a
+ standardized interface (#1722).
+- ``writers.bpf`` now supports the option ``auto`` for the ``coord_id``
+ option to allow the UTM zone to be set from the spatial reference if
+ possible (#1723).
+- Added the ability read stage-specific options from a file with the
+ ``--option_file`` option (#1641).
+- Replace the GDAL point-in-polygon with a much faster implementation
+ in ``filters.crop``.
+- Add a ``--reverse`` option to ``filters.mortonorder`` to provide a
+ point ordering for good dispersal.
+- ``readers.bpf`` now supports the TCR (ECEF - earth centered, earth
+ fixed) coordinate system.
+- Added option ``--use_eb_vlr`` to allow ``readers.las`` to interpret
+ an extra bytes VLR as if the file were version 1.4 even if it’s using
+ an earlier LAS version.
+- ``readers.text`` added options ``--header`` and ``--skip`` to provide
+ an alternative header line and to allow skipping lines before reading
+ the header, respectively.
+- ``writers.text`` now supports the ability to specify individual
+ dimension precision using a colon (‘:’) and integer following the
+ dimension name in the ``--order`` option.
+- ``readers.numpy`` adds support for reading from Numpy (.npy) save
+ files.
+- ``pdal info`` now provides the ``--enumerate`` option. See the
+ documentation for
+ `filters.stats <https://pdal.io/stages/filters.stats.html>`__ for
+ details.
+- Added a general option ``--logtiming`` to cause log output to contain
+ the elapsed time from the start of the program. (#1882)
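
The per-dimension precision syntax for ``writers.text`` can be sketched as follows (filename and precision values invented; the colon syntax is as described above):

```python
# Hypothetical writers.text stage: each dimension in 'order' may carry a
# ':<precision>' suffix. Filename and precisions are placeholders.
stage = {
    "type": "writers.text",
    "filename": "out.csv",
    "order": "X:3,Y:3,Z:5",
}

# Parse the order string back into (dimension, precision) pairs.
parsed = [tuple(d.split(":")) for d in stage["order"].split(",")]
print(parsed)
```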
+
+Documentation
+=============
+
+- Added a description of the Alpine Linux environment used for Travis
+ and Docker.
+- Updated the documentation for building PDAL on Windows.
+- Added an example of how to loop files with PowerShell.
+- Corrected output shown in the documentation for ``filters.hexbin``.
+- Reorganized the stages landing page to make it easier to find.
+ (#1880)
+
+New stages
+==========
+
+- ``readers.greyhound`` - Allows reading points from a source using the
+ `greyhound <https://github.com/hobu/greyhound>`__ protocol.
+- ``filters.neighborclassifier`` - Re-classifies points based on the
+ classification of neighboring points.
+
+Removed stages
+==============
+
+- ``filters.computerange`` - Use ``filters.python`` to simulate the
+ functionality.
+
+Bug fixes
+=========
+
+- ``filters.range`` now always rejects NaN values as out of range.
+- Changed the default ``--timegap`` value in ``readers.mbio`` from 0 to
+ 1 second.
+- Fixed a bug when reading pointers from metadata on some OSX systems.
+- Fixed a problem in ``pdal translate`` where overriding a reader or
+ writer from a pipeline wouldn’t create the proper pipeline.
+- Fixed a problem where multiple LASzip VLRs would get written to LAS
+ files (#1726).
+- Fixed an installation problem with the Python extension.
+- Fixed a bug in ``writers.tindex`` that could cause a crash if the
+ output file wasn’t LAS.
+- Fixed JSON output from ``filters.hexbin`` when density/area can’t be
+ calculated.
+- Fixed a problem where output might be tagged with the wrong SRS when
+ using ``filters.reprojection`` in stream mode. (#1877)
+- PDAL_DRIVER_PATH would be improperly parsed on Windows systems when
+ the path contained colons. Windows builds now use the semicolon as
+ the path separator. (#1889)
+- Convert NaN and infinite double values to proper strings for output
+ as JSON.
+- Synthetic, keypoint and withheld flags are now added to the
+ classification dimension for version 1.0 - 1.3 files in
+ ``readers.las``.
+- Support a previously missed case of points with color but no density
+ in ``filters.poisson``.
+- Throw an error if the input file can’t be opened in ``readers.ply``.
+- The ``--stdin`` option for ``kernels.tindex`` didn’t work. Now it
+ does. Various other fixes were made.
+- ``writers.gdal`` now throws an error if an attempt is made to write
+ an output file with no points available.
+- A build error that would be generated if LASzip was not found, even
+ if it was not requested, has been resolved.
+- A build error that would be generated if Python was found but not
+ requested has been resolved.
+- PDAL defaults to using `normal CMake
+ interface <https://cmake.org/cmake/help/v3.11/policy/CMP0022.html>`__
+ linking (#1890)
+- Fixed an issue where dimensions from ``readers.pcd`` and
+ ``writers.pcd`` could get confused with dimensions from
+ ``readers.sbet`` and ``writers.sbet``.
+- Fixed index computation in ``filters.voxelcentroidnearestneighbor``
+ and ``filters.voxelcenternearestneighbor`` #1901
+- Fixed libdl linking #1900
+
+================================================================================
+1.6.0
+================================================================================
+
+Changes of Note
+===============
+
+- PDAL's Travis CI configuration is now based on Alpine Linux.
+- PDAL is now built into containers with Alpine Linux in addition to
+ Ubuntu Linux. `Tags <https://hub.docker.com/r/pdal/pdal/tags/>`__
+ exist for each release, starting with 1.4, as well as the master
+ branch.
+- Pipeline tag names can now contain capital letters. They can also
+ contain underscores after the first character.
+- Replace ``filters.programmable`` and ``filters.predicate`` with the
+ more general ``filters.python``.
+- Add support for Matlab with ``filters.matlab`` (#1661).
+- Remove the ``approximate`` option from ``filters.pmf`` and add an
+ ``exponential`` option.
+- Placed base64 encoded VLR data in a subnode of the VLR itself with
+ the key "data" rather than duplicate the VLR node itself (#1648).
+- XML pipelines are no longer supported (#1666).
+- The number of proprietary dimensions in ``readers.text`` was expanded
+ from 255 to 4095 (#1657).
+- API hooks have been added to support the use of PDAL with JVM
+ languages such as Java or Scala.
+- Added support for LASzip 1.4 and switch to use the new LASzip API.
+ (#1205). LASzip support in PDAL will require LASzip.org release 3.1.0
+ or greater.
+- The cpd kernel has been replaced with ``filters.cpd``.
+- No more warnings about ReturnNumber or NumberOfReturns for LAS
+ permutations (#1682).
+- The KernelFactory class has been removed. Its functionality has been
+ moved to StageFactory.
+- Built-in Eigen support has changed from version 3.2.8 to 3.3.4
+ (#1681).
+
+Enhancements
+============
+
+- API users can now create synonyms for existing arguments to a stage.
+- ``filters.splitter`` can now create buffered tiles with the
+ ``buffer`` option.
+- ``writers.ply`` can now be made to write faces of an existing mesh
+ (created with ``filters.greedyprojection`` or ``filters.poisson``) if
+ the ``faces`` option is used. An option ``dims`` has also been added
+ that allows specification of the dimensions to be written as PLY
+ elements. The writer also now supports streaming mode.
+- ``readers.text`` is now automatically invoked for .csv files.
+- PDAL_PLUGIN_INSTALL_PATH can now be set via override when building
+ PDAL from source.
+- Changed the use of null devices to eliminate potentially running out
+ of file descriptors on Windows.
+- ``filters.randomize`` can now be created by the stage factory
+ (#1598).
+- Provide the ability to specify a viewpoint and normal orientation in
+ ``filters.normal`` (#1638).
+- ``readers.las`` now provides the ``ignore_vlr`` option to allow named
+ VLRs to be dropped when read (#1651).
+- Allow ``writers.gdal`` to write output rasters of type other than
+ double (#1497).
+- ``filters.sqlite`` is now invoked automatically for .gpkg files.
+- ``filters.colorinterp`` can now be used in streaming mode in some
+ cases (#1675).
+- Pointers can now be stored as metadata.
+- ``filters.ferry`` can now create new dimensions without copying data
+ (#1694).
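
A minimal sketch of a ``filters.python`` function, assuming the predicate-style interface inherited from ``filters.predicate`` (the function name, dimension choice, and surrounding stage definition are invented for illustration):

```python
import numpy as np

def keep_ground(ins, outs):
    # ins/outs map dimension names to numpy arrays; setting a boolean
    # "Mask" marks which points to keep, and returning True signals success.
    outs["Mask"] = ins["Classification"] == 2
    return True

# The stage would then reference this function, e.g. (hypothetical):
# {"type": "filters.python", "script": "keep_ground.py",
#  "function": "keep_ground", "module": "anything"}
outs = {}
ok = keep_ground({"Classification": np.array([1, 2, 2, 7])}, outs)
print(ok, list(outs["Mask"]))
```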
+
+Documentation
+-------------
+
+- Remove some leftover references to the ``classify`` and ``extract``
+ options that were removed from ``filters.ground`` in the last
+ release.
+- Add a note about running pgpointcloud tests.
+- Added a tutorial on filtering data with python.
+- Remove lingering XML pipeline examples and replace with JSON.
+- Many updates and corrections to the workshop.
+- Added to the FAQs an entry about why a stage might not be found.
+- Added information to stage docs to indicate whether or not they were
+ built-in rather than plugins (#1612).
+- Added information to stage docs to indicate when they are streamable
+ (#1606).
+
+New filters
+===========
+
+- ``filters.greedyprojection`` - Performs triangulation of points
+ (surface reconstruction) based on the greedy projection algorithm.
+- ``filters.poisson`` - Performs triangulation of points (surface
+ reconstruction) based on the algorithm of Kazhdan.
+- ``filters.head`` - Passes through only the first N points.
+- ``filters.tail`` - Passes through only the last N points.
+- ``filters.cpd`` - Calculates and applies a transformation to align
+ two datasets using the `Coherent Point
+ Drift <https://sites.google.com/site/myronenko/research/cpd>`__
+ registration algorithm.
+- ``filters.icp`` - Calculates and applies a transformation to align
+ two datasets using the `Iterative Closest
+ Point <http://docs.pointclouds.org/trunk/classpcl_1_1_iterative_closest_point.html>`__
+ registration algorithm.
+- ``filters.voxelcenternearestneighbor`` - Finds points closest to the
+ center of a voxel (#1597).
+- ``filters.voxelcentroidnearestneighbor`` - Finds points closest to
+ the centroid of points in a voxel (#1597).
+- ``filters.python`` - Replaces ``filters.predicate`` and
+ ``filters.programmable``.
+- ``filters.matlab`` - Provides support for matlab manipulation of PDAL
+ points and metadata (#1661).
+
+New readers
+===========
+
+- Add ``readers.osg`` to support Open Scene Graph format.
+- Add ``readers.matlab`` to support reading data from a user-defined
+ Matlab array struct. The same structure is written by
+ ``writers.matlab``.
+
+Bug fixes
+=========
+
+- Fixed a case where ``kernels.tindex`` would unconditionally set the
+ spatial reference on a feature from the ``a_srs`` option. The spatial
+ reference stored in ``a_srs`` is now only used if explicitly set or
+ no spatial reference was present.
+- Fixed a case where ``writers.gdal`` could fail to check for an
+ out-of-bounds point, potentially leading to a crash.
+- Fixed an error in ``filters.cluster`` where the points wouldn't properly
+ be placed in the first cluster because the starting cluster number
+ was incorrect.
+- Fixed an error in freeing OGR features that could cause a crash when
+ running "pdal density".
+- Fix potential memory leaks when creating OGRSpatialReference objects.
+- Make sure the ``global_encoding`` option is initialized to 0 in
+ ``writers.las`` (#1595).
+- Fix eigen::computeCovariance to compute the correct sample
+ covariance.
+- In some cases, the ``filters.crop`` would attempt to treat a 2D
+ bounding box as 3D, yielding a NULL bounding box and an error in
+ behavior (#1626).
+- Fixed potential crash when using PDAL with multiple threads by
+ providing locking for gdal::ErrorHandler (#1637)
+- Made sure that an uncompressed LAS file would be properly read even
+ if the ``compression`` option was provided.
+- Throw an exception instead of crash when attempting to access a
+ non-existent color ramp. (#1688)
+
+================================================================================
+1.5.0
+================================================================================
+Changes of Note
+===============
+
+- PCL ``--visualize`` capability of the ``pdal`` command line
+ application has been removed.
+- ``writers.derivative`` has been removed. Use
+ `gdaldem <http://www.gdal.org/gdaldem.html>`__ for faster and more
+ featureful equivalent functionality.
+- GeoTIFF and Proj.4 are now required dependencies.
+- ``writers.p2g`` has been removed. It was replaced by ``writers.gdal``
+ in 1.4, but the P2G writer was essentially unmaintained and we will
+ be using the GDAL one going forward.
+- ``filters.attribute`` was split into ``filters.assign`` and
+ ``filters.overlay`` to separate their functionalities
+- ``filters.pmf`` and ``filters.outlier`` have dropped the ``classify``
+ and ``extract`` options. They now only classify points and leave it to
+ downstream filters to ignore/extract classifications as needed.
+- ``filters.outlier`` has changed the default classification for noise
+ points from ``18`` to ``7`` to match the LAS classification code for
+ "Low point (noise)".
+
+Enhancements
+============
+
+- ``pdal pipeline`` now supports a ``--stream`` option which will
+ default to one-at-a-time or chunk-at-a-time point processing when all
+ stages in the pipeline support it. You can use this option to control
+ memory consumption -- for example when interpolating a very large
+ file with ``writers.gdal``
+- ``filters.crop`` was enhanced to support transformed filter polygons,
+ streaming, and radius cropping.
+- ``readers.greyhound`` updated to support greyhound.io 1.0 release,
+ with the most significant enhancement being support for passing
+ downstream JSON filters.
+- ``user_data`` JSON object can be applied to any PDAL pipeline object
+ and it will be carried through processing. You can use this mechanism
+ for carrying your own information in PDAL pipelines without having to
+ sidecar data. #1427
+- ``writers.las`` now can write ``pdal_metadata`` and ``pdal_pipeline``
+ VLRs for processing history tracking. #1509 #1525
+- ``metadata``, ``schema``, and ``spatialreference`` objects added to
+ global module for ``filters.programmable`` and ``filters.predicate``
+ Python filters.
+- ``pdalargs`` option for ``filters.programmable`` and
+ ``filters.predicate`` allow you to pass in a JSON dictionary to your
+ Python module for override or modification of your script
+- Stage tags can be used in pipeline override scenarios
+- User-settable VLRs in ``writers.las`` #1542
+- ``filters.sort`` now supports descending order and uses
+ ``std::stable_sort`` #1530 (Thanks to new contributor @wrenoud).
+- ``pdal tindex`` will now use data bounds if ``filters.hexbin`` cannot
+ be loaded for boundaries #1533
+- ``filters.pmf`` and ``filters.smrf`` improved performance #1531 and
+ #1541
+- ``filters.assign`` now supports
+ `Range <https://pdal.io/stages/filters.range.html>`__-based
+ filters
+- ``filters.outlier`` now accepts a user-specified ``class`` to
+ override the default value of ``7`` for points deemed outliers. #1545
+- ``filters.pmf`` and ``filters.smrf`` now accept a
+ `Range <https://pdal.io/stages/ranges.html#ranges>`__ via the
+ ``ignore`` option to specify values that should be excluded from
+ ground segmentation. #1545
+- ``filters.pmf`` and ``filters.smrf`` now consider only last returns
+ (when return information is available) as the default behavior. The
+ ``last`` option can be set to ``false`` to consider all returns.
+ #1545
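
Tying the last few items together, a sketch of a ground-segmentation pipeline (filenames invented) that classifies noise with ``filters.outlier`` and then excludes class ``7`` from ``filters.smrf`` via the ``ignore`` range:

```python
import json

# Hypothetical pipeline: filenames are placeholders; the ignore range
# follows the Range notation described above.
pipeline = {
    "pipeline": [
        "input.las",
        {"type": "filters.outlier"},
        {"type": "filters.smrf", "ignore": "Classification[7:7]"},
        "ground.las",
    ]
}
print(json.dumps(pipeline))
```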
+
+Documentation
+-------------
+
+- New `About page <https://pdal.io/about.html>`__ adapted from
+ `workshop <https://pdal.io/workshop/>`__
+- New `LAS reading and writing <https://pdal.io/tutorial/las.html>`__
+ tutorial
+- Consolidation of `Python <https://pdal.io/python.html>`__ information
+
+New filters
+-----------
+
+- ``filters.cluster`` - Perform Euclidean cluster extraction, and label
+ each point by its cluster ID. By @chambbj.
+- ``filters.groupby`` - Split incoming PointView into individual
+ PointViews categorically, e.g., by Classification. By @chambbj.
+- ``filters.locate`` - Locate and return the point with the minimum or
+ maximum value for a given dimension. By @chambbj.
+- ``filters.elm`` - Extended Local Minimum filter. By @chambbj.
+
+New readers
+-----------
+
+- ``readers.mbio`` Bathymetric point cloud support for formats
+ supported by the
+ `MB-System <https://www.ldeo.columbia.edu/res/pi/MB-System/>`__
+ software library
+
+Bug fixes
+---------
+
+- ``writers.pgpointcloud`` needed to treat table schema correctly
+ https://github.com/PDAL/PDAL/pull/1540 (thanks @elemoine)
+- ``pdal density`` kernel now supports overriding ``filters.hexbin``
+ options #1487
+- Arbiter embedded library updated to support setting Curl options
+ (certificate settings, etc).
+- Provided a default value for ``radius`` in ``writers.gdal`` #1475
+- ``writers.ply`` broken for non-standard dimensions #1556
+- No EVLRs for ``writers.las`` for files < LAS 1.4 #1551
+- LAS extra dims handling for standard PDAL dimension names #1555
+- LASzip defines #1549
+
+
+================================================================================
+1.4.0
+================================================================================
+Changes of Note
+===============
+
+- GeoTIFF is now required to compile PDAL
+- ``--scale`` and ``--offset`` kernel options are no longer supported.
+ Specify using stage-specific options as needed.
+- The ``--validate`` option of the ``pdal pipeline`` command now
+ invokes the preparation portion of the pipeline to force validation
+ of options.
+- The ``--verbose`` option to ``pdal`` now accepts log level names
+ ("Error", "Warning", "Info", "Debug", "Debug1", "Debug2", "Debug3",
+ "Debug4" and "Debug5") in addition to the corresponding numeric
+ values (0 - 8).
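As an illustrative sketch only (not PDAL's actual parser), the name-to-value correspondence, assuming the names map onto 0-8 in the listed order, could be expressed as:

```python
# Hypothetical mapping of the log level names accepted by `pdal --verbose`
# to their numeric values, assuming they correspond in the listed order.
LOG_LEVELS = {name: value for value, name in enumerate(
    ["Error", "Warning", "Info", "Debug", "Debug1",
     "Debug2", "Debug3", "Debug4", "Debug5"])}

def parse_verbose(arg: str) -> int:
    """Accept either a level name or a numeric value 0-8."""
    if arg in LOG_LEVELS:
        return LOG_LEVELS[arg]
    value = int(arg)
    if not 0 <= value <= 8:
        raise ValueError("invalid log level: " + arg)
    return value
```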
+
+Enhancements
+============
+
+New filters
+-----------
+
+- ```filters.colorinterp`` <http://pdal.io/stages/filters.colorinterp.html>`__
+ - Ramp RGB colors based on a specified dimension. By @hobu
+- ```filters.mad`` <http://pdal.io/stages/filters.mad.html>`__ - Filter
+ outliers in a given dimension by computing Median Absolute Deviation
+ (MAD). By @chambbj
+- ```filters.lof`` <http://pdal.io/stages/filters.lof.html>`__ - Filter
+ outliers by Local Outlier Factor (LOF). By @chambbj
+- ```filters.estimaterank`` <http://pdal.io/stages/filters.estimaterank.html>`__
+ - Estimate rank of each neighborhood of k-nearest neighbors. By
+ @chambbj
+- ```filters.eigenvalues`` <http://pdal.io/stages/filters.eigenvalues.html>`__
+ - Compute pointwise Eigenvalues. By @chambbj
+- ```filters.iqr`` <http://pdal.io/stages/filters.iqr.html>`__ - Filter
+ outliers in a given dimension by computing Interquartile Range (IQR).
+ By @chambbj
+- ```filters.kdistance`` <http://pdal.io/stages/filters.kdistance.html>`__
+ - Compute pointwise K-distance. By @chambbj
+- ```filters.radialdensity`` <http://pdal.io/stages/filters.radialdensity.html>`__
+ - Compute pointwise radial density. By @chambbj
+- ```filters.outlier`` <http://pdal.io/stages/filters.outlier.html>`__
+ - Radius and statistical outliers. By @chambbj
+
+New writers
+-----------
+
+- ```writers.gdal`` <http://pdal.io/stages/writers.gdal.html>`__ -
+ `points2grid <http://github.com/crrel/points2grid>`__ replacement. By
+ @abellgithub
+
+New kernels
+-----------
+
+- ```kernels.hausdorff`` <http://pdal.io/apps/hausdorff.html>`__ -
+ Compute `Hausdorff
+ distance <https://en.wikipedia.org/wiki/Hausdorff_distance>`__
+ between two point clouds. By @chambbj
+
+Improvements
+------------
+
+- `Filename
+ globbing <http://man7.org/linux/man-pages/man7/glob.7.html>`__ is now
+ supported in the JSON pipeline specification of reader input files.
+ Note that tilde expansion is NOT supported.
+- Source tree reorganization
+ https://lists.osgeo.org/pipermail/pdal/2016-December/001099.html
+- CMake updates to utilize ``target_include_directory`` and
+ ``target_link_libraries``.
+- JSON output for ``pdal --showjson --drivers`` and
+ ``pdal --showjson --options`` to support application builders being
+ able to fetch active lists of stages, kernels, and options.
+ https://github.com/PDAL/PDAL/issues/1315
+- Stacktrace logging to stderr on Unix systems
+ https://github.com/PDAL/PDAL/pull/1329
+- Geometry ingestion enhancements now support using
+ `GeoJSON <http://geojson.org>`__ or WKT in pipeline options
+ https://github.com/PDAL/PDAL/pull/1339.
+- Significant Python extension refactor
+ https://github.com/PDAL/PDAL/pull/1367 including ability to fetch
+ data schema, log, and pipeline information. Common utility classes to
+ support the Python extension were refactored in support of the Java
+ extension.
+- Java extension by `Azavea <https://www.azavea.com/>`__ to support
+ using PDAL in `Spark <http://spark.apache.org/>`__ and friends.
+ https://github.com/PDAL/PDAL/pull/1371
+- ```kernels.density`` <http://pdal.io/stages/kernels.density.html>`__
+ - Density kernel now supports writing into an existing OGR datasource
+ https://github.com/PDAL/PDAL/pull/1396
+- ```readers.greyhound`` <http://pdal.io/stages/readers.greyhound.html>`__
+ - Greyhound reader refactor.
+  - Multi-threaded read support
+  - Server-side filtering pass-through
+- ```writers.derivative`` <http://pdal.io/stages/writers.derivative.html>`__
+ - Derivative writer refactor.
+  - ``slope_d8``
+  - ``slope_fd``
+  - ``aspect_d8``
+  - ``aspect_fd``
+  - ``contour_curvature``
+  - ``profile_curvature``
+  - ``tangential_curvature``
+  - ``hillshade``
+  - ``total_curvature``
+  - Output to any GDAL-writable format
+    https://github.com/PDAL/PDAL/issues/1146
+- ```filters.crop`` <http://pdal.io/stages/filters.crop.html>`__ -
+ Radial cropping https://github.com/PDAL/PDAL/issues/1387
+- ```filters.stats`` <http://pdal.io/stages/filters.stats.html>`__ -
+ Optional per-dimension median and MAD computation
+- Support was added for the recently added cartesian coordinate in BPF
+ files.
+- ```writers.p2g`` <http://pdal.io/stages/writers.p2g.html>`__ now uses
+ the InCoreInterp method of the points2grid code. This uses more
+ memory but runs faster and doesn't crash.
+- The application now provides better error feedback on command-line
+ errors by indicating the invoked kernel when an error is detected.
+- PDAL now searches by default in the following locations for plugins:
+  ``".", "./lib", "../lib", "./bin", "../bin"``. Use
+ ``PDAL_DRIVER_PATH`` to explicitly override the plugin search
+ location.
+- Vector-based command-line arguments now accept default values in the
+ API.
+- JSON parsing errors of pipeline files now provide more detailed
+ messages.
+- Writers now add output filenames to metadata.
+- Stage names provided as input to other stages in pipelines can now be
+ specified as strings or arrays of strings. The previous version
+ required single input stage names to be placed in an array.
+- Added ``--smooth`` option to
+ ```filters.hexbin`` <http://pdal.io/stages/filters.hexbin.html>`__ to
+ allow user control of boundary smoothing.
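The globbing and tilde restrictions above can be modeled with a small sketch (hypothetical names; ``PDALError`` stands in for ``pdal::pdal_error``, and the leading-tilde check mirrors the behavior PDAL later settled on in #2264):

```python
import glob

class PDALError(Exception):
    """Stand-in for pdal::pdal_error."""

def expand_reader_filenames(pattern: str) -> list:
    # PDAL rejects tilde expansion outright rather than guessing at
    # shell semantics; this sketch mimics that leading-tilde check.
    if pattern.startswith("~"):
        raise PDALError("PDAL does not support shell expansion")
    # Plain glob patterns (*, ?, [...]) are expanded as on the shell.
    return sorted(glob.glob(pattern))
```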
+
+Bug fixes
+---------
+
+- Well-known text provided as a spatial reference isn't interpreted by
+ GDAL unless necessary.
+- ```filters.hexbin`` <http://pdal.io/stages/filters.hexbin.html>`__
+ now returns ``MULTIPOLYGON EMPTY`` when it is unable to compute a
+ boundary.
+- Reading a not-a-number (NaN) value from a text file now works
+  properly.
+- The ``--compression`` option for
+ ```writers.pcd`` <http://pdal.io/stages/writers.pcd.html>`__ has been
+ fixed so that the writer actually compresses as requested.
+- The stage manager (and hence, pipelines) now properly recognizes the
+ text reader as
+ ```readers.text`` <http://pdal.io/stages/readers.text.html>`__.
+- ```readers.text`` <http://pdal.io/stages/readers.text.html>`__ now
+ detects the case when a dimension has been specified more than once
+ in an input file.
+- Fixed a problem where
+ ```filters.splitter`` <http://pdal.io/stages/filters.splitter.html>`__
+  could create cells larger than requested along the X and Y axes.
+- ```writers.nitf`` <http://pdal.io/stages/writers.nitf.html>`__ now
+ errors if it attempts to write an FTITLE field that exceeds the
+ allowable length.
+- If PDAL is built with LAZperf but without LASzip, the program now
+ properly defaults to using LAZperf.
+- Fixed a problem where
+ ```filters.sort`` <http://pdal.io/stages/filters.sort.html>`__ could
+ fail to properly order points depending on the implementation of the
+ C++ sort algorithm.
+- Fixed a problem in the pgpointcloud reader and writer where a failure in
+ a query could lead to a crash.
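The filters.sort fix concerns ordering guarantees for equal keys; a stable sort preserves input order among ties, as this minimal sketch (plain Python, not PDAL code) illustrates:

```python
# Points as (id, Z). Sorting by Z with a stable sort keeps equal-Z points
# in their original order, which is the property the filters.sort fix
# restores (std::sort makes no such guarantee for equal keys).
points = [("a", 2.0), ("b", 1.0), ("c", 2.0), ("d", 1.0)]
by_z = sorted(points, key=lambda p: p[1])  # Python's sort is stable
```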
+
+
+================================================================================
+1.3.0
+================================================================================
+
+Changes of Note
+================================================================================
+
+- Command line parsing has been reworked to cause invalid options to emit
+ an error message. Stage options specified in pipelines and on the command
+ line are handled similarly.
+- The dimension PlatformHeading has been renamed to Azimuth. When looking
+ up a dimension by name, the string "platformheading" is still accepted and
+ returns Azimuth.
+- Errors detected by GDAL are no longer thrown as exceptions. A log message
+ is emitted instead.
+- Well-known dimensions are now added to PDAL by editing a JSON file,
+ Dimension.json.
+- Linking with PDAL using CMake no longer requires explicit linking with
+ curl, jsoncpp or arbiter libraries.
+- PDAL now searches for plugins in the following locations and order by
+ default: ./lib, ../lib, ../bin, the location where PDAL was installed.
+- The '--debug' and '--verbose' options are no longer supported as stage
+ options. The '--verbose' option is accepted on the PDAL command line. The
+ '--debug' option is deprecated, and if specified on the command line is
+ equivalent to '--verbose=3'. One can enable logging programmatically by
+ calling setLog() on a PipelineManager or a specific stage.
+- pdal::Dimension types are now C++11 enumeration classes. The change may
+ require editing any Stage implementations you might have and removing the
+ extraneous ::Enum type specification.
+
+Enhancements
+================================================================================
+
+- Pipelines can now be read directly from standard input.
+- Files can now be read from Amazon S3 buckets by providing an appropriate
+ URL.
+- Many new filters have been added: filters.approximatecoplanar,
+ filters.eigenvalues, filters.estimaterank, filters.hag, filters.normal,
+ filters.outlier, filters.pmf, filters.sample. Most of these are algorithm
+ extractions from the PCL library, with the hope of eliminating the need
+ for PCL in some future PDAL release.
+- The PLY reader now loads dimensions that aren't predefined PDAL dimensions.
+- A '--driver' option has been added to allow a specific driver to be loaded
+ for a file without regard to its extension.
+- The PDAL_DRIVER_PATH environment variable now accepts a list of locations
+ to search for drivers.
+- Beta-quality driver improvements in readers.greyhound
+- Beta quality implementation of Mongus and Zalik ground filter
+- Experimental implementation of Pingel et al. ground filter
+- writers.pcd enhancements by Logan Byers (binary, compression) -- requires
+ PCL
+- Docker images upgraded to Ubuntu Xenial
+- Cyclone PTS reader -- readers.pts
+- skewness, kurtosis, stddev, and variance added to filters.stats output
+- Python API now available https://pypi.python.org/pypi/pdal
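A rough model of the ``PDAL_DRIVER_PATH`` behavior described above (hypothetical helper; the built-in defaults here follow the list in Changes of Note, with the install location omitted):

```python
import os

def driver_search_path(env: dict) -> list:
    """Sketch: PDAL_DRIVER_PATH, when set, is a pathsep-separated list
    of plugin locations that overrides the built-in defaults."""
    defaults = ["./lib", "../lib", "../bin"]
    custom = env.get("PDAL_DRIVER_PATH", "")
    if custom:
        return custom.split(os.pathsep)  # colon-separated on Unix
    return defaults
```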
+
+Fixes
+================================================================================
+
+- A failure that may have resulted when using filters.hexbin to calculate
+ density in the southern hemisphere has been corrected.
+- A failure to create the index file with 'pdal tindex' and GDAL 2.X has
+ been fixed.
+- The '--tindex' option for the 'pdal tindex' command is now a positional
+ option as specified in the documentation.
+- The icebridge reader now reads the X dimension as longitude and forces
+  the value into the range (-180, 180]. It also properly uses the dimension
+ Azimuth instead of ScanAngleRank.
+- An error in writers.pgpointcloud where it ignored SQL to be run at the end
+ of the stage has been fixed.
+- An error that might incorrectly write values stored internally as bytes
+ when written as a different data type has been fixed.
+- A problem where 'pdal info' wouldn't properly report dimension names not
+ predefined by PDAL has been fixed.
+- A bug in filters.crop that wouldn't properly transform coordinates when
+ provided the '--a_srs' option has been fixed.
+
+================================================================================
+1.2.0
+================================================================================
+
+Changes of Note
+================================================================================
+
+- The GEOS library is now required to build PDAL. In earlier versions it was
+ an optional component.
+- Boost is no longer a required component. Unless you are building plugins
+ that require boost (notably PCL and Geowave), you no longer will need
+ boost installed on your system to build or run PDAL.
+- PDAL now builds on Microsoft Visual Studio 2015.
+- The PipelineReader class has been removed and its functionality has been
+ merged into PipelineManager.
+- Plugin libraries now support Linux versioning.
+- Naming changes have been made to allow packaging with the Debian release.
+- filters.height now uses the dimension 'HeightAboveGround' instead of a
+  dimension named 'Height' to be compatible with filters.heightaboveground.
+- Option names no longer contain lowercase characters.
+- PDAL now works with GDAL version 1.9 and later.
+- Stages created with the StageFactory are now owned by the factory.
+- filters.dartthrowing has been renamed filters.dartsample
+- 'pipeline-serialization' now produces JSON output instead of XML.
+
+Enhancements
+================================================================================
+
+- Pipelines may now be specified using a JSON syntax. XML syntax is still
+ supported but users should switch to JSON when possible as the XML support
+ will be removed in a future version.
+- PDAL now can be built into a Docker container.
+- Many stages now support "streaming," which allows control of the number
+ of points stored in memory during processing. See
+ Stage::execute(StreamPointTable&) for more information.
+- A basic text reader has been added.
+- Added support for the dimension 'ClassFlags' in readers.las.
+- The derivative writer can now produce output for multiple primitive types
+ with a single execution.
+- 'pdal info' now provides bounding box output instead of a more refined
+ boundary when the hexbin plugin isn't found.
+- Added 'pdal density' to provide a command-line interface to the
+  filters.hexbin density calculations.
+- The icebridge reader can now load an associated metadata file. The reader
+ also now marks the associated coordinate system as WGS84.
+- The stats filter now emits bounding box information in native and WGS84
+ projections.
+- PDAL command-line programs now (generally) check their argument lists for
+ correctness and report syntax errors.
+- 'pdal info' now provides spatial reference attributes in addition to
+ the actual well-known text.
+- Geometry can now be specified as GeoJSON as well as well-known-text in
+ most contexts. Geometry optionally provides Z-dimension output.
+- Stage and plugin creation is now thread-safe (NOTE: Most of PDAL is
+ NOT thread-safe, so tread carefully).
+- Many, many documentation enhancements.
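A minimal example of the JSON pipeline syntax introduced here (filenames are placeholders; reader and writer stages are inferred from the filename extensions):

```python
import json

# An ordered list of stages from reader to writer: bare filename strings
# become readers/writers, and dict entries are explicit stages.
pipeline = {
    "pipeline": [
        "input.las",
        {"type": "filters.crop", "bounds": "([0, 100], [0, 100])"},
        "output.las",
    ]
}
serialized = json.dumps(pipeline, indent=2)
```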
+
+Fixes
+================================================================================
+
+- A bug in generating PCIDs when multiple simultaneous PDAL executions
+  write to the same Postgres database has been fixed.
+- Fixed a bug in generated SQL delete statements when certain table names
+  were used in the writers.pgpointcloud driver.
+- Properly escape quotes when generating JSON output.
+- Fix an off-by-one error when writing data with the derivative writer that
+ could lead to a crash.
+- Fixed a dependency error during builds that could lead to a failure to
+ properly load Python extensions on Linux.
+- Fixed a bug where passing certain options to 'pdal info' could be handled
+ in ambiguous ways.
+- Fixed bugs in the reading of raster data using readers.gdal.
+- Fixed population of the AIMIDB and ACFTB attributes in writers.nitf.
+- Corrected the parsing of some dimension names in filters.colorization.
+- Fixed a potential truncation in the GlobalEncoding dimension of readers.las.
+
+================================================================================
+1.1.0
+================================================================================
+
+Enhancements
+================================================================================
+
+- Add support for the LAZperf LAS compressor in decoding/encoding LAS files.
+ LAZperf can be enabled with the 'compression' option in readers.las and
+ writers.las.
+- Add PCL functionality as filters (filters.greedyprojection,
+  filters.gridprojection, filters.ground, filters.movingleastsquares,
+ filters.poisson, filters.radiusoutlier, filters.statisticaloutlier,
+ filters.voxelgrid, filters.height, filters.dartsample)
+- Add readers.gdal to support reading raster sets as point clouds
+- Update writers.geowave and readers.geowave to work with the latest version
+ of GeoWave software.
+- Add readers.ilvis2 to support the Icebridge ILVIS2 format.
+- Disallow nested options. Check stage documentation for changes in option
+ names and handling. (filters.ferry, filters.colorization, filters.attribute,
+ filters.crop). Change filters.attribute to handle only a single dimension.
+- Add 'output_dims' options in writers.bpf to allow control of the dimensions
+ that should be written.
+- Add 'all' keyword in 'extra_dims' options of writers.las to cause all
+ dimensions to be written to either the standard or extra dimensions of
+ a LAS point.
+- Add filters.randomize to allow randomized order of points.
+- Add filters.divider to split a set of points into subsets of a fixed number
+ or into subsets containing a specific number of points.
+- Update to version 1.1.4 of rply in readers.rply.
+- Change the logic of the range filter to allow multiple ranges for a single
+ dimension and support a simple boolean logic.
+- Change the default scaling on writers.bpf to 'auto'.
+- Add support for smoothing boundaries generated by filters.hexbin.
+- Add readers.tindex to allow vector-filtered input of point cloud files.
+- Allow merging of datasets with non-matching spatial references.
+- Many, many documentation enhancements.
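The range filter's boolean logic can be sketched as follows (illustrative Python, not PDAL's implementation; it assumes the documented semantics that ranges on the same dimension are OR'd and the per-dimension results are AND'd):

```python
# Ranges are (dimension, lo, hi) with inclusive bounds; a point is a
# dict of dimension values. Same-dimension ranges are OR'd together,
# then the per-dimension results are AND'd.
def in_ranges(point: dict, ranges: list) -> bool:
    by_dim = {}
    for dim, lo, hi in ranges:
        ok = lo <= point[dim] <= hi
        by_dim[dim] = by_dim.get(dim, False) or ok
    return all(by_dim.values())
```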
+
+Fixes
+================================================================================
+
+- Handle error with Pgpointcloud when pointcloud extension is not installed
+  on postgres server. Skip tests if extension is missing.
+- Set precision on output of doubles to metadata.
+- Fix a divide-by-zero error in readers.faux when the point count was 1.
+ (https://github.com/PDAL/PDAL/issues/1015)
+- Fix fatal error loading numpy library that occurred when running
+ filters.predicate or filters.programmable.
+ (https://github.com/PDAL/PDAL/issues/1010)
+- Correct readers.las to properly check WKT bit when choosing spatial
+ reference VLR.
+ (https://github.com/PDAL/PDAL/issues/1040)
+- Correct writers.las to emit only WKT or GeoTiff VLR, not both.
+ (https://github.com/PDAL/PDAL/issues/1040)
+- Check object ID against table column id (attrelid) to ensure correct PCID
+ retrieval in readers.pgpointcloud.
+ (https://github.com/PDAL/PDAL/pull/1051)
-Steps for Making a PDAL Release
-==============================================================================
-
-:Author: Howard Butler
-:Contact: howard at hobu.co
-:Date: 04/04/2018
-
-This document describes the process for releasing a new version of PDAL.
-
-General Notes
-------------------------------------------------------------------------------
-
-Release Process
-
-1) Increment Version Numbers
-
- - CMakeLists.txt
- * set(PDAL_VERSION_STRING "1.0.0" CACHE STRING "PDAL version")
- * DISSECT_VERSION() CMake macro will break version down into
- PDAL_VERSION_MAJOR, PDAL_VERSION_MINOR, PDAL_VERSION_PATCH,
- and PDAL_CANDIDATE_VERSION strings.
-
- - Update SO versioning
- set(PDAL_API_VERSION "1")
- set(PDAL_BUILD_VERSION "1.0.0")
- * https://github.com/libspatialindex/libspatialindex/pull/44#issuecomment-57088783
-
- - doc/quickstart.rst has a number of current-release references
-
- - doc/download.rst point to new release
-
- - appveyor.yml
-
- - Make and push new release branch
-
- ::
-
- git branch 1.7-maintenance
- git push origin 1.7-maintenance
-
-
- - Increment the doc build branch of .travis.yml:
-
- "$TRAVIS_BRANCH" = "1.7-maintenance"
-
- - Make DockerHub build entry for new release branch.
-
-
-2) Write and update release notes. Use the PDAL "releases" section to create one.
- Write the document in Markdown for convenience on GitHub.
-
- - Manually store a copy of it in ./doc/development/release-notes/1.7.0.md
- for future reference.
-
- - Convert it to reStructuredText using pandoc and add the output to the
- RELEASENOTES.txt document
-
- ::
-
- pandoc --from markdown --to rst --output=1.7.rst doc/development/release-notes/1.7.0.md
-
-3) Update ChangeLog with git2cl
-
- * git2cl . > ChangeLog
- * Delete any lines with "Merge" in them
-
- ::
-
- git2cl . > Changelog
- gsed -i '/Merge/d' ./ChangeLog
-
-4) Build and run the tests. Really.
-
- ::
-
- ctest -V
-
-
-5) Clone a new tree and issue cmake. The package_source CMake target is
- aggressive about scooping up every file in the tree to include in the package.
- It does ok with CMake-specific stuff, but any other cruft in the tree is
- likely to get dumped into the package.
-
- ::
-
- git clone git://github.com/PDAL/PDAL.git pdal2
- cmake .
-
-6) Make the source distribution. If you are doing a release candidate
- add an RC tag to the invocation.
-
- ::
-
- ./package.sh
- ./package.sh RC1
-
-
- package.sh will rename the source files if necessary for the release
- candidate tag and create .md5 sum files. This script only works on
- linux and windows.
-
-7) Update docs/download.txt to point at the location of the new release
-
-8) Upload the new release to download.osgeo.org:/osgeo/download/pdal
-
- ::
-
- scp PDAL-* hobu at download.osgeo.org:/osgeo/download/pdal
-
-9) Tag the release. Use the ``-f`` switch if you are retagging because you
- missed something.
-
- ::
- git tag 1.0.0
- git push --tags
-
-
-10) Write the release notes. Email PDAL mailing list with notice about release
-
-
-11) Upload new OSGeo4W package to download.osgeo.org:/osgeo/download/osgeo4w/x86_64/release/pdal
-
- - Go to https://ci.appveyor.com/project/hobu/pdal
- - Choose ``OSGEO4W_BUILD=ON`` build
- - Scroll to very bottom
- - Fetch tarball "OSGeo4W64 build will be uploaded to https://s3.amazonaws.com/pdal/osgeo4w/pdal-a4af2420b09725a4a0fff1ef277b1.7370c497d2.tar.bz2"
-
- - rename to match current release and set OSGeo4W build number to 1
-
- ::
-
- mv pdal-a4af2420b09725a4a0fff1ef277b1.7370c497d2.tar.bz2 pdal-1.7.0-1.tar.bz2
-
- - copy to OSGeo4W server
-
- ::
-
- scp pdal-1.7.0-1.tar.bz2 hobu at download.osgeo.org:/osgeo/download/osgeo4w/x86_64/release/pdal
-
- - refresh OSGeo4W
-
- ::
- http://upload.osgeo.org/cgi-bin/osgeo4w-regen.sh
-
-
- - promote release
-
- ::
-
- http://upload.osgeo.org/cgi-bin/osgeo4w-promote.sh
-
-12) Update Alpine package
-
- - The PDAL Alpine package lives at
- https://github.com/alpinelinux/aports/blob/master/testing/pdal/APKBUILD.
- Pull requests can be made against the alpinelinux/aports repository. If the
- build configuration alone is changing, with no version increase, simply
- increment the build number `pkgrel`. If the `pkgver` is changing, then
- reset `pkgrel` to 0.
- - Pull requests should have a commit message of the following form
- `testing/pdal: <description>`.
-
-13) Update Conda package
-
- - For PDAL releases that bump version number, but do not change dependencies
- or build configurations, the `regro-cf-autotick-bot` should automatically
- create a pull request at https://github.com/conda-forge/pdal-feedstock.
- Once the builds succeed, the PR can be merged and the updated package will
- soon be available in the `conda-forge` channel. If the PR does not build
- successfully, updates to the PR can be pushed to the bot's branch. Version
- bumps should reset the build number to zero.
- - Updates that alter the build configuration but do not bump the version
- number should be submitted as PRs from a fork of the
- https://github.com/conda-forge/pdal-feedstock repository. In these cases,
- the build number should be incremented.
=====================================
pdal/PointTable.hpp
=====================================
@@ -196,35 +196,63 @@ private:
class PDAL_DLL StreamPointTable : public SimplePointTable
{
protected:
- StreamPointTable(PointLayout& layout) : SimplePointTable(layout)
+ StreamPointTable(PointLayout& layout, point_count_t capacity)
+ : SimplePointTable(layout)
+ , m_capacity(capacity)
+ , m_skips(m_capacity, false)
{}
public:
/// Called when a new point should be added. Probably a no-op for
/// streaming.
virtual PointId addPoint()
- { return 0; }
+ { return 0; }
+
/// Called when execute() is started. Typically used to set buffer size
/// when all dimensions are known.
virtual void finalize()
- {}
- /// Called before the StreamPointTable is reset indicating the number of
- /// points that were populated, which must be less than or equal to its
- /// capacity.
- virtual void setNumPoints(PointId n)
- {}
+ {}
+
+ void clear(point_count_t count)
+ {
+ m_numPoints = count;
+ reset();
+ std::fill(m_skips.begin(), m_skips.end(), false);
+ }
+
+ /// Returns true if a point in the table was filtered out and should be
+ /// considered omitted.
+ bool skip(PointId n) const
+ { return m_skips[n]; }
+ void setSkip(PointId n)
+ { m_skips[n] = true; }
+
+ point_count_t capacity() const
+ { return m_capacity; }
+
+ /// During a given call to reset(), this indicates the number of points
+    /// populated in the table. This value will always be less than or equal
+ /// to capacity(), and also includes skipped points.
+ point_count_t numPoints() const
+ { return m_numPoints; }
+
+protected:
/// Called when the contents of StreamPointTable have been consumed and
/// the point data will be potentially overwritten.
virtual void reset()
{}
- virtual point_count_t capacity() const = 0;
+
+private:
+ point_count_t m_capacity;
+ point_count_t m_numPoints;
+ std::vector<bool> m_skips;
};
class PDAL_DLL FixedPointTable : public StreamPointTable
{
public:
- FixedPointTable(point_count_t capacity) : StreamPointTable(m_layout),
- m_capacity(capacity)
+ FixedPointTable(point_count_t capacity)
+ : StreamPointTable(m_layout, capacity)
{}
virtual void finalize()
@@ -232,22 +260,19 @@ public:
if (!m_layout.finalized())
{
BasePointTable::finalize();
- m_buf.resize(pointsToBytes(m_capacity + 1));
+ m_buf.resize(pointsToBytes(capacity() + 1));
}
}
+protected:
virtual void reset()
{ std::fill(m_buf.begin(), m_buf.end(), 0); }
- point_count_t capacity() const
- { return m_capacity; }
-protected:
virtual char *getPoint(PointId idx)
{ return m_buf.data() + pointsToBytes(idx); }
private:
std::vector<char> m_buf;
- point_count_t m_capacity;
PointLayout m_layout;
};
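The refactor above moves skip tracking into the table itself; an illustrative Python model of the new interface (not PDAL code):

```python
# Model of the refactored StreamPointTable: the table now owns the skip
# flags and the per-pass point count, replacing the local vector<bool>
# previously kept in Streamable::execute().
class StreamPointTable:
    def __init__(self, capacity: int):
        self._capacity = capacity
        self._num_points = 0
        self._skips = [False] * capacity

    def set_skip(self, n):      # a filter rejected point n
        self._skips[n] = True

    def skip(self, n):          # was point n filtered out?
        return self._skips[n]

    def capacity(self):
        return self._capacity

    def num_points(self):       # points populated in the current pass
        return self._num_points

    def clear(self, count):
        # Mirrors clear(): record the pass's point count, reset point
        # storage (a no-op in this model), then clear the skip flags.
        self._num_points = count
        self._skips = [False] * self._capacity
```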
=====================================
pdal/Streamable.cpp
=====================================
@@ -178,7 +178,6 @@ void Streamable::execute(StreamPointTable& table)
void Streamable::execute(StreamPointTable& table,
std::list<Streamable *>& stages, SrsMap& srsMap)
{
- std::vector<bool> skips(table.capacity());
std::list<Streamable *> filters;
SpatialReference srs;
@@ -243,11 +242,11 @@ void Streamable::execute(StreamPointTable& table,
s->startLogging();
for (PointId idx = 0; idx < pointLimit; idx++)
{
- if (skips[idx])
+ if (table.skip(idx))
continue;
point.setPointId(idx);
if (!s->processOne(point))
- skips[idx] = true;
+ table.setSkip(idx);
}
const SpatialReference& tempSrs = s->getSpatialReference();
if (!tempSrs.empty())
@@ -258,11 +257,7 @@ void Streamable::execute(StreamPointTable& table,
s->stopLogging();
}
- // Yes, vector<bool> is terrible. Can do something better later.
- for (size_t i = 0; i < skips.size(); ++i)
- skips[i] = false;
- table.setNumPoints(pointLimit);
- table.reset();
+ table.clear(pointLimit);
}
}
=====================================
pdal/util/FileUtils.cpp
=====================================
@@ -101,9 +101,7 @@ namespace FileUtils
std::istream *openFile(std::string const& filename, bool asBinary)
{
- std::string::size_type found_tilde(std::string::npos);
- found_tilde = filename.find('~');
- if (found_tilde != std::string::npos)
+ if (filename[0] == '~')
throw pdal::pdal_error("PDAL does not support shell expansion");
std::ifstream *ifs = nullptr;
@@ -406,10 +404,7 @@ std::vector<std::string> glob(std::string path)
{
std::vector<std::string> filenames;
-
- std::string::size_type found_tilde(std::string::npos);
- found_tilde = path.find('~');
- if (found_tilde != std::string::npos)
+ if (path[0] == '~')
throw pdal::pdal_error("PDAL does not support shell expansion");
#ifdef _WIN32
=====================================
plugins/greyhound/io/bounds.cpp
=====================================
@@ -1,12 +1,41 @@
/******************************************************************************
-* Copyright (c) 2016, Connor Manning (connor at hobu.co)
+* Copyright (c) 2018, Connor Manning (connor at hobu.co)
*
-* Entwine -- Point cloud indexing
+* All rights reserved.
*
-* Entwine is available under the terms of the LGPL2 license. See COPYING
-* for specific license text and more information.
+* Redistribution and use in source and binary forms, with or without
+* modification, are permitted provided that the following
+* conditions are met:
*
-******************************************************************************/
+* * Redistributions of source code must retain the above copyright
+* notice, this list of conditions and the following disclaimer.
+* * Redistributions in binary form must reproduce the above copyright
+* notice, this list of conditions and the following disclaimer in
+* the documentation and/or other materials provided
+* with the distribution.
+* * Neither the name of Hobu, Inc. or Flaxen Geo Consulting nor the
+* names of its contributors may be used to endorse or promote
+* products derived from this software without specific prior
+* written permission.
+*
+* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+* FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+* COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
+* OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
+* AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+* OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
+* OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
+* OF SUCH DAMAGE.
+****************************************************************************/
+
+// This file was originally in Entwine, which is LGPL2, but it has been
+// relicensed for inclusion in PDAL.
+
+
#include "bounds.hpp"
=====================================
plugins/greyhound/io/bounds.hpp
=====================================
@@ -1,12 +1,40 @@
/******************************************************************************
-* Copyright (c) 2016, Connor Manning (connor at hobu.co)
+* Copyright (c) 2018, Connor Manning (connor at hobu.co)
*
-* Entwine -- Point cloud indexing
+* All rights reserved.
*
-* Entwine is available under the terms of the LGPL2 license. See COPYING
-* for specific license text and more information.
+* Redistribution and use in source and binary forms, with or without
+* modification, are permitted provided that the following
+* conditions are met:
*
-******************************************************************************/
+* * Redistributions of source code must retain the above copyright
+* notice, this list of conditions and the following disclaimer.
+* * Redistributions in binary form must reproduce the above copyright
+* notice, this list of conditions and the following disclaimer in
+* the documentation and/or other materials provided
+* with the distribution.
+* * Neither the name of Hobu, Inc. or Flaxen Geo Consulting nor the
+* names of its contributors may be used to endorse or promote
+* products derived from this software without specific prior
+* written permission.
+*
+* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+* FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+* COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
+* OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
+* AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+* OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
+* OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
+* OF SUCH DAMAGE.
+****************************************************************************/
+
+// This file was originally in Entwine, which is LGPL2, but it has been
+// relicensed for inclusion in PDAL.
+
#pragma once
=====================================
test/unit/FileUtilsTest.cpp
=====================================
@@ -77,6 +77,7 @@ TEST(FileUtilsTest, test_file_ops)
EXPECT_TRUE(FileUtils::fileExists(tmp2)==false);
EXPECT_THROW(FileUtils::openFile("~foo1.glob"), pdal::pdal_error);
+ EXPECT_NO_THROW(FileUtils::openFile("foo~1.glob"));
}
TEST(FileUtilsTest, test_readFileIntoString)
@@ -245,7 +246,8 @@ TEST(FileUtilsTest, glob)
EXPECT_EQ(FileUtils::glob(TP("foo1.glob")).size(), 0u);
#ifdef _WIN32
- EXPECT_THROW(FileUtils::glob(TP("~foo1.glob")), pdal::pdal_error);
+ EXPECT_THROW(FileUtils::glob("~foo1.glob"), pdal::pdal_error);
+ EXPECT_NO_THROW(FileUtils::glob(TP("foo1~.glob")));
#endif
FileUtils::deleteFile("temp.glob");
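The rule these new tests exercise (from #2264) can be sketched as a standalone predicate: only a tilde in the *first* character of a filename is treated as a shell-style home-directory reference and rejected; a tilde anywhere else is an ordinary character. This is an illustrative sketch, not the actual `FileUtils` implementation.

```cpp
#include <string>

// Sketch of the "tilde only as first character" rule tested above.
// Hypothetical helper, not part of PDAL's FileUtils API.
bool startsWithTilde(const std::string& filename)
{
    return !filename.empty() && filename[0] == '~';
}
```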
=====================================
vendor/arbiter/arbiter.cpp
=====================================
@@ -2159,7 +2159,7 @@ std::vector<std::string> S3::glob(std::string path, bool verbose) const
{
xml.parse<0>(data.data());
}
- catch (Xml::parse_error)
+ catch (Xml::parse_error&)
{
throw ArbiterError("Could not parse S3 response.");
}
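The one-character change above silences gcc-8's `-Wcatch-value` warning: catching an exception by value copies (and potentially slices) the thrown object, while catching by reference does neither. A minimal illustration of the pattern (hypothetical function, not arbiter code):

```cpp
#include <stdexcept>
#include <string>

// Catching by (const) reference, as in the arbiter fix above, avoids the
// copy/slice that catching by value performs and that gcc-8 warns about.
std::string parseOrReport(bool fail)
{
    try {
        if (fail)
            throw std::runtime_error("bad XML");
        return "ok";
    } catch (const std::runtime_error& e) {  // by reference, not by value
        return std::string("error: ") + e.what();
    }
}
```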
=====================================
vendor/eigen/Eigen/src/SparseCholesky/SimplicialCholesky.h deleted
=====================================
@@ -1,689 +0,0 @@
-// This file is part of Eigen, a lightweight C++ template library
-// for linear algebra.
-//
-// Copyright (C) 2008-2012 Gael Guennebaud <gael.guennebaud at inria.fr>
-//
-// This Source Code Form is subject to the terms of the Mozilla
-// Public License v. 2.0. If a copy of the MPL was not distributed
-// with this file, You can obtain one at http://mozilla.org/MPL/2.0/.
-
-#ifndef EIGEN_SIMPLICIAL_CHOLESKY_H
-#define EIGEN_SIMPLICIAL_CHOLESKY_H
-
-namespace Eigen {
-
-enum SimplicialCholeskyMode {
- SimplicialCholeskyLLT,
- SimplicialCholeskyLDLT
-};
-
-namespace internal {
- template<typename CholMatrixType, typename InputMatrixType>
- struct simplicial_cholesky_grab_input {
- typedef CholMatrixType const * ConstCholMatrixPtr;
- static void run(const InputMatrixType& input, ConstCholMatrixPtr &pmat, CholMatrixType &tmp)
- {
- tmp = input;
- pmat = &tmp;
- }
- };
-
- template<typename MatrixType>
- struct simplicial_cholesky_grab_input<MatrixType,MatrixType> {
- typedef MatrixType const * ConstMatrixPtr;
- static void run(const MatrixType& input, ConstMatrixPtr &pmat, MatrixType &/*tmp*/)
- {
- pmat = &input;
- }
- };
-} // end namespace internal
-
-/** \ingroup SparseCholesky_Module
- * \brief A base class for direct sparse Cholesky factorizations
- *
- * This is a base class for LL^T and LDL^T Cholesky factorizations of sparse matrices that are
- * selfadjoint and positive definite. These factorizations allow for solving A.X = B where
- * X and B can be either dense or sparse.
- *
- * In order to reduce the fill-in, a symmetric permutation P is applied prior to the factorization
- * such that the factorized matrix is P A P^-1.
- *
- * \tparam Derived the type of the derived class, that is the actual factorization type.
- *
- */
-template<typename Derived>
-class SimplicialCholeskyBase : public SparseSolverBase<Derived>
-{
- typedef SparseSolverBase<Derived> Base;
- using Base::m_isInitialized;
-
- public:
- typedef typename internal::traits<Derived>::MatrixType MatrixType;
- typedef typename internal::traits<Derived>::OrderingType OrderingType;
- enum { UpLo = internal::traits<Derived>::UpLo };
- typedef typename MatrixType::Scalar Scalar;
- typedef typename MatrixType::RealScalar RealScalar;
- typedef typename MatrixType::StorageIndex StorageIndex;
- typedef SparseMatrix<Scalar,ColMajor,StorageIndex> CholMatrixType;
- typedef CholMatrixType const * ConstCholMatrixPtr;
- typedef Matrix<Scalar,Dynamic,1> VectorType;
- typedef Matrix<StorageIndex,Dynamic,1> VectorI;
-
- enum {
- ColsAtCompileTime = MatrixType::ColsAtCompileTime,
- MaxColsAtCompileTime = MatrixType::MaxColsAtCompileTime
- };
-
- public:
-
- using Base::derived;
-
- /** Default constructor */
- SimplicialCholeskyBase()
- : m_info(Success), m_shiftOffset(0), m_shiftScale(1)
- {}
-
- explicit SimplicialCholeskyBase(const MatrixType& matrix)
- : m_info(Success), m_shiftOffset(0), m_shiftScale(1)
- {
- derived().compute(matrix);
- }
-
- ~SimplicialCholeskyBase()
- {
- }
-
- Derived& derived() { return *static_cast<Derived*>(this); }
- const Derived& derived() const { return *static_cast<const Derived*>(this); }
-
- inline Index cols() const { return m_matrix.cols(); }
- inline Index rows() const { return m_matrix.rows(); }
-
- /** \brief Reports whether previous computation was successful.
- *
- * \returns \c Success if computation was successful,
- * \c NumericalIssue if the matrix appears to be negative.
- */
- ComputationInfo info() const
- {
- eigen_assert(m_isInitialized && "Decomposition is not initialized.");
- return m_info;
- }
-
- /** \returns the permutation P
- * \sa permutationPinv() */
- const PermutationMatrix<Dynamic,Dynamic,StorageIndex>& permutationP() const
- { return m_P; }
-
- /** \returns the inverse P^-1 of the permutation P
- * \sa permutationP() */
- const PermutationMatrix<Dynamic,Dynamic,StorageIndex>& permutationPinv() const
- { return m_Pinv; }
-
- /** Sets the shift parameters that will be used to adjust the diagonal coefficients during the numerical factorization.
- *
- * During the numerical factorization, the diagonal coefficients are transformed by the following linear model:\n
- * \c d_ii = \a offset + \a scale * \c d_ii
- *
- * The default is the identity transformation with \a offset=0, and \a scale=1.
- *
- * \returns a reference to \c *this.
- */
- Derived& setShift(const RealScalar& offset, const RealScalar& scale = 1)
- {
- m_shiftOffset = offset;
- m_shiftScale = scale;
- return derived();
- }
-
-#ifndef EIGEN_PARSED_BY_DOXYGEN
- /** \internal */
- template<typename Stream>
- void dumpMemory(Stream& s)
- {
- int total = 0;
- s << " L: " << ((total+=(m_matrix.cols()+1) * sizeof(int) + m_matrix.nonZeros()*(sizeof(int)+sizeof(Scalar))) >> 20) << "Mb" << "\n";
- s << " diag: " << ((total+=m_diag.size() * sizeof(Scalar)) >> 20) << "Mb" << "\n";
- s << " tree: " << ((total+=m_parent.size() * sizeof(int)) >> 20) << "Mb" << "\n";
- s << " nonzeros: " << ((total+=m_nonZerosPerCol.size() * sizeof(int)) >> 20) << "Mb" << "\n";
- s << " perm: " << ((total+=m_P.size() * sizeof(int)) >> 20) << "Mb" << "\n";
- s << " perm^-1: " << ((total+=m_Pinv.size() * sizeof(int)) >> 20) << "Mb" << "\n";
- s << " TOTAL: " << (total>> 20) << "Mb" << "\n";
- }
-
- /** \internal */
- template<typename Rhs,typename Dest>
- void _solve_impl(const MatrixBase<Rhs> &b, MatrixBase<Dest> &dest) const
- {
- eigen_assert(m_factorizationIsOk && "The decomposition is not in a valid state for solving, you must first call either compute() or symbolic()/numeric()");
- eigen_assert(m_matrix.rows()==b.rows());
-
- if(m_info!=Success)
- return;
-
- if(m_P.size()>0)
- dest = m_P * b;
- else
- dest = b;
-
- if(m_matrix.nonZeros()>0) // otherwise L==I
- derived().matrixL().solveInPlace(dest);
-
- if(m_diag.size()>0)
- dest = m_diag.asDiagonal().inverse() * dest;
-
- if (m_matrix.nonZeros()>0) // otherwise U==I
- derived().matrixU().solveInPlace(dest);
-
- if(m_P.size()>0)
- dest = m_Pinv * dest;
- }
-
- template<typename Rhs,typename Dest>
- void _solve_impl(const SparseMatrixBase<Rhs> &b, SparseMatrixBase<Dest> &dest) const
- {
- internal::solve_sparse_through_dense_panels(derived(), b, dest);
- }
-
-#endif // EIGEN_PARSED_BY_DOXYGEN
-
- protected:
-
- /** Computes the sparse Cholesky decomposition of \a matrix */
- template<bool DoLDLT>
- void compute(const MatrixType& matrix)
- {
- eigen_assert(matrix.rows()==matrix.cols());
- Index size = matrix.cols();
- CholMatrixType tmp(size,size);
- ConstCholMatrixPtr pmat;
- ordering(matrix, pmat, tmp);
- analyzePattern_preordered(*pmat, DoLDLT);
- factorize_preordered<DoLDLT>(*pmat);
- }
-
- template<bool DoLDLT>
- void factorize(const MatrixType& a)
- {
- eigen_assert(a.rows()==a.cols());
- Index size = a.cols();
- CholMatrixType tmp(size,size);
- ConstCholMatrixPtr pmat;
-
- if(m_P.size()==0 && (UpLo&Upper)==Upper)
- {
- // If there is no ordering, try to directly use the input matrix without any copy
- internal::simplicial_cholesky_grab_input<CholMatrixType,MatrixType>::run(a, pmat, tmp);
- }
- else
- {
- tmp.template selfadjointView<Upper>() = a.template selfadjointView<UpLo>().twistedBy(m_P);
- pmat = &tmp;
- }
-
- factorize_preordered<DoLDLT>(*pmat);
- }
-
- template<bool DoLDLT>
- void factorize_preordered(const CholMatrixType& a);
-
- void analyzePattern(const MatrixType& a, bool doLDLT)
- {
- eigen_assert(a.rows()==a.cols());
- Index size = a.cols();
- CholMatrixType tmp(size,size);
- ConstCholMatrixPtr pmat;
- ordering(a, pmat, tmp);
- analyzePattern_preordered(*pmat,doLDLT);
- }
- void analyzePattern_preordered(const CholMatrixType& a, bool doLDLT);
-
- void ordering(const MatrixType& a, ConstCholMatrixPtr &pmat, CholMatrixType& ap);
-
- /** keeps off-diagonal entries; drops diagonal entries */
- struct keep_diag {
- inline bool operator() (const Index& row, const Index& col, const Scalar&) const
- {
- return row!=col;
- }
- };
-
- mutable ComputationInfo m_info;
- bool m_factorizationIsOk;
- bool m_analysisIsOk;
-
- CholMatrixType m_matrix;
- VectorType m_diag; // the diagonal coefficients (LDLT mode)
- VectorI m_parent; // elimination tree
- VectorI m_nonZerosPerCol;
- PermutationMatrix<Dynamic,Dynamic,StorageIndex> m_P; // the permutation
- PermutationMatrix<Dynamic,Dynamic,StorageIndex> m_Pinv; // the inverse permutation
-
- RealScalar m_shiftOffset;
- RealScalar m_shiftScale;
-};
-
-template<typename _MatrixType, int _UpLo = Lower, typename _Ordering = AMDOrdering<typename _MatrixType::StorageIndex> > class SimplicialLLT;
-template<typename _MatrixType, int _UpLo = Lower, typename _Ordering = AMDOrdering<typename _MatrixType::StorageIndex> > class SimplicialLDLT;
-template<typename _MatrixType, int _UpLo = Lower, typename _Ordering = AMDOrdering<typename _MatrixType::StorageIndex> > class SimplicialCholesky;
-
-namespace internal {
-
-template<typename _MatrixType, int _UpLo, typename _Ordering> struct traits<SimplicialLLT<_MatrixType,_UpLo,_Ordering> >
-{
- typedef _MatrixType MatrixType;
- typedef _Ordering OrderingType;
- enum { UpLo = _UpLo };
- typedef typename MatrixType::Scalar Scalar;
- typedef typename MatrixType::StorageIndex StorageIndex;
- typedef SparseMatrix<Scalar, ColMajor, StorageIndex> CholMatrixType;
- typedef TriangularView<const CholMatrixType, Eigen::Lower> MatrixL;
- typedef TriangularView<const typename CholMatrixType::AdjointReturnType, Eigen::Upper> MatrixU;
- static inline MatrixL getL(const MatrixType& m) { return MatrixL(m); }
- static inline MatrixU getU(const MatrixType& m) { return MatrixU(m.adjoint()); }
-};
-
-template<typename _MatrixType,int _UpLo, typename _Ordering> struct traits<SimplicialLDLT<_MatrixType,_UpLo,_Ordering> >
-{
- typedef _MatrixType MatrixType;
- typedef _Ordering OrderingType;
- enum { UpLo = _UpLo };
- typedef typename MatrixType::Scalar Scalar;
- typedef typename MatrixType::StorageIndex StorageIndex;
- typedef SparseMatrix<Scalar, ColMajor, StorageIndex> CholMatrixType;
- typedef TriangularView<const CholMatrixType, Eigen::UnitLower> MatrixL;
- typedef TriangularView<const typename CholMatrixType::AdjointReturnType, Eigen::UnitUpper> MatrixU;
- static inline MatrixL getL(const MatrixType& m) { return MatrixL(m); }
- static inline MatrixU getU(const MatrixType& m) { return MatrixU(m.adjoint()); }
-};
-
-template<typename _MatrixType, int _UpLo, typename _Ordering> struct traits<SimplicialCholesky<_MatrixType,_UpLo,_Ordering> >
-{
- typedef _MatrixType MatrixType;
- typedef _Ordering OrderingType;
- enum { UpLo = _UpLo };
-};
-
-}
-
-/** \ingroup SparseCholesky_Module
- * \class SimplicialLLT
- * \brief A direct sparse LLT Cholesky factorization
- *
- * This class provides an LL^T Cholesky factorization of sparse matrices that are
- * selfadjoint and positive definite. The factorization allows for solving A.X = B where
- * X and B can be either dense or sparse.
- *
- * In order to reduce the fill-in, a symmetric permutation P is applied prior to the factorization
- * such that the factorized matrix is P A P^-1.
- *
- * \tparam _MatrixType the type of the sparse matrix A, it must be a SparseMatrix<>
- * \tparam _UpLo the triangular part that will be used for the computations. It can be Lower
- * or Upper. Default is Lower.
- * \tparam _Ordering The ordering method to use, either AMDOrdering<> or NaturalOrdering<>. Default is AMDOrdering<>
- *
- * \implsparsesolverconcept
- *
- * \sa class SimplicialLDLT, class AMDOrdering, class NaturalOrdering
- */
-template<typename _MatrixType, int _UpLo, typename _Ordering>
- class SimplicialLLT : public SimplicialCholeskyBase<SimplicialLLT<_MatrixType,_UpLo,_Ordering> >
-{
-public:
- typedef _MatrixType MatrixType;
- enum { UpLo = _UpLo };
- typedef SimplicialCholeskyBase<SimplicialLLT> Base;
- typedef typename MatrixType::Scalar Scalar;
- typedef typename MatrixType::RealScalar RealScalar;
- typedef typename MatrixType::StorageIndex StorageIndex;
- typedef SparseMatrix<Scalar,ColMajor,Index> CholMatrixType;
- typedef Matrix<Scalar,Dynamic,1> VectorType;
- typedef internal::traits<SimplicialLLT> Traits;
- typedef typename Traits::MatrixL MatrixL;
- typedef typename Traits::MatrixU MatrixU;
-public:
- /** Default constructor */
- SimplicialLLT() : Base() {}
- /** Constructs and performs the LLT factorization of \a matrix */
- explicit SimplicialLLT(const MatrixType& matrix)
- : Base(matrix) {}
-
- /** \returns an expression of the factor L */
- inline const MatrixL matrixL() const {
- eigen_assert(Base::m_factorizationIsOk && "Simplicial LLT not factorized");
- return Traits::getL(Base::m_matrix);
- }
-
- /** \returns an expression of the factor U (= L^*) */
- inline const MatrixU matrixU() const {
- eigen_assert(Base::m_factorizationIsOk && "Simplicial LLT not factorized");
- return Traits::getU(Base::m_matrix);
- }
-
- /** Computes the sparse Cholesky decomposition of \a matrix */
- SimplicialLLT& compute(const MatrixType& matrix)
- {
- Base::template compute<false>(matrix);
- return *this;
- }
-
- /** Performs a symbolic decomposition on the sparsity of \a matrix.
- *
- * This function is particularly useful when solving for several problems having the same structure.
- *
- * \sa factorize()
- */
- void analyzePattern(const MatrixType& a)
- {
- Base::analyzePattern(a, false);
- }
-
- /** Performs a numeric decomposition of \a matrix
- *
- * The given matrix must have the same sparsity as the matrix on which the symbolic decomposition has been performed.
- *
- * \sa analyzePattern()
- */
- void factorize(const MatrixType& a)
- {
- Base::template factorize<false>(a);
- }
-
- /** \returns the determinant of the underlying matrix from the current factorization */
- Scalar determinant() const
- {
- Scalar detL = Base::m_matrix.diagonal().prod();
- return numext::abs2(detL);
- }
-};
-
-/** \ingroup SparseCholesky_Module
- * \class SimplicialLDLT
- * \brief A direct sparse LDLT Cholesky factorization without square root.
- *
- * This class provides an LDL^T Cholesky factorization without square root of sparse matrices that are
- * selfadjoint and positive definite. The factorization allows for solving A.X = B where
- * X and B can be either dense or sparse.
- *
- * In order to reduce the fill-in, a symmetric permutation P is applied prior to the factorization
- * such that the factorized matrix is P A P^-1.
- *
- * \tparam _MatrixType the type of the sparse matrix A, it must be a SparseMatrix<>
- * \tparam _UpLo the triangular part that will be used for the computations. It can be Lower
- * or Upper. Default is Lower.
- * \tparam _Ordering The ordering method to use, either AMDOrdering<> or NaturalOrdering<>. Default is AMDOrdering<>
- *
- * \implsparsesolverconcept
- *
- * \sa class SimplicialLLT, class AMDOrdering, class NaturalOrdering
- */
-template<typename _MatrixType, int _UpLo, typename _Ordering>
- class SimplicialLDLT : public SimplicialCholeskyBase<SimplicialLDLT<_MatrixType,_UpLo,_Ordering> >
-{
-public:
- typedef _MatrixType MatrixType;
- enum { UpLo = _UpLo };
- typedef SimplicialCholeskyBase<SimplicialLDLT> Base;
- typedef typename MatrixType::Scalar Scalar;
- typedef typename MatrixType::RealScalar RealScalar;
- typedef typename MatrixType::StorageIndex StorageIndex;
- typedef SparseMatrix<Scalar,ColMajor,StorageIndex> CholMatrixType;
- typedef Matrix<Scalar,Dynamic,1> VectorType;
- typedef internal::traits<SimplicialLDLT> Traits;
- typedef typename Traits::MatrixL MatrixL;
- typedef typename Traits::MatrixU MatrixU;
-public:
- /** Default constructor */
- SimplicialLDLT() : Base() {}
-
- /** Constructs and performs the LLT factorization of \a matrix */
- explicit SimplicialLDLT(const MatrixType& matrix)
- : Base(matrix) {}
-
- /** \returns a vector expression of the diagonal D */
- inline const VectorType vectorD() const {
- eigen_assert(Base::m_factorizationIsOk && "Simplicial LDLT not factorized");
- return Base::m_diag;
- }
- /** \returns an expression of the factor L */
- inline const MatrixL matrixL() const {
- eigen_assert(Base::m_factorizationIsOk && "Simplicial LDLT not factorized");
- return Traits::getL(Base::m_matrix);
- }
-
- /** \returns an expression of the factor U (= L^*) */
- inline const MatrixU matrixU() const {
- eigen_assert(Base::m_factorizationIsOk && "Simplicial LDLT not factorized");
- return Traits::getU(Base::m_matrix);
- }
-
- /** Computes the sparse Cholesky decomposition of \a matrix */
- SimplicialLDLT& compute(const MatrixType& matrix)
- {
- Base::template compute<true>(matrix);
- return *this;
- }
-
- /** Performs a symbolic decomposition on the sparsity of \a matrix.
- *
- * This function is particularly useful when solving for several problems having the same structure.
- *
- * \sa factorize()
- */
- void analyzePattern(const MatrixType& a)
- {
- Base::analyzePattern(a, true);
- }
-
- /** Performs a numeric decomposition of \a matrix
- *
- * The given matrix must have the same sparsity as the matrix on which the symbolic decomposition has been performed.
- *
- * \sa analyzePattern()
- */
- void factorize(const MatrixType& a)
- {
- Base::template factorize<true>(a);
- }
-
- /** \returns the determinant of the underlying matrix from the current factorization */
- Scalar determinant() const
- {
- return Base::m_diag.prod();
- }
-};
-
-/** \deprecated use SimplicialLDLT or class SimplicialLLT
- * \ingroup SparseCholesky_Module
- * \class SimplicialCholesky
- *
- * \sa class SimplicialLDLT, class SimplicialLLT
- */
-template<typename _MatrixType, int _UpLo, typename _Ordering>
- class SimplicialCholesky : public SimplicialCholeskyBase<SimplicialCholesky<_MatrixType,_UpLo,_Ordering> >
-{
-public:
- typedef _MatrixType MatrixType;
- enum { UpLo = _UpLo };
- typedef SimplicialCholeskyBase<SimplicialCholesky> Base;
- typedef typename MatrixType::Scalar Scalar;
- typedef typename MatrixType::RealScalar RealScalar;
- typedef typename MatrixType::StorageIndex StorageIndex;
- typedef SparseMatrix<Scalar,ColMajor,StorageIndex> CholMatrixType;
- typedef Matrix<Scalar,Dynamic,1> VectorType;
- typedef internal::traits<SimplicialCholesky> Traits;
- typedef internal::traits<SimplicialLDLT<MatrixType,UpLo> > LDLTTraits;
- typedef internal::traits<SimplicialLLT<MatrixType,UpLo> > LLTTraits;
- public:
- SimplicialCholesky() : Base(), m_LDLT(true) {}
-
- explicit SimplicialCholesky(const MatrixType& matrix)
- : Base(), m_LDLT(true)
- {
- compute(matrix);
- }
-
- SimplicialCholesky& setMode(SimplicialCholeskyMode mode)
- {
- switch(mode)
- {
- case SimplicialCholeskyLLT:
- m_LDLT = false;
- break;
- case SimplicialCholeskyLDLT:
- m_LDLT = true;
- break;
- default:
- break;
- }
-
- return *this;
- }
-
- inline const VectorType vectorD() const {
- eigen_assert(Base::m_factorizationIsOk && "Simplicial Cholesky not factorized");
- return Base::m_diag;
- }
- inline const CholMatrixType rawMatrix() const {
- eigen_assert(Base::m_factorizationIsOk && "Simplicial Cholesky not factorized");
- return Base::m_matrix;
- }
-
- /** Computes the sparse Cholesky decomposition of \a matrix */
- SimplicialCholesky& compute(const MatrixType& matrix)
- {
- if(m_LDLT)
- Base::template compute<true>(matrix);
- else
- Base::template compute<false>(matrix);
- return *this;
- }
-
- /** Performs a symbolic decomposition on the sparsity of \a matrix.
- *
- * This function is particularly useful when solving for several problems having the same structure.
- *
- * \sa factorize()
- */
- void analyzePattern(const MatrixType& a)
- {
- Base::analyzePattern(a, m_LDLT);
- }
-
- /** Performs a numeric decomposition of \a matrix
- *
- * The given matrix must have the same sparsity as the matrix on which the symbolic decomposition has been performed.
- *
- * \sa analyzePattern()
- */
- void factorize(const MatrixType& a)
- {
- if(m_LDLT)
- Base::template factorize<true>(a);
- else
- Base::template factorize<false>(a);
- }
-
- /** \internal */
- template<typename Rhs,typename Dest>
- void _solve_impl(const MatrixBase<Rhs> &b, MatrixBase<Dest> &dest) const
- {
- eigen_assert(Base::m_factorizationIsOk && "The decomposition is not in a valid state for solving, you must first call either compute() or symbolic()/numeric()");
- eigen_assert(Base::m_matrix.rows()==b.rows());
-
- if(Base::m_info!=Success)
- return;
-
- if(Base::m_P.size()>0)
- dest = Base::m_P * b;
- else
- dest = b;
-
- if(Base::m_matrix.nonZeros()>0) // otherwise L==I
- {
- if(m_LDLT)
- LDLTTraits::getL(Base::m_matrix).solveInPlace(dest);
- else
- LLTTraits::getL(Base::m_matrix).solveInPlace(dest);
- }
-
- if(Base::m_diag.size()>0)
- dest = Base::m_diag.asDiagonal().inverse() * dest;
-
- if (Base::m_matrix.nonZeros()>0) // otherwise U==I
- {
- if(m_LDLT)
- LDLTTraits::getU(Base::m_matrix).solveInPlace(dest);
- else
- LLTTraits::getU(Base::m_matrix).solveInPlace(dest);
- }
-
- if(Base::m_P.size()>0)
- dest = Base::m_Pinv * dest;
- }
-
- /** \internal */
- template<typename Rhs,typename Dest>
- void _solve_impl(const SparseMatrixBase<Rhs> &b, SparseMatrixBase<Dest> &dest) const
- {
- internal::solve_sparse_through_dense_panels(*this, b, dest);
- }
-
- Scalar determinant() const
- {
- if(m_LDLT)
- {
- return Base::m_diag.prod();
- }
- else
- {
- Scalar detL = Diagonal<const CholMatrixType>(Base::m_matrix).prod();
- return numext::abs2(detL);
- }
- }
-
- protected:
- bool m_LDLT;
-};
-
-template<typename Derived>
-void SimplicialCholeskyBase<Derived>::ordering(const MatrixType& a, ConstCholMatrixPtr &pmat, CholMatrixType& ap)
-{
- eigen_assert(a.rows()==a.cols());
- const Index size = a.rows();
- pmat = &ap;
- // Note that ordering methods compute the inverse permutation
- if(!internal::is_same<OrderingType,NaturalOrdering<Index> >::value)
- {
- {
- CholMatrixType C;
- C = a.template selfadjointView<UpLo>();
-
- OrderingType ordering;
- ordering(C,m_Pinv);
- }
-
- if(m_Pinv.size()>0) m_P = m_Pinv.inverse();
- else m_P.resize(0);
-
- ap.resize(size,size);
- ap.template selfadjointView<Upper>() = a.template selfadjointView<UpLo>().twistedBy(m_P);
- }
- else
- {
- m_Pinv.resize(0);
- m_P.resize(0);
- if(int(UpLo)==int(Lower) || MatrixType::IsRowMajor)
- {
- // we have to transpose the lower part to the upper one
- ap.resize(size,size);
- ap.template selfadjointView<Upper>() = a.template selfadjointView<UpLo>();
- }
- else
- internal::simplicial_cholesky_grab_input<CholMatrixType,MatrixType>::run(a, pmat, ap);
- }
-}
-
-} // end namespace Eigen
-
-#endif // EIGEN_SIMPLICIAL_CHOLESKY_H
=====================================
vendor/eigen/Eigen/src/SparseCholesky/SimplicialCholesky_impl.h deleted
=====================================
@@ -1,199 +0,0 @@
-// This file is part of Eigen, a lightweight C++ template library
-// for linear algebra.
-//
-// Copyright (C) 2008-2012 Gael Guennebaud <gael.guennebaud at inria.fr>
-
-/*
-
-NOTE: these functions have been adapted from the LDL library:
-
-LDL Copyright (c) 2005 by Timothy A. Davis. All Rights Reserved.
-
-LDL License:
-
- Your use or distribution of LDL or any modified version of
- LDL implies that you agree to this License.
-
- This library is free software; you can redistribute it and/or
- modify it under the terms of the GNU Lesser General Public
- License as published by the Free Software Foundation; either
- version 2.1 of the License, or (at your option) any later version.
-
- This library is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- Lesser General Public License for more details.
-
- You should have received a copy of the GNU Lesser General Public
- License along with this library; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301
- USA
-
- Permission is hereby granted to use or copy this program under the
- terms of the GNU LGPL, provided that the Copyright, this License,
- and the Availability of the original version is retained on all copies.
- User documentation of any code that uses this code or any modified
- version of this code must cite the Copyright, this License, the
- Availability note, and "Used by permission." Permission to modify
- the code and to distribute modified code is granted, provided the
- Copyright, this License, and the Availability note are retained,
- and a notice that the code was modified is included.
- */
-
-#include "../Core/util/NonMPL2.h"
-
-#ifndef EIGEN_SIMPLICIAL_CHOLESKY_IMPL_H
-#define EIGEN_SIMPLICIAL_CHOLESKY_IMPL_H
-
-namespace Eigen {
-
-template<typename Derived>
-void SimplicialCholeskyBase<Derived>::analyzePattern_preordered(const CholMatrixType& ap, bool doLDLT)
-{
- const StorageIndex size = StorageIndex(ap.rows());
- m_matrix.resize(size, size);
- m_parent.resize(size);
- m_nonZerosPerCol.resize(size);
-
- ei_declare_aligned_stack_constructed_variable(StorageIndex, tags, size, 0);
-
- for(StorageIndex k = 0; k < size; ++k)
- {
- /* L(k,:) pattern: all nodes reachable in etree from nz in A(0:k-1,k) */
- m_parent[k] = -1; /* parent of k is not yet known */
- tags[k] = k; /* mark node k as visited */
- m_nonZerosPerCol[k] = 0; /* count of nonzeros in column k of L */
- for(typename CholMatrixType::InnerIterator it(ap,k); it; ++it)
- {
- StorageIndex i = it.index();
- if(i < k)
- {
- /* follow path from i to root of etree, stop at flagged node */
- for(; tags[i] != k; i = m_parent[i])
- {
- /* find parent of i if not yet determined */
- if (m_parent[i] == -1)
- m_parent[i] = k;
- m_nonZerosPerCol[i]++; /* L (k,i) is nonzero */
- tags[i] = k; /* mark i as visited */
- }
- }
- }
- }
-
- /* construct Lp index array from m_nonZerosPerCol column counts */
- StorageIndex* Lp = m_matrix.outerIndexPtr();
- Lp[0] = 0;
- for(StorageIndex k = 0; k < size; ++k)
- Lp[k+1] = Lp[k] + m_nonZerosPerCol[k] + (doLDLT ? 0 : 1);
-
- m_matrix.resizeNonZeros(Lp[size]);
-
- m_isInitialized = true;
- m_info = Success;
- m_analysisIsOk = true;
- m_factorizationIsOk = false;
-}
-
-
-template<typename Derived>
-template<bool DoLDLT>
-void SimplicialCholeskyBase<Derived>::factorize_preordered(const CholMatrixType& ap)
-{
- using std::sqrt;
-
- eigen_assert(m_analysisIsOk && "You must first call analyzePattern()");
- eigen_assert(ap.rows()==ap.cols());
- eigen_assert(m_parent.size()==ap.rows());
- eigen_assert(m_nonZerosPerCol.size()==ap.rows());
-
- const StorageIndex size = StorageIndex(ap.rows());
- const StorageIndex* Lp = m_matrix.outerIndexPtr();
- StorageIndex* Li = m_matrix.innerIndexPtr();
- Scalar* Lx = m_matrix.valuePtr();
-
- ei_declare_aligned_stack_constructed_variable(Scalar, y, size, 0);
- ei_declare_aligned_stack_constructed_variable(StorageIndex, pattern, size, 0);
- ei_declare_aligned_stack_constructed_variable(StorageIndex, tags, size, 0);
-
- bool ok = true;
- m_diag.resize(DoLDLT ? size : 0);
-
- for(StorageIndex k = 0; k < size; ++k)
- {
- // compute nonzero pattern of kth row of L, in topological order
- y[k] = 0.0; // Y(0:k) is now all zero
- StorageIndex top = size; // stack for pattern is empty
- tags[k] = k; // mark node k as visited
- m_nonZerosPerCol[k] = 0; // count of nonzeros in column k of L
- for(typename CholMatrixType::InnerIterator it(ap,k); it; ++it)
- {
- StorageIndex i = it.index();
- if(i <= k)
- {
- y[i] += numext::conj(it.value()); /* scatter A(i,k) into Y (sum duplicates) */
- Index len;
- for(len = 0; tags[i] != k; i = m_parent[i])
- {
- pattern[len++] = i; /* L(k,i) is nonzero */
- tags[i] = k; /* mark i as visited */
- }
- while(len > 0)
- pattern[--top] = pattern[--len];
- }
- }
-
- /* compute numerical values kth row of L (a sparse triangular solve) */
-
- RealScalar d = numext::real(y[k]) * m_shiftScale + m_shiftOffset; // get D(k,k), apply the shift function, and clear Y(k)
- y[k] = 0.0;
- for(; top < size; ++top)
- {
- Index i = pattern[top]; /* pattern[top:n-1] is pattern of L(:,k) */
- Scalar yi = y[i]; /* get and clear Y(i) */
- y[i] = 0.0;
-
- /* the nonzero entry L(k,i) */
- Scalar l_ki;
- if(DoLDLT)
- l_ki = yi / m_diag[i];
- else
- yi = l_ki = yi / Lx[Lp[i]];
-
- Index p2 = Lp[i] + m_nonZerosPerCol[i];
- Index p;
- for(p = Lp[i] + (DoLDLT ? 0 : 1); p < p2; ++p)
- y[Li[p]] -= numext::conj(Lx[p]) * yi;
- d -= numext::real(l_ki * numext::conj(yi));
- Li[p] = k; /* store L(k,i) in column form of L */
- Lx[p] = l_ki;
- ++m_nonZerosPerCol[i]; /* increment count of nonzeros in col i */
- }
- if(DoLDLT)
- {
- m_diag[k] = d;
- if(d == RealScalar(0))
- {
- ok = false; /* failure, D(k,k) is zero */
- break;
- }
- }
- else
- {
- Index p = Lp[k] + m_nonZerosPerCol[k]++;
- Li[p] = k ; /* store L(k,k) = sqrt (d) in column k */
- if(d <= RealScalar(0)) {
- ok = false; /* failure, matrix is not positive definite */
- break;
- }
- Lx[p] = sqrt(d) ;
- }
- }
-
- m_info = ok ? Success : NumericalIssue;
- m_factorizationIsOk = true;
-}
-
-} // end namespace Eigen
-
-#endif // EIGEN_SIMPLICIAL_CHOLESKY_IMPL_H
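The elimination-tree walk at the heart of the deleted analyzePattern_preordered can be sketched in isolation: for each column k, every nonzero row i < k of A(:,k) is walked up the partially built tree, attaching any still-unparented node to k. This is an illustrative standalone version, not the Eigen/LDL code itself.

```cpp
#include <vector>

// Compute the elimination tree of a symmetric sparse matrix given, for
// each column k, the row indices of its nonzeros strictly above the
// diagonal. parent[k] == -1 marks a root. Hypothetical helper.
std::vector<int> eliminationTree(
    const std::vector<std::vector<int>>& colRows, int n)
{
    std::vector<int> parent(n, -1), tag(n, -1);
    for (int k = 0; k < n; ++k) {
        tag[k] = k;                           // mark node k visited
        for (int i : colRows[k]) {
            // follow the path from i toward the root, stopping at any
            // node already flagged for column k
            for (; i < k && tag[i] != k; i = parent[i]) {
                if (parent[i] == -1)
                    parent[i] = k;            // parent of i now known
                tag[i] = k;
            }
        }
    }
    return parent;
}
```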
View it on GitLab: https://salsa.debian.org/debian-gis-team/pdal/commit/8db461d48327982944dd7354ab6342c5ce1ce2ee
--
You're receiving this email because of your account on salsa.debian.org.
More information about the Pkg-grass-devel
mailing list