[Git][debian-gis-team/python-rtree][master] 4 commits: New upstream version 0.9.2

Bas Couwenberg gitlab at salsa.debian.org
Tue Dec 10 05:04:27 GMT 2019



Bas Couwenberg pushed to branch master at Debian GIS Project / python-rtree


Commits:
8508c7c0 by Bas Couwenberg at 2019-12-10T04:48:50Z
New upstream version 0.9.2
- - - - -
792f2333 by Bas Couwenberg at 2019-12-10T04:48:51Z
Update upstream source from tag 'upstream/0.9.2'

Update to upstream version '0.9.2'
with Debian dir 9f9fd8f0a134294228c8694a659e1d27cd83db51
- - - - -
f26a8b65 by Bas Couwenberg at 2019-12-10T04:49:10Z
New upstream release.

- - - - -
9bab7b30 by Bas Couwenberg at 2019-12-10T04:50:44Z
Set distribution to unstable.

- - - - -


25 changed files:

- + .gitignore
- + .travis.yml
- − PKG-INFO
- + azure-pipelines.yml
- + ci/azp/linux.yml
- + ci/azp/osx.yml
- + ci/azp/win.yml
- debian/changelog
- docs/source/changes.txt
- + environment.yml
- + readthedocs.yml
- rtree/__init__.py
- rtree/core.py
- rtree/index.py
- + scripts/visualize.py
- − setup.cfg
- − tests/stream-check.py
- − tests/test_bounds.txt
- − tests/test_container.py
- − tests/test_customStorage.txt
- tests/test_index.py
- − tests/test_index_doctests.txt
- − tests/test_misc.txt
- − tests/test_pickle.py
- − tests/test_properties.txt


Changes:

=====================================
.gitignore
=====================================
@@ -0,0 +1,7 @@
+Rtree.egg-info/
+*.pyc
+docs/build
+build/
+dist/
+*.idx
+*.dat


=====================================
.travis.yml
=====================================
@@ -0,0 +1,29 @@
+dist: trusty
+
+cache:
+  - pip
+  - apt
+
+language: python
+
+matrix:
+  include:
+    - python: "2.7"
+    - python: "3.3"
+    - python: "3.4"
+    - python: "3.5"
+    - python: "3.6"
+    - python: "3.7"
+      sudo: required
+      dist: xenial
+
+addons:
+  apt:
+    packages:
+      - libspatialindex-dev
+
+install:
+  - pip install -e .
+
+script:
+  - python -m pytest --doctest-modules rtree tests/test_*


=====================================
PKG-INFO deleted
=====================================
@@ -1,66 +0,0 @@
-Metadata-Version: 2.1
-Name: Rtree
-Version: 0.9.1
-Summary: R-Tree spatial index for Python GIS
-Home-page: https://github.com/Toblerity/rtree
-Author: Sean Gillies
-Author-email: sean.gillies at gmail.com
-Maintainer: Howard Butler
-Maintainer-email: howard at hobu.co
-License: MIT
-Description: Rtree: Spatial indexing for Python
-        ------------------------------------------------------------------------------
-        
-        `Rtree`_ is a `ctypes`_ Python wrapper of `libspatialindex`_ that provides a 
-        number of advanced spatial indexing features for the spatially curious Python 
-        user.  These features include:
-        
-        * Nearest neighbor search
-        * Intersection search
-        * Multi-dimensional indexes
-        * Clustered indexes (store Python pickles directly with index entries)
-        * Bulk loading
-        * Deletion
-        * Disk serialization
-        * Custom storage implementation (to implement spatial indexing in ZODB, for example)
-        
-        Documentation and Website
-        ..............................................................................
-        
-        https://rtree.readthedocs.io/en/latest/
-        
-        Requirements
-        ..............................................................................
-        
-        * `libspatialindex`_ 1.8.5+.
-        
-        Download
-        ..............................................................................
-        
-        * PyPI http://pypi.python.org/pypi/Rtree/
-        * Windows binaries http://www.lfd.uci.edu/~gohlke/pythonlibs/#rtree
-        
-        Development
-        ..............................................................................
-        
-        * https://github.com/Toblerity/Rtree
-        
-        .. _`R-trees`: http://en.wikipedia.org/wiki/R-tree
-        .. _`ctypes`: http://docs.python.org/library/ctypes.html
-        .. _`libspatialindex`: http://libspatialindex.github.com
-        .. _`Rtree`: http://toblerity.github.com/rtree/
-        
-Keywords: gis spatial index r-tree
-Platform: UNKNOWN
-Classifier: Development Status :: 5 - Production/Stable
-Classifier: Intended Audience :: Developers
-Classifier: Intended Audience :: Science/Research
-Classifier: License :: OSI Approved :: MIT License
-Classifier: Operating System :: OS Independent
-Classifier: Programming Language :: C
-Classifier: Programming Language :: C++
-Classifier: Programming Language :: Python
-Classifier: Topic :: Scientific/Engineering :: GIS
-Classifier: Topic :: Database
-Provides-Extra: test
-Provides-Extra: all


=====================================
azure-pipelines.yml
=====================================
@@ -0,0 +1,10 @@
+pr:
+  branches:
+    include:
+    - master
+
+jobs:
+  - template: ./ci/azp/linux.yml
+  - template: ./ci/azp/win.yml
+  - template: ./ci/azp/osx.yml
+


=====================================
ci/azp/linux.yml
=====================================
@@ -0,0 +1,37 @@
+jobs:
+- job:
+  displayName: ubuntu-16.04
+  pool:
+    vmImage: 'ubuntu-16.04'
+  strategy:
+    matrix:
+      Python36_185:
+        python.version: '3.6'
+        sidx.version: '1.8.5'
+      Python36_193:
+        python.version: '3.6'
+        sidx.version: '1.9.3'
+      Python37:
+        python.version: '3.7'
+        sidx.version: '1.9.3'
+      Python38:
+        python.version: '3.8'
+        sidx.version: '1.9.3'
+        
+  steps:
+  - bash: echo "##vso[task.prependpath]$CONDA/bin"
+    displayName: Add conda to PATH
+
+  - bash: conda create --yes --quiet --name rtree
+    displayName: Create Anaconda environment
+
+  - bash: |
+      source activate rtree
+      conda install --yes --quiet --name rtree python=$PYTHON_VERSION libspatialindex=$SIDX_VERSION
+    displayName: Install Anaconda packages
+
+  - bash: |
+      source activate rtree
+      pip install pytest numpy
+      python -m pytest --doctest-modules rtree tests/test_*
+    displayName: pytest


=====================================
ci/azp/osx.yml
=====================================
@@ -0,0 +1,50 @@
+# -*- mode: yaml -*-
+
+
+jobs:
+- job:
+  displayName: macOS-10.13
+  pool:
+    vmImage: 'macOS-10.13'
+  strategy:
+    matrix:
+      Python36_185:
+        python.version: '3.6'
+        sidx.version: '1.8.5'
+      Python36_193:
+        python.version: '3.6'
+        sidx.version: '1.9.3'
+      Python37:
+        python.version: '3.7'
+        sidx.version: '1.9.3'
+      Python38:
+        python.version: '3.8'
+        sidx.version: '1.9.3'
+        
+  steps:
+  - script: |
+      echo "Removing homebrew from Azure to avoid conflicts."
+      curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/uninstall > ~/uninstall_homebrew
+      chmod +x ~/uninstall_homebrew
+      ~/uninstall_homebrew -fq
+      rm ~/uninstall_homebrew
+    displayName: Remove homebrew
+  - bash: |
+      echo "##vso[task.prependpath]$CONDA/bin"
+      sudo chown -R $USER $CONDA
+    displayName: Add conda to PATH
+
+
+  - bash: conda create --yes --quiet --name rtree
+    displayName: Create Anaconda environment
+
+  - bash: |
+      source activate rtree
+      conda install --yes --quiet --name rtree python=$PYTHON_VERSION libspatialindex=$SIDX_VERSION
+    displayName: Install Anaconda packages
+
+  - bash: |
+      source activate rtree
+      pip install pytest numpy
+      python -m pytest --doctest-modules rtree tests/test_*
+    displayName: pytest


=====================================
ci/azp/win.yml
=====================================
@@ -0,0 +1,39 @@
+# -*- mode: yaml -*-
+
+jobs:
+- job:
+  displayName: vs2017-win2016
+  pool:
+    vmImage: 'vs2017-win2016'
+  strategy:
+    matrix:
+      Python36_185:
+        python.version: '3.6'
+        sidx.version: '1.8.5'
+      Python36_193:
+        python.version: '3.6'
+        sidx.version: '1.9.3'
+      Python37:
+        python.version: '3.7'
+        sidx.version: '1.9.3'
+      Python38:
+        python.version: '3.8'
+        sidx.version: '1.9.3'
+
+  steps:
+  - powershell: Write-Host "##vso[task.prependpath]$env:CONDA\Scripts"
+    displayName: Add conda to PATH
+
+  - script: conda create --yes --quiet --name rtree
+    displayName: Create Anaconda environment
+
+  - script: |
+      call activate rtree
+      conda install --yes --quiet --name rtree python=%PYTHON_VERSION% libspatialindex=%SIDX_VERSION%
+    displayName: Install Anaconda packages
+
+  - script: |
+      call activate rtree
+      pip install pytest numpy
+      python -m pytest --doctest-modules rtree tests
+    displayName: pytest


=====================================
debian/changelog
=====================================
@@ -1,9 +1,10 @@
-python-rtree (0.9.1+ds-2) UNRELEASED; urgency=medium
+python-rtree (0.9.2-1) unstable; urgency=medium
 
+  * New upstream release.
   * Update watch file to use GitHub releases.
   * Drop Name field from upstream metadata.
 
- -- Bas Couwenberg <sebastic at debian.org>  Mon, 09 Dec 2019 08:24:47 +0100
+ -- Bas Couwenberg <sebastic at debian.org>  Tue, 10 Dec 2019 05:50:34 +0100
 
 python-rtree (0.9.1+ds-1) unstable; urgency=medium
 


=====================================
docs/source/changes.txt
=====================================
@@ -3,18 +3,26 @@
 Changes
 ..............................................................................
 
+0.9.2: 2019-12-09
+=================
+
+- Refactored tests to be based on unittest https://github.com/Toblerity/rtree/pull/129
+- Updated the libspatialindex library loading code to match previous behavior https://github.com/Toblerity/rtree/pull/128
+- Empty data streams throw exceptions and do not partially construct indexes https://github.com/Toblerity/rtree/pull/127
+
 0.9.0: 2019-11-24
 ===============
 
-- Add Index.GetResultSetOffset() 
+- Add Index.GetResultSetOffset()
 - Add Index.contains() method for object and id (requires libspatialindex 1.9.3+) #116
-- Add Index.Flush() #107 
+- Add Index.Flush() #107
 - Add TPRTree index support (thanks @sdhiscocks #117 )
 - Return container sizes without returning objects #90
 - Add set_result_limit and set_result_offset for Index paging  44ad21aecd3f7b49314b9be12f3334d8bae7e827
 
-## Bug fixes
-- Better exceptions in cases where stream functions throw #80 
+Bug fixes:
+
+- Better exceptions in cases where stream functions throw #80
 - Migrated CI platform to Azure Pipelines  https://dev.azure.com/hobuinc/rtree/_build?definitionId=5
 - Minor test enhancements and fixups. Both libspatialindex 1.8.5 and libspatialindex 1.9.3 are tested with CI
 
@@ -45,13 +53,13 @@ Changes
 - Number of results for :py:meth:`~rtree.index.Index.nearest` defaults to 1.
 - libsidx C library of 0.5.0 removed and included in libspatialindex
 - objects="raw" in :py:meth:`~rtree.index.Index.intersection` to return the object sent in (for speed).
-- :py:meth:`~rtree.index.Index.count` method to return the intersection count without the overhead 
+- :py:meth:`~rtree.index.Index.count` method to return the intersection count without the overhead
   of returning a list (thanks Leonard Norrgård).
 - Improved bulk loading performance
 - Supposedly no memory leaks :)
 - Many other performance tweaks (see docs).
 - Bulk loader supports interleaved coordinates
-- Leaf queries.  You can return the box and ids of the leaf nodes of the index.  
+- Leaf queries.  You can return the box and ids of the leaf nodes of the index.
   Useful for visualization, etc.
 - Many more docstrings, sphinx docs, etc
 
@@ -70,9 +78,9 @@ available as a result of this refactoring.
 * bulk loading of indexes at instantiation time
 * ability to quickly return the bounds of the entire index
 * ability to return the bounds of index entries
-* much better windows support 
+* much better windows support
 * libspatialindex 1.4.0 required.
-  
+
 0.4.3: 2009-06-05
 =================
 - Fix reference counting leak #181
@@ -99,7 +107,7 @@ available as a result of this refactoring.
 - Reraise index query errors as Python exceptions.
 - Improved persistence.
 
-0.2: 
+0.2:
 ==================
 - Link spatialindex system library.
 

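The empty-stream change above is easy to demonstrate; a minimal sketch,
mirroring the new test_empty_stream test further down in this diff (the
exact exception message may vary):

    >>> from rtree import index, core
    >>> try:
    ...     index.Index(x for x in [])   # an empty data stream
    ... except core.RTreeError:
    ...     print('empty stream rejected')
    empty stream rejected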

=====================================
environment.yml
=====================================
@@ -0,0 +1,7 @@
+name: _rtree
+channels:
+- defaults
+- conda-forge
+dependencies:
+- python>=3.5
+- libspatialindex


=====================================
readthedocs.yml
=====================================
@@ -0,0 +1,5 @@
+python:
+  version: 3
+  pip_install: true
+conda:
+  file: environment.yml
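Read the Docs builds against the conda environment defined in
environment.yml above. The same file works for a local development setup
(standard conda workflow, with the editable install taken from the Travis
configuration):

    conda env create -f environment.yml
    conda activate _rtree
    pip install -e .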


=====================================
rtree/__init__.py
=====================================
@@ -2,4 +2,4 @@ from .index import Rtree
 
 from .core import rt
 
-__version__ = '0.9.1'
+__version__ = '0.9.2'


=====================================
rtree/core.py
=====================================
@@ -76,35 +76,71 @@ def free_error_msg_ptr(result, func, cargs):
     rt.Index_Free(p)
     return retvalue
 
-
+def _load_library(dllname, loadfunction, dllpaths=('', )):
+    """Load a DLL via ctypes load function. Return None on failure.
+    Try loading the DLL from the current package directory first,
+    then from the Windows DLL search path.
+    """
+    try:
+        dllpaths = (os.path.abspath(os.path.dirname(__file__)),
+                    ) + dllpaths
+    except NameError:
+        pass  # no __file__ attribute on PyPy and some frozen distributions
+    for path in dllpaths:
+        if path:
+            # temporarily add the path to the PATH environment variable
+            # so Windows can find additional DLL dependencies.
+            try:
+                oldenv = os.environ['PATH']
+                os.environ['PATH'] = path + ';' + oldenv
+            except KeyError:
+                oldenv = None
+        try:
+            return loadfunction(os.path.join(path, dllname))
+        except OSError:
+            # WindowsError is an alias of OSError where it exists and is
+            # undefined on POSIX; catching OSError alone covers both and
+            # avoids a NameError when loading fails on non-Windows hosts.
+            pass
+        finally:
+            if path and oldenv is not None:
+                os.environ['PATH'] = oldenv
+    return None
 
 
 if os.name == 'nt':
 
-
     base_name = 'spatialindex_c'
     if '64' in platform.architecture()[0]:
         arch = '64'
     else:
         arch = '32'
 
-    if 'conda' in sys.version:
-        os.environ['PATH'] = "{};{}".format(os.environ['PATH'], os.path.join(sys.prefix, "Library", "bin"))
-    rt = ctypes.CDLL('%s-%s.dll' % (base_name, arch))
+    lib_name = '%s-%s.dll' % (base_name, arch)
+    if 'SPATIALINDEX_C_LIBRARY' in os.environ:
+        lib_path, lib_name = os.path.split(os.environ['SPATIALINDEX_C_LIBRARY'])
+        rt = _load_library(lib_name, ctypes.cdll.LoadLibrary, (lib_path,))
+    elif 'conda' in sys.version:
+        lib_path = os.path.join(sys.prefix, "Library", "bin")
+        rt = _load_library(lib_name, ctypes.cdll.LoadLibrary, (lib_path,))
+    else:
+        rt = _load_library(lib_name, ctypes.cdll.LoadLibrary)
+    if not rt:
+        raise OSError("could not find or load %s" % lib_name)
 
 elif os.name == 'posix':
-    if 'conda' in sys.version:
-        os.environ['PATH'] = "{};{}".format(os.environ['PATH'], os.path.join(sys.prefix, "lib"))
-    
-    lib_name = find_library('spatialindex_c')
-    if not lib_name:
-        if 'linux' in sys.platform:
-            lib_name = 'libspatialindex_c.so'
-        elif 'darwin' in sys.platform:
-            lib_name = 'libspatialindex_c.dylib'
-        else:
-            lib_name = 'libspatialindex_c'
-    rt = ctypes.CDLL(lib_name)
+
+    if 'SPATIALINDEX_C_LIBRARY' in os.environ:
+        lib_name = os.environ['SPATIALINDEX_C_LIBRARY']
+        rt = ctypes.CDLL(lib_name)
+    elif 'conda' in sys.version:
+        lib_path = os.path.join(sys.prefix, "lib")
+        lib_name = find_library('spatialindex_c')
+        rt = _load_library(lib_name, ctypes.cdll.LoadLibrary, (lib_path,))
+    else:
+        lib_name = find_library('spatialindex_c')
+        rt = ctypes.CDLL(lib_name)
+
+    if not rt:
+        raise OSError("Could not load libspatialindex_c library")
+
 else:
     raise RTreeError('Unsupported OS "%s"' % os.name)
 
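The reworked loading code above gives the SPATIALINDEX_C_LIBRARY
environment variable precedence over both the conda-derived and default
search paths, on Windows and POSIX alike. A minimal sketch, assuming a
custom build location (the path below is a placeholder):

    >>> import os
    >>> # Must be set before rtree.core is first imported.
    >>> os.environ['SPATIALINDEX_C_LIBRARY'] = '/opt/sidx/lib/libspatialindex_c.so'
    >>> from rtree import index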


=====================================
rtree/index.py
=====================================
@@ -290,11 +290,7 @@ class Index(object):
 
         if stream and self.properties.type == RT_RTree:
             self._exception = None
-            try:
-                self.handle = self._create_idx_from_stream(stream)
-            except:
-                if self._exception:
-                    raise self._exception
+            self.handle = self._create_idx_from_stream(stream)
             if self._exception:
                 raise self._exception
         else:
@@ -1171,6 +1167,9 @@ class Item(object):
         self.bounds = _get_bounds(
             self.handle, core.rt.IndexItem_GetBounds, False)
 
+    def __gt__(self, other):
+        return self.id > other.id
+
     @property
     def bbox(self):
         """Returns the bounding box of the index entry"""


=====================================
scripts/visualize.py
=====================================
@@ -0,0 +1,170 @@
+#!/usr/bin/env python
+
+import sys
+
+import ogr
+from liblas import file
+
+from rtree import index
+
+
+def quick_create_layer_def(lyr, field_list):
+    # Each field is a tuple of (name, type, width, precision)
+    # Any of type, width and precision can be skipped.  Default type is string.
+
+    for field in field_list:
+        name = field[0]
+        if len(field) > 1:
+            field_type = field[1]
+        else:
+            field_type = ogr.OFTString
+
+        field_defn = ogr.FieldDefn(name, field_type)
+
+        if len(field) > 2:
+            field_defn.SetWidth(int(field[2]))
+
+        if len(field) > 3:
+            field_defn.SetPrecision(int(field[3]))
+
+        lyr.CreateField(field_defn)
+
+        field_defn.Destroy()
+
+
+shape_drv = ogr.GetDriverByName('ESRI Shapefile')
+
+shapefile_name = sys.argv[1].split('.')[0]
+shape_ds = shape_drv.CreateDataSource(shapefile_name)
+leaf_block_lyr = shape_ds.CreateLayer('leaf', geom_type=ogr.wkbPolygon)
+point_block_lyr = shape_ds.CreateLayer('point', geom_type=ogr.wkbPolygon)
+point_lyr = shape_ds.CreateLayer('points', geom_type=ogr.wkbPoint)
+
+quick_create_layer_def(
+    leaf_block_lyr,
+    [
+        ('BLK_ID', ogr.OFTInteger),
+        ('COUNT', ogr.OFTInteger),
+    ])
+
+quick_create_layer_def(
+    point_block_lyr,
+    [
+        ('BLK_ID', ogr.OFTInteger),
+        ('COUNT', ogr.OFTInteger),
+    ])
+
+quick_create_layer_def(
+    point_lyr,
+    [
+        ('ID', ogr.OFTInteger),
+        ('BLK_ID', ogr.OFTInteger),
+    ])
+
+p = index.Property()
+p.filename = sys.argv[1]
+p.overwrite = False
+p.storage = index.RT_Disk
+
+# pass the configured properties when opening the index
+idx = index.Index(sys.argv[1], properties=p)
+
+leaves = idx.leaves()
+# leaves[0] == (0L, [2L, 92L, 51L, 55L, 26L], [-132.41727847799999,
+# -96.717721818399994, -132.41727847799999, -96.717721818399994])
+
+f = file.File(sys.argv[1])
+
+
+def area(minx, miny, maxx, maxy):
+    width = abs(maxx - minx)
+    height = abs(maxy - miny)
+
+    return width*height
+
+
+def get_bounds(leaf_ids, lasfile, block_id):
+    # read the first point and set the bounds to that
+
+    p = lasfile.read(leaf_ids[0])
+    minx, maxx = p.x, p.x
+    miny, maxy = p.y, p.y
+
+    print(len(leaf_ids))
+    print(leaf_ids[0:10])
+
+    for p_id in leaf_ids:
+        p = lasfile.read(p_id)
+        minx = min(minx, p.x)
+        maxx = max(maxx, p.x)
+        miny = min(miny, p.y)
+        maxy = max(maxy, p.y)
+        feature = ogr.Feature(feature_def=point_lyr.GetLayerDefn())
+        g = ogr.CreateGeometryFromWkt('POINT (%.8f %.8f)' % (p.x, p.y))
+        feature.SetGeometry(g)
+        feature.SetField('ID', p_id)
+        feature.SetField('BLK_ID', block_id)
+        result = point_lyr.CreateFeature(feature)
+        del result
+
+    return (minx, miny, maxx, maxy)
+
+
+def make_poly(minx, miny, maxx, maxy):
+    wkt = 'POLYGON ((%.8f %.8f, %.8f %.8f, %.8f %.8f, %.8f %.8f, %.8f %.8f))'\
+        % (minx, miny, maxx, miny, maxx, maxy, minx, maxy, minx, miny)
+    shp = ogr.CreateGeometryFromWkt(wkt)
+    return shp
+
+
+def make_feature(lyr, geom, id, count):
+    feature = ogr.Feature(feature_def=lyr.GetLayerDefn())
+    feature.SetGeometry(geom)
+    feature.SetField('BLK_ID', id)
+    feature.SetField('COUNT', count)
+    result = lyr.CreateFeature(feature)
+    del result
+
+t = 0
+for leaf in leaves:
+    id = leaf[0]
+    ids = leaf[1]
+    count = len(ids)
+    # import pdb;pdb.set_trace()
+
+    if len(leaf[2]) == 4:
+        minx, miny, maxx, maxy = leaf[2]
+    else:
+        minx, miny, maxx, maxy, minz, maxz = leaf[2]
+
+    if id == 186:
+        print(leaf[2])
+
+    print(leaf[2])
+    leaf = make_poly(minx, miny, maxx, maxy)
+    print('leaf: ' + str([minx, miny, maxx, maxy]))
+
+    pminx, pminy, pmaxx, pmaxy = get_bounds(ids, f, id)
+    point = make_poly(pminx, pminy, pmaxx, pmaxy)
+
+    print('point: ' + str([pminx, pminy, pmaxx, pmaxy]))
+    print('point bounds: ' +
+          str([point.GetArea(), area(pminx, pminy, pmaxx, pmaxy)]))
+    print('leaf bounds: ' +
+          str([leaf.GetArea(), area(minx, miny, maxx, maxy)]))
+    print('leaf - point: ' + str([abs(point.GetArea() - leaf.GetArea())]))
+    print([minx, miny, maxx, maxy])
+    #  if shp2.GetArea() != shp.GetArea():
+    #      import pdb;pdb.set_trace()
+    # sys.exit(1)
+
+    make_feature(leaf_block_lyr, leaf, id, count)
+    make_feature(point_block_lyr, point, id, count)
+
+    t += 1
+    # if t ==2:
+    #     break
+
+leaf_block_lyr.SyncToDisk()
+point_lyr.SyncToDisk()
+
+shape_ds.Destroy()


=====================================
setup.cfg deleted
=====================================
@@ -1,4 +0,0 @@
-[egg_info]
-tag_build = 
-tag_date = 0
-


=====================================
tests/stream-check.py deleted
=====================================
@@ -1,81 +0,0 @@
-import numpy as np
-import rtree
-import time
-
-def random_tree_stream(points_count, include_object):
-    properties = rtree.index.Property()
-    properties.dimension = 3
-
-    points_random = np.random.random((points_count,3,3))
-    points_bounds = np.column_stack((points_random.min(axis=1),
-                                     points_random.max(axis=1)))
-
-
-    stacked = zip(np.arange(points_count),
-                  points_bounds,
-                  np.arange(points_count))
-
-
-    tic = time.time()
-    tree = rtree.index.Index(stacked,
-                             properties = properties)
-    toc = time.time()
-    print('creation, objects:', include_object, '\tstream method: ', toc-tic)
-
-    return tree
-
-def random_tree_insert(points_count, include_object):
-    properties = rtree.index.Property()
-    properties.dimension = 3
-
-    points_random = np.random.random((points_count,3,3))
-    points_bounds = np.column_stack((points_random.min(axis=1),
-                                     points_random.max(axis=1)))
-    tree = rtree.index.Index(properties = properties)
-
-    if include_object:
-        stacked = zip(np.arange(points_count),
-                      points_bounds,
-                      np.arange(points_count))
-    else:
-        stacked = zip(np.arange(points_count),
-                                points_bounds)
-    tic = time.time()
-    for arg in stacked:
-        tree.insert(*arg)
-    toc = time.time()
-
-    print ('creation, objects:', include_object, '\tinsert method: ', toc-tic)
-
-    return tree
-
-
-def check_tree(tree, count):
-    # tid should intersect every box,
-    # as our random boxes are all inside [0,0,0,1,1,1]
-    tic = time.time()
-    tid = list(tree.intersection([-1,-1,-1,2,2,2]))
-    toc = time.time()
-    ok = (np.unique(tid) - np.arange(count) == 0).all()
-    print ('intersection, id method:    ', toc-tic, '\t query ok:', ok)
-
-    tic = time.time()
-    tid = [i.object for i in tree.intersection([-1,-1,-1,2,2,2], objects=True)]
-    toc = time.time()
-    ok = (np.unique(tid) - np.arange(count) == 0).all()
-    print ('intersection, object method:', toc-tic, '\t query ok:', ok)
-
-if __name__ == '__main__':
-    count = 10000
-
-    print ('\nChecking stream loading\n---------------')
-    tree = random_tree_stream(count, False)
-    tree = random_tree_stream(count, True)
-
-    check_tree(tree, count)
-
-    print ('\nChecking insert loading\n---------------')
-    tree = random_tree_insert(count, False)
-    tree = random_tree_insert(count, True)
-
-    check_tree(tree, count)


=====================================
tests/test_bounds.txt deleted
=====================================
@@ -1,26 +0,0 @@
-Bounding Box Checking
-=====================
-
-See http://trac.gispython.org/projects/PCL/ticket/127.
-
-Adding with bogus bounds
-------------------------
-
-  >>> import rtree
-  >>> index = rtree.Rtree()
-  >>> index.add(1, (0.0, 0.0, -1.0, 1.0))  #doctest: +IGNORE_EXCEPTION_DETAIL
-  Traceback (most recent call last):
-  ...
-  RTreeError: Coordinates must not have minimums more than maximums
-
-  >>> index.intersection((0.0, 0.0, -1.0, 1.0))  #doctest: +IGNORE_EXCEPTION_DETAIL
-  Traceback (most recent call last):
-  ...
-  RTreeError: Coordinates must not have minimums more than maximums
-
-Adding with invalid bounds argument should raise an exception
-
-  >>> index.add(1, 1)  #doctest: +IGNORE_EXCEPTION_DETAIL
-  Traceback (most recent call last):
-  ...
-  TypeError: Bounds must be a sequence


=====================================
tests/test_container.py deleted
=====================================
@@ -1,63 +0,0 @@
-import numpy as np
-import pytest
-
-import rtree.index
-
-
-def test_container():
-    container = rtree.index.RtreeContainer()
-    objects = list()
-
-    # Insert
-    boxes15 = np.genfromtxt('boxes_15x15.data')
-    for coordinates in boxes15:
-        objects.append(object())
-        container.insert(objects[-1], coordinates)
-
-    # Contains and length
-    assert all(obj in container for obj in objects)
-    assert len(container) == len(boxes15)
-
-    # Delete
-    for obj, coordinates in zip(objects, boxes15[:5]):
-        container.delete(obj, coordinates)
-
-    assert all(obj in container for obj in objects[5:])
-    assert all(obj not in container for obj in objects[:5])
-    assert len(container) == len(boxes15) - 5
-
-    # Delete already deleted object
-    with pytest.raises(IndexError):
-        container.delete(objects[0], boxes15[0])
-
-    # Insert duplicate object, at different location
-    container.insert(objects[5], boxes15[0])
-    assert objects[5] in container
-    # And then delete it, but check object still present
-    container.delete(objects[5], boxes15[0])
-    assert objects[5] in container
-
-    # Intersection
-    obj = objects[10]
-    results = container.intersection(boxes15[10])
-    assert obj in results
-
-    # Intersection with bbox
-    obj = objects[10]
-    results = container.intersection(boxes15[10], bbox=True)
-    result = [result for result in results if result.object is obj][0]
-    assert np.array_equal(result.bbox, boxes15[10])
-
-    # Nearest
-    obj = objects[8]
-    results = container.intersection(boxes15[8])
-    assert obj in results
-
-    # Nearest with bbox
-    obj = objects[8]
-    results = container.nearest(boxes15[8], bbox=True)
-    result = [result for result in results if result.object is obj][0]
-    assert np.array_equal(result.bbox, boxes15[8])
-
-    # Test iter method
-    assert objects[12] in set(container)


=====================================
tests/test_customStorage.txt deleted
=====================================
@@ -1,157 +0,0 @@
-
-Shows how to create custom storage backend.
-
-Derive your custom storage for rtree.index.CustomStorage and override the methods
-shown in this example.
-You can also derive from rtree.index.CustomStorageBase to get at the raw C buffers
-if you need the extra speed and want to avoid translating from/to python strings.
-
-The essential methods are the load/store/deleteByteArray. The rtree library calls
-them whenever it needs to access the data in any way.
-
-Example storage which maps the page (ids) to the page data.
-
-   >>> from rtree.index import Rtree, CustomStorage, Property
-   
-   >>> class DictStorage(CustomStorage):
-   ...     """ A simple storage which saves the pages in a python dictionary """
-   ...     def __init__(self):
-   ...         CustomStorage.__init__( self )
-   ...         self.clear()
-   ... 
-   ...     def create(self, returnError):
-   ...         """ Called when the storage is created on the C side """
-   ... 
-   ...     def destroy(self, returnError):
-   ...         """ Called when the storage is destroyed on the C side """
-   ... 
-   ...     def clear(self):
-   ...         """ Clear all our data """   
-   ...         self.dict = {}
-   ... 
-   ...     def loadByteArray(self, page, returnError):
-   ...         """ Returns the data for page or returns an error """   
-   ...         try:
-   ...             return self.dict[page]
-   ...         except KeyError:
-   ...             returnError.contents.value = self.InvalidPageError
-   ... 
-   ...     def storeByteArray(self, page, data, returnError):
-   ...         """ Stores the data for page """   
-   ...         if page == self.NewPage:
-   ...             newPageId = len(self.dict)
-   ...             self.dict[newPageId] = data
-   ...             return newPageId
-   ...         else:
-   ...             if page not in self.dict:
-   ...                 returnError.value = self.InvalidPageError
-   ...                 return 0
-   ...             self.dict[page] = data
-   ...             return page
-   ... 
-   ...     def deleteByteArray(self, page, returnError):
-   ...         """ Deletes a page """   
-   ...         try:
-   ...             del self.dict[page]
-   ...         except KeyError:
-   ...             returnError.contents.value = self.InvalidPageError
-   ... 
-   ...     hasData = property( lambda self: bool(self.dict) )
-   ...     """ Returns true if we contains some data """   
-
-
-Now let's test drive our custom storage.
-
-First let's define the basic properties we will use for all rtrees:
-
-    >>> settings = Property()
-    >>> settings.writethrough = True
-    >>> settings.buffering_capacity = 1
-
-Notice that there is a small in-memory buffer by default. We effectively disable
-it here so our storage directly receives any load/store/delete calls.
-This is not necessary in general and can hamper performance; we just use it here
-for illustrative and testing purposes.
-
-Let's start with a basic test:
-
-Create the storage and hook it up with a new rtree:
-
-    >>> storage = DictStorage()
-    >>> r = Rtree( storage, properties = settings )
-
-Interestingly enough, if we take a look at the contents of our storage now, we
-can see the Rtree has already written two pages to it. This is for header and
-index.
-
-    >>> state1 = storage.dict.copy()
-    >>> list(state1.keys())
-    [0, 1]
-    
-Let's add an item:
-
-    >>> r.add(123, (0, 0, 1, 1))
-
-Make sure the data in the storage before and after the addition of the new item
-is different:
-
-    >>> state2 = storage.dict.copy()
-    >>> state1 != state2
-    True
-
-Now perform a few queries and assure the tree is still valid:
-
-    >>> item = list(r.nearest((0, 0), 1, objects=True))[0]
-    >>> int(item.id)
-    123
-    >>> r.valid()
-    True
-    
-Check if the stored data is a byte string
-
-    >>> isinstance(list(storage.dict.values())[0], bytes)
-    True
-    
-Delete an item
-
-    >>> r.delete(123, (0, 0, 1, 1))
-    >>> r.valid()
-    True
-    
-Just for reference show how to flush the internal buffers (e.g. when
-properties.buffer_capacity is > 1)
-
-    >>> r.clearBuffer()
-    >>> r.valid()
-    True
-
-Let's get rid of the tree, we're done with it
-    
-    >>> del r
-
-Show how to empty the storage
-    
-    >>> storage.clear()
-    >>> storage.hasData
-    False
-    >>> del storage
-
-    
-Ok, let's create another small test. This time we'll test reopening our custom
-storage. This is useful for persistent storages.
-
-First create a storage and put some data into it:
-
-    >>> storage = DictStorage()
-    >>> r1 = Rtree( storage, properties = settings, overwrite = True )
-    >>> r1.add(555, (2, 2))
-    >>> del r1
-    >>> storage.hasData
-    True
-    
-Then reopen the storage with a new tree and see if the data is still there
-
-    >>> r2 = Rtree( storage, properties = settings, overwrite = False )
-    >>> r2.count( (0,0,10,10) ) == 1
-    True
-    >>> del r2


=====================================
tests/test_index.py
=====================================
@@ -1,27 +1,50 @@
 import unittest
+import ctypes
+import rtree
 from rtree import index, core
 import numpy as np
 import pytest
+import tempfile
+import pickle
+import json
 
-def boxes15_stream(interleaved=True):
-    boxes15 = np.genfromtxt('boxes_15x15.data')
-    for i, (minx, miny, maxx, maxy) in enumerate(boxes15):
-        
-        if interleaved:
-            yield (i, (minx, miny, maxx, maxy), 42)
-        else:
-            yield (i, (minx, maxx, miny, maxy), 42)
 
+class IndexTestCase(unittest.TestCase):
+    def setUp(self):
+        self.boxes15 = np.genfromtxt('boxes_15x15.data')
+        self.idx = index.Index()
+        for i, coords in enumerate(self.boxes15):
+            self.idx.add(i, coords)
 
-class IndexTests(unittest.TestCase):
+    def boxes15_stream(self, interleaved=True):
+        boxes15 = np.genfromtxt('boxes_15x15.data')
+        for i, (minx, miny, maxx, maxy) in enumerate(boxes15):
+
+            if interleaved:
+                yield (i, (minx, miny, maxx, maxy), 42)
+            else:
+                yield (i, (minx, maxx, miny, maxy), 42)
+
+
+class IndexVersion(unittest.TestCase):
+
+    def test_libsidx_version(self):
+        self.assertEqual(index.major_version, 1)
+        self.assertGreaterEqual(index.minor_version, 7)
+
+
+
+class IndexBounds(unittest.TestCase):
+
+    def test_invalid_specifications(self):
+        """Invalid specifications of bounds properly throw"""
+
+        idx = index.Index()
+        self.assertRaises(core.RTreeError, idx.add, None, (0.0, 0.0, -1.0, 1.0))
+        self.assertRaises(core.RTreeError, idx.intersection, (0.0, 0.0, -1.0, 1.0))
+        self.assertRaises(ctypes.ArgumentError, idx.add, None, (1, 1,))
+
+class IndexProperties(IndexTestCase):
 
-    def test_stream_input(self):
-        p = index.Property()
-        sindex = index.Index(boxes15_stream(), properties=p)
-        bounds = (0, 0, 60, 60)
-        hits = sindex.intersection(bounds)
-        self.assertEqual(sorted(hits), [0, 4, 16, 27, 35, 40, 47, 50, 76, 80])
-    
     @pytest.mark.skipif(
         not hasattr(core.rt, 'Index_GetResultSetOffset'),
         reason="Index_GetResultsSetOffset required in libspatialindex")
@@ -38,8 +61,348 @@ class IndexTests(unittest.TestCase):
         idx.set_result_limit(44)
         self.assertEqual(idx.result_limit, 44)
 
+    def test_invalid_properties(self):
+        """Invalid values are guarded"""
+        p = index.Property()
+
+        self.assertRaises(core.RTreeError, p.set_buffering_capacity, -4321)
+        self.assertRaises(core.RTreeError, p.set_region_pool_capacity, -4321)
+        self.assertRaises(core.RTreeError, p.set_point_pool_capacity, -4321)
+        self.assertRaises(core.RTreeError, p.set_index_pool_capacity, -4321)
+        self.assertRaises(core.RTreeError, p.set_pagesize, -4321)
+        self.assertRaises(core.RTreeError, p.set_index_capacity, -4321)
+        self.assertRaises(core.RTreeError, p.set_storage, -4321)
+        self.assertRaises(core.RTreeError, p.set_variant, -4321)
+        self.assertRaises(core.RTreeError, p.set_dimension, -2)
+        self.assertRaises(core.RTreeError, p.set_index_type, 6)
+        self.assertRaises(core.RTreeError, p.get_index_id)
+
+    def test_index_properties(self):
+        """Setting index properties returns expected values"""
+        idx = index.Rtree()
+        p = index.Property()
+
+        p.leaf_capacity = 100
+        p.fill_factor = 0.5
+        p.index_capacity = 10
+        p.near_minimum_overlap_factor = 7
+        p.buffering_capacity = 10
+        p.variant = 0
+        p.dimension = 3
+        p.storage = 0
+        p.pagesize = 4096
+        p.index_pool_capacity = 1500
+        p.point_pool_capacity = 1600
+        p.region_pool_capacity = 1700
+        p.tight_mbr = True
+        p.overwrite = True
+        p.writethrough = True
+        p.tpr_horizon = 20.0
+        p.reinsert_factor = 0.3
+        p.idx_extension = 'index'
+        p.dat_extension = 'data'
+
+        idx = index.Index(properties = p)
+
+        props = idx.properties
+        self.assertEqual(props.leaf_capacity, 100)
+        self.assertEqual(props.fill_factor, 0.5)
+        self.assertEqual(props.index_capacity, 10)
+        self.assertEqual(props.near_minimum_overlap_factor, 7)
+        self.assertEqual(props.buffering_capacity, 10)
+        self.assertEqual(props.variant, 0)
+        self.assertEqual(props.dimension, 3)
+        self.assertEqual(props.storage, 0)
+        self.assertEqual(props.pagesize, 4096)
+        self.assertEqual(props.index_pool_capacity, 1500)
+        self.assertEqual(props.point_pool_capacity, 1600)
+        self.assertEqual(props.region_pool_capacity, 1700)
+        self.assertEqual(props.tight_mbr, True)
+        self.assertEqual(props.overwrite, True)
+        self.assertEqual(props.writethrough, True)
+        self.assertEqual(props.tpr_horizon, 20.0)
+        self.assertEqual(props.reinsert_factor, 0.3)
+        self.assertEqual(props.idx_extension, 'index')
+        self.assertEqual(props.dat_extension, 'data')
+
+class TestPickling(unittest.TestCase):
+
+    def test_index(self):
+        idx = rtree.index.Index()
+        unpickled = pickle.loads(pickle.dumps(idx))
+        self.assertNotEqual(idx.handle, unpickled.handle)
+        self.assertEqual(idx.properties.as_dict(),
+                          unpickled.properties.as_dict())
+        self.assertEqual(idx.interleaved, unpickled.interleaved)
+
+    def test_property(self):
+        p = rtree.index.Property()
+        unpickled = pickle.loads(pickle.dumps(p))
+        self.assertNotEqual(p.handle, unpickled.handle)
+        self.assertEqual(p.as_dict(), unpickled.as_dict())
+
+class IndexContainer(IndexTestCase):
+
+    def test_container(self):
+        """rtree.index.RtreeContainer works as expected"""
+
+        container = rtree.index.RtreeContainer()
+        objects = list()
+
+        for coordinates in self.boxes15:
+            objects.append(object())
+            container.insert(objects[-1], coordinates)
+
+        self.assertEqual(len(container), len(self.boxes15))
+        assert all(obj in container for obj in objects)
+
+        for obj, coordinates in zip(objects, self.boxes15[:5]):
+            container.delete(obj, coordinates)
+
+        assert all(obj in container for obj in objects[5:])
+        assert all(obj not in container for obj in objects[:5])
+        assert len(container) == len(self.boxes15) - 5
+
+        with pytest.raises(IndexError):
+            container.delete(objects[0], self.boxes15[0])
+
+        # Insert duplicate object, at different location
+        container.insert(objects[5], self.boxes15[0])
+        assert objects[5] in container
+        # And then delete it, but check object still present
+        container.delete(objects[5], self.boxes15[0])
+        assert objects[5] in container
+
+        # Intersection
+        obj = objects[10]
+        results = container.intersection(self.boxes15[10])
+        assert obj in results
+
+        # Intersection with bbox
+        obj = objects[10]
+        results = container.intersection(self.boxes15[10], bbox=True)
+        result = [result for result in results if result.object is obj][0]
+        assert np.array_equal(result.bbox, self.boxes15[10])
+
+        # Nearest
+        obj = objects[8]
+        results = container.intersection(self.boxes15[8])
+        assert obj in results
+
+        # Nearest with bbox
+        obj = objects[8]
+        results = container.nearest(self.boxes15[8], bbox=True)
+        result = [result for result in results if result.object is obj][0]
+        assert np.array_equal(result.bbox, self.boxes15[8])
+
+        # Test iter method
+        assert objects[12] in set(container)
+
+class IndexIntersection(IndexTestCase):
+
+
+    def test_intersection(self):
+        """Test basic insertion and retrieval"""
+
+        self.assertTrue(0 in self.idx.intersection((0, 0, 60, 60)))
+        hits = list(self.idx.intersection((0, 0, 60, 60)))
+
+        self.assertEqual(len(hits), 10)
+        self.assertEqual(hits, [0, 4, 16, 27, 35, 40, 47, 50, 76, 80])
+
+    def test_objects(self):
+        """Test insertion of objects"""
+
+        idx = index.Index()
+        for i, coords in enumerate(self.boxes15):
+            idx.add(i, coords)
+        idx.insert(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734), obj=42)
+        hits = idx.intersection((0, 0, 60, 60), objects=True)
+        hit = [h for h in hits if h.id == 4321][0]
+        self.assertEqual(hit.id, 4321)
+        self.assertEqual(hit.object, 42)
+        box = ['%.10f' % t for t in hit.bbox]
+        expected = ['34.3776829412', '26.7375853734', '49.3776829412', '41.7375853734']
+        self.assertEqual(box, expected)
+
+    def test_double_insertion(self):
+        """Inserting the same id twice does not overwrite data"""
+        idx = index.Index()
+        idx.add(1, (2,2))
+        idx.add(1, (3,3))
+
+        self.assertEqual([1,1], list(idx.intersection((0, 0, 5, 5))))
+
+class IndexSerialization(unittest.TestCase):
+
+    def setUp(self):
+        self.boxes15 = np.genfromtxt('boxes_15x15.data')
+
+    def boxes15_stream(self, interleaved=True):
+        for i, (minx, miny, maxx, maxy) in enumerate(self.boxes15):
+
+            if interleaved:
+                yield (i, (minx, miny, maxx, maxy), 42)
+            else:
+                yield (i, (minx, maxx, miny, maxy), 42)
+
+    def test_unicode_filenames(self):
+        """Unicode filenames work as expected"""
+
+        tname = tempfile.mktemp()
+        filename = tname + u'gilename\u4500abc'
+        idx = index.Index(filename)
+        idx.insert(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734), obj=42)
+
+
+    def test_pickling(self):
+        """Pickling works as expected"""
+
+        idx = index.Index()
+
+        some_data = {"a": 22, "b": [1, "ccc"]}
+
+        idx.dumps = lambda obj: json.dumps(obj).encode('utf-8')
+        idx.loads = lambda string: json.loads(string.decode('utf-8'))
+        idx.add(0, (0, 0, 1, 1), some_data)
+
+        self.assertEqual(list(idx.nearest((0, 0), 1, objects="raw"))[0], some_data)
+
+    def test_custom_filenames(self):
+        """Test using custom filenames for index serialization"""
+        p = index.Property()
+        p.dat_extension = 'data'
+        p.idx_extension = 'index'
+        tname = tempfile.mktemp()
+        idx = index.Index(tname, properties = p)
+        for i, coords in enumerate(self.boxes15):
+            idx.add(i, coords)
+
+        hits = list(idx.intersection((0, 0, 60, 60)))
+        self.assertEqual(len(hits), 10)
+        self.assertEqual(hits, [0, 4, 16, 27, 35, 40, 47, 50, 76, 80])
+        del idx
+
+        # Check we can reopen the index and get the same results
+        idx2 = index.Index(tname, properties = p)
+        hits = list(idx2.intersection((0, 0, 60, 60)))
+        self.assertEqual(len(hits), 10)
+        self.assertEqual(hits, [0, 4, 16, 27, 35, 40, 47, 50, 76, 80])
+
+
+    def test_interleaving(self):
+        """Streaming against a persisted index without interleaving"""
+        def data_gen(interleaved=True):
+            for i, (minx, miny, maxx, maxy) in enumerate(self.boxes15):
+                if interleaved:
+                    yield (i, (minx, miny, maxx, maxy), 42)
+                else:
+                    yield (i, (minx, maxx, miny, maxy), 42)
+        p = index.Property()
+        tname = tempfile.mktemp()
+        idx = index.Index(tname,
+                          data_gen(interleaved = False),
+                          properties = p,
+                          interleaved = False)
+        hits = sorted(list(idx.intersection((0, 60, 0, 60))))
+        self.assertEqual(len(hits), 10)
+        self.assertEqual(hits, [0, 4, 16, 27, 35, 40, 47, 50, 76, 80])
+
+        leaves = idx.leaves()
+        expected = [(0, [2, 92, 51, 55, 26, 95, 7, 81, 38, 22, 58, 89, 91, 83, 98, 37, 70, 31, 49, 34, 11, 6, 13, 3, 23, 57, 9, 96, 84, 36, 5, 45, 77, 78, 44, 12, 42, 73, 93, 41, 71, 17, 39, 54, 88, 72, 97, 60, 62, 48, 19, 25, 76, 59, 66, 64, 79, 94, 40, 32, 46, 47, 15, 68, 10, 0, 80, 56, 50, 30], [-186.673789279, -96.7177218184, 172.392784956, 45.4856075292]), (2, [61, 74, 29, 99, 16, 43, 35, 33, 27, 63, 18, 90, 8, 53, 82, 21, 65, 24, 4, 1, 75, 67, 86, 52, 28, 85, 87, 14, 69, 20], [-174.739939684, 32.6596016791, 184.761387556, 96.6043699778])]
+
+        self.assertEqual(leaves, expected)
+
+        hits = sorted(list(idx.intersection((0, 60, 0, 60), objects = True)))
+        self.assertEqual(len(hits), 10)
+        self.assertEqual(hits[0].object, 42)
+
+    def test_overwrite(self):
+        """Index overwrite works as expected"""
+        tname = tempfile.mktemp()
+
+        idx = index.Index(tname)
+        del idx
+        idx = index.Index(tname, overwrite=True)
+
+class IndexNearest(IndexTestCase):
+
+    def test_nearest_basic(self):
+        """Test nearest basic selection of records"""
+        hits = list(self.idx.nearest((0,0,10,10), 3))
+        self.assertEqual(hits, [76, 48, 19])
+
+        idx = index.Index()
+        locs = [(2, 4), (6, 8), (10, 12), (11, 13), (15, 17), (13, 20)]
+        for i, (start, stop) in enumerate(locs):
+            idx.add(i, (start, 1, stop, 1))
+        hits = sorted(idx.nearest((13, 0, 20, 2), 3))
+        self.assertEqual(hits, [3, 4, 5])
+
+
+    def test_nearest_object(self):
+        """Test nearest object selection of records"""
+        idx = index.Index()
+        locs = [(14, 10, 14, 10), (16, 10, 16, 10)]
+        for i, (minx, miny, maxx, maxy) in enumerate(locs):
+            idx.add(i, (minx, miny, maxx, maxy), obj={'a': 42})
+
+        hits = sorted([(i.id, i.object) for i in idx.nearest((15, 10, 15, 10), 1, objects=True)])
+        self.assertEqual(hits, [(0, {'a': 42}), (1, {'a': 42})])
+
+class IndexDelete(IndexTestCase):
+
+    def test_deletion(self):
+        """Test we can delete data from the index"""
+        idx = index.Index()
+        for i, coords in enumerate(self.boxes15):
+            idx.add(i, coords)
+
+        for i, coords in enumerate(self.boxes15):
+            idx.delete(i, coords)
+
+        hits = list(idx.intersection((0, 0, 60, 60)))
+        self.assertEqual(hits, [])
+
+
+class IndexMoreDimensions(IndexTestCase):
+    def test_3d(self):
+        """Test we make and query a 3D index"""
+        p = index.Property()
+        p.dimension = 3
+        idx = index.Index(properties = p, interleaved = False)
+        idx.insert(1, (0, 0, 60, 60, 22, 22.0))
+        hits = idx.intersection((-1, 1, 58, 62, 22, 24))
+        self.assertEqual(list(hits), [1])
+
+    def test_4d(self):
+        """Test we make and query a 4D index"""
+        p = index.Property()
+        p.dimension = 4
+        idx = index.Index(properties = p, interleaved = False)
+        idx.insert(1, (0, 0, 60, 60, 22, 22.0, 128, 142))
+        hits = idx.intersection((-1, 1, 58, 62, 22, 24, 120, 150))
+        self.assertEqual(list(hits), [1])
+
+
+class IndexStream(IndexTestCase):
+
+    def test_stream_input(self):
+        p = index.Property()
+        sindex = index.Index(self.boxes15_stream(), properties=p)
+        bounds = (0, 0, 60, 60)
+        hits = sindex.intersection(bounds)
+        self.assertEqual(sorted(hits), [0, 4, 16, 27, 35, 40, 47, 50, 76, 80])
+        objects = list(sindex.intersection((0, 0, 60, 60), objects=True))
+
+        self.assertEqual(len(objects), 10)
+        self.assertEqual(objects[0].object, 42)
+
+    def test_empty_stream(self):
+        """Assert empty stream raises exception"""
+        self.assertRaises(core.RTreeError, index.Index, ((x for x in [])))
 
-class ExceptionTests(unittest.TestCase):
     def test_exception_in_generator(self):
         """Assert exceptions raised in callbacks are raised in main thread"""
         class TestException(Exception):
@@ -53,4 +416,128 @@ class ExceptionTests(unittest.TestCase):
                 raise TestException("raising here")
             return index.Index(gen())
 
-        self.assertRaises(TestException, create_index)
\ No newline at end of file
+        self.assertRaises(TestException, create_index)
+
+    def test_exception_at_beginning_of_generator(self):
+        """Assert exceptions raised in callbacks before generator function are raised in main thread"""
+        class TestException(Exception):
+            pass
+
+        def create_index():
+            def gen():
+
+                raise TestException("raising here")
+            return index.Index(gen())
+
+        self.assertRaises(TestException, create_index)
+
+
+
+class DictStorage(index.CustomStorage):
+    """A simple storage which saves the pages in a python dictionary"""
+    def __init__(self):
+        index.CustomStorage.__init__(self)
+        self.clear()
+
+    def create(self, returnError):
+        """Called when the storage is created on the C side"""
+
+    def destroy(self, returnError):
+        """Called when the storage is destroyed on the C side"""
+
+    def clear(self):
+        """Clear all our data"""
+        self.dict = {}
+
+    def loadByteArray(self, page, returnError):
+        """Return the data for page or flag an error"""
+        try:
+            return self.dict[page]
+        except KeyError:
+            returnError.contents.value = self.InvalidPageError
+
+    def storeByteArray(self, page, data, returnError):
+        """Store the data for page"""
+        if page == self.NewPage:
+            newPageId = len(self.dict)
+            self.dict[newPageId] = data
+            return newPageId
+        else:
+            if page not in self.dict:
+                # returnError is a ctypes pointer; set it through
+                # .contents as the other methods do
+                returnError.contents.value = self.InvalidPageError
+                return 0
+            self.dict[page] = data
+            return page
+
+    def deleteByteArray(self, page, returnError):
+        """Delete a page"""
+        try:
+            del self.dict[page]
+        except KeyError:
+            returnError.contents.value = self.InvalidPageError
+
+    hasData = property(lambda self: bool(self.dict))
+    """Returns True if we contain some data"""
+
+class IndexCustomStorage(unittest.TestCase):
+    def test_custom_storage(self):
+        """Custom index storage works as expected"""
+        settings = index.Property()
+        settings.writethrough = True
+        settings.buffering_capacity = 1
+
+        # Notice that there is a small in-memory buffer by default. We
+        # effectively disable it here so our storage directly receives any
+        # load/store/delete calls. This is not necessary in general and can
+        # hamper performance; we just use it here for illustrative and
+        # testing purposes.
+
+        storage = DictStorage()
+        r = index.Index( storage, properties = settings )
+
+        # Interestingly enough, if we take a look at the contents of our
+        # storage now, we can see the Rtree has already written two pages
+        # to it: one for the header and one for the index.
+
+        state1 = storage.dict.copy()
+        self.assertEqual(list(state1.keys()), [0, 1])
+
+        r.add(123, (0, 0, 1, 1))
+
+        state2 = storage.dict.copy()
+        self.assertNotEqual(state1, state2)
+
+        item = list(r.nearest((0, 0), 1, objects=True))[0]
+        self.assertEqual(item.id, 123)
+        self.assertTrue(r.valid())
+        self.assertTrue(isinstance(list(storage.dict.values())[0], bytes))
+
+        r.delete(123, (0, 0, 1, 1))
+        self.assertTrue(r.valid())
+
+        r.clearBuffer()
+        self.assertTrue(r.valid())
+
+        del r
+
+        storage.clear()
+        self.assertFalse(storage.hasData)
+
+        del storage
+
+
+    def test_custom_storage_reopening(self):
+        """Reopening custom index storage works as expected"""
+
+        storage = DictStorage()
+        settings = index.Property()
+        settings.writethrough = True
+        settings.buffering_capacity = 1
+
+        r1 = index.Index(storage, properties = settings, overwrite = True)
+        r1.add(555, (2, 2))
+        del r1
+        self.assertTrue(storage.hasData)
+
+        r2 = index.Index(storage, properties = settings, overwrite = False)
+        count = r2.count( (0, 0, 10, 10) )
+        self.assertEqual(count, 1)
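Outside the unittest harness, the DictStorage pattern above works the same
way: pass a CustomStorage instance as the first argument to index.Index
and the library routes all page I/O through it. A minimal sketch reusing
the class defined in this file:

    storage = DictStorage()
    props = index.Property()
    props.writethrough = True
    props.buffering_capacity = 1   # flush every page straight to storage
    idx = index.Index(storage, properties=props)
    idx.add(1, (0.0, 0.0, 1.0, 1.0))
    print(sorted(storage.dict.keys()))   # header/index pages plus data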


=====================================
tests/test_index_doctests.txt deleted
=====================================
@@ -1,357 +0,0 @@
-.. _index_test:
-
-Examples
-..............................................................................
-
-    >>> import numpy as np
-    >>> from rtree import index
-    >>> from rtree.index import Rtree
-    >>>
-    >>> boxes15 = np.genfromtxt('boxes_15x15.data')
-    >>>
-
-Ensure libspatialindex version is >= 1.7.0
-
-    >>> int(index.__c_api_version__.decode('UTF-8').split('.')[1]) >= 7
-    True
-
-Make an instance, index stored in memory
-
-    >>> p = index.Property()
-
-    >>> idx = index.Index(properties=p)
-    >>> idx
-    <rtree.index.Index object at 0x...>
-
-Add 100 largish boxes randomly distributed over the domain
-
-    >>> for i, coords in enumerate(boxes15):
-    ...     idx.add(i, coords)
-
-    >>> 0 in idx.intersection((0, 0, 60, 60))
-    True
-    >>> hits = list(idx.intersection((0, 0, 60, 60)))
-    >>> len(hits)
-    10
-    >>> hits
-    [0, 4, 16, 27, 35, 40, 47, 50, 76, 80]
-
-Insert an object into the index that can be pickled
-
-    >>> idx.insert(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734), obj=42)
-
-Fetch our straggler that contains a pickled object
-    >>> hits = idx.intersection((0, 0, 60, 60), objects=True)
-    >>> for i in hits:
-    ...     if i.id == 4321:
-    ...         i.object
-    ...         ['%.10f' % t for t in i.bbox]
-    42
-    ['34.3776829412', '26.7375853734', '49.3776829412', '41.7375853734']
-
-
-Find the three items nearest to this one
-    >>> hits = list(idx.nearest((0,0,10,10), 3))
-    >>> hits
-    [76, 48, 19]
-    >>> len(hits)
-    3
-
-
-Default order is [xmin, ymin, xmax, ymax]
-    >>> ['%.10f' % t for t in idx.bounds]
-    ['-186.6737892790', '-96.7177218184', '184.7613875560', '96.6043699778']
-
-To get in order [xmin, xmax, ymin, ymax (... for n-d indexes)] use the kwarg:
-    >>> ['%.10f' % t for t in idx.get_bounds(coordinate_interleaved=False)]
-    ['-186.6737892790', '184.7613875560', '-96.7177218184', '96.6043699778']
-
-Delete index members
-
-    >>> for i, coords in enumerate(boxes15):
-    ...     idx.delete(i, coords)
-
-Delete our straggler too
-    >>> idx.delete(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734) )
-
-Check that we have deleted stuff
-
-    >>> hits = 0
-    >>> hits = list(idx.intersection((0, 0, 60, 60)))
-    >>> len(hits)
-    0
-
-Check that nearest returns *all* of the items that are nearby
-
-    >>> idx2 = Rtree()
-    >>> idx2
-    <rtree.index.Index object at 0x...>
-
-    >>> locs = [(14, 10, 14, 10),
-    ...         (16, 10, 16, 10)]
-
-    >>> for i, (minx, miny, maxx, maxy) in enumerate(locs):
-    ...        idx2.add(i, (minx, miny, maxx, maxy))
-
-    >>> sorted(idx2.nearest((15, 10, 15, 10),1))
-    [0]
-
-
-Check that nearest returns *all* of the items that are nearby (with objects)
-    >>> idx2 = Rtree()
-    >>> idx2
-    <rtree.index.Index object at 0x...>
-
-    >>> locs = [(14, 10, 14, 10),
-    ...         (16, 10, 16, 10)]
-
-    >>> for i, (minx, miny, maxx, maxy) in enumerate(locs):
-    ...        idx2.add(i, (minx, miny, maxx, maxy), obj={'a': 42})
-
-    >>> sorted([(i.id, i.object) for i in idx2.nearest((15, 10, 15, 10), 1, objects=True)])
-    [(0, {'a': 42}), (1, {'a': 42})]
-
-
-    >>> idx2 = Rtree()
-    >>> idx2
-    <rtree.index.Index object at 0x...>
-
-    >>> locs = [(2, 4), (6, 8), (10, 12), (11, 13), (15, 17), (13, 20)]
-
-    >>> for i, (start, stop) in enumerate(locs):
-    ...        idx2.add(i, (start, 1, stop, 1))
-
-    >>> sorted(idx2.nearest((13, 0, 20, 2), 1))
-    [3]
-
-Default page size 4096
-
-    >>> idx3 = Rtree("defaultidx")
-    >>> for i, coords in enumerate(boxes15):
-    ...     idx3.add(i, coords)
-    >>> hits = list(idx3.intersection((0, 0, 60, 60)))
-    >>> len(hits)
-    10
-
-Make sure to delete the index, or the file will not be flushed and it
-will be invalid
-
-    >>> del idx3
-
-Page size 3
-
-    >>> idx4 = Rtree("pagesize3", pagesize=3)
-    >>> for i, coords in enumerate(boxes15):
-    ...     idx4.add(i, coords)
-    >>> hits = list(idx4.intersection((0, 0, 60, 60)))
-    >>> len(hits)
-    10
-
-    >>> idx4.close()
-    >>> del idx4
-
-Test invalid name
-
-    >>> inv = Rtree("bogus/foo")  #doctest: +IGNORE_EXCEPTION_DETAIL
-    Traceback (most recent call last):
-    ...
-    OSError: Unable to open file 'bogus/foo.idx' for index storage
-
-Load a persisted index
-
-    >>> import shutil
-    >>> _ = shutil.copy("defaultidx.dat", "testing.dat")
-    >>> _ = shutil.copy("defaultidx.idx", "testing.idx")
-
-    >>> idx = Rtree("testing")
-    >>> hits = list(idx.intersection((0, 0, 60, 60)))
-    >>> idx.flush()
-    >>> len(hits)
-    10
-
-Make a 3D index
-    >>> p = index.Property()
-    >>> p.dimension = 3
-
-
-With interleaved=False, the order of input and output is:
-(xmin, xmax, ymin, ymax, zmin, zmax)
-
-    >>> idx3d = index.Index(properties=p, interleaved=False)
-    >>> idx3d
-    <rtree.index.Index object at 0x...>
-
-    >>> idx3d.insert(1, (0, 0, 60, 60, 22, 22.0))
-
-    >>> 1 in idx3d.intersection((-1, 1, 58, 62, 22, 24))
-    True
-
-
-Make a 4D index
-    >>> p = index.Property()
-    >>> p.dimension = 4
-
-
-With interleaved=False, the order of input and output is: (xmin, xmax, ymin, ymax, zmin, zmax, kmin, kmax)
-
-    >>> idx4d = index.Index(properties=p, interleaved=False)
-    >>> idx4d
-    <rtree.index.Index object at 0x...>
-
-    >>> idx4d.insert(1, (0, 0, 60, 60, 22, 22.0, 128, 142))
-
-    >>> 1 in idx4d.intersection((-1, 1, 58, 62, 22, 24, 120, 150))
-    True
-
-Check that we can make an index with custom filename extensions
-
-    >>> p = index.Property()
-    >>> p.dat_extension = 'data'
-    >>> p.idx_extension = 'index'
-
-    >>> idx_cust = Rtree('custom', properties=p)
-    >>> idx_cust.flush()
-    >>> for i, coords in enumerate(boxes15):
-    ...     idx_cust.add(i, coords)
-    >>> hits = list(idx_cust.intersection((0, 0, 60, 60)))
-    >>> len(hits)
-    10
-
-    >>> del idx_cust
-
-Reopen the index
-    >>> p2 = index.Property()
-    >>> p2.dat_extension = 'data'
-    >>> p2.idx_extension = 'index'
-
-    >>> idx_cust2 = Rtree('custom', properties=p2)
-    >>> hits = list(idx_cust2.intersection((0, 0, 60, 60)))
-    >>> len(hits)
-    10
-
-    >>> del idx_cust2
-
-Adding the same id twice does not overwrite existing data
-
-    >>> r = Rtree()
-    >>> r.add(1, (2, 2))
-    >>> r.add(1, (3, 3))
-    >>> list(r.intersection((0, 0, 5, 5)))
-    [1, 1]
-
-A stream of data needs to be an iterator that will raise a
-StopIteration. The order of coordinates depends on the interleaved kwarg
-sent to the constructor.
-
-The object can be None, but you must put a placeholder of None there.
-
-    >>> p = index.Property()
-    >>> def data_gen(interleaved=True):
-    ...    for i, (minx, miny, maxx, maxy) in enumerate(boxes15):
-    ...        if interleaved:
-    ...            yield (i, (minx, miny, maxx, maxy), 42)
-    ...        else:
-    ...            yield (i, (minx, maxx, miny, maxy), 42)
-
-    >>> strm_idx = index.Rtree(data_gen(), properties = p)
-
-    >>> hits = list(strm_idx.intersection((0, 0, 60, 60)))
-
-    >>> len(hits)
-    10
-
-
-    >>> sorted(hits)
-    [0, 4, 16, 27, 35, 40, 47, 50, 76, 80]
-
-    >>> hits = list(strm_idx.intersection((0, 0, 60, 60), objects=True))
-    >>> len(hits)
-    10
-
-    >>> hits[0].object
-    42
-
-Try streaming against a persisted index without interleaving.
-    >>> strm_idx = index.Rtree('streamed', data_gen(interleaved=False), properties = p, interleaved=False)
-
-Note the arguments to intersection must be xmin, xmax, ymin, ymax for interleaved=False
-    >>> hits = list(strm_idx.intersection((0, 60, 0, 60)))
-    >>> len(hits)
-    10
-
-    >>> sorted(hits)
-    [0, 4, 16, 27, 35, 40, 47, 50, 76, 80]
-
-    >>> strm_idx.leaves()
-    [(0, [2, 92, 51, 55, 26, 95, 7, 81, 38, 22, 58, 89, 91, 83, 98, 37, 70, 31, 49, 34, 11, 6, 13, 3, 23, 57, 9, 96, 84, 36, 5, 45, 77, 78, 44, 12, 42, 73, 93, 41, 71, 17, 39, 54, 88, 72, 97, 60, 62, 48, 19, 25, 76, 59, 66, 64, 79, 94, 40, 32, 46, 47, 15, 68, 10, 0, 80, 56, 50, 30], [-186.673789279, -96.7177218184, 172.392784956, 45.4856075292]), (2, [61, 74, 29, 99, 16, 43, 35, 33, 27, 63, 18, 90, 8, 53, 82, 21, 65, 24, 4, 1, 75, 67, 86, 52, 28, 85, 87, 14, 69, 20], [-174.739939684, 32.6596016791, 184.761387556, 96.6043699778])]
-
-    >>> hits = list(strm_idx.intersection((0, 60, 0, 60), objects=True))
-    >>> len(hits)
-    10
-
-    >>> hits[0].object
-    42
-
-    >>> hits = list(strm_idx.intersection((0, 60, 0, 60), objects='raw'))
-    >>> hits[0]
-    42
-    >>> len(hits)
-    10
-
-    >>> int(strm_idx.count((0, 60, 0, 60)))
-    10
-
-    >>> del strm_idx
-
-    >>> p = index.Property()
-    >>> p.leaf_capacity = 100
-    >>> p.fill_factor = 0.5
-    >>> p.index_capacity = 10
-    >>> p.near_minimum_overlap_factor = 7
-    >>> idx = index.Index(data_gen(interleaved=False), properties = p, interleaved=False)
-
-    >>> leaves = idx.leaves()
-
-    >>> del idx
-
-    >>> import numpy as np
-    >>> import rtree
-
-    >>> properties = rtree.index.Property()
-    >>> properties.dimension = 3
-
-    >>> points_count = 100
-    >>> points_random = np.random.random((points_count, 3,3))
-    >>> points_bounds = np.column_stack((points_random.min(axis=1), points_random.max(axis=1)))
-
-    >>> stacked = zip(np.arange(points_count), points_bounds, [None] * points_count)
-
-    >>> tree = rtree.index.Index(stacked, properties = properties)
-
-    >>> tid = list(tree.intersection([-1,-1,-1,2,2,2]))
-
-    >>> len(tid)
-    100
-
-    >>> (np.array(tid) == 0).all()
-    False
-
-    >>> from rtree import index
-    >>> idx = index.Index()
-    >>> idx.insert(4321,
-    ...            (34.3776829412, 26.7375853734, 49.3776829412,
-    ...             41.7375853734),
-    ...            obj=42)
-
-    >>> hits = list(idx.contains((0, 0, 60, 60), objects=True))
-    ... # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS +SKIP
-    >>> [(item.object, item.bbox) for item in hits if item.id == 4321]
-    ... # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS +SKIP
-    [(42, [34.37768294..., 26.73758537..., 49.37768294...,
-           41.73758537...])]
-
-
-    >>> print (index.__c_api_version__.decode('utf-8'))
-    1.9.3
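
The streaming constructor exercised in the removed doctests above
bulk-loads an index from an iterator, which is typically much faster
than repeated add() calls. A minimal self-contained sketch (the
coordinates here are invented for illustration):

    from rtree import index

    # Each stream item is (id, coordinates, obj); the obj slot must be
    # present even when unused, so None serves as the placeholder.
    boxes = [(0.0, 0.0, 10.0, 10.0), (5.0, 5.0, 15.0, 15.0)]
    stream = ((i, coords, None) for i, coords in enumerate(boxes))

    idx = index.Index(stream)
    sorted(idx.intersection((0.0, 0.0, 20.0, 20.0)))  # -> [0, 1]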


=====================================
tests/test_misc.txt deleted
=====================================
@@ -1,42 +0,0 @@
-
-Make sure a file-based index is overwritable.
-
-    >>> from rtree.index import Rtree
-    >>> r = Rtree('overwriteme')
-    >>> del r
-    >>> r = Rtree('overwriteme', overwrite=True)
-
-
-The default serializer is pickle; any serializer can be used by overriding dumps and loads.
-
-    >>> r = Rtree()
-    >>> some_data = {"a": 22, "b": [1, "ccc"]}
-    >>> try:
-    ...     import simplejson
-    ...     r.dumps = lambda obj: simplejson.dumps(obj).encode('ascii')
-    ...     r.loads = lambda string: simplejson.loads(string.decode('ascii'))
-    ...     r.add(0, (0, 0, 1, 1), some_data)
-    ...     list(r.nearest((0, 0), 1, objects="raw"))[0] == some_data
-    ... except ImportError:
-    ...     # "no import, failed"
-    ...     True
-    True
-
-
-    >>> r = Rtree()
-    >>> r.add(123, (0, 0, 1, 1))
-    >>> item = list(r.nearest((0, 0), 1, objects=True))[0]
-    >>> item.id
-    123
-
-    >>> r.valid()
-    True
-
-Test UTF-8 filenames
-
-    >>> f = u'gilename\u4500abc'
-
-    >>> r = Rtree(f)
-    >>> r.insert(4321, (34.3776829412, 26.7375853734, 49.3776829412, 41.7375853734), obj=42)
-
-    >>> del r
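
The serializer override shown above is not specific to simplejson; any
dumps/loads pair with the same bytes-in/bytes-out contract works. A
minimal sketch using the standard-library json module instead:

    import json
    from rtree.index import Rtree

    r = Rtree()
    # Replace the default pickle serializer with JSON (bytes in, bytes out).
    r.dumps = lambda obj: json.dumps(obj).encode('ascii')
    r.loads = lambda string: json.loads(string.decode('ascii'))

    r.add(0, (0, 0, 1, 1), {"a": 22, "b": [1, "ccc"]})
    list(r.nearest((0, 0), 1, objects="raw"))[0]  # -> {'a': 22, 'b': [1, 'ccc']}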


=====================================
tests/test_pickle.py deleted
=====================================
@@ -1,20 +0,0 @@
-import pickle
-import unittest
-import rtree.index
-
-
-class TestPickling(unittest.TestCase):
-
-    def test_index(self):
-        idx = rtree.index.Index()
-        unpickled = pickle.loads(pickle.dumps(idx))
-        self.assertNotEqual(idx.handle, unpickled.handle)
-        self.assertEqual(idx.properties.as_dict(),
-                          unpickled.properties.as_dict())
-        self.assertEqual(idx.interleaved, unpickled.interleaved)
-
-    def test_property(self):
-        p = rtree.index.Property()
-        unpickled = pickle.loads(pickle.dumps(p))
-        self.assertNotEqual(p.handle, unpickled.handle)
-        self.assertEqual(p.as_dict(), unpickled.as_dict())
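
The round-trip tested above can also be checked interactively; a short
sketch of the same invariants:

    import pickle
    import rtree.index

    p = rtree.index.Property()
    q = pickle.loads(pickle.dumps(p))

    # A fresh C-side handle is created on unpickling, but the logical
    # settings survive the round-trip intact.
    assert q.handle != p.handle
    assert q.as_dict() == p.as_dict()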


=====================================
tests/test_properties.txt deleted
=====================================
@@ -1,230 +0,0 @@
-Testing rtree properties
-==========================
-
-Make a simple properties object
-
-    >>> from rtree import index
-    >>> p = index.Property()
-
-Test as_dict()
-
-    >>> d = p.as_dict()
-    >>> d['index_id'] is None
-    True
-
-Test creation from kwargs and eval() of its repr()
-
-    >>> q = index.Property(**d)
-    >>> eval(repr(q))['index_id'] is None
-    True
-
-
-Test property setting
-
-    >>> p = index.Property()
-    >>> p.type = 0
-    >>> p.type
-    0
-
-    >>> p.type = 2
-    >>> p.type
-    2
-
-    >>> p.type = 6  #doctest: +IGNORE_EXCEPTION_DETAIL
-    Traceback (most recent call last):
-    ...
-    RTreeError: LASError in "IndexProperty_SetIndexType": Inputted value is not a valid index type
-
-    >>> p.dimension = 3
-    >>> p.dimension
-    3
-
-    >>> p.dimension = 2
-    >>> p.dimension
-    2
-
-    >>> p.dimension = -2  #doctest: +IGNORE_EXCEPTION_DETAIL
-    Traceback (most recent call last):
-    ...
-    RTreeError: Negative or 0 dimensional indexes are not allowed
-
-    >>> p.variant = 0
-    >>> p.variant
-    0
-
-    >>> p.variant = 6  #doctest: +IGNORE_EXCEPTION_DETAIL
-    Traceback (most recent call last):
-    ...
-    RTreeError: LASError in "IndexProperty_SetIndexVariant": Inputted value is not a valid index variant
-
-    >>> p.storage = 0
-    >>> p.storage
-    0
-
-    >>> p.storage = 1
-    >>> p.storage
-    1
-
-    >>> p.storage = 3  #doctest: +IGNORE_EXCEPTION_DETAIL
-    Traceback (most recent call last):
-    ...
-    RTreeError: LASError in "IndexProperty_SetIndexStorage": Inputted value is not a valid index storage type
-
-    >>> p.index_capacity
-    100
-
-    >>> p.index_capacity = 300
-    >>> p.index_capacity
-    300
-
-    >>> p.index_capacity = -4321  #doctest: +IGNORE_EXCEPTION_DETAIL
-    Traceback (most recent call last):
-    ...
-    RTreeError: index_capacity must be > 0
-
-    >>> p.pagesize
-    4096
-
-    >>> p.pagesize = 8192
-    >>> p.pagesize
-    8192
-
-    >>> p.pagesize = -4321  #doctest: +IGNORE_EXCEPTION_DETAIL
-    Traceback (most recent call last):
-    ...
-    RTreeError: Pagesize must be > 0
-
-    >>> p.leaf_capacity
-    100
-
-    >>> p.leaf_capacity = 1000
-    >>> p.leaf_capacity
-    1000
-    >>> p.leaf_capacity = -4321  #doctest: +IGNORE_EXCEPTION_DETAIL
-    Traceback (most recent call last):
-    ...
-    RTreeError: leaf_capacity must be > 0
-
-    >>> p.index_pool_capacity
-    100
-
-    >>> p.index_pool_capacity = 1500
-    >>> p.index_pool_capacity = -4321  #doctest: +IGNORE_EXCEPTION_DETAIL
-    Traceback (most recent call last):
-    ...
-    RTreeError: index_pool_capacity must be > 0
-
-    >>> p.point_pool_capacity
-    500
-
-    >>> p.point_pool_capacity = 1500
-    >>> p.point_pool_capacity = -4321  #doctest: +IGNORE_EXCEPTION_DETAIL
-    Traceback (most recent call last):
-    ...
-    RTreeError: point_pool_capacity must be > 0
-
-    >>> p.region_pool_capacity
-    1000
-
-    >>> p.region_pool_capacity = 1500
-    >>> p.region_pool_capacity
-    1500
-    >>> p.region_pool_capacity = -4321  #doctest: +IGNORE_EXCEPTION_DETAIL
-    Traceback (most recent call last):
-    ...
-    RTreeError: region_pool_capacity must be > 0
-
-    >>> p.buffering_capacity
-    10
-
-    >>> p.buffering_capacity = 100
-    >>> p.buffering_capacity = -4321  #doctest: +IGNORE_EXCEPTION_DETAIL
-    Traceback (most recent call last):
-    ...
-    RTreeError: buffering_capacity must be > 0
-
-    >>> p.tight_mbr
-    True
-
-    >>> p.tight_mbr = 100
-    >>> p.tight_mbr
-    True
-
-    >>> p.tight_mbr = False
-    >>> p.tight_mbr
-    False
-
-    >>> p.overwrite
-    True
-
-    >>> p.overwrite = 100
-    >>> p.overwrite
-    True
-
-    >>> p.overwrite = False
-    >>> p.overwrite
-    False
-
-    >>> p.near_minimum_overlap_factor
-    32
-
-    >>> p.near_minimum_overlap_factor = 100
-    >>> p.near_minimum_overlap_factor = -4321  #doctest: +IGNORE_EXCEPTION_DETAIL
-    Traceback (most recent call last):
-    ...
-    RTreeError: near_minimum_overlap_factor must be > 0
-
-    >>> p.writethrough
-    False
-
-    >>> p.writethrough = 100
-    >>> p.writethrough
-    True
-
-    >>> p.writethrough = False
-    >>> p.writethrough
-    False
-
-    >>> '%.2f' % p.fill_factor
-    '0.70'
-
-    >>> p.fill_factor = 0.99
-    >>> '%.2f' % p.fill_factor
-    '0.99'
-
-    >>> '%.2f' % p.split_distribution_factor
-    '0.40'
-
-    >>> p.tpr_horizon
-    20.0
-
-    >>> '%.2f' % p.reinsert_factor
-    '0.30'
-
-    >>> p.filename
-    ''
-
-    >>> p.filename = 'testing123testing'
-    >>> p.filename
-    'testing123testing'
-
-    >>> p.dat_extension
-    'dat'
-
-    >>> p.dat_extension = r'data'
-    >>> p.dat_extension
-    'data'
-
-    >>> p.idx_extension
-    'idx'
-    >>> p.idx_extension = 'index'
-    >>> p.idx_extension
-    'index'
-
-    >>> p.index_id  #doctest: +IGNORE_EXCEPTION_DETAIL
-    Traceback (most recent call last):
-    ...
-    RTreeError: Error in "IndexProperty_GetIndexID": Property IndexIdentifier was empty
-    >>> p.index_id = -420
-    >>> int(p.index_id)
-    -420
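
The as_dict()/repr() round-trip at the top of the removed file offers a
convenient way to persist and rebuild a configuration; a minimal
sketch, assuming Property accepts individual settings as keyword
arguments (as the removed doctests suggest):

    from rtree import index

    p = index.Property(leaf_capacity=1000, pagesize=8192)
    d = p.as_dict()

    # repr(p) evaluates back to the same dict of settings, so either
    # form can be used to reconstruct an equivalent Property.
    q = index.Property(**eval(repr(p)))
    assert q.leaf_capacity == 1000 and q.pagesize == 8192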



View it on GitLab: https://salsa.debian.org/debian-gis-team/python-rtree/compare/d99a1c1aa759246165cd829c79f98ad08c43a291...9bab7b30c2709f32c6e71f00b7f5cd940c49d8bc
