[Git][debian-gis-team/fiona][experimental] 5 commits: New upstream version 1.9.4

Bas Couwenberg (@sebastic) gitlab at salsa.debian.org
Wed May 17 04:27:58 BST 2023



Bas Couwenberg pushed to branch experimental at Debian GIS Project / fiona


Commits:
a171a72e by Bas Couwenberg at 2023-05-17T05:14:55+02:00
New upstream version 1.9.4
- - - - -
2a16f9a6 by Bas Couwenberg at 2023-05-17T05:14:58+02:00
Update upstream source from tag 'upstream/1.9.4'

Update to upstream version '1.9.4'
with Debian dir c6a58cee22298dd329ff99993441287f6fdc2707
- - - - -
ff8d56c9 by Bas Couwenberg at 2023-05-17T05:15:13+02:00
New upstream release.

- - - - -
f0e82237 by Bas Couwenberg at 2023-05-17T05:18:04+02:00
Update copyright file.

- - - - -
122448fc by Bas Couwenberg at 2023-05-17T05:18:35+02:00
Set distribution to experimental.

- - - - -


21 changed files:

- CHANGES.txt
- README.rst
- debian/changelog
- debian/copyright
- docs/fiona.rst
- docs/install.rst
- fiona/__init__.py
- fiona/_geometry.pyx
- + fiona/_vendor/munch/LICENSE.txt
- + fiona/_vendor/munch/__init__.py
- + fiona/_vendor/munch/python3_compat.py
- fiona/drvsupport.py
- fiona/fio/helpers.py
- fiona/model.py
- pyproject.toml
- pytest.ini
- setup.py
- tests/test_feature.py
- tests/test_model.py
- tests/test_non_counting_layer.py
- tests/test_unicode.py


Changes:

=====================================
CHANGES.txt
=====================================
@@ -3,6 +3,23 @@ Changes
 
 All issue numbers are relative to https://github.com/Toblerity/Fiona/issues.
 
+1.9.4 (2023-05-16)
+------------------
+
+- The performance of Feature.from_dict() has been improved (#1267).
+- Several sources of meaningless log messages from fiona._geometry about NULL
+  geometries are avoided (#1264).
+- The Parquet driver has been added to the list of supported drivers and will
+  be available if your system's GDAL library links libarrow. Note that fiona
+  wheels on PyPI do not include libarrow as it is rather large.
+- Ensure that fiona._vendor modules are found and included.
+- Bytes type feature properties are now hex encoded when serializing to GeoJSON
+  (#1263).
+- Docstrings for listdir and listlayers have been clarified and harmonized.
+- Nose style test cases have been converted to unittest.TestCase (#1256).
+- The munch package used by fio-filter and fio-calc is now vendored and patched
+  to remove usage of the deprecated pkg_resources module (#1255).
+
 1.9.3 (2023-04-10)
 ------------------
 


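The changelog entry for #1263 says bytes-type feature properties are now hex encoded when serializing to GeoJSON. A stdlib-only sketch of that behavior (not Fiona's actual implementation; `jsonify_properties` is a hypothetical helper name) looks like this:

```python
import json

def jsonify_properties(props):
    # Hex-encode bytes values so the properties mapping is JSON-serializable,
    # as described for issue #1263. Other value types pass through unchanged.
    return {k: (v.hex() if isinstance(v, (bytes, bytearray)) else v)
            for k, v in props.items()}

feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [0.0, 0.0]},
    "properties": jsonify_properties({"label": "Null Island", "raw": b"\xde\xad"}),
}
print(json.dumps(feature))
```

Without such encoding, `json.dumps` raises a `TypeError` on bytes values, which is why a textual (hex) representation is needed for GeoJSON output.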
=====================================
README.rst
=====================================
@@ -35,11 +35,13 @@ applications, not so much for production. They are not tested for compatibility
 with all other binary wheels, conda packages, or QGIS, and omit many of GDAL's
 optional format drivers. If you need, for example, GML support you will need to
 build and install Fiona from a source distribution. It is possible to install
-Fiona from source using pip and the `--no-binary` option. A specific GDAL
-installation can be selected by setting the GDAL_CONFIG environment variable.
+Fiona from source using pip (version >= 22.3) and the `--no-binary` option. A
+specific GDAL installation can be selected by setting the GDAL_CONFIG
+environment variable.
 
 .. code-block:: console
 
+    pip install -U pip
     pip install --no-binary fiona fiona
 
 Many users find Anaconda and conda-forge a good way to install Fiona and get


=====================================
debian/changelog
=====================================
@@ -1,3 +1,11 @@
+fiona (1.9.4-1~exp1) experimental; urgency=medium
+
+  * Team upload.
+  * New upstream release.
+  * Update copyright file.
+
+ -- Bas Couwenberg <sebastic at debian.org>  Wed, 17 May 2023 05:18:21 +0200
+
 fiona (1.9.3-1~exp1) experimental; urgency=medium
 
   * Team upload.


=====================================
debian/copyright
=====================================
@@ -35,9 +35,13 @@ Copyright: 2009-2017 Fiona Contributors
  * wilsaj <wilson.andrew.j+github at gmail.com>
 License: BSD-3-Clause
 
-Files: debian/*
-Copyright: 2014-2017 Johan Van de Wauw
-License: BSD-3-Clause
+Files: docs/manual.rst
+Copyright: 2014-2015 Sean C. Gillies
+License: CC-BY-3.0-US
+
+Files: fiona/_vendor/munch/*
+Copyright: 2010, David Schoonover
+License: Expat
 
 Files: tests/data/*
 Copyright: disclaimed
@@ -52,9 +56,9 @@ License: public-domain
     None.  Acknowledgment of the National Atlas of the United States of
     America would be appreciated in products derived from these data."
 
-Files: docs/manual.rst
-Copyright: 2014-2015 Sean C. Gillies
-License: CC-BY-3.0-US
+Files: debian/*
+Copyright: 2014-2017 Johan Van de Wauw
+License: BSD-3-Clause
 
 License: BSD-3-Clause
  Redistribution and use in source and binary forms, with or without
@@ -81,6 +85,25 @@ License: BSD-3-Clause
  ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
  POSSIBILITY OF SUCH DAMAGE.
 
+License: Expat
+ Permission is hereby granted, free of charge, to any person obtaining a
+ copy of this software and associated documentation files (the "Software"),
+ to deal in the Software without restriction, including without limitation
+ the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ and/or sell copies of the Software, and to permit persons to whom the
+ Software is furnished to do so, subject to the following conditions:
+ .
+ The above copyright notice and this permission notice shall be included
+ in all copies or substantial portions of the Software.
+ .
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+ OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ DEALINGS IN THE SOFTWARE.
+
 License: CC-BY-3.0-US
  http://creativecommons.org/licenses/by/3.0/us/legalcode
  .


=====================================
docs/fiona.rst
=====================================
@@ -140,8 +140,8 @@ fiona.vfs module
     :show-inheritance:
 
 
-Module contents
----------------
+fiona module
+------------
 
 .. automodule:: fiona
     :members:


=====================================
docs/install.rst
=====================================
@@ -44,10 +44,11 @@ Without pip:
 
     GDAL_CONFIG=/path/to/gdal-config python setup.py install
 
-With pip:
+With pip (version >= 22.3 is required):
 
 .. code-block:: console
 
+    python -m pip install --user -U pip
     GDAL_CONFIG=/path/to/gdal-config python -m pip install --user .
 
 These are pretty much equivalent. Pip will use setuptools as the build backend.


=====================================
fiona/__init__.py
=====================================
@@ -6,71 +6,26 @@ source GIS community's most trusted geodata access library and
 integrates readily with other Python GIS packages such as pyproj, Rtree
 and Shapely.
 
-How minimal? Fiona can read features as mappings from shapefiles or
-other GIS vector formats and write mappings as features to files using
-the same formats. That's all. There aren't any feature or geometry
-classes. Features and their geometries are just data.
-
 A Fiona feature is a Python mapping inspired by the GeoJSON format. It
-has `id`, 'geometry`, and `properties` keys. The value of `id` is
-a string identifier unique within the feature's parent collection. The
-`geometry` is another mapping with `type` and `coordinates` keys. The
-`properties` of a feature is another mapping corresponding to its
-attribute table. For example:
-
-  {'id': '1',
-   'geometry': {'type': 'Point', 'coordinates': (0.0, 0.0)},
-   'properties': {'label': 'Null Island'} }
-
-is a Fiona feature with a point geometry and one property.
-
-Features are read and written using objects returned by the
-``collection`` function. These ``Collection`` objects are a lot like
-Python ``file`` objects. A ``Collection`` opened in reading mode serves
-as an iterator over features. One opened in a writing mode provides
-a ``write`` method.
-
-Usage
------
-
-Here's an example of reading a select few polygon features from
-a shapefile and for each, picking off the first vertex of the exterior
-ring of the polygon and using that as the point geometry for a new
-feature writing to a "points.shp" file.
-
-  >>> import fiona
-  >>> with fiona.open('docs/data/test_uk.shp', 'r') as inp:
-  ...     output_schema = inp.schema.copy()
-  ...     output_schema['geometry'] = 'Point'
-  ...     with collection(
-  ...             "points.shp", "w",
-  ...             crs=inp.crs,
-  ...             driver="ESRI Shapefile",
-  ...             schema=output_schema
-  ...             ) as out:
-  ...         for f in inp.filter(
-  ...                 bbox=(-5.0, 55.0, 0.0, 60.0)
-  ...                 ):
-  ...             value = f['geometry']['coordinates'][0][0]
-  ...             f['geometry'] = {
-  ...                 'type': 'Point', 'coordinates': value}
-  ...             out.write(f)
-
-Because Fiona collections are context managers, they are closed and (in
-writing modes) flush contents to disk when their ``with`` blocks end.
+has ``id``, ``geometry``, and ``properties`` attributes. The value of
+``id`` is a string identifier unique within the feature's parent
+collection. The ``geometry`` is another mapping with ``type`` and
+``coordinates`` keys. The ``properties`` of a feature is another mapping
+corresponding to its attribute table.
+
+Features are read and written using the ``Collection`` class.  These
+``Collection`` objects are a lot like Python ``file`` objects. A
+``Collection`` opened in reading mode serves as an iterator over
+features. One opened in a writing mode provides a ``write`` method.
+
 """
 
 import glob
 import logging
 import os
-import warnings
+from pathlib import Path
 import platform
-
-try:
-    from pathlib import Path
-except ImportError:  # pragma: no cover
-    class Path:
-        pass
+import warnings
 
 if platform.system() == "Windows":
     _whl_dir = os.path.join(os.path.dirname(__file__), ".libs")
@@ -124,9 +79,10 @@ __all__ = [
     "open",
     "prop_type",
     "prop_width",
+    "remove",
 ]
 
-__version__ = "1.9.3"
+__version__ = "1.9.4"
 __gdal_version__ = get_gdal_release_name()
 
 gdal_version = get_gdal_version_tuple()
@@ -366,19 +322,31 @@ def open(
 collection = open
 
 
+ at ensure_env_with_credentials
 def remove(path_or_collection, driver=None, layer=None):
-    """Deletes an OGR data source
+    """Delete an OGR data source or one of its layers.
 
-    The required ``path`` argument may be an absolute or relative file path.
-    Alternatively, a Collection can be passed instead in which case the path
-    and driver are automatically determined. Otherwise the ``driver`` argument
-    must be specified.
+    If no layer is specified, the entire dataset and all of its layers
+    and associated sidecar files will be deleted.
 
-    Raises a ``RuntimeError`` if the data source cannot be deleted.
+    Parameters
+    ----------
+    path_or_collection : str, pathlib.Path, or Collection
+        The target Collection or its path.
+    driver : str, optional
+        The name of a driver to be used for deletion, optional. Can
+        usually be detected.
+    layer : str or int, optional
+        The name or index of a specific layer.
 
-    Example usage:
+    Returns
+    -------
+    None
 
-      fiona.remove('test.shp', 'ESRI Shapefile')
+    Raises
+    ------
+    DatasetDeleteError
+        If the data source cannot be deleted.
 
     """
     if isinstance(path_or_collection, Collection):
@@ -386,6 +354,8 @@ def remove(path_or_collection, driver=None, layer=None):
         path = collection.path
         driver = collection.driver
         collection.close()
+    elif isinstance(path_or_collection, Path):
+        path = str(path_or_collection)
     else:
         path = path_or_collection
     if layer is None:
@@ -394,33 +364,48 @@ def remove(path_or_collection, driver=None, layer=None):
         _remove_layer(path, layer, driver)
 
 
-def listdir(path):
-    """List files in a directory
+ at ensure_env_with_credentials
+def listdir(fp):
+    """Lists the datasets in a directory or archive file.
+
+    Archive files must be prefixed like "zip://" or "tar://".
+
     Parameters
     ----------
-    path : URI (str or pathlib.Path)
-        A dataset resource identifier.
+    fp : str or pathlib.Path
+        Directory or archive path.
+
     Returns
     -------
-    list
-        A list of filename strings.
+    list of str
+        A list of datasets.
+
+    Raises
+    ------
+    TypeError
+        If the input is not a str or Path.
+
     """
-    if isinstance(path, Path):
-        path = str(path)
-    if not isinstance(path, str):
-        raise TypeError("invalid path: %r" % path)
-    pobj = parse_path(path)
+    if isinstance(fp, Path):
+        fp = str(fp)
+
+    if not isinstance(fp, str):
+        raise TypeError("invalid path: %r" % fp)
+
+    pobj = parse_path(fp)
     return _listdir(vsi_path(pobj))
 
 
 @ensure_env_with_credentials
 def listlayers(fp, vfs=None, **kwargs):
-    """List layer names in their index order
+    """Lists the layers (collections) in a dataset.
+
+    Archive files must be prefixed like "zip://" or "tar://".
 
     Parameters
     ----------
-    fp : URI (str or pathlib.Path), or file-like object
-        A dataset resource identifier or file object.
+    fp : str, pathlib.Path, or file-like object
+        A dataset identifier or file object containing a dataset.
     vfs : str
         This is a deprecated parameter. A URI scheme such as "zip://"
         should be used instead.
@@ -429,9 +414,14 @@ def listlayers(fp, vfs=None, **kwargs):
 
     Returns
     -------
-    list
+    list of str
         A list of layer name strings.
 
+    Raises
+    ------
+    TypeError
+        If the input is not a str, Path, or file object.
+
     """
     if hasattr(fp, 'read'):
         with MemoryFile(fp.read()) as memfile:
@@ -446,7 +436,12 @@ def listlayers(fp, vfs=None, **kwargs):
             raise TypeError("invalid vfs: %r" % vfs)
 
         if vfs:
-            warnings.warn("The vfs keyword argument is deprecated. Instead, pass a URL that uses a zip or tar (for example) scheme.", FionaDeprecationWarning, stacklevel=2)
+            warnings.warn(
+                "The vfs keyword argument is deprecated and will be removed in 2.0. "
+                "Instead, pass a URL that uses a zip or tar (for example) scheme.",
+                FionaDeprecationWarning,
+                stacklevel=2,
+            )
             pobj_vfs = parse_path(vfs)
             pobj_path = parse_path(fp)
             pobj = ParsedPath(pobj_path.path, pobj_vfs.path, pobj_vfs.scheme)
@@ -459,12 +454,24 @@ def listlayers(fp, vfs=None, **kwargs):
 def prop_width(val):
     """Returns the width of a str type property.
 
-    Undefined for non-str properties. Example:
+    Undefined for non-str properties.
+
+    Parameters
+    ----------
+    val : str
+        A type:width string from a collection schema.
+
+    Returns
+    -------
+    int or None
+
+    Examples
+    --------
+    >>> prop_width('str:25')
+    25
+    >>> prop_width('str')
+    80
 
-      >>> prop_width('str:25')
-      25
-      >>> prop_width('str')
-      80
     """
     if val.startswith('str'):
         return int((val.split(":")[1:] or ["80"])[0])
@@ -474,12 +481,23 @@ def prop_width(val):
 def prop_type(text):
     """Returns a schema property's proper Python type.
 
-    Example:
+    Parameters
+    ----------
+    text : str
+        A type name, with or without width.
+
+    Returns
+    -------
+    obj
+        A Python class.
+
+    Examples
+    --------
+    >>> prop_type('int')
+    <class 'int'>
+    >>> prop_type('str:25')
+    <class 'str'>
 
-      >>> prop_type('int')
-      <class 'int'>
-      >>> prop_type('str:25')
-      <class 'str'>
     """
     key = text.split(':')[0]
     return FIELD_TYPES_MAP[key]
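The doctest examples added to `prop_width` and `prop_type` above can be exercised with a stdlib-only sketch of the two helpers; `FIELD_TYPES_MAP` is trimmed to an illustrative subset here rather than Fiona's full mapping:

```python
# Illustrative subset of Fiona's schema type-name -> Python class mapping.
FIELD_TYPES_MAP = {"int": int, "str": str, "float": float}

def prop_width(val):
    # Width of a "str:N" schema property; bare "str" defaults to 80,
    # non-str properties have no defined width.
    if val.startswith("str"):
        return int((val.split(":")[1:] or ["80"])[0])
    return None

def prop_type(text):
    # Python class for a schema type name, with or without a width suffix.
    return FIELD_TYPES_MAP[text.split(":")[0]]

print(prop_width("str:25"), prop_width("str"), prop_type("str:25"))
```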


=====================================
fiona/_geometry.pyx
=====================================
@@ -22,6 +22,12 @@ log.addHandler(NullHandler())
 # mapping of GeoJSON type names to OGR integer geometry types
 GEOJSON2OGR_GEOMETRY_TYPES = dict((v, k) for k, v in GEOMETRY_TYPES.iteritems())
 
+cdef int ogr_get_geometry_type(void *geometry):
+    # OGR_G_GetGeometryType with NULL geometry support
+    if geometry == NULL:
+        return 0 # unknown
+    return OGR_G_GetGeometryType(geometry)
+
 
 cdef unsigned int geometry_type_code(name) except? 9999:
     """Map OGC geometry type names to integer codes."""
@@ -131,7 +137,7 @@ cdef class GeomBuilder:
         count = OGR_G_GetGeometryCount(geom)
         while j < count:
             part = OGR_G_GetGeometryRef(geom, j)
-            code = base_geometry_type_code(OGR_G_GetGeometryType(part))
+            code = base_geometry_type_code(ogr_get_geometry_type(part))
             if code in (
                 OGRGeometryType.PolyhedralSurface.value,
                 OGRGeometryType.TIN.value,
@@ -174,7 +180,7 @@ cdef class GeomBuilder:
         cdef int code
 
         cogr_geometry = OGR_F_GetGeometryRef(feature)
-        code = base_geometry_type_code(OGR_G_GetGeometryType(cogr_geometry))
+        code = base_geometry_type_code(ogr_get_geometry_type(cogr_geometry))
 
         # We need to take ownership of the geometry before we can call 
         # OGR_G_ForceToPolygon or OGR_G_ForceToMultiPolygon
@@ -194,7 +200,7 @@ cdef class GeomBuilder:
         if geom == NULL:
             return None
 
-        code = base_geometry_type_code(OGR_G_GetGeometryType(geom))
+        code = base_geometry_type_code(ogr_get_geometry_type(geom))
 
         # We convert special geometries (Curves, TIN, Triangle, ...)
         # to GeoJSON compatible geometries (LineStrings, Polygons, MultiPolygon, ...)
@@ -208,7 +214,7 @@ cdef class GeomBuilder:
             # OGRGeometryType.Surface.value,  # Abstract type
         ):
             geometry_to_dealloc = OGR_G_GetLinearGeometry(geom, 0.0, NULL)
-            code = base_geometry_type_code(OGR_G_GetGeometryType(geometry_to_dealloc))
+            code = base_geometry_type_code(ogr_get_geometry_type(geometry_to_dealloc))
             geom = geometry_to_dealloc
         elif code in (
             OGRGeometryType.PolyhedralSurface.value,
@@ -219,7 +225,7 @@ cdef class GeomBuilder:
                 geometry_to_dealloc = OGR_G_ForceToMultiPolygon(geom)
             elif code == OGRGeometryType.Triangle.value:
                 geometry_to_dealloc = OGR_G_ForceToPolygon(geom)
-            code = base_geometry_type_code(OGR_G_GetGeometryType(geometry_to_dealloc))
+            code = base_geometry_type_code(ogr_get_geometry_type(geometry_to_dealloc))
             geom = geometry_to_dealloc
         self.ndims = OGR_G_GetCoordinateDimension(geom)
 
@@ -250,7 +256,7 @@ cdef class GeomBuilder:
         if geometry_to_dealloc is not NULL:
            OGR_G_DestroyGeometry(geometry_to_dealloc)
 
-        return Geometry.from_dict(**built)
+        return Geometry.from_dict(built)
 
     cpdef build_wkb(self, object wkb):
         # Build geometry from wkb
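The `ogr_get_geometry_type` wrapper introduced above avoids calling `OGR_G_GetGeometryType` with a NULL geometry, which was the source of the meaningless log messages noted in #1264. The guard pattern can be sketched in pure Python, with a dict standing in for an OGR geometry handle (an illustration only, not the Cython code itself):

```python
def get_geometry_type(geometry):
    # Mirror the commit's NULL guard: report 0 ("unknown", i.e. wkbUnknown)
    # for a missing geometry instead of dereferencing a null handle.
    if geometry is None:
        return 0
    return geometry["type_code"]  # stands in for OGR_G_GetGeometryType

print(get_geometry_type(None), get_geometry_type({"type_code": 3}))
```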


=====================================
fiona/_vendor/munch/LICENSE.txt
=====================================
@@ -0,0 +1,19 @@
+Copyright (c) 2010 David Schoonover
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.


=====================================
fiona/_vendor/munch/__init__.py
=====================================
@@ -0,0 +1,534 @@
+""" Munch is a subclass of dict with attribute-style access.
+
+    >>> b = Munch()
+    >>> b.hello = 'world'
+    >>> b.hello
+    'world'
+    >>> b['hello'] += "!"
+    >>> b.hello
+    'world!'
+    >>> b.foo = Munch(lol=True)
+    >>> b.foo.lol
+    True
+    >>> b.foo is b['foo']
+    True
+
+    It is safe to import * from this module:
+
+        __all__ = ('Munch', 'munchify','unmunchify')
+
+    un/munchify provide dictionary conversion; Munches can also be
+    converted via Munch.to/fromDict().
+"""
+
+from .python3_compat import iterkeys, iteritems, Mapping  #, u
+
+__version__ = "2.5.0"
+VERSION = tuple(map(int, __version__.split('.')[:3]))
+
+__all__ = ('Munch', 'munchify', 'DefaultMunch', 'DefaultFactoryMunch', 'unmunchify')
+
+
+
+class Munch(dict):
+    """ A dictionary that provides attribute-style access.
+
+        >>> b = Munch()
+        >>> b.hello = 'world'
+        >>> b.hello
+        'world'
+        >>> b['hello'] += "!"
+        >>> b.hello
+        'world!'
+        >>> b.foo = Munch(lol=True)
+        >>> b.foo.lol
+        True
+        >>> b.foo is b['foo']
+        True
+
+        A Munch is a subclass of dict; it supports all the methods a dict does...
+
+        >>> sorted(b.keys())
+        ['foo', 'hello']
+
+        Including update()...
+
+        >>> b.update({ 'ponies': 'are pretty!' }, hello=42)
+        >>> print (repr(b))
+        Munch({'ponies': 'are pretty!', 'foo': Munch({'lol': True}), 'hello': 42})
+
+        As well as iteration...
+
+        >>> sorted([ (k,b[k]) for k in b ])
+        [('foo', Munch({'lol': True})), ('hello', 42), ('ponies', 'are pretty!')]
+
+        And "splats".
+
+        >>> "The {knights} who say {ni}!".format(**Munch(knights='lolcats', ni='can haz'))
+        'The lolcats who say can haz!'
+
+        See unmunchify/Munch.toDict, munchify/Munch.fromDict for notes about conversion.
+    """
+    def __init__(self, *args, **kwargs):  # pylint: disable=super-init-not-called
+        self.update(*args, **kwargs)
+
+    # only called if k not found in normal places
+    def __getattr__(self, k):
+        """ Gets key if it exists, otherwise throws AttributeError.
+
+            nb. __getattr__ is only called if key is not found in normal places.
+
+            >>> b = Munch(bar='baz', lol={})
+            >>> b.foo
+            Traceback (most recent call last):
+                ...
+            AttributeError: foo
+
+            >>> b.bar
+            'baz'
+            >>> getattr(b, 'bar')
+            'baz'
+            >>> b['bar']
+            'baz'
+
+            >>> b.lol is b['lol']
+            True
+            >>> b.lol is getattr(b, 'lol')
+            True
+        """
+        try:
+            # Throws exception if not in prototype chain
+            return object.__getattribute__(self, k)
+        except AttributeError:
+            try:
+                return self[k]
+            except KeyError:
+                raise AttributeError(k)
+
+    def __setattr__(self, k, v):
+        """ Sets attribute k if it exists, otherwise sets key k. A KeyError
+            raised by set-item (only likely if you subclass Munch) will
+            propagate as an AttributeError instead.
+
+            >>> b = Munch(foo='bar', this_is='useful when subclassing')
+            >>> hasattr(b.values, '__call__')
+            True
+            >>> b.values = 'uh oh'
+            >>> b.values
+            'uh oh'
+            >>> b['values']
+            Traceback (most recent call last):
+                ...
+            KeyError: 'values'
+        """
+        try:
+            # Throws exception if not in prototype chain
+            object.__getattribute__(self, k)
+        except AttributeError:
+            try:
+                self[k] = v
+            except:
+                raise AttributeError(k)
+        else:
+            object.__setattr__(self, k, v)
+
+    def __delattr__(self, k):
+        """ Deletes attribute k if it exists, otherwise deletes key k. A KeyError
+            raised by deleting the key--such as when the key is missing--will
+            propagate as an AttributeError instead.
+
+            >>> b = Munch(lol=42)
+            >>> del b.lol
+            >>> b.lol
+            Traceback (most recent call last):
+                ...
+            AttributeError: lol
+        """
+        try:
+            # Throws exception if not in prototype chain
+            object.__getattribute__(self, k)
+        except AttributeError:
+            try:
+                del self[k]
+            except KeyError:
+                raise AttributeError(k)
+        else:
+            object.__delattr__(self, k)
+
+    def toDict(self):
+        """ Recursively converts a munch back into a dictionary.
+
+            >>> b = Munch(foo=Munch(lol=True), hello=42, ponies='are pretty!')
+            >>> sorted(b.toDict().items())
+            [('foo', {'lol': True}), ('hello', 42), ('ponies', 'are pretty!')]
+
+            See unmunchify for more info.
+        """
+        return unmunchify(self)
+
+    @property
+    def __dict__(self):
+        return self.toDict()
+
+    def __repr__(self):
+        """ Invertible* string-form of a Munch.
+
+            >>> b = Munch(foo=Munch(lol=True), hello=42, ponies='are pretty!')
+            >>> print (repr(b))
+            Munch({'ponies': 'are pretty!', 'foo': Munch({'lol': True}), 'hello': 42})
+            >>> eval(repr(b))
+            Munch({'ponies': 'are pretty!', 'foo': Munch({'lol': True}), 'hello': 42})
+
+            >>> with_spaces = Munch({1: 2, 'a b': 9, 'c': Munch({'simple': 5})})
+            >>> print (repr(with_spaces))
+            Munch({'a b': 9, 1: 2, 'c': Munch({'simple': 5})})
+            >>> eval(repr(with_spaces))
+            Munch({'a b': 9, 1: 2, 'c': Munch({'simple': 5})})
+
+            (*) Invertible so long as collection contents are each repr-invertible.
+        """
+        return '{0}({1})'.format(self.__class__.__name__, dict.__repr__(self))
+
+    def __dir__(self):
+        return list(iterkeys(self))
+
+    def __getstate__(self):
+        """ Implement a serializable interface used for pickling.
+
+        See https://docs.python.org/3.6/library/pickle.html.
+        """
+        return {k: v for k, v in self.items()}
+
+    def __setstate__(self, state):
+        """ Implement a serializable interface used for pickling.
+
+        See https://docs.python.org/3.6/library/pickle.html.
+        """
+        self.clear()
+        self.update(state)
+
+    __members__ = __dir__  # for python2.x compatibility
+
+    @classmethod
+    def fromDict(cls, d):
+        """ Recursively transforms a dictionary into a Munch via copy.
+
+            >>> b = Munch.fromDict({'urmom': {'sez': {'what': 'what'}}})
+            >>> b.urmom.sez.what
+            'what'
+
+            See munchify for more info.
+        """
+        return munchify(d, cls)
+
+    def copy(self):
+        return type(self).fromDict(self)
+
+    def update(self, *args, **kwargs):
+        """
+        Override built-in method to call custom __setitem__ method that may
+        be defined in subclasses.
+        """
+        for k, v in iteritems(dict(*args, **kwargs)):
+            self[k] = v
+
+    def get(self, k, d=None):
+        """
+        D.get(k[,d]) -> D[k] if k in D, else d.  d defaults to None.
+        """
+        if k not in self:
+            return d
+        return self[k]
+
+    def setdefault(self, k, d=None):
+        """
+        D.setdefault(k[,d]) -> D.get(k,d), also set D[k]=d if k not in D
+        """
+        if k not in self:
+            self[k] = d
+        return self[k]
+
+
+class AutoMunch(Munch):
+    def __setattr__(self, k, v):
+        """ Works the same as Munch.__setattr__ but if you supply
+            a dictionary as value it will convert it to another Munch.
+        """
+        if isinstance(v, Mapping) and not isinstance(v, (AutoMunch, Munch)):
+            v = munchify(v, AutoMunch)
+        super(AutoMunch, self).__setattr__(k, v)
+
+
+class DefaultMunch(Munch):
+    """
+    A Munch that returns a user-specified value for missing keys.
+    """
+
+    def __init__(self, *args, **kwargs):
+        """ Construct a new DefaultMunch. Like collections.defaultdict, the
+            first argument is the default value; subsequent arguments are the
+            same as those for dict.
+        """
+        # Mimic collections.defaultdict constructor
+        if args:
+            default = args[0]
+            args = args[1:]
+        else:
+            default = None
+        super(DefaultMunch, self).__init__(*args, **kwargs)
+        self.__default__ = default
+
+    def __getattr__(self, k):
+        """ Gets key if it exists, otherwise returns the default value."""
+        try:
+            return super(DefaultMunch, self).__getattr__(k)
+        except AttributeError:
+            return self.__default__
+
+    def __setattr__(self, k, v):
+        if k == '__default__':
+            object.__setattr__(self, k, v)
+        else:
+            super(DefaultMunch, self).__setattr__(k, v)
+
+    def __getitem__(self, k):
+        """ Gets key if it exists, otherwise returns the default value."""
+        try:
+            return super(DefaultMunch, self).__getitem__(k)
+        except KeyError:
+            return self.__default__
+
+    def __getstate__(self):
+        """ Implement a serializable interface used for pickling.
+
+        See https://docs.python.org/3.6/library/pickle.html.
+        """
+        return (self.__default__, {k: v for k, v in self.items()})
+
+    def __setstate__(self, state):
+        """ Implement a serializable interface used for pickling.
+
+        See https://docs.python.org/3.6/library/pickle.html.
+        """
+        self.clear()
+        default, state_dict = state
+        self.update(state_dict)
+        self.__default__ = default
+
+    @classmethod
+    def fromDict(cls, d, default=None):
+        # pylint: disable=arguments-differ
+        return munchify(d, factory=lambda d_: cls(default, d_))
+
+    def copy(self):
+        return type(self).fromDict(self, default=self.__default__)
+
+    def __repr__(self):
+        return '{0}({1!r}, {2})'.format(
+            type(self).__name__, self.__undefined__, dict.__repr__(self))
+
+
+class DefaultFactoryMunch(Munch):
+    """ A Munch that calls a user-specified function to generate values for
+        missing keys like collections.defaultdict.
+
+        >>> b = DefaultFactoryMunch(list, {'hello': 'world!'})
+        >>> b.hello
+        'world!'
+        >>> b.foo
+        []
+        >>> b.bar.append('hello')
+        >>> b.bar
+        ['hello']
+    """
+
+    def __init__(self, default_factory, *args, **kwargs):
+        super(DefaultFactoryMunch, self).__init__(*args, **kwargs)
+        self.default_factory = default_factory
+
+    @classmethod
+    def fromDict(cls, d, default_factory):
+        # pylint: disable=arguments-differ
+        return munchify(d, factory=lambda d_: cls(default_factory, d_))
+
+    def copy(self):
+        return type(self).fromDict(self, default_factory=self.default_factory)
+
+    def __repr__(self):
+        factory = self.default_factory.__name__
+        return '{0}({1}, {2})'.format(
+            type(self).__name__, factory, dict.__repr__(self))
+
+    def __setattr__(self, k, v):
+        if k == 'default_factory':
+            object.__setattr__(self, k, v)
+        else:
+            super(DefaultFactoryMunch, self).__setattr__(k, v)
+
+    def __missing__(self, k):
+        self[k] = self.default_factory()
+        return self[k]
+
+
+# While we could convert abstract types like Mapping or Iterable, I think
+# munchify is more likely to "do what you mean" if it is conservative about
+# casting (ex: isinstance(str,Iterable) == True ).
+#
+# Should you disagree, it is not difficult to duplicate this function with
+# more aggressive coercion to suit your own purposes.
+
+def munchify(x, factory=Munch):
+    """ Recursively transforms a dictionary into a Munch via copy.
+
+        >>> b = munchify({'urmom': {'sez': {'what': 'what'}}})
+        >>> b.urmom.sez.what
+        'what'
+
+        munchify can handle intermediary dicts, lists and tuples (as well as
+        their subclasses), but ymmv on custom datatypes.
+
+        >>> b = munchify({ 'lol': ('cats', {'hah':'i win again'}),
+        ...         'hello': [{'french':'salut', 'german':'hallo'}] })
+        >>> b.hello[0].french
+        'salut'
+        >>> b.lol[1].hah
+        'i win again'
+
+        nb. As dicts are not hashable, they cannot be nested in sets/frozensets.
+    """
+    # Munchify x, using `seen` to track object cycles
+    seen = dict()
+
+    def munchify_cycles(obj):
+        # If we've already begun munchifying obj, just return the already-created munchified obj
+        try:
+            return seen[id(obj)]
+        except KeyError:
+            pass
+
+        # Otherwise, first partly munchify obj (but without descending into any lists or dicts) and save that
+        seen[id(obj)] = partial = pre_munchify(obj)
+        # Then finish munchifying lists and dicts inside obj (reusing munchified obj if cycles are encountered)
+        return post_munchify(partial, obj)
+
+    def pre_munchify(obj):
+        # Here we return a skeleton of munchified obj, which is enough to save for later (in case
+        # we need to break cycles) but it needs to be filled out in post_munchify
+        if isinstance(obj, Mapping):
+            return factory({})
+        elif isinstance(obj, list):
+            return type(obj)()
+        elif isinstance(obj, tuple):
+            type_factory = getattr(obj, "_make", type(obj))
+            return type_factory(munchify_cycles(item) for item in obj)
+        else:
+            return obj
+
+    def post_munchify(partial, obj):
+        # Here we finish munchifying the parts of obj that were deferred by pre_munchify because they
+        # might be involved in a cycle
+        if isinstance(obj, Mapping):
+            partial.update((k, munchify_cycles(obj[k])) for k in iterkeys(obj))
+        elif isinstance(obj, list):
+            partial.extend(munchify_cycles(item) for item in obj)
+        elif isinstance(obj, tuple):
+            for (item_partial, item) in zip(partial, obj):
+                post_munchify(item_partial, item)
+
+        return partial
+
+    return munchify_cycles(x)
+
+
+def unmunchify(x):
+    """ Recursively converts a Munch into a dictionary.
+
+        >>> b = Munch(foo=Munch(lol=True), hello=42, ponies='are pretty!')
+        >>> sorted(unmunchify(b).items())
+        [('foo', {'lol': True}), ('hello', 42), ('ponies', 'are pretty!')]
+
+        unmunchify will handle intermediary dicts, lists and tuples (as well as
+        their subclasses), but ymmv on custom datatypes.
+
+        >>> b = Munch(foo=['bar', Munch(lol=True)], hello=42,
+        ...         ponies=('are pretty!', Munch(lies='are trouble!')))
+        >>> sorted(unmunchify(b).items()) #doctest: +NORMALIZE_WHITESPACE
+        [('foo', ['bar', {'lol': True}]), ('hello', 42), ('ponies', ('are pretty!', {'lies': 'are trouble!'}))]
+
+        nb. As dicts are not hashable, they cannot be nested in sets/frozensets.
+    """
+
+    # Munchify x, using `seen` to track object cycles
+    seen = dict()
+
+    def unmunchify_cycles(obj):
+        # If we've already begun unmunchifying obj, just return the already-created unmunchified obj
+        try:
+            return seen[id(obj)]
+        except KeyError:
+            pass
+
+        # Otherwise, first partly unmunchify obj (but without descending into any lists or dicts) and save that
+        seen[id(obj)] = partial = pre_unmunchify(obj)
+        # Then finish unmunchifying lists and dicts inside obj (reusing unmunchified obj if cycles are encountered)
+        return post_unmunchify(partial, obj)
+
+    def pre_unmunchify(obj):
+        # Here we return a skeleton of unmunchified obj, which is enough to save for later (in case
+        # we need to break cycles) but it needs to be filled out in post_unmunchify
+        if isinstance(obj, Mapping):
+            return dict()
+        elif isinstance(obj, list):
+            return type(obj)()
+        elif isinstance(obj, tuple):
+            type_factory = getattr(obj, "_make", type(obj))
+            return type_factory(unmunchify_cycles(item) for item in obj)
+        else:
+            return obj
+
+    def post_unmunchify(partial, obj):
+        # Here we finish unmunchifying the parts of obj that were deferred by pre_unmunchify because they
+        # might be involved in a cycle
+        if isinstance(obj, Mapping):
+            partial.update((k, unmunchify_cycles(obj[k])) for k in iterkeys(obj))
+        elif isinstance(obj, list):
+            partial.extend(unmunchify_cycles(v) for v in obj)
+        elif isinstance(obj, tuple):
+            for (value_partial, value) in zip(partial, obj):
+                post_unmunchify(value_partial, value)
+
+        return partial
+
+    return unmunchify_cycles(x)
+
+
+# Serialization
+
+try:
+    try:
+        import json
+    except ImportError:
+        import simplejson as json
+
+    def toJSON(self, **options):
+        """ Serializes this Munch to JSON. Accepts the same keyword options as `json.dumps()`.
+
+            >>> b = Munch(foo=Munch(lol=True), hello=42, ponies='are pretty!')
+            >>> json.dumps(b) == b.toJSON()
+            True
+        """
+        return json.dumps(self, **options)
+
+    def fromJSON(cls, stream, *args, **kwargs):
+        """ Deserializes JSON to Munch or any of its subclasses.
+        """
+        factory = lambda d: cls(*(args + (d,)), **kwargs)
+        return munchify(json.loads(stream), factory=factory)
+
+    Munch.toJSON = toJSON
+    Munch.fromJSON = classmethod(fromJSON)
+
+except ImportError:
+    pass
+
+
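The vendored munchify above uses a two-phase (pre/post) conversion with a `seen` table so that self-referential containers convert without infinite recursion. A minimal standalone sketch of the same pattern, using a hypothetical `AttrDict` stand-in rather than the vendored `Munch` class:

```python
from collections.abc import Mapping


class AttrDict(dict):
    """Minimal Munch-like dict with attribute access (illustration only)."""

    def __getattr__(self, k):
        try:
            return self[k]
        except KeyError:
            raise AttributeError(k)

    def __setattr__(self, k, v):
        self[k] = v


def attrify(x, seen=None):
    """Recursively convert mappings to AttrDict, tolerating cycles.

    As in munchify, the skeleton object is registered in `seen` *before*
    its children are converted, so cyclic references resolve to the same
    already-created object instead of recursing forever.
    """
    if seen is None:
        seen = {}
    if id(x) in seen:
        return seen[id(x)]
    if isinstance(x, Mapping):
        out = AttrDict()
        seen[id(x)] = out
        out.update((k, attrify(v, seen)) for k, v in x.items())
        return out
    if isinstance(x, list):
        out = type(x)()
        seen[id(x)] = out
        out.extend(attrify(v, seen) for v in x)
        return out
    return x


d = {"urmom": {"sez": {"what": "what"}}}
d["self"] = d  # introduce a cycle
b = attrify(d)
assert b.urmom.sez.what == "what"
assert b.self is b  # cycle preserved
```

The vendored implementation additionally handles tuples (including namedtuples via `_make`); this sketch omits that branch for brevity.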


=====================================
fiona/_vendor/munch/python3_compat.py
=====================================
@@ -0,0 +1,6 @@
+from six import u, iteritems, iterkeys # pylint: disable=unused-import
+try:
+    from collections.abc import Mapping  # pylint: disable=unused-import
+except ImportError:
+    # Legacy Python
+    from collections import Mapping  # pylint: disable=unused-import


=====================================
fiona/drvsupport.py
=====================================
@@ -102,6 +102,8 @@ supported_drivers = dict(
         # OpenAir 	OpenAir 	No 	Yes 	Yes
         # multi-layer
         #   ("OpenAir", "r"),
+        # (Geo)Parquet
+        ("Parquet", "raw"),
         # PCI Geomatics Database File 	PCIDSK 	No 	No 	Yes, using internal PCIDSK SDK (from GDAL 1.7.0)
         ("PCIDSK", "raw"),
         # PDS 	PDS 	No 	Yes 	Yes
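In `supported_drivers`, each OGR driver name maps to a mode string whose characters are the permitted open modes ("r" read, "a" append, "w" write; "raw" allows all three). A small illustrative sketch of how such a table can be consulted — the helper name and the driver subset here are hypothetical, not Fiona's actual API:

```python
# Illustrative mode table in the style of fiona.drvsupport.supported_drivers.
# "r" = read, "a" = append, "w" = write; "raw" permits all three.
supported_drivers = {
    "ESRI Shapefile": "raw",
    "GeoJSON": "raw",
    "Parquet": "raw",  # only usable when GDAL is built against libarrow
    "OpenAir": "r",    # read-only in this sketch
}


def driver_supports(driver, mode):
    """Return True if `driver` is known and allows opening in `mode`."""
    return mode in supported_drivers.get(driver, "")


assert driver_supports("Parquet", "w")
assert not driver_supports("OpenAir", "w")
assert not driver_supports("CSV", "r")  # not listed in this sketch
```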


=====================================
fiona/fio/helpers.py
=====================================
@@ -7,9 +7,8 @@ import json
 import math
 import warnings
 
-from munch import munchify
-
 from fiona.model import Geometry, to_dict
+from fiona._vendor.munch import munchify
 
 
 warnings.simplefilter("default")


=====================================
fiona/model.py
=====================================
@@ -1,9 +1,10 @@
 """Fiona data model"""
 
-import itertools
-from collections import OrderedDict
+from binascii import hexlify
 from collections.abc import MutableMapping
+from collections import OrderedDict
 from enum import Enum
+import itertools
 from json import JSONEncoder
 from warnings import warn
 
@@ -85,7 +86,7 @@ class OGRGeometryType(Enum):
 
 
 # Mapping of OGR integer geometry types to GeoJSON type names.
-GEOMETRY_TYPES = {
+_GEO_TYPES = {
     OGRGeometryType.Unknown.value: "Unknown",
     OGRGeometryType.Point.value: "Point",
     OGRGeometryType.LineString.value: "LineString",
@@ -93,7 +94,11 @@ GEOMETRY_TYPES = {
     OGRGeometryType.MultiPoint.value: "MultiPoint",
     OGRGeometryType.MultiLineString.value: "MultiLineString",
     OGRGeometryType.MultiPolygon.value: "MultiPolygon",
-    OGRGeometryType.GeometryCollection.value: "GeometryCollection",
+    OGRGeometryType.GeometryCollection.value: "GeometryCollection"
+}
+
+GEOMETRY_TYPES = {
+    **_GEO_TYPES,
     OGRGeometryType.NONE.value: "None",
     OGRGeometryType.LinearRing.value: "LinearRing",
     OGRGeometryType.Point25D.value: "3D Point",
@@ -195,7 +200,11 @@ class Geometry(Object):
 
     @classmethod
     def from_dict(cls, ob=None, **kwargs):
-        data = dict(getattr(ob, "__geo_interface__", ob) or {}, **kwargs)
+        if ob is not None:
+            data = dict(getattr(ob, "__geo_interface__", ob))
+            data.update(kwargs)
+        else:
+            data = kwargs
 
         if "geometries" in data and data["type"] == "GeometryCollection":
             _ = data.pop("coordinates", None)
@@ -203,7 +212,7 @@ class Geometry(Object):
             return Geometry(
                 type="GeometryCollection",
                 geometries=[
-                    Geometry.from_dict(**part) for part in data.pop("geometries")
+                    Geometry.from_dict(part) for part in data.pop("geometries")
                 ],
                 **data
             )
@@ -280,13 +289,17 @@ class Feature(Object):
 
     @classmethod
     def from_dict(cls, ob=None, **kwargs):
-        data = dict(getattr(ob, "__geo_interface__", ob) or {}, **kwargs)
+        if ob is not None:
+            data = dict(getattr(ob, "__geo_interface__", ob))
+            data.update(kwargs)
+        else:
+            data = kwargs
         geom_data = data.pop("geometry", None)
 
         if isinstance(geom_data, Geometry):
             geom = geom_data
         else:
-            geom = Geometry.from_dict(**geom_data) if geom_data is not None else None
+            geom = Geometry.from_dict(geom_data) if geom_data is not None else None
 
         props_data = data.pop("properties", None)
 
@@ -362,26 +375,29 @@ class Properties(Object):
 
     @classmethod
     def from_dict(cls, mapping=None, **kwargs):
-        data = dict(mapping or {}, **kwargs)
-        return Properties(**data)
+        if mapping:
+            return Properties(**mapping, **kwargs)
+        return Properties(**kwargs)
 
 
 class ObjectEncoder(JSONEncoder):
-    """Encodes Geometry and Feature"""
+    """Encodes Geometry, Feature, and Properties."""
 
     def default(self, o):
         if isinstance(o, (Geometry, Properties)):
-            return {k: v for k, v in o.items() if v is not None}
+            return {k: self.default(v) for k, v in o.items() if v is not None}
         elif isinstance(o, Feature):
-            o_dict = dict(**o)
+            o_dict = dict(o)
             o_dict["type"] = "Feature"
             if o.geometry is not None:
-                o_dict["geometry"] = ObjectEncoder().default(o.geometry)
+                o_dict["geometry"] = self.default(o.geometry)
             if o.properties is not None:
-                o_dict["properties"] = ObjectEncoder().default(o.properties)
+                o_dict["properties"] = self.default(o.properties)
             return o_dict
+        elif isinstance(o, bytes):
+            return hexlify(o)
         else:
-            return JSONEncoder().default(o)
+            return o
 
 
 def decode_object(obj):
@@ -402,10 +418,11 @@ def decode_object(obj):
     else:
         obj = obj.get("__geo_interface__", obj)
 
-        if (obj.get("type", None) == "Feature") or "geometry" in obj:
-            return Feature.from_dict(**obj)
-        elif obj.get("type", None) in list(GEOMETRY_TYPES.values())[:8]:
-            return Geometry.from_dict(**obj)
+        _type = obj.get("type", None)
+        if (_type == "Feature") or "geometry" in obj:
+            return Feature.from_dict(obj)
+        elif _type in _GEO_TYPES.values():
+            return Geometry.from_dict(obj)
         else:
             return obj
 


=====================================
pyproject.toml
=====================================
@@ -40,7 +40,7 @@ dependencies = [
     "click-plugins>=1.0",
     "cligj>=0.5",
     'importlib-metadata;python_version<"3.10"',
-    "munch>=2.3.2",
+    "six",
 ]
 
 [project.optional-dependencies]
@@ -78,8 +78,10 @@ Repository = "https://github.com/Toblerity/Fiona"
 
 [tool.setuptools]
 include-package-data = false
-packages = ["fiona", "fiona.fio"]
 
 [tool.setuptools.dynamic]
 version = {attr = "fiona.__version__"}
 readme = {file = ["README.rst", "CHANGES.txt", "CREDITS.txt"]}
+
+[tool.setuptools.packages]
+find = {}


=====================================
pytest.ini
=====================================
@@ -1,11 +1,12 @@
 [pytest]
 filterwarnings =
+    error
     ignore:.*Sequential read of iterator was interrupted*:RuntimeWarning
     ignore:.*negative slices or start values other than zero may be slow*:RuntimeWarning
     ignore:.*negative step size may be slow*:RuntimeWarning
     ignore:.*is buggy and will be removed in Fiona 2.0.*
-
-markers = 
+    ignore:.*unclosed <socket.socket
+markers =
     iconv: marks tests that require gdal to be compiled with iconv
     network: marks tests that require a network connection
     wheel: marks tests that only works when installed from wheel


=====================================
setup.py
=====================================
@@ -1,8 +1,11 @@
+# Fiona build script.
+
 import logging
 import os
 import shutil
 import subprocess
 import sys
+
 from setuptools import setup
 from setuptools.extension import Extension
 
@@ -21,6 +24,7 @@ except ImportError:
 def check_output(cmd):
     return subprocess.check_output(cmd).decode('utf')
 
+
 def copy_data_tree(datadir, destdir):
     try:
         shutil.rmtree(destdir)


=====================================
tests/test_feature.py
=====================================
@@ -5,6 +5,8 @@ import os
 import shutil
 import sys
 import tempfile
+import unittest
+
 import pytest
 
 import fiona
@@ -14,8 +16,8 @@ from fiona.model import Feature
 from fiona.ogrext import featureRT
 
 
-class TestPointRoundTrip:
-    def setup(self):
+class TestPointRoundTrip(unittest.TestCase):
+    def setUp(self):
         self.tempdir = tempfile.mkdtemp()
         schema = {"geometry": "Point", "properties": {"title": "str"}}
         self.c = Collection(
@@ -25,7 +27,7 @@ class TestPointRoundTrip:
             schema=schema,
         )
 
-    def teardown(self):
+    def tearDown(self):
         self.c.close()
         shutil.rmtree(self.tempdir)
 
@@ -62,15 +64,15 @@ class TestPointRoundTrip:
         assert g.properties["title"] is None
 
 
-class TestLineStringRoundTrip:
-    def setup(self):
+class TestLineStringRoundTrip(unittest.TestCase):
+    def setUp(self):
         self.tempdir = tempfile.mkdtemp()
         schema = {"geometry": "LineString", "properties": {"title": "str"}}
         self.c = Collection(
             os.path.join(self.tempdir, "foo.shp"), "w", "ESRI Shapefile", schema=schema
         )
 
-    def teardown(self):
+    def tearDown(self):
         self.c.close()
         shutil.rmtree(self.tempdir)
 
@@ -101,15 +103,15 @@ class TestLineStringRoundTrip:
         assert g.properties["title"] == "foo"
 
 
-class TestPolygonRoundTrip:
-    def setup(self):
+class TestPolygonRoundTrip(unittest.TestCase):
+    def setUp(self):
         self.tempdir = tempfile.mkdtemp()
         schema = {"geometry": "Polygon", "properties": {"title": "str"}}
         self.c = Collection(
             os.path.join(self.tempdir, "foo.shp"), "w", "ESRI Shapefile", schema=schema
         )
 
-    def teardown(self):
+    def tearDown(self):
         self.c.close()
         shutil.rmtree(self.tempdir)
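These test classes move from pytest's nose-style `setup`/`teardown` methods (deprecated in newer pytest) to `unittest.TestCase` with the camel-case `setUp`/`tearDown` hooks. A minimal standard-library-only example of the converted pattern:

```python
import os
import shutil
import tempfile
import unittest


class TestTempdirLifecycle(unittest.TestCase):
    """Per-test temporary directory, mirroring the converted Fiona tests."""

    def setUp(self):
        # Runs before every test method.
        self.tempdir = tempfile.mkdtemp()

    def tearDown(self):
        # Runs after every test method, even when the test fails.
        shutil.rmtree(self.tempdir)

    def test_tempdir_exists(self):
        self.assertTrue(os.path.isdir(self.tempdir))


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestTempdirLifecycle)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note that a misspelled hook such as `tearDdown` is silently ignored by the unittest runner, leaking the temporary directory, which is why the spelling matters.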
 


=====================================
tests/test_model.py
=====================================
@@ -243,25 +243,19 @@ def test_geometry_encode():
     }
 
 
- at pytest.mark.parametrize("value", [100, "foo"])
-def test_encode_error(value):
-    """Raises TypeError"""
-    with pytest.raises(TypeError):
-        ObjectEncoder().default(value)
-
-
 def test_feature_encode():
     """Can encode a feature"""
     o_dict = ObjectEncoder().default(
         Feature(
             id="foo",
             geometry=Geometry(type="Point", coordinates=(0, 0)),
-            properties=Properties(a=1, foo="bar"),
+            properties=Properties(a=1, foo="bar", bytes=b"01234"),
         )
     )
     assert o_dict["id"] == "foo"
     assert o_dict["geometry"]["type"] == "Point"
     assert o_dict["geometry"]["coordinates"] == (0, 0)
+    assert o_dict["properties"]["bytes"] == b'3031323334'
 
 
 def test_decode_object_hook():
@@ -314,3 +308,9 @@ def test_feature_gi():
     assert gi["id"] == "foo"
     assert gi["geometry"]["type"] == "Point"
     assert gi["geometry"]["coordinates"] == (0, 0)
+
+
+def test_encode_bytes():
+    """Bytes are encoded as hexadecimal."""
+    assert ObjectEncoder().default(b"01234") == b'3031323334'
+


=====================================
tests/test_non_counting_layer.py
=====================================
@@ -1,3 +1,5 @@
+import unittest
+
 import pytest
 
 import fiona
@@ -5,11 +7,11 @@ from fiona.errors import FionaDeprecationWarning
 
 
 @pytest.mark.usefixtures('uttc_path_gpx')
-class TestNonCountingLayer:
-    def setup(self):
+class TestNonCountingLayer(unittest.TestCase):
+    def setUp(self):
         self.c = fiona.open(self.path_gpx, "r", layer="track_points")
 
-    def teardown(self):
+    def tearDown(self):
         self.c.close()
 
     def test_len_fail(self):


=====================================
tests/test_unicode.py
=====================================
@@ -4,6 +4,7 @@ import shutil
 import sys
 import tempfile
 from collections import OrderedDict
+import unittest
 
 import pytest
 
@@ -12,13 +13,13 @@ from fiona.errors import SchemaError
 from fiona.model import Feature
 
 
-class TestUnicodePath:
-    def setup(self):
+class TestUnicodePath(unittest.TestCase):
+    def setUp(self):
         tempdir = tempfile.mkdtemp()
         self.dir = os.path.join(tempdir, "français")
         shutil.copytree(os.path.join(os.path.dirname(__file__), "data"), self.dir)
 
-    def teardown(self):
+    def tearDown(self):
         shutil.rmtree(os.path.dirname(self.dir))
 
     def test_unicode_path(self):
@@ -39,11 +40,11 @@ class TestUnicodePath:
                 assert len(c) == 67
 
 
-class TestUnicodeStringField:
-    def setup(self):
+class TestUnicodeStringField(unittest.TestCase):
+    def setUp(self):
         self.tempdir = tempfile.mkdtemp()
 
-    def teardown(self):
+    def tearDown(self):
         shutil.rmtree(self.tempdir)
 
     @pytest.mark.xfail(reason="OGR silently fails to convert strings")



View it on GitLab: https://salsa.debian.org/debian-gis-team/fiona/-/compare/4323a30db3baf28f44a2ab07586d66ebccb7648c...122448fcd999c30772d3c527279044a35a502ee9
