[Git][debian-gis-team/netcdf4-python][master] 4 commits: New upstream version 1.5.2

Bas Couwenberg gitlab at salsa.debian.org
Wed Sep 4 05:07:23 BST 2019



Bas Couwenberg pushed to branch master at Debian GIS Project / netcdf4-python


Commits:
93a42b17 by Bas Couwenberg at 2019-09-04T03:55:06Z
New upstream version 1.5.2
- - - - -
1bc993ba by Bas Couwenberg at 2019-09-04T03:55:10Z
Update upstream source from tag 'upstream/1.5.2'

Update to upstream version '1.5.2'
with Debian dir 265aa714cf7782f3de407679b4f6d0a092482451
- - - - -
4a39dc90 by Bas Couwenberg at 2019-09-04T03:55:23Z
New upstream release.

- - - - -
16dcd4ba by Bas Couwenberg at 2019-09-04T03:56:33Z
Set distribution to unstable.

- - - - -


11 changed files:

- .appveyor.yml
- .travis.yml
- Changelog
- README.md
- debian/changelog
- docs/netCDF4/index.html
- netCDF4/_netCDF4.pyx
- setup.py
- test/tst_atts.py
- test/tst_endian.py
- test/tst_netcdftime.py


Changes:

=====================================
.appveyor.yml
=====================================
@@ -27,8 +27,8 @@ install:
   - cmd: call %CONDA_INSTALL_LOCN%\Scripts\activate.bat
   - cmd: conda config --set always_yes yes --set changeps1 no --set show_channel_urls true
   - cmd: conda update conda
-  - cmd: conda config --remove channels defaults --force
   - cmd: conda config --add channels conda-forge --force
+  - cmd: conda config --set channel_priority strict
   - cmd: set PYTHONUNBUFFERED=1
   - cmd: conda install conda-build vs2008_express_vc_python_patch
   - cmd: call setup_x64


=====================================
.travis.yml
=====================================
@@ -1,6 +1,6 @@
 language: python
 dist: xenial
-sudo: true
+cache: pip
 
 addons:
   apt:
@@ -17,6 +17,7 @@ env:
 
 python:
   - "2.7"
+  - "3.5"
   - "3.6"
   - "3.7"
   - "3.8-dev"
@@ -39,7 +40,6 @@ matrix:
         - DEPENDS="numpy==1.10.0 cython==0.21 ordereddict==1.1 setuptools==18.0 cftime"
     # test MPI with latest released version
     - python: 3.7
-      dist: xenial
       env: 
         - MPI=1
         - CC=mpicc.mpich
@@ -55,7 +55,6 @@ matrix:
             - libhdf5-mpich-dev
     # test MPI with latest released version
     - python: 3.7
-      dist: xenial
       env: 
         - MPI=1
         - CC=mpicc.mpich
@@ -72,7 +71,6 @@ matrix:
             - libhdf5-mpich-dev
     # test with netcdf-c from github master
     - python: 3.7
-      dist: xenial
       env:
         - MPI=1
         - CC=mpicc.mpich


=====================================
Changelog
=====================================
@@ -1,3 +1,17 @@
+ version 1.5.2 (not yet released)
+==============================
+ * fix for scaling bug when _Unsigned attribute is set and byteorder of data
+   does not match native byteorder (issue #930).
+ * revise documentation for Python 3 (issue #946).
+ * establish support for Python 2.7, 3.5, 3.6 and 3.7 (issue #948).
+ * use dict built-in instead of OrderedDict for Python 3.7+
+   (pull request #955).
+ * remove underline ANSI in Dataset string representation (pull request #956).
+ * remove newlines from string representation (pull request #960).
+ * fix for issue #957 (size of scalar var is a float since numpy.prod(())=1.0).
+ * make sure Variable.setncattr fails to set _FillValue (issue #959).
+ * fix detection of parallel HDF5 support with netcdf-c 4.6.1 (issue #964).
+
  version 1.5.1.2 (tag v1.5.1.2rel)
 ==================================
  * fix another slicing bug introduced by the fix to issue #906 (issue #922).
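
The #957 entry above is easy to reproduce with numpy alone: the product of an empty shape tuple is the float 1.0, so `Variable.size` for a scalar variable came back as a float until coerced. A minimal sketch of the behaviour the fix works around:

    :::python
    >>> import numpy
    >>> numpy.prod(())            # product over a scalar's empty shape tuple
    1.0
    >>> type(numpy.prod(()))      # a numpy float, not an int
    <class 'numpy.float64'>
    >>> int(numpy.prod(()))       # the fix: coerce to a plain int
    1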


=====================================
README.md
=====================================
@@ -10,8 +10,10 @@
 ## News
 For details on the latest updates, see the [Changelog](https://github.com/Unidata/netcdf4-python/blob/master/Changelog).
 
+09/03/2019: Version [1.5.2](https://pypi.python.org/pypi/netCDF4/1.5.2) released. Bugfixes, no new features.
+
 05/06/2019: Version [1.5.1.2](https://pypi.python.org/pypi/netCDF4/1.5.1.2) released. Fixes another slicing
-slicing regression ([issue #922)](https://github.com/Unidata/netcdf4-python/issues/922)) introduced in the 1.5.1 release.
+regression ([issue #922)](https://github.com/Unidata/netcdf4-python/issues/922)) introduced in the 1.5.1 release.
 
 05/02/2019: Version [1.5.1.1](https://pypi.python.org/pypi/netCDF4/1.5.1.1) released. Fixes incorrect `__version__`
 module variable in 1.5.1 release, plus a slicing bug ([issue #919)](https://github.com/Unidata/netcdf4-python/issues/919)).


=====================================
debian/changelog
=====================================
@@ -1,3 +1,9 @@
+netcdf4-python (1.5.2-1) unstable; urgency=medium
+
+  * New upstream release.
+
+ -- Bas Couwenberg <sebastic at debian.org>  Wed, 04 Sep 2019 05:56:15 +0200
+
 netcdf4-python (1.5.1.2-4) unstable; urgency=medium
 
   * Drop Python 2 support.


=====================================
docs/netCDF4/index.html
=====================================
The diff for this file was not included because it is too large.

=====================================
netCDF4/_netCDF4.pyx
=====================================
@@ -1,5 +1,5 @@
 """
-Version 1.5.1.2
+Version 1.5.2
 ---------------
 - - -
 
@@ -151,7 +151,7 @@ Here's an example:
     :::python
     >>> from netCDF4 import Dataset
     >>> rootgrp = Dataset("test.nc", "w", format="NETCDF4")
-    >>> print rootgrp.data_model
+    >>> print(rootgrp.data_model)
     NETCDF4
     >>> rootgrp.close()
 
@@ -182,11 +182,18 @@ in a netCDF 3 file you will get an error message.
     >>> rootgrp = Dataset("test.nc", "a")
     >>> fcstgrp = rootgrp.createGroup("forecasts")
     >>> analgrp = rootgrp.createGroup("analyses")
-    >>> print rootgrp.groups
-    OrderedDict([("forecasts",
-                  <netCDF4._netCDF4.Group object at 0x1b4b7b0>),
-                 ("analyses",
-                  <netCDF4._netCDF4.Group object at 0x1b4b970>)])
+    >>> print(rootgrp.groups)
+    {'forecasts': <class 'netCDF4._netCDF4.Group'>
+    group /forecasts:
+        dimensions(sizes): 
+        variables(dimensions): 
+        groups: , 'analyses': <class 'netCDF4._netCDF4.Group'>
+    group /analyses:
+        dimensions(sizes): 
+        variables(dimensions): 
+        groups: }
+
+
 
 Groups can exist within groups in a `netCDF4.Dataset`, just as directories
 exist within directories in a unix filesystem. Each `netCDF4.Group` instance
@@ -212,40 +219,40 @@ object yields summary information about it's contents.
 
     :::python
     >>> def walktree(top):
-    >>>     values = top.groups.values()
-    >>>     yield values
-    >>>     for value in top.groups.values():
-    >>>         for children in walktree(value):
-    >>>             yield children
-    >>> print rootgrp
-    >>> for children in walktree(rootgrp):
-    >>>      for child in children:
-    >>>          print child
-    <type "netCDF4._netCDF4.Dataset">
-    root group (NETCDF4 file format):
-        dimensions:
-        variables:
+    ...     values = top.groups.values()
+    ...     yield values
+    ...     for value in top.groups.values():
+    ...         for children in walktree(value):
+    ...             yield children
+    >>> print(rootgrp)
+    <class 'netCDF4._netCDF4.Dataset'>
+    root group (NETCDF4 data model, file format HDF5):
+        dimensions(sizes): 
+        variables(dimensions): 
         groups: forecasts, analyses
-    <type "netCDF4._netCDF4.Group">
+    >>> for children in walktree(rootgrp):
+    ...     for child in children:
+    ...         print(child)
+    <class 'netCDF4._netCDF4.Group'>
     group /forecasts:
-        dimensions:
-        variables:
+        dimensions(sizes): 
+        variables(dimensions): 
         groups: model1, model2
-    <type "netCDF4._netCDF4.Group">
+    <class 'netCDF4._netCDF4.Group'>
     group /analyses:
-        dimensions:
-        variables:
-        groups:
-    <type "netCDF4._netCDF4.Group">
+        dimensions(sizes): 
+        variables(dimensions): 
+        groups: 
+    <class 'netCDF4._netCDF4.Group'>
     group /forecasts/model1:
-        dimensions:
-        variables:
-        groups:
-    <type "netCDF4._netCDF4.Group">
+        dimensions(sizes): 
+        variables(dimensions): 
+        groups: 
+    <class 'netCDF4._netCDF4.Group'>
     group /forecasts/model2:
-        dimensions:
-        variables:
-        groups:
+        dimensions(sizes): 
+        variables(dimensions): 
+        groups: 
 
 ## <div id='section3'>3) Dimensions in a netCDF file.
 
@@ -272,11 +279,8 @@ one, and it must be the first (leftmost) dimension of the variable.
 All of the `netCDF4.Dimension` instances are stored in a python dictionary.
 
     :::python
-    >>> print rootgrp.dimensions
-    OrderedDict([("level", <netCDF4._netCDF4.Dimension object at 0x1b48030>),
-                 ("time", <netCDF4._netCDF4.Dimension object at 0x1b481c0>),
-                 ("lat", <netCDF4._netCDF4.Dimension object at 0x1b480f8>),
-                 ("lon", <netCDF4._netCDF4.Dimension object at 0x1b48a08>)])
+    >>> print(rootgrp.dimensions)
+    {'level': <class 'netCDF4._netCDF4.Dimension'> (unlimited): name = 'level', size = 0, 'time': <class 'netCDF4._netCDF4.Dimension'> (unlimited): name = 'time', size = 0, 'lat': <class 'netCDF4._netCDF4.Dimension'>: name = 'lat', size = 73, 'lon': <class 'netCDF4._netCDF4.Dimension'>: name = 'lon', size = 144}
 
 Calling the python `len` function with a `netCDF4.Dimension` instance returns
 the current size of that dimension.
@@ -284,11 +288,11 @@ The `netCDF4.Dimension.isunlimited` method of a `netCDF4.Dimension` instance
 can be used to determine if the dimensions is unlimited, or appendable.
 
     :::python
-    >>> print len(lon)
+    >>> print(len(lon))
     144
-    >>> print lon.isunlimited()
+    >>> print(lon.isunlimited())
     False
-    >>> print time.isunlimited()
+    >>> print(time.isunlimited())
     True
 
 Printing the `netCDF4.Dimension` object
@@ -297,12 +301,11 @@ and whether it is unlimited.
 
     :::python
     >>> for dimobj in rootgrp.dimensions.values():
-    >>>    print dimobj
-    <type "netCDF4._netCDF4.Dimension"> (unlimited): name = "level", size = 0
-    <type "netCDF4._netCDF4.Dimension"> (unlimited): name = "time", size = 0
-    <type "netCDF4._netCDF4.Dimension">: name = "lat", size = 73
-    <type "netCDF4._netCDF4.Dimension">: name = "lon", size = 144
-    <type "netCDF4._netCDF4.Dimension"> (unlimited): name = "time", size = 0
+    ...     print(dimobj)
+    <class 'netCDF4._netCDF4.Dimension'> (unlimited): name = 'level', size = 0
+    <class 'netCDF4._netCDF4.Dimension'> (unlimited): name = 'time', size = 0
+    <class 'netCDF4._netCDF4.Dimension'>: name = 'lat', size = 73
+    <class 'netCDF4._netCDF4.Dimension'>: name = 'lon', size = 144
 
 `netCDF4.Dimension` names can be changed using the
 `netCDF4.Datatset.renameDimension` method of a `netCDF4.Dataset` or
@@ -348,17 +351,19 @@ used later to access and set variable data and attributes.
     >>> longitudes = rootgrp.createVariable("lon","f4",("lon",))
     >>> # two dimensions unlimited
     >>> temp = rootgrp.createVariable("temp","f4",("time","level","lat","lon",))
+    >>> temp.units = "K"
 
-To get summary info on a `netCDF4.Variable` instance in an interactive session, just print it.
+To get summary info on a `netCDF4.Variable` instance in an interactive session,
+just print it.
 
     :::python
-    >>> print temp
-    <type "netCDF4._netCDF4.Variable">
+    >>> print(temp)
+    <class 'netCDF4._netCDF4.Variable'>
     float32 temp(time, level, lat, lon)
-        least_significant_digit: 3
         units: K
     unlimited dimensions: time, level
     current shape = (0, 0, 73, 144)
+    filling on, default _FillValue of 9.969209968386869e+36 used
 
 You can use a path to create a Variable inside a hierarchy of groups.
 
@@ -371,30 +376,48 @@ You can also query a `netCDF4.Dataset` or `netCDF4.Group` instance directly to o
 `netCDF4.Variable` instances using paths.
 
     :::python
-    >>> print rootgrp["/forecasts/model1"] # a Group instance
-    <type "netCDF4._netCDF4.Group">
+    >>> print(rootgrp["/forecasts/model1"])  # a Group instance
+    <class 'netCDF4._netCDF4.Group'>
     group /forecasts/model1:
-        dimensions(sizes):
+        dimensions(sizes): 
         variables(dimensions): float32 temp(time,level,lat,lon)
-        groups:
-    >>> print rootgrp["/forecasts/model1/temp"] # a Variable instance
-    <type "netCDF4._netCDF4.Variable">
+        groups: 
+    >>> print(rootgrp["/forecasts/model1/temp"])  # a Variable instance
+    <class 'netCDF4._netCDF4.Variable'>
     float32 temp(time, level, lat, lon)
     path = /forecasts/model1
     unlimited dimensions: time, level
     current shape = (0, 0, 73, 144)
-    filling on, default _FillValue of 9.96920996839e+36 used
+    filling on, default _FillValue of 9.969209968386869e+36 used
+
 
 All of the variables in the `netCDF4.Dataset` or `netCDF4.Group` are stored in a
 Python dictionary, in the same way as the dimensions:
 
     :::python
-    >>> print rootgrp.variables
-    OrderedDict([("time", <netCDF4.Variable object at 0x1b4ba70>),
-                 ("level", <netCDF4.Variable object at 0x1b4bab0>),
-                 ("lat", <netCDF4.Variable object at 0x1b4baf0>),
-                 ("lon", <netCDF4.Variable object at 0x1b4bb30>),
-                 ("temp", <netCDF4.Variable object at 0x1b4bb70>)])
+    >>> print(rootgrp.variables)
+    {'time': <class 'netCDF4._netCDF4.Variable'>
+    float64 time(time)
+    unlimited dimensions: time
+    current shape = (0,)
+    filling on, default _FillValue of 9.969209968386869e+36 used, 'level': <class 'netCDF4._netCDF4.Variable'>
+    int32 level(level)
+    unlimited dimensions: level
+    current shape = (0,)
+    filling on, default _FillValue of -2147483647 used, 'lat': <class 'netCDF4._netCDF4.Variable'>
+    float32 lat(lat)
+    unlimited dimensions: 
+    current shape = (73,)
+    filling on, default _FillValue of 9.969209968386869e+36 used, 'lon': <class 'netCDF4._netCDF4.Variable'>
+    float32 lon(lon)
+    unlimited dimensions: 
+    current shape = (144,)
+    filling on, default _FillValue of 9.969209968386869e+36 used, 'temp': <class 'netCDF4._netCDF4.Variable'>
+    float32 temp(time, level, lat, lon)
+        units: K
+    unlimited dimensions: time, level
+    current shape = (0, 0, 73, 144)
+    filling on, default _FillValue of 9.969209968386869e+36 used}
 
 `netCDF4.Variable` names can be changed using the
 `netCDF4.Dataset.renameVariable` method of a `netCDF4.Dataset`
@@ -432,9 +455,9 @@ and attributes that cannot (or should not) be modified by the user.
 
     :::python
     >>> for name in rootgrp.ncattrs():
-    >>>     print "Global attr", name, "=", getattr(rootgrp,name)
+    ...     print("Global attr {} = {}".format(name, getattr(rootgrp, name)))
     Global attr description = bogus example script
-    Global attr history = Created Mon Nov  7 10.30:56 2005
+    Global attr history = Created Mon Jul  8 14:19:41 2019
     Global attr source = netCDF4 python module tutorial
 
 The `__dict__` attribute of a `netCDF4.Dataset`, `netCDF4.Group` or `netCDF4.Variable`
@@ -442,10 +465,8 @@ instance provides all the netCDF attribute name/value pairs in a python
 dictionary:
 
     :::python
-    >>> print rootgrp.__dict__
-    OrderedDict([(u"description", u"bogus example script"),
-                 (u"history", u"Created Thu Mar  3 19:30:33 2011"),
-                 (u"source", u"netCDF4 python module tutorial")])
+    >>> print(rootgrp.__dict__)
+    {'description': 'bogus example script', 'history': 'Created Mon Jul  8 14:19:41 2019', 'source': 'netCDF4 python module tutorial'}
 
 Attributes can be deleted from a netCDF `netCDF4.Dataset`, `netCDF4.Group` or
 `netCDF4.Variable` using the python `del` statement (i.e. `del grp.foo`
@@ -462,7 +483,7 @@ into it? You can just treat it like an array and assign data to a slice.
     >>> lons =  numpy.arange(-180,180,2.5)
     >>> latitudes[:] = lats
     >>> longitudes[:] = lons
-    >>> print "latitudes =\\n",latitudes[:]
+    >>> print("latitudes =\\n{}".format(latitudes[:]))
     latitudes =
     [-90.  -87.5 -85.  -82.5 -80.  -77.5 -75.  -72.5 -70.  -67.5 -65.  -62.5
      -60.  -57.5 -55.  -52.5 -50.  -47.5 -45.  -42.5 -40.  -37.5 -35.  -32.5
@@ -480,17 +501,17 @@ assign data outside the currently defined range of indices.
     >>> # append along two unlimited dimensions by assigning to slice.
     >>> nlats = len(rootgrp.dimensions["lat"])
     >>> nlons = len(rootgrp.dimensions["lon"])
-    >>> print "temp shape before adding data = ",temp.shape
-    temp shape before adding data =  (0, 0, 73, 144)
+    >>> print("temp shape before adding data = {}".format(temp.shape))
+    temp shape before adding data = (0, 0, 73, 144)
     >>>
     >>> from numpy.random import uniform
-    >>> temp[0:5,0:10,:,:] = uniform(size=(5,10,nlats,nlons))
-    >>> print "temp shape after adding data = ",temp.shape
-    temp shape after adding data =  (6, 10, 73, 144)
+    >>> temp[0:5, 0:10, :, :] = uniform(size=(5, 10, nlats, nlons))
+    >>> print("temp shape after adding data = {}".format(temp.shape))
+    temp shape after adding data = (5, 10, 73, 144)
     >>>
     >>> # levels have grown, but no values yet assigned.
-    >>> print "levels shape after adding pressure data = ",levels.shape
-    levels shape after adding pressure data =  (10,)
+    >>> print("levels shape after adding pressure data = {}".format(levels.shape))
+    levels shape after adding pressure data = (10,)
 
 Note that the size of the levels variable grows when data is appended
 along the `level` dimension of the variable `temp`, even though no
@@ -510,7 +531,8 @@ allowed, and these indices work independently along each dimension (similar
 to the way vector subscripts work in fortran).  This means that
 
     :::python
-    >>> temp[0, 0, [0,1,2,3], [0,1,2,3]]
+    >>> temp[0, 0, [0,1,2,3], [0,1,2,3]].shape
+    (4, 4)
 
 returns an array of shape (4,4) when slicing a netCDF variable, but for a
 numpy array it returns an array of shape (4,).
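
For readers comparing the two behaviours, this orthogonal indexing matches what `numpy.ix_` produces. A short numpy-only sketch, with the array shape chosen to match `temp`:

    :::python
    >>> import numpy
    >>> a = numpy.zeros((6, 10, 73, 144))
    >>> a[0, 0, [0,1,2,3], [0,1,2,3]].shape               # numpy pairs the indices
    (4,)
    >>> a[0, 0][numpy.ix_([0,1,2,3], [0,1,2,3])].shape    # orthogonal, as netCDF4 does
    (4, 4)
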
@@ -534,12 +556,12 @@ will extract time indices 0,2 and 4, pressure levels
 Hemisphere longitudes, resulting in a numpy array of shape  (3, 3, 36, 71).
 
     :::python
-    >>> print "shape of fancy temp slice = ",tempdat.shape
-    shape of fancy temp slice =  (3, 3, 36, 71)
+    >>> print("shape of fancy temp slice = {}".format(tempdat.shape))
+    shape of fancy temp slice = (3, 3, 36, 71)
 
 ***Special note for scalar variables***: To extract data from a scalar variable
-`v` with no associated dimensions, use `numpy.asarray(v)` or `v[...]`. The result
-will be a numpy scalar array.
+`v` with no associated dimensions, use `numpy.asarray(v)` or `v[...]`.
+The result will be a numpy scalar array.
 
 By default, netcdf4-python returns numpy masked arrays with values equal to the
 `missing_value` or `_FillValue` variable attributes masked.  The
@@ -572,14 +594,15 @@ can be used:
     >>> from netCDF4 import num2date, date2num
     >>> dates = [datetime(2001,3,1)+n*timedelta(hours=12) for n in range(temp.shape[0])]
     >>> times[:] = date2num(dates,units=times.units,calendar=times.calendar)
-    >>> print "time values (in units %s): " % times.units+"\\n",times[:]
-    time values (in units hours since January 1, 0001):
-    [ 17533056.  17533068.  17533080.  17533092.  17533104.]
+    >>> print("time values (in units {}):\\n{}".format(times.units, times[:]))
+    time values (in units hours since 0001-01-01 00:00:00.0):
+    [17533104. 17533116. 17533128. 17533140. 17533152.]
     >>> dates = num2date(times[:],units=times.units,calendar=times.calendar)
-    >>> print "dates corresponding to time values:\\n",dates
+    >>> print("dates corresponding to time values:\\n{}".format(dates))
     dates corresponding to time values:
-    [2001-03-01 00:00:00 2001-03-01 12:00:00 2001-03-02 00:00:00
-     2001-03-02 12:00:00 2001-03-03 00:00:00]
+    [real_datetime(2001, 3, 1, 0, 0) real_datetime(2001, 3, 1, 12, 0)
+     real_datetime(2001, 3, 2, 0, 0) real_datetime(2001, 3, 2, 12, 0)
+     real_datetime(2001, 3, 3, 0, 0)]
 
 `netCDF4.num2date` converts numeric values of time in the specified `units`
 and `calendar` to datetime objects, and `netCDF4.date2num` does the reverse.
@@ -607,22 +630,22 @@ datasets are not supported).
 
     :::python
     >>> for nf in range(10):
-    >>>     f = Dataset("mftest%s.nc" % nf,"w")
-    >>>     f.createDimension("x",None)
-    >>>     x = f.createVariable("x","i",("x",))
-    >>>     x[0:10] = numpy.arange(nf*10,10*(nf+1))
-    >>>     f.close()
+    ...     with Dataset("mftest%s.nc" % nf, "w", format="NETCDF4_CLASSIC") as f:
+    ...         _ = f.createDimension("x",None)
+    ...         x = f.createVariable("x","i",("x",))
+    ...         x[0:10] = numpy.arange(nf*10,10*(nf+1))
 
 Now read all the files back in at once with `netCDF4.MFDataset`
 
     :::python
     >>> from netCDF4 import MFDataset
     >>> f = MFDataset("mftest*nc")
-    >>> print f.variables["x"][:]
-    [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
-     25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
-     50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74
-     75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99]
+    >>> print(f.variables["x"][:])
+    [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
+     24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
+     48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
+     72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
+     96 97 98 99]
 
 Note that `netCDF4.MFDataset` can only be used to read, not write, multi-file
 datasets.
@@ -673,12 +696,12 @@ In our example, try replacing the line
 with
 
     :::python
-    >>> temp = dataset.createVariable("temp","f4",("time","level","lat","lon",),zlib=True)
+    >>> temp = rootgrp.createVariable("temp","f4",("time","level","lat","lon",),zlib=True)
 
 and then
 
     :::python
-    >>> temp = dataset.createVariable("temp","f4",("time","level","lat","lon",),zlib=True,least_significant_digit=3)
+    >>> temp = rootgrp.createVariable("temp","f4",("time","level","lat","lon",),zlib=True,least_significant_digit=3)
 
 and see how much smaller the resulting files are.
 
@@ -707,7 +730,7 @@ for storing numpy complex arrays.  Here's an example:
     >>> complex128 = numpy.dtype([("real",numpy.float64),("imag",numpy.float64)])
     >>> complex128_t = f.createCompoundType(complex128,"complex128")
     >>> # create a variable with this data type, write some data to it.
-    >>> f.createDimension("x_dim",None)
+    >>> x_dim = f.createDimension("x_dim",None)
     >>> v = f.createVariable("cmplx_var",complex128_t,"x_dim")
     >>> data = numpy.empty(size,complex128) # numpy structured array
     >>> data["real"] = datac.real; data["imag"] = datac.imag
@@ -720,11 +743,11 @@ for storing numpy complex arrays.  Here's an example:
     >>> datac2 = numpy.empty(datain.shape,numpy.complex128)
     >>> # .. fill it with contents of structured array.
     >>> datac2.real = datain["real"]; datac2.imag = datain["imag"]
-    >>> print datac.dtype,datac # original data
-    complex128 [ 0.54030231+0.84147098j -0.84147098+0.54030231j  -0.54030231-0.84147098j]
+    >>> print('{}: {}'.format(datac.dtype, datac)) # original data
+    complex128: [ 0.54030231+0.84147098j -0.84147098+0.54030231j -0.54030231-0.84147098j]
     >>>
-    >>> print datac2.dtype,datac2 # data from file
-    complex128 [ 0.54030231+0.84147098j -0.84147098+0.54030231j  -0.54030231-0.84147098j]
+    >>> print('{}: {}'.format(datac2.dtype, datac2)) # data from file
+    complex128: [ 0.54030231+0.84147098j -0.84147098+0.54030231j -0.54030231-0.84147098j]
 
 Compound types can be nested, but you must create the 'inner'
 ones first. All possible numpy structured arrays cannot be
@@ -735,22 +758,22 @@ in a Python dictionary, just like variables and dimensions. As always, printing
 objects gives useful summary information in an interactive session:
 
     :::python
-    >>> print f
-    <type "netCDF4._netCDF4.Dataset">
-    root group (NETCDF4 file format):
-        dimensions: x_dim
-        variables: cmplx_var
-        groups:
-    <type "netCDF4._netCDF4.Variable">
-    >>> print f.variables["cmplx_var"]
+    >>> print(f)
+    <class 'netCDF4._netCDF4.Dataset'>
+    root group (NETCDF4 data model, file format HDF5):
+        dimensions(sizes): x_dim(3)
+        variables(dimensions): {'names':['real','imag'], 'formats':['<f8','<f8'], 'offsets':[0,8], 'itemsize':16, 'aligned':True} cmplx_var(x_dim)
+        groups: 
+    >>> print(f.variables["cmplx_var"])
+    <class 'netCDF4._netCDF4.Variable'>
     compound cmplx_var(x_dim)
-    compound data type: [("real", "<f8"), ("imag", "<f8")]
+    compound data type: {'names':['real','imag'], 'formats':['<f8','<f8'], 'offsets':[0,8], 'itemsize':16, 'aligned':True}
     unlimited dimensions: x_dim
     current shape = (3,)
-    >>> print f.cmptypes
-    OrderedDict([("complex128", <netCDF4.CompoundType object at 0x1029eb7e8>)])
-    >>> print f.cmptypes["complex128"]
-    <type "netCDF4._netCDF4.CompoundType">: name = "complex128", numpy dtype = [(u"real","<f8"), (u"imag", "<f8")]
+    >>> print(f.cmptypes)
+    {'complex128': <class 'netCDF4._netCDF4.CompoundType'>: name = 'complex128', numpy dtype = {'names':['real','imag'], 'formats':['<f8','<f8'], 'offsets':[0,8], 'itemsize':16, 'aligned':True}}
+    >>> print(f.cmptypes["complex128"])
+    <class 'netCDF4._netCDF4.CompoundType'>: name = 'complex128', numpy dtype = {'names':['real','imag'], 'formats':['<f8','<f8'], 'offsets':[0,8], 'itemsize':16, 'aligned':True}
 
 ## <div id='section11'>11) Variable-length (vlen) data types.
 
@@ -784,32 +807,37 @@ In this case, they contain 1-D numpy `int32` arrays of random length between
 
     :::python
     >>> import random
+    >>> random.seed(54321)
     >>> data = numpy.empty(len(y)*len(x),object)
     >>> for n in range(len(y)*len(x)):
-    >>>    data[n] = numpy.arange(random.randint(1,10),dtype="int32")+1
+    ...     data[n] = numpy.arange(random.randint(1,10),dtype="int32")+1
     >>> data = numpy.reshape(data,(len(y),len(x)))
     >>> vlvar[:] = data
-    >>> print "vlen variable =\\n",vlvar[:]
+    >>> print("vlen variable =\\n{}".format(vlvar[:]))
     vlen variable =
-    [[[ 1  2  3  4  5  6  7  8  9 10] [1 2 3 4 5] [1 2 3 4 5 6 7 8]]
-     [[1 2 3 4 5 6 7] [1 2 3 4 5 6] [1 2 3 4 5]]
-     [[1 2 3 4 5] [1 2 3 4] [1]]
-     [[ 1  2  3  4  5  6  7  8  9 10] [ 1  2  3  4  5  6  7  8  9 10]
-      [1 2 3 4 5 6 7 8]]]
-    >>> print f
-    <type "netCDF4._netCDF4.Dataset">
-    root group (NETCDF4 file format):
-        dimensions: x, y
-        variables: phony_vlen_var
-        groups:
-    >>> print f.variables["phony_vlen_var"]
-    <type "netCDF4._netCDF4.Variable">
+    [[array([1, 2, 3, 4, 5, 6, 7, 8], dtype=int32) array([1, 2], dtype=int32)
+      array([1, 2, 3, 4], dtype=int32)]
+     [array([1, 2, 3], dtype=int32)
+      array([1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int32)
+      array([1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int32)]
+     [array([1, 2, 3, 4, 5, 6, 7], dtype=int32) array([1, 2, 3], dtype=int32)
+      array([1, 2, 3, 4, 5, 6], dtype=int32)]
+     [array([1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int32)
+      array([1, 2, 3, 4, 5], dtype=int32) array([1, 2], dtype=int32)]]
+    >>> print(f)
+    <class 'netCDF4._netCDF4.Dataset'>
+    root group (NETCDF4 data model, file format HDF5):
+        dimensions(sizes): x(3), y(4)
+        variables(dimensions): int32 phony_vlen_var(y,x)
+        groups: 
+    >>> print(f.variables["phony_vlen_var"])
+    <class 'netCDF4._netCDF4.Variable'>
     vlen phony_vlen_var(y, x)
     vlen data type: int32
-    unlimited dimensions:
+    unlimited dimensions: 
     current shape = (4, 3)
-    >>> print f.VLtypes["phony_vlen"]
-    <type "netCDF4._netCDF4.VLType">: name = "phony_vlen", numpy dtype = int32
+    >>> print(f.vltypes["phony_vlen"])
+    <class 'netCDF4._netCDF4.VLType'>: name = 'phony_vlen', numpy dtype = int32
 
 Numpy object arrays containing python strings can also be written as vlen
 variables,  For vlen strings, you don't need to create a vlen data type.
@@ -819,7 +847,7 @@ with fixed length greater than 1) when calling the
 
     :::python
     >>> z = f.createDimension("z",10)
-    >>> strvar = rootgrp.createVariable("strvar", str, "z")
+    >>> strvar = f.createVariable("strvar", str, "z")
 
 In this example, an object array is filled with random python strings with
 random lengths between 2 and 12 characters, and the data in the object
@@ -829,24 +857,25 @@ array is assigned to the vlen string variable.
     >>> chars = "1234567890aabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
     >>> data = numpy.empty(10,"O")
     >>> for n in range(10):
-    >>>     stringlen = random.randint(2,12)
-    >>>     data[n] = "".join([random.choice(chars) for i in range(stringlen)])
+    ...     stringlen = random.randint(2,12)
+    ...     data[n] = "".join([random.choice(chars) for i in range(stringlen)])
     >>> strvar[:] = data
-    >>> print "variable-length string variable:\\n",strvar[:]
+    >>> print("variable-length string variable:\\n{}".format(strvar[:]))
     variable-length string variable:
-    [aDy29jPt 5DS9X8 jd7aplD b8t4RM jHh8hq KtaPWF9cQj Q1hHN5WoXSiT MMxsVeq tdLUzvVTzj]
-    >>> print f
-    <type "netCDF4._netCDF4.Dataset">
-    root group (NETCDF4 file format):
-        dimensions: x, y, z
-        variables: phony_vlen_var, strvar
-        groups:
-    >>> print f.variables["strvar"]
-    <type "netCDF4._netCDF4.Variable">
+    ['Lh' '25F8wBbMI' '53rmM' 'vvjnb3t63ao' 'qjRBQk6w' 'aJh' 'QF'
+     'jtIJbJACaQk4' '3Z5' 'bftIIq']
+    >>> print(f)
+    <class 'netCDF4._netCDF4.Dataset'>
+    root group (NETCDF4 data model, file format HDF5):
+        dimensions(sizes): x(3), y(4), z(10)
+        variables(dimensions): int32 phony_vlen_var(y,x), <class 'str'> strvar(z)
+        groups: 
+    >>> print(f.variables["strvar"])
+    <class 'netCDF4._netCDF4.Variable'>
     vlen strvar(z)
-    vlen data type: <type "str">
-    unlimited dimensions:
-    current size = (10,)
+    vlen data type: <class 'str'>
+    unlimited dimensions: 
+    current shape = (10,)
 
 It is also possible to set contents of vlen string variables with numpy arrays
 of any string or unicode data type. Note, however, that accessing the contents
@@ -866,19 +895,14 @@ values and their names are used to define an Enum data type using
     :::python
     >>> nc = Dataset('clouds.nc','w')
     >>> # python dict with allowed values and their names.
-    >>> enum_dict = {u'Altocumulus': 7, u'Missing': 255,
-    >>> u'Stratus': 2, u'Clear': 0,
-    >>> u'Nimbostratus': 6, u'Cumulus': 4, u'Altostratus': 5,
-    >>> u'Cumulonimbus': 1, u'Stratocumulus': 3}
+    >>> enum_dict = {'Altocumulus': 7, 'Missing': 255,
+    ... 'Stratus': 2, 'Clear': 0,
+    ... 'Nimbostratus': 6, 'Cumulus': 4, 'Altostratus': 5,
+    ... 'Cumulonimbus': 1, 'Stratocumulus': 3}
     >>> # create the Enum type called 'cloud_t'.
     >>> cloud_type = nc.createEnumType(numpy.uint8,'cloud_t',enum_dict)
-    >>> print cloud_type
-    <type 'netCDF4._netCDF4.EnumType'>: name = 'cloud_t',
-    numpy dtype = uint8, fields/values ={u'Cumulus': 4,
-    u'Altocumulus': 7, u'Missing': 255,
-    u'Stratus': 2, u'Clear': 0,
-    u'Cumulonimbus': 1, u'Stratocumulus': 3,
-    u'Nimbostratus': 6, u'Altostratus': 5}
+    >>> print(cloud_type)
+    <class 'netCDF4._netCDF4.EnumType'>: name = 'cloud_t', numpy dtype = uint8, fields/values ={'Altocumulus': 7, 'Missing': 255, 'Stratus': 2, 'Clear': 0, 'Nimbostratus': 6, 'Cumulus': 4, 'Altostratus': 5, 'Cumulonimbus': 1, 'Stratocumulus': 3}
 
 A new variable can be created in the usual way using this data type.
 Integer data is written to the variable that represents the named
@@ -890,30 +914,25 @@ specified names.
     >>> time = nc.createDimension('time',None)
     >>> # create a 1d variable of type 'cloud_type'.
     >>> # The fill_value is set to the 'Missing' named value.
-    >>> cloud_var =
-    >>> nc.createVariable('primary_cloud',cloud_type,'time',
-    >>> fill_value=enum_dict['Missing'])
+    >>> cloud_var = nc.createVariable('primary_cloud',cloud_type,'time',
+    ...                               fill_value=enum_dict['Missing'])
     >>> # write some data to the variable.
-    >>> cloud_var[:] = [enum_dict['Clear'],enum_dict['Stratus'],
-    >>> enum_dict['Cumulus'],enum_dict['Missing'],
-    >>> enum_dict['Cumulonimbus']]
+    >>> cloud_var[:] = [enum_dict[k] for k in ['Clear', 'Stratus', 'Cumulus',
+    ...                                        'Missing', 'Cumulonimbus']]
     >>> nc.close()
     >>> # reopen the file, read the data.
     >>> nc = Dataset('clouds.nc')
     >>> cloud_var = nc.variables['primary_cloud']
-    >>> print cloud_var
-    <type 'netCDF4._netCDF4.Variable'>
+    >>> print(cloud_var)
+    <class 'netCDF4._netCDF4.Variable'>
     enum primary_cloud(time)
         _FillValue: 255
     enum data type: uint8
     unlimited dimensions: time
     current shape = (5,)
-    >>> print cloud_var.datatype.enum_dict
-    {u'Altocumulus': 7, u'Missing': 255, u'Stratus': 2,
-    u'Clear': 0, u'Nimbostratus': 6, u'Cumulus': 4,
-    u'Altostratus': 5, u'Cumulonimbus': 1,
-    u'Stratocumulus': 3}
-    >>> print cloud_var[:]
+    >>> print(cloud_var.datatype.enum_dict)
+    {'Altocumulus': 7, 'Missing': 255, 'Stratus': 2, 'Clear': 0, 'Nimbostratus': 6, 'Cumulus': 4, 'Altostratus': 5, 'Cumulonimbus': 1, 'Stratocumulus': 3}
+    >>> print(cloud_var[:])
     [0 2 4 -- 1]
     >>> nc.close()
 
@@ -941,7 +960,7 @@ when a new dataset is created or an existing dataset is opened,
 use the `parallel` keyword to enable parallel access.
 
     :::python
-    >>> nc = Dataset('parallel_tst.nc','w',parallel=True)
+    >>> nc = Dataset('parallel_test.nc','w',parallel=True)
 
 The optional `comm` keyword may be used to specify a particular
 MPI communicator (`MPI_COMM_WORLD` is used by default).  Each process (or rank)
@@ -950,7 +969,7 @@ written to a different variable index on each task
 
     :::python
     >>> d = nc.createDimension('dim',4)
-    >>> v = nc.createVariable('var', numpy.int, 'dim')
+    >>> v = nc.createVariable('var', np.int, 'dim')
     >>> v[rank] = rank
     >>> nc.close()
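
The snippet only makes sense under an MPI launcher. A minimal self-contained sketch, assuming mpi4py is installed and netcdf4-python was built with parallel support; `numpy.int64` stands in for the tutorial's `np.int`, matching the `int64 var(dim)` shown in the ncdump output below:

    :::python
    # run with e.g.: mpirun -np 4 python writer.py
    from mpi4py import MPI
    import numpy
    from netCDF4 import Dataset

    rank = MPI.COMM_WORLD.rank            # this process's rank
    nc = Dataset('parallel_test.nc', 'w', parallel=True)
    d = nc.createDimension('dim', 4)
    v = nc.createVariable('var', numpy.int64, 'dim')
    v[rank] = rank                        # each rank writes its own index
    nc.close()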
 
@@ -958,9 +977,9 @@ written to a different variable index on each task
     netcdf parallel_test {
     dimensions:
         dim = 4 ;
-        variables:
+    variables:
         int64 var(dim) ;
-        data:
+    data:
 
         var = 0, 1, 2, 3 ;
     }
@@ -1010,18 +1029,19 @@ fixed-width byte string array (dtype `S#`), otherwise a numpy unicode (dtype
 characters with one more dimension. For example,
 
     :::python
+    >>> from netCDF4 import stringtochar
     >>> nc = Dataset('stringtest.nc','w',format='NETCDF4_CLASSIC')
-    >>> nc.createDimension('nchars',3)
-    >>> nc.createDimension('nstrings',None)
+    >>> _ = nc.createDimension('nchars',3)
+    >>> _ = nc.createDimension('nstrings',None)
     >>> v = nc.createVariable('strings','S1',('nstrings','nchars'))
     >>> datain = numpy.array(['foo','bar'],dtype='S3')
     >>> v[:] = stringtochar(datain) # manual conversion to char array
-    >>> v[:] # data returned as char array
+    >>> print(v[:]) # data returned as char array
     [[b'f' b'o' b'o']
-    [b'b' b'a' b'r']]
+     [b'b' b'a' b'r']]
     >>> v._Encoding = 'ascii' # this enables automatic conversion
     >>> v[:] = datain # conversion to char array done internally
-    >>> v[:] # data returned in numpy string array
+    >>> print(v[:])  # data returned in numpy string array
     ['foo' 'bar']
     >>> nc.close()
 
@@ -1044,25 +1064,25 @@ Here's an example:
     :::python
     >>> nc = Dataset('compoundstring_example.nc','w')
     >>> dtype = numpy.dtype([('observation', 'f4'),
-                      ('station_name','S80')])
+    ...                      ('station_name','S10')])
     >>> station_data_t = nc.createCompoundType(dtype,'station_data')
-    >>> nc.createDimension('station',None)
+    >>> _ = nc.createDimension('station',None)
     >>> statdat = nc.createVariable('station_obs', station_data_t, ('station',))
     >>> data = numpy.empty(2,dtype)
     >>> data['observation'][:] = (123.,3.14)
     >>> data['station_name'][:] = ('Boulder','New York')
-    >>> statdat.dtype # strings actually stored as character arrays
-    {'names':['observation','station_name'], 'formats':['<f4',('S1', (80,))], 'offsets':[0,4], 'itemsize':84, 'aligned':True}
+    >>> print(statdat.dtype) # strings actually stored as character arrays
+    {'names':['observation','station_name'], 'formats':['<f4',('S1', (10,))], 'offsets':[0,4], 'itemsize':16, 'aligned':True}
     >>> statdat[:] = data # strings converted to character arrays internally
-    >>> statdat[:] # character arrays converted back to strings
-    [(123.  , 'Boulder') (  3.14, 'New York')]
-    >>> statdat[:].dtype
-    {'names':['observation','station_name'], 'formats':['<f4','S80'], 'offsets':[0,4], 'itemsize':84, 'aligned':True}
+    >>> print(statdat[:])  # character arrays converted back to strings
+    [(123.  , b'Boulder') (  3.14, b'New York')]
+    >>> print(statdat[:].dtype)
+    {'names':['observation','station_name'], 'formats':['<f4','S10'], 'offsets':[0,4], 'itemsize':16, 'aligned':True}
     >>> statdat.set_auto_chartostring(False) # turn off auto-conversion
     >>> statdat[:] = data.view(dtype=[('observation', 'f4'),('station_name','S1',10)])
-    >>> statdat[:] # now structured array with char array subtype is returned
-    [(123.  , ['B', 'o', 'u', 'l', 'd', 'e', 'r', '', '', ''])
-    (  3.14, ['N', 'e', 'w', ' ', 'Y', 'o', 'r', 'k', '', ''])]
+    >>> print(statdat[:])  # now structured array with char array subtype is returned
+    [(123.  , [b'B', b'o', b'u', b'l', b'd', b'e', b'r', b'', b'', b''])
+     (  3.14, [b'N', b'e', b'w', b' ', b'Y', b'o', b'r', b'k', b'', b''])]
     >>> nc.close()
 
 Note that there is currently no support for mapping numpy structured arrays with
@@ -1094,11 +1114,11 @@ approaches.
     >>> v = nc.createVariable('v',numpy.int32,'x')
     >>> v[0:5] = numpy.arange(5)
     >>> print(nc)
-    <type 'netCDF4._netCDF4.Dataset'>
+    <class 'netCDF4._netCDF4.Dataset'>
     root group (NETCDF4 data model, file format HDF5):
-    dimensions(sizes): x(5)
-    variables(dimensions): int32 v(x)
-    groups:
+        dimensions(sizes): x(5)
+        variables(dimensions): int32 v(x)
+        groups: 
     >>> print(nc['v'][:])
     [0 1 2 3 4]
     >>> nc.close() # file saved to disk
@@ -1106,16 +1126,16 @@ approaches.
     >>> # python memory buffer.
     >>> # read the newly created netcdf file into a python
     >>> # bytes object.
-    >>> f = open('diskless_example.nc', 'rb')
-    >>> nc_bytes = f.read(); f.close()
+    >>> with open('diskless_example.nc', 'rb') as f:
+    ...     nc_bytes = f.read()
     >>> # create a netCDF in-memory dataset from the bytes object.
     >>> nc = Dataset('inmemory.nc', memory=nc_bytes)
     >>> print(nc)
-    <type 'netCDF4._netCDF4.Dataset'>
+    <class 'netCDF4._netCDF4.Dataset'>
     root group (NETCDF4 data model, file format HDF5):
-    dimensions(sizes): x(5)
-    variables(dimensions): int32 v(x)
-    groups:
+        dimensions(sizes): x(5)
+        variables(dimensions): int32 v(x)
+        groups: 
     >>> print(nc['v'][:])
     [0 1 2 3 4]
     >>> nc.close()
@@ -1129,17 +1149,17 @@ approaches.
     >>> v[0:5] = numpy.arange(5)
     >>> nc_buf = nc.close() # close returns memoryview
     >>> print(type(nc_buf))
-    <type 'memoryview'>
+    <class 'memoryview'>
     >>> # save nc_buf to disk, read it back in and check.
-    >>> f = open('inmemory.nc', 'wb')
-    >>> f.write(nc_buf); f.close()
+    >>> with open('inmemory.nc', 'wb') as f:
+    ...     f.write(nc_buf)
     >>> nc = Dataset('inmemory.nc')
     >>> print(nc)
-    <type 'netCDF4._netCDF4.Dataset'>
+    <class 'netCDF4._netCDF4.Dataset'>
     root group (NETCDF4 data model, file format HDF5):
-    dimensions(sizes): x(5)
-    variables(dimensions): int32 v(x)
-    groups:
+        dimensions(sizes): x(5)
+        variables(dimensions): int32 v(x)
+        groups:
     >>> print(nc['v'][:])
     [0 1 2 3 4]
     >>> nc.close()
@@ -1176,28 +1196,23 @@ from cpython.bytes cimport PyBytes_FromStringAndSize
 # pure python utilities
 from .utils import (_StartCountStride, _quantize, _find_dim, _walk_grps,
                     _out_array_shape, _sortbylist, _tostr, _safecast, _is_int)
-# try to use built-in ordered dict in python >= 2.7
-try:
+import sys
+if sys.version_info[0:2] < (3, 7):
+    # Python 3.7+ guarantees order; older versions need OrderedDict
     from collections import OrderedDict
-except ImportError: # or else use drop-in substitute
-    try:
-        from ordereddict import OrderedDict
-    except ImportError:
-        raise ImportError('please install ordereddict (https://pypi.python.org/pypi/ordereddict)')
 try:
     from itertools import izip as zip
 except ImportError:
     # python3: zip is already python2's itertools.izip
     pass
 
-__version__ = "1.5.1.2"
+__version__ = "1.5.2"
 
 # Initialize numpy
 import posixpath
 from cftime import num2date, date2num, date2index
 import numpy
 import weakref
-import sys
 import warnings
 from glob import glob
 from numpy import ma
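
The new import gate relies on the language-level guarantee, from Python 3.7 on, that built-in dicts preserve insertion order (CPython 3.6 already did so as an implementation detail). A sketch of the same pattern in isolation; the alias `ncdict` is hypothetical:

    :::python
    import sys

    if sys.version_info[0:2] < (3, 7):
        # no ordering guarantee before 3.7: fall back to OrderedDict
        from collections import OrderedDict as ncdict
    else:
        # built-in dict keeps insertion order from 3.7 onward
        ncdict = dict

    d = ncdict()
    d['lat'] = 73
    d['lon'] = 144
    assert list(d) == ['lat', 'lon']      # iteration follows insertion order
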
@@ -1620,9 +1635,15 @@ cdef _get_types(group):
             ierr = nc_inq_typeids(_grpid, &ntypes, typeids)
         _ensure_nc_success(ierr)
     # create empty dictionary for CompoundType instances.
-    cmptypes = OrderedDict()
-    vltypes = OrderedDict()
-    enumtypes = OrderedDict()
+    if sys.version_info[0:2] < (3, 7):
+        cmptypes = OrderedDict()
+        vltypes = OrderedDict()
+        enumtypes = OrderedDict()
+    else:
+        cmptypes = dict()
+        vltypes = dict()
+        enumtypes = dict()
+
     if ntypes > 0:
         for n from 0 <= n < ntypes:
             xtype = typeids[n]
@@ -1678,7 +1699,10 @@ cdef _get_dims(group):
         ierr = nc_inq_ndims(_grpid, &numdims)
     _ensure_nc_success(ierr)
     # create empty dictionary for dimensions.
-    dimensions = OrderedDict()
+    if sys.version_info[0:2] < (3, 7):
+        dimensions = OrderedDict()
+    else:
+        dimensions = dict()
     if numdims > 0:
         dimids = <int *>malloc(sizeof(int) * numdims)
         if group.data_model == 'NETCDF4':
@@ -1709,7 +1733,10 @@ cdef _get_grps(group):
         ierr = nc_inq_grps(_grpid, &numgrps, NULL)
     _ensure_nc_success(ierr)
     # create dictionary containing `netCDF4.Group` instances for groups in this group
-    groups = OrderedDict()
+    if sys.version_info[0:2] < (3, 7):
+        groups = OrderedDict()
+    else:
+        groups = dict()
     if numgrps > 0:
         grpids = <int *>malloc(sizeof(int) * numgrps)
         with nogil:
@@ -1739,7 +1766,10 @@ cdef _get_vars(group):
         ierr = nc_inq_nvars(_grpid, &numvars)
     _ensure_nc_success(ierr, err_cls=AttributeError)
     # create empty dictionary for variables.
-    variables = OrderedDict()
+    if sys.version_info[0:2] < (3, 7):
+        variables = OrderedDict()
+    else:
+        variables = dict()
     if numvars > 0:
         # get variable ids.
         varids = <int *>malloc(sizeof(int) * numvars)
@@ -2316,7 +2346,10 @@ strings.
         if self.data_model == 'NETCDF4':
             self.groups = _get_grps(self)
         else:
-            self.groups = OrderedDict()
+            if sys.version_info[0:2] < (3, 7):
+                self.groups = OrderedDict()
+            else:
+                self.groups = dict()
 
     # these allow Dataset objects to be used via a "with" statement.
     def __enter__(self):
@@ -2386,29 +2419,28 @@ version 4.1.2 or higher of the netcdf C lib, and rebuild netcdf4-python."""
             return unicode(self).encode('utf-8')
 
     def __unicode__(self):
-        ncdump = ['%r\n' % type(self)]
-        dimnames = tuple([_tostr(dimname)+'(%s)'%len(self.dimensions[dimname])\
-        for dimname in self.dimensions.keys()])
+        ncdump = [repr(type(self))]
+        dimnames = tuple(_tostr(dimname)+'(%s)'%len(self.dimensions[dimname])\
+        for dimname in self.dimensions.keys())
         varnames = tuple(\
-        [_tostr(self.variables[varname].dtype)+' \033[4m'+_tostr(varname)+'\033[0m'+
+        [_tostr(self.variables[varname].dtype)+' '+_tostr(varname)+
         (((_tostr(self.variables[varname].dimensions)
         .replace("u'",""))\
         .replace("'",""))\
         .replace(", ",","))\
         .replace(",)",")") for varname in self.variables.keys()])
-        grpnames = tuple([_tostr(grpname) for grpname in self.groups.keys()])
+        grpnames = tuple(_tostr(grpname) for grpname in self.groups.keys())
         if self.path == '/':
-            ncdump.append('root group (%s data model, file format %s):\n' %
+            ncdump.append('root group (%s data model, file format %s):' %
                     (self.data_model, self.disk_format))
         else:
-            ncdump.append('group %s:\n' % self.path)
-        attrs = ['    %s: %s\n' % (name,self.getncattr(name)) for name in\
-                self.ncattrs()]
-        ncdump = ncdump + attrs
-        ncdump.append('    dimensions(sizes): %s\n' % ', '.join(dimnames))
-        ncdump.append('    variables(dimensions): %s\n' % ', '.join(varnames))
-        ncdump.append('    groups: %s\n' % ', '.join(grpnames))
-        return ''.join(ncdump)
+            ncdump.append('group %s:' % self.path)
+        for name in self.ncattrs():
+            ncdump.append('    %s: %s' % (name, self.getncattr(name)))
+        ncdump.append('    dimensions(sizes): %s' % ', '.join(dimnames))
+        ncdump.append('    variables(dimensions): %s' % ', '.join(varnames))
+        ncdump.append('    groups: %s' % ', '.join(grpnames))
+        return '\n'.join(ncdump)
 
     def _close(self, check_err):
         cdef int ierr = nc_close(self._grpid)
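
The rewrite strips the '\033[4m'/'\033[0m' underline escapes from variable names (#956) and, by collecting plain lines and joining once with '\n', also loses the trailing newline that the old ''.join of '\n'-terminated pieces produced (#960). The join difference in a two-line toy:

    :::python
    >>> parts = ['root group (NETCDF4 data model, file format HDF5):', '    groups: ']
    >>> ''.join(p + '\n' for p in parts)    # old style ends with a stray newline
    'root group (NETCDF4 data model, file format HDF5):\n    groups: \n'
    >>> '\n'.join(parts)                    # new style does not
    'root group (NETCDF4 data model, file format HDF5):\n    groups: '
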
@@ -2897,7 +2929,11 @@ attributes."""
                 values = []
                 for name in names:
                     values.append(_get_att(self, NC_GLOBAL, name))
-                return OrderedDict(zip(names,values))
+                gen = zip(names, values)
+                if sys.version_info[0:2] < (3, 7):
+                    return OrderedDict(gen)
+                else:
+                    return dict(gen)
             else:
                 raise AttributeError
         elif name in _private_atts:
@@ -3058,8 +3094,10 @@ this `netCDF4.Dataset` or `netCDF4.Group`, as well as for all
 variables in all its subgroups.
 
 **`True_or_False`**: Boolean determining if automatic conversion of
-masked arrays with no missing values to regular ararys shall be
-applied for all variables.
+masked arrays with no missing values to regular numpy arrays shall be
+applied for all variables. Default True. Set to False to restore the default behaviour
+in versions prior to 1.4.1 (numpy array returned unless missing values are present,
+otherwise masked array returned).
 
 ***Note***: Calling this function only affects existing
 variables. Variables created after calling this function will follow
@@ -3223,12 +3261,21 @@ Additional read-only class variables:
             bytestr = _strencode(name)
             groupname = bytestr
             _ensure_nc_success(nc_def_grp(parent._grpid, groupname, &self._grpid))
-            self.cmptypes = OrderedDict()
-            self.vltypes = OrderedDict()
-            self.enumtypes = OrderedDict()
-            self.dimensions = OrderedDict()
-            self.variables = OrderedDict()
-            self.groups = OrderedDict()
+            if sys.version_info[0:2] < (3, 7):
+                self.cmptypes = OrderedDict()
+                self.vltypes = OrderedDict()
+                self.enumtypes = OrderedDict()
+                self.dimensions = OrderedDict()
+                self.variables = OrderedDict()
+                self.groups = OrderedDict()
+            else:
+                self.cmptypes = dict()
+                self.vltypes = dict()
+                self.enumtypes = dict()
+                self.dimensions = dict()
+                self.variables = dict()
+                self.groups = dict()
+
 
     def close(self):
         """
@@ -3356,9 +3403,11 @@ Read-only class variables:
         if not dir(self._grp):
             return 'Dimension object no longer valid'
         if self.isunlimited():
-            return repr(type(self))+" (unlimited): name = '%s', size = %s\n" % (self._name,len(self))
+            return "%r (unlimited): name = '%s', size = %s" %\
+                (type(self), self._name, len(self))
         else:
-            return repr(type(self))+": name = '%s', size = %s\n" % (self._name,len(self))
+            return "%r: name = '%s', size = %s" %\
+                (type(self), self._name, len(self))
 
     def __len__(self):
         # len(`netCDF4.Dimension` instance) returns current size of dimension
@@ -3906,37 +3955,32 @@ behavior is similar to Fortran or Matlab, but different than numpy.
         cdef int ierr, no_fill
         if not dir(self._grp):
             return 'Variable object no longer valid'
-        ncdump_var = ['%r\n' % type(self)]
-        dimnames = tuple([_tostr(dimname) for dimname in self.dimensions])
-        attrs = ['    %s: %s\n' % (name,self.getncattr(name)) for name in\
-                self.ncattrs()]
+        ncdump = [repr(type(self))]
+        show_more_dtype = True
         if self._iscompound:
-            ncdump_var.append('%s %s(%s)\n' %\
-            ('compound',self._name,', '.join(dimnames)))
+            kind = 'compound'
         elif self._isvlen:
-            ncdump_var.append('%s %s(%s)\n' %\
-            ('vlen',self._name,', '.join(dimnames)))
+            kind = 'vlen'
         elif self._isenum:
-            ncdump_var.append('%s %s(%s)\n' %\
-            ('enum',self._name,', '.join(dimnames)))
+            kind = 'enum'
         else:
-            ncdump_var.append('%s %s(%s)\n' %\
-            (self.dtype,self._name,', '.join(dimnames)))
-        ncdump_var = ncdump_var + attrs
-        if self._iscompound:
-            ncdump_var.append('compound data type: %s\n' % self.dtype)
-        elif self._isvlen:
-            ncdump_var.append('vlen data type: %s\n' % self.dtype)
-        elif self._isenum:
-            ncdump_var.append('enum data type: %s\n' % self.dtype)
+            show_more_dtype = False
+            kind = str(self.dtype)
+        dimnames = tuple(_tostr(dimname) for dimname in self.dimensions)
+        ncdump.append('%s %s(%s)' %\
+            (kind, self._name, ', '.join(dimnames)))
+        for name in self.ncattrs():
+            ncdump.append('    %s: %s' % (name, self.getncattr(name)))
+        if show_more_dtype:
+            ncdump.append('%s data type: %s' % (kind, self.dtype))
         unlimdims = []
         for dimname in self.dimensions:
             dim = _find_dim(self._grp, dimname)
             if dim.isunlimited():
                 unlimdims.append(dimname)
-        if (self._grp.path != '/'): ncdump_var.append('path = %s\n' % self._grp.path)
-        ncdump_var.append('unlimited dimensions: %s\n' % ', '.join(unlimdims))
-        ncdump_var.append('current shape = %s\n' % repr(self.shape))
+        if (self._grp.path != '/'): ncdump.append('path = %s' % self._grp.path)
+        ncdump.append('unlimited dimensions: %s' % ', '.join(unlimdims))
+        ncdump.append('current shape = %r' % (self.shape,))
         if __netcdf4libversion__ < '4.5.1' and\
             self._grp.file_format.startswith('NETCDF3'):
             # issue #908: no_fill not correct for NETCDF3 files before 4.5.1
@@ -3955,15 +3999,15 @@ behavior is similar to Fortran or Matlab, but different than numpy.
                 except AttributeError:
                     fillval = default_fillvals[self.dtype.str[1:]]
                     if self.dtype.str[1:] in ['u1','i1']:
-                        msg = 'filling on, default _FillValue of %s ignored\n' % fillval
+                        msg = 'filling on, default _FillValue of %s ignored' % fillval
                     else:
-                        msg = 'filling on, default _FillValue of %s used\n' % fillval
-                ncdump_var.append(msg)
+                        msg = 'filling on, default _FillValue of %s used' % fillval
+                ncdump.append(msg)
             else:
-                ncdump_var.append('filling off\n')
+                ncdump.append('filling off')
 
 
-        return ''.join(ncdump_var)
+        return '\n'.join(ncdump)
 
     def _getdims(self):
         # Private method to get variables's dimension names
@@ -4036,7 +4080,8 @@ behavior is similar to Fortran or Matlab, but different than numpy.
     property size:
         """Return the number of stored elements."""
         def __get__(self):
-            return numpy.prod(self.shape)
+            # issue #957: add int since prod(())=1.0
+            return int(numpy.prod(self.shape))
 
     property dimensions:
         """get variables's dimension names"""
@@ -4069,6 +4114,13 @@ netCDF attribute with the same name as one of the reserved python
 attributes."""
         cdef nc_type xtype
         xtype=-99
+        # issue #959 - trying to set _FillValue results in mysterious
+        # error when close method is called so catch it here. It is
+        # already caught in __setattr__.
+        if name == '_FillValue':
+            msg='_FillValue attribute must be set when variable is '+\
+            'created (using fill_value keyword to createVariable)'
+            raise AttributeError(msg)
         if self._grp.data_model != 'NETCDF4': self._grp._redef()
         _set_att(self._grp, self._varid, name, value, xtype=xtype, force_ncstring=self._ncstring_attrs__)
         if self._grp.data_model != 'NETCDF4': self._grp._enddef()
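
With this guard, attempts to set `_FillValue` after creation fail up front rather than raising a mysterious error at close time; the supported route remains the `fill_value` keyword of `createVariable`. A short sketch (the file name `fillval_test.nc` is arbitrary):

    :::python
    >>> from netCDF4 import Dataset
    >>> nc = Dataset('fillval_test.nc', 'w')
    >>> _ = nc.createDimension('x', None)
    >>> v = nc.createVariable('v', 'f4', ('x',), fill_value=-9999.0)  # supported
    >>> try:
    ...     v.setncattr('_FillValue', -9999.0)   # rejected since issue #959
    ... except AttributeError as err:
    ...     print(err)
    _FillValue attribute must be set when variable is created (using fill_value keyword to createVariable)
    >>> nc.close()
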
@@ -4294,7 +4346,12 @@ details."""
                 values = []
                 for name in names:
                     values.append(_get_att(self._grp, self._varid, name))
-                return OrderedDict(zip(names,values))
+                gen = zip(names, values)
+                if sys.version_info[0:2] < (3, 7):
+                    return OrderedDict(gen)
+                else:
+                    return dict(gen)
+
             else:
                 raise AttributeError
         elif name in _private_atts:
@@ -4393,7 +4450,7 @@ rename a `netCDF4.Variable` attribute named `oldname` to `newname`."""
             if self.scale:  # only do this if autoscale option is on.
                 is_unsigned = getattr(self, '_Unsigned', False)
                 if is_unsigned and data.dtype.kind == 'i':
-                    data = data.view('u%s' % data.dtype.itemsize)
+                    data=data.view('%su%s'%(data.dtype.byteorder,data.dtype.itemsize))
 
         if self.scale and self._isprimitive and valid_scaleoffset:
             # if variable has scale_factor and add_offset attributes, apply
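
The one-character change matters because numpy's `view` reinterprets the raw bytes with whatever byte order the new dtype carries; omitting it meant non-native-endian data was effectively byte-swapped on read (issue #930). A numpy-only sketch, output shown for a little-endian host:

    :::python
    >>> import numpy
    >>> data = numpy.array([-1, 256], dtype='>i2')   # big-endian int16
    >>> data.view('u2')                              # old code: native order, wrong
    array([65535,     1], dtype=uint16)
    >>> data.view('%su%s' % (data.dtype.byteorder, data.dtype.itemsize))
    array([65535,   256], dtype=uint16)
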
@@ -4447,7 +4504,7 @@ rename a `netCDF4.Variable` attribute named `oldname` to `newname`."""
         is_unsigned = getattr(self, '_Unsigned', False)
         is_unsigned_int = is_unsigned and data.dtype.kind == 'i'
         if self.scale and is_unsigned_int:  # only do this if autoscale option is on.
-            dtype_unsigned_int = 'u%s' % data.dtype.itemsize
+            dtype_unsigned_int='%su%s' % (data.dtype.byteorder,data.dtype.itemsize)
             data = data.view(dtype_unsigned_int)
         # private function for creating a masked array, masking missing_values
         # and/or _FillValues.
@@ -5080,12 +5137,11 @@ The default value of `mask` is `True`
 turn on or off conversion of data without missing values to regular
 numpy arrays.
 
-If `always_mask` is set to `True` then a masked array with no missing
-values is converted to a regular numpy array.
-
-The default value of `always_mask` is `True` (conversions to regular
-numpy arrays are not performed).
-
+`always_mask` is a Boolean determining if automatic conversion of
+masked arrays with no missing values to regular numpy arrays shall be
+applied. Default is True. Set to False to restore the default behaviour
+in versions prior to 1.4.1 (numpy array returned unless missing values are present,
+otherwise masked array returned).
         """
         self.always_mask = bool(always_mask)
 
@@ -5498,8 +5554,8 @@ the user.
             return unicode(self).encode('utf-8')
 
     def __unicode__(self):
-        return repr(type(self))+": name = '%s', numpy dtype = %s\n" %\
-        (self.name,self.dtype)
+        return "%r: name = '%s', numpy dtype = %s" %\
+            (type(self), self.name, self.dtype)
 
     def __reduce__(self):
         # raise error if user tries to pickle a CompoundType object.
@@ -5788,10 +5844,10 @@ the user.
 
     def __unicode__(self):
         if self.dtype == str:
-            return repr(type(self))+': string type'
+            return '%r: string type' % (type(self),)
         else:
-            return repr(type(self))+": name = '%s', numpy dtype = %s\n" %\
-            (self.name, self.dtype)
+            return "%r: name = '%s', numpy dtype = %s" %\
+                (type(self), self.name, self.dtype)
 
     def __reduce__(self):
         # raise error if user tries to pickle a VLType object.
@@ -5906,9 +5962,8 @@ the user.
             return unicode(self).encode('utf-8')
 
     def __unicode__(self):
-        return repr(type(self))+\
-        ": name = '%s', numpy dtype = %s, fields/values =%s\n" %\
-        (self.name, self.dtype, self.enum_dict)
+        return "%r: name = '%s', numpy dtype = %s, fields/values =%s" %\
+            (type(self), self.name, self.dtype, self.enum_dict)
 
     def __reduce__(self):
         # raise error if user tries to pickle an EnumType object.
@@ -6089,18 +6144,18 @@ Example usage (See `netCDF4.MFDataset.__init__` for more details):
     >>> # create a series of netCDF files with a variable sharing
     >>> # the same unlimited dimension.
     >>> for nf in range(10):
-    >>>     f = Dataset("mftest%s.nc" % nf,"w",format='NETCDF4_CLASSIC')
-    >>>     f.createDimension("x",None)
-    >>>     x = f.createVariable("x","i",("x",))
-    >>>     x[0:10] = np.arange(nf*10,10*(nf+1))
-    >>>     f.close()
+    ...     with Dataset("mftest%s.nc" % nf, "w", format='NETCDF4_CLASSIC') as f:
+    ...         f.createDimension("x",None)
+    ...         x = f.createVariable("x","i",("x",))
+    ...         x[0:10] = np.arange(nf*10,10*(nf+1))
     >>> # now read all those files in at once, in one Dataset.
     >>> f = MFDataset("mftest*nc")
-    >>> print f.variables["x"][:]
-    [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
-     25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
-     50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74
-     75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99]
+    >>> print(f.variables["x"][:])
+    [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
+     24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
+     48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
+     72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
+     96 97 98 99]
     """
 
     def __init__(self, files, check=False, aggdim=None, exclude=[],
@@ -6336,22 +6391,21 @@ Example usage (See `netCDF4.MFDataset.__init__` for more details):
             dset.close()
 
     def __repr__(self):
-        ncdump = ['%r\n' % type(self)]
-        dimnames = tuple([str(dimname) for dimname in self.dimensions.keys()])
-        varnames = tuple([str(varname) for varname in self.variables.keys()])
+        ncdump = [repr(type(self))]
+        dimnames = tuple(str(dimname) for dimname in self.dimensions.keys())
+        varnames = tuple(str(varname) for varname in self.variables.keys())
         grpnames = ()
         if self.path == '/':
-            ncdump.append('root group (%s data model, file format %s):\n' %
+            ncdump.append('root group (%s data model, file format %s):' %
                     (self.data_model[0], self.disk_format[0]))
         else:
-            ncdump.append('group %s:\n' % self.path)
-        attrs = ['    %s: %s\n' % (name,self.__dict__[name]) for name in\
-                self.ncattrs()]
-        ncdump = ncdump + attrs
-        ncdump.append('    dimensions = %s\n' % str(dimnames))
-        ncdump.append('    variables = %s\n' % str(varnames))
-        ncdump.append('    groups = %s\n' % str(grpnames))
-        return ''.join(ncdump)
+            ncdump.append('group %s:' % self.path)
+        for name in self.ncattrs():
+            ncdump.append('    %s: %s' % (name, self.__dict__[name]))
+        ncdump.append('    dimensions = %s' % str(dimnames))
+        ncdump.append('    variables = %s' % str(varnames))
+        ncdump.append('    groups = %s' % str(grpnames))
+        return '\n'.join(ncdump)
 
     def __reduce__(self):
         # raise error if user tries to pickle an MFDataset object.
@@ -6368,9 +6422,11 @@ class _Dimension(object):
         return True
     def __repr__(self):
         if self.isunlimited():
-            return repr(type(self))+" (unlimited): name = '%s', size = %s\n" % (self._name,len(self))
+            return "%r (unlimited): name = '%s', size = %s" %\
+                (type(self), self._name, len(self))
         else:
-            return repr(type(self))+": name = '%s', size = %s\n" % (self._name,len(self))
+            return "%r: name = '%s', size = %s" %\
+                (type(self), self._name, len(self))
 
 class _Variable(object):
     def __init__(self, dset, varname, var, recdimname):
@@ -6398,21 +6454,19 @@ class _Variable(object):
         except:
             raise AttributeError(name)
     def __repr__(self):
-        ncdump_var = ['%r\n' % type(self)]
-        dimnames = tuple([str(dimname) for dimname in self.dimensions])
-        attrs = ['    %s: %s\n' % (name,self.__dict__[name]) for name in\
-                self.ncattrs()]
-        ncdump_var.append('%s %s%s\n' %\
-        (self.dtype,self._name,dimnames))
-        ncdump_var = ncdump_var + attrs
+        ncdump = [repr(type(self))]
+        dimnames = tuple(str(dimname) for dimname in self.dimensions)
+        ncdump.append('%s %s%s' % (self.dtype, self._name, dimnames))
+        for name in self.ncattrs():
+            ncdump.append('    %s: %s' % (name, self.__dict__[name]))
         unlimdims = []
         for dimname in self.dimensions:
             dim = _find_dim(self._grp, dimname)
             if dim.isunlimited():
                 unlimdims.append(str(dimname))
-        ncdump_var.append('unlimited dimensions = %s\n' % repr(tuple(unlimdims)))
-        ncdump_var.append('current size = %s\n' % repr(self.shape))
-        return ''.join(ncdump_var)
+        ncdump.append('unlimited dimensions = %r' % (tuple(unlimdims),))
+        ncdump.append('current size = %r' % (self.shape,))
+        return '\n'.join(ncdump)
     def __len__(self):
         if not self._shape:
             raise TypeError('len() of unsized object')
@@ -6564,14 +6618,14 @@ Example usage (See `netCDF4.MFTime.__init__` for more details):
     >>> f1.close()
     >>> f2.close()
     >>> # Read the two files in at once, in one Dataset.
-    >>> f = MFDataset("mftest*nc")
+    >>> f = MFDataset("mftest_*nc")
     >>> t = f.variables["time"]
-    >>> print t.units
+    >>> print(t.units)
     days since 2000-01-01
-    >>> print t[32] # The value written in the file, inconsistent with the MF time units.
+    >>> print(t[32])  # The value written in the file, inconsistent with the MF time units.
     1
     >>> T = MFTime(t)
-    >>> print T[32]
+    >>> print(T[32])
     32
     """
 


=====================================
setup.py
=====================================
@@ -49,13 +49,14 @@ def check_ifnetcdf4(netcdf4_includedir):
     return isnetcdf4
 
 
-def check_api(inc_dirs):
+def check_api(inc_dirs,netcdf_lib_version):
     has_rename_grp = False
     has_nc_inq_path = False
     has_nc_inq_format_extended = False
     has_cdf5_format = False
     has_nc_open_mem = False
     has_nc_create_mem = False
+    has_parallel_support = False
     has_parallel4_support = False
     has_pnetcdf_support = False
 
@@ -91,10 +92,20 @@ def check_api(inc_dirs):
             for line in open(ncmetapath):
                 if line.startswith('#define NC_HAS_CDF5'):
                     has_cdf5_format = bool(int(line.split()[2]))
-                elif line.startswith('#define NC_HAS_PARALLEL4'):
+                if line.startswith('#define NC_HAS_PARALLEL'):
+                    has_parallel_support = bool(int(line.split()[2]))
+                if line.startswith('#define NC_HAS_PARALLEL4'):
                     has_parallel4_support = bool(int(line.split()[2]))
-                elif line.startswith('#define NC_HAS_PNETCDF'):
+                if line.startswith('#define NC_HAS_PNETCDF'):
                     has_pnetcdf_support = bool(int(line.split()[2]))
+        # NC_HAS_PARALLEL4 missing in 4.6.1 (issue #964)
+        if not has_parallel4_support and has_parallel_support and not has_pnetcdf_support:
+            has_parallel4_support = True
+        # for 4.6.1, if NC_HAS_PARALLEL=NC_HAS_PNETCDF=1, guess that
+        # parallel HDF5 is enabled (must guess since there is no
+        # NC_HAS_PARALLEL4)
+        elif netcdf_lib_version == "4.6.1" and not has_parallel4_support and has_parallel_support:
+            has_parallel4_support = True
         break
 
     return has_rename_grp, has_nc_inq_path, has_nc_inq_format_extended, \
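A condensed sketch of the issue #964 workaround as a standalone function; the header path is an assumption, and the real check_api scans every include directory it is given:

    def guess_parallel4(ncmetapath, netcdf_lib_version):
        parallel = parallel4 = pnetcdf = False
        for line in open(ncmetapath):  # e.g. '/usr/include/netcdf_meta.h'
            # test NC_HAS_PARALLEL4 first: it shares a prefix with NC_HAS_PARALLEL
            if line.startswith('#define NC_HAS_PARALLEL4'):
                parallel4 = bool(int(line.split()[2]))
            elif line.startswith('#define NC_HAS_PARALLEL'):
                parallel = bool(int(line.split()[2]))
            elif line.startswith('#define NC_HAS_PNETCDF'):
                pnetcdf = bool(int(line.split()[2]))
        # netcdf-c 4.6.1 defines NC_HAS_PARALLEL but not NC_HAS_PARALLEL4,
        # so infer parallel HDF5 support from the flags that are present
        if parallel and not parallel4 and not pnetcdf:
            parallel4 = True
        elif netcdf_lib_version == "4.6.1" and parallel and not parallel4:
            parallel4 = True
        return parallel4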
@@ -182,7 +193,7 @@ ncconfig = None
 use_ncconfig = None
 if USE_SETUPCFG and os.path.exists(setup_cfg):
     sys.stdout.write('reading from setup.cfg...\n')
-    config = configparser.SafeConfigParser()
+    config = configparser.ConfigParser()
     config.read(setup_cfg)
     try:
         HDF5_dir = config.get("directories", "HDF5_dir")
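The configparser change is a deprecation fix: SafeConfigParser has been a deprecated alias of ConfigParser since Python 3.2, with identical behaviour for this use:

    import configparser

    config = configparser.ConfigParser()  # replaces the deprecated SafeConfigParser
    config.read('setup.cfg')
    # sections/options are then read exactly as before, e.g.
    # config.get("directories", "HDF5_dir")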
@@ -494,7 +505,8 @@ if 'sdist' not in sys.argv[1:] and 'clean' not in sys.argv[1:]:
     # this determines whether renameGroup and filepath methods will work.
     has_rename_grp, has_nc_inq_path, has_nc_inq_format_extended, \
     has_cdf5_format, has_nc_open_mem, has_nc_create_mem, \
-    has_parallel4_support, has_pnetcdf_support = check_api(inc_dirs)
+    has_parallel4_support, has_pnetcdf_support = \
+    check_api(inc_dirs,netcdf_lib_version)
     # for netcdf 4.4.x CDF5 format is always enabled.
     if netcdf_lib_version is not None and\
        (netcdf_lib_version > "4.4" and netcdf_lib_version < "4.5"):
@@ -584,7 +596,7 @@ else:
 
 setup(name="netCDF4",
       cmdclass=cmdclass,
-      version="1.5.1.2",
+      version="1.5.2",
       long_description="netCDF version 4 has many features not found in earlier versions of the library, such as hierarchical groups, zlib compression, multiple unlimited dimensions, and new data types.  It is implemented on top of HDF5.  This module implements most of the new features, and can read and write netCDF files compatible with older versions of the library.  The API is modelled after Scientific.IO.NetCDF, and should be familiar to users of that module.\n\nThis project is hosted on a `GitHub repository <https://github.com/Unidata/netcdf4-python>`_ where you may access the most up-to-date source.",
       author="Jeff Whitaker",
       author_email="jeffrey.s.whitaker at noaa.gov",
@@ -597,12 +609,11 @@ setup(name="netCDF4",
                 'meteorology', 'climate'],
       classifiers=["Development Status :: 3 - Alpha",
                    "Programming Language :: Python :: 2",
-                   "Programming Language :: Python :: 2.6",
                    "Programming Language :: Python :: 2.7",
                    "Programming Language :: Python :: 3",
-                   "Programming Language :: Python :: 3.3",
-                   "Programming Language :: Python :: 3.4",
                    "Programming Language :: Python :: 3.5",
+                   "Programming Language :: Python :: 3.6",
+                   "Programming Language :: Python :: 3.7",
                    "Intended Audience :: Science/Research",
                    "License :: OSI Approved",
                    "Topic :: Software Development :: Libraries :: Python Modules",


=====================================
test/tst_atts.py
=====================================
@@ -7,13 +7,10 @@ import tempfile
 import warnings
 
 import numpy as NP
+from collections import OrderedDict
 from numpy.random.mtrand import uniform
-import netCDF4
 
-try:
-    from collections import OrderedDict
-except ImportError: # or else use drop-in substitute
-    from ordereddict import OrderedDict
+import netCDF4
 
 # test attribute creation.
 FILE_NAME = tempfile.NamedTemporaryFile(suffix='.nc', delete=False).name
@@ -94,6 +91,19 @@ class VariablesTestCase(unittest.TestCase):
         v1.seqatt = SEQATT
         v1.stringseqatt = STRINGSEQATT
         v1.setncattr_string('stringseqatt_array',STRINGSEQATT) # array of NC_STRING
+        # issue #959: should not be able to set _FillValue after var creation
+        try:
+            v1._FillValue = -999.
+        except AttributeError:
+            pass
+        else:
+            raise ValueError('This test should have failed.')
+        try:
+            v1.setncattr('_FillValue',-999.)
+        except AttributeError:
+            pass
+        else:
+            raise ValueError('This test should have failed.')
         # issue #485 (triggers segfault in C lib
         # with version 1.2.1 without pull request #486)
         f.foo = NP.array('bar','S')


=====================================
test/tst_endian.py
=====================================
@@ -121,6 +121,27 @@ def issue346(file):
     assert_array_equal(datal,xl)
     nc.close()
 
+def issue930(file):
+    # make sure view to unsigned data type (triggered
+    # by _Unsigned attribute being set) is correct when
+    # data byte order is non-native.
+    nc = netCDF4.Dataset(file,'w')
+    d = nc.createDimension('x',2)
+    v1 = nc.createVariable('v1','i2','x',endian='big')
+    v2 = nc.createVariable('v2','i2','x',endian='little')
+    v1[0] = 255; v1[1] = 1
+    v2[0] = 255; v2[1] = 1
+    v1._Unsigned="TRUE"; v1.missing_value=np.int16(1)
+    v2._Unsigned="TRUE"; v2.missing_value=np.int16(1)
+    nc.close()
+    nc = netCDF4.Dataset(file)
+    assert_array_equal(nc['v1'][:],np.ma.masked_array([255,1],mask=[False,True]))
+    assert_array_equal(nc['v2'][:],np.ma.masked_array([255,1],mask=[False,True]))
+    nc.set_auto_mask(False)
+    assert_array_equal(nc['v1'][:],np.array([255,1]))
+    assert_array_equal(nc['v2'][:],np.array([255,1]))
+    nc.close()
+
 class EndianTestCase(unittest.TestCase):
 
     def setUp(self):
@@ -141,6 +162,7 @@ class EndianTestCase(unittest.TestCase):
         check_byteswap(self.file3, data)
         issue310(self.file)
         issue346(self.file2)
+        issue930(self.file2)
 
 if __name__ == '__main__':
     unittest.main()


=====================================
test/tst_netcdftime.py
=====================================
@@ -523,7 +523,7 @@ class TestDate2index(unittest.TestCase):
 
             :Example:
             >>> t = TestTime(datetime(1989, 2, 18), 45, 6, 'hours since 1979-01-01')
-            >>> print num2date(t[1], t.units)
+            >>> print(num2date(t[1], t.units))
             1989-02-18 06:00:00
             """
             self.units = units



View it on GitLab: https://salsa.debian.org/debian-gis-team/netcdf4-python/compare/96f6fdf0814ed57daa26d0e75deac0b4950f8fc9...16dcd4ba1e04ad1935a02a2410cb5caaa6b2607f
