[Git][debian-gis-team/netcdf4-python][master] 5 commits: New upstream version 1.6.1
Bas Couwenberg (@sebastic)
gitlab@salsa.debian.org
Fri Sep 16 06:19:02 BST 2022
Bas Couwenberg pushed to branch master at Debian GIS Project / netcdf4-python
Commits:
91a22244 by Bas Couwenberg at 2022-09-16T06:49:20+02:00
New upstream version 1.6.1
- - - - -
73551991 by Bas Couwenberg at 2022-09-16T06:49:24+02:00
Update upstream source from tag 'upstream/1.6.1'
Update to upstream version '1.6.1'
with Debian dir a1e25de6cbe72ae41040c54dc7eb3317574d3576
- - - - -
48cdb31f by Bas Couwenberg at 2022-09-16T06:55:38+02:00
New upstream release.
- - - - -
4db99c7a by Bas Couwenberg at 2022-09-16T07:06:24+02:00
Update lintian overrides.
- - - - -
f6584b73 by Bas Couwenberg at 2022-09-16T07:06:24+02:00
Set distribution to unstable.
- - - - -
16 changed files:
- .github/workflows/miniconda.yml
- Changelog
- README.md
- debian/changelog
- + debian/python3-netcdf4.lintian-overrides
- docs/index.html
- include/netCDF4.pxi
- setup.py
- src/netCDF4/__init__.py
- src/netCDF4/_netCDF4.pyx
- test/run_all.py
- + test/tst_alignment.py
- test/tst_compression_blosc.py
- test/tst_compression_bzip2.py
- test/tst_compression_szip.py
- test/tst_compression_zstd.py
Changes:
=====================================
.github/workflows/miniconda.yml
=====================================
@@ -18,29 +18,35 @@ jobs:
exclude:
- os: macos-latest
platform: x32
+ fail-fast: false
+
steps:
- - uses: actions/checkout@v2
+ - uses: actions/checkout@v3
- - name: Setup Conda
- uses: s-weigand/setup-conda@v1
+ - name: Setup Micromamba
+ uses: mamba-org/provision-with-micromamba@v13
with:
- activate-conda: false
- conda-channels: conda-forge
+ environment-file: false
- name: Python ${{ matrix.python-version }}
shell: bash -l {0}
run: |
- conda create --name TEST python=${{ matrix.python-version }} numpy cython pip pytest hdf5 libnetcdf cftime --strict-channel-priority
- source activate TEST
+ micromamba create --name TEST python=${{ matrix.python-version }} numpy cython pip pytest hdf5 libnetcdf cftime zlib --channel conda-forge
+ micromamba activate TEST
export PATH="${CONDA_PREFIX}/bin:${CONDA_PREFIX}/Library/bin:$PATH" # so setup.py finds nc-config
pip install -e . --no-deps --force-reinstall
- conda info --all
- conda list
+
+ - name: Debug conda
+ shell: bash -l {0}
+ run: |
+ micromamba activate TEST
+ micromamba info --all
+ micromamba list
- name: Tests
shell: bash -l {0}
run: |
- source activate TEST
+ micromamba activate TEST
cd test && python run_all.py
run-mpi:
@@ -53,26 +59,30 @@ jobs:
steps:
- uses: actions/checkout@v2
- - name: Setup Conda
- uses: s-weigand/setup-conda@v1
+ - name: Setup Micromamba
+ uses: mamba-org/provision-with-micromamba@main
with:
- activate-conda: false
- conda-channels: conda-forge
+ environment-file: false
- name: Python ${{ matrix.python-version }}
shell: bash -l {0}
run: |
- conda create --name TEST python=${{ matrix.python-version }} numpy cython pip pytest mpi4py hdf5=*=mpi* libnetcdf=*=mpi* cftime --strict-channel-priority
- source activate TEST
+ micromamba create --name TEST python=${{ matrix.python-version }} numpy cython pip pytest mpi4py hdf5=*=mpi* libnetcdf=*=mpi* cftime zlib --channel conda-forge
+ micromamba activate TEST
export PATH="${CONDA_PREFIX}/bin:${CONDA_PREFIX}/Library/bin:$PATH" # so setup.py finds nc-config
pip install -e . --no-deps --force-reinstall
- conda info --all
- conda list
+
+ - name: Debug conda
+ shell: bash -l {0}
+ run: |
+ micromamba activate TEST
+ micromamba info --all
+ micromamba list
- name: Tests
shell: bash -l {0}
run: |
- source activate TEST
+ micromamba activate TEST
cd test && python run_all.py
cd ../examples
export PATH="${CONDA_PREFIX}/bin:${CONDA_PREFIX}/Library/bin:$PATH"
=====================================
Changelog
=====================================
@@ -1,3 +1,11 @@
+ version 1.6.1 (tag v1.6.1rel)
+==============================
+ * add Dataset methods has_<name>_filter (where <name>=zstd,blosc,bzip2,szip)
+ to check for availability of extra compression filters.
+ * release GIL for all C-lib calls (issue #1180).
+ * Add support for nc_set_alignment and nc_get_alignment to control alignment
+ of data within HDF5 files.
+
version 1.6.0 (tag v1.6.0rel)
==============================
* add support for new quantization functionality in netcdf-c 4.9.0 via "signficant_digits"
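
For reference, a minimal sketch of how the new has_<name>_filter Dataset
methods listed in the 1.6.1 entry might be exercised (illustrative only,
not taken from the upstream sources; assumes write access to example.nc):

    import netCDF4

    # Query optional compression filter availability on an open Dataset
    # (new Dataset methods in 1.6.1, per the Changelog entry above).
    with netCDF4.Dataset("example.nc", "w") as ds:
        for name, check in (("zstd", ds.has_zstd_filter),
                            ("blosc", ds.has_blosc_filter),
                            ("bzip2", ds.has_bzip2_filter),
                            ("szip", ds.has_szip_filter)):
            print(name, "filter available:", check())
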
=====================================
README.md
=====================================
@@ -10,7 +10,12 @@
## News
For details on the latest updates, see the [Changelog](https://github.com/Unidata/netcdf4-python/blob/master/Changelog).
-??/??/2022: Version [1.6.0](https://pypi.python.org/pypi/netCDF4/1.6.0) released. Support
+09/18/2022: Version [1.6.1](https://pypi.python.org/pypi/netCDF4/1.6.1) released. GIL now
+released for all C lib calls, `set_alignment` and `get_alignment` module functions
+added to modify/retrieve HDF5 data alignment properties. Added `Dataset` methods to
+query availability of optional compression filters.
+
+06/24/2022: Version [1.6.0](https://pypi.python.org/pypi/netCDF4/1.6.0) released. Support
for quantization (bit-grooming and bit-rounding) functionality in netcdf-c 4.9.0 which can
dramatically improve compression. Dataset.createVariable now accepts dimension instances (instead
of just dimension names). 'compression' kwarg added to Dataset.createVariable to support szip as
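
A hedged usage sketch of the set_alignment/get_alignment module functions
mentioned in the 1.6.1 news item (requires a build against netcdf-c >= 4.9.0;
the threshold and alignment values below are illustrative, not recommended
defaults):

    import netCDF4

    # Inspect the current HDF5 data alignment settings.
    threshold, alignment = netCDF4.get_alignment()
    print("current alignment:", threshold, alignment)

    # Align variable data on 4 KiB boundaries for objects of 1 KiB or more
    # (see nc_set_alignment in the netcdf-c documentation for semantics).
    netCDF4.set_alignment(1024, 4096)
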
=====================================
debian/changelog
=====================================
@@ -1,8 +1,10 @@
-netcdf4-python (1.6.0-2) UNRELEASED; urgency=medium
+netcdf4-python (1.6.1-1) unstable; urgency=medium
+ * New upstream release.
* Set NO_PLUGINS environment variable to fix test failures.
+ * Update lintian overrides.
- -- Bas Couwenberg <sebastic@debian.org> Sun, 26 Jun 2022 06:52:21 +0200
+ -- Bas Couwenberg <sebastic@debian.org> Fri, 16 Sep 2022 06:56:16 +0200
netcdf4-python (1.6.0-1) unstable; urgency=medium
=====================================
debian/python3-netcdf4.lintian-overrides
=====================================
@@ -0,0 +1,3 @@
+# False positive, lat/lon
+spelling-error-in-binary lon long *
+
=====================================
docs/index.html
=====================================
@@ -21,7 +21,7 @@
<h2>Contents</h2>
<ul>
- <li><a href="#version-160">Version 1.6.0</a></li>
+ <li><a href="#version-161">Version 1.6.1</a></li>
</ul></li>
<li><a href="#introduction">Introduction</a>
<ul>
@@ -150,6 +150,18 @@
<li>
<a class="function" href="#Dataset.tocdl">tocdl</a>
</li>
+ <li>
+ <a class="function" href="#Dataset.has_blosc_filter">has_blosc_filter</a>
+ </li>
+ <li>
+ <a class="function" href="#Dataset.has_zstd_filter">has_zstd_filter</a>
+ </li>
+ <li>
+ <a class="function" href="#Dataset.has_bzip2_filter">has_bzip2_filter</a>
+ </li>
+ <li>
+ <a class="function" href="#Dataset.has_szip_filter">has_szip_filter</a>
+ </li>
<li>
<a class="variable" href="#Dataset.name">name</a>
</li>
@@ -444,6 +456,12 @@
<li>
<a class="function" href="#set_chunk_cache">set_chunk_cache</a>
</li>
+ <li>
+ <a class="function" href="#set_alignment">set_alignment</a>
+ </li>
+ <li>
+ <a class="function" href="#get_alignment">get_alignment</a>
+ </li>
</ul>
@@ -460,7 +478,7 @@
<h1 class="modulename">
netCDF4 </h1>
- <div class="docstring"><h2 id="version-160">Version 1.6.0</h2>
+ <div class="docstring"><h2 id="version-161">Version 1.6.1</h2>
<h1 id="introduction">Introduction</h1>
@@ -521,10 +539,10 @@ If you go this route, set <code>USE_NCCONFIG</code> and <code>USE_SETUPCFG</code
If the dependencies are not found
in any of the paths specified by environment variables, then standard locations
(such as <code>/usr</code> and <code>/usr/local</code>) are searched.</li>
-<li>if the env var <code>NETCDF_PLUGIN_DIR</code> is set to point to the location netcdf-c compression
-plugin shared objects, they will be installed inside the package. In this
+<li>if the env var <code>NETCDF_PLUGIN_DIR</code> is set to point to the location of the netcdf-c compression
+plugins built by netcdf >= 4.9.0, they will be installed inside the package. In this
case <code>HDF5_PLUGIN_PATH</code> will be set to the package installation path on import,
-so the extra compression algorithms available in netcdf-c 4.9.0 will automatically
+so the extra compression algorithms available in netcdf-c >= 4.9.0 will automatically
be available. Otherwise, the user will have to set <code>HDF5_PLUGIN_PATH</code> explicitly
to have access to the extra compression plugins.</li>
<li>run <code>python setup.py build</code>, then <code>python setup.py install</code> (as root if
@@ -1624,7 +1642,7 @@ approaches.</p>
the parallel IO example, which is in <code>examples/mpi_example.py</code>.
Unit tests are in the <code>test</code> directory.</p>
-<p><strong>contact</strong>: Jeffrey Whitaker <a href="mailto:jeffrey.s.whitaker@noaa.gov">jeffrey.s.whitaker@noaa.gov</a></p>
+<p><strong>contact</strong>: Jeffrey Whitaker <a href="mailto:jeffrey.s.whitaker@noaa.gov">jeffrey.s.whitaker@noaa.gov</a></p>
<p><strong>copyright</strong>: 2008 by Jeffrey Whitaker.</p>
@@ -1651,10 +1669,13 @@ Unit tests are in the <code>test</code> directory.</p>
<span class="n">__has_bzip2_support__</span><span class="p">,</span> <span class="n">__has_blosc_support__</span><span class="p">,</span> <span class="n">__has_szip_support__</span><span class="p">)</span>
<span class="kn">import</span> <span class="nn">os</span>
<span class="n">__all__</span> <span class="o">=</span>\
-<span class="p">[</span><span class="s1">'Dataset'</span><span class="p">,</span><span class="s1">'Variable'</span><span class="p">,</span><span class="s1">'Dimension'</span><span class="p">,</span><span class="s1">'Group'</span><span class="p">,</span><span class="s1">'MFDataset'</span><span class="p">,</span><span class="s1">'MFTime'</span><span class="p">,</span><span class="s1">'CompoundType'</span><span class="p">,</span><span class="s1">'VLType'</span><span class="p">,</span><span class="s1">'date2num'</span><span class="p">,</span><span class="s1">'num2date'</span><span class="p">,</span><span class="s1">'date2index'</span><span class="p">,</span><span class="s1">'stringtochar'</span><span class="p">,</span><span class="s1">'chartostring'</span><span class="p">,</span><span class="s1">'stringtoarr'</span><span class="p">,</span><span class="s1">'getlibversion'</span><span class="p">,</span><span class="s1">'EnumType'</span><span class="p">,</span><span class="s1">'get_chunk_cache'</span><span class="p">,</span><span class="s1">'set_chunk_cache'</span><span class="p">]</span>
-<span class="c1"># if HDF5_PLUGIN_PATH not set, point to package path if libh5noop.so exists there</span>
-<span class="k">if</span> <span class="s1">'HDF5_PLUGIN_PATH'</span> <span class="ow">not</span> <span class="ow">in</span> <span class="n">os</span><span class="o">.</span><span class="n">environ</span> <span class="ow">and</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">exists</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">__path__</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span><span class="s1">'libh5noop.so'</span><span class="p">)):</span>
- <span class="n">os</span><span class="o">.</span><span class="n">environ</span><span class="p">[</span><span class="s1">'HDF5_PLUGIN_PATH'</span><span class="p">]</span><span class="o">=</span><span class="n">__path__</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
+<span class="p">[</span><span class="s1">'Dataset'</span><span class="p">,</span><span class="s1">'Variable'</span><span class="p">,</span><span class="s1">'Dimension'</span><span class="p">,</span><span class="s1">'Group'</span><span class="p">,</span><span class="s1">'MFDataset'</span><span class="p">,</span><span class="s1">'MFTime'</span><span class="p">,</span><span class="s1">'CompoundType'</span><span class="p">,</span><span class="s1">'VLType'</span><span class="p">,</span><span class="s1">'date2num'</span><span class="p">,</span><span class="s1">'num2date'</span><span class="p">,</span><span class="s1">'date2index'</span><span class="p">,</span><span class="s1">'stringtochar'</span><span class="p">,</span><span class="s1">'chartostring'</span><span class="p">,</span><span class="s1">'stringtoarr'</span><span class="p">,</span><span class="s1">'getlibversion'</span><span class="p">,</span><span class="s1">'EnumType'</span><span class="p">,</span><span class="s1">'get_chunk_cache'</span><span class="p">,</span><span class="s1">'set_chunk_cache'</span><span class="p">,</span><span class="s1">'set_alignment'</span><span class="p">,</span><span class="s1">'get_alignment'</span><span class="p">]</span>
+<span class="c1"># if HDF5_PLUGIN_PATH not set, point to package path if plugins live there</span>
+<span class="n">pluginpath</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">__path__</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span><span class="s1">'plugins'</span><span class="p">)</span>
+<span class="k">if</span> <span class="s1">'HDF5_PLUGIN_PATH'</span> <span class="ow">not</span> <span class="ow">in</span> <span class="n">os</span><span class="o">.</span><span class="n">environ</span> <span class="ow">and</span>\
+ <span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">exists</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">pluginpath</span><span class="p">,</span><span class="s1">'lib__nczhdf5filters.so'</span><span class="p">))</span> <span class="ow">or</span>\
+ <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">exists</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">pluginpath</span><span class="p">,</span><span class="s1">'lib__nczhdf5filters.dylib'</span><span class="p">))):</span>
+ <span class="n">os</span><span class="o">.</span><span class="n">environ</span><span class="p">[</span><span class="s1">'HDF5_PLUGIN_PATH'</span><span class="p">]</span><span class="o">=</span><span class="n">pluginpath</span>
</pre></div>
</details>
@@ -2648,6 +2669,66 @@ to be installed and in <code>$PATH</code>.</p>
</div>
+ </div>
+ <div id="Dataset.has_blosc_filter" class="classattr">
+ <div class="attr function"><a class="headerlink" href="#Dataset.has_blosc_filter">#  </a>
+
+
+ <span class="def">def</span>
+ <span class="name">has_blosc_filter</span><span class="signature">(unknown)</span>:
+ </div>
+
+
+ <div class="docstring"><p><strong><code>has_blosc_filter(self)</code></strong>
+returns True if blosc compression filter is available</p>
+</div>
+
+
+ </div>
+ <div id="Dataset.has_zstd_filter" class="classattr">
+ <div class="attr function"><a class="headerlink" href="#Dataset.has_zstd_filter">#  </a>
+
+
+ <span class="def">def</span>
+ <span class="name">has_zstd_filter</span><span class="signature">(unknown)</span>:
+ </div>
+
+
+ <div class="docstring"><p><strong><code>has_zstd_filter(self)</code></strong>
+returns True if zstd compression filter is available</p>
+</div>
+
+
+ </div>
+ <div id="Dataset.has_bzip2_filter" class="classattr">
+ <div class="attr function"><a class="headerlink" href="#Dataset.has_bzip2_filter">#  </a>
+
+
+ <span class="def">def</span>
+ <span class="name">has_bzip2_filter</span><span class="signature">(unknown)</span>:
+ </div>
+
+
+ <div class="docstring"><p><strong><code>has_bzip2_filter(self)</code></strong>
+returns True if bzip2 compression filter is available</p>
+</div>
+
+
+ </div>
+ <div id="Dataset.has_szip_filter" class="classattr">
+ <div class="attr function"><a class="headerlink" href="#Dataset.has_szip_filter">#  </a>
+
+
+ <span class="def">def</span>
+ <span class="name">has_szip_filter</span><span class="signature">(unknown)</span>:
+ </div>
+
+
+ <div class="docstring"><p><strong><code>has_szip_filter(self)</code></strong>
+returns True if szip compression filter is available</p>
+</div>
+
+
</div>
<div id="Dataset.name" class="classattr">
<div class="attr variable"><a class="headerlink" href="#Dataset.name">#  </a>
@@ -3901,6 +3982,10 @@ instances, raises IOError.</p>
<dd id="Group.get_variables_by_attributes" class="function"><a href="#Dataset.get_variables_by_attributes">get_variables_by_attributes</a></dd>
<dd id="Group.fromcdl" class="function"><a href="#Dataset.fromcdl">fromcdl</a></dd>
<dd id="Group.tocdl" class="function"><a href="#Dataset.tocdl">tocdl</a></dd>
+ <dd id="Group.has_blosc_filter" class="function"><a href="#Dataset.has_blosc_filter">has_blosc_filter</a></dd>
+ <dd id="Group.has_zstd_filter" class="function"><a href="#Dataset.has_zstd_filter">has_zstd_filter</a></dd>
+ <dd id="Group.has_bzip2_filter" class="function"><a href="#Dataset.has_bzip2_filter">has_bzip2_filter</a></dd>
+ <dd id="Group.has_szip_filter" class="function"><a href="#Dataset.has_szip_filter">has_szip_filter</a></dd>
<dd id="Group.name" class="variable"><a href="#Dataset.name">name</a></dd>
<dd id="Group.groups" class="variable"><a href="#Dataset.groups">groups</a></dd>
<dd id="Group.dimensions" class="variable"><a href="#Dataset.dimensions">dimensions</a></dd>
@@ -4067,6 +4152,10 @@ variables with an aggregation dimension and all global attributes.</p>
<dd id="MFDataset.get_variables_by_attributes" class="function"><a href="#Dataset.get_variables_by_attributes">get_variables_by_attributes</a></dd>
<dd id="MFDataset.fromcdl" class="function"><a href="#Dataset.fromcdl">fromcdl</a></dd>
<dd id="MFDataset.tocdl" class="function"><a href="#Dataset.tocdl">tocdl</a></dd>
+ <dd id="MFDataset.has_blosc_filter" class="function"><a href="#Dataset.has_blosc_filter">has_blosc_filter</a></dd>
+ <dd id="MFDataset.has_zstd_filter" class="function"><a href="#Dataset.has_zstd_filter">has_zstd_filter</a></dd>
+ <dd id="MFDataset.has_bzip2_filter" class="function"><a href="#Dataset.has_bzip2_filter">has_bzip2_filter</a></dd>
+ <dd id="MFDataset.has_szip_filter" class="function"><a href="#Dataset.has_szip_filter">has_szip_filter</a></dd>
<dd id="MFDataset.name" class="variable"><a href="#Dataset.name">name</a></dd>
<dd id="MFDataset.groups" class="variable"><a href="#Dataset.groups">groups</a></dd>
<dd id="MFDataset.dimensions" class="variable"><a href="#Dataset.dimensions">dimensions</a></dd>
@@ -4333,7 +4422,7 @@ VLEN data type.</p>
</div>
- <div class="docstring"><p>date2num(dates, units, calendar=None, has_year_zero=None)</p>
+ <div class="docstring"><p>date2num(dates, units, calendar=None, has_year_zero=None, longdouble=False)</p>
<p>Return numeric time values given datetime objects. The units
of the numeric time values are described by the <strong>units</strong> argument
@@ -4378,6 +4467,12 @@ always exists and the has_year_zero kwarg is ignored.
This kwarg is not needed to define calendar systems allowed by CF
(the calendar-specific defaults do this).</p>
+<p><strong>longdouble</strong>: If set True, output is in the long double float type
+(numpy.float128) instead of float (numpy.float64), allowing microsecond
+accuracy when converting a time value to a date and back again. Otherwise
+this is only possible if the discretization of the time variable is an
+integer multiple of the units.</p>
+
<p>returns a numeric time value, or an array of numeric time values
with approximately 1 microsecond accuracy.</p>
</div>
@@ -4725,6 +4820,47 @@ details.</p>
</div>
+ </section>
+ <section id="set_alignment">
+ <div class="attr function"><a class="headerlink" href="#set_alignment">#  </a>
+
+
+ <span class="def">def</span>
+ <span class="name">set_alignment</span><span class="signature">(unknown)</span>:
+ </div>
+
+
+ <div class="docstring"><p><strong><code>set_alignment(threshold,alignment)</code></strong></p>
+
+<p>Change the HDF5 file alignment.
+See netcdf C library documentation for <code>nc_set_alignment</code> for
+details.</p>
+
+<p>This function was added in netcdf 4.9.0.</p>
+</div>
+
+
+ </section>
+ <section id="get_alignment">
+ <div class="attr function"><a class="headerlink" href="#get_alignment">#  </a>
+
+
+ <span class="def">def</span>
+ <span class="name">get_alignment</span><span class="signature">(unknown)</span>:
+ </div>
+
+
+ <div class="docstring"><p><strong><code><a href="#get_alignment">get_alignment()</a></code></strong></p>
+
+<p>return current netCDF alignment within HDF5 files in a tuple
+(threshold,alignment). See netcdf C library documentation for
+<code>nc_get_alignment</code> for details. Values can be reset with
+<code><a href="#set_alignment">set_alignment</a></code>.</p>
+
+<p>This function was added in netcdf 4.9.0.</p>
+</div>
+
+
</section>
</main>
</body>
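
The documentation above also gains a longdouble kwarg on date2num; a small
illustrative call might look like the following (numpy's extended-precision
float is platform dependent, so treat this as a sketch rather than a
guaranteed result):

    from datetime import datetime
    from netCDF4 import date2num

    dates = [datetime(2022, 9, 16, 6, 19, 2)]
    units = "microseconds since 2000-01-01 00:00:00"

    # longdouble=True asks for numpy long double output, preserving
    # microsecond accuracy on the round trip through num2date.
    t = date2num(dates, units, calendar="standard", longdouble=True)
    print(t.dtype, t)
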
=====================================
include/netCDF4.pxi
=====================================
@@ -6,7 +6,7 @@ cdef extern from "stdlib.h":
# hdf5 version info.
cdef extern from "H5public.h":
ctypedef int herr_t
- int H5get_libversion( unsigned int *majnum, unsigned int *minnum, unsigned int *relnum )
+ int H5get_libversion( unsigned int *majnum, unsigned int *minnum, unsigned int *relnum ) nogil
cdef extern from *:
ctypedef char* const_char_ptr "const char*"
@@ -126,10 +126,7 @@ cdef extern from "netcdf.h":
NC_FORMAT_DAP4
NC_FORMAT_PNETCDF
NC_FORMAT_UNDEFINED
- # Let nc__create() or nc__open() figure out
- # as suitable chunk size.
NC_SIZEHINT_DEFAULT
- # In nc__enddef(), align to the chunk size.
NC_ALIGN_CHUNK
# 'size' argument to ncdimdef for an unlimited dimension
NC_UNLIMITED
@@ -218,10 +215,8 @@ cdef extern from "netcdf.h":
NC_ENDIAN_BIG
const_char_ptr *nc_inq_libvers() nogil
const_char_ptr *nc_strerror(int ncerr)
- int nc_create(char *path, int cmode, int *ncidp)
- int nc__create(char *path, int cmode, size_t initialsz, size_t *chunksizehintp, int *ncidp)
- int nc_open(char *path, int mode, int *ncidp)
- int nc__open(char *path, int mode, size_t *chunksizehintp, int *ncidp)
+ int nc_create(char *path, int cmode, int *ncidp) nogil
+ int nc_open(char *path, int mode, int *ncidp) nogil
int nc_inq_path(int ncid, size_t *pathlen, char *path) nogil
int nc_inq_format_extended(int ncid, int *formatp, int* modep) nogil
int nc_inq_ncid(int ncid, char *name, int *grp_ncid) nogil
@@ -230,13 +225,13 @@ cdef extern from "netcdf.h":
int nc_inq_grp_parent(int ncid, int *parent_ncid) nogil
int nc_inq_varids(int ncid, int *nvars, int *varids) nogil
int nc_inq_dimids(int ncid, int *ndims, int *dimids, int include_parents) nogil
- int nc_def_grp(int parent_ncid, char *name, int *new_ncid)
- int nc_def_compound(int ncid, size_t size, char *name, nc_type *typeidp)
+ int nc_def_grp(int parent_ncid, char *name, int *new_ncid) nogil
+ int nc_def_compound(int ncid, size_t size, char *name, nc_type *typeidp) nogil
int nc_insert_compound(int ncid, nc_type xtype, char *name,
- size_t offset, nc_type field_typeid)
+ size_t offset, nc_type field_typeid) nogil
int nc_insert_array_compound(int ncid, nc_type xtype, char *name,
size_t offset, nc_type field_typeid,
- int ndims, int *dim_sizes)
+ int ndims, int *dim_sizes) nogil
int nc_inq_type(int ncid, nc_type xtype, char *name, size_t *size) nogil
int nc_inq_compound(int ncid, nc_type xtype, char *name, size_t *size,
size_t *nfieldsp) nogil
@@ -258,83 +253,81 @@ cdef extern from "netcdf.h":
int *ndimsp) nogil
int nc_inq_compound_fielddim_sizes(int ncid, nc_type xtype, int fieldid,
int *dim_sizes) nogil
- int nc_def_vlen(int ncid, char *name, nc_type base_typeid, nc_type *xtypep)
+ int nc_def_vlen(int ncid, char *name, nc_type base_typeid, nc_type *xtypep) nogil
int nc_inq_vlen(int ncid, nc_type xtype, char *name, size_t *datum_sizep,
nc_type *base_nc_typep) nogil
int nc_inq_user_type(int ncid, nc_type xtype, char *name, size_t *size,
nc_type *base_nc_typep, size_t *nfieldsp, int *classp) nogil
int nc_inq_typeids(int ncid, int *ntypes, int *typeids) nogil
int nc_put_att(int ncid, int varid, char *name, nc_type xtype,
- size_t len, void *op)
+ size_t len, void *op) nogil
int nc_get_att(int ncid, int varid, char *name, void *ip) nogil
int nc_get_att_string(int ncid, int varid, char *name, char **ip) nogil
int nc_put_att_string(int ncid, int varid, char *name, size_t len, char **op) nogil
- int nc_def_opaque(int ncid, size_t size, char *name, nc_type *xtypep)
- int nc_inq_opaque(int ncid, nc_type xtype, char *name, size_t *sizep)
+ int nc_def_opaque(int ncid, size_t size, char *name, nc_type *xtypep) nogil
+ int nc_inq_opaque(int ncid, nc_type xtype, char *name, size_t *sizep) nogil
int nc_put_att_opaque(int ncid, int varid, char *name,
- size_t len, void *op)
+ size_t len, void *op) nogil
int nc_get_att_opaque(int ncid, int varid, char *name,
- void *ip)
+ void *ip) nogil
int nc_put_cmp_att_opaque(int ncid, nc_type xtype, int fieldid,
- char *name, size_t len, void *op)
+ char *name, size_t len, void *op) nogil
int nc_get_cmp_att_opaque(int ncid, nc_type xtype, int fieldid,
- char *name, void *ip)
+ char *name, void *ip) nogil
int nc_put_var1(int ncid, int varid, size_t *indexp,
- void *op)
+ void *op) nogil
int nc_get_var1(int ncid, int varid, size_t *indexp,
- void *ip)
+ void *ip) nogil
int nc_put_vara(int ncid, int varid, size_t *startp,
- size_t *countp, void *op)
+ size_t *countp, void *op) nogil
int nc_get_vara(int ncid, int varid, size_t *startp,
size_t *countp, void *ip) nogil
int nc_put_vars(int ncid, int varid, size_t *startp,
size_t *countp, ptrdiff_t *stridep,
- void *op)
+ void *op) nogil
int nc_get_vars(int ncid, int varid, size_t *startp,
size_t *countp, ptrdiff_t *stridep,
void *ip) nogil
int nc_put_varm(int ncid, int varid, size_t *startp,
size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, void *op)
+ ptrdiff_t *imapp, void *op) nogil
int nc_get_varm(int ncid, int varid, size_t *startp,
size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, void *ip)
- int nc_put_var(int ncid, int varid, void *op)
- int nc_get_var(int ncid, int varid, void *ip)
+ ptrdiff_t *imapp, void *ip) nogil
+ int nc_put_var(int ncid, int varid, void *op) nogil
+ int nc_get_var(int ncid, int varid, void *ip) nogil
int nc_def_var_deflate(int ncid, int varid, int shuffle, int deflate,
- int deflate_level)
- int nc_def_var_fletcher32(int ncid, int varid, int fletcher32)
+ int deflate_level) nogil
+ int nc_def_var_fletcher32(int ncid, int varid, int fletcher32) nogil
int nc_inq_var_fletcher32(int ncid, int varid, int *fletcher32p) nogil
- int nc_def_var_chunking(int ncid, int varid, int contiguous, size_t *chunksizesp)
- int nc_def_var_fill(int ncid, int varid, int no_fill, void *fill_value)
- int nc_def_var_endian(int ncid, int varid, int endian)
+ int nc_def_var_chunking(int ncid, int varid, int contiguous, size_t *chunksizesp) nogil
+ int nc_def_var_fill(int ncid, int varid, int no_fill, void *fill_value) nogil
+ int nc_def_var_endian(int ncid, int varid, int endian) nogil
int nc_inq_var_chunking(int ncid, int varid, int *contiguousp, size_t *chunksizesp) nogil
int nc_inq_var_deflate(int ncid, int varid, int *shufflep,
int *deflatep, int *deflate_levelp) nogil
int nc_inq_var_fill(int ncid, int varid, int *no_fill, void *fill_value) nogil
int nc_inq_var_endian(int ncid, int varid, int *endianp) nogil
- int nc_set_fill(int ncid, int fillmode, int *old_modep)
- int nc_set_default_format(int format, int *old_formatp)
- int nc_redef(int ncid)
- int nc__enddef(int ncid, size_t h_minfree, size_t v_align,
- size_t v_minfree, size_t r_align)
- int nc_enddef(int ncid)
- int nc_sync(int ncid)
- int nc_abort(int ncid)
- int nc_close(int ncid)
+ int nc_set_fill(int ncid, int fillmode, int *old_modep) nogil
+ int nc_set_default_format(int format, int *old_formatp) nogil
+ int nc_redef(int ncid) nogil
+ int nc_enddef(int ncid) nogil
+ int nc_sync(int ncid) nogil
+ int nc_abort(int ncid) nogil
+ int nc_close(int ncid) nogil
int nc_inq(int ncid, int *ndimsp, int *nvarsp, int *nattsp, int *unlimdimidp) nogil
- int nc_inq_ndims(int ncid, int *ndimsp) nogil
+ int nc_inq_ndims(int ncid, int *ndimsp) nogil
int nc_inq_nvars(int ncid, int *nvarsp) nogil
- int nc_inq_natts(int ncid, int *nattsp) nogil
+ int nc_inq_natts(int ncid, int *nattsp) nogil
int nc_inq_unlimdim(int ncid, int *unlimdimidp) nogil
int nc_inq_unlimdims(int ncid, int *nunlimdimsp, int *unlimdimidsp) nogil
int nc_inq_format(int ncid, int *formatp) nogil
- int nc_def_dim(int ncid, char *name, size_t len, int *idp)
+ int nc_def_dim(int ncid, char *name, size_t len, int *idp) nogil
int nc_inq_dimid(int ncid, char *name, int *idp) nogil
int nc_inq_dim(int ncid, int dimid, char *name, size_t *lenp) nogil
int nc_inq_dimname(int ncid, int dimid, char *name) nogil
int nc_inq_dimlen(int ncid, int dimid, size_t *lenp) nogil
- int nc_rename_dim(int ncid, int dimid, char *name)
+ int nc_rename_dim(int ncid, int dimid, char *name) nogil
int nc_inq_att(int ncid, int varid, char *name,
nc_type *xtypep, size_t *lenp) nogil
int nc_inq_attid(int ncid, int varid, char *name, int *idp) nogil
@@ -342,47 +335,13 @@ cdef extern from "netcdf.h":
int nc_inq_attlen(int ncid, int varid, char *name, size_t *lenp) nogil
int nc_inq_attname(int ncid, int varid, int attnum, char *name) nogil
int nc_copy_att(int ncid_in, int varid_in, char *name, int ncid_out, int varid_out)
- int nc_rename_att(int ncid, int varid, char *name, char *newname)
- int nc_del_att(int ncid, int varid, char *name)
+ int nc_rename_att(int ncid, int varid, char *name, char *newname) nogil
+ int nc_del_att(int ncid, int varid, char *name) nogil
int nc_put_att_text(int ncid, int varid, char *name,
- size_t len, char *op)
+ size_t len, char *op) nogil
int nc_get_att_text(int ncid, int varid, char *name, char *ip) nogil
- int nc_put_att_uchar(int ncid, int varid, char *name, nc_type xtype,
- size_t len, unsigned char *op)
- int nc_get_att_uchar(int ncid, int varid, char *name, unsigned char *ip)
- int nc_put_att_schar(int ncid, int varid, char *name, nc_type xtype,
- size_t len, signed char *op)
- int nc_get_att_schar(int ncid, int varid, char *name, signed char *ip)
- int nc_put_att_short(int ncid, int varid, char *name, nc_type xtype,
- size_t len, short *op)
- int nc_get_att_short(int ncid, int varid, char *name, short *ip)
- int nc_put_att_int(int ncid, int varid, char *name, nc_type xtype,
- size_t len, int *op)
- int nc_get_att_int(int ncid, int varid, char *name, int *ip)
- int nc_put_att_long(int ncid, int varid, char *name, nc_type xtype,
- size_t len, long *op)
- int nc_get_att_long(int ncid, int varid, char *name, long *ip)
- int nc_put_att_float(int ncid, int varid, char *name, nc_type xtype,
- size_t len, float *op)
- int nc_get_att_float(int ncid, int varid, char *name, float *ip)
- int nc_put_att_double(int ncid, int varid, char *name, nc_type xtype,
- size_t len, double *op)
- int nc_get_att_double(int ncid, int varid, char *name, double *ip)
- int nc_put_att_ushort(int ncid, int varid, char *name, nc_type xtype,
- size_t len, unsigned short *op)
- int nc_get_att_ushort(int ncid, int varid, char *name, unsigned short *ip)
- int nc_put_att_uint(int ncid, int varid, char *name, nc_type xtype,
- size_t len, unsigned int *op)
- int nc_get_att_uint(int ncid, int varid, char *name, unsigned int *ip)
- int nc_put_att_longlong(int ncid, int varid, char *name, nc_type xtype,
- size_t len, long long *op)
- int nc_get_att_longlong(int ncid, int varid, char *name, long long *ip)
- int nc_put_att_ulonglong(int ncid, int varid, char *name, nc_type xtype,
- size_t len, unsigned long long *op)
- int nc_get_att_ulonglong(int ncid, int varid, char *name,
- unsigned long long *ip)
int nc_def_var(int ncid, char *name, nc_type xtype, int ndims,
- int *dimidsp, int *varidp)
+ int *dimidsp, int *varidp) nogil
int nc_inq_var(int ncid, int varid, char *name, nc_type *xtypep,
int *ndimsp, int *dimidsp, int *nattsp) nogil
int nc_inq_varid(int ncid, char *name, int *varidp) nogil
@@ -391,297 +350,17 @@ cdef extern from "netcdf.h":
int nc_inq_varndims(int ncid, int varid, int *ndimsp) nogil
int nc_inq_vardimid(int ncid, int varid, int *dimidsp) nogil
int nc_inq_varnatts(int ncid, int varid, int *nattsp) nogil
- int nc_rename_var(int ncid, int varid, char *name)
- int nc_copy_var(int ncid_in, int varid, int ncid_out)
- int nc_put_var1_text(int ncid, int varid, size_t *indexp, char *op)
- int nc_get_var1_text(int ncid, int varid, size_t *indexp, char *ip)
- int nc_put_var1_uchar(int ncid, int varid, size_t *indexp,
- unsigned char *op)
- int nc_get_var1_uchar(int ncid, int varid, size_t *indexp,
- unsigned char *ip)
- int nc_put_var1_schar(int ncid, int varid, size_t *indexp,
- signed char *op)
- int nc_get_var1_schar(int ncid, int varid, size_t *indexp,
- signed char *ip)
- int nc_put_var1_short(int ncid, int varid, size_t *indexp,
- short *op)
- int nc_get_var1_short(int ncid, int varid, size_t *indexp,
- short *ip)
- int nc_put_var1_int(int ncid, int varid, size_t *indexp, int *op)
- int nc_get_var1_int(int ncid, int varid, size_t *indexp, int *ip)
- int nc_put_var1_long(int ncid, int varid, size_t *indexp, long *op)
- int nc_get_var1_long(int ncid, int varid, size_t *indexp, long *ip)
- int nc_put_var1_float(int ncid, int varid, size_t *indexp, float *op)
- int nc_get_var1_float(int ncid, int varid, size_t *indexp, float *ip)
- int nc_put_var1_double(int ncid, int varid, size_t *indexp, double *op)
- int nc_get_var1_double(int ncid, int varid, size_t *indexp, double *ip)
- int nc_put_var1_ubyte(int ncid, int varid, size_t *indexp,
- unsigned char *op)
- int nc_get_var1_ubyte(int ncid, int varid, size_t *indexp,
- unsigned char *ip)
- int nc_put_var1_ushort(int ncid, int varid, size_t *indexp,
- unsigned short *op)
- int nc_get_var1_ushort(int ncid, int varid, size_t *indexp,
- unsigned short *ip)
- int nc_put_var1_uint(int ncid, int varid, size_t *indexp,
- unsigned int *op)
- int nc_get_var1_uint(int ncid, int varid, size_t *indexp,
- unsigned int *ip)
- int nc_put_var1_longlong(int ncid, int varid, size_t *indexp,
- long long *op)
- int nc_get_var1_longlong(int ncid, int varid, size_t *indexp,
- long long *ip)
- int nc_put_var1_ulonglong(int ncid, int varid, size_t *indexp,
- unsigned long long *op)
- int nc_get_var1_ulonglong(int ncid, int varid, size_t *indexp,
- unsigned long long *ip)
- int nc_put_vara_text(int ncid, int varid,
- size_t *startp, size_t *countp, char *op)
- int nc_get_vara_text(int ncid, int varid,
- size_t *startp, size_t *countp, char *ip)
- int nc_put_vara_uchar(int ncid, int varid,
- size_t *startp, size_t *countp, unsigned char *op)
- int nc_get_vara_uchar(int ncid, int varid, size_t *startp,
- size_t *countp, unsigned char *ip)
- int nc_put_vara_schar(int ncid, int varid, size_t *startp,
- size_t *countp, signed char *op)
- int nc_get_vara_schar(int ncid, int varid, size_t *startp,
- size_t *countp, signed char *ip)
- int nc_put_vara_short(int ncid, int varid, size_t *startp,
- size_t *countp, short *op)
- int nc_get_vara_short(int ncid, int varid, size_t *startp,
- size_t *countp, short *ip)
- int nc_put_vara_int(int ncid, int varid, size_t *startp,
- size_t *countp, int *op)
- int nc_get_vara_int(int ncid, int varid, size_t *startp,
- size_t *countp, int *ip)
- int nc_put_vara_long(int ncid, int varid, size_t *startp,
- size_t *countp, long *op)
- int nc_get_vara_long(int ncid, int varid,
- size_t *startp, size_t *countp, long *ip)
- int nc_put_vara_float(int ncid, int varid,
- size_t *startp, size_t *countp, float *op)
- int nc_get_vara_float(int ncid, int varid,
- size_t *startp, size_t *countp, float *ip)
- int nc_put_vara_double(int ncid, int varid, size_t *startp,
- size_t *countp, double *op)
- int nc_get_vara_double(int ncid, int varid, size_t *startp,
- size_t *countp, double *ip)
- int nc_put_vara_ubyte(int ncid, int varid, size_t *startp,
- size_t *countp, unsigned char *op)
- int nc_get_vara_ubyte(int ncid, int varid, size_t *startp,
- size_t *countp, unsigned char *ip)
- int nc_put_vara_ushort(int ncid, int varid, size_t *startp,
- size_t *countp, unsigned short *op)
- int nc_get_vara_ushort(int ncid, int varid, size_t *startp,
- size_t *countp, unsigned short *ip)
- int nc_put_vara_uint(int ncid, int varid, size_t *startp,
- size_t *countp, unsigned int *op)
- int nc_get_vara_uint(int ncid, int varid, size_t *startp,
- size_t *countp, unsigned int *ip)
- int nc_put_vara_longlong(int ncid, int varid, size_t *startp,
- size_t *countp, long long *op)
- int nc_get_vara_longlong(int ncid, int varid, size_t *startp,
- size_t *countp, long long *ip)
- int nc_put_vara_ulonglong(int ncid, int varid, size_t *startp,
- size_t *countp, unsigned long long *op)
- int nc_get_vara_ulonglong(int ncid, int varid, size_t *startp,
- size_t *countp, unsigned long long *ip)
- int nc_put_vars_text(int ncid, int varid,
- size_t *startp, size_t *countp, ptrdiff_t *stridep,
- char *op)
- int nc_get_vars_text(int ncid, int varid,
- size_t *startp, size_t *countp, ptrdiff_t *stridep,
- char *ip)
- int nc_put_vars_uchar(int ncid, int varid,
- size_t *startp, size_t *countp, ptrdiff_t *stridep,
- unsigned char *op)
- int nc_get_vars_uchar(int ncid, int varid,
- size_t *startp, size_t *countp, ptrdiff_t *stridep,
- unsigned char *ip)
- int nc_put_vars_schar(int ncid, int varid,
- size_t *startp, size_t *countp, ptrdiff_t *stridep,
- signed char *op)
- int nc_get_vars_schar(int ncid, int varid,
- size_t *startp, size_t *countp, ptrdiff_t *stridep,
- signed char *ip)
- int nc_put_vars_short(int ncid, int varid,
- size_t *startp, size_t *countp, ptrdiff_t *stridep,
- short *op)
- int nc_get_vars_short(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- short *ip)
- int nc_put_vars_int(int ncid, int varid,
- size_t *startp, size_t *countp, ptrdiff_t *stridep,
- int *op)
- int nc_get_vars_int(int ncid, int varid,
- size_t *startp, size_t *countp, ptrdiff_t *stridep,
- int *ip)
- int nc_put_vars_long(int ncid, int varid,
- size_t *startp, size_t *countp, ptrdiff_t *stridep,
- long *op)
- int nc_get_vars_long(int ncid, int varid,
- size_t *startp, size_t *countp, ptrdiff_t *stridep,
- long *ip)
- int nc_put_vars_float(int ncid, int varid,
- size_t *startp, size_t *countp, ptrdiff_t *stridep,
- float *op)
- int nc_get_vars_float(int ncid, int varid,
- size_t *startp, size_t *countp, ptrdiff_t *stridep,
- float *ip)
- int nc_put_vars_double(int ncid, int varid,
- size_t *startp, size_t *countp, ptrdiff_t *stridep,
- double *op)
- int nc_get_vars_double(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- double *ip)
- int nc_put_vars_ubyte(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- unsigned char *op)
- int nc_get_vars_ubyte(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- unsigned char *ip)
- int nc_put_vars_ushort(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- unsigned short *op)
- int nc_get_vars_ushort(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- unsigned short *ip)
- int nc_put_vars_uint(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- unsigned int *op)
- int nc_get_vars_uint(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- unsigned int *ip)
- int nc_put_vars_longlong(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- long long *op)
- int nc_get_vars_longlong(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- long long *ip)
- int nc_put_vars_ulonglong(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- unsigned long long *op)
- int nc_get_vars_ulonglong(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- unsigned long long *ip)
- int nc_put_varm_text(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, char *op)
- int nc_get_varm_text(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, char *ip)
- int nc_put_varm_uchar(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, unsigned char *op)
- int nc_get_varm_uchar(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, unsigned char *ip)
- int nc_put_varm_schar(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, signed char *op)
- int nc_get_varm_schar(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, signed char *ip)
- int nc_put_varm_short(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, short *op)
- int nc_get_varm_short(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, short *ip)
- int nc_put_varm_int(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, int *op)
- int nc_get_varm_int(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, int *ip)
- int nc_put_varm_long(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, long *op)
- int nc_get_varm_long(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, long *ip)
- int nc_put_varm_float(int ncid, int varid,size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, float *op)
- int nc_get_varm_float(int ncid, int varid,size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, float *ip)
- int nc_put_varm_double(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t *imapp, double *op)
- int nc_get_varm_double(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t * imapp, double *ip)
- int nc_put_varm_ubyte(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t * imapp, unsigned char *op)
- int nc_get_varm_ubyte(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t * imapp, unsigned char *ip)
- int nc_put_varm_ushort(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t * imapp, unsigned short *op)
- int nc_get_varm_ushort(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t * imapp, unsigned short *ip)
- int nc_put_varm_uint(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t * imapp, unsigned int *op)
- int nc_get_varm_uint(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t * imapp, unsigned int *ip)
- int nc_put_varm_longlong(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t * imapp, long long *op)
- int nc_get_varm_longlong(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t * imapp, long long *ip)
- int nc_put_varm_ulonglong(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t * imapp, unsigned long long *op)
- int nc_get_varm_ulonglong(int ncid, int varid, size_t *startp,
- size_t *countp, ptrdiff_t *stridep,
- ptrdiff_t * imapp, unsigned long long *ip)
- int nc_put_var_text(int ncid, int varid, char *op)
- int nc_get_var_text(int ncid, int varid, char *ip)
- int nc_put_var_uchar(int ncid, int varid, unsigned char *op)
- int nc_get_var_uchar(int ncid, int varid, unsigned char *ip)
- int nc_put_var_schar(int ncid, int varid, signed char *op)
- int nc_get_var_schar(int ncid, int varid, signed char *ip)
- int nc_put_var_short(int ncid, int varid, short *op)
- int nc_get_var_short(int ncid, int varid, short *ip)
- int nc_put_var_int(int ncid, int varid, int *op)
- int nc_get_var_int(int ncid, int varid, int *ip)
- int nc_put_var_long(int ncid, int varid, long *op)
- int nc_get_var_long(int ncid, int varid, long *ip)
- int nc_put_var_float(int ncid, int varid, float *op)
- int nc_get_var_float(int ncid, int varid, float *ip)
- int nc_put_var_double(int ncid, int varid, double *op)
- int nc_get_var_double(int ncid, int varid, double *ip)
- int nc_put_var_ubyte(int ncid, int varid, unsigned char *op)
- int nc_get_var_ubyte(int ncid, int varid, unsigned char *ip)
- int nc_put_var_ushort(int ncid, int varid, unsigned short *op)
- int nc_get_var_ushort(int ncid, int varid, unsigned short *ip)
- int nc_put_var_uint(int ncid, int varid, unsigned int *op)
- int nc_get_var_uint(int ncid, int varid, unsigned int *ip)
- int nc_put_var_longlong(int ncid, int varid, long long *op)
- int nc_get_var_longlong(int ncid, int varid, long long *ip)
- int nc_put_var_ulonglong(int ncid, int varid, unsigned long long *op)
- int nc_get_var_ulonglong(int ncid, int varid, unsigned long long *ip)
- # set logging verbosity level.
- void nc_set_log_level(int new_level)
- int nc_show_metadata(int ncid)
- int nc_free_vlen(nc_vlen_t *vl)
- int nc_free_vlens(size_t len, nc_vlen_t *vl)
- int nc_free_string(size_t len, char **data)
- int nc_set_chunk_cache(size_t size, size_t nelems, float preemption)
- int nc_get_chunk_cache(size_t *sizep, size_t *nelemsp, float *preemptionp)
- int nc_set_var_chunk_cache(int ncid, int varid, size_t size, size_t nelems, float preemption)
+ int nc_rename_var(int ncid, int varid, char *name) nogil
+ int nc_free_vlen(nc_vlen_t *vl) nogil
+ int nc_free_vlens(size_t len, nc_vlen_t *vl) nogil
+ int nc_free_string(size_t len, char **data) nogil
+ int nc_get_chunk_cache(size_t *sizep, size_t *nelemsp, float *preemptionp) nogil
+ int nc_set_chunk_cache(size_t size, size_t nelems, float preemption) nogil
+ int nc_set_var_chunk_cache(int ncid, int varid, size_t size, size_t nelems, float preemption) nogil
int nc_get_var_chunk_cache(int ncid, int varid, size_t *sizep, size_t *nelemsp, float *preemptionp) nogil
- int nc_rename_grp(int grpid, char *name)
- int nc_def_enum(int ncid, nc_type base_typeid, char *name, nc_type *typeidp)
- int nc_insert_enum(int ncid, nc_type xtype, char *name, void *value)
+ int nc_rename_grp(int grpid, char *name) nogil
+ int nc_def_enum(int ncid, nc_type base_typeid, char *name, nc_type *typeidp) nogil
+ int nc_insert_enum(int ncid, nc_type xtype, char *name, void *value) nogil
int nc_inq_enum(int ncid, nc_type xtype, char *name, nc_type *base_nc_typep,\
size_t *base_sizep, size_t *num_membersp) nogil
int nc_inq_enum_member(int ncid, nc_type xtype, int idx, char *name, void *value) nogil
@@ -696,57 +375,63 @@ IF HAS_QUANTIZATION_SUPPORT:
NC_QUANTIZE_BITGROOM
NC_QUANTIZE_GRANULARBR
NC_QUANTIZE_BITROUND
- int nc_def_var_quantize(int ncid, int varid, int quantize_mode, int nsd)
+ int nc_def_var_quantize(int ncid, int varid, int quantize_mode, int nsd) nogil
int nc_inq_var_quantize(int ncid, int varid, int *quantize_modep, int *nsdp) nogil
+ cdef extern from "netcdf_filter.h":
+ int nc_inq_filter_avail(int ncid, unsigned filterid) nogil
IF HAS_SZIP_SUPPORT:
cdef extern from "netcdf.h":
- int nc_def_var_quantize(int ncid, int varid, int quantize_mode, int nsd)
+ cdef enum:
+ H5Z_FILTER_SZIP
+ int nc_def_var_quantize(int ncid, int varid, int quantize_mode, int nsd) nogil
int nc_inq_var_quantize(int ncid, int varid, int *quantize_modep, int *nsdp) nogil
- int nc_def_var_szip(int ncid, int varid, int options_mask, int pixels_per_bloc)
- int nc_inq_var_szip(int ncid, int varid, int *options_maskp, int *pixels_per_blockp)
+ int nc_def_var_szip(int ncid, int varid, int options_mask, int pixels_per_bloc) nogil
+ int nc_inq_var_szip(int ncid, int varid, int *options_maskp, int *pixels_per_blockp) nogil
IF HAS_ZSTANDARD_SUPPORT:
cdef extern from "netcdf_filter.h":
cdef enum:
- H5Z_FILTER_ZSTANDARD
- int nc_def_var_zstandard(int ncid, int varid, int level)
- int nc_inq_var_zstandard(int ncid, int varid, int* hasfilterp, int *levelp)
- int nc_inq_filter_avail(int ncid, unsigned id)
+ H5Z_FILTER_ZSTD
+ int nc_def_var_zstandard(int ncid, int varid, int level) nogil
+ int nc_inq_var_zstandard(int ncid, int varid, int* hasfilterp, int *levelp) nogil
+ int nc_inq_filter_avail(int ncid, unsigned id) nogil
IF HAS_BZIP2_SUPPORT:
cdef extern from "netcdf_filter.h":
cdef enum:
H5Z_FILTER_BZIP2
- int nc_def_var_bzip2(int ncid, int varid, int level)
- int nc_inq_var_bzip2(int ncid, int varid, int* hasfilterp, int *levelp)
+ int nc_def_var_bzip2(int ncid, int varid, int level) nogil
+ int nc_inq_var_bzip2(int ncid, int varid, int* hasfilterp, int *levelp) nogil
IF HAS_BLOSC_SUPPORT:
cdef extern from "netcdf_filter.h":
- int nc_def_var_blosc(int ncid, int varid, unsigned subcompressor, unsigned level, unsigned blocksize, unsigned addshuffle)
- int nc_inq_var_blosc(int ncid, int varid, int* hasfilterp, unsigned* subcompressorp, unsigned* levelp, unsigned* blocksizep, unsigned* addshufflep)
+ cdef enum:
+ H5Z_FILTER_BLOSC
+ int nc_def_var_blosc(int ncid, int varid, unsigned subcompressor, unsigned level, unsigned blocksize, unsigned addshuffle) nogil
+ int nc_inq_var_blosc(int ncid, int varid, int* hasfilterp, unsigned* subcompressorp, unsigned* levelp, unsigned* blocksizep, unsigned* addshufflep) nogil
IF HAS_NC_OPEN_MEM:
cdef extern from "netcdf_mem.h":
- int nc_open_mem(const char *path, int mode, size_t size, void* memory, int *ncidp)
+ int nc_open_mem(const char *path, int mode, size_t size, void* memory, int *ncidp) nogil
IF HAS_NC_CREATE_MEM:
cdef extern from "netcdf_mem.h":
- int nc_create_mem(const char *path, int mode, size_t initialize, int *ncidp);
+ int nc_create_mem(const char *path, int mode, size_t initialize, int *ncidp) nogil
ctypedef struct NC_memio:
size_t size
void* memory
int flags
- int nc_close_memio(int ncid, NC_memio* info);
+ int nc_close_memio(int ncid, NC_memio* info) nogil
IF HAS_PARALLEL4_SUPPORT or HAS_PNETCDF_SUPPORT:
cdef extern from "mpi-compat.h": pass
cdef extern from "netcdf_par.h":
ctypedef int MPI_Comm
ctypedef int MPI_Info
- int nc_create_par(char *path, int cmode, MPI_Comm comm, MPI_Info info, int *ncidp);
- int nc_open_par(char *path, int mode, MPI_Comm comm, MPI_Info info, int *ncidp);
- int nc_var_par_access(int ncid, int varid, int par_access);
+ int nc_create_par(char *path, int cmode, MPI_Comm comm, MPI_Info info, int *ncidp) nogil
+ int nc_open_par(char *path, int mode, MPI_Comm comm, MPI_Info info, int *ncidp) nogil
+ int nc_var_par_access(int ncid, int varid, int par_access) nogil
cdef enum:
NC_COLLECTIVE
NC_INDEPENDENT
@@ -756,14 +441,19 @@ IF HAS_PARALLEL4_SUPPORT or HAS_PNETCDF_SUPPORT:
NC_MPIPOSIX
NC_PNETCDF
+IF HAS_SET_ALIGNMENT:
+ cdef extern from "netcdf.h":
+ int nc_set_alignment(int threshold, int alignment)
+ int nc_get_alignment(int *threshold, int *alignment)
+
# taken from numpy.pxi in numpy 1.0rc2.
cdef extern from "numpy/arrayobject.h":
ctypedef int npy_intp
ctypedef extern class numpy.ndarray [object PyArrayObject]:
pass
- npy_intp PyArray_SIZE(ndarray arr)
- npy_intp PyArray_ISCONTIGUOUS(ndarray arr)
- npy_intp PyArray_ISALIGNED(ndarray arr)
+ npy_intp PyArray_SIZE(ndarray arr) nogil
+ npy_intp PyArray_ISCONTIGUOUS(ndarray arr) nogil
+ npy_intp PyArray_ISALIGNED(ndarray arr) nogil
void* PyArray_DATA(ndarray) nogil
char* PyArray_BYTES(ndarray) nogil
npy_intp* PyArray_STRIDES(ndarray) nogil
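
The nogil annotations added throughout this header are what let the Cython
wrapper release the GIL around netcdf-c calls (Changelog issue #1180). A
rough pure-Python sketch of what that enables -- the file name and workload
here are invented for illustration:

    import threading
    import time

    import numpy as np
    from netCDF4 import Dataset

    # Create a small file to read back from a worker thread.
    with Dataset("gil_demo.nc", "w") as ds:
        ds.createDimension("x", 2_000_000)
        ds.createVariable("v", "f8", ("x",))[:] = np.random.rand(2_000_000)

    def read_file():
        with Dataset("gil_demo.nc") as ds:
            _ = ds["v"][:]          # C-lib I/O; GIL released in 1.6.1

    # While the worker sits inside netcdf-c/HDF5 calls, the main thread can
    # keep executing Python code instead of waiting on the GIL.
    worker = threading.Thread(target=read_file)
    worker.start()
    while worker.is_alive():
        time.sleep(0.01)            # stand-in for other Python work
    worker.join()
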
=====================================
setup.py
=====================================
@@ -71,6 +71,7 @@ def check_api(inc_dirs,netcdf_lib_version):
has_zstandard = False
has_bzip2 = False
has_blosc = False
+ has_set_alignment = False
for d in inc_dirs:
try:
@@ -92,6 +93,8 @@ def check_api(inc_dirs,netcdf_lib_version):
has_cdf5_format = True
if line.startswith('nc_def_var_quantize'):
has_quantize = True
+ if line.startswith('nc_set_alignment'):
+ has_set_alignment = True
if has_nc_open_mem:
try:
@@ -141,7 +144,7 @@ def check_api(inc_dirs,netcdf_lib_version):
return has_rename_grp, has_nc_inq_path, has_nc_inq_format_extended, \
has_cdf5_format, has_nc_open_mem, has_nc_create_mem, \
has_parallel4_support, has_pnetcdf_support, has_szip_support, has_quantize, \
- has_zstandard, has_bzip2, has_blosc
+ has_zstandard, has_bzip2, has_blosc, has_set_alignment
def getnetcdfvers(libdirs):
@@ -228,7 +231,7 @@ else:
setup_cfg = 'setup.cfg'
# contents of setup.cfg will override env vars, unless
-# USE_SETUPCFG evaluates to False.
+# USE_SETUPCFG evaluates to False.
ncconfig = None
use_ncconfig = None
if USE_SETUPCFG and os.path.exists(setup_cfg):
@@ -338,7 +341,7 @@ if USE_NCCONFIG is None and use_ncconfig is not None:
elif USE_NCCONFIG is None:
# if nc-config exists, and USE_NCCONFIG not set, try to use it.
if HAS_NCCONFIG: USE_NCCONFIG=True
-#elif USE_NCCONFIG is None:
+#elif USE_NCCONFIG is None:
# USE_NCCONFIG = False # don't try to use nc-config if USE_NCCONFIG not set
try:
@@ -555,7 +558,7 @@ if 'sdist' not in sys.argv[1:] and 'clean' not in sys.argv[1:] and '--version' n
has_rename_grp, has_nc_inq_path, has_nc_inq_format_extended, \
has_cdf5_format, has_nc_open_mem, has_nc_create_mem, \
has_parallel4_support, has_pnetcdf_support, has_szip_support, has_quantize, \
- has_zstandard, has_bzip2, has_blosc = \
+ has_zstandard, has_bzip2, has_blosc, has_set_alignment = \
check_api(inc_dirs,netcdf_lib_version)
# for netcdf 4.4.x CDF5 format is always enabled.
if netcdf_lib_version is not None and\
@@ -662,6 +665,13 @@ if 'sdist' not in sys.argv[1:] and 'clean' not in sys.argv[1:] and '--version' n
sys.stdout.write('netcdf lib does not have szip compression functions\n')
f.write('DEF HAS_SZIP_SUPPORT = 0\n')
+ if has_set_alignment:
+ sys.stdout.write('netcdf lib has nc_set_alignment function\n')
+ f.write('DEF HAS_SET_ALIGNMENT = 1\n')
+ else:
+ sys.stdout.write('netcdf lib does not have nc_set_alignment function\n')
+ f.write('DEF HAS_SET_ALIGNMENT = 0\n')
+
f.close()
if has_parallel4_support or has_pnetcdf_support:
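
For context, the new HAS_SET_ALIGNMENT define comes from the same netcdf.h
scan that setup.py already performs for the other optional features. A
simplified, stand-alone restatement of that probe (hypothetical helper, not
the packaged code):

    import os

    def netcdf_has_set_alignment(inc_dirs):
        # Return True if a netcdf.h under inc_dirs declares nc_set_alignment,
        # mirroring the line-by-line check added to check_api() above.
        for d in inc_dirs:
            header = os.path.join(d, "netcdf.h")
            if not os.path.exists(header):
                continue
            with open(header) as f:
                for line in f:
                    if line.lstrip().startswith("nc_set_alignment"):
                        return True
        return False

    print(netcdf_has_set_alignment(["/usr/include"]))
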
=====================================
src/netCDF4/__init__.py
=====================================
@@ -12,7 +12,7 @@ from ._netCDF4 import (__version__, __netcdf4libversion__, __hdf5libversion__,
__has_bzip2_support__, __has_blosc_support__, __has_szip_support__)
import os
__all__ =\
-['Dataset','Variable','Dimension','Group','MFDataset','MFTime','CompoundType','VLType','date2num','num2date','date2index','stringtochar','chartostring','stringtoarr','getlibversion','EnumType','get_chunk_cache','set_chunk_cache']
+['Dataset','Variable','Dimension','Group','MFDataset','MFTime','CompoundType','VLType','date2num','num2date','date2index','stringtochar','chartostring','stringtoarr','getlibversion','EnumType','get_chunk_cache','set_chunk_cache','set_alignment','get_alignment']
# if HDF5_PLUGIN_PATH not set, point to package path if plugins live there
pluginpath = os.path.join(__path__[0],'plugins')
if 'HDF5_PLUGIN_PATH' not in os.environ and\
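
In practice the plugin-path logic above means that, for a build bundling the
netcdf-c filter plugins, HDF5_PLUGIN_PATH is filled in on import unless the
user already set it. A tiny sketch to observe the effect (the directory
layout is an assumption, not part of this diff):

    import os
    import netCDF4

    print("HDF5_PLUGIN_PATH =", os.environ.get("HDF5_PLUGIN_PATH", "<unset>"))
    print("bundled plugin dir:", os.path.join(netCDF4.__path__[0], "plugins"))
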
=====================================
src/netCDF4/_netCDF4.pyx
=====================================
@@ -1,5 +1,5 @@
"""
-Version 1.6.0
+Version 1.6.1
-------------
# Introduction
@@ -1230,7 +1230,7 @@ if sys.version_info[0:2] < (3, 7):
# Python 3.7+ guarantees order; older versions need OrderedDict
from collections import OrderedDict
-__version__ = "1.6.0"
+__version__ = "1.6.1"
# Initialize numpy
import posixpath
@@ -1265,7 +1265,8 @@ ELSE:
def _gethdf5libversion():
cdef unsigned int majorvers, minorvers, releasevers
cdef herr_t ierr
- ierr = H5get_libversion( &majorvers, &minorvers, &releasevers)
+ with nogil:
+ ierr = H5get_libversion( &majorvers, &minorvers, &releasevers)
if ierr < 0:
raise RuntimeError('error getting HDF5 library version info')
return '%d.%d.%d' % (majorvers,minorvers,releasevers)
@@ -1289,7 +1290,8 @@ details. Values can be reset with `set_chunk_cache`."""
cdef int ierr
cdef size_t sizep, nelemsp
cdef float preemptionp
- ierr = nc_get_chunk_cache(&sizep, &nelemsp, &preemptionp)
+ with nogil:
+ ierr = nc_get_chunk_cache(&sizep, &nelemsp, &preemptionp)
_ensure_nc_success(ierr)
size = sizep; nelems = nelemsp; preemption = preemptionp
return (size,nelems,preemption)
@@ -1318,9 +1320,56 @@ details."""
preemptionp = preemption
else:
preemptionp = preemption_orig
- ierr = nc_set_chunk_cache(sizep,nelemsp, preemptionp)
+ with nogil:
+ ierr = nc_set_chunk_cache(sizep,nelemsp, preemptionp)
_ensure_nc_success(ierr)
+IF HAS_SET_ALIGNMENT:
+ def get_alignment():
+ """
+ **`get_alignment()`**
+
+ return current netCDF alignment within HDF5 files in a tuple
+ (threshold,alignment). See netcdf C library documentation for
+ `nc_get_alignment` for details. Values can be reset with
+ `set_alignment`.
+
+ This function was added in netcdf 4.9.0."""
+ cdef int ierr
+ cdef int thresholdp, alignmentp
+ ierr = nc_get_alignment(&thresholdp, &alignmentp)
+ _ensure_nc_success(ierr)
+ threshold = thresholdp
+ alignment = alignmentp
+ return (threshold,alignment)
+
+ def set_alignment(threshold, alignment):
+ """
+ **`set_alignment(threshold,alignment)`**
+
+ Change the HDF5 file alignment.
+ See netcdf C library documentation for `nc_set_alignment` for
+ details.
+
+ This function was added in netcdf 4.9.0."""
+ cdef int ierr
+ cdef int thresholdp, alignmentp
+ thresholdp = threshold
+ alignmentp = alignment
+
+ ierr = nc_set_alignment(thresholdp, alignmentp)
+ _ensure_nc_success(ierr)
+ELSE:
+ def get_alignment():
+ raise RuntimeError(
+ "This function requires netcdf4 4.9.0+ to be used at compile time"
+ )
+
+ def set_alignment(threshold, alignment):
+ raise RuntimeError(
+ "This function requires netcdf4 4.9.0+ to be used at compile time"
+ )
+
__netcdf4libversion__ = getlibversion().split()[0]
__hdf5libversion__ = _gethdf5libversion()
__has_rename_grp__ = HAS_RENAME_GRP
@@ -1336,6 +1385,7 @@ __has_zstandard_support__ = HAS_ZSTANDARD_SUPPORT
__has_bzip2_support__ = HAS_BZIP2_SUPPORT
__has_blosc_support__ = HAS_BLOSC_SUPPORT
__has_szip_support__ = HAS_SZIP_SUPPORT
+__has_set_alignment__ = HAS_SET_ALIGNMENT
_needsworkaround_issue485 = __netcdf4libversion__ < "4.4.0" or \
(__netcdf4libversion__.startswith("4.4.0") and \
"-development" in __netcdf4libversion__)
@@ -1507,7 +1557,8 @@ cdef _get_att(grp, int varid, name, encoding='utf-8'):
result = [values[j].decode(encoding,errors='replace').replace('\x00','')
if values[j] else "" for j in range(att_len)]
finally:
- ierr = nc_free_string(att_len, values) # free memory in netcdf C lib
+ with nogil:
+ ierr = nc_free_string(att_len, values) # free memory in netcdf C lib
finally:
PyMem_Free(values)
@@ -1548,9 +1599,13 @@ cdef _get_att(grp, int varid, name, encoding='utf-8'):
def _set_default_format(object format='NETCDF4'):
# Private function to set the netCDF file format
+ cdef int ierr, formatid
if format not in _format_dict:
raise ValueError("unrecognized format requested")
- nc_set_default_format(_format_dict[format], NULL)
+ formatid = _format_dict[format]
+ with nogil:
+ ierr = nc_set_default_format(formatid, NULL)
+ _ensure_nc_success(ierr)
cdef _get_format(int grpid):
# Private function to get the netCDF file format
@@ -1597,22 +1652,25 @@ cdef issue485_workaround(int grpid, int varid, char* attname):
if not _needsworkaround_issue485:
return
- ierr = nc_inq_att(grpid, varid, attname, &att_type, &att_len)
+ with nogil:
+ ierr = nc_inq_att(grpid, varid, attname, &att_type, &att_len)
if ierr == NC_NOERR and att_type == NC_CHAR:
- ierr = nc_del_att(grpid, varid, attname)
+ with nogil:
+ ierr = nc_del_att(grpid, varid, attname)
_ensure_nc_success(ierr)
cdef _set_att(grp, int varid, name, value,\
nc_type xtype=-99, force_ncstring=False):
# Private function to set an attribute name/value pair
- cdef int ierr, lenarr
+ cdef int ierr, lenarr, N, grpid
cdef char *attname
cdef char *datstring
cdef char **string_ptrs
cdef ndarray value_arr
bytestr = _strencode(name)
attname = bytestr
+ grpid = grp._grpid
# put attribute value into a numpy array.
value_arr = numpy.array(value)
if value_arr.ndim > 1: # issue #841
@@ -1626,7 +1684,7 @@ be raised in the next release."""
warnings.warn(msg,FutureWarning)
# if array is 64 bit integers or
# if 64-bit datatype not supported, cast to 32 bit integers.
- fmt = _get_format(grp._grpid)
+ fmt = _get_format(grpid)
is_netcdf3 = fmt.startswith('NETCDF3') or fmt == 'NETCDF4_CLASSIC'
if value_arr.dtype.str[1:] == 'i8' and ('i8' not in _supportedtypes or\
(is_netcdf3 and fmt != 'NETCDF3_64BIT_DATA')):
@@ -1648,8 +1706,9 @@ be raised in the next release."""
if len(strings[j]) == 0:
strings[j] = _strencode('\x00')
string_ptrs[j] = strings[j]
- issue485_workaround(grp._grpid, varid, attname)
- ierr = nc_put_att_string(grp._grpid, varid, attname, N, string_ptrs)
+ issue485_workaround(grpid, varid, attname)
+ with nogil:
+ ierr = nc_put_att_string(grpid, varid, attname, N, string_ptrs)
finally:
PyMem_Free(string_ptrs)
else:
@@ -1673,12 +1732,15 @@ be raised in the next release."""
try:
if force_ncstring: raise UnicodeError
dats_ascii = _to_ascii(dats) # try to encode bytes as ascii string
- ierr = nc_put_att_text(grp._grpid, varid, attname, lenarr, datstring)
+ with nogil:
+ ierr = nc_put_att_text(grpid, varid, attname, lenarr, datstring)
except UnicodeError:
- issue485_workaround(grp._grpid, varid, attname)
- ierr = nc_put_att_string(grp._grpid, varid, attname, 1, &datstring)
+ issue485_workaround(grpid, varid, attname)
+ with nogil:
+ ierr = nc_put_att_string(grpid, varid, attname, 1, &datstring)
else:
- ierr = nc_put_att_text(grp._grpid, varid, attname, lenarr, datstring)
+ with nogil:
+ ierr = nc_put_att_text(grpid, varid, attname, lenarr, datstring)
_ensure_nc_success(ierr, err_cls=AttributeError)
# a 'regular' array type ('f4','i4','f8' etc)
else:
@@ -1689,7 +1751,8 @@ be raised in the next release."""
elif xtype == -99: # if xtype is not passed in as kwarg.
xtype = _nptonctype[value_arr.dtype.str[1:]]
lenarr = PyArray_SIZE(value_arr)
- ierr = nc_put_att(grp._grpid, varid, attname, xtype, lenarr,
+ with nogil:
+ ierr = nc_put_att(grpid, varid, attname, xtype, lenarr,
PyArray_DATA(value_arr))
_ensure_nc_success(ierr, err_cls=AttributeError)
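At the Python level the branches above are exercised by ordinary attribute assignment on a Dataset, Group or Variable. A brief, hedged sketch (the file name and attribute names are assumptions):

    from netCDF4 import Dataset

    with Dataset("attrs.nc", "w") as nc:
        nc.title = "example"                    # text attribute  -> nc_put_att_text
        nc.history = ["step one", "step two"]   # string array    -> nc_put_att_string
        nc.setncattr_string("note", "forced NC_STRING attribute")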
@@ -2151,11 +2214,11 @@ strings.
**`info`**: MPI_Info object for parallel access. Default `None`, which
means MPI_INFO_NULL will be used. Ignored if `parallel=False`.
"""
- cdef int grpid, ierr, numgrps, numdims, numvars
+ cdef int grpid, ierr, numgrps, numdims, numvars,
cdef size_t initialsize
cdef char *path
cdef char namstring[NC_MAX_NAME+1]
- cdef int cmode
+ cdef int cmode, parmode
IF HAS_PARALLEL4_SUPPORT or HAS_PNETCDF_SUPPORT:
cdef MPI_Comm mpicomm
cdef MPI_Info mpiinfo
@@ -2202,7 +2265,7 @@ strings.
mpiinfo = info.ob_mpi
else:
mpiinfo = MPI_INFO_NULL
- cmode = NC_MPIIO | _cmode_dict[format]
+ parmode = NC_MPIIO | _cmode_dict[format]
self._inmemory = False
@@ -2217,7 +2280,8 @@ strings.
# kwarg is interpreted as advisory size.
IF HAS_NC_CREATE_MEM:
initialsize = <size_t>memory
- ierr = nc_create_mem(path, 0, initialsize, &grpid)
+ with nogil:
+ ierr = nc_create_mem(path, 0, initialsize, &grpid)
self._inmemory = True # checked in close method
ELSE:
msg = """
@@ -2228,33 +2292,45 @@ strings.
if clobber:
if parallel:
IF HAS_PARALLEL4_SUPPORT or HAS_PNETCDF_SUPPORT:
- ierr = nc_create_par(path, NC_CLOBBER | cmode, \
- mpicomm, mpiinfo, &grpid)
+ cmode = NC_CLOBBER | parmode
+ with nogil:
+ ierr = nc_create_par(path, cmode, \
+ mpicomm, mpiinfo, &grpid)
ELSE:
pass
elif diskless:
if persist:
- ierr = nc_create(path, NC_WRITE | NC_CLOBBER |
- NC_DISKLESS | NC_PERSIST, &grpid)
+ cmode = NC_WRITE | NC_CLOBBER | NC_DISKLESS | NC_PERSIST
+ with nogil:
+ ierr = nc_create(path, cmode, &grpid)
else:
- ierr = nc_create(path, NC_CLOBBER | NC_DISKLESS , &grpid)
+ cmode = NC_CLOBBER | NC_DISKLESS
+ with nogil:
+ ierr = nc_create(path, cmode , &grpid)
else:
- ierr = nc_create(path, NC_CLOBBER, &grpid)
+ with nogil:
+ ierr = nc_create(path, NC_CLOBBER, &grpid)
else:
if parallel:
IF HAS_PARALLEL4_SUPPORT or HAS_PNETCDF_SUPPORT:
- ierr = nc_create_par(path, NC_NOCLOBBER | cmode, \
- mpicomm, mpiinfo, &grpid)
+ cmode = NC_NOCLOBBER | parmode
+ with nogil:
+ ierr = nc_create_par(path, cmode, \
+ mpicomm, mpiinfo, &grpid)
ELSE:
pass
elif diskless:
if persist:
- ierr = nc_create(path, NC_WRITE | NC_NOCLOBBER |
- NC_DISKLESS | NC_PERSIST , &grpid)
+ cmode = NC_WRITE | NC_NOCLOBBER | NC_DISKLESS | NC_PERSIST
+ with nogil:
+ ierr = nc_create(path, cmode, &grpid)
else:
- ierr = nc_create(path, NC_NOCLOBBER | NC_DISKLESS , &grpid)
+ cmode = NC_NOCLOBBER | NC_DISKLESS
+ with nogil:
+ ierr = nc_create(path, cmode , &grpid)
else:
- ierr = nc_create(path, NC_NOCLOBBER, &grpid)
+ with nogil:
+ ierr = nc_create(path, NC_NOCLOBBER, &grpid)
# reset default format to netcdf3 - this is a workaround
# for issue 170 (nc_open'ing a DAP dataset after switching
# format to NETCDF4). This bug should be fixed in version
@@ -2270,7 +2346,8 @@ strings.
if result != 0:
raise ValueError("Unable to retrieve Buffer from %s" % (memory,))
- ierr = nc_open_mem(<char *>path, 0, self._buffer.len, <void *>self._buffer.buf, &grpid)
+ with nogil:
+ ierr = nc_open_mem(<char *>path, 0, self._buffer.len, <void *>self._buffer.buf, &grpid)
ELSE:
msg = """
nc_open_mem functionality not enabled. To enable, install Cython, make sure you have
@@ -2278,75 +2355,108 @@ strings.
raise ValueError(msg)
elif parallel:
IF HAS_PARALLEL4_SUPPORT or HAS_PNETCDF_SUPPORT:
- ierr = nc_open_par(path, NC_NOWRITE | NC_MPIIO, \
- mpicomm, mpiinfo, &grpid)
+ cmode = NC_NOWRITE | NC_MPIIO
+ with nogil:
+ ierr = nc_open_par(path, cmode, \
+ mpicomm, mpiinfo, &grpid)
ELSE:
pass
elif diskless:
- ierr = nc_open(path, NC_NOWRITE | NC_DISKLESS, &grpid)
+ cmode = NC_NOWRITE | NC_DISKLESS
+ with nogil:
+ ierr = nc_open(path, cmode, &grpid)
else:
if mode == 'rs':
# NC_SHARE is very important for speed reading
# large netcdf3 files with a record dimension
# (pull request #902).
- ierr = nc_open(path, NC_NOWRITE | NC_SHARE, &grpid)
+ cmode = NC_NOWRITE | NC_SHARE
+ with nogil:
+ ierr = nc_open(path, cmode, &grpid)
else:
- ierr = nc_open(path, NC_NOWRITE, &grpid)
+ with nogil:
+ ierr = nc_open(path, NC_NOWRITE, &grpid)
elif mode in ['a','r+'] and os.path.exists(filename):
if parallel:
IF HAS_PARALLEL4_SUPPORT or HAS_PNETCDF_SUPPORT:
- ierr = nc_open_par(path, NC_WRITE | NC_MPIIO, \
- mpicomm, mpiinfo, &grpid)
+ cmode = NC_WRITE | NC_MPIIO
+ with nogil:
+ ierr = nc_open_par(path, cmode, \
+ mpicomm, mpiinfo, &grpid)
ELSE:
pass
elif diskless:
- ierr = nc_open(path, NC_WRITE | NC_DISKLESS, &grpid)
+ cmode = NC_WRITE | NC_DISKLESS
+ with nogil:
+ ierr = nc_open(path, cmode, &grpid)
else:
- ierr = nc_open(path, NC_WRITE, &grpid)
+ with nogil:
+ ierr = nc_open(path, NC_WRITE, &grpid)
elif mode in ['as','r+s'] and os.path.exists(filename):
if parallel:
# NC_SHARE ignored
IF HAS_PARALLEL4_SUPPORT or HAS_PNETCDF_SUPPORT:
- ierr = nc_open_par(path, NC_WRITE | NC_MPIIO, \
- mpicomm, mpiinfo, &grpid)
+ cmode = NC_WRITE | NC_MPIIO
+ with nogil:
+ ierr = nc_open_par(path, cmode, \
+ mpicomm, mpiinfo, &grpid)
ELSE:
pass
elif diskless:
- ierr = nc_open(path, NC_SHARE | NC_DISKLESS, &grpid)
+ cmode = NC_SHARE | NC_DISKLESS
+ with nogil:
+ ierr = nc_open(path, cmode, &grpid)
else:
- ierr = nc_open(path, NC_SHARE, &grpid)
+ with nogil:
+ ierr = nc_open(path, NC_SHARE, &grpid)
elif mode == 'ws' or (mode in ['as','r+s'] and not os.path.exists(filename)):
_set_default_format(format=format)
if clobber:
if parallel:
# NC_SHARE ignored
IF HAS_PARALLEL4_SUPPORT or HAS_PNETCDF_SUPPORT:
- ierr = nc_create_par(path, NC_CLOBBER | cmode, \
- mpicomm, mpiinfo, &grpid)
+ cmode = NC_CLOBBER | parmode
+ with nogil:
+ ierr = nc_create_par(path, NC_CLOBBER | cmode, \
+ mpicomm, mpiinfo, &grpid)
ELSE:
pass
elif diskless:
if persist:
- ierr = nc_create(path, NC_WRITE | NC_SHARE | NC_CLOBBER | NC_DISKLESS , &grpid)
+ cmode = NC_WRITE | NC_SHARE | NC_CLOBBER | NC_DISKLESS
+ with nogil:
+ ierr = nc_create(path, cmode, &grpid)
else:
- ierr = nc_create(path, NC_SHARE | NC_CLOBBER | NC_DISKLESS , &grpid)
+ cmode = NC_SHARE | NC_CLOBBER | NC_DISKLESS
+ with nogil:
+ ierr = nc_create(path, cmode , &grpid)
else:
- ierr = nc_create(path, NC_SHARE | NC_CLOBBER, &grpid)
+ cmode = NC_SHARE | NC_CLOBBER
+ with nogil:
+ ierr = nc_create(path, cmode, &grpid)
else:
if parallel:
# NC_SHARE ignored
IF HAS_PARALLEL4_SUPPORT or HAS_PNETCDF_SUPPORT:
- ierr = nc_create_par(path, NC_NOCLOBBER | cmode, \
- mpicomm, mpiinfo, &grpid)
+ cmode = NC_NOCLOBBER | parmode
+ with nogil:
+ ierr = nc_create_par(path, cmode, \
+ mpicomm, mpiinfo, &grpid)
ELSE:
pass
elif diskless:
if persist:
- ierr = nc_create(path, NC_WRITE | NC_SHARE | NC_NOCLOBBER | NC_DISKLESS , &grpid)
+ cmode = NC_WRITE | NC_SHARE | NC_NOCLOBBER | NC_DISKLESS
+ with nogil:
+ ierr = nc_create(path, cmode , &grpid)
else:
- ierr = nc_create(path, NC_SHARE | NC_NOCLOBBER | NC_DISKLESS , &grpid)
+ cmode = NC_SHARE | NC_NOCLOBBER | NC_DISKLESS
+ with nogil:
+ ierr = nc_create(path, cmode , &grpid)
else:
- ierr = nc_create(path, NC_SHARE | NC_NOCLOBBER, &grpid)
+ cmode = NC_SHARE | NC_NOCLOBBER
+ with nogil:
+ ierr = nc_create(path, cmode, &grpid)
else:
raise ValueError("mode must be 'w', 'x', 'r', 'a' or 'r+', got '%s'" % mode)
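The large hunk above mostly hoists the cmode flag construction out of the C calls so that nc_create/nc_open can run inside `with nogil:` blocks; the diskless, persist and parallel code paths themselves behave as before. For reference, a small sketch of the diskless path (the file name is nominal, since nothing reaches disk when persist=False):

    from netCDF4 import Dataset
    import numpy as np

    nc = Dataset("scratch.nc", "w", diskless=True, persist=False)
    nc.createDimension("t", 10)
    v = nc.createVariable("v", "f8", ("t",))
    v[:] = np.arange(10.0)
    nc.close()   # the in-memory file is discarded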
@@ -2469,7 +2579,9 @@ version 4.1.2 or higher of the netcdf C lib, and rebuild netcdf4-python."""
return '\n'.join(ncdump)
def _close(self, check_err):
- cdef int ierr = nc_close(self._grpid)
+ cdef int ierr
+ with nogil:
+ ierr = nc_close(self._grpid)
if check_err:
_ensure_nc_success(ierr)
@@ -2485,7 +2597,8 @@ version 4.1.2 or higher of the netcdf C lib, and rebuild netcdf4-python."""
def _close_mem(self, check_err):
cdef int ierr
cdef NC_memio memio
- ierr = nc_close_memio(self._grpid, &memio)
+ with nogil:
+ ierr = nc_close_memio(self._grpid, &memio)
if check_err:
_ensure_nc_success(ierr)
@@ -2534,15 +2647,20 @@ Is the Dataset open or closed?
**`sync(self)`**
Writes all buffered data in the `Dataset` to the disk file."""
- _ensure_nc_success(nc_sync(self._grpid))
+ cdef int ierr
+ with nogil:
+ ierr = nc_sync(self._grpid)
+ _ensure_nc_success(ierr)
def _redef(self):
cdef int ierr
- ierr = nc_redef(self._grpid)
+ with nogil:
+ ierr = nc_redef(self._grpid)
def _enddef(self):
cdef int ierr
- ierr = nc_enddef(self._grpid)
+ with nogil:
+ ierr = nc_enddef(self._grpid)
def set_fill_on(self):
"""
@@ -2557,8 +2675,10 @@ separately for each variable type). The default behavior of the netCDF
library corresponds to `set_fill_on`. Data which are equal to the
`_Fill_Value` indicate that the variable was created, but never written
to."""
- cdef int oldmode
- _ensure_nc_success(nc_set_fill(self._grpid, NC_FILL, &oldmode))
+ cdef int oldmode, ierr
+ with nogil:
+ ierr = nc_set_fill(self._grpid, NC_FILL, &oldmode)
+ _ensure_nc_success(ierr)
def set_fill_off(self):
"""
@@ -2569,8 +2689,10 @@ Sets the fill mode for a `Dataset` open for writing to `off`.
This will prevent the data from being pre-filled with fill values, which
may result in some performance improvements. However, you must then make
sure the data is actually written before being read."""
- cdef int oldmode
- _ensure_nc_success(nc_set_fill(self._grpid, NC_NOFILL, &oldmode))
+ cdef int oldmode, ierr
+ with nogil:
+ ierr = nc_set_fill(self._grpid, NC_NOFILL, &oldmode)
+ _ensure_nc_success(ierr)
def createDimension(self, dimname, size=None):
"""
@@ -2594,6 +2716,7 @@ instance. To determine if a dimension is 'unlimited', use the
rename a `Dimension` named `oldname` to `newname`."""
cdef char *namstring
+ cdef Dimension dim
bytestr = _strencode(newname)
namstring = bytestr
if self.data_model != 'NETCDF4': self._redef()
@@ -2601,7 +2724,8 @@ rename a `Dimension` named `oldname` to `newname`."""
dim = self.dimensions[oldname]
except KeyError:
raise KeyError('%s not a valid dimension name' % oldname)
- ierr = nc_rename_dim(self._grpid, dim._dimid, namstring)
+ with nogil:
+ ierr = nc_rename_dim(self._grpid, dim._dimid, namstring)
if self.data_model != 'NETCDF4': self._enddef()
_ensure_nc_success(ierr)
@@ -2850,6 +2974,7 @@ is the number of variable dimensions."""
rename a `Variable` named `oldname` to `newname`"""
cdef char *namstring
+ cdef Variable var
try:
var = self.variables[oldname]
except KeyError:
@@ -2857,7 +2982,8 @@ rename a `Variable` named `oldname` to `newname`"""
bytestr = _strencode(newname)
namstring = bytestr
if self.data_model != 'NETCDF4': self._redef()
- ierr = nc_rename_var(self._grpid, var._varid, namstring)
+ with nogil:
+ ierr = nc_rename_var(self._grpid, var._varid, namstring)
if self.data_model != 'NETCDF4': self._enddef()
_ensure_nc_success(ierr)
@@ -2975,7 +3101,8 @@ attributes."""
bytestr = _strencode(name)
attname = bytestr
if self.data_model != 'NETCDF4': self._redef()
- ierr = nc_del_att(self._grpid, NC_GLOBAL, attname)
+ with nogil:
+ ierr = nc_del_att(self._grpid, NC_GLOBAL, attname)
if self.data_model != 'NETCDF4': self._enddef()
_ensure_nc_success(ierr)
@@ -3020,11 +3147,14 @@ attributes."""
rename a `Dataset` or `Group` attribute named `oldname` to `newname`."""
cdef char *oldnamec
cdef char *newnamec
+ cdef int ierr
bytestr = _strencode(oldname)
oldnamec = bytestr
bytestr = _strencode(newname)
newnamec = bytestr
- _ensure_nc_success(nc_rename_att(self._grpid, NC_GLOBAL, oldnamec, newnamec))
+ with nogil:
+ ierr = nc_rename_att(self._grpid, NC_GLOBAL, oldnamec, newnamec)
+ _ensure_nc_success(ierr)
def renameGroup(self, oldname, newname):
"""
@@ -3032,14 +3162,19 @@ rename a `Dataset` or `Group` attribute named `oldname` to `newname`."""
rename a `Group` named `oldname` to `newname` (requires netcdf >= 4.3.1)."""
cdef char *newnamec
+ cdef int grpid
IF HAS_RENAME_GRP:
+ cdef int ierr
bytestr = _strencode(newname)
newnamec = bytestr
try:
grp = self.groups[oldname]
+ grpid = grp._grpid
except KeyError:
raise KeyError('%s not a valid group name' % oldname)
- _ensure_nc_success(nc_rename_grp(grp._grpid, newnamec))
+ with nogil:
+ ierr = nc_rename_grp(grpid, newnamec)
+ _ensure_nc_success(ierr)
# remove old key from groups dict.
self.groups.pop(oldname)
# add new key.
@@ -3361,6 +3496,62 @@ to be installed and in `$PATH`.
f = open(outfile,'w')
f.write(result.stdout)
f.close()
+ def has_blosc_filter(self):
+ """
+**`has_blosc_filter(self)`**
+returns True if blosc compression filter is available"""
+ cdef int ierr
+ IF HAS_BLOSC_SUPPORT:
+ with nogil:
+ ierr = nc_inq_filter_avail(self._grpid, H5Z_FILTER_BLOSC)
+ if ierr:
+ return False
+ else:
+ return True
+ ELSE:
+ return False
+ def has_zstd_filter(self):
+ """
+**`has_zstd_filter(self)`**
+returns True if zstd compression filter is available"""
+ cdef int ierr
+ IF HAS_ZSTANDARD_SUPPORT:
+ with nogil:
+ ierr = nc_inq_filter_avail(self._grpid, H5Z_FILTER_ZSTD)
+ if ierr:
+ return False
+ else:
+ return True
+ ELSE:
+ return False
+ def has_bzip2_filter(self):
+ """
+**`has_bzip2_filter(self)`**
+returns True if bzip2 compression filter is available"""
+ cdef int ierr
+ IF HAS_BZIP2_SUPPORT:
+ with nogil:
+ ierr = nc_inq_filter_avail(self._grpid, H5Z_FILTER_BZIP2)
+ if ierr:
+ return False
+ else:
+ return True
+ ELSE:
+ return False
+ def has_szip_filter(self):
+ """
+**`has_szip_filter(self)`**
+returns True if szip compression filter is available"""
+ cdef int ierr
+ IF HAS_SZIP_SUPPORT:
+ with nogil:
+ ierr = nc_inq_filter_avail(self._grpid, H5Z_FILTER_SZIP)
+ if ierr:
+ return False
+ else:
+ return True
+ ELSE:
+ return False
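These helpers check at runtime whether the corresponding HDF5 filter plugin can actually be loaded, which is a stronger guarantee than the compile-time __has_*_support__ flags (run_all.py below combines both checks). A hedged usage sketch, with an assumed file name:

    from netCDF4 import Dataset

    with Dataset("output.nc", "w") as nc:
        nc.createDimension("x", 1000)
        if nc.has_zstd_filter():    # plugin present and loadable at runtime
            nc.createVariable("data", "f4", ("x",), compression="zstd", complevel=4)
        else:                       # fall back to the always-available zlib codec
            nc.createVariable("data", "f4", ("x",), compression="zlib", complevel=4)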
cdef class Group(Dataset):
"""
@@ -3393,6 +3584,7 @@ Additional read-only class variables:
another `Group` instance, not using this class directly.
"""
cdef char *groupname
+ cdef int ierr, grpid
# flag to indicate that Variables in this Group support orthogonal indexing.
self.__orthogonal_indexing__ = True
# set data_model and file_format attributes.
@@ -3419,7 +3611,10 @@ Additional read-only class variables:
else:
bytestr = _strencode(name)
groupname = bytestr
- _ensure_nc_success(nc_def_grp(parent._grpid, groupname, &self._grpid))
+ grpid = parent._grpid
+ with nogil:
+ ierr = nc_def_grp(grpid, groupname, &self._grpid)
+ _ensure_nc_success(ierr)
if sys.version_info[0:2] < (3, 7):
self.cmptypes = OrderedDict()
self.vltypes = OrderedDict()
@@ -3504,7 +3699,8 @@ Read-only class variables:
else:
lendim = NC_UNLIMITED
if grp.data_model != 'NETCDF4': grp._redef()
- ierr = nc_def_dim(self._grpid, dimname, lendim, &self._dimid)
+ with nogil:
+ ierr = nc_def_dim(self._grpid, dimname, lendim, &self._dimid)
if grp.data_model != 'NETCDF4': grp._enddef()
_ensure_nc_success(ierr)
@@ -3569,7 +3765,8 @@ returns `True` if the `Dimension` instance is unlimited, `False` otherwise."""
cdef int ierr, n, numunlimdims, ndims, nvars, ngatts, xdimid
cdef int *unlimdimids
if self._data_model == 'NETCDF4':
- ierr = nc_inq_unlimdims(self._grpid, &numunlimdims, NULL)
+ with nogil:
+ ierr = nc_inq_unlimdims(self._grpid, &numunlimdims, NULL)
_ensure_nc_success(ierr)
if numunlimdims == 0:
return False
@@ -3931,7 +4128,8 @@ behavior is similar to Fortran or Matlab, but different than numpy.
self._vltype = datatype
xtype = datatype._nc_type
# make sure this a valid user defined datatype defined in this Group
- ierr = nc_inq_type(self._grpid, xtype, namstring, NULL)
+ with nogil:
+ ierr = nc_inq_type(self._grpid, xtype, namstring, NULL)
_ensure_nc_success(ierr)
# dtype variable attribute is a numpy datatype object.
self.dtype = datatype.dtype
@@ -3961,24 +4159,28 @@ behavior is similar to Fortran or Matlab, but different than numpy.
if grp.data_model != 'NETCDF4': grp._redef()
# define variable.
if ndims:
- ierr = nc_def_var(self._grpid, varname, xtype, ndims,
- dimids, &self._varid)
+ with nogil:
+ ierr = nc_def_var(self._grpid, varname, xtype, ndims,
+ dimids, &self._varid)
free(dimids)
else: # a scalar variable.
- ierr = nc_def_var(self._grpid, varname, xtype, ndims,
- NULL, &self._varid)
+ with nogil:
+ ierr = nc_def_var(self._grpid, varname, xtype, ndims,
+ NULL, &self._varid)
# set chunk cache size if desired
# default is 1mb per var, can cause problems when many (1000's)
# of vars are created. This change only lasts as long as file is
# open.
if grp.data_model.startswith('NETCDF4') and chunk_cache is not None:
- ierr = nc_get_var_chunk_cache(self._grpid, self._varid, &sizep,
- &nelemsp, &preemptionp)
+ with nogil:
+ ierr = nc_get_var_chunk_cache(self._grpid, self._varid, &sizep,
+ &nelemsp, &preemptionp)
_ensure_nc_success(ierr)
# reset chunk cache size, leave other parameters unchanged.
sizep = chunk_cache
- ierr = nc_set_var_chunk_cache(self._grpid, self._varid, sizep,
- nelemsp, preemptionp)
+ with nogil:
+ ierr = nc_set_var_chunk_cache(self._grpid, self._varid, sizep,
+ nelemsp, preemptionp)
_ensure_nc_success(ierr)
if ierr != NC_NOERR:
if grp.data_model != 'NETCDF4': grp._enddef()
@@ -3995,9 +4197,11 @@ behavior is similar to Fortran or Matlab, but different than numpy.
if zlib:
icomplevel = complevel
if shuffle:
- ierr = nc_def_var_deflate(self._grpid, self._varid, 1, 1, icomplevel)
+ with nogil:
+ ierr = nc_def_var_deflate(self._grpid, self._varid, 1, 1, icomplevel)
else:
- ierr = nc_def_var_deflate(self._grpid, self._varid, 0, 1, icomplevel)
+ with nogil:
+ ierr = nc_def_var_deflate(self._grpid, self._varid, 0, 1, icomplevel)
if ierr != NC_NOERR:
if grp.data_model != 'NETCDF4': grp._enddef()
_ensure_nc_success(ierr)
@@ -4009,7 +4213,8 @@ behavior is similar to Fortran or Matlab, but different than numpy.
msg="unknown szip coding ('ec' or 'nn' supported)"
raise ValueError(msg)
iszip_pixels_per_block = szip_pixels_per_block
- ierr = nc_def_var_szip(self._grpid, self._varid, iszip_coding, iszip_pixels_per_block)
+ with nogil:
+ ierr = nc_def_var_szip(self._grpid, self._varid, iszip_coding, iszip_pixels_per_block)
if ierr != NC_NOERR:
if grp.data_model != 'NETCDF4': grp._enddef()
_ensure_nc_success(ierr)
@@ -4020,7 +4225,8 @@ compression='szip' only works if linked version of hdf5 has szip functionality e
if zstd:
IF HAS_ZSTANDARD_SUPPORT:
icomplevel = complevel
- ierr = nc_def_var_zstandard(self._grpid, self._varid, icomplevel)
+ with nogil:
+ ierr = nc_def_var_zstandard(self._grpid, self._varid, icomplevel)
if ierr != NC_NOERR:
if grp.data_model != 'NETCDF4': grp._enddef()
_ensure_nc_success(ierr)
@@ -4032,7 +4238,8 @@ version 4.9.0 or higher netcdf-c with zstandard support, and rebuild netcdf4-pyt
if bzip2:
IF HAS_BZIP2_SUPPORT:
icomplevel = complevel
- ierr = nc_def_var_bzip2(self._grpid, self._varid, icomplevel)
+ with nogil:
+ ierr = nc_def_var_bzip2(self._grpid, self._varid, icomplevel)
if ierr != NC_NOERR:
if grp.data_model != 'NETCDF4': grp._enddef()
_ensure_nc_success(ierr)
@@ -4047,7 +4254,8 @@ version 4.9.0 or higher netcdf-c with bzip2 support, and rebuild netcdf4-python.
iblosc_shuffle = blosc_shuffle
iblosc_blocksize = 0 # not currently used by c lib
iblosc_complevel = complevel
- ierr = nc_def_var_blosc(self._grpid, self._varid,\
+ with nogil:
+ ierr = nc_def_var_blosc(self._grpid, self._varid,\
iblosc_compressor,\
iblosc_complevel,iblosc_blocksize,\
iblosc_shuffle)
@@ -4061,7 +4269,8 @@ version 4.9.0 or higher netcdf-c with blosc support, and rebuild netcdf4-python.
raise ValueError(msg)
# set checksum.
if fletcher32 and ndims: # don't bother for scalar variable
- ierr = nc_def_var_fletcher32(self._grpid, self._varid, 1)
+ with nogil:
+ ierr = nc_def_var_fletcher32(self._grpid, self._varid, 1)
if ierr != NC_NOERR:
if grp.data_model != 'NETCDF4': grp._enddef()
_ensure_nc_success(ierr)
@@ -4087,16 +4296,19 @@ version 4.9.0 or higher netcdf-c with blosc support, and rebuild netcdf4-python.
raise ValueError(msg)
chunksizesp[n] = chunksizes[n]
if chunksizes is not None or contiguous:
- ierr = nc_def_var_chunking(self._grpid, self._varid, icontiguous, chunksizesp)
+ with nogil:
+ ierr = nc_def_var_chunking(self._grpid, self._varid, icontiguous, chunksizesp)
free(chunksizesp)
if ierr != NC_NOERR:
if grp.data_model != 'NETCDF4': grp._enddef()
_ensure_nc_success(ierr)
# set endian-ness of variable
if endian == 'little':
- ierr = nc_def_var_endian(self._grpid, self._varid, NC_ENDIAN_LITTLE)
+ with nogil:
+ ierr = nc_def_var_endian(self._grpid, self._varid, NC_ENDIAN_LITTLE)
elif endian == 'big':
- ierr = nc_def_var_endian(self._grpid, self._varid, NC_ENDIAN_BIG)
+ with nogil:
+ ierr = nc_def_var_endian(self._grpid, self._varid, NC_ENDIAN_BIG)
elif endian == 'native':
pass # this is the default format.
else:
@@ -4106,14 +4318,16 @@ version 4.9.0 or higher netcdf-c with blosc support, and rebuild netcdf4-python.
if significant_digits is not None:
nsd = significant_digits
if quantize_mode == 'BitGroom':
- ierr = nc_def_var_quantize(self._grpid,
- self._varid, NC_QUANTIZE_BITGROOM, nsd)
+ with nogil:
+ ierr = nc_def_var_quantize(self._grpid,
+ self._varid, NC_QUANTIZE_BITGROOM, nsd)
elif quantize_mode == 'GranularBitRound':
- ierr = nc_def_var_quantize(self._grpid,
- self._varid, NC_QUANTIZE_GRANULARBR, nsd)
+ with nogil:
+ ierr = nc_def_var_quantize(self._grpid,
+ self._varid, NC_QUANTIZE_GRANULARBR, nsd)
elif quantize_mode == 'BitRound':
ierr = nc_def_var_quantize(self._grpid,
- self._varid, NC_QUANTIZE_BITROUND, nsd)
+ self._varid, NC_QUANTIZE_BITROUND, nsd)
else:
raise ValueError("'quantize_mode' keyword argument must be 'BitGroom','GranularBitRound' or 'BitRound', got '%s'" % quantize_mode)
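For context, these quantize modes are driven by the significant_digits and quantize_mode keywords of createVariable. A brief sketch, assuming a netcdf-c 4.9.0+ build with quantization support and a hypothetical file name:

    from netCDF4 import Dataset
    import numpy as np

    with Dataset("quantized.nc", "w") as nc:
        nc.createDimension("x", 1000)
        v = nc.createVariable("data", "f4", ("x",),
                              compression="zlib", complevel=4,
                              significant_digits=3, quantize_mode="BitGroom")
        v[:] = np.random.uniform(size=1000)   # stored keeping ~3 significant digits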
@@ -4142,7 +4356,8 @@ kwarg for quantization."""
# anyway.
ierr = 0
else:
- ierr = nc_def_var_fill(self._grpid, self._varid, 1, NULL)
+ with nogil:
+ ierr = nc_def_var_fill(self._grpid, self._varid, 1, NULL)
if ierr != NC_NOERR:
if grp.data_model != 'NETCDF4': grp._enddef()
_ensure_nc_success(ierr)
@@ -4431,7 +4646,8 @@ attributes."""
bytestr = _strencode(name)
attname = bytestr
if self._grp.data_model != 'NETCDF4': self._grp._redef()
- ierr = nc_del_att(self._grpid, self._varid, attname)
+ with nogil:
+ ierr = nc_del_att(self._grpid, self._varid, attname)
if self._grp.data_model != 'NETCDF4': self._grp._enddef()
_ensure_nc_success(ierr)
@@ -4440,13 +4656,19 @@ attributes."""
**`filters(self)`**
return dictionary containing HDF5 filter parameters."""
- cdef int ierr,ideflate,ishuffle,icomplevel,icomplevel_zstd,icomplevel_bzip2,ifletcher32
+ cdef int ierr,ideflate,ishuffle,icomplevel,ifletcher32
cdef int izstd=0
cdef int ibzip2=0
cdef int iblosc=0
cdef int iszip=0
- cdef unsigned int iblosc_complevel,iblosc_blocksize,iblosc_compressor,iblosc_shuffle
- cdef int iszip_coding, iszip_pixels_per_block
+ cdef int iszip_coding=0
+ cdef int iszip_pixels_per_block=0
+ cdef int icomplevel_zstd=0
+ cdef int icomplevel_bzip2=0
+ cdef unsigned int iblosc_shuffle=0
+ cdef unsigned int iblosc_compressor=0
+ cdef unsigned int iblosc_blocksize=0
+ cdef unsigned int iblosc_complevel=0
filtdict = {'zlib':False,'szip':False,'zstd':False,'bzip2':False,'blosc':False,'shuffle':False,'complevel':0,'fletcher32':False}
if self._grp.data_model not in ['NETCDF4_CLASSIC','NETCDF4']: return
with nogil:
@@ -4456,23 +4678,27 @@ return dictionary containing HDF5 filter parameters."""
ierr = nc_inq_var_fletcher32(self._grpid, self._varid, &ifletcher32)
_ensure_nc_success(ierr)
IF HAS_ZSTANDARD_SUPPORT:
- ierr = nc_inq_var_zstandard(self._grpid, self._varid, &izstd,\
- &icomplevel_zstd)
+ with nogil:
+ ierr = nc_inq_var_zstandard(self._grpid, self._varid, &izstd,\
+ &icomplevel_zstd)
if ierr != 0: izstd=0
# _ensure_nc_success(ierr)
IF HAS_BZIP2_SUPPORT:
- ierr = nc_inq_var_bzip2(self._grpid, self._varid, &ibzip2,\
- &icomplevel_bzip2)
+ with nogil:
+ ierr = nc_inq_var_bzip2(self._grpid, self._varid, &ibzip2,\
+ &icomplevel_bzip2)
if ierr != 0: ibzip2=0
#_ensure_nc_success(ierr)
IF HAS_BLOSC_SUPPORT:
- ierr = nc_inq_var_blosc(self._grpid, self._varid, &iblosc,\
- &iblosc_compressor,&iblosc_complevel,&iblosc_blocksize,&iblosc_shuffle)
+ with nogil:
+ ierr = nc_inq_var_blosc(self._grpid, self._varid, &iblosc,\
+ &iblosc_compressor,&iblosc_complevel,&iblosc_blocksize,&iblosc_shuffle)
if ierr != 0: iblosc=0
#_ensure_nc_success(ierr)
IF HAS_SZIP_SUPPORT:
- ierr = nc_inq_var_szip(self._grpid, self._varid, &iszip_coding,\
- &iszip_pixels_per_block)
+ with nogil:
+ ierr = nc_inq_var_szip(self._grpid, self._varid, &iszip_coding,\
+ &iszip_pixels_per_block)
if ierr != 0:
iszip=0
else:
@@ -4618,8 +4844,9 @@ details."""
preemptionp = preemption
else:
preemptionp = preemption_orig
- ierr = nc_set_var_chunk_cache(self._grpid, self._varid, sizep,
- nelemsp, preemptionp)
+ with nogil:
+ ierr = nc_set_var_chunk_cache(self._grpid, self._varid, sizep,
+ nelemsp, preemptionp)
_ensure_nc_success(ierr)
def __delattr__(self,name):
@@ -4702,7 +4929,8 @@ rename a `Variable` attribute named `oldname` to `newname`."""
oldnamec = bytestr
bytestr = _strencode(newname)
newnamec = bytestr
- ierr = nc_rename_att(self._grpid, self._varid, oldnamec, newnamec)
+ with nogil:
+ ierr = nc_rename_att(self._grpid, self._varid, oldnamec, newnamec)
_ensure_nc_success(ierr)
def __getitem__(self, elem):
@@ -5106,8 +5334,9 @@ rename a `Variable` attribute named `oldname` to `newname`."""
encoding = getattr(self,'_Encoding','utf-8')
bytestr = _strencode(data,encoding=encoding)
strdata[0] = bytestr
- ierr = nc_put_vara(self._grpid, self._varid,
- startp, countp, strdata)
+ with nogil:
+ ierr = nc_put_vara(self._grpid, self._varid,
+ startp, countp, strdata)
_ensure_nc_success(ierr)
free(strdata)
else: # regular VLEN
@@ -5117,8 +5346,9 @@ rename a `Variable` attribute named `oldname` to `newname`."""
vldata = <nc_vlen_t *>malloc(sizeof(nc_vlen_t))
vldata[0].len = PyArray_SIZE(data2)
vldata[0].p = PyArray_DATA(data2)
- ierr = nc_put_vara(self._grpid, self._varid,
- startp, countp, vldata)
+ with nogil:
+ ierr = nc_put_vara(self._grpid, self._varid,
+ startp, countp, vldata)
_ensure_nc_success(ierr)
free(vldata)
free(startp)
@@ -5556,11 +5786,13 @@ NC_CHAR).
data = data.byteswap()
# strides all 1 or scalar variable, use put_vara (faster)
if sum(stride) == ndims or ndims == 0:
- ierr = nc_put_vara(self._grpid, self._varid,
- startp, countp, PyArray_DATA(data))
+ with nogil:
+ ierr = nc_put_vara(self._grpid, self._varid,
+ startp, countp, PyArray_DATA(data))
else:
- ierr = nc_put_vars(self._grpid, self._varid,
- startp, countp, stridep, PyArray_DATA(data))
+ with nogil:
+ ierr = nc_put_vars(self._grpid, self._varid,
+ startp, countp, stridep, PyArray_DATA(data))
_ensure_nc_success(ierr)
elif self._isvlen:
if data.dtype.char !='O':
@@ -5583,12 +5815,14 @@ NC_CHAR).
strdata[i] = data[i]
# strides all 1 or scalar variable, use put_vara (faster)
if sum(stride) == ndims or ndims == 0:
- ierr = nc_put_vara(self._grpid, self._varid,
- startp, countp, strdata)
+ with nogil:
+ ierr = nc_put_vara(self._grpid, self._varid,
+ startp, countp, strdata)
else:
raise IndexError('strides must all be 1 for string variables')
- #ierr = nc_put_vars(self._grpid, self._varid,
- # startp, countp, stridep, strdata)
+ #with nogil:
+ # ierr = nc_put_vars(self._grpid, self._varid,
+ # startp, countp, stridep, strdata)
_ensure_nc_success(ierr)
free(strdata)
else:
@@ -5610,12 +5844,14 @@ NC_CHAR).
databuff = databuff + PyArray_STRIDES(data)[0]
# strides all 1 or scalar variable, use put_vara (faster)
if sum(stride) == ndims or ndims == 0:
- ierr = nc_put_vara(self._grpid, self._varid,
- startp, countp, vldata)
+ with nogil:
+ ierr = nc_put_vara(self._grpid, self._varid,
+ startp, countp, vldata)
else:
raise IndexError('strides must all be 1 for vlen variables')
- #ierr = nc_put_vars(self._grpid, self._varid,
- # startp, countp, stridep, vldata)
+ #with nogil:
+ # ierr = nc_put_vars(self._grpid, self._varid,
+ # startp, countp, stridep, vldata)
_ensure_nc_success(ierr)
# free the pointer array.
free(vldata)
@@ -5625,7 +5861,7 @@ NC_CHAR).
def _get(self,start,count,stride):
"""Private method to retrieve data from a netCDF variable"""
- cdef int ierr, ndims
+ cdef int ierr, ndims, totelem
cdef size_t *startp
cdef size_t *countp
cdef ptrdiff_t *stridep
@@ -5704,8 +5940,9 @@ NC_CHAR).
else:
# FIXME: is this a bug in netCDF4?
raise IndexError('strides must all be 1 for string variables')
- #ierr = nc_get_vars(self._grpid, self._varid,
- # startp, countp, stridep, strdata)
+ #with nogil:
+ # ierr = nc_get_vars(self._grpid, self._varid,
+ # startp, countp, stridep, strdata)
if ierr == NC_EINVALCOORDS:
raise IndexError
elif ierr != NC_NOERR:
@@ -5723,7 +5960,8 @@ NC_CHAR).
# reshape the output array
data = numpy.reshape(data, shapeout)
# free string data internally allocated in netcdf C lib
- ierr = nc_free_string(totelem, strdata)
+ with nogil:
+ ierr = nc_free_string(totelem, strdata)
# free the pointer array
free(strdata)
else:
@@ -5740,8 +5978,9 @@ NC_CHAR).
startp, countp, vldata)
else:
raise IndexError('strides must all be 1 for vlen variables')
- #ierr = nc_get_vars(self._grpid, self._varid,
- # startp, countp, stridep, vldata)
+ #with nogil:
+ # ierr = nc_get_vars(self._grpid, self._varid,
+ # startp, countp, stridep, vldata)
if ierr == NC_EINVALCOORDS:
raise IndexError
elif ierr != NC_NOERR:
@@ -5757,7 +5996,8 @@ NC_CHAR).
# reshape the output array
data = numpy.reshape(data, shapeout)
# free vlen data internally allocated in netcdf C lib
- ierr = nc_free_vlens(totelem, vldata)
+ with nogil:
+ ierr = nc_free_vlens(totelem, vldata)
# free the pointer array
free(vldata)
free(startp)
@@ -5790,11 +6030,13 @@ open for parallel access.
IF HAS_PARALLEL4_SUPPORT or HAS_PNETCDF_SUPPORT:
# set collective MPI IO mode on or off
if value:
- ierr = nc_var_par_access(self._grpid, self._varid,
- NC_COLLECTIVE)
+ with nogil:
+ ierr = nc_var_par_access(self._grpid, self._varid,
+ NC_COLLECTIVE)
else:
- ierr = nc_var_par_access(self._grpid, self._varid,
- NC_INDEPENDENT)
+ with nogil:
+ ierr = nc_var_par_access(self._grpid, self._varid,
+ NC_INDEPENDENT)
_ensure_nc_success(ierr)
ELSE:
pass # does nothing
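set_collective toggles a variable between independent and collective MPI I/O. A hedged sketch of how it is used, assuming an MPI-enabled netcdf/hdf5 build and a script launched under mpirun (file name assumed):

    from mpi4py import MPI
    from netCDF4 import Dataset

    nc = Dataset("parallel.nc", "w", parallel=True, comm=MPI.COMM_WORLD)
    nc.createDimension("x", MPI.COMM_WORLD.size)
    v = nc.createVariable("v", "f8", ("x",))
    v.set_collective(True)          # switch from independent to collective I/O
    v[MPI.COMM_WORLD.rank] = MPI.COMM_WORLD.rank
    nc.close()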
@@ -5942,7 +6184,7 @@ cdef _def_compound(grp, object dt, object dtype_name):
# private function used to construct a netcdf compound data type
# from a numpy dtype object by CompoundType.__init__.
cdef nc_type xtype, xtype_tmp
- cdef int ierr, ndims
+ cdef int ierr, ndims, grpid
cdef size_t offset, size
cdef char *namstring
cdef char *nested_namstring
@@ -5950,7 +6192,9 @@ cdef _def_compound(grp, object dt, object dtype_name):
bytestr = _strencode(dtype_name)
namstring = bytestr
size = dt.itemsize
- ierr = nc_def_compound(grp._grpid, size, namstring, &xtype)
+ grpid = grp._grpid
+ with nogil:
+ ierr = nc_def_compound(grpid, size, namstring, &xtype)
_ensure_nc_success(ierr)
names = list(dt.fields.keys())
formats = [v[0] for v in dt.fields.values()]
@@ -5968,8 +6212,9 @@ cdef _def_compound(grp, object dt, object dtype_name):
xtype_tmp = _nptonctype[format.str[1:]]
except KeyError:
raise ValueError('Unsupported compound type element')
- ierr = nc_insert_compound(grp._grpid, xtype, namstring,
- offset, xtype_tmp)
+ with nogil:
+ ierr = nc_insert_compound(grpid, xtype, namstring,
+ offset, xtype_tmp)
_ensure_nc_success(ierr)
else:
if format.shape == (): # nested scalar compound type
@@ -5977,9 +6222,10 @@ cdef _def_compound(grp, object dt, object dtype_name):
xtype_tmp = _find_cmptype(grp, format)
bytestr = _strencode(name)
nested_namstring = bytestr
- ierr = nc_insert_compound(grp._grpid, xtype,\
- nested_namstring,\
- offset, xtype_tmp)
+ with nogil:
+ ierr = nc_insert_compound(grpid, xtype,\
+ nested_namstring,\
+ offset, xtype_tmp)
_ensure_nc_success(ierr)
else: # nested array compound element
ndims = len(format.shape)
@@ -5991,7 +6237,8 @@ cdef _def_compound(grp, object dt, object dtype_name):
xtype_tmp = _nptonctype[format.subdtype[0].str[1:]]
except KeyError:
raise ValueError('Unsupported compound type element')
- ierr = nc_insert_array_compound(grp._grpid,xtype,namstring,
+ with nogil:
+ ierr = nc_insert_array_compound(grpid,xtype,namstring,
offset,xtype_tmp,ndims,dim_sizes)
_ensure_nc_success(ierr)
else: # nested array compound type.
@@ -6002,10 +6249,11 @@ cdef _def_compound(grp, object dt, object dtype_name):
# xtype_tmp = _find_cmptype(grp, format.subdtype[0])
# bytestr = _strencode(name)
# nested_namstring = bytestr
- # ierr = nc_insert_array_compound(grp._grpid,xtype,\
- # nested_namstring,\
- # offset,xtype_tmp,\
- # ndims,dim_sizes)
+ # with nogil:
+ # ierr = nc_insert_array_compound(grpid,xtype,\
+ # nested_namstring,\
+ # offset,xtype_tmp,\
+ # ndims,dim_sizes)
# _ensure_nc_success(ierr)
free(dim_sizes)
return xtype
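_def_compound is driven by Dataset.createCompoundType. A short sketch of that public path, with assumed names:

    import numpy as np
    from netCDF4 import Dataset

    station_dt = np.dtype([("lat", "f4"), ("lon", "f4"), ("elev", "f4")])
    with Dataset("stations.nc", "w") as nc:
        station_t = nc.createCompoundType(station_dt, "station_t")
        nc.createDimension("n", 3)
        v = nc.createVariable("stations", station_t, ("n",))
        data = np.zeros(3, station_dt)
        data["lat"] = [52.0, 51.4, 50.8]
        v[:] = data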
@@ -6179,10 +6427,11 @@ cdef _def_vlen(grp, object dt, object dtype_name):
# private function used to construct a netcdf VLEN data type
# from a numpy dtype object or python str object by VLType.__init__.
cdef nc_type xtype, xtype_tmp
- cdef int ierr, ndims
+ cdef int ierr, ndims, grpid
cdef size_t offset, size
cdef char *namstring
cdef char *nested_namstring
+ grpid = grp._grpid
if dt == str: # python string, use NC_STRING
xtype = NC_STRING
# dtype_name ignored
@@ -6194,7 +6443,8 @@ cdef _def_vlen(grp, object dt, object dtype_name):
# find netCDF primitive data type corresponding to
# specified numpy data type.
xtype_tmp = _nptonctype[dt.str[1:]]
- ierr = nc_def_vlen(grp._grpid, namstring, xtype_tmp, &xtype);
+ with nogil:
+ ierr = nc_def_vlen(grpid, namstring, xtype_tmp, &xtype);
_ensure_nc_success(ierr)
else:
raise KeyError("unsupported datatype specified for VLEN")
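_def_vlen sits behind Dataset.createVLType; a minimal sketch of that path (names assumed):

    import numpy as np
    from netCDF4 import Dataset

    with Dataset("ragged.nc", "w") as nc:
        vltype = nc.createVLType(np.int32, "ragged_i4")
        nc.createDimension("row", 2)
        v = nc.createVariable("ragged", vltype, ("row",))
        v[0] = np.array([1, 2, 3], dtype=np.int32)   # rows may differ in length
        v[1] = np.array([4], dtype=np.int32)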
@@ -6205,17 +6455,17 @@ cdef _read_vlen(group, nc_type xtype, endian=None):
# construct a corresponding numpy dtype instance,
# then use that to create a VLType instance.
# called by _get_types, _get_vars.
- cdef int ierr, _grpid
+ cdef int ierr, grpid
cdef size_t vlsize
cdef nc_type base_xtype
cdef char vl_namstring[NC_MAX_NAME+1]
- _grpid = group._grpid
+ grpid = group._grpid
if xtype == NC_STRING:
dt = str
name = None
else:
with nogil:
- ierr = nc_inq_vlen(_grpid, xtype, vl_namstring, &vlsize, &base_xtype)
+ ierr = nc_inq_vlen(grpid, xtype, vl_namstring, &vlsize, &base_xtype)
_ensure_nc_success(ierr)
name = vl_namstring.decode('utf-8')
try:
@@ -6286,17 +6536,19 @@ cdef _def_enum(grp, object dt, object dtype_name, object enum_dict):
# private function used to construct a netCDF Enum data type
# from a numpy dtype object or python str object by EnumType.__init__.
cdef nc_type xtype, xtype_tmp
- cdef int ierr
+ cdef int ierr, grpid
cdef char *namstring
cdef ndarray value_arr
bytestr = _strencode(dtype_name)
namstring = bytestr
+ grpid = grp._grpid
dt = numpy.dtype(dt) # convert to numpy datatype.
if dt.str[1:] in _intnptonctype.keys():
# find netCDF primitive data type corresponding to
# specified numpy data type.
xtype_tmp = _intnptonctype[dt.str[1:]]
- ierr = nc_def_enum(grp._grpid, xtype_tmp, namstring, &xtype);
+ with nogil:
+ ierr = nc_def_enum(grpid, xtype_tmp, namstring, &xtype)
_ensure_nc_success(ierr)
else:
msg="unsupported datatype specified for ENUM (must be integer)"
@@ -6306,8 +6558,9 @@ cdef _def_enum(grp, object dt, object dtype_name, object enum_dict):
value_arr = numpy.array(enum_dict[field],dt)
bytestr = _strencode(field)
namstring = bytestr
- ierr = nc_insert_enum(grp._grpid, xtype, namstring,
- PyArray_DATA(value_arr))
+ with nogil:
+ ierr = nc_insert_enum(grpid, xtype, namstring,
+ PyArray_DATA(value_arr))
_ensure_nc_success(ierr)
return xtype, dt
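Similarly, _def_enum backs Dataset.createEnumType; a brief sketch (names assumed, and the fill value must itself be a member of the enum):

    import numpy as np
    from netCDF4 import Dataset

    with Dataset("clouds.nc", "w") as nc:
        cloud_t = nc.createEnumType(np.uint8, "cloud_t",
                                    {"clear": 0, "cumulus": 1, "stratus": 2})
        nc.createDimension("t", 4)
        v = nc.createVariable("cloud", cloud_t, ("t",), fill_value=0)
        v[:] = [0, 1, 2, 1]                # only enum members may be written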
@@ -6316,15 +6569,15 @@ cdef _read_enum(group, nc_type xtype, endian=None):
# construct a corresponding numpy dtype instance,
# then use that to create a EnumType instance.
# called by _get_types, _get_vars.
- cdef int ierr, _grpid, nmem
+ cdef int ierr, grpid, nmem
cdef ndarray enum_val
cdef nc_type base_xtype
cdef char enum_namstring[NC_MAX_NAME+1]
cdef size_t nmembers
- _grpid = group._grpid
+ grpid = group._grpid
# get name, datatype, and number of members.
with nogil:
- ierr = nc_inq_enum(_grpid, xtype, enum_namstring, &base_xtype, NULL,\
+ ierr = nc_inq_enum(grpid, xtype, enum_namstring, &base_xtype, NULL,\
&nmembers)
_ensure_nc_success(ierr)
enum_name = enum_namstring.decode('utf-8')
@@ -6339,7 +6592,7 @@ cdef _read_enum(group, nc_type xtype, endian=None):
enum_val = numpy.empty(1,dt)
for nmem from 0 <= nmem < nmembers:
with nogil:
- ierr = nc_inq_enum_member(_grpid, xtype, nmem, \
+ ierr = nc_inq_enum_member(grpid, xtype, nmem, \
enum_namstring,PyArray_DATA(enum_val))
_ensure_nc_success(ierr)
name = enum_namstring.decode('utf-8')
=====================================
test/run_all.py
=====================================
@@ -1,5 +1,5 @@
-import glob, os, sys, unittest, struct
-from netCDF4 import getlibversion,__hdf5libversion__,__netcdf4libversion__,__version__
+import glob, os, sys, unittest, struct, tempfile
+from netCDF4 import getlibversion,__hdf5libversion__,__netcdf4libversion__,__version__, Dataset
from netCDF4 import __has_cdf5_format__, __has_nc_inq_path__, __has_nc_create_mem__, \
__has_parallel4_support__, __has_pnetcdf_support__, \
__has_zstandard_support__, __has_bzip2_support__, \
@@ -26,18 +26,22 @@ if not __has_cdf5_format__ or struct.calcsize("P") < 8:
if not __has_quantization_support__:
test_files.remove('tst_compression_quant.py')
sys.stdout.write('not running tst_compression_quant.py ...\n')
-if not __has_zstandard_support__ or os.getenv('NO_PLUGINS'):
+filename = tempfile.NamedTemporaryFile(suffix='.nc', delete=False).name
+nc = Dataset(filename,'w')
+if not __has_zstandard_support__ or os.getenv('NO_PLUGINS') or not nc.has_zstd_filter():
test_files.remove('tst_compression_zstd.py')
sys.stdout.write('not running tst_compression_zstd.py ...\n')
-if not __has_bzip2_support__ or os.getenv('NO_PLUGINS'):
+if not __has_bzip2_support__ or os.getenv('NO_PLUGINS') or not nc.has_bzip2_filter():
test_files.remove('tst_compression_bzip2.py')
sys.stdout.write('not running tst_compression_bzip2.py ...\n')
-if not __has_blosc_support__ or os.getenv('NO_PLUGINS'):
+if not __has_blosc_support__ or os.getenv('NO_PLUGINS') or not nc.has_blosc_filter():
test_files.remove('tst_compression_blosc.py')
sys.stdout.write('not running tst_compression_blosc.py ...\n')
-if not __has_szip_support__:
+if not __has_szip_support__ or not nc.has_szip_filter():
test_files.remove('tst_compression_szip.py')
sys.stdout.write('not running tst_compression_szip.py ...\n')
+nc.close()
+os.remove(filename)
# Don't run tests that require network connectivity
if os.getenv('NO_NET'):
=====================================
test/tst_alignment.py
=====================================
@@ -0,0 +1,156 @@
+import numpy as np
+from netCDF4 import set_alignment, get_alignment, Dataset
+import netCDF4
+import os
+import subprocess
+import tempfile
+import unittest
+
+# During testing, sometimes development versions are used.
+# They may be written as 4.9.1-development
+libversion_no_development = netCDF4.__netcdf4libversion__.split('-')[0]
+libversion = tuple(int(v) for v in libversion_no_development.split('.'))
+has_alignment = (libversion[0] > 4) or (
+ libversion[0] == 4 and (libversion[1] >= 9)
+)
+try:
+ has_h5ls = subprocess.check_call(['h5ls', '--version'], stdout=subprocess.PIPE) == 0
+except Exception:
+ has_h5ls = False
+
+file_name = tempfile.NamedTemporaryFile(suffix='.nc', delete=False).name
+
+
+class AlignmentTestCase(unittest.TestCase):
+ def setUp(self):
+ self.file = file_name
+
+        # This is a global variable in netcdf4; it must be set before File
+ # creation
+ if has_alignment:
+ set_alignment(1024, 4096)
+ assert get_alignment() == (1024, 4096)
+
+ f = Dataset(self.file, 'w')
+ f.createDimension('x', 4096)
+ # Create many datasets so that we decrease the chance of
+ # the dataset being randomly aligned
+ for i in range(10):
+ f.createVariable(f'data{i:02d}', np.float64, ('x',))
+ v = f.variables[f'data{i:02d}']
+ v[...] = 0
+ f.close()
+ if has_alignment:
+            # be sure to reset the alignment to 1 (the default) so as not to
+ # disrupt other tests
+ set_alignment(1, 1)
+ assert get_alignment() == (1, 1)
+
+ def test_version_settings(self):
+ if has_alignment:
+ # One should always be able to set the alignment to 1, 1
+ set_alignment(1, 1)
+ assert get_alignment() == (1, 1)
+ else:
+ with self.assertRaises(RuntimeError):
+ set_alignment(1, 1)
+ with self.assertRaises(RuntimeError):
+ get_alignment()
+
+ # if we have no support for alignment, we have no guarantees on
+ # how the data can be aligned
+ @unittest.skipIf(
+ not has_h5ls,
+ "h5ls not found."
+ )
+ @unittest.skipIf(
+ not has_alignment,
+ "No support for set_alignment in libnetcdf."
+ )
+ def test_setting_alignment(self):
+ # We choose to use h5ls instead of h5py since h5ls is very likely
+ # to be installed alongside the rest of the tooling required to build
+ # netcdf4-python
+ # Output from h5ls is expected to look like:
+ """
+Opened "/tmp/tmpqexgozg1.nc" with sec2 driver.
+data00 Dataset {4096/4096}
+ Attribute: DIMENSION_LIST {1}
+ Type: variable length of
+ object reference
+ Attribute: _Netcdf4Coordinates {1}
+ Type: 32-bit little-endian integer
+ Location: 1:563
+ Links: 1
+ Storage: 32768 logical bytes, 32768 allocated bytes, 100.00% utilization
+ Type: IEEE 64-bit little-endian float
+ Address: 8192
+data01 Dataset {4096/4096}
+ Attribute: DIMENSION_LIST {1}
+ Type: variable length of
+ object reference
+ Attribute: _Netcdf4Coordinates {1}
+ Type: 32-bit little-endian integer
+ Location: 1:1087
+ Links: 1
+ Storage: 32768 logical bytes, 32768 allocated bytes, 100.00% utilization
+ Type: IEEE 64-bit little-endian float
+ Address: 40960
+[...]
+x Dataset {4096/4096}
+ Attribute: CLASS scalar
+ Type: 16-byte null-terminated ASCII string
+ Attribute: NAME scalar
+ Type: 64-byte null-terminated ASCII string
+ Attribute: REFERENCE_LIST {10}
+ Type: struct {
+ "dataset" +0 object reference
+ "dimension" +8 32-bit little-endian unsigned integer
+ } 16 bytes
+ Attribute: _Netcdf4Dimid scalar
+ Type: 32-bit little-endian integer
+ Location: 1:239
+ Links: 1
+ Storage: 16384 logical bytes, 0 allocated bytes
+ Type: IEEE 32-bit big-endian float
+ Address: 18446744073709551615
+"""
+ h5ls_results = subprocess.check_output(
+ ["h5ls", "--verbose", "--address", "--simple", self.file]
+ ).decode()
+
+ addresses = {
+ f'data{i:02d}': -1
+ for i in range(10)
+ }
+
+ data_variable = None
+ for line in h5ls_results.split('\n'):
+ if not line.startswith(' '):
+ data_variable = line.split(' ')[0]
+            # only process the data variables we care to inspect
+ if data_variable not in addresses:
+ continue
+ line = line.strip()
+ if line.startswith('Address:'):
+ address = int(line.split(':')[1].strip())
+ addresses[data_variable] = address
+
+ for key, address in addresses.items():
+ is_aligned = (address % 4096) == 0
+ assert is_aligned, f"{key} is not aligned. Address = 0x{address:x}"
+
+ # Alternative implementation in h5py
+ # import h5py
+ # with h5py.File(self.file, 'r') as h5file:
+ # for i in range(10):
+ # v = h5file[f'data{i:02d}']
+    #         assert (v.id.get_offset() % 4096) == 0
+
+ def tearDown(self):
+ # Remove the temporary files
+ os.remove(self.file)
+
+
+if __name__ == '__main__':
+ unittest.main()
=====================================
test/tst_compression_blosc.py
=====================================
@@ -1,7 +1,7 @@
from numpy.random.mtrand import uniform
from netCDF4 import Dataset
from numpy.testing import assert_almost_equal
-import os, tempfile, unittest
+import os, tempfile, unittest, sys
ndim = 100000
iblosc_shuffle=2
@@ -74,4 +74,9 @@ class CompressionTestCase(unittest.TestCase):
f.close()
if __name__ == '__main__':
- unittest.main()
+ nc = Dataset(filename,'w')
+ if not nc.has_blosc_filter():
+ sys.stdout.write('blosc filter not available, skipping tests ...\n')
+ else:
+ nc.close()
+ unittest.main()
=====================================
test/tst_compression_bzip2.py
=====================================
@@ -1,7 +1,7 @@
from numpy.random.mtrand import uniform
from netCDF4 import Dataset
from numpy.testing import assert_almost_equal
-import os, tempfile, unittest
+import os, tempfile, unittest, sys
ndim = 100000
filename1 = tempfile.NamedTemporaryFile(suffix='.nc', delete=False).name
@@ -49,4 +49,9 @@ class CompressionTestCase(unittest.TestCase):
f.close()
if __name__ == '__main__':
- unittest.main()
+ nc = Dataset(filename1,'w')
+ if not nc.has_bzip2_filter():
+ sys.stdout.write('bzip2 filter not available, skipping tests ...\n')
+ else:
+ nc.close()
+ unittest.main()
=====================================
test/tst_compression_szip.py
=====================================
@@ -1,7 +1,7 @@
from numpy.random.mtrand import uniform
from netCDF4 import Dataset
from numpy.testing import assert_almost_equal
-import os, tempfile, unittest
+import os, tempfile, unittest, sys
ndim = 100000
filename = tempfile.NamedTemporaryFile(suffix='.nc', delete=False).name
@@ -39,4 +39,9 @@ class CompressionTestCase(unittest.TestCase):
f.close()
if __name__ == '__main__':
- unittest.main()
+ nc = Dataset(filename,'w')
+ if not nc.has_szip_filter():
+ sys.stdout.write('szip filter not available, skipping tests ...\n')
+ else:
+ nc.close()
+ unittest.main()
=====================================
test/tst_compression_zstd.py
=====================================
@@ -1,7 +1,7 @@
from numpy.random.mtrand import uniform
from netCDF4 import Dataset
from numpy.testing import assert_almost_equal
-import os, tempfile, unittest
+import os, tempfile, unittest, sys
ndim = 100000
filename1 = tempfile.NamedTemporaryFile(suffix='.nc', delete=False).name
@@ -49,4 +49,9 @@ class CompressionTestCase(unittest.TestCase):
f.close()
if __name__ == '__main__':
- unittest.main()
+ nc = Dataset(filename1,'w')
+ if not nc.has_zstd_filter():
+ sys.stdout.write('zstd filter not available, skipping tests ...\n')
+ else:
+ nc.close()
+ unittest.main()
View it on GitLab: https://salsa.debian.org/debian-gis-team/netcdf4-python/-/compare/8a3e53aa29b2d083cc32c24fa1efa0d1ea979cc6...f6584b732600f53f7d120d55fb3245f6b8fc4a63