[netcdf4-python] 01/05: New upstream version 1.3.0

Bas Couwenberg sebastic at debian.org
Mon Sep 25 07:18:49 UTC 2017


This is an automated email from the git hooks/post-receive script.

sebastic pushed a commit to branch master
in repository netcdf4-python.

commit d36f4b43d3218ab873cc8421d3f3d9d3b3588f5f
Author: Bas Couwenberg <sebastic at xs4all.nl>
Date:   Mon Sep 25 08:14:42 2017 +0200

    New upstream version 1.3.0
---
 .travis.yml                    |    2 +-
 Changelog                      |   34 +
 MANIFEST.in                    |    1 -
 PKG-INFO                       |    2 +-
 README.md                      |   15 +-
 README.release                 |    5 +-
 conda.recipe/bld.bat           |    1 -
 conda.recipe/build.sh          |    1 -
 docs/netCDF4/index.html        |   60 +-
 examples/reading_netCDF.ipynb  | 2421 ++++++++++++++++++++--------------------
 examples/writing_netCDF.ipynb  | 2256 +++++++++++++++++++------------------
 include/netCDF4.pxi            |   30 +-
 netCDF4/_netCDF4.pyx           |  425 +++----
 netCDF4/utils.py               |   64 +-
 netcdftime/_netcdftime.pyx     |   48 +-
 setup.cfg                      |    2 -
 setup.cfg.template             |   50 -
 setup.py                       |  397 ++++---
 test/tst_compound_alignment.py |    8 +-
 test/tst_compoundvar.py        |   28 +-
 test/tst_filepath.py           |   14 +-
 test/tst_netcdftime.py         |   69 +-
 test/tst_types.py              |    7 +
 test/tst_utils.py              |   93 +-
 24 files changed, 3166 insertions(+), 2867 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index e064866..05c4e5d 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -26,7 +26,7 @@ matrix:
     # Absolute minimum dependencies.
     - python: 2.7
       env:
-        - DEPENDS="numpy==1.7.0 cython==0.19 ordereddict==1.1 setuptools==18.0"
+        - DEPENDS="numpy==1.9.0 cython==0.19 ordereddict==1.1 setuptools==18.0"
 
 notifications:
   email: false
diff --git a/Changelog b/Changelog
index 846c012..c2bf06d 100644
--- a/Changelog
+++ b/Changelog
@@ -1,3 +1,37 @@
+ version 1.3.0 (tag v1.3.0rel)
+==============================
+ * always search for HDF5 headers when building, even when nc-config is used 
+   (since nc-config does not always include the path to the HDF5 headers).
+   Also use H5get_libversion to obtain HDF5 version info instead of
+   H5public.h. Fixes issue #677.
+ * encoding kwarg added to Dataset.__init__ and Dataset.filepath (default
+   is to use sys.getfilesystemencoding()) so that oddball
+   encodings (such as cp1252 on windows) can be handled in Dataset 
+   filepaths (issue #686).
+ * Calls to nc_get_vars are avoided, since nc_get_vars is very slow (issue
+   #680).  Strided slices are now converted to multiple calls to
+   nc_get_vara.  This speeds up strided slice reads by a factor of 10-100
+   (especially for NETCDF4/HDF5 files) in most cases. In some cases, strided reads
+   using nc_get_vars are faster (e.g. strided reads over many dimensions
+   such as var[:,::2,::2,::2]), so a variable method use_nc_get_vars was added.
+   var.use_nc_get_vars(True) will tell the library to use nc_get_vars instead
+   of multiple calls to nc_get_vara, which was the default behaviour prior
+   to this change.
+ * fix utc offset time zone conversion in netcdftime - it was being done
+   exactly backwards (issue #685 - thanks to @pgamez and @mdecker).
+ * Fix error message for illegal ellipsis slicing, add test (issue #701).
+ * Improve timezone format parsing in netcdftime
+   (https://github.com/Unidata/netcdftime/issues/17).  
+ * make sure numpy datatypes used to define CompoundTypes have
+   isalignedstruct flag set to True (issue #705), otherwise
+   segfaults can occur. Fix required raising the minimum numpy requirement
+   from 1.7.0 to 1.9.0.
+ * ignore missing_value, _FillValue, valid_range, valid_min and valid_max
+   when creating masked arrays if the attribute cannot be safely
+   cast to the variable data type (and issue a warning).  When setting
+   these attributes, don't cast to the variable dtype unless it can
+   be done safely, and issue a warning otherwise. Issue #707.
+
  version 1.2.9 (tag v1.2.9rel)
 ==============================
  * Fix for auto scaling and masking when _Unsigned attribute set (create
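The nc_get_vara decomposition described in the 1.3.0 entry above can be sketched in pure Python. A numpy array stands in for the on-disk variable, and `strided_read_via_vara` is an illustrative name, not the library's internal routine:

```python
import numpy as np

def strided_read_via_vara(var, dim, start, count, stride):
    """Emulate a strided read (nc_get_vars) along axis `dim` with one
    contiguous read (nc_get_vara) per requested index."""
    shape = list(var.shape)
    shape[dim] = count
    out = np.empty(shape, dtype=var.dtype)
    for i in range(count):
        src = [slice(None)] * var.ndim
        src[dim] = slice(start + i * stride, start + i * stride + 1)
        dst = [slice(None)] * var.ndim
        dst[dim] = slice(i, i + 1)
        out[tuple(dst)] = var[tuple(src)]  # one contiguous hyperslab read
    return out

var = np.arange(24).reshape(4, 6)  # stand-in for a file variable
assert np.array_equal(strided_read_via_vara(var, 1, 0, 3, 2), var[:, 0:6:2])
```

With the real library, `var.use_nc_get_vars(True)` restores the single-call nc_get_vars behaviour for the cases (such as `var[:,::2,::2,::2]`) where it is faster.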
diff --git a/MANIFEST.in b/MANIFEST.in
index b699e50..197ac93 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -8,7 +8,6 @@ include Changelog
 include appveyor.yml
 include .travis.yml
 include setup.cfg
-include setup.cfg.template
 include examples/*py
 include examples/*ipynb
 include examples/README.md
diff --git a/PKG-INFO b/PKG-INFO
index 06e5b8e..44f8c64 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: netCDF4
-Version: 1.2.8
+Version: 1.3.0
 Author: Jeff Whitaker
 Author-email: jeffrey s whitaker at noaa gov
 Home-page: https://github.com/Unidata/netcdf4-python
diff --git a/README.md b/README.md
index 63bfce3..c0294db 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,17 @@
 [![PyPI package](https://badge.fury.io/py/netCDF4.svg)](http://python.org/pypi/netCDF4)
 
 ## News
-For the latest updates, see the [Changelog](https://github.com/Unidata/netcdf4-python/blob/master/Changelog).
+For details on the latest updates, see the [Changelog](https://github.com/Unidata/netcdf4-python/blob/master/Changelog).
+
+9/25/2017: Version [1.3.0](https://pypi.python.org/pypi/netCDF4/1.3.0) released. Bug fixes
+for `netcdftime` and optimizations for reading strided slices. `encoding` kwarg added to 
+`Dataset.__init__` and `Dataset.filepath` to deal with oddball encodings in filename
+paths (`sys.getfilesystemencoding()` is used by default to determine encoding).
+Make sure numpy datatypes used to define CompoundTypes have the `isalignedstruct` flag set
+to avoid segfaults, which required bumping the minimum numpy version from 1.7.0
+to 1.9.0. In cases where `missing_value/valid_min/valid_max/_FillValue` cannot be
+safely cast to the variable's dtype, they are no longer used to automatically
+mask the data and a warning message is issued.
 
 6/10/2017: Version [1.2.9](https://pypi.python.org/pypi/netCDF4/1.2.9) released. Fixes for auto-scaling
 and masking when `_Unsigned` and/or `valid_min`, `valid_max` attributes present.  setup.py updated
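The "cannot be safely cast" rule from the 1.3.0 news entry above can be expressed with numpy's casting check. The helper name is illustrative, not the package's actual code, and the choice of `casting="safe"` is an assumption about what "safely" means here:

```python
import numpy as np

def attribute_is_usable(attval, var_dtype):
    # Sketch: an attribute such as _FillValue is only applied for masking
    # if its dtype can be cast to the variable's dtype without loss.
    return np.can_cast(np.asarray(attval).dtype, var_dtype, casting="safe")

# a float64 fill value cannot be safely cast to a float32 variable
print(attribute_is_usable(np.float64(1.26765e+30), np.dtype("f4")))  # False
print(attribute_is_usable(np.float32(9.96921e+36), np.dtype("f4")))  # True
```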
@@ -122,8 +132,7 @@ instead of relying exclusively on the nc-config utility.
   installed and you have [Python](https://www.python.org) 2.7 or newer.
 
 * Make sure [HDF5](http://www.h5py.org/) and netcdf-4 are installed, and the `nc-config` utility
-  is in your Unix PATH. If `setup.cfg` does not exist, copy `setup.cfg.template`
-  to `setup.cfg`, and make sure the line with `use_ncconfig=True` is un-commented.
+  is in your Unix PATH. 
 
 * Run `python setup.py build`, then `python setup.py install` (with `sudo` if necessary).
 
diff --git a/README.release b/README.release
index f03a8fa..82f4508 100644
--- a/README.release
+++ b/README.release
@@ -13,10 +13,11 @@
    git tag -a vX.Y.Zrel -m "version X.Y.Z release"
    git push origin --tags
 * push an empty commit to the netcdf4-python-wheels repo to trigger new builds.
+  (e.g. git commit --allow-empty -m "Trigger build")
   You will likely want to edit the .travis.yml file at 
   https://github.com/MacPython/netcdf4-python-wheels to specify the BUILD_COMMIT before triggering a build.
-* update the pypi entry, upload the macosx wheels and the windows wheels
-  from Christoph Gohkle's site.  Lastly, create a source tarball using
+* update the pypi entry, upload the wheels from wheels.scipy.org.
+  Lastly, create a source tarball using
   'python setup.py sdist' and upload to pypi.
 * update web docs by copying docs/netCDF4/index.html somewhere, switch
   to the gh-pages branch, copy the index.html file back, commit and push
diff --git a/conda.recipe/bld.bat b/conda.recipe/bld.bat
index e557907..a022b9e 100644
--- a/conda.recipe/bld.bat
+++ b/conda.recipe/bld.bat
@@ -1,7 +1,6 @@
 set SITECFG=%SRC_DIR%/setup.cfg
 
 echo [options] > %SITECFG%
-echo use_cython=True >> %SITECFG%
 echo [directories] >> %SITECFG%
 echo HDF5_libdir = %LIBRARY_LIB% >> %SITECFG%
 echo HDF5_incdir = %LIBRARY_INC% >> %SITECFG%
diff --git a/conda.recipe/build.sh b/conda.recipe/build.sh
index 582d71c..79fc65e 100644
--- a/conda.recipe/build.sh
+++ b/conda.recipe/build.sh
@@ -3,7 +3,6 @@
 SETUPCFG=$SRC_DIR\setup.cfg
 
 echo "[options]" > $SETUPCFG
-echo "use_cython=True" >> $SETUPCFG
 echo "[directories]" >> $SETUPCFG
 echo "netCDF4_dir = $PREFIX" >> $SETUPCFG
 
diff --git a/docs/netCDF4/index.html b/docs/netCDF4/index.html
index a931a7e..fe82e85 100644
--- a/docs/netCDF4/index.html
+++ b/docs/netCDF4/index.html
@@ -4,7 +4,7 @@
   <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1" />
 
     <title>netCDF4 API documentation</title>
-    <meta name="description" content="Version 1.2.9
+    <meta name="description" content="Version 1.3.0
 -------------
 - - - 
 
@@ -1250,6 +1250,7 @@ table {
     <li class="mono"><a href="#netCDF4.Variable.setncattr">setncattr</a></li>
     <li class="mono"><a href="#netCDF4.Variable.setncattr_string">setncattr_string</a></li>
     <li class="mono"><a href="#netCDF4.Variable.setncatts">setncatts</a></li>
+    <li class="mono"><a href="#netCDF4.Variable.use_nc_get_vars">use_nc_get_vars</a></li>
   </ul>
 
         </li>
@@ -1268,7 +1269,7 @@ table {
 
   <header id="section-intro">
   <h1 class="title"><span class="name">netCDF4</span> module</h1>
-  <h2>Version 1.2.9</h2>
+  <h2>Version 1.3.0</h2>
 <hr />
 <h1>Introduction</h1>
 <p>netcdf4-python is a Python interface to the netCDF C library.  </p>
@@ -1297,7 +1298,7 @@ types) are not supported.</p>
 <h1>Requires</h1>
 <ul>
 <li>Python 2.7 or later (python 3 works too).</li>
-<li><a href="http://numpy.scipy.org">numpy array module</a>, version 1.7.0 or later.</li>
+<li><a href="http://numpy.scipy.org">numpy array module</a>, version 1.9.0 or later.</li>
 <li><a href="http://cython.org">Cython</a>, version 0.19 or later.</li>
 <li><a href="https://pypi.python.org/pypi/setuptools">setuptools</a>, version 18.0 or
    later.</li>
@@ -1327,9 +1328,9 @@ types) are not supported.</p>
  easiest if all the C libs are built as shared libraries.</li>
 <li>By default, the utility <code>nc-config</code>, installed with netcdf 4.1.2 or higher,
 will be used to determine where all the dependencies live.</li>
-<li>If <code>nc-config</code> is not in your default <code>$PATH</code>, rename the
- file <code>setup.cfg.template</code> to <code>setup.cfg</code>, then edit
- in a text editor (follow the instructions in the comments).
+<li>If <code>nc-config</code> is not in your default <code>$PATH</code>,
+ edit the <code>setup.cfg</code> file
+ in a text editor and follow the instructions in the comments.
  In addition to specifying the path to <code>nc-config</code>,
  you can manually set the paths to all the libraries and their include files
  (in case <code>nc-config</code> does not do the right thing).</li>
@@ -1930,8 +1931,11 @@ for storing numpy complex arrays.  Here's an example:</p>
 
 
 <p>Compound types can be nested, but you must create the 'inner'
-ones first. All of the compound types defined for a <a href="#netCDF4.Dataset"><code>Dataset</code></a> or <a href="#netCDF4.Group"><code>Group</code></a> are stored in a
-Python dictionary, just like variables and dimensions. As always, printing
+ones first. Not all numpy structured arrays can be
+represented as Compound variables - an error message will be
+raised if you try to create one that is not supported.
+All of the compound types defined for a <a href="#netCDF4.Dataset"><code>Dataset</code></a> or <a href="#netCDF4.Group"><code>Group</code></a> are stored 
+in a Python dictionary, just like variables and dimensions. As always, printing
 objects gives useful summary information in an interactive session:</p>
 <div class="codehilite"><pre><span></span><span class="o">>>></span> <span class="k">print</span> <span class="n">f</span>
 <span class="o"><</span><span class="nb">type</span> <span class="s2">"netCDF4._netCDF4.Dataset"</span><span class="o">></span>
@@ -2694,7 +2698,9 @@ reducing memory usage and open file handles.  However, in many cases this is not
 desirable, since the associated Variable instances may still be needed, but are
 rendered unusable when the parent Dataset instance is garbage collected.</p>
 <p><strong><code>memory</code></strong>: if not <code>None</code>, open file with contents taken from this block of memory.
-Must be a sequence of bytes.  Note this only works with "r" mode.</p></div>
+Must be a sequence of bytes.  Note this only works with "r" mode.</p>
+<p><strong><code>encoding</code></strong>: encoding used to encode filename string into bytes.
+Default is None (<code>sys.getfilesystemencoding()</code> is used).</p></div>
   <div class="source_cont">
 </div>
 
@@ -2952,14 +2958,16 @@ attributes.</p></div>
             
   <div class="item">
     <div class="name def" id="netCDF4.Dataset.filepath">
-    <p>def <span class="ident">filepath</span>(</p><p>self)</p>
+    <p>def <span class="ident">filepath</span>(</p><p>self,encoding=None)</p>
     </div>
     
 
     
   
     <div class="desc"><p>Get the file system path (or the opendap URL) which was used to
-open/create the Dataset. Requires netcdf >= 4.1.2</p></div>
+open/create the Dataset. Requires netcdf >= 4.1.2.  The path
+is decoded into a string using <code>sys.getfilesystemencoding()</code> by default; this can be
+changed using the <code>encoding</code> kwarg.</p></div>
   <div class="source_cont">
 </div>
 
@@ -4040,7 +4048,7 @@ attributes.</p></div>
             
   <div class="item">
     <div class="name def" id="netCDF4.Group.filepath">
-    <p>def <span class="ident">filepath</span>(</p><p>self)</p>
+    <p>def <span class="ident">filepath</span>(</p><p>self,encoding=None)</p>
     </div>
     
     <p class="inheritance">
@@ -4051,7 +4059,9 @@ attributes.</p></div>
     
   
     <div class="desc inherited"><p>Get the file system path (or the opendap URL) which was used to
-open/create the Dataset. Requires netcdf >= 4.1.2</p></div>
+open/create the Dataset. Requires netcdf >= 4.1.2.  The path
+is decoded into a string using <code>sys.getfilesystemencoding()</code> by default; this can be
+changed using the <code>encoding</code> kwarg.</p></div>
   <div class="source_cont">
 </div>
 
@@ -4975,7 +4985,7 @@ attributes.</p></div>
             
   <div class="item">
     <div class="name def" id="netCDF4.MFDataset.filepath">
-    <p>def <span class="ident">filepath</span>(</p><p>self)</p>
+    <p>def <span class="ident">filepath</span>(</p><p>self,encoding=None)</p>
     </div>
     
     <p class="inheritance">
@@ -4986,7 +4996,9 @@ attributes.</p></div>
     
   
     <div class="desc inherited"><p>Get the file system path (or the opendap URL) which was used to
-open/create the Dataset. Requires netcdf >= 4.1.2</p></div>
+open/create the Dataset. Requires netcdf >= 4.1.2.  The path
+is decoded into a string using <code>sys.getfilesystemencoding()</code> by default; this can be
+changed using the <code>encoding</code> kwarg.</p></div>
   <div class="source_cont">
 </div>
 
@@ -6353,6 +6365,24 @@ each attribute</p></div>
 
   </div>
   
+            
+  <div class="item">
+    <div class="name def" id="netCDF4.Variable.use_nc_get_vars">
+    <p>def <span class="ident">use_nc_get_vars</span>(</p><p>self,_no_get_vars)</p>
+    </div>
+    
+
+    
+  
+    <div class="desc"><p>enable the use of netcdf library routine <code>nc_get_vars</code>
+to retrieve strided variable slices.  By default,
+<code>nc_get_vars</code> is not used, since it is slower than multiple calls
+to the unstrided read routine <code>nc_get_vara</code> in most cases.</p></div>
+  <div class="source_cont">
+</div>
+
+  </div>
+  
       </div>
       </div>
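The alignment requirement behind the new compound-type note above can be checked directly with numpy (ordinary numpy API, nothing netCDF-specific):

```python
import numpy as np

# align=True makes numpy insert C-struct padding and sets the
# isalignedstruct flag that netcdf4-python 1.3.0 requires for
# CompoundType definitions.
aligned = np.dtype([('speed', np.float64), ('flag', np.int8)], align=True)
packed = np.dtype([('speed', np.float64), ('flag', np.int8)])

assert aligned.isalignedstruct
print(aligned.itemsize, packed.itemsize)  # padding makes the aligned struct larger
```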
 
diff --git a/examples/reading_netCDF.ipynb b/examples/reading_netCDF.ipynb
index acc8a9c..9e8070a 100644
--- a/examples/reading_netCDF.ipynb
+++ b/examples/reading_netCDF.ipynb
@@ -1,1308 +1,1293 @@
 {
- "metadata": {
-  "name": "",
-  "signature": "sha256:dd72246b5115e614f58e416dbaf20bdc9b9fd21cd509e3765fca519ab90d3930"
- },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
+ "cells": [
   {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "slide_helper": "subslide_end",
-       "slide_type": "subslide"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "# Reading netCDF data\n",
-      "- requires [numpy](http://numpy.scipy.org) and netCDF/HDF5 C libraries.\n",
-      "- Github site: https://github.com/Unidata/netcdf4-python\n",
-      "- Online docs: http://unidata.github.io/netcdf4-python/\n",
-      "- Based on Konrad Hinsen's old [Scientific.IO.NetCDF](http://dirac.cnrs-orleans.fr/plone/software/scientificpython/) API, with lots of added netcdf version 4 features.\n",
-      "- Developed by Jeff Whitaker at NOAA, with many contributions from users.\n",
-      "\n",
-      "**Important Note**: To run this notebook, you will need the data files from the github repository (the data is not included in the source tarball release).  Please go to https://github.com/Unidata/netcdf4-python and follow the instructions for cloning the repository."
-     ]
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "slide_helper": "subslide_end",
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## Interactively exploring a netCDF File\n",
-      "\n",
-      "Let's explore a netCDF file from the *Atlantic Real-Time Ocean Forecast System*\n",
-      "\n",
-      "first, import netcdf4-python and numpy"
-     ]
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "# Reading netCDF data\n",
+    "- requires [numpy](http://numpy.scipy.org) and netCDF/HDF5 C libraries.\n",
+    "- Github site: https://github.com/Unidata/netcdf4-python\n",
+    "- Online docs: http://unidata.github.io/netcdf4-python/\n",
+    "- Based on Konrad Hinsen's old [Scientific.IO.NetCDF](http://dirac.cnrs-orleans.fr/plone/software/scientificpython/) API, with lots of added netcdf version 4 features.\n",
+    "- Developed by Jeff Whitaker at NOAA, with many contributions from users."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from __future__ import print_function # make sure print behaves the same in 2.7 and 3.x\n",
-      "import netCDF4\n",
-      "import numpy as np"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_number": 2,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [],
-     "prompt_number": 1
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Interactively exploring a netCDF File\n",
+    "\n",
+    "Let's explore a netCDF file from the *Atlantic Real-Time Ocean Forecast System*\n",
+    "\n",
+    "first, import netcdf4-python and numpy"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_number": 2,
+     "slide_helper": "subslide_end"
     },
-    {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 2,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## Create a netCDF4.Dataset object\n",
-      "- **`f`** is a `Dataset` object, representing an open netCDF file.\n",
-      "- printing the object gives you summary information, similar to *`ncdump -h`*."
-     ]
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "import netCDF4\n",
+    "import numpy as np"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 2,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "f = netCDF4.Dataset('data/rtofs_glo_3dz_f006_6hrly_reg3.nc')\n",
-      "print(f) "
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 4,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "<type 'netCDF4.Dataset'>\n",
-        "root group (NETCDF4_CLASSIC data model, file format HDF5):\n",
-        "    Conventions: CF-1.0\n",
-        "    title: HYCOM ATLb2.00\n",
-        "    institution: National Centers for Environmental Prediction\n",
-        "    source: HYCOM archive file\n",
-        "    experiment: 90.9\n",
-        "    history: archv2ncdf3z\n",
-        "    dimensions(sizes): MT(1), Y(850), X(712), Depth(10)\n",
-        "    variables(dimensions): float64 \u001b[4mMT\u001b[0m(MT), float64 \u001b[4mDate\u001b[0m(MT), float32 \u001b[4mDepth\u001b[0m(Depth), int32 \u001b[4mY\u001b[0m(Y), int32 \u001b[4mX\u001b[0m(X), float32 \u001b[4mLatitude\u001b[0m(Y,X), float32 \u001b[4mLongitude\u001b[0m(Y,X), float32 \u001b[4mu\u001b[0m(MT,Depth,Y,X), float32 \u001b[4mv\u001b[0m(MT,Depth,Y,X), float32 \u001b[4mtemperature\u001b[0m(MT,Depth,Y,X), float32 \u001b[4msalinity\u001b[0m(MT,Depth,Y,X)\n",
-        "    groups: \n",
-        "\n"
-       ]
-      }
-     ],
-     "prompt_number": 2
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Create a netCDF4.Dataset object\n",
+    "- **`f`** is a `Dataset` object, representing an open netCDF file.\n",
+    "- printing the object gives you summary information, similar to *`ncdump -h`*."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 4,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 4,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## Access a netCDF variable\n",
-      "- variable objects stored by name in **`variables`** dict.\n",
-      "- print the variable yields summary info (including all the attributes).\n",
-      "- no actual data read yet (just have a reference to the variable object with metadata)."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "<type 'netCDF4._netCDF4.Dataset'>\n",
+      "root group (NETCDF4_CLASSIC data model, file format HDF5):\n",
+      "    Conventions: CF-1.0\n",
+      "    title: HYCOM ATLb2.00\n",
+      "    institution: National Centers for Environmental Prediction\n",
+      "    source: HYCOM archive file\n",
+      "    experiment: 90.9\n",
+      "    history: archv2ncdf3z\n",
+      "    dimensions(sizes): MT(1), Y(850), X(712), Depth(10)\n",
+      "    variables(dimensions): float64 \u001b[4mMT\u001b[0m(MT), float64 \u001b[4mDate\u001b[0m(MT), float32 \u001b[4mDepth\u001b[0m(Depth), int32 \u001b[4mY\u001b[0m(Y), int32 \u001b[4mX\u001b[0m(X), float32 \u001b[4mLatitude\u001b[0m(Y,X), float32 \u001b[4mLongitude\u001b[0m(Y,X), float32 \u001b[4mu\u001b[0m(MT,Depth,Y,X), float32 \u001b[4mv\u001b[0m(MT,Depth,Y,X), float32 \u001b[4mtemperature\u001b[0m(MT,Depth,Y,X), float32 \u001b[4msalinity\u001b[0m(MT,Depth,Y,X)\n",
+      "    groups: \n",
+      "\n"
      ]
+    }
+   ],
+   "source": [
+    "f = netCDF4.Dataset('data/rtofs_glo_3dz_f006_6hrly_reg3.nc')\n",
+    "print(f) "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 4,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print(f.variables.keys()) # get all variable names\n",
-      "temp = f.variables['temperature']  # temperature variable\n",
-      "print(temp) "
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 6,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "[u'MT', u'Date', u'Depth', u'Y', u'X', u'Latitude', u'Longitude', u'u', u'v', u'temperature', u'salinity']\n",
-        "<type 'netCDF4.Variable'>\n",
-        "float32 temperature(MT, Depth, Y, X)\n",
-        "    coordinates: Longitude Latitude Date\n",
-        "    standard_name: sea_water_potential_temperature\n",
-        "    units: degC\n",
-        "    _FillValue: 1.26765e+30\n",
-        "    valid_range: [ -5.07860279  11.14989948]\n",
-        "    long_name:   temp [90.9H]\n",
-        "unlimited dimensions: MT\n",
-        "current shape = (1, 10, 850, 712)\n",
-        "filling on\n"
-       ]
-      }
-     ],
-     "prompt_number": 3
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Access a netCDF variable\n",
+    "- variable objects stored by name in **`variables`** dict.\n",
+    "- print the variable yields summary info (including all the attributes).\n",
+    "- no actual data read yet (just have a reference to the variable object with metadata)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 6,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 6,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## List the Dimensions\n",
-      "\n",
-      "- All variables in a netCDF file have an associated shape, specified by a list of dimensions.\n",
-      "- Let's list all the dimensions in this netCDF file.\n",
-      "- Note that the **`MT`** dimension is special (*`unlimited`*), which means it can be appended to."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[u'MT', u'Date', u'Depth', u'Y', u'X', u'Latitude', u'Longitude', u'u', u'v', u'temperature', u'salinity']\n",
+      "<type 'netCDF4._netCDF4.Variable'>\n",
+      "float32 temperature(MT, Depth, Y, X)\n",
+      "    coordinates: Longitude Latitude Date\n",
+      "    standard_name: sea_water_potential_temperature\n",
+      "    units: degC\n",
+      "    _FillValue: 1.26765e+30\n",
+      "    valid_range: [ -5.07860279  11.14989948]\n",
+      "    long_name:   temp [90.9H]\n",
+      "unlimited dimensions: MT\n",
+      "current shape = (1, 10, 850, 712)\n",
+      "filling on\n"
      ]
+    }
+   ],
+   "source": [
+    "print(f.variables.keys()) # get all variable names\n",
+    "temp = f.variables['temperature']  # temperature variable\n",
+    "print(temp) "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 6,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "for d in f.dimensions.items():\n",
-      "    print(d)"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 8
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "(u'MT', <type 'netCDF4.Dimension'> (unlimited): name = 'MT', size = 1\n",
-        ")\n",
-        "(u'Y', <type 'netCDF4.Dimension'>: name = 'Y', size = 850\n",
-        ")\n",
-        "(u'X', <type 'netCDF4.Dimension'>: name = 'X', size = 712\n",
-        ")\n",
-        "(u'Depth', <type 'netCDF4.Dimension'>: name = 'Depth', size = 10\n",
-        ")\n"
-       ]
-      }
-     ],
-     "prompt_number": 4
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## List the Dimensions\n",
+    "\n",
+    "- All variables in a netCDF file have an associated shape, specified by a list of dimensions.\n",
+    "- Let's list all the dimensions in this netCDF file.\n",
+    "- Note that the **`MT`** dimension is special (*`unlimited`*), which means it can be appended to."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 8
     },
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 9
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "source": [
-      "Each variable has a **`dimensions`** and a **`shape`** attribute."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "(u'MT', <type 'netCDF4._netCDF4.Dimension'> (unlimited): name = 'MT', size = 1\n",
+      ")\n",
+      "(u'Y', <type 'netCDF4._netCDF4.Dimension'>: name = 'Y', size = 850\n",
+      ")\n",
+      "(u'X', <type 'netCDF4._netCDF4.Dimension'>: name = 'X', size = 712\n",
+      ")\n",
+      "(u'Depth', <type 'netCDF4._netCDF4.Dimension'>: name = 'Depth', size = 10\n",
+      ")\n"
      ]
+    }
+   ],
+   "source": [
+    "for d in f.dimensions.items():\n",
+    "    print(d)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 9
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "temp.dimensions"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 10
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "metadata": {},
-       "output_type": "pyout",
-       "prompt_number": 5,
-       "text": [
-        "(u'MT', u'Depth', u'Y', u'X')"
-       ]
-      }
-     ],
-     "prompt_number": 5
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "source": [
+    "Each variable has a **`dimensions`** and a **`shape`** attribute."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 10
     },
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "temp.shape"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 11,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
+     "data": {
+      "text/plain": [
+       "(u'MT', u'Depth', u'Y', u'X')"
+      ]
      },
-     "outputs": [
-      {
-       "metadata": {},
-       "output_type": "pyout",
-       "prompt_number": 6,
-       "text": [
-        "(1, 10, 850, 712)"
-       ]
-      }
-     ],
-     "prompt_number": 6
+     "execution_count": 9,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "temp.dimensions"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 11,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 11,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
+     "data": {
+      "text/plain": [
+       "(1, 10, 850, 712)"
+      ]
      },
-     "source": [
-      "### Each dimension typically has a variable associated with it (called a *coordinate* variable).\n",
-      "- *Coordinate variables* are 1D variables that have the same name as dimensions.\n",
-      "- Coordinate variables and *auxiliary coordinate variables* (named by the *coordinates* attribute) locate values in time and space."
-     ]
+     "execution_count": 10,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "temp.shape"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 11,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "mt = f.variables['MT']\n",
-      "depth = f.variables['Depth']\n",
-      "x,y = f.variables['X'], f.variables['Y']\n",
-      "print(mt)\n",
-      "print(x)                 "
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 13,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "<type 'netCDF4.Variable'>\n",
-        "float64 MT(MT)\n",
-        "    long_name: time\n",
-        "    units: days since 1900-12-31 00:00:00\n",
-        "    calendar: standard\n",
-        "    axis: T\n",
-        "unlimited dimensions: MT\n",
-        "current shape = (1,)\n",
-        "filling on, default _FillValue of 9.96920996839e+36 used\n",
-        "\n",
-        "<type 'netCDF4.Variable'>\n",
-        "int32 X(X)\n",
-        "    point_spacing: even\n",
-        "    axis: X\n",
-        "unlimited dimensions: \n",
-        "current shape = (712,)\n",
-        "filling on, default _FillValue of -2147483647 used\n",
-        "\n"
-       ]
-      }
-     ],
-     "prompt_number": 7
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "### Each dimension typically has a variable associated with it (called a *coordinate* variable).\n",
+    "- *Coordinate variables* are 1D variables that have the same name as dimensions.\n",
+    "- Coordinate variables and *auxiliary coordinate variables* (named by the *coordinates* attribute) locate values in time and space."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 13,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 13,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## Accessing data from a netCDF variable object\n",
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "<type 'netCDF4._netCDF4.Variable'>\n",
+      "float64 MT(MT)\n",
+      "    long_name: time\n",
+      "    units: days since 1900-12-31 00:00:00\n",
+      "    calendar: standard\n",
+      "    axis: T\n",
+      "unlimited dimensions: MT\n",
+      "current shape = (1,)\n",
+      "filling on, default _FillValue of 9.96920996839e+36 used\n",
       "\n",
-      "- netCDF variables objects behave much like numpy arrays.\n",
-      "- slicing a netCDF variable object returns a numpy array with the data.\n",
-      "- Boolean array and integer sequence indexing behaves differently for netCDF variables than for numpy arrays. Only 1-d boolean arrays and integer sequences are allowed, and these indices work independently along each dimension (similar to the way vector subscripts work in fortran)."
+      "<type 'netCDF4._netCDF4.Variable'>\n",
+      "int32 X(X)\n",
+      "    point_spacing: even\n",
+      "    axis: X\n",
+      "unlimited dimensions: \n",
+      "current shape = (712,)\n",
+      "filling on, default _FillValue of -2147483647 used\n",
+      "\n"
      ]
+    }
+   ],
+   "source": [
+    "mt = f.variables['MT']\n",
+    "depth = f.variables['Depth']\n",
+    "x,y = f.variables['X'], f.variables['Y']\n",
+    "print(mt)\n",
+    "print(x)                 "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 13,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "time = mt[:]  # Reads the netCDF variable MT, array of one element\n",
-      "print(time) "
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 15
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "[ 41023.25]\n"
-       ]
-      }
-     ],
-     "prompt_number": 8
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "dpth = depth[:] # examine depth array\n",
-      "print(dpth) "
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 16
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "[    0.   100.   200.   400.   700.  1000.  2000.  3000.  4000.  5000.]\n"
-       ]
-      }
-     ],
-     "prompt_number": 9
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "xx,yy = x[:],y[:]\n",
-      "print('shape of temp variable: %s' % repr(temp.shape))\n",
-      "tempslice = temp[0, dpth > 400, yy > yy.max()/2, xx > xx.max()/2]\n",
-      "print('shape of temp slice: %s' % repr(tempslice.shape))"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 17,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "shape of temp variable: (1, 10, 850, 712)\n",
-        "shape of temp slice: (6, 425, 356)"
-       ]
-      },
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "\n"
-       ]
-      }
-     ],
-     "prompt_number": 10
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Accessing data from a netCDF variable object\n",
+    "\n",
+    "- netCDF variable objects behave much like numpy arrays.\n",
+    "- Slicing a netCDF variable object returns a numpy array with the data.\n",
+    "- Boolean array and integer sequence indexing behaves differently for netCDF variables than for numpy arrays. Only 1-d boolean arrays and integer sequences are allowed, and these indices work independently along each dimension (similar to the way vector subscripts work in fortran)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 15
     },
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 17,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## What is the sea surface temperature and salinity at 50N, 140W?\n",
-      "### Finding the latitude and longitude indices of 50N, 140W\n",
-      "\n",
-      "- The `X` and `Y` dimensions don't look like longitudes and latitudes\n",
-      "- Use the auxilary coordinate variables named in the `coordinates` variable attribute, `Latitude` and `Longitude`"
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[ 41023.25]\n"
      ]
+    }
+   ],
+   "source": [
+    "time = mt[:]  # Reads the netCDF variable MT, array of one element\n",
+    "print(time) "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 16
     },
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "lat, lon = f.variables['Latitude'], f.variables['Longitude']\n",
-      "print(lat)"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 19
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "<type 'netCDF4.Variable'>\n",
-        "float32 Latitude(Y, X)\n",
-        "    standard_name: latitude\n",
-        "    units: degrees_north\n",
-        "unlimited dimensions: \n",
-        "current shape = (850, 712)\n",
-        "filling on, default _FillValue of 9.96920996839e+36 used\n",
-        "\n"
-       ]
-      }
-     ],
-     "prompt_number": 11
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[    0.   100.   200.   400.   700.  1000.  2000.  3000.  4000.  5000.]\n"
+     ]
+    }
+   ],
+   "source": [
+    "dpth = depth[:] # examine depth array\n",
+    "print(dpth) "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 14,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 17,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 20,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "source": [
-      "Aha!  So we need to find array indices `iy` and `ix` such that `Latitude[iy, ix]` is close to 50.0 and `Longitude[iy, ix]` is close to -140.0 ..."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "shape of temp variable: (1, 10, 850, 712)\n",
+      "shape of temp slice: (6, 425, 356)\n"
      ]
+    }
+   ],
+   "source": [
+    "xx,yy = x[:],y[:]\n",
+    "print('shape of temp variable: %s' % repr(temp.shape))\n",
+    "tempslice = temp[0, dpth > 400, yy > yy.max()/2, xx > xx.max()/2]\n",
+    "print('shape of temp slice: %s' % repr(tempslice.shape))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 17,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# extract lat/lon values (in degrees) to numpy arrays\n",
-      "latvals = lat[:]; lonvals = lon[:] \n",
-      "# a function to find the index of the point closest pt\n",
-      "# (in squared distance) to give lat/lon value.\n",
-      "def getclosest_ij(lats,lons,latpt,lonpt):\n",
-      "    # find squared distance of every point on grid\n",
-      "    dist_sq = (lats-latpt)**2 + (lons-lonpt)**2  \n",
-      "    # 1D index of minimum dist_sq element\n",
-      "    minindex_flattened = dist_sq.argmin()    \n",
-      "    # Get 2D index for latvals and lonvals arrays from 1D index\n",
-      "    return np.unravel_index(minindex_flattened, lats.shape)\n",
-      "iy_min, ix_min = getclosest_ij(latvals, lonvals, 50., -140)"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 20,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "outputs": [],
-     "prompt_number": 12
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## What is the sea surface temperature and salinity at 50N, 140W?\n",
+    "### Finding the latitude and longitude indices of 50N, 140W\n",
+    "\n",
+    "- The `X` and `Y` dimensions don't look like longitudes and latitudes\n",
+    "- Use the auxiliary coordinate variables named in the `coordinates` variable attribute, `Latitude` and `Longitude`"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 15,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 19
     },
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 22
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "source": [
-      "### Now we have all the information we need to find our answer.\n"
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "<type 'netCDF4._netCDF4.Variable'>\n",
+      "float32 Latitude(Y, X)\n",
+      "    standard_name: latitude\n",
+      "    units: degrees_north\n",
+      "unlimited dimensions: \n",
+      "current shape = (850, 712)\n",
+      "filling on, default _FillValue of 9.96920996839e+36 used\n",
+      "\n"
      ]
+    }
+   ],
+   "source": [
+    "lat, lon = f.variables['Latitude'], f.variables['Longitude']\n",
+    "print(lat)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 20,
+     "slide_helper": "subslide_end"
     },
-    {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 23
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "source": [
-      "```\n",
-      "|----------+--------|\n",
-      "| Variable |  Index |\n",
-      "|----------+--------|\n",
-      "| MT       |      0 |\n",
-      "| Depth    |      0 |\n",
-      "| Y        | iy_min |\n",
-      "| X        | ix_min |\n",
-      "|----------+--------|\n",
-      "```"
-     ]
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "source": [
+    "Aha!  So we need to find array indices `iy` and `ix` such that `Latitude[iy, ix]` is close to 50.0 and `Longitude[iy, ix]` is close to -140.0 ..."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 16,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 20,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 24
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "source": [
-      "### What is the sea surface temperature and salinity at the specified point?"
-     ]
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "# extract lat/lon values (in degrees) to numpy arrays\n",
+    "latvals = lat[:]; lonvals = lon[:] \n",
+    "# a function to find the index of the grid point closest\n",
+    "# (in squared distance) to the given lat/lon value.\n",
+    "def getclosest_ij(lats,lons,latpt,lonpt):\n",
+    "    # find squared distance of every point on grid\n",
+    "    dist_sq = (lats-latpt)**2 + (lons-lonpt)**2  \n",
+    "    # 1D index of minimum dist_sq element\n",
+    "    minindex_flattened = dist_sq.argmin()    \n",
+    "    # Get 2D index for latvals and lonvals arrays from 1D index\n",
+    "    return np.unravel_index(minindex_flattened, lats.shape)\n",
+    "iy_min, ix_min = getclosest_ij(latvals, lonvals, 50., -140)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 22
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "sal = f.variables['salinity']\n",
-      "# Read values out of the netCDF file for temperature and salinity\n",
-      "print('%7.4f %s' % (temp[0,0,iy_min,ix_min], temp.units))\n",
-      "print('%7.4f %s' % (sal[0,0,iy_min,ix_min], sal.units))"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 25,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        " 6.4631 degC\n",
-        "32.6572 psu\n"
-       ]
-      }
-     ],
-     "prompt_number": 13
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "source": [
+    "### Now we have all the information we need to find our answer.\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 23
     },
-    {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 25,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## Remote data access via openDAP\n",
-      "\n",
-      "- Remote data can be accessed seamlessly with the netcdf4-python API\n",
-      "- Access happens via the DAP protocol and DAP servers, such as TDS.\n",
-      "- many formats supported, like GRIB, are supported \"under the hood\"."
-     ]
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "source": [
+    "```\n",
+    "|----------+--------|\n",
+    "| Variable |  Index |\n",
+    "|----------+--------|\n",
+    "| MT       |      0 |\n",
+    "| Depth    |      0 |\n",
+    "| Y        | iy_min |\n",
+    "| X        | ix_min |\n",
+    "|----------+--------|\n",
+    "```"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 24
     },
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "source": [
+    "### What is the sea surface temperature and salinity at the specified point?"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 17,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 25,
+     "slide_helper": "subslide_end"
+    },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 27
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "source": [
-      "The following example showcases some nice netCDF features:\n",
-      "\n",
-      "1. We are seamlessly accessing **remote** data, from a TDS server.\n",
-      "2. We are seamlessly accessing **GRIB2** data, as if it were netCDF data.\n",
-      "3. We are generating **metadata** on-the-fly."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      " 6.4631 degC\n",
+      "32.6572 psu\n"
      ]
+    }
+   ],
+   "source": [
+    "sal = f.variables['salinity']\n",
+    "# Read values out of the netCDF file for temperature and salinity\n",
+    "print('%7.4f %s' % (temp[0,0,iy_min,ix_min], temp.units))\n",
+    "print('%7.4f %s' % (sal[0,0,iy_min,ix_min], sal.units))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 25,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import datetime\n",
-      "date = datetime.datetime.now()\n",
-      "# build URL for latest synoptic analysis time\n",
-      "URL = 'http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GFS/Global_0p5deg/GFS_Global_0p5deg_%04i%02i%02i_%02i%02i.grib2/GC' %\\\n",
-      "(date.year,date.month,date.day,6*(date.hour//6),0)\n",
-      "# keep moving back 6 hours until a valid URL found\n",
-      "validURL = False; ncount = 0\n",
-      "while (not validURL and ncount < 10):\n",
-      "    print(URL)\n",
-      "    try:\n",
-      "        gfs = netCDF4.Dataset(URL)\n",
-      "        validURL = True\n",
-      "    except RuntimeError:\n",
-      "        date -= datetime.timedelta(hours=6)\n",
-      "        ncount += 1       "
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 28,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GFS/Global_0p5deg/GFS_Global_0p5deg_20150310_1200.grib2/GC\n"
-       ]
-      }
-     ],
-     "prompt_number": 14
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Remote data access via openDAP\n",
+    "\n",
+    "- Remote data can be accessed seamlessly with the netcdf4-python API\n",
+    "- Access happens via the DAP protocol and DAP servers, such as TDS.\n",
+    "- Many formats, like GRIB, are supported \"under the hood\"."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 27
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# Look at metadata for a specific variable\n",
-      "# gfs.variables.keys() will show all available variables.\n",
-      "sfctmp = gfs.variables['Temperature_surface']\n",
-      "# get info about sfctmp\n",
-      "print(sfctmp)\n",
-      "# print coord vars associated with this variable\n",
-      "for dname in sfctmp.dimensions:   \n",
-      "    print(gfs.variables[dname])"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 28,
-       "slide_helper": "subslide_end",
-       "slide_type": "subslide"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "<type 'netCDF4.Variable'>\n",
-        "float32 Temperature_surface(time1, lat, lon)\n",
-        "    long_name: Temperature @ Ground or water surface\n",
-        "    units: K\n",
-        "    missing_value: nan\n",
-        "    abbreviation: TMP\n",
-        "    coordinates: time1 \n",
-        "    Grib_Variable_Id: VAR_0-0-0_L1\n",
-        "    Grib2_Parameter: [0 0 0]\n",
-        "    Grib2_Parameter_Discipline: Meteorological products\n",
-        "    Grib2_Parameter_Category: Temperature\n",
-        "    Grib2_Parameter_Name: Temperature\n",
-        "    Grib2_Level_Type: Ground or water surface\n",
-        "    Grib2_Generating_Process_Type: Forecast\n",
-        "unlimited dimensions: \n",
-        "current shape = (93, 361, 720)\n",
-        "filling off\n",
-        "\n",
-        "<type 'netCDF4.Variable'>\n",
-        "float64 time1(time1)\n",
-        "    units: Hour since 2015-03-10T12:00:00Z\n",
-        "    standard_name: time\n",
-        "    long_name: GRIB forecast or observation time\n",
-        "    calendar: proleptic_gregorian\n",
-        "    _CoordinateAxisType: Time\n",
-        "unlimited dimensions: \n",
-        "current shape = (93,)\n",
-        "filling off\n",
-        "\n",
-        "<type 'netCDF4.Variable'>\n",
-        "float32 lat(lat)\n",
-        "    units: degrees_north\n",
-        "    _CoordinateAxisType: Lat\n",
-        "unlimited dimensions: \n",
-        "current shape = (361,)\n",
-        "filling off\n",
-        "\n",
-        "<type 'netCDF4.Variable'>\n",
-        "float32 lon(lon)\n",
-        "    units: degrees_east\n",
-        "    _CoordinateAxisType: Lon\n",
-        "unlimited dimensions: \n",
-        "current shape = (720,)\n",
-        "filling off\n",
-        "\n"
-       ]
-      }
-     ],
-     "prompt_number": 15
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "source": [
+    "The following example showcases some nice netCDF features:\n",
+    "\n",
+    "1. We are seamlessly accessing **remote** data, from a TDS server.\n",
+    "2. We are seamlessly accessing **GRIB2** data, as if it were netCDF data.\n",
+    "3. We are generating **metadata** on-the-fly."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 19,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 28,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 28,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "##Missing values\n",
-      "- when `data == var.missing_value` somewhere, a masked array is returned.\n",
-      "- illustrate with soil moisture data (only defined over land)\n",
-      "- white areas on plot are masked values over water."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GFS/Global_0p5deg/GFS_Global_0p5deg_20150711_0600.grib2/GC\n"
      ]
+    }
+   ],
+   "source": [
+    "import datetime\n",
+    "date = datetime.datetime.now()\n",
+    "# build URL for latest synoptic analysis time\n",
+    "URL = 'http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GFS/Global_0p5deg/GFS_Global_0p5deg_%04i%02i%02i_%02i%02i.grib2/GC' %\\\n",
+    "(date.year,date.month,date.day,6*(date.hour//6),0)\n",
+    "# keep moving back 6 hours until a valid URL found\n",
+    "validURL = False; ncount = 0\n",
+    "while (not validURL and ncount < 10):\n",
+    "    print(URL)\n",
+    "    try:\n",
+    "        gfs = netCDF4.Dataset(URL)\n",
+    "        validURL = True\n",
+    "    except RuntimeError:\n",
+    "        date -= datetime.timedelta(hours=6)\n",
+    "        ncount += 1       "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 20,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 28,
+     "slide_helper": "subslide_end",
+     "slide_type": "subslide"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "soilmvar = gfs.variables['Volumetric_Soil_Moisture_Content_depth_below_surface_layer']\n",
-      "# flip the data in latitude so North Hemisphere is up on the plot\n",
-      "soilm = soilmvar[0,0,::-1,:] \n",
-      "print('shape=%s, type=%s, missing_value=%s' % \\\n",
-      "      (soilm.shape, type(soilm), soilmvar.missing_value))\n",
-      "import matplotlib.pyplot as plt\n",
-      "%matplotlib inline\n",
-      "cs = plt.contourf(soilm)"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 31
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "shape=(361, 720), type=<class 'numpy.ma.core.MaskedArray'>, missing_value=nan\n"
-       ]
-      },
-      {
-       "metadata": {},
-       "output_type": "display_data",
-       "png": "iVBORw0KGgoAAAANSUhEUgAAAXIAAAD7CAYAAAB37B+tAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJztnX+sZVd13z8L228SIGPzgNjGdjpWwSlueAFUm7Qk4iVN\njJFaTNsImFFbVNIoamJAFLczHhd7MIEyyKFRMyJqA0ROxJtCSKC4isF2wlMTVZgQsAfHuGCFkTIU\njx0PeEhRZ2y8+sc9+71999v7nH1+3XvOvesjPb17z899z9n7e9ZZe+21RVUxDMMwxssz5l0AwzAM\nox0m5IZhGCPHhNwwDGPkmJAbhmGMHBNywzCMkWNCbhiGMXLOncdJRcRiHg3DMBqgqhIuK7XIReQH\nROReEblPRB4QkUPF8kMickJEvlz8vcbb50YR+bqIPCQi15QUZnR/t9xyy9zLYGUfz99Yyz3mso+1\n3LllT1Fqk [...]
-       "text": [
-        "<matplotlib.figure.Figure at 0x10aee5a10>"
-       ]
-      }
-     ],
-     "prompt_number": 16
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 32,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "source": [
-      "##Packed integer data\n",
-      "There is a similar feature for variables with `scale_factor` and `add_offset` attributes.\n",
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "<type 'netCDF4._netCDF4.Variable'>\n",
+      "float32 Temperature_surface(time2, lat, lon)\n",
+      "    long_name: Temperature @ Ground or water surface\n",
+      "    units: K\n",
+      "    abbreviation: TMP\n",
+      "    missing_value: nan\n",
+      "    grid_mapping: LatLon_Projection\n",
+      "    coordinates: reftime time2 lat lon \n",
+      "    Grib_Variable_Id: VAR_0-0-0_L1\n",
+      "    Grib2_Parameter: [0 0 0]\n",
+      "    Grib2_Parameter_Discipline: Meteorological products\n",
+      "    Grib2_Parameter_Category: Temperature\n",
+      "    Grib2_Parameter_Name: Temperature\n",
+      "    Grib2_Level_Type: Ground or water surface\n",
+      "    Grib2_Generating_Process_Type: Forecast\n",
+      "unlimited dimensions: \n",
+      "current shape = (93, 361, 720)\n",
+      "filling off\n",
+      "\n",
+      "<type 'netCDF4._netCDF4.Variable'>\n",
+      "float64 time2(time2)\n",
+      "    units: Hour since 2015-07-11T06:00:00Z\n",
+      "    standard_name: time\n",
+      "    long_name: GRIB forecast or observation time\n",
+      "    calendar: proleptic_gregorian\n",
+      "    _CoordinateAxisType: Time\n",
+      "unlimited dimensions: \n",
+      "current shape = (93,)\n",
+      "filling off\n",
       "\n",
-      "- short integer data will automatically be returned as float data, with the scale and offset applied.  "
+      "<type 'netCDF4._netCDF4.Variable'>\n",
+      "float32 lat(lat)\n",
+      "    units: degrees_north\n",
+      "    _CoordinateAxisType: Lat\n",
+      "unlimited dimensions: \n",
+      "current shape = (361,)\n",
+      "filling off\n",
+      "\n",
+      "<type 'netCDF4._netCDF4.Variable'>\n",
+      "float32 lon(lon)\n",
+      "    units: degrees_east\n",
+      "    _CoordinateAxisType: Lon\n",
+      "unlimited dimensions: \n",
+      "current shape = (720,)\n",
+      "filling off\n",
+      "\n"
      ]
+    }
+   ],
+   "source": [
+    "# Look at metadata for a specific variable\n",
+    "# gfs.variables.keys() will show all available variables.\n",
+    "sfctmp = gfs.variables['Temperature_surface']\n",
+    "# get info about sfctmp\n",
+    "print(sfctmp)\n",
+    "# print coord vars associated with this variable\n",
+    "for dname in sfctmp.dimensions:   \n",
+    "    print(gfs.variables[dname])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 28,
+     "slide_type": "subslide"
     },
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "##Missing values\n",
+    "- when `data == var.missing_value` somewhere, a masked array is returned.\n",
+    "- illustrate with soil moisture data (only defined over land)\n",
+    "- white areas on plot are masked values over water."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 21,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 31
+    },
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 32,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## Dealing with dates and times\n",
-      "- time variables usually measure relative to a fixed date using a certain calendar, with units specified like ***`hours since YY:MM:DD hh-mm-ss`***.\n",
-      "- **`num2date`** and **`date2num`** convenience functions provided to convert between these numeric time coordinates and handy python datetime instances.  \n",
-      "- **`date2index`** finds the time index corresponding to a datetime instance."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "shape=(361, 720), type=<class 'numpy.ma.core.MaskedArray'>, missing_value=nan\n"
      ]
     },
     {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from netCDF4 import num2date, date2num, date2index\n",
-      "timedim = sfctmp.dimensions[0] # time dim name\n",
-      "print('name of time dimension = %s' % timedim)\n",
-      "times = gfs.variables[timedim] # time coord var\n",
-      "print('units = %s, values = %s' % (times.units, times[:]))"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 34
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
+     "data": {
+      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXIAAAD7CAYAAAB37B+tAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJztnX+MJkeZ378Ptmf5dbPLgM8/N7dWwCc23NhGwr6EO7Fc\nwNhSgi9ShNlVyAlOEQpZQAQns2sHZ+OLHQb5yEm3AkXHDzmEmeCYHwIFDtvEm3CKzE/bY1gc2zpW\nYn322vHCDhd0s177yR/dNVNvvVXV1d3V3VX9Ph9pNO/bb/+o7q769tNPPfUUMTMEQRCEfHnR0AUQ\nBEEQ2iFCLgiCkDki5IIgCJkjQi4IgpA5IuSCIAiZI0IuCIKQOWcPcVAikphHQRCEBjAz2RY6/wC8\nGMB3ADwI4EcADpXLDwE4DuCB8u9abZuDAB4D8AiAqx37Zd9xU/1T55/jn5Rdyj0LZc+13KFld2mn\n1yJn [...]
+      "text/plain": [
+       "<matplotlib.figure.Figure at 0x1125c5a90>"
+      ]
      },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "name of time dimension = time1\n",
-        "units = Hour since 2015-03-10T12:00:00Z, values = [   0.    3.    6.    9.   12.   15.   18.   21.   24.   27.   30.   33.\n",
-        "   36.   39.   42.   45.   48.   51.   54.   57.   60.   63.   66.   69.\n",
-        "   72.   75.   78.   81.   84.   87.   90.   93.   96.   99.  102.  105.\n",
-        "  108.  111.  114.  117.  120.  123.  126.  129.  132.  135.  138.  141.\n",
-        "  144.  147.  150.  153.  156.  159.  162.  165.  168.  171.  174.  177.\n",
-        "  180.  183.  186.  189.  192.  195.  198.  201.  204.  207.  210.  213.\n",
-        "  216.  219.  222.  225.  228.  231.  234.  237.  240.  252.  264.  276.\n",
-        "  288.  300.  312.  324.  336.  348.  360.  372.  384.]\n"
-       ]
-      }
-     ],
-     "prompt_number": 17
+     "metadata": {},
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "soilmvar = gfs.variables['Volumetric_Soil_Moisture_Content_depth_below_surface_layer']\n",
+    "# flip the data in latitude so North Hemisphere is up on the plot\n",
+    "soilm = soilmvar[0,0,::-1,:] \n",
+    "print('shape=%s, type=%s, missing_value=%s' % \\\n",
+    "      (soilm.shape, type(soilm), soilmvar.missing_value))\n",
+    "import matplotlib.pyplot as plt\n",
+    "%matplotlib inline\n",
+    "cs = plt.contourf(soilm)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 32,
+     "slide_helper": "subslide_end"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "dates = num2date(times[:], times.units)\n",
-      "print([date.strftime('%Y-%m-%d %H:%M:%S') for date in dates[:10]]) # print only first ten..."
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 35,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "['2015-03-10 12:00:00', '2015-03-10 15:00:00', '2015-03-10 18:00:00', '2015-03-10 21:00:00', '2015-03-11 00:00:00', '2015-03-11 03:00:00', '2015-03-11 06:00:00', '2015-03-11 09:00:00', '2015-03-11 12:00:00', '2015-03-11 15:00:00']\n"
-       ]
-      }
-     ],
-     "prompt_number": 18
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "source": [
+    "##Packed integer data\n",
+    "There is a similar feature for variables with `scale_factor` and `add_offset` attributes.\n",
+    "\n",
+    "- short integer data will automatically be returned as float data, with the scale and offset applied.  "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 32,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 35,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "###Get index associated with a specified date, extract forecast data for that date."
-     ]
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Dealing with dates and times\n",
+    "- time variables usually measure relative to a fixed date using a certain calendar, with units specified like ***`hours since YY:MM:DD hh-mm-ss`***.\n",
+    "- **`num2date`** and **`date2num`** convenience functions provided to convert between these numeric time coordinates and handy python datetime instances.  \n",
+    "- **`date2index`** finds the time index corresponding to a datetime instance."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 22,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 34
     },
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from datetime import datetime, timedelta\n",
-      "date = datetime.now() + timedelta(days=3)\n",
-      "print(date)\n",
-      "ntime = date2index(date,times,select='nearest')\n",
-      "print('index = %s, date = %s' % (ntime, dates[ntime]))"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 37
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "2015-03-13 14:47:13.043237\n",
-        "index = 25, date = 2015-03-13 15:00:00\n"
-       ]
-      }
-     ],
-     "prompt_number": 19
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "name of time dimension = time2\n",
+      "units = Hour since 2015-07-11T06:00:00Z, values = [   0.    3.    6.    9.   12.   15.   18.   21.   24.   27.   30.   33.\n",
+      "   36.   39.   42.   45.   48.   51.   54.   57.   60.   63.   66.   69.\n",
+      "   72.   75.   78.   81.   84.   87.   90.   93.   96.   99.  102.  105.\n",
+      "  108.  111.  114.  117.  120.  123.  126.  129.  132.  135.  138.  141.\n",
+      "  144.  147.  150.  153.  156.  159.  162.  165.  168.  171.  174.  177.\n",
+      "  180.  183.  186.  189.  192.  195.  198.  201.  204.  207.  210.  213.\n",
+      "  216.  219.  222.  225.  228.  231.  234.  237.  240.  252.  264.  276.\n",
+      "  288.  300.  312.  324.  336.  348.  360.  372.  384.]\n"
+     ]
+    }
+   ],
+   "source": [
+    "from netCDF4 import num2date, date2num, date2index\n",
+    "timedim = sfctmp.dimensions[0] # time dim name\n",
+    "print('name of time dimension = %s' % timedim)\n",
+    "times = gfs.variables[timedim] # time coord var\n",
+    "print('units = %s, values = %s' % (times.units, times[:]))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 23,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 35,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 38
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "source": [
-      "###Get temp forecast for Boulder (near 40N, -105W)\n",
-      "- use function **`getcloses_ij`** we created before..."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "['2015-07-11 06:00:00', '2015-07-11 09:00:00', '2015-07-11 12:00:00', '2015-07-11 15:00:00', '2015-07-11 18:00:00', '2015-07-11 21:00:00', '2015-07-12 00:00:00', '2015-07-12 03:00:00', '2015-07-12 06:00:00', '2015-07-12 09:00:00']\n"
      ]
+    }
+   ],
+   "source": [
+    "dates = num2date(times[:], times.units)\n",
+    "print([date.strftime('%Y-%m-%d %H:%M:%S') for date in dates[:10]]) # print only first ten..."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 35,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "lats, lons = gfs.variables['lat'][:], gfs.variables['lon'][:]\n",
-      "# lats, lons are 1-d. Make them 2-d using numpy.meshgrid.\n",
-      "lons, lats = np.meshgrid(lons,lats)\n",
-      "j, i = getclosest_ij(lats,lons,40,-105)\n",
-      "fcst_temp = sfctmp[ntime,j,i]\n",
-      "print('Boulder forecast valid at %s UTC = %5.1f %s' % \\\n",
-      "      (dates[ntime],fcst_temp,sfctmp.units))"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 39,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "Boulder forecast valid at 2015-03-13 15:00:00 UTC = 292.7 K\n"
-       ]
-      }
-     ],
-     "prompt_number": 20
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "###Get index associated with a specified date, extract forecast data for that date."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 24,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 37
     },
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 39,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "##Simple multi-file aggregation\n",
-      "\n",
-      "What if you have a bunch of netcdf files, each with data for a different year, and you want to access all the data as if it were in one file?"
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "2015-07-14 07:22:39.579246\n",
+      "index = 24, date = 2015-07-14 06:00:00\n"
      ]
+    }
+   ],
+   "source": [
+    "from datetime import datetime, timedelta\n",
+    "date = datetime.now() + timedelta(days=3)\n",
+    "print(date)\n",
+    "ntime = date2index(date,times,select='nearest')\n",
+    "print('index = %s, date = %s' % (ntime, dates[ntime]))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 38
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "!ls -l data/prmsl*nc"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 41
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "-rw-r--r--  1 jwhitaker  climate  8985332 Mar 10 09:57 data/prmsl.2000.nc\r\n",
-        "-rw-r--r--  1 jwhitaker  climate  8968789 Mar 10 09:57 data/prmsl.2001.nc\r\n",
-        "-rw-r--r--  1 jwhitaker  climate  8972796 Mar 10 09:57 data/prmsl.2002.nc\r\n",
-        "-rw-r--r--  1 jwhitaker  climate  8974435 Mar 10 09:57 data/prmsl.2003.nc\r\n",
-        "-rw-r--r--  1 jwhitaker  climate  8997438 Mar 10 09:57 data/prmsl.2004.nc\r\n",
-        "-rw-r--r--  1 jwhitaker  climate  8976678 Mar 10 09:57 data/prmsl.2005.nc\r\n",
-        "-rw-r--r--  1 jwhitaker  climate  8969714 Mar 10 09:57 data/prmsl.2006.nc\r\n",
-        "-rw-r--r--  1 jwhitaker  climate  8974360 Mar 10 09:57 data/prmsl.2007.nc\r\n",
-        "-rw-r--r--  1 jwhitaker  climate  8994260 Mar 10 09:57 data/prmsl.2008.nc\r\n",
-        "-rw-r--r--  1 jwhitaker  climate  8974678 Mar 10 09:57 data/prmsl.2009.nc\r\n",
-        "-rw-r--r--  1 jwhitaker  climate  8970732 Mar 10 09:57 data/prmsl.2010.nc\r\n",
-        "-rw-r--r--  1 jwhitaker  climate  8976285 Mar 10 09:57 data/prmsl.2011.nc\r\n"
-       ]
-      }
-     ],
-     "prompt_number": 21
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "source": [
+    "###Get temp forecast for Boulder (near 40N, -105W)\n",
+    "- use function **`getcloses_ij`** we created before..."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 25,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 39,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 42
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "source": [
-      "**`MFDataset`** uses file globbing to patch together all the files into one big Dataset.\n",
-      "You can also pass it a list of specific files.\n",
-      "\n",
-      "Limitations:\n",
-      "\n",
-      "- It can only  aggregate the data along the leftmost dimension of each variable.\n",
-      "- only works with `NETCDF3`, or `NETCDF4_CLASSIC` formatted files.\n",
-      "- kind of slow."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Boulder forecast valid at 2015-07-14 06:00:00 UTC = 296.8 K\n"
      ]
+    }
+   ],
+   "source": [
+    "lats, lons = gfs.variables['lat'][:], gfs.variables['lon'][:]\n",
+    "# lats, lons are 1-d. Make them 2-d using numpy.meshgrid.\n",
+    "lons, lats = np.meshgrid(lons,lats)\n",
+    "j, i = getclosest_ij(lats,lons,40,-105)\n",
+    "fcst_temp = sfctmp[ntime,j,i]\n",
+    "print('Boulder forecast valid at %s UTC = %5.1f %s' % \\\n",
+    "      (dates[ntime],fcst_temp,sfctmp.units))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 39,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "mf = netCDF4.MFDataset('data/prmsl*nc')\n",
-      "times = mf.variables['time']\n",
-      "dates = num2date(times[:],times.units)\n",
-      "print('starting date = %s' % dates[0])\n",
-      "print('ending date = %s'% dates[-1])\n",
-      "prmsl = mf.variables['prmsl']\n",
-      "print('times shape = %s' % times.shape)\n",
-      "print('prmsl dimensions = %s, prmsl shape = %s' %\\\n",
-      "     (prmsl.dimensions, prmsl.shape))"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 43,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "starting date = 2000-01-01 00:00:00\n",
-        "ending date = 2011-12-31 00:00:00\n",
-        "times shape = 4383\n",
-        "prmsl dimensions = (u'time', u'lat', u'lon'), prmsl shape = (4383, 91, 180)\n"
-       ]
-      }
-     ],
-     "prompt_number": 22
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "##Simple multi-file aggregation\n",
+    "\n",
+    "What if you have a bunch of netcdf files, each with data for a different year, and you want to access all the data as if it were in one file?"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 26,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 41
     },
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 43,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## Closing your netCDF file\n",
-      "\n",
-      "It's good to close netCDF files, but not actually necessary when Dataset is open for read access only.\n"
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "-rw-r--r--  1 jwhitaker  staff  8985332 Jul 10 06:43 data/prmsl.2000.nc\r\n",
+      "-rw-r--r--  1 jwhitaker  staff  8968789 Jul 10 06:43 data/prmsl.2001.nc\r\n",
+      "-rw-r--r--  1 jwhitaker  staff  8972796 Jul 10 06:43 data/prmsl.2002.nc\r\n",
+      "-rw-r--r--  1 jwhitaker  staff  8974435 Jul 10 06:43 data/prmsl.2003.nc\r\n",
+      "-rw-r--r--  1 jwhitaker  staff  8997438 Jul 10 06:43 data/prmsl.2004.nc\r\n",
+      "-rw-r--r--  1 jwhitaker  staff  8976678 Jul 10 06:43 data/prmsl.2005.nc\r\n",
+      "-rw-r--r--  1 jwhitaker  staff  8969714 Jul 10 06:43 data/prmsl.2006.nc\r\n",
+      "-rw-r--r--  1 jwhitaker  staff  8974360 Jul 10 06:43 data/prmsl.2007.nc\r\n",
+      "-rw-r--r--  1 jwhitaker  staff  8994260 Jul 10 06:43 data/prmsl.2008.nc\r\n",
+      "-rw-r--r--  1 jwhitaker  staff  8974678 Jul 10 06:43 data/prmsl.2009.nc\r\n",
+      "-rw-r--r--  1 jwhitaker  staff  8970732 Jul 10 06:43 data/prmsl.2010.nc\r\n",
+      "-rw-r--r--  1 jwhitaker  staff  8976285 Jul 10 06:43 data/prmsl.2011.nc\r\n"
      ]
+    }
+   ],
+   "source": [
+    "!ls -l data/prmsl*nc"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 42
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "f.close()\n",
-      "gfs.close()"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 45
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [],
-     "prompt_number": 23
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "source": [
+    "**`MFDataset`** uses file globbing to patch together all the files into one big Dataset.\n",
+    "You can also pass it a list of specific files.\n",
+    "\n",
+    "Limitations:\n",
+    "\n",
+    "- It can only  aggregate the data along the leftmost dimension of each variable.\n",
+    "- only works with `NETCDF3`, or `NETCDF4_CLASSIC` formatted files.\n",
+    "- kind of slow."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 27,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 43,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 45,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "-"
-      }
-     },
-     "source": [
-      "##That's it!\n",
-      "\n",
-      "Now you're ready to start exploring your data interactively.\n",
-      "\n",
-      "To be continued with **Writing netCDF data** ...."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "starting date = 2000-01-01 00:00:00\n",
+      "ending date = 2011-12-31 00:00:00\n",
+      "times shape = 4383\n",
+      "prmsl dimensions = (u'time', u'lat', u'lon'), prmsl shape = (4383, 91, 180)\n"
      ]
     }
    ],
-   "metadata": {}
+   "source": [
+    "mf = netCDF4.MFDataset('data/prmsl*nc')\n",
+    "times = mf.variables['time']\n",
+    "dates = num2date(times[:],times.units)\n",
+    "print('starting date = %s' % dates[0])\n",
+    "print('ending date = %s'% dates[-1])\n",
+    "prmsl = mf.variables['prmsl']\n",
+    "print('times shape = %s' % times.shape)\n",
+    "print('prmsl dimensions = %s, prmsl shape = %s' %\\\n",
+    "     (prmsl.dimensions, prmsl.shape))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 43,
+     "slide_type": "subslide"
+    },
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Closing your netCDF file\n",
+    "\n",
+    "It's good to close netCDF files, but not actually necessary when Dataset is open for read access only.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 28,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 45
+    },
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "f.close()\n",
+    "gfs.close()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 45,
+     "slide_helper": "subslide_end"
+    },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "-"
+    }
+   },
+   "source": [
+    "##That's it!\n",
+    "\n",
+    "Now you're ready to start exploring your data interactively.\n",
+    "\n",
+    "To be continued with **Writing netCDF data** ...."
+   ]
   }
- ]
-}
\ No newline at end of file
+ ],
+ "metadata": {
+  "celltoolbar": "Raw Cell Format",
+  "kernelspec": {
+   "display_name": "Python 2",
+   "language": "python",
+   "name": "python2"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 2
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython2",
+   "version": "2.7.9"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
diff --git a/examples/writing_netCDF.ipynb b/examples/writing_netCDF.ipynb
index 6c763e4..4f2d7dd 100644
--- a/examples/writing_netCDF.ipynb
+++ b/examples/writing_netCDF.ipynb
@@ -1,1212 +1,1206 @@
 {
- "metadata": {
-  "name": "",
-  "signature": "sha256:2d5756ffde5fe7c0ab2e065fbc148e5808a947da2a26573885526ea00f45b576"
- },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
+ "cells": [
   {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "# Writing netCDF data\n",
-      "\n",
-      "**Important Note**: when running this notebook interactively in a browser, you probably will not be able to execute individual cells out of order without getting an error.  Instead, choose \"Run All\" from the Cell menu after you modify a cell."
-     ]
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from __future__ import print_function # make sure print behaves the same in 2.7 and 3.x\n",
-      "import netCDF4     # Note: python is case-sensitive!\n",
-      "import numpy as np"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_number": 1,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [],
-     "prompt_number": 1
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "# Writing netCDF data\n",
+    "\n",
+    "**Important Note**: when running this notebook interactively in a browser, you probably will not be able to execute individual cells out of order without getting an error.  Instead, choose \"Run All\" from the Cell menu after you modify a cell."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 25,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_number": 1,
+     "slide_helper": "subslide_end"
     },
-    {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 1,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## Opening a file, creating a new Dataset\n",
-      "\n",
-      "Let's create a new, empty netCDF file named 'data/new.nc', opened for writing.\n",
-      "\n",
-      "Be careful, opening a file with 'w' will clobber any existing data (unless `clobber=False` is used, in which case an exception is raised if the file already exists).\n",
-      "\n",
-      "- `mode='r'` is the default.\n",
-      "- `mode='a'` opens an existing file and allows for appending (does not clobber existing data)\n",
-      "- `format` can be one of `NETCDF3_CLASSIC`, `NETCDF3_64BIT`, `NETCDF4_CLASSIC` or `NETCDF4` (default). `NETCDF4_CLASSIC` uses HDF5 for the underlying storage layer (as does `NETCDF4`) but enforces the classic netCDF 3 data model so data can be read with older clients.  "
-     ]
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "import netCDF4     # Note: python is case-sensitive!\n",
+    "import numpy as np"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 1,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "try: ncfile.close()  # just to be safe, make sure dataset is not already open.\n",
-      "except: pass\n",
-      "ncfile = netCDF4.Dataset('data/new.nc',mode='w',format='NETCDF4_CLASSIC') \n",
-      "print(ncfile)"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 3,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "<type 'netCDF4.Dataset'>\n",
-        "root group (NETCDF4_CLASSIC data model, file format HDF5):\n",
-        "    dimensions(sizes): \n",
-        "    variables(dimensions): \n",
-        "    groups: \n",
-        "\n"
-       ]
-      }
-     ],
-     "prompt_number": 2
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Opening a file, creating a new Dataset\n",
+    "\n",
+    "Let's create a new, empty netCDF file named 'data/new.nc', opened for writing.\n",
+    "\n",
+    "Be careful, opening a file with 'w' will clobber any existing data (unless `clobber=False` is used, in which case an exception is raised if the file already exists).\n",
+    "\n",
+    "- `mode='r'` is the default.\n",
+    "- `mode='a'` opens an existing file and allows for appending (does not clobber existing data)\n",
+    "- `format` can be one of `NETCDF3_CLASSIC`, `NETCDF3_64BIT`, `NETCDF4_CLASSIC` or `NETCDF4` (default). `NETCDF4_CLASSIC` uses HDF5 for the underlying storage layer (as does `NETCDF4`) but enforces the classic netCDF 3 data model so data can be read with older clients.  "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 26,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 3,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 3,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## Creating dimensions\n",
-      "\n",
-      "The **ncfile** object we created is a container for _dimensions_, _variables_, and _attributes_.   First, let's create some dimensions using the [`createDimension`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createDimension) method.  \n",
-      "\n",
-      "- Every dimension has a name and a length.  \n",
-      "- The name is a string that is used to specify the dimension to be used when creating a variable, and as a key to access the dimension object in the `ncfile.dimensions` dictionary.\n",
-      "\n",
-      "Setting the dimension length to `0` or `None` makes it unlimited, so it can grow. \n",
-      "\n",
-      "- For `NETCDF4` files, any variable's dimension can be unlimited.  \n",
-      "- For `NETCDF4_CLASSIC` and `NETCDF3*` files, only one per variable can be unlimited, and it must be the leftmost (fastest varying) dimension."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "<type 'netCDF4._netCDF4.Dataset'>\n",
+      "root group (NETCDF4_CLASSIC data model, file format HDF5):\n",
+      "    dimensions(sizes): \n",
+      "    variables(dimensions): \n",
+      "    groups: \n",
+      "\n"
      ]
+    }
+   ],
+   "source": [
+    "try: ncfile.close()  # just to be safe, make sure dataset is not already open.\n",
+    "except: pass\n",
+    "ncfile = netCDF4.Dataset('data/new.nc',mode='w',format='NETCDF4_CLASSIC') \n",
+    "print(ncfile)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 3,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "lat_dim = ncfile.createDimension('lat', 73)     # latitude axis\n",
-      "lon_dim = ncfile.createDimension('lon', 144)    # longitude axis\n",
-      "time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to).\n",
-      "for dim in ncfile.dimensions.items():\n",
-      "    print(dim)"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 5,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "('lat', <type 'netCDF4.Dimension'>: name = 'lat', size = 73\n",
-        ")\n",
-        "('lon', <type 'netCDF4.Dimension'>: name = 'lon', size = 144\n",
-        ")\n",
-        "('time', <type 'netCDF4.Dimension'> (unlimited): name = 'time', size = 0\n",
-        ")\n"
-       ]
-      }
-     ],
-     "prompt_number": 3
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Creating dimensions\n",
+    "\n",
+    "The **ncfile** object we created is a container for _dimensions_, _variables_, and _attributes_.   First, let's create some dimensions using the [`createDimension`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createDimension) method.  \n",
+    "\n",
+    "- Every dimension has a name and a length.  \n",
+    "- The name is a string that is used to specify the dimension to be used when creating a variable, and as a key to access the dimension object in the `ncfile.dimensions` dictionary.\n",
+    "\n",
+    "Setting the dimension length to `0` or `None` makes it unlimited, so it can grow. \n",
+    "\n",
+    "- For `NETCDF4` files, any variable's dimension can be unlimited.  \n",
+    "- For `NETCDF4_CLASSIC` and `NETCDF3*` files, only one per variable can be unlimited, and it must be the leftmost (fastest varying) dimension."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 27,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 5,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 5,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## Creating attributes\n",
-      "\n",
-      "netCDF attributes can be created just like you would for any python object. \n",
-      "\n",
-      "- Best to adhere to established conventions (like the [CF](http://cfconventions.org/) conventions)\n",
-      "- We won't try to adhere to any specific convention here though."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "('lat', <type 'netCDF4._netCDF4.Dimension'>: name = 'lat', size = 73\n",
+      ")\n",
+      "('lon', <type 'netCDF4._netCDF4.Dimension'>: name = 'lon', size = 144\n",
+      ")\n",
+      "('time', <type 'netCDF4._netCDF4.Dimension'> (unlimited): name = 'time', size = 0\n",
+      ")\n"
      ]
+    }
+   ],
+   "source": [
+    "lat_dim = ncfile.createDimension('lat', 73)     # latitude axis\n",
+    "lon_dim = ncfile.createDimension('lon', 144)    # longitude axis\n",
+    "time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to).\n",
+    "for dim in ncfile.dimensions.items():\n",
+    "    print(dim)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 5,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ncfile.title='My model data'\n",
-      "print(ncfile.title)"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 7
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "My model data\n"
-       ]
-      }
-     ],
-     "prompt_number": 4
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Creating attributes\n",
+    "\n",
+    "netCDF attributes can be created just like you would for any python object. \n",
+    "\n",
+    "- Best to adhere to established conventions (like the [CF](http://cfconventions.org/) conventions)\n",
+    "- We won't try to adhere to any specific convention here though."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 28,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 7
     },
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 8,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "source": [
-      "Try adding some more attributes..."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "My model data\n"
      ]
+    }
+   ],
+   "source": [
+    "ncfile.title='My model data'\n",
+    "print(ncfile.title)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 8,
+     "slide_helper": "subslide_end"
     },
-    {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 8,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## Creating variables\n",
-      "\n",
-      "Now let's add some variables and store some data in them.  \n",
-      "\n",
-      "- A variable has a name, a type, a shape, and some data values.  \n",
-      "- The shape of a variable is specified by a tuple of dimension names.  \n",
-      "- A variable should also have some named attributes, such as 'units', that describe the data.\n",
-      "\n",
-      "The [`createVariable`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createVariable) method takes 3 mandatory args.\n",
-      "\n",
-      "- the 1st argument is the variable name (a string). This is used as the key to access the variable object from the `variables` dictionary.\n",
-      "- the 2nd argument is the datatype (most numpy datatypes supported).  \n",
-      "- the third argument is a tuple containing the dimension names (the dimensions must be created first).  Unless this is a `NETCDF4` file, any unlimited dimension must be the leftmost one.\n",
-      "- there are lots of optional arguments (many of which are only relevant when `format='NETCDF'`) to control compression, chunking, fill_value, etc.\n"
-     ]
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "source": [
+    "Try adding some more attributes..."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 8,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# Define two variables with the same names as dimensions,\n",
-      "# a conventional way to define \"coordinate variables\".\n",
-      "lat = ncfile.createVariable('lat', np.float32, ('lat',))\n",
-      "lat.units = 'degrees_north'\n",
-      "lat.long_name = 'latitude'\n",
-      "lon = ncfile.createVariable('lon', np.float32, ('lon',))\n",
-      "lon.units = 'degrees_east'\n",
-      "lon.long_name = 'longitude'\n",
-      "time = ncfile.createVariable('time', np.float64, ('time',))\n",
-      "time.units = 'hours since 1800-01-01'\n",
-      "time.long_name = 'time'\n",
-      "# Define a 3D variable to hold the data\n",
-      "temp = ncfile.createVariable('temp',np.float64,('time','lat','lon')) # note: unlimited dimension is leftmost\n",
-      "temp.units = 'K' # degrees Kelvin\n",
-      "temp.standard_name = 'air_temperature' # this is a CF standard name\n",
-      "print(temp)"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 10,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "<type 'netCDF4.Variable'>\n",
-        "float64 temp(time, lat, lon)\n",
-        "    units: K\n",
-        "    standard_name: air_temperature\n",
-        "unlimited dimensions: time\n",
-        "current shape = (0, 73, 144)\n",
-        "filling on, default _FillValue of 9.96920996839e+36 used\n",
-        "\n"
-       ]
-      }
-     ],
-     "prompt_number": 5
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Creating variables\n",
+    "\n",
+    "Now let's add some variables and store some data in them.  \n",
+    "\n",
+    "- A variable has a name, a type, a shape, and some data values.  \n",
+    "- The shape of a variable is specified by a tuple of dimension names.  \n",
+    "- A variable should also have some named attributes, such as 'units', that describe the data.\n",
+    "\n",
+    "The [`createVariable`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createVariable) method takes 3 mandatory args.\n",
+    "\n",
+    "- the 1st argument is the variable name (a string). This is used as the key to access the variable object from the `variables` dictionary.\n",
+    "- the 2nd argument is the datatype (most numpy datatypes supported).  \n",
+    "- the third argument is a tuple containing the dimension names (the dimensions must be created first).  Unless this is a `NETCDF4` file, any unlimited dimension must be the leftmost one.\n",
+    "- there are lots of optional arguments (many of which are only relevant when `format='NETCDF4'`) to control compression, chunking, fill_value, etc.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 29,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 10,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 10,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## Pre-defined variable attributes (read only)\n",
-      "\n",
-      "The netCDF4 module provides some useful pre-defined Python attributes for netCDF variables, such as dimensions, shape, dtype, ndim. \n",
-      "\n",
-      "Note: since no data has been written yet, the length of the 'time' dimension is 0."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "<type 'netCDF4._netCDF4.Variable'>\n",
+      "float64 temp(time, lat, lon)\n",
+      "    units: K\n",
+      "    standard_name: air_temperature\n",
+      "unlimited dimensions: time\n",
+      "current shape = (0, 73, 144)\n",
+      "filling on, default _FillValue of 9.96920996839e+36 used\n",
+      "\n"
      ]
+    }
+   ],
+   "source": [
+    "# Define two variables with the same names as dimensions,\n",
+    "# a conventional way to define \"coordinate variables\".\n",
+    "lat = ncfile.createVariable('lat', np.float32, ('lat',))\n",
+    "lat.units = 'degrees_north'\n",
+    "lat.long_name = 'latitude'\n",
+    "lon = ncfile.createVariable('lon', np.float32, ('lon',))\n",
+    "lon.units = 'degrees_east'\n",
+    "lon.long_name = 'longitude'\n",
+    "time = ncfile.createVariable('time', np.float64, ('time',))\n",
+    "time.units = 'hours since 1800-01-01'\n",
+    "time.long_name = 'time'\n",
+    "# Define a 3D variable to hold the data\n",
+    "temp = ncfile.createVariable('temp',np.float64,('time','lat','lon')) # note: unlimited dimension is leftmost\n",
+    "temp.units = 'K' # degrees Kelvin\n",
+    "temp.standard_name = 'air_temperature' # this is a CF standard name\n",
+    "print(temp)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 10,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print(\"-- Some pre-defined attributes for variable temp:\")\n",
-      "print(\"temp.dimensions:\", temp.dimensions)\n",
-      "print(\"temp.shape:\", temp.shape)\n",
-      "print(\"temp.dtype:\", temp.dtype)\n",
-      "print(\"temp.ndim:\", temp.ndim)"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 12,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "-- Some pre-defined attributes for variable temp:\n",
-        "temp.dimensions: (u'time', u'lat', u'lon')\n",
-        "temp.shape: (0, 73, 144)\n",
-        "temp.dtype: float64\n",
-        "temp.ndim: 3\n"
-       ]
-      }
-     ],
-     "prompt_number": 6
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Pre-defined variable attributes (read only)\n",
+    "\n",
+    "The netCDF4 module provides some useful pre-defined Python attributes for netCDF variables, such as dimensions, shape, dtype, ndim. \n",
+    "\n",
+    "Note: since no data has been written yet, the length of the 'time' dimension is 0."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 30,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 12,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 12,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## Writing data\n",
-      "\n",
-      "To write data a netCDF variable object, just treat it like a numpy array and assign values to a slice."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "-- Some pre-defined attributes for variable temp:\n",
+      "('temp.dimensions:', (u'time', u'lat', u'lon'))\n",
+      "('temp.shape:', (0, 73, 144))\n",
+      "('temp.dtype:', dtype('float64'))\n",
+      "('temp.ndim:', 3)\n"
      ]
+    }
+   ],
+   "source": [
+    "print(\"-- Some pre-defined attributes for variable temp:\")\n",
+    "print(\"temp.dimensions:\", temp.dimensions)\n",
+    "print(\"temp.shape:\", temp.shape)\n",
+    "print(\"temp.dtype:\", temp.dtype)\n",
+    "print(\"temp.ndim:\", temp.ndim)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 12,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "nlats = len(lat_dim); nlons = len(lon_dim); ntimes = 3\n",
-      "# Write latitudes, longitudes.\n",
-      "# Note: the \":\" is necessary in these \"write\" statements\n",
-      "lat[:] = -90. + (180./nlats)*np.arange(nlats) # south pole to north pole\n",
-      "lon[:] = (180./nlats)*np.arange(nlons) # Greenwich meridian eastward\n",
-      "# create a 3D array of random numbers\n",
-      "data_arr = np.random.uniform(low=280,high=330,size=(ntimes,nlats,nlons))\n",
-      "# Write the data.  This writes the whole 3D netCDF variable all at once.\n",
-      "temp[:,:,:] = data_arr  # Appends data along unlimited dimension\n",
-      "print(\"-- Wrote data, temp.shape is now \", temp.shape)\n",
-      "# read data back from variable (by slicing it), print min and max\n",
-      "print(\"-- Min/Max values:\", temp[:,:,:].min(), temp[:,:,:].max())"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 14
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "-- Wrote data, temp.shape is now  (3, 73, 144)\n",
-        "-- Min/Max values: 280.00201974 329.999706175\n"
-       ]
-      }
-     ],
-     "prompt_number": 7
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Writing data\n",
+    "\n",
+    "To write data a netCDF variable object, just treat it like a numpy array and assign values to a slice."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 31,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 14
     },
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 15,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "source": [
-      "- You can just treat a netCDF Variable object like a numpy array and assign values to it.\n",
-      "- Variables automatically grow along unlimited dimensions (unlike numpy arrays)\n",
-      "- The above writes the whole 3D variable all at once,  but you can write it a slice at a time instead.\n",
-      "\n",
-      "Let's add another time slice....\n"
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "('-- Wrote data, temp.shape is now ', (3, 73, 144))\n",
+      "('-- Min/Max values:', 280.00283562143028, 329.99987991477548)\n"
      ]
+    }
+   ],
+   "source": [
+    "nlats = len(lat_dim); nlons = len(lon_dim); ntimes = 3\n",
+    "# Write latitudes, longitudes.\n",
+    "# Note: the \":\" is necessary in these \"write\" statements\n",
+    "lat[:] = -90. + (180./nlats)*np.arange(nlats) # south pole to north pole\n",
+    "lon[:] = (180./nlats)*np.arange(nlons) # Greenwich meridian eastward\n",
+    "# create a 3D array of random numbers\n",
+    "data_arr = np.random.uniform(low=280,high=330,size=(ntimes,nlats,nlons))\n",
+    "# Write the data.  This writes the whole 3D netCDF variable all at once.\n",
+    "temp[:,:,:] = data_arr  # Appends data along unlimited dimension\n",
+    "print(\"-- Wrote data, temp.shape is now \", temp.shape)\n",
+    "# read data back from variable (by slicing it), print min and max\n",
+    "print(\"-- Min/Max values:\", temp[:,:,:].min(), temp[:,:,:].max())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 15,
+     "slide_helper": "subslide_end"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# create a 2D array of random numbers\n",
-      "data_slice = np.random.uniform(low=280,high=330,size=(nlats,nlons))\n",
-      "temp[3,:,:] = data_slice   # Appends the 4th time slice\n",
-      "print(\"-- Wrote more data, temp.shape is now \", temp.shape)"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 15,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "-- Wrote more data, temp.shape is now  (4, 73, 144)\n"
-       ]
-      }
-     ],
-     "prompt_number": 8
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "source": [
+    "- You can just treat a netCDF Variable object like a numpy array and assign values to it.\n",
+    "- Variables automatically grow along unlimited dimensions (unlike numpy arrays)\n",
+    "- The above writes the whole 3D variable all at once,  but you can write it a slice at a time instead.\n",
+    "\n",
+    "Let's add another time slice....\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 32,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 15,
+     "slide_type": "subslide"
     },
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 17
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "source": [
-      "Note that we have not yet written any data to the time variable.  It automatically grew as we appended data along the time dimension to the variable `temp`, but the data is missing."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "('-- Wrote more data, temp.shape is now ', (4, 73, 144))\n"
      ]
+    }
+   ],
+   "source": [
+    "# create a 2D array of random numbers\n",
+    "data_slice = np.random.uniform(low=280,high=330,size=(nlats,nlons))\n",
+    "temp[3,:,:] = data_slice   # Appends the 4th time slice\n",
+    "print(\"-- Wrote more data, temp.shape is now \", temp.shape)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 17
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print(time)\n",
-      "times_arr = time[:]\n",
-      "print(type(times_arr),times_arr)  # dashes indicate masked values (where data has not yet been written)"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 18,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "<type 'netCDF4.Variable'>\n",
-        "float64 time(time)\n",
-        "    units: hours since 1800-01-01\n",
-        "    long_name: time\n",
-        "unlimited dimensions: time\n",
-        "current shape = (4,)\n",
-        "filling on, default _FillValue of 9.96920996839e+36 used\n",
-        "\n",
-        "<class 'numpy.ma.core.MaskedArray'> [-- -- -- --]\n"
-       ]
-      }
-     ],
-     "prompt_number": 9
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "source": [
+    "Note that we have not yet written any data to the time variable.  It automatically grew as we appended data along the time dimension to the variable `temp`, but the data is missing."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 33,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 18,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 18,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "Let's add write some data into the time variable.  \n",
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "<type 'netCDF4._netCDF4.Variable'>\n",
+      "float64 time(time)\n",
+      "    units: hours since 1800-01-01\n",
+      "    long_name: time\n",
+      "unlimited dimensions: time\n",
+      "current shape = (4,)\n",
+      "filling on, default _FillValue of 9.96920996839e+36 used\n",
       "\n",
-      "- Given a set of datetime instances, use date2num to convert to numeric time values and then write that data to the variable."
+      "(<class 'numpy.ma.core.MaskedArray'>, masked_array(data = [-- -- -- --],\n",
+      "             mask = [ True  True  True  True],\n",
+      "       fill_value = 9.96920996839e+36)\n",
+      ")\n"
      ]
+    }
+   ],
+   "source": [
+    "print(time)\n",
+    "times_arr = time[:]\n",
+    "print(type(times_arr),times_arr)  # dashes indicate masked values (where data has not yet been written)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 18,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from datetime import datetime\n",
-      "from netCDF4 import date2num,num2date\n",
-      "# 1st 4 days of October.\n",
-      "dates = [datetime(2014,10,1,0),datetime(2014,10,2,0),datetime(2014,10,3,0),datetime(2014,10,4,0)]\n",
-      "print(dates)\n",
-      "times = date2num(dates, time.units)\n",
-      "print(times, time.units) # numeric values\n",
-      "time[:] = times\n",
-      "# read time data back, convert to datetime instances, check values.\n",
-      "print(num2date(time[:],time.units))"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 20,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "[datetime.datetime(2014, 10, 1, 0, 0), datetime.datetime(2014, 10, 2, 0, 0), datetime.datetime(2014, 10, 3, 0, 0), datetime.datetime(2014, 10, 4, 0, 0)]\n",
-        "[ 1882440.  1882464.  1882488.  1882512.] hours since 1800-01-01\n",
-        "[datetime.datetime(2014, 10, 1, 0, 0) datetime.datetime(2014, 10, 2, 0, 0)\n",
-        " datetime.datetime(2014, 10, 3, 0, 0) datetime.datetime(2014, 10, 4, 0, 0)]\n"
-       ]
-      }
-     ],
-     "prompt_number": 10
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "Let's add write some data into the time variable.  \n",
+    "\n",
+    "- Given a set of datetime instances, use date2num to convert to numeric time values and then write that data to the variable."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 34,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 20,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 20,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## Closing a netCDF file\n",
-      "\n",
-      "It's **important** to close a netCDF file you opened for writing:\n",
-      "\n",
-      "- flushes buffers to make sure all data gets written\n",
-      "- releases memory resources used by open netCDF files"
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[datetime.datetime(2014, 10, 1, 0, 0), datetime.datetime(2014, 10, 2, 0, 0), datetime.datetime(2014, 10, 3, 0, 0), datetime.datetime(2014, 10, 4, 0, 0)]\n",
+      "(array([ 1882440.,  1882464.,  1882488.,  1882512.]), u'hours since 1800-01-01')\n",
+      "[datetime.datetime(2014, 10, 1, 0, 0) datetime.datetime(2014, 10, 2, 0, 0)\n",
+      " datetime.datetime(2014, 10, 3, 0, 0) datetime.datetime(2014, 10, 4, 0, 0)]\n"
      ]
+    }
+   ],
+   "source": [
+    "from datetime import datetime\n",
+    "from netCDF4 import date2num,num2date\n",
+    "# 1st 4 days of October.\n",
+    "dates = [datetime(2014,10,1,0),datetime(2014,10,2,0),datetime(2014,10,3,0),datetime(2014,10,4,0)]\n",
+    "print(dates)\n",
+    "times = date2num(dates, time.units)\n",
+    "print(times, time.units) # numeric values\n",
+    "time[:] = times\n",
+    "# read time data back, convert to datetime instances, check values.\n",
+    "print(num2date(time[:],time.units))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 20,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": true,
-     "input": [
-      "# first print the Dataset object to see what we've got\n",
-      "print(ncfile)\n",
-      "# close the Dataset.\n",
-      "ncfile.close(); print('Dataset is closed!')"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 22,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "<type 'netCDF4.Dataset'>\n",
-        "root group (NETCDF4_CLASSIC data model, file format HDF5):\n",
-        "    title: My model data\n",
-        "    dimensions(sizes): lat(73), lon(144), time(4)\n",
-        "    variables(dimensions): float32 \u001b[4mlat\u001b[0m(lat), float32 \u001b[4mlon\u001b[0m(lon), float64 \u001b[4mtime\u001b[0m(time), float64 \u001b[4mtemp\u001b[0m(time,lat,lon)\n",
-        "    groups: \n",
-        "\n",
-        "Dataset is closed!\n"
-       ]
-      }
-     ],
-     "prompt_number": 11
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Closing a netCDF file\n",
+    "\n",
+    "It's **important** to close a netCDF file you opened for writing:\n",
+    "\n",
+    "- flushes buffers to make sure all data gets written\n",
+    "- releases memory resources used by open netCDF files"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 35,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 22,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 22,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "# Advanced features\n",
-      "\n",
-      "So far we've only exercised features associated with the old netCDF version 3 data model.  netCDF version 4 adds a lot of new functionality that comes with the more flexible HDF5 storage layer.  \n",
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "<type 'netCDF4._netCDF4.Dataset'>\n",
+      "root group (NETCDF4_CLASSIC data model, file format HDF5):\n",
+      "    title: My model data\n",
+      "    dimensions(sizes): lat(73), lon(144), time(4)\n",
+      "    variables(dimensions): float32 \u001b[4mlat\u001b[0m(lat), float32 \u001b[4mlon\u001b[0m(lon), float64 \u001b[4mtime\u001b[0m(time), float64 \u001b[4mtemp\u001b[0m(time,lat,lon)\n",
+      "    groups: \n",
       "\n",
-      "Let's create a new file with `format='NETCDF4'` so we can try out some of these features."
+      "Dataset is closed!\n"
      ]
+    }
+   ],
+   "source": [
+    "# first print the Dataset object to see what we've got\n",
+    "print(ncfile)\n",
+    "# close the Dataset.\n",
+    "ncfile.close(); print('Dataset is closed!')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 22,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ncfile = netCDF4.Dataset('data/new2.nc','w',format='NETCDF4')\n",
-      "print(ncfile)"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 25,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "<type 'netCDF4.Dataset'>\n",
-        "root group (NETCDF4 data model, file format HDF5):\n",
-        "    dimensions(sizes): \n",
-        "    variables(dimensions): \n",
-        "    groups: \n",
-        "\n"
-       ]
-      }
-     ],
-     "prompt_number": 12
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "# Advanced features\n",
+    "\n",
+    "So far we've only exercised features associated with the old netCDF version 3 data model.  netCDF version 4 adds a lot of new functionality that comes with the more flexible HDF5 storage layer.  \n",
+    "\n",
+    "Let's create a new file with `format='NETCDF4'` so we can try out some of these features."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 36,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 25,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 25,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "## Creating Groups\n",
-      "\n",
-      "netCDF version 4 added support for organizing data in hierarchical groups.\n",
-      "\n",
-      "- analogous to directories in a filesystem. \n",
-      "- Groups serve as containers for variables, dimensions and attributes, as well as other groups. \n",
-      "- A `netCDF4.Dataset` creates a special group, called the 'root group', which is similar to the root directory in a unix filesystem. \n",
-      "\n",
-      "- groups are created using the [`createGroup`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createGroup) method.\n",
-      "- takes a single argument (a string, which is the name of the Group instance).  This string is used as a key to access the group instances in the `groups` dictionary.\n",
-      "\n",
-      "Here we create two groups to hold data for two different model runs."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "<type 'netCDF4._netCDF4.Dataset'>\n",
+      "root group (NETCDF4 data model, file format HDF5):\n",
+      "    dimensions(sizes): \n",
+      "    variables(dimensions): \n",
+      "    groups: \n",
+      "\n"
      ]
+    }
+   ],
+   "source": [
+    "ncfile = netCDF4.Dataset('data/new2.nc','w',format='NETCDF4')\n",
+    "print(ncfile)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 25,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "grp1 = ncfile.createGroup('model_run1')\n",
-      "grp2 = ncfile.createGroup('model_run2')\n",
-      "for grp in ncfile.groups.items():\n",
-      "    print(grp)"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 27,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "('model_run1', <type 'netCDF4.Group'>\n",
-        "group /model_run1:\n",
-        "    dimensions(sizes): \n",
-        "    variables(dimensions): \n",
-        "    groups: \n",
-        ")\n",
-        "('model_run2', <type 'netCDF4.Group'>\n",
-        "group /model_run2:\n",
-        "    dimensions(sizes): \n",
-        "    variables(dimensions): \n",
-        "    groups: \n",
-        ")\n"
-       ]
-      }
-     ],
-     "prompt_number": 13
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Creating Groups\n",
+    "\n",
+    "netCDF version 4 added support for organizing data in hierarchical groups.\n",
+    "\n",
+    "- analogous to directories in a filesystem. \n",
+    "- Groups serve as containers for variables, dimensions and attributes, as well as other groups. \n",
+    "- A `netCDF4.Dataset` creates a special group, called the 'root group', which is similar to the root directory in a unix filesystem. \n",
+    "\n",
+    "- groups are created using the [`createGroup`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createGroup) method.\n",
+    "- takes a single argument (a string, which is the name of the Group instance).  This string is used as a key to access the group instances in the `groups` dictionary.\n",
+    "\n",
+    "Here we create two groups to hold data for two different model runs."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 37,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 27,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 27,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "Create some dimensions in the root group."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "('model_run1', <type 'netCDF4._netCDF4.Group'>\n",
+      "group /model_run1:\n",
+      "    dimensions(sizes): \n",
+      "    variables(dimensions): \n",
+      "    groups: \n",
+      ")\n",
+      "('model_run2', <type 'netCDF4._netCDF4.Group'>\n",
+      "group /model_run2:\n",
+      "    dimensions(sizes): \n",
+      "    variables(dimensions): \n",
+      "    groups: \n",
+      ")\n"
      ]
+    }
+   ],
+   "source": [
+    "grp1 = ncfile.createGroup('model_run1')\n",
+    "grp2 = ncfile.createGroup('model_run2')\n",
+    "for grp in ncfile.groups.items():\n",
+    "    print(grp)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 27,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "lat_dim = ncfile.createDimension('lat', 73)     # latitude axis\n",
-      "lon_dim = ncfile.createDimension('lon', 144)    # longitude axis\n",
-      "time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to)."
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 29
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [],
-     "prompt_number": 14
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "Create some dimensions in the root group."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 38,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 29
     },
-    {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 30
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "source": [
-      "Now create a variable in grp1 and grp2.  The library will search recursively upwards in the group tree to find the dimensions (which in this case are defined one level up).\n",
-      "\n",
-      "- These variables are create with **zlib compression**, another nifty feature of netCDF 4. \n",
-      "- The data are automatically compressed when data is written to the file, and uncompressed when the data is read.  \n",
-      "- This can really save disk space, especially when used in conjunction with the [**least_significant_digit**](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createVariable) keyword argument, which causes the data to be quantized (truncated) before compression.  This makes the compression lossy, but more efficient."
-     ]
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "lat_dim = ncfile.createDimension('lat', 73)     # latitude axis\n",
+    "lon_dim = ncfile.createDimension('lon', 144)    # longitude axis\n",
+    "time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to)."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 30
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "temp1 = grp1.createVariable('temp',np.float64,('time','lat','lon'),zlib=True)\n",
-      "temp2 = grp2.createVariable('temp',np.float64,('time','lat','lon'),zlib=True)\n",
-      "for grp in ncfile.groups.items():  # shows that each group now contains 1 variable\n",
-      "    print(grp)"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 31,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "('model_run1', <type 'netCDF4.Group'>\n",
-        "group /model_run1:\n",
-        "    dimensions(sizes): \n",
-        "    variables(dimensions): float64 \u001b[4mtemp\u001b[0m(time,lat,lon)\n",
-        "    groups: \n",
-        ")\n",
-        "('model_run2', <type 'netCDF4.Group'>\n",
-        "group /model_run2:\n",
-        "    dimensions(sizes): \n",
-        "    variables(dimensions): float64 \u001b[4mtemp\u001b[0m(time,lat,lon)\n",
-        "    groups: \n",
-        ")\n"
-       ]
-      }
-     ],
-     "prompt_number": 15
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "source": [
+    "Now create a variable in grp1 and grp2.  The library will search recursively upwards in the group tree to find the dimensions (which in this case are defined one level up).\n",
+    "\n",
+    "- These variables are created with **zlib compression**, another nifty feature of netCDF 4. \n",
+    "- The data are automatically compressed when written to the file, and uncompressed when read.  \n",
+    "- This can really save disk space, especially when used in conjunction with the [**least_significant_digit**](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createVariable) keyword argument, which causes the data to be quantized (truncated) before compression.  This makes the compression lossy, but more efficient."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 39,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 31,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 31,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "##Creating a variable with a compound data type\n",
-      "\n",
-      "- Compound data types map directly to numpy structured (a.k.a 'record' arrays). \n",
-      "- Structured arrays are akin to C structs, or derived types in Fortran. \n",
-      "- They allow for the construction of table-like structures composed of combinations of other data types, including other compound types. \n",
-      "- Might be useful for representing multiple parameter values at each point on a grid, or at each time and space location for scattered (point) data. \n",
-      "\n",
-      "Here we create a variable with a compound data type to represent complex data (there is no native complex data type in netCDF). \n",
-      "\n",
-      "- The compound data type is created with the [`createCompoundType`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createCompoundType) method."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "('model_run1', <type 'netCDF4._netCDF4.Group'>\n",
+      "group /model_run1:\n",
+      "    dimensions(sizes): \n",
+      "    variables(dimensions): float64 \u001b[4mtemp\u001b[0m(time,lat,lon)\n",
+      "    groups: \n",
+      ")\n",
+      "('model_run2', <type 'netCDF4._netCDF4.Group'>\n",
+      "group /model_run2:\n",
+      "    dimensions(sizes): \n",
+      "    variables(dimensions): float64 \u001b[4mtemp\u001b[0m(time,lat,lon)\n",
+      "    groups: \n",
+      ")\n"
      ]
+    }
+   ],
+   "source": [
+    "temp1 = grp1.createVariable('temp',np.float64,('time','lat','lon'),zlib=True)\n",
+    "temp2 = grp2.createVariable('temp',np.float64,('time','lat','lon'),zlib=True)\n",
+    "for grp in ncfile.groups.items():  # shows that each group now contains 1 variable\n",
+    "    print(grp)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 31,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# create complex128 numpy structured data type\n",
-      "complex128 = np.dtype([('real',np.float64),('imag',np.float64)])\n",
-      "# using this numpy dtype, create a netCDF compound data type object\n",
-      "# the string name can be used as a key to access the datatype from the cmptypes dictionary.\n",
-      "complex128_t = ncfile.createCompoundType(complex128,'complex128')\n",
-      "# create a variable with this data type, write some data to it.\n",
-      "cmplxvar = grp1.createVariable('cmplx_var',complex128_t,('time','lat','lon'))\n",
-      "# write some data to this variable\n",
-      "# first create some complex random data\n",
-      "nlats = len(lat_dim); nlons = len(lon_dim)\n",
-      "data_arr_cmplx = np.random.uniform(size=(nlats,nlons))+1.j*np.random.uniform(size=(nlats,nlons))\n",
-      "# write this complex data to a numpy complex128 structured array\n",
-      "data_arr = np.empty((nlats,nlons),complex128)\n",
-      "data_arr['real'] = data_arr_cmplx.real; data_arr['imag'] = data_arr_cmplx.imag\n",
-      "cmplxvar[0] = data_arr  # write the data to the variable (appending to time dimension)\n",
-      "print(cmplxvar)\n",
-      "data_out = cmplxvar[0] # read one value of data back from variable\n",
-      "print(data_out.dtype, data_out.shape, data_out[0,0])"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 33,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "<type 'netCDF4.Variable'>\n",
-        "compound cmplx_var(time, lat, lon)\n",
-        "compound data type: [('real', '<f8'), ('imag', '<f8')]\n",
-        "path = /model_run1\n",
-        "unlimited dimensions: time\n",
-        "current shape = (1, 73, 144)\n",
-        "\n",
-        "[('real', '<f8'), ('imag', '<f8')] (73, 144) (0.3734430270175605, 0.01936452636106112)\n"
-       ]
-      }
-     ],
-     "prompt_number": 16
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Creating a variable with a compound data type\n",
+    "\n",
+    "- Compound data types map directly to numpy structured (a.k.a. 'record') arrays. \n",
+    "- Structured arrays are akin to C structs, or derived types in Fortran. \n",
+    "- They allow for the construction of table-like structures composed of combinations of other data types, including other compound types. \n",
+    "- Might be useful for representing multiple parameter values at each point on a grid, or at each time and space location for scattered (point) data. \n",
+    "\n",
+    "Here we create a variable with a compound data type to represent complex data (there is no native complex data type in netCDF). \n",
+    "\n",
+    "- The compound data type is created with the [`createCompoundType`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createCompoundType) method."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 40,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 33,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 33,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "##Creating a variable with a variable-length (vlen) data type\n",
-      "\n",
-      "netCDF 4 has support for variable-length or \"ragged\" arrays. These are arrays of variable length sequences having the same type. \n",
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "<type 'netCDF4._netCDF4.Variable'>\n",
+      "compound cmplx_var(time, lat, lon)\n",
+      "compound data type: [('real', '<f8'), ('imag', '<f8')]\n",
+      "path = /model_run1\n",
+      "unlimited dimensions: time\n",
+      "current shape = (1, 73, 144)\n",
       "\n",
-      "- To create a variable-length data type, use the [`createVLType`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createVLType) method.\n",
-      "- The numpy datatype of the variable-length sequences and the name of the new datatype must be specified. "
+      "(dtype([('real', '<f8'), ('imag', '<f8')]), (73, 144), (0.578177705604801, 0.18086070805676357))\n"
      ]
+    }
+   ],
+   "source": [
+    "# create complex128 numpy structured data type\n",
+    "complex128 = np.dtype([('real',np.float64),('imag',np.float64)])\n",
+    "# using this numpy dtype, create a netCDF compound data type object\n",
+    "# the string name can be used as a key to access the datatype from the cmptypes dictionary.\n",
+    "complex128_t = ncfile.createCompoundType(complex128,'complex128')\n",
+    "# create a variable with this data type, write some data to it.\n",
+    "cmplxvar = grp1.createVariable('cmplx_var',complex128_t,('time','lat','lon'))\n",
+    "# write some data to this variable\n",
+    "# first create some complex random data\n",
+    "nlats = len(lat_dim); nlons = len(lon_dim)\n",
+    "data_arr_cmplx = np.random.uniform(size=(nlats,nlons))+1.j*np.random.uniform(size=(nlats,nlons))\n",
+    "# write this complex data to a numpy complex128 structured array\n",
+    "data_arr = np.empty((nlats,nlons),complex128)\n",
+    "data_arr['real'] = data_arr_cmplx.real; data_arr['imag'] = data_arr_cmplx.imag\n",
+    "cmplxvar[0] = data_arr  # write the data to the variable (appending to time dimension)\n",
+    "print(cmplxvar)\n",
+    "data_out = cmplxvar[0] # read one value of data back from variable\n",
+    "print(data_out.dtype, data_out.shape, data_out[0,0])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 33,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "vlen_t = ncfile.createVLType(np.int64, 'phony_vlen')"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 35
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [],
-     "prompt_number": 17
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "## Creating a variable with a variable-length (vlen) data type\n",
+    "\n",
+    "netCDF 4 has support for variable-length or \"ragged\" arrays. These are arrays of variable length sequences having the same type. \n",
+    "\n",
+    "- To create a variable-length data type, use the [`createVLType`](http://unidata.github.io/netcdf4-python/netCDF4.Dataset-class.html#createVLType) method.\n",
+    "- The numpy datatype of the variable-length sequences and the name of the new datatype must be specified. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 41,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 35
     },
-    {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 36
-      },
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "source": [
-      "A new variable can then be created using this datatype."
-     ]
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "vlen_t = ncfile.createVLType(np.int64, 'phony_vlen')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 36
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "vlvar = grp2.createVariable('phony_vlen_var', vlen_t, ('time','lat','lon'))"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 37,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [],
-     "prompt_number": 18
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "source": [
+    "A new variable can then be created using this datatype."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 42,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 37,
+     "slide_helper": "subslide_end"
     },
-    {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 37,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "Since there is no native vlen datatype in numpy, vlen arrays are represented in python as object arrays (arrays of dtype `object`). \n",
-      "\n",
-      "- These are arrays whose elements are Python object pointers, and can contain any type of python object. \n",
-      "- For this application, they must contain 1-D numpy arrays all of the same type but of varying length. \n",
-      "- Fill with 1-D random numpy int64 arrays of random length between 1 and 10."
-     ]
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "vlvar = grp2.createVariable('phony_vlen_var', vlen_t, ('time','lat','lon'))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 37,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "vlen_data = np.empty((nlats,nlons),object)\n",
-      "for i in range(nlons):\n",
-      "    for j in range(nlats):\n",
-      "        size = np.random.randint(1,10,size=1) # random length of sequence\n",
-      "        vlen_data[j,i] = np.random.randint(0,10,size=size)# generate random sequence\n",
-      "vlvar[0] = vlen_data # append along unlimited dimension (time)\n",
-      "print(vlvar)\n",
-      "print('data =\\n',vlvar[:])"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 39,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "<type 'netCDF4.Variable'>\n",
-        "vlen phony_vlen_var(time, lat, lon)\n",
-        "vlen data type: int64\n",
-        "path = /model_run2\n",
-        "unlimited dimensions: time\n",
-        "current shape = (1, 73, 144)\n",
-        "\n",
-        "data =\n",
-        " [[[array([6, 4, 3, 7, 8, 2, 9, 5]) array([9, 9, 2, 2, 9, 9, 6, 4])\n",
-        "   array([8, 3, 6]) ..., array([3, 0, 7, 2, 5]) array([7, 7, 9, 7, 3])\n",
-        "   array([5, 1])]\n",
-        "  [array([2, 5, 0, 6, 9]) array([3, 6, 4, 0, 5, 0, 7, 5])\n",
-        "   array([5, 5, 7, 6, 3, 1]) ..., array([2]) array([2, 6]) array([9, 4])]\n",
-        "  [array([3, 7, 0, 4, 7, 3, 1, 8, 1]) array([3, 4, 8])\n",
-        "   array([8, 8, 3, 9, 8, 7, 5, 3, 6]) ..., array([3, 1, 5, 3, 6, 6, 4])\n",
-        "   array([7, 3, 9, 1, 8, 6, 3]) array([1, 0, 0, 2])]\n",
-        "  ..., \n",
-        "  [array([7, 0, 3, 9]) array([1, 8, 4, 7, 5, 5, 2])\n",
-        "   array([8, 1, 1, 6, 5, 4, 7]) ..., array([4]) array([7, 5]) array([1, 6])]\n",
-        "  [array([8, 8, 5, 4, 3, 3, 5, 5, 9]) array([0, 6, 8, 5, 5, 9, 8, 6])\n",
-        "   array([6, 1, 0, 0, 5, 5]) ..., array([8, 4, 4, 5])\n",
-        "   array([1, 6, 3, 2, 3, 3, 2, 3]) array([6, 0, 9, 3, 2, 8, 6, 9])]\n",
-        "  [array([8, 7, 4]) array([4, 3, 3, 6, 3, 4]) array([1, 7, 6, 8, 3, 1])\n",
-        "   ..., array([6, 2, 5, 6, 3, 9]) array([2, 4, 3]) array([3, 4])]]]\n"
-       ]
-      }
-     ],
-     "prompt_number": 19
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "Since there is no native vlen datatype in numpy, vlen arrays are represented in python as object arrays (arrays of dtype `object`). \n",
+    "\n",
+    "- These are arrays whose elements are Python object pointers, and can contain any type of python object. \n",
+    "- For this application, they must contain 1-D numpy arrays all of the same type but of varying length. \n",
+    "- Fill with 1-D random numpy int64 arrays of random length between 1 and 10."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 43,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 39,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 39,
-       "slide_type": "subslide"
-      },
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "Close the Dataset and examine the contents with ncdump."
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "<type 'netCDF4._netCDF4.Variable'>\n",
+      "vlen phony_vlen_var(time, lat, lon)\n",
+      "vlen data type: int64\n",
+      "path = /model_run2\n",
+      "unlimited dimensions: time\n",
+      "current shape = (1, 73, 144)\n",
+      "\n",
+      "('data =\\n', array([[[array([0, 4, 0, 9, 2, 2, 2, 4, 2]), array([7, 5, 4, 4, 9, 8, 0]),\n",
+      "         array([3, 6, 6, 8, 2, 7]), ..., array([5, 0, 0, 8, 8, 1, 5, 3]),\n",
+      "         array([4, 2, 7]), array([0])],\n",
+      "        [array([5, 6, 6, 6, 1, 0, 7]), array([7]),\n",
+      "         array([7, 5, 8, 9, 6, 9, 3]), ..., array([0, 6, 5, 4]),\n",
+      "         array([7, 1, 9, 7, 7, 2]), array([1, 4, 0])],\n",
+      "        [array([4, 3, 1]), array([6, 3, 9, 7, 8]), array([8]), ...,\n",
+      "         array([6, 5, 8, 0]), array([0]), array([0, 9, 6, 2, 4])],\n",
+      "        ..., \n",
+      "        [array([8, 4, 4]), array([4, 1, 6]), array([1, 4, 2, 3, 9]), ...,\n",
+      "         array([9, 1]), array([7, 2, 5, 1, 5, 8, 2]),\n",
+      "         array([2, 9, 9, 1, 4, 6, 3, 5, 2])],\n",
+      "        [array([4, 7, 9, 8, 2, 3, 6, 6]),\n",
+      "         array([1, 4, 1, 6, 1, 1, 2, 3, 9]),\n",
+      "         array([9, 5, 6, 2, 4, 3, 8, 2, 9]), ..., array([9, 5, 7]),\n",
+      "         array([3, 9]), array([4, 2, 6, 9])],\n",
+      "        [array([8, 9, 9, 2, 2, 8, 8, 5]), array([3]),\n",
+      "         array([8, 8, 0, 2, 9, 2, 3, 0, 9]), ..., array([7]),\n",
+      "         array([5, 1, 0, 6, 8, 6]), array([8, 6, 3, 6, 9, 8, 4, 2, 5])]]], dtype=object))\n"
      ]
+    }
+   ],
+   "source": [
+    "vlen_data = np.empty((nlats,nlons),object)\n",
+    "for i in range(nlons):\n",
+    "    for j in range(nlats):\n",
+    "        size = np.random.randint(1,10,size=1) # random length of sequence\n",
+    "        vlen_data[j,i] = np.random.randint(0,10,size=size)# generate random sequence\n",
+    "vlvar[0] = vlen_data # append along unlimited dimension (time)\n",
+    "print(vlvar)\n",
+    "print('data =\\n',vlvar[:])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 39,
+     "slide_type": "subslide"
     },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ncfile.close()\n",
-      "!ncdump -h data/new2.nc"
-     ],
-     "language": "python",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 41,
-       "slide_helper": "subslide_end"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "fragment"
-      }
-     },
-     "outputs": [
-      {
-       "output_type": "stream",
-       "stream": "stdout",
-       "text": [
-        "netcdf new2 {\r\n",
-        "types:\r\n",
-        "  compound complex128 {\r\n",
-        "    double real ;\r\n",
-        "    double imag ;\r\n",
-        "  }; // complex128\r\n",
-        "  int64(*) phony_vlen ;\r\n",
-        "dimensions:\r\n",
-        "\tlat = 73 ;\r\n",
-        "\tlon = 144 ;\r\n",
-        "\ttime = UNLIMITED ; // (1 currently)\r\n",
-        "\r\n",
-        "group: model_run1 {\r\n",
-        "  variables:\r\n",
-        "  \tdouble temp(time, lat, lon) ;\r\n",
-        "  \tcomplex128 cmplx_var(time, lat, lon) ;\r\n",
-        "  } // group model_run1\r\n",
-        "\r\n",
-        "group: model_run2 {\r\n",
-        "  variables:\r\n",
-        "  \tdouble temp(time, lat, lon) ;\r\n",
-        "  \tphony_vlen phony_vlen_var(time, lat, lon) ;\r\n",
-        "  } // group model_run2\r\n",
-        "}\r\n"
-       ]
-      }
-     ],
-     "prompt_number": 20
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "Close the Dataset and examine the contents with ncdump."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 44,
+   "metadata": {
+    "collapsed": false,
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 41,
+     "slide_helper": "subslide_end"
     },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "fragment"
+    }
+   },
+   "outputs": [
     {
-     "cell_type": "markdown",
-     "metadata": {
-      "internals": {
-       "frag_helper": "fragment_end",
-       "frag_number": 41,
-       "slide_helper": "subslide_end",
-       "slide_type": "subslide"
-      },
-      "slide_helper": "slide_end",
-      "slideshow": {
-       "slide_type": "slide"
-      }
-     },
-     "source": [
-      "##Other interesting and useful projects using netcdf4-python\n",
-      "\n",
-      "- [Xray](http://xray.readthedocs.org/en/stable/): N-dimensional variant of the core [pandas](http://pandas.pydata.org) data structure that can operate on netcdf variables.\n",
-      "- [Iris](http://scitools.org.uk/iris/): a data model to create a data abstraction layer which isolates analysis and visualisation code from data format specifics.  Uses netcdf4-python to access netcdf data (can also handle GRIB).\n",
-      "- [Biggus](https://github.com/SciTools/biggus): Virtual large arrays (from netcdf variables) with lazy evaluation.\n",
-      "- [cf-python](http://cfpython.bitbucket.org/): Implements the [CF](http://cfconventions.org) data model for the reading, writing and processing of data and metadata. "
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "netcdf new2 {\r\n",
+      "types:\r\n",
+      "  compound complex128 {\r\n",
+      "    double real ;\r\n",
+      "    double imag ;\r\n",
+      "  }; // complex128\r\n",
+      "  int64(*) phony_vlen ;\r\n",
+      "dimensions:\r\n",
+      "\tlat = 73 ;\r\n",
+      "\tlon = 144 ;\r\n",
+      "\ttime = UNLIMITED ; // (1 currently)\r\n",
+      "\r\n",
+      "group: model_run1 {\r\n",
+      "  variables:\r\n",
+      "  \tdouble temp(time, lat, lon) ;\r\n",
+      "  \tcomplex128 cmplx_var(time, lat, lon) ;\r\n",
+      "  } // group model_run1\r\n",
+      "\r\n",
+      "group: model_run2 {\r\n",
+      "  variables:\r\n",
+      "  \tdouble temp(time, lat, lon) ;\r\n",
+      "  \tphony_vlen phony_vlen_var(time, lat, lon) ;\r\n",
+      "  } // group model_run2\r\n",
+      "}\r\n"
      ]
     }
    ],
-   "metadata": {}
+   "source": [
+    "ncfile.close()\n",
+    "!ncdump -h data/new2.nc"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "internals": {
+     "frag_helper": "fragment_end",
+     "frag_number": 41,
+     "slide_helper": "subslide_end",
+     "slide_type": "subslide"
+    },
+    "slide_helper": "slide_end",
+    "slideshow": {
+     "slide_type": "slide"
+    }
+   },
+   "source": [
+    "##Other interesting and useful projects using netcdf4-python\n",
+    "\n",
+    "- [Xray](http://xray.readthedocs.org/en/stable/): N-dimensional variant of the core [pandas](http://pandas.pydata.org) data structure that can operate on netcdf variables.\n",
+    "- [Iris](http://scitools.org.uk/iris/): a data model to create a data abstraction layer which isolates analysis and visualisation code from data format specifics.  Uses netcdf4-python to access netcdf data (can also handle GRIB).\n",
+    "- [Biggus](https://github.com/SciTools/biggus): Virtual large arrays (from netcdf variables) with lazy evaluation.\n",
+    "- [cf-python](http://cfpython.bitbucket.org/): Implements the [CF](http://cfconventions.org) data model for the reading, writing and processing of data and metadata. "
+   ]
   }
- ]
-}
\ No newline at end of file
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 2",
+   "language": "python",
+   "name": "python2"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 2
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython2",
+   "version": "2.7.9"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
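(The notebook cells above build vlen payloads as numpy object arrays. As a standalone illustration of that construction — a toy 4x5 grid instead of the notebook's 73x144, with no netCDF file involved — the idea can be sketched as:)

```python
import numpy as np

# Toy version of the notebook cell: an object array whose elements are
# 1-D int64 arrays of varying length (1..9). No netCDF file is touched here.
nlats, nlons = 4, 5          # the notebook uses 73 x 144
np.random.seed(0)            # reproducible toy data
vlen_data = np.empty((nlats, nlons), dtype=object)
for j in range(nlats):
    for i in range(nlons):
        size = np.random.randint(1, 10)   # random length in 1..9
        vlen_data[j, i] = np.random.randint(0, 10, size=size).astype(np.int64)

# every element is a 1-D int64 array; lengths vary from cell to cell
print(sorted({vlen_data[j, i].size for j in range(nlats) for i in range(nlons)}))
```

(Assigning such an object array to a vlen variable, as in `vlvar[0] = vlen_data` above, is what writes the ragged data to disk.)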
diff --git a/include/netCDF4.pxi b/include/netCDF4.pxi
index d62fc91..3224cdb 100644
--- a/include/netCDF4.pxi
+++ b/include/netCDF4.pxi
@@ -5,21 +5,9 @@ cdef extern from "stdlib.h":
 
 # hdf5 version info.
 cdef extern from "H5public.h":
-    cdef char *H5_VERS_INFO
-    cdef char *H5_VERS_SUBRELEASE
-    cdef enum:
-        H5_VERS_MAJOR
-        H5_VERS_MINOR
-        H5_VERS_RELEASE
+    ctypedef int herr_t
+    int H5get_libversion( unsigned int *majnum, unsigned int *minnum, unsigned int *relnum )
  
-# netcdf version info.
-#cdef extern from "netcdf_meta.h":
-#    cdef char *NC_VERSION_NOTE
-#    cdef enum:
-#        NC_VERSION_MAJOR
-#        NC_VERSION_MINOR
-#        NC_VERSION_PATCH
-
 cdef extern from *:
     ctypedef char* const_char_ptr "const char*"
  
@@ -30,20 +18,6 @@ cdef extern from "netcdf.h":
     ctypedef struct nc_vlen_t:
         size_t len                 # Length of VL data (in base type units) 
         void *p                    # Pointer to VL data 
-# default fill values.
-# could define these in the anonymous enum, but then they
-# would be assumed to be integers.
-#define NC_FILL_BYTE	((signed char)-127)
-#define NC_FILL_CHAR	((char)0)
-#define NC_FILL_SHORT	((short)-32767)
-#define NC_FILL_INT	(-2147483647L)
-#define NC_FILL_FLOAT	(9.9692099683868690e+36f) /* near 15 * 2^119 */
-#define NC_FILL_DOUBLE	(9.9692099683868690e+36)
-#define NC_FILL_UBYTE   (255)
-#define NC_FILL_USHORT  (65535)
-#define NC_FILL_UINT    (4294967295U)
-#define NC_FILL_INT64   ((long long)-9223372036854775806)
-#define NC_FILL_UINT64  ((unsigned long long)18446744073709551614)
     float NC_FILL_FLOAT
     long NC_FILL_INT
     double NC_FILL_DOUBLE
diff --git a/netCDF4/_netCDF4.pyx b/netCDF4/_netCDF4.pyx
index 8a5818e..49b01c2 100644
--- a/netCDF4/_netCDF4.pyx
+++ b/netCDF4/_netCDF4.pyx
@@ -1,5 +1,5 @@
 """
-Version 1.2.9
+Version 1.3.0
 -------------
 - - - 
 
@@ -37,7 +37,7 @@ Requires
 ========
 
  - Python 2.7 or later (python 3 works too).
- - [numpy array module](http://numpy.scipy.org), version 1.7.0 or later.
+ - [numpy array module](http://numpy.scipy.org), version 1.9.0 or later.
  - [Cython](http://cython.org), version 0.19 or later.
  - [setuptools](https://pypi.python.org/pypi/setuptools), version 18.0 or
    later.
@@ -69,9 +69,9 @@ Install
  easiest if all the C libs are built as shared libraries.
  - By default, the utility `nc-config`, installed with netcdf 4.1.2 or higher,
 will be run to determine where all the dependencies live.
- - If `nc-config` is not in your default `$PATH`, rename the
- file `setup.cfg.template` to `setup.cfg`, then edit
- in a text editor (follow the instructions in the comments).
+ - If `nc-config` is not in your default `$PATH`,
+ edit the `setup.cfg` file
+ in a text editor and follow the instructions in the comments.
  In addition to specifying the path to `nc-config`,
  you can manually set the paths to all the libraries and their include files
  (in case `nc-config` does not do the right thing).
@@ -703,8 +703,11 @@ for storing numpy complex arrays.  Here's an example:
     complex128 [ 0.54030231+0.84147098j -0.84147098+0.54030231j  -0.54030231-0.84147098j]
 
 Compound types can be nested, but you must create the 'inner'
-ones first. All of the compound types defined for a `netCDF4.Dataset` or `netCDF4.Group` are stored in a
-Python dictionary, just like variables and dimensions. As always, printing
+ones first. Not all numpy structured arrays can be
+represented as Compound variables - an error message will be
+raised if you try to create one that is not supported.
+All of the compound types defined for a `netCDF4.Dataset` or `netCDF4.Group` are stored 
+in a Python dictionary, just like variables and dimensions. As always, printing
 objects gives useful summary information in an interactive session:
 
     :::python
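(The compound example referenced here stores complex values as a (real, imag) pair of doubles. A minimal numpy-only sketch of that layout — the on-disk compound type mirrors this structured dtype:)

```python
import numpy as np

# structured dtype matching the complex128 compound type in the docstring
complex128_t = np.dtype([('real', np.float64), ('imag', np.float64)])

# pack a complex array into the structured layout and recover it
z = np.exp(1j * np.array([0.0, 1.0, 2.0]))
packed = np.empty(z.shape, dtype=complex128_t)
packed['real'], packed['imag'] = z.real, z.imag
restored = packed['real'] + 1j * packed['imag']
```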
@@ -918,7 +921,7 @@ from cpython.buffer cimport PyObject_GetBuffer, PyBuffer_Release, PyBUF_SIMPLE,
 
 # pure python utilities
 from .utils import (_StartCountStride, _quantize, _find_dim, _walk_grps,
-                    _out_array_shape, _sortbylist, _tostr)
+                    _out_array_shape, _sortbylist, _tostr, _safecast)
 # try to use built-in ordered dict in python >= 2.7
 try:
     from collections import OrderedDict
@@ -933,7 +936,7 @@ except ImportError:
     # python3: zip is already python2's itertools.izip
     pass
 
-__version__ = "1.2.9"
+__version__ = "1.3.0"
 
 # Initialize numpy
 import posixpath
@@ -953,14 +956,12 @@ include "netCDF4.pxi"
 # check for required version of netcdf-4 and hdf5.
 
 def _gethdf5libversion():
-    majorvers = H5_VERS_MAJOR
-    minorvers = H5_VERS_MINOR
-    releasevers = H5_VERS_RELEASE
-    patchstring = H5_VERS_SUBRELEASE.decode('ascii')
-    if not patchstring:
-        return '%d.%d.%d' % (majorvers,minorvers,releasevers)
-    else:
-        return '%d.%d.%d-%s' % (majorvers,minorvers,releasevers,patchstring)
+    cdef unsigned int majorvers, minorvers, releasevers
+    cdef herr_t ierr
+    ierr = H5get_libversion( &majorvers, &minorvers, &releasevers)
+    if ierr < 0:
+        raise RuntimeError('error getting HDF5 library version info')
+    return '%d.%d.%d' % (majorvers,minorvers,releasevers)
 
 def getlibversion():
     """
@@ -1083,14 +1084,12 @@ cdef _get_att_names(int grpid, int varid):
     else:
         with nogil:
             ierr = nc_inq_varnatts(grpid, varid, &numatts)
-    if ierr != NC_NOERR:
-        raise AttributeError((<char *>nc_strerror(ierr)).decode('ascii'))
+    _ensure_nc_success(ierr, err_cls=AttributeError)
     attslist = []
     for n from 0 <= n < numatts:
         with nogil:
             ierr = nc_inq_attname(grpid, varid, n, namstring)
-        if ierr != NC_NOERR:
-            raise AttributeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr, err_cls=AttributeError)
         # attribute names are assumed to be utf-8
         attslist.append(namstring.decode('utf-8'))
     return attslist
@@ -1108,15 +1107,13 @@ cdef _get_att(grp, int varid, name, encoding='utf-8'):
     _grpid = grp._grpid
     with nogil:
         ierr = nc_inq_att(_grpid, varid, attname, &att_type, &att_len)
-    if ierr != NC_NOERR:
-        raise AttributeError((<char *>nc_strerror(ierr)).decode('ascii'))
+    _ensure_nc_success(ierr, err_cls=AttributeError)
     # attribute is a character or string ...
     if att_type == NC_CHAR:
         value_arr = numpy.empty(att_len,'S1')
         with nogil:
             ierr = nc_get_att_text(_grpid, varid, attname, <char *>value_arr.data)
-        if ierr != NC_NOERR:
-            raise AttributeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr, err_cls=AttributeError)
         if name == '_FillValue' and python3:
             # make sure _FillValue for character arrays is a byte on python 3
             # (issue 271).
@@ -1132,8 +1129,7 @@ cdef _get_att(grp, int varid, name, encoding='utf-8'):
         try:
             with nogil:
                 ierr = nc_get_att_string(_grpid, varid, attname, values)
-            if ierr != NC_NOERR:
-                raise AttributeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr, err_cls=AttributeError)
             try:
                 result = [values[j].decode(encoding,errors='replace').replace('\x00','')
                           for j in range(att_len)]
@@ -1167,8 +1163,7 @@ cdef _get_att(grp, int varid, name, encoding='utf-8'):
                     raise KeyError('attribute %s has unsupported datatype' % attname)
         with nogil:
             ierr = nc_get_att(_grpid, varid, attname, value_arr.data)
-        if ierr != NC_NOERR:
-            raise AttributeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr, err_cls=AttributeError)
         if value_arr.shape == ():
             # return a scalar for a scalar array
             return value_arr.item()
@@ -1189,8 +1184,7 @@ cdef _get_format(int grpid):
     cdef int ierr, formatp
     with nogil:
         ierr = nc_inq_format(grpid, &formatp)
-    if ierr != NC_NOERR:
-        raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+    _ensure_nc_success(ierr)
     if formatp not in _reverse_format_dict:
         raise ValueError('format not supported by python interface')
     return _reverse_format_dict[formatp]
@@ -1201,8 +1195,7 @@ cdef _get_full_format(int grpid):
     IF HAS_NC_INQ_FORMAT_EXTENDED:
         with nogil:
             ierr = nc_inq_format_extended(grpid, &formatp, &modep)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         if formatp == NC_FORMAT_NC3:
             return 'NETCDF3'
         elif formatp == NC_FORMAT_NC_HDF5:
@@ -1234,8 +1227,8 @@ cdef issue485_workaround(int grpid, int varid, char* attname):
     ierr = nc_inq_att(grpid, varid, attname, &att_type, &att_len)
     if ierr == NC_NOERR and att_type == NC_CHAR:
         ierr = nc_del_att(grpid, varid, attname)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
+
 
 cdef _set_att(grp, int varid, name, value,\
               nc_type xtype=-99, force_ncstring=False):
@@ -1298,8 +1291,7 @@ cdef _set_att(grp, int varid, name, value,\
                     ierr = nc_put_att_string(grp._grpid, varid, attname, 1, &datstring)
             else:
                 ierr = nc_put_att_text(grp._grpid, varid, attname, lenarr, datstring)
-        if ierr != NC_NOERR:
-            raise AttributeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr, err_cls=AttributeError)
     # a 'regular' array type ('f4','i4','f8' etc)
     else:
         if value_arr.dtype.kind == 'V': # compound attribute.
@@ -1310,8 +1302,7 @@ cdef _set_att(grp, int varid, name, value,\
             xtype = _nptonctype[value_arr.dtype.str[1:]]
         lenarr = PyArray_SIZE(value_arr)
         ierr = nc_put_att(grp._grpid, varid, attname, xtype, lenarr, value_arr.data)
-        if ierr != NC_NOERR:
-            raise AttributeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr, err_cls=AttributeError)
 
 cdef _get_types(group):
     # Private function to create `netCDF4.CompoundType`,
@@ -1325,14 +1316,12 @@ cdef _get_types(group):
     # get the number of user defined types in this group.
     with nogil:
         ierr = nc_inq_typeids(_grpid, &ntypes, NULL)
-    if ierr != NC_NOERR:
-        raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+    _ensure_nc_success(ierr)
     if ntypes > 0:
         typeids = <nc_type *>malloc(sizeof(nc_type) * ntypes)
         with nogil:
             ierr = nc_inq_typeids(_grpid, &ntypes, typeids)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
     # create empty dictionary for CompoundType instances.
     cmptypes = OrderedDict()
     vltypes = OrderedDict()
@@ -1343,8 +1332,7 @@ cdef _get_types(group):
             with nogil:
                 ierr = nc_inq_user_type(_grpid, xtype, namstring,
                                         NULL,NULL,NULL,&classp)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
             if classp == NC_COMPOUND: # a compound
                 name = namstring.decode('utf-8')
                 # read the compound type info from the file,
@@ -1391,8 +1379,7 @@ cdef _get_dims(group):
     _grpid = group._grpid
     with nogil:
         ierr = nc_inq_ndims(_grpid, &numdims)
-    if ierr != NC_NOERR:
-        raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+    _ensure_nc_success(ierr)
     # create empty dictionary for dimensions.
     dimensions = OrderedDict()
     if numdims > 0:
@@ -1400,16 +1387,14 @@ cdef _get_dims(group):
         if group.data_model == 'NETCDF4':
             with nogil:
                 ierr = nc_inq_dimids(_grpid, &numdims, dimids, 0)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
         else:
             for n from 0 <= n < numdims:
                 dimids[n] = n
         for n from 0 <= n < numdims:
             with nogil:
                 ierr = nc_inq_dimname(_grpid, dimids[n], namstring)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
             name = namstring.decode('utf-8')
             dimensions[name] = Dimension(group, name, id=dimids[n])
         free(dimids)
@@ -1425,21 +1410,18 @@ cdef _get_grps(group):
     _grpid = group._grpid
     with nogil:
         ierr = nc_inq_grps(_grpid, &numgrps, NULL)
-    if ierr != NC_NOERR:
-        raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+    _ensure_nc_success(ierr)
     # create dictionary containing `netCDF4.Group` instances for groups in this group
     groups = OrderedDict()
     if numgrps > 0:
         grpids = <int *>malloc(sizeof(int) * numgrps)
         with nogil:
             ierr = nc_inq_grps(_grpid, NULL, grpids)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         for n from 0 <= n < numgrps:
             with nogil:
                 ierr = nc_inq_grpname(grpids[n], namstring)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
             name = namstring.decode('utf-8')
             groups[name] = Group(group, name, id=grpids[n])
         free(grpids)
@@ -1458,8 +1440,7 @@ cdef _get_vars(group):
     _grpid = group._grpid
     with nogil:
         ierr = nc_inq_nvars(_grpid, &numvars)
-    if ierr != NC_NOERR:
-        raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+    _ensure_nc_success(ierr)
     # create empty dictionary for variables.
     variables = OrderedDict()
     if numvars > 0:
@@ -1468,8 +1449,7 @@ cdef _get_vars(group):
         if group.data_model == 'NETCDF4':
             with nogil:
                 ierr = nc_inq_varids(_grpid, &numvars, varids)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
         else:
             for n from 0 <= n < numvars:
                 varids[n] = n
@@ -1479,25 +1459,20 @@ cdef _get_vars(group):
             # get variable name.
             with nogil:
                 ierr = nc_inq_varname(_grpid, varid, namstring)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
             name = namstring.decode('utf-8')
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
             # get variable type.
             with nogil:
                 ierr = nc_inq_vartype(_grpid, varid, &xtype)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
             # get endian-ness of variable.
             endianness = None
             with nogil:
                 ierr = nc_inq_var_endian(_grpid, varid, &iendian)
-            if ierr == NC_NOERR:
-                if iendian == NC_ENDIAN_LITTLE:
-                    endianness = '<'
-                elif iendian == NC_ENDIAN_BIG:
-                    endianness = '>'
+            if ierr == NC_NOERR and iendian == NC_ENDIAN_LITTLE:
+                endianness = '<'
+            elif ierr == NC_NOERR and iendian == NC_ENDIAN_BIG:
+                endianness = '>'
             # check to see if it is a supported user-defined type.
             try:
                 datatype = _nctonptype[xtype]
@@ -1510,6 +1485,7 @@ cdef _get_vars(group):
                     with nogil:
                         ierr = nc_inq_user_type(_grpid, xtype, namstring_cmp,
                                                 NULL, NULL, NULL, &classp)
+                    _ensure_nc_success(ierr)
                     if classp == NC_COMPOUND: # a compound type
                         # create CompoundType instance describing this compound type.
                         try:
@@ -1541,14 +1517,12 @@ cdef _get_vars(group):
             # get number of dimensions.
             with nogil:
                 ierr = nc_inq_varndims(_grpid, varid, &numdims)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
             dimids = <int *>malloc(sizeof(int) * numdims)
             # get dimension ids.
             with nogil:
                 ierr = nc_inq_vardimid(_grpid, varid, dimids)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
             # loop over dimensions, retrieve names.
             # if not found in current group, look in parents.
             # QUESTION:  what if grp1 has a dimension named 'foo'
@@ -1577,6 +1551,7 @@ cdef _get_vars(group):
     return variables
 
 cdef _ensure_nc_success(ierr, err_cls=RuntimeError):
+    # raise an exception (of class err_cls) carrying the netcdf error message.
     if ierr != NC_NOERR:
         raise err_cls((<char *>nc_strerror(ierr)).decode('ascii'))
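(Most hunks in this file collapse the repeated `if ierr != NC_NOERR: raise ...` blocks into this one helper. A pure-Python sketch of the pattern, with a hypothetical stand-in for the C `nc_strerror`:)

```python
NC_NOERR = 0

def nc_strerror(ierr):
    # stand-in for the C library's nc_strerror, which maps an error
    # code to a human-readable message
    return 'NetCDF: error code %d' % ierr

def ensure_nc_success(ierr, err_cls=RuntimeError):
    # raise err_cls with the library's message unless the call succeeded
    if ierr != NC_NOERR:
        raise err_cls(nc_strerror(ierr))

ensure_nc_success(0)  # success: no exception raised
```

(The `err_cls` keyword lets attribute-related call sites raise `AttributeError` instead of `RuntimeError`, matching the behavior of the blocks being replaced.)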
 
@@ -1587,7 +1562,8 @@ _private_atts = \
 ['_grpid','_grp','_varid','groups','dimensions','variables','dtype','data_model','disk_format',
  '_nunlimdim','path','parent','ndim','mask','scale','cmptypes','vltypes','enumtypes','_isprimitive',
  'file_format','_isvlen','_isenum','_iscompound','_cmptype','_vltype','_enumtype','name',
- '__orthogoral_indexing__','keepweakref','_has_lsd', '_buffer','chartostring']
+ '__orthogoral_indexing__','keepweakref','_has_lsd',
+ '_buffer','chartostring','_no_get_vars']
 __pdoc__ = {}
 
 cdef class Dataset:
@@ -1715,7 +1691,8 @@ references to the parent Dataset or Group.
     the parent Dataset or Group.""" 
 
     def __init__(self, filename, mode='r', clobber=True, format='NETCDF4',
-                 diskless=False, persist=False, keepweakref=False, memory=None, **kwargs):
+                 diskless=False, persist=False, keepweakref=False,
+                 memory=None, encoding=None, **kwargs):
         """
         **`__init__(self, filename, mode="r", clobber=True, diskless=False,
         persist=False, keepweakref=False, format='NETCDF4')`**
@@ -1783,6 +1760,9 @@ references to the parent Dataset or Group.
         
         **`memory`**: if not `None`, open file with contents taken from this block of memory.
         Must be a sequence of bytes.  Note this only works with "r" mode.
+
+        **`encoding`**: encoding used to encode filename string into bytes.
+        Default is None (`sys.getfilesystemencoding()` is used).
         """
         cdef int grpid, ierr, numgrps, numdims, numvars
         cdef char *path
@@ -1795,7 +1775,11 @@ references to the parent Dataset or Group.
         if diskless and __netcdf4libversion__ < '4.2.1':
             #diskless = False # don't raise error, instead silently ignore
             raise ValueError('diskless mode requires netcdf lib >= 4.2.1, you have %s' % __netcdf4libversion__)
-        bytestr = _strencode(str(filename))
+        # convert filename into string (from os.path object for example),
+        # encode into bytes.
+        if encoding is None:
+            encoding = sys.getfilesystemencoding()
+        bytestr = _strencode(_tostr(filename), encoding=encoding)
         path = bytestr
 
         if memory is not None and (mode != 'r' or type(memory) != bytes):
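(The new filename handling first coerces the argument to a string — so path-like objects work — and then encodes it, defaulting to the filesystem encoding rather than ASCII. A standalone sketch of that logic, using a hypothetical helper name:)

```python
import sys

def encode_filename(filename, encoding=None):
    # mirrors the new Dataset.__init__ logic: stringify first, then
    # encode to bytes, defaulting to the filesystem encoding
    if encoding is None:
        encoding = sys.getfilesystemencoding()
    return str(filename).encode(encoding)

print(encode_filename('data/new2.nc'))
```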
@@ -1926,15 +1910,19 @@ references to the parent Dataset or Group.
         else:
             raise IndexError('%s not found in %s' % (lastname,group.path))
 
-    def filepath(self):
+    def filepath(self,encoding=None):
         """
-**`filepath(self)`**
+**`filepath(self,encoding=None)`**
 
 Get the file system path (or the opendap URL) which was used to
-open/create the Dataset. Requires netcdf >= 4.1.2"""
+open/create the Dataset. Requires netcdf >= 4.1.2.  The path
+is decoded into a string using `sys.getfilesystemencoding()` by default, this can be
+changed using the `encoding` kwarg."""
         cdef int ierr
         cdef size_t pathlen
         cdef char *c_path
+        if encoding is None:
+            encoding = sys.getfilesystemencoding()
         IF HAS_NC_INQ_PATH:
             with nogil:
                 ierr = nc_inq_path(self._grpid, &pathlen, NULL)
@@ -1951,7 +1939,7 @@ open/create the Dataset. Requires netcdf >= 4.1.2"""
                 py_path = c_path[:pathlen] # makes a copy of pathlen bytes from c_string
             finally:
                 free(c_path)
-            return py_path.decode('ascii')
+            return py_path.decode(encoding)
         ELSE:
             msg = """
 filepath method not enabled.  To enable, install Cython, make sure you have
@@ -2811,8 +2799,7 @@ Read-only class variables:
             if grp.data_model != 'NETCDF4': grp._redef()
             ierr = nc_def_dim(self._grpid, dimname, lendim, &self._dimid)
             if grp.data_model != 'NETCDF4': grp._enddef()
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
 
     def _getname(self):
         # private method to get name associated with instance.
@@ -2821,8 +2808,7 @@ Read-only class variables:
         _grpid = self._grp._grpid
         with nogil:
             ierr = nc_inq_dimname(_grpid, self._dimid, namstring)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         return namstring.decode('utf-8')
 
     property name:
@@ -2859,8 +2845,7 @@ Read-only class variables:
         cdef size_t lengthp
         with nogil:
             ierr = nc_inq_dimlen(self._grpid, self._dimid, &lengthp)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         return lengthp
 
     def group(self):
@@ -2879,8 +2864,7 @@ returns `True` if the `netCDF4.Dimension` instance is unlimited, `False` otherwi
         cdef int *unlimdimids
         if self._data_model == 'NETCDF4':
             ierr = nc_inq_unlimdims(self._grpid, &numunlimdims, NULL)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
             if numunlimdims == 0:
                 return False
             else:
@@ -2888,8 +2872,7 @@ returns `True` if the `netCDF4.Dimension` instance is unlimited, `False` otherwi
                 dimid = self._dimid
                 with nogil:
                     ierr = nc_inq_unlimdims(self._grpid, &numunlimdims, unlimdimids)
-                if ierr != NC_NOERR:
-                    raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                _ensure_nc_success(ierr)
                 unlimdim_ids = []
                 for n from 0 <= n < numunlimdims:
                     unlimdim_ids.append(unlimdimids[n])
@@ -2968,7 +2951,7 @@ behavior is similar to Fortran or Matlab, but different than numpy.
     cdef public int _varid, _grpid, _nunlimdim
     cdef public _name, ndim, dtype, mask, scale, chartostring,  _isprimitive, _iscompound,\
     _isvlen, _isenum, _grp, _cmptype, _vltype, _enumtype,\
-    __orthogonal_indexing__, _has_lsd
+    __orthogonal_indexing__, _has_lsd, _no_get_vars
     # Docstrings for class variables (used by pdoc).
     __pdoc__['Variable.dimensions'] = \
     """A tuple containing the names of the
@@ -2994,6 +2977,9 @@ behavior is similar to Fortran or Matlab, but different than numpy.
     arrays to string arrays when `_Encoding` variable attribute is set.
     Default is `True`, can be reset using
     `netCDF4.Variable.set_auto_chartostring` method."""
+    __pdoc__['Variable._no_get_vars'] = \
+    """If True (default), netcdf routine `nc_get_vars` is not used for strided
+    slicing. Can be re-set using `netCDF4.Variable.use_nc_get_vars` method."""
     __pdoc__['Variable.least_significant_digit'] = \
     """Describes the power of ten of the 
     smallest decimal place in the data that contains a reliable value.  Data is
@@ -3236,17 +3222,15 @@ behavior is similar to Fortran or Matlab, but different than numpy.
             if grp.data_model.startswith('NETCDF4') and chunk_cache is not None:
                 ierr = nc_get_var_chunk_cache(self._grpid, self._varid, &sizep,
                         &nelemsp, &preemptionp)
-                if ierr != NC_NOERR:
-                    raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                _ensure_nc_success(ierr)
                 # reset chunk cache size, leave other parameters unchanged.
                 sizep = chunk_cache
                 ierr = nc_set_var_chunk_cache(self._grpid, self._varid, sizep,
                         nelemsp, preemptionp)
-                if ierr != NC_NOERR:
-                    raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                _ensure_nc_success(ierr)
             if ierr != NC_NOERR:
                 if grp.data_model != 'NETCDF4': grp._enddef()
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                _ensure_nc_success(ierr)
             # set zlib, shuffle, chunking, fletcher32 and endian
             # variable settings.
             # don't bother for NETCDF3* formats.
@@ -3263,13 +3247,13 @@ behavior is similar to Fortran or Matlab, but different than numpy.
                         ierr = nc_def_var_deflate(self._grpid, self._varid, 0, 1, ideflate_level)
                     if ierr != NC_NOERR:
                         if grp.data_model != 'NETCDF4': grp._enddef()
-                        raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                        _ensure_nc_success(ierr)
                 # set checksum.
                 if fletcher32 and ndims: # don't bother for scalar variable
                     ierr = nc_def_var_fletcher32(self._grpid, self._varid, 1)
                     if ierr != NC_NOERR:
                         if grp.data_model != 'NETCDF4': grp._enddef()
-                        raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                        _ensure_nc_success(ierr)
                 # set chunking stuff.
                 if ndims: # don't bother for scalar variable.
                     if contiguous:
@@ -3296,7 +3280,7 @@ behavior is similar to Fortran or Matlab, but different than numpy.
                         free(chunksizesp)
                         if ierr != NC_NOERR:
                             if grp.data_model != 'NETCDF4': grp._enddef()
-                            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                            _ensure_nc_success(ierr)
                 # set endian-ness of variable
                 if endian == 'little':
                     ierr = nc_def_var_endian(self._grpid, self._varid, NC_ENDIAN_LITTLE)
@@ -3308,7 +3292,7 @@ behavior is similar to Fortran or Matlab, but different than numpy.
                     raise ValueError("'endian' keyword argument must be 'little','big' or 'native', got '%s'" % endian)
                 if ierr != NC_NOERR:
                     if grp.data_model != 'NETCDF4': grp._enddef()
-                    raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                    _ensure_nc_success(ierr)
             else:
                 if endian != 'native':
                     msg="only endian='native' allowed for NETCDF3 files"
@@ -3327,7 +3311,7 @@ behavior is similar to Fortran or Matlab, but different than numpy.
                         ierr = nc_def_var_fill(self._grpid, self._varid, 1, NULL)
                     if ierr != NC_NOERR:
                         if grp.data_model != 'NETCDF4': grp._enddef()
-                        raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                        _ensure_nc_success(ierr)
                 else:
                     # cast fill_value to type of variable.
                     # also make sure it is written in native byte order
@@ -3352,8 +3336,7 @@ behavior is similar to Fortran or Matlab, but different than numpy.
         # set ndim attribute (number of dimensions).
         with nogil:
             ierr = nc_inq_varndims(self._grpid, self._varid, &numdims)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         self.ndim = numdims
         self._name = name
         # default for automatically applying scale_factor and
@@ -3365,6 +3348,8 @@ behavior is similar to Fortran or Matlab, but different than numpy.
         self.chartostring = True
         if 'least_significant_digit' in self.ncattrs():
             self._has_lsd = True
+        # avoid calling nc_get_vars for strided slices by default.
+        self._no_get_vars = True
 
     def __array__(self):
         # numpy special method that returns a numpy array.
@@ -3415,8 +3400,7 @@ behavior is similar to Fortran or Matlab, but different than numpy.
         ncdump_var.append('current shape = %s\n' % repr(self.shape))
         with nogil:
             ierr = nc_inq_var_fill(self._grpid,self._varid,&no_fill,NULL)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         if self._isprimitive:
             if no_fill != 1:
                 try:
@@ -3443,21 +3427,18 @@ behavior is similar to Fortran or Matlab, but different than numpy.
         # get number of dimensions for this variable.
         with nogil:
             ierr = nc_inq_varndims(self._grpid, self._varid, &numdims)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         dimids = <int *>malloc(sizeof(int) * numdims)
         # get dimension ids.
         with nogil:
             ierr = nc_inq_vardimid(self._grpid, self._varid, dimids)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         # loop over dimensions, retrieve names.
         dimensions = ()
         for nn from 0 <= nn < numdims:
             with nogil:
                 ierr = nc_inq_dimname(self._grpid, dimids[nn], namstring)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
             name = namstring.decode('utf-8')
             dimensions = dimensions + (name,)
         free(dimids)
@@ -3470,8 +3451,7 @@ behavior is similar to Fortran or Matlab, but different than numpy.
         _grpid = self._grp._grpid
         with nogil:
             ierr = nc_inq_varname(_grpid, self._varid, namstring)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         return namstring.decode('utf-8')
 
     property name:
@@ -3598,8 +3578,7 @@ attributes."""
         if self._grp.data_model != 'NETCDF4': self._grp._redef()
         ierr = nc_del_att(self._grpid, self._varid, attname)
         if self._grp.data_model != 'NETCDF4': self._grp._enddef()
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
 
     def filters(self):
         """
@@ -3611,12 +3590,10 @@ return dictionary containing HDF5 filter parameters."""
         if self._grp.data_model not in ['NETCDF4_CLASSIC','NETCDF4']: return
         with nogil:
             ierr = nc_inq_var_deflate(self._grpid, self._varid, &ishuffle, &ideflate, &ideflate_level)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         with nogil:
             ierr = nc_inq_var_fletcher32(self._grpid, self._varid, &ifletcher32)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         if ideflate:
             filtdict['zlib']=True
             filtdict['complevel']=ideflate_level
@@ -3636,8 +3613,7 @@ return endian-ness (`little,big,native`) of variable (as stored in HDF5 file).""
             return 'native'
         with nogil:
             ierr = nc_inq_var_endian(self._grpid, self._varid, &iendian)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         if iendian == NC_ENDIAN_LITTLE:
             return 'little'
         elif iendian == NC_ENDIAN_BIG:
@@ -3660,8 +3636,7 @@ each dimension is returned."""
         chunksizesp = <size_t *>malloc(sizeof(size_t) * ndims)
         with nogil:
             ierr = nc_inq_var_chunking(self._grpid, self._varid, &icontiguous, chunksizesp)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         chunksizes=[]
         for n from 0 <= n < ndims:
             chunksizes.append(chunksizesp[n])
@@ -3684,8 +3659,7 @@ details."""
         with nogil:
             ierr = nc_get_var_chunk_cache(self._grpid, self._varid, &sizep,
                    &nelemsp, &preemptionp)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         size = sizep; nelems = nelemsp; preemption = preemptionp
         return (size,nelems,preemption)
 
@@ -3715,8 +3689,7 @@ details."""
             preemptionp = preemption_orig
         ierr = nc_set_var_chunk_cache(self._grpid, self._varid, sizep,
                nelemsp, preemptionp)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
 
     def __delattr__(self,name):
         # if it's a netCDF attribute, remove it
@@ -3746,8 +3719,15 @@ details."""
                 # make sure these attributes written in same data type as variable.
                 # also make sure it is written in native byte order
                 # (the same as the data)
-                value = numpy.array(value, self.dtype)
-                if not value.dtype.isnative: value.byteswap(True)
+                valuea = numpy.array(value, self.dtype)
+                # check to see if array cast is safe
+                if _safecast(numpy.array(value),valuea):
+                    value = valuea
+                    if not value.dtype.isnative: value.byteswap(True)
+                else: # otherwise don't do it, but issue a warning
+                    msg="WARNING: %s cannot be safely cast to variable dtype" \
+                    % name
+                    warnings.warn(msg)
             self.setncattr(name, value)
         elif not name.endswith('__'):
             if hasattr(self,name):
@@ -3787,8 +3767,7 @@ rename a `netCDF4.Variable` attribute named `oldname` to `newname`."""
         bytestr = _strencode(newname)
         newnamec = bytestr
         ierr = nc_rename_att(self._grpid, self._varid, oldnamec, newnamec)
-        if ierr != NC_NOERR:
-            raise AttributeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
 
     def __getitem__(self, elem):
         # This special method is used to index the netCDF variable
@@ -3796,7 +3775,8 @@ rename a `netCDF4.Variable` attribute named `oldname` to `newname`."""
         # is a perfect match for the "start", "count" and "stride"
         # arguments to the nc_get_var() function, and is much more easy
         # to use.
-        start, count, stride, put_ind = _StartCountStride(elem,self.shape)
+        start, count, stride, put_ind =\
+        _StartCountStride(elem,self.shape,dimensions=self.dimensions,grp=self._grp,no_get_vars=self._no_get_vars)
         datashape = _out_array_shape(count)
         if self._isvlen:
             data = numpy.empty(datashape, dtype='O')
@@ -3907,7 +3887,8 @@ rename a `netCDF4.Variable` attribute named `oldname` to `newname`."""
         # and/or _FillValues.
         totalmask = numpy.zeros(data.shape, numpy.bool)
         fill_value = None
-        if hasattr(self, 'missing_value'):
+        safe_missval = self._check_safecast('missing_value')
+        if safe_missval:
             mval = numpy.array(self.missing_value, self.dtype)
             # create mask from missing values. 
             mvalmask = numpy.zeros(data.shape, numpy.bool)
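The masking path in the hunk above can be illustrated with plain numpy: a boolean mask is accumulated wherever the data equals the missing value, then used to build a masked array. This is a simplified sketch of the idea, not the library's actual code path:

```python
import numpy as np

# values equal to missing_value are masked, mirroring the mvalmask built above
data = np.array([1.0, -999.0, 3.0])
missing_value = -999.0
mvalmask = data == missing_value
masked = np.ma.masked_array(data, mask=mvalmask, fill_value=missing_value)
```

Indexing `masked` then yields `numpy.ma.masked` at the flagged positions instead of the raw fill value.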
@@ -3930,7 +3911,8 @@ rename a `netCDF4.Variable` attribute named `oldname` to `newname`."""
                 fill_value = mval[0]
                 totalmask += mvalmask
         # set mask=True for data == fill value
-        if hasattr(self, '_FillValue'):
+        safe_fillval = self._check_safecast('_FillValue')
+        if safe_fillval:
             fval = numpy.array(self._FillValue, self.dtype)
             # is _FillValue a NaN?
             try:
@@ -3952,8 +3934,7 @@ rename a `netCDF4.Variable` attribute named `oldname` to `newname`."""
         else:
             with nogil:
                 ierr = nc_inq_var_fill(self._grpid,self._varid,&no_fill,NULL)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
             # if no_fill is not 1, and not a byte variable, then use default fill value.
             # from http://www.unidata.ucar.edu/software/netcdf/docs/netcdf-c/Fill-Values.html#Fill-Values
             # "If you need a fill value for a byte variable, it is recommended
@@ -3984,13 +3965,16 @@ rename a `netCDF4.Variable` attribute named `oldname` to `newname`."""
         # look for valid_min, valid_max.  No special
         # treatment of byte data as described at
         # http://www.unidata.ucar.edu/software/netcdf/docs/attribute_conventions.html).
-        if hasattr(self, 'valid_range') and len(self.valid_range) == 2:
+        safe_validrange = self._check_safecast('valid_range')
+        safe_validmin = self._check_safecast('valid_min')
+        safe_validmax = self._check_safecast('valid_max')
+        if safe_validrange and len(self.valid_range) == 2:
             validmin = numpy.array(self.valid_range[0], self.dtype)
             validmax = numpy.array(self.valid_range[1], self.dtype)
         else:
-            if hasattr(self, 'valid_min'):
+            if safe_validmin:
                 validmin = numpy.array(self.valid_min, self.dtype)
-            if hasattr(self, 'valid_max'):
+            if safe_validmax:
                 validmax = numpy.array(self.valid_max, self.dtype)
         # http://www.unidata.ucar.edu/software/netcdf/docs/attribute_conventions.html).
         # "If the data type is byte and _FillValue 
@@ -4001,7 +3985,7 @@ rename a `netCDF4.Variable` attribute named `oldname` to `newname`."""
         # If the _FillValue is positive then it defines a valid maximum,
         #  otherwise it defines a valid minimum."
         byte_type = self.dtype.str[1:] in ['u1','i1']
-        if hasattr(self, '_FillValue'):
+        if safe_fillval:
             fval = numpy.array(self._FillValue, self.dtype)
         else:
             fval = numpy.array(default_fillvals[self.dtype.str[1:]],self.dtype)
@@ -4089,8 +4073,7 @@ rename a `netCDF4.Variable` attribute named `oldname` to `newname`."""
             strdata[0] = bytestr
             ierr = nc_put_vara(self._grpid, self._varid,
                                startp, countp, strdata)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
             free(strdata)
         else: # regular VLEN
             if data.dtype != self.dtype:
@@ -4101,12 +4084,26 @@ rename a `netCDF4.Variable` attribute named `oldname` to `newname`."""
             vldata[0].p = data2.data
             ierr = nc_put_vara(self._grpid, self._varid,
                                startp, countp, vldata)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
             free(vldata)
         free(startp)
         free(countp)
 
+    def _check_safecast(self, attname):
+        # check that the variable attribute exists
+        # and can be safely cast to the variable data type.
+        if hasattr(self, attname):
+            att = numpy.array(self.getncattr(attname))
+        else:
+            return False
+        atta = numpy.array(att, self.dtype)
+        is_safe = _safecast(att,atta)
+        if not is_safe:
+            msg="""WARNING: %s not used since it
+cannot be safely cast to variable data type""" % attname
+            warnings.warn(msg)
+        return is_safe
+
     def __setitem__(self, elem, data):
         # This special method is used to assign to the netCDF variable
         # using "extended slice syntax". The extended slice syntax
@@ -4328,6 +4325,20 @@ The default value of `chartostring` is `True`
         else:
             self.chartostring = False
 
+    def use_nc_get_vars(self,use_nc_get_vars):
+        """
+**`use_nc_get_vars(self,use_nc_get_vars)`**
+
+enable or disable the use of the netcdf library routine `nc_get_vars`
+to retrieve strided variable slices.  By default,
+`nc_get_vars` is not used since it is slower than multiple calls
+to the unstrided read routine `nc_get_vara` in most cases.
+        """
+        if not use_nc_get_vars:
+            self._no_get_vars = True
+        else:
+            self._no_get_vars = False
+
     def set_auto_maskandscale(self,maskandscale):
         """
 **`set_auto_maskandscale(self,maskandscale)`**
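The rationale for defaulting `_no_get_vars` to `True` in the hunk above: a strided slice can be served by several contiguous (`nc_get_vara`-style) reads, which is typically faster than a single `nc_get_vars` call. A toy sketch of that decomposition; the names here are illustrative, not the library's internals:

```python
import numpy as np


def strided_read(read_vara, start, stop, step):
    # emulate a strided (nc_get_vars-style) read using only unstrided reads:
    # one contiguous length-1 read per requested index
    return np.concatenate([read_vara(i, 1) for i in range(start, stop, step)])


data = np.arange(10.0)
read_vara = lambda s, c: data[s:s + c]   # stand-in for an nc_get_vara call
out = strided_read(read_vara, 0, 10, 3)  # elements 0, 3, 6, 9
```

The real implementation batches contiguous runs rather than reading one element at a time, but the principle is the same.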
@@ -4524,8 +4535,7 @@ The default value of `mask` is `True`
             else:
                 ierr = nc_put_vars(self._grpid, self._varid,
                                    startp, countp, stridep, data.data)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
         elif self._isvlen:
             if data.dtype.char !='O':
                 raise TypeError('data to put in string variable must be an object array containing Python strings')
@@ -4553,8 +4563,7 @@ The default value of `mask` is `True`
                     raise IndexError('strides must all be 1 for string variables')
                     #ierr = nc_put_vars(self._grpid, self._varid,
                     #                   startp, countp, stridep, strdata)
-                if ierr != NC_NOERR:
-                    raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                _ensure_nc_success(ierr)
                 free(strdata)
             else:
                 # regular vlen.
@@ -4581,8 +4590,7 @@ The default value of `mask` is `True`
                     raise IndexError('strides must all be 1 for vlen variables')
                     #ierr = nc_put_vars(self._grpid, self._varid,
                     #                   startp, countp, stridep, vldata)
-                if ierr != NC_NOERR:
-                    raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                _ensure_nc_success(ierr)
                 # free the pointer array.
                 free(vldata)
         free(startp)
@@ -4646,7 +4654,7 @@ The default value of `mask` is `True`
             if ierr == NC_EINVALCOORDS:
                 raise IndexError
             elif ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                _ensure_nc_success(ierr)
         elif self._isvlen:
             # allocate array of correct primitive type.
             data = numpy.empty(shapeout, 'O')
@@ -4670,7 +4678,7 @@ The default value of `mask` is `True`
                 if ierr == NC_EINVALCOORDS:
                     raise IndexError
                 elif ierr != NC_NOERR:
-                    raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                    _ensure_nc_success(ierr)
                 # loop over elements of object array, fill array with
                 # contents of strdata.
                 # use _Encoding attribute to decode string to bytes - if
@@ -4703,7 +4711,7 @@ The default value of `mask` is `True`
                 if ierr == NC_EINVALCOORDS:
                     raise IndexError
                 elif ierr != NC_NOERR:
-                    raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                    _ensure_nc_success(ierr)
                 # loop over elements of object array, fill array with
                 # contents of vlarray struct, put array in object array.
                 for i from 0<=i<totelem:
@@ -4786,7 +4794,17 @@ the user.
         method of a `netCDF4.Dataset` or `netCDF4.Group` instance, not using this class directly.
         """
         cdef nc_type xtype
-        dt = numpy.dtype(dt,align=True)
+        # convert dt to a numpy datatype object
+        # and make sure the isalignedstruct flag is set to True
+        # (so padding is added to the fields to match what a
+        # C compiler would output for a similar C-struct).
+        # This is needed because nc_get_vara is
+        # apparently expecting the data buffer to include
+        # padding to match what a C struct would have.
+        # (this may or may not still be true, but empirical
+        # evidence suggests that segfaults occur if this
+        # alignment step is skipped - see issue #705).
+        dt = _set_alignment(numpy.dtype(dt))
         if 'typeid' in kwargs:
             xtype = kwargs['typeid']
         else:
@@ -4809,6 +4827,27 @@ the user.
         # raise error is user tries to pickle a CompoundType object.
         raise NotImplementedError('CompoundType is not picklable')
 
+def _set_alignment(dt):
+    # recursively set alignment flag in nested structured data type
+    names = dt.names; formats = []
+    for name in names:
+        fmt = dt.fields[name][0]
+        if fmt.kind == 'V':
+            if fmt.shape == ():
+                dtx = _set_alignment(dt.fields[name][0])
+            else:
+                if fmt.subdtype[0].kind == 'V': # structured dtype
+                    raise TypeError('nested structured dtype arrays not supported')
+                else:
+                    dtx = dt.fields[name][0]
+        else:
+            # primitive data type
+            dtx = dt.fields[name][0]
+        formats.append(dtx)
+    # leave out offsets, they will be re-computed to preserve alignment.
+    dtype_dict = {'names':names,'formats':formats}
+    return numpy.dtype(dtype_dict, align=True)
+
 cdef _def_compound(grp, object dt, object dtype_name):
     # private function used to construct a netcdf compound data type
     # from a numpy dtype object by CompoundType.__init__.
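The effect of `_set_alignment` can be seen with a plain numpy structured dtype: rebuilding the fields with `align=True` inserts the same padding a C compiler would, which changes field offsets and the itemsize:

```python
import numpy as np

fields = [('a', 'i1'), ('b', 'f8')]
packed = np.dtype(fields)                # no padding: 'b' at offset 1, itemsize 9
aligned = np.dtype(fields, align=True)   # C-struct padding: 'b' at offset 8, itemsize 16
```

It is this padded layout that the netCDF C library expects in the data buffer, per the issue #705 discussion referenced in the comment above.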
@@ -4822,8 +4861,7 @@ cdef _def_compound(grp, object dt, object dtype_name):
     namstring = bytestr
     size = dt.itemsize
     ierr = nc_def_compound(grp._grpid, size, namstring, &xtype)
-    if ierr != NC_NOERR:
-        raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+    _ensure_nc_success(ierr)
     names = list(dt.fields.keys())
     formats = [v[0] for v in dt.fields.values()]
     offsets = [v[1] for v in dt.fields.values()]
@@ -4842,8 +4880,7 @@ cdef _def_compound(grp, object dt, object dtype_name):
                 raise ValueError('Unsupported compound type element')
             ierr = nc_insert_compound(grp._grpid, xtype, namstring,
                                       offset, xtype_tmp)
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
         else:
             if format.shape ==  (): # nested scalar compound type
                 # find this compound type in this group or its parents.
@@ -4853,33 +4890,33 @@ cdef _def_compound(grp, object dt, object dtype_name):
                 ierr = nc_insert_compound(grp._grpid, xtype,\
                                           nested_namstring,\
                                           offset, xtype_tmp)
-                if ierr != NC_NOERR:
-                    raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
-            else: # array compound element
+                _ensure_nc_success(ierr)
+            else: # nested array compound element
                 ndims = len(format.shape)
                 dim_sizes = <int *>malloc(sizeof(int) * ndims)
                 for n from 0 <= n < ndims:
                     dim_sizes[n] = format.shape[n]
-                if format.subdtype[0].str[1] != 'V': # primitive type.
+                if format.subdtype[0].kind != 'V': # primitive type.
                     try:
                         xtype_tmp = _nptonctype[format.subdtype[0].str[1:]]
                     except KeyError:
                         raise ValueError('Unsupported compound type element')
                     ierr = nc_insert_array_compound(grp._grpid,xtype,namstring,
                            offset,xtype_tmp,ndims,dim_sizes)
-                    if ierr != NC_NOERR:
-                        raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                    _ensure_nc_success(ierr)
                 else: # nested array compound type.
-                    # find this compound type in this group or it's parents.
-                    xtype_tmp = _find_cmptype(grp, format.subdtype[0])
-                    bytestr = _strencode(name)
-                    nested_namstring = bytestr
-                    ierr = nc_insert_array_compound(grp._grpid,xtype,\
-                                                    nested_namstring,\
-                                                    offset,xtype_tmp,\
-                                                    ndims,dim_sizes)
-                    if ierr != NC_NOERR:
-                        raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+                    raise TypeError('nested structured dtype arrays not supported')
+                    # this code is untested and probably does not work, disable
+                    # for now...
+                #   # find this compound type in this group or it's parents.
+                #   xtype_tmp = _find_cmptype(grp, format.subdtype[0])
+                #   bytestr = _strencode(name)
+                #   nested_namstring = bytestr
+                #   ierr = nc_insert_array_compound(grp._grpid,xtype,\
+                #                                   nested_namstring,\
+                #                                   offset,xtype_tmp,\
+                #                                   ndims,dim_sizes)
+                #   _ensure_nc_success(ierr)
                 free(dim_sizes)
     return xtype
 
@@ -4890,7 +4927,8 @@ cdef _find_cmptype(grp, dtype):
     match = False
     for cmpname, cmpdt in grp.cmptypes.items():
         xtype = cmpdt._nc_type
-        names1 = dtype.names; names2 = cmpdt.dtype.names
+        names1 = dtype.fields.keys()
+        names2 = cmpdt.dtype.fields.keys()
         formats1 = [v[0] for v in dtype.fields.values()]
         formats2 = [v[0] for v in cmpdt.dtype.fields.values()]
         # match names, formats, but not offsets (they may be changed
@@ -4925,8 +4963,7 @@ cdef _read_compound(group, nc_type xtype, endian=None):
     _grpid = group._grpid
     with nogil:
         ierr = nc_inq_compound(_grpid, xtype, cmp_namstring, NULL, &nfields)
-    if ierr != NC_NOERR:
-        raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+    _ensure_nc_success(ierr)
     name = cmp_namstring.decode('utf-8')
     # loop over fields.
     names = []
@@ -4942,8 +4979,7 @@ cdef _read_compound(group, nc_type xtype, endian=None):
                                          &field_typeid,
                                          &numdims,
                                          NULL)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         dim_sizes = <int *>malloc(sizeof(int) * numdims)
         with nogil:
             ierr = nc_inq_compound_field(_grpid,
@@ -4954,8 +4990,7 @@ cdef _read_compound(group, nc_type xtype, endian=None):
                                          &field_typeid,
                                          &numdims,
                                          dim_sizes)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         field_name = field_namstring.decode('utf-8')
         names.append(field_name)
         offsets.append(offset)
@@ -5076,8 +5111,7 @@ cdef _def_vlen(grp, object dt, object dtype_name):
             # specified numpy data type.
             xtype_tmp = _nptonctype[dt.str[1:]]
             ierr = nc_def_vlen(grp._grpid, namstring, xtype_tmp, &xtype);
-            if ierr != NC_NOERR:
-                raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+            _ensure_nc_success(ierr)
         else:
             raise KeyError("unsupported datatype specified for VLEN")
     return xtype, dt
@@ -5098,8 +5132,7 @@ cdef _read_vlen(group, nc_type xtype, endian=None):
     else:
         with nogil:
             ierr = nc_inq_vlen(_grpid, xtype, vl_namstring, &vlsize, &base_xtype)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         name = vl_namstring.decode('utf-8')
         try:
             datatype = _nctonptype[base_xtype]
@@ -5190,8 +5223,7 @@ cdef _def_enum(grp, object dt, object dtype_name, object enum_dict):
         # specified numpy data type.
         xtype_tmp = _intnptonctype[dt.str[1:]]
         ierr = nc_def_enum(grp._grpid, xtype_tmp, namstring, &xtype);
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
     else:
         msg="unsupported datatype specified for Enum (must be integer)"
         raise KeyError(msg)
@@ -5201,8 +5233,7 @@ cdef _def_enum(grp, object dt, object dtype_name, object enum_dict):
         bytestr = _strencode(field)
         namstring = bytestr
         ierr = nc_insert_enum(grp._grpid, xtype, namstring, value_arr.data)
-        if ierr != NC_NOERR:
-            raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
     return xtype, dt
 
 cdef _read_enum(group, nc_type xtype, endian=None):
@@ -5220,8 +5251,7 @@ cdef _read_enum(group, nc_type xtype, endian=None):
     with nogil:
         ierr = nc_inq_enum(_grpid, xtype, enum_namstring, &base_xtype, NULL,\
                 &nmembers)
-    if ierr != NC_NOERR:
-        raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+    _ensure_nc_success(ierr)
     name = enum_namstring.decode('utf-8')
     try:
         datatype = _nctonptype[base_xtype]
@@ -5235,8 +5265,7 @@ cdef _read_enum(group, nc_type xtype, endian=None):
         with nogil:
             ierr = nc_inq_enum_member(_grpid, xtype, nmem, \
                                       enum_namstring, &enum_val)
-        if ierr != NC_NOERR:
-           raise RuntimeError((<char *>nc_strerror(ierr)).decode('ascii'))
+        _ensure_nc_success(ierr)
         name = enum_namstring.decode('utf-8')
         enum_dict[name] = int(enum_val)
     return EnumType(group, dt, name, enum_dict, typeid=xtype)
@@ -5284,8 +5313,8 @@ def _dateparse(timestr):
         _parse_date( isostring.strip() )
     if year >= MINYEAR:
         basedate = datetime(year, month, day, hour, minute, second)
-        # add utc_offset to basedate time instance (which is timezone naive)
-        basedate += timedelta(days=utc_offset/1440.)
+        # subtract utc_offset from basedate time instance (which is timezone naive)
+        basedate -= timedelta(days=utc_offset/1440.)
     else:
         if not utc_offset:
             basedate = netcdftime.datetime(year, month, day, hour, minute, second)
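For reference (not part of the patch): the sign flip in _dateparse can be checked with plain datetime arithmetic. `base_utc` below is a hypothetical helper illustrating the reasoning, not library code:

```python
from datetime import datetime, timedelta

def base_utc(naive_base, utc_offset_minutes):
    # A units string like "hours since 1990-01-01 00:00 +01:00" names a
    # local wall-clock instant, so the UTC base time is obtained by
    # *subtracting* the offset; the pre-1.3.0 code added it instead.
    return naive_base - timedelta(minutes=utc_offset_minutes)

# 1990-01-01 00:00 at UTC+01:00 is 1989-12-31 23:00 UTC.
print(base_utc(datetime(1990, 1, 1), 60))
```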
diff --git a/netCDF4/utils.py b/netCDF4/utils.py
index fc1a34b..ac165d7 100644
--- a/netCDF4/utils.py
+++ b/netCDF4/utils.py
@@ -20,6 +20,18 @@ except NameError:
     # no bytes type in python < 2.6
     bytes = str
 
+def _safecast(a,b):
+    # check to see if array a can be safely cast
+    # to array b.  A little less picky than numpy.can_cast.
+    try:
+        is_safe = ((a == b) | (np.isnan(a) & np.isnan(b))).all()
+        #is_safe = np.allclose(a, b, equal_nan=True) # numpy 1.10.0
+    except:
+        try:
+            is_safe = (a == b).all() # string arrays.
+        except:
+            is_safe = False
+    return is_safe
 
 def _sortbylist(A,B):
     # sort one list (A) using the values from another list (B)
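The idea behind the new _safecast helper, sketched standalone (`safecast_like` is an illustrative reimplementation, not the library function): plain `==` treats NaN as unequal to itself, so two arrays holding NaN fill values would spuriously compare unequal.

```python
import numpy as np

def safecast_like(a, b):
    # Arrays are considered equal if matching elements are equal OR
    # both NaN; falls back to plain == for non-float (e.g. string) arrays.
    try:
        return bool(((a == b) | (np.isnan(a) & np.isnan(b))).all())
    except TypeError:       # np.isnan rejects string dtypes
        try:
            return bool((a == b).all())
        except Exception:
            return False

a = np.array([1.0, np.nan])
print((a == a).all())       # False: NaN != NaN under plain comparison
print(safecast_like(a, a))  # True
```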
@@ -74,7 +86,7 @@ least_significant_digit=1, bits will be 4.
         return datout
 
 def _StartCountStride(elem, shape, dimensions=None, grp=None, datashape=None,\
-        put=False):
+        put=False, no_get_vars = True):
     """Return start, count, stride and indices needed to store/extract data
     into/from a netCDF variable.
 
@@ -122,12 +134,10 @@ def _StartCountStride(elem, shape, dimensions=None, grp=None, datashape=None,\
     sequences used to slice the netCDF Variable (Variable[elem]).
     shape : tuple containing the current shape of the netCDF variable.
     dimensions : sequence
-      The name of the dimensions. This is only useful to find out
-      whether or not some dimensions are unlimited. Only needed within
+      The name of the dimensions.
       __setitem__.
     grp  : netCDF Group
       The netCDF group to which the variable being set belongs to.
-      Only needed within __setitem__.
     datashape : sequence
      The shape of the data that is being stored. Only needed by __setitem__
     put : True|False (default False).  If called from __setitem__, put is True.
@@ -179,12 +189,19 @@ def _StartCountStride(elem, shape, dimensions=None, grp=None, datashape=None,\
                 elem.append(slice(None,None,None))
     else:   # Convert single index to sequence
         elem = [elem]
-
+    
+    hasEllipsis=False
+    for e in elem:
+        if type(e)==type(Ellipsis):
+            if hasEllipsis:
+                raise IndexError("At most one ellipsis allowed in a slicing expression")
+            hasEllipsis=True
     # replace boolean arrays with sequences of integers.
     newElem = []
     IndexErrorMsg=\
     "only integers, slices (`:`), ellipsis (`...`), and 1-d integer or boolean arrays are valid indices"
-    for i, e in enumerate(elem):
+    i=0
+    for e in elem:
         # string-like object try to cast to int
         # needs to be done first, since strings are iterable and
         # hard to distinguish from something castable to an iterable numpy array.
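The ellipsis rule introduced above, sketched in isolation (`expand_ellipsis` is a simplified stand-in for the bookkeeping in _StartCountStride): at most one `...` is allowed, and it expands to full slices for the missing dimensions.

```python
def expand_ellipsis(elem, ndims):
    # Reject multiple ellipses, then let the single Ellipsis stand in
    # for (ndims - len(elem) + 1) full-dimension slices.
    if sum(1 for e in elem if e is Ellipsis) > 1:
        raise IndexError("At most one ellipsis allowed in a slicing expression")
    out = []
    for e in elem:
        if e is Ellipsis:
            out.extend([slice(None)] * (ndims - len(elem) + 1))
        else:
            out.append(e)
    return out

print(expand_ellipsis((0, Ellipsis, 5), 4))
```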
@@ -239,6 +256,24 @@ Boolean array must have the same shape as the data along this dimension."""
             newElem.append(e)
         # slice or ellipsis object
         elif type(e) == slice or type(e) == type(Ellipsis):
+            if no_get_vars and type(e) == slice and e.step not in [None,-1,1] and\
+               dimensions is not None and grp is not None:
+                # convert strided slice to integer sequence if possible
+                # (this will avoid nc_get_vars, which is slow - issue #680).
+                start = e.start if e.start is not None else 0
+                step = e.step
+                if e.stop is None and dimensions is not None and grp is not None:
+                    stop = len(_find_dim(grp, dimensions[i]))
+                else:
+                    stop = e.stop
+                    if stop < 0:
+                        stop = len(_find_dim(grp, dimensions[i])) + stop
+                try:
+                    ee = np.arange(start,stop,e.step)
+                    if len(ee) > 0:
+                        e = ee
+                except:
+                    pass
             newElem.append(e)
         else:  # castable to a scalar int, otherwise invalid
             try:
@@ -246,20 +281,20 @@ Boolean array must have the same shape as the data along this dimension."""
                 newElem.append(e)
             except:
                 raise IndexError(IndexErrorMsg)
+        if type(e)==type(Ellipsis): 
+            i+=1+nDims-len(elem)
+        else:
+            i+=1
     elem = newElem
 
     # replace Ellipsis and integer arrays with slice objects, if possible.
-    hasEllipsis = False
     newElem = []
     for e in elem:
         ea = np.asarray(e)
         # Replace ellipsis with slices.
         if type(e) == type(Ellipsis):
-            if hasEllipsis:
-                raise IndexError("At most one ellipsis allowed in a slicing expression")
             # The ellipsis stands for the missing dimensions.
             newElem.extend((slice(None, None, None),) * (nDims - len(elem) + 1))
-            hasEllipsis = True
         # Replace sequence of indices with slice object if possible.
         elif np.iterable(e) and len(e) > 1:
             start = e[0]
@@ -269,8 +304,13 @@ Boolean array must have the same shape as the data along this dimension."""
                 ee = range(start,stop,step)
             except ValueError: # start, stop or step is not valid for a range
                 ee = False
-            if ee and len(e) == len(ee) and (e == np.arange(start,stop,step)).all():
-                newElem.append(slice(start,stop,step))
+            if no_get_vars and ee and len(e) == len(ee) and (e == np.arange(start,stop,step)).all():
+                # don't convert to slice unless abs(stride) == 1
+                # (nc_get_vars is very slow, issue #680)
+                if step not in [1,-1]:
+                    newElem.append(e)
+                else:
+                    newElem.append(slice(start,stop,step))
             else:
                 newElem.append(e)
         elif np.iterable(e) and len(e) == 1:
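A minimal sketch of the issue #680 workaround these hunks implement (`avoid_get_vars` is a hypothetical helper, not the library code): strided slices would map to the slow nc_get_vars C call, so they are expanded into explicit integer index arrays; contiguous slices are left untouched.

```python
import numpy as np

def avoid_get_vars(sl, dim_len):
    # Contiguous access (|step| == 1) keeps the fast vara path.
    if sl.step in (None, 1, -1):
        return sl
    start = sl.start if sl.start is not None else 0
    stop = sl.stop if sl.stop is not None else dim_len
    if stop < 0:
        stop += dim_len
    # Strided access becomes an explicit index sequence instead.
    return np.arange(start, stop, sl.step)

print(avoid_get_vars(slice(0, 10, 3), 10))   # index array
print(avoid_get_vars(slice(None), 10))       # unchanged slice
```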
diff --git a/netcdftime/_netcdftime.pyx b/netcdftime/_netcdftime.pyx
index 2bc1dcf..5302684 100644
--- a/netcdftime/_netcdftime.pyx
+++ b/netcdftime/_netcdftime.pyx
@@ -31,12 +31,19 @@ _calendars = ['standard', 'gregorian', 'proleptic_gregorian',
 __version__ = '1.4.1'
 
 # Adapted from http://delete.me.uk/2005/03/iso8601.html
+# Note: This regex ensures that all ISO8601 timezone formats are accepted -
+# but, due to legacy support for other timestrings, not all incorrect formats can be rejected.
+# For example, the TZ spec "+01:0" will still work even though the minutes value is only one character long.
 ISO8601_REGEX = re.compile(r"(?P<year>[+-]?[0-9]{1,4})(-(?P<month>[0-9]{1,2})(-(?P<day>[0-9]{1,2})"
                            r"(((?P<separator1>.)(?P<hour>[0-9]{1,2}):(?P<minute>[0-9]{1,2})(:(?P<second>[0-9]{1,2})(\.(?P<fraction>[0-9]+))?)?)?"
-                           r"((?P<separator2>.?)(?P<timezone>Z|(([-+])([0-9]{1,2}):([0-9]{1,2}))))?)?)?)?"
+                           r"((?P<separator2>.?)(?P<timezone>Z|(([-+])([0-9]{2})((:([0-9]{2}))|([0-9]{2}))?)))?)?)?)?"
                            )
+# Note: The re module apparently does not support branch reset groups that allow
+# redefinition of the same group name in alternative branches, as PCRE does.
+# Using two different group names is also somewhat ugly, but other solutions might
+# hugely inflate the expression. Feel free to contribute a better solution.
 TIMEZONE_REGEX = re.compile(
-    "(?P<prefix>[+-])(?P<hours>[0-9]{1,2}):(?P<minutes>[0-9]{1,2})")
+       "(?P<prefix>[+-])(?P<hours>[0-9]{2})(?:(?::(?P<minutes1>[0-9]{2}))|(?P<minutes2>[0-9]{2}))?")
 
 def JulianDayFromDate(date, calendar='standard'):
     """
@@ -797,19 +804,19 @@ units to datetime objects.
                     jdelta.append(_360DayFromDate(d) - self._jd0)
         if not isscalar:
             jdelta = numpy.array(jdelta)
-        # convert to desired units, subtract time zone offset.
+        # convert to desired units, add time zone offset.
         if self.units in microsec_units:
-            jdelta = jdelta * 86400. * 1.e6  - self.tzoffset * 60. * 1.e6
+            jdelta = jdelta * 86400. * 1.e6  + self.tzoffset * 60. * 1.e6
         elif self.units in millisec_units:
-            jdelta = jdelta * 86400. * 1.e3  - self.tzoffset * 60. * 1.e3
+            jdelta = jdelta * 86400. * 1.e3  + self.tzoffset * 60. * 1.e3
         elif self.units in sec_units:
-            jdelta = jdelta * 86400. - self.tzoffset * 60.
+            jdelta = jdelta * 86400. + self.tzoffset * 60.
         elif self.units in min_units:
-            jdelta = jdelta * 1440. - self.tzoffset
+            jdelta = jdelta * 1440. + self.tzoffset
         elif self.units in hr_units:
-            jdelta = jdelta * 24. - self.tzoffset / 60.
+            jdelta = jdelta * 24. + self.tzoffset / 60.
         elif self.units in day_units:
-            jdelta = jdelta - self.tzoffset / 1440.
+            jdelta = jdelta + self.tzoffset / 1440.
         else:
             raise ValueError('unsupported time units')
         if isscalar:
@@ -851,19 +858,19 @@ units to datetime objects.
         if not isscalar:
             time_value = numpy.array(time_value, dtype='d')
             shape = time_value.shape
-        # convert to desired units, add time zone offset.
+        # convert to desired units, subtract time zone offset.
         if self.units in microsec_units:
-            jdelta = time_value / 86400000000. + self.tzoffset / 1440.
+            jdelta = time_value / 86400000000. - self.tzoffset / 1440.
         elif self.units in millisec_units:
-            jdelta = time_value / 86400000. + self.tzoffset / 1440.
+            jdelta = time_value / 86400000. - self.tzoffset / 1440.
         elif self.units in sec_units:
-            jdelta = time_value / 86400. + self.tzoffset / 1440.
+            jdelta = time_value / 86400. - self.tzoffset / 1440.
         elif self.units in min_units:
-            jdelta = time_value / 1440. + self.tzoffset / 1440.
+            jdelta = time_value / 1440. - self.tzoffset / 1440.
         elif self.units in hr_units:
-            jdelta = time_value / 24. + self.tzoffset / 1440.
+            jdelta = time_value / 24. - self.tzoffset / 1440.
         elif self.units in day_units:
-            jdelta = time_value + self.tzoffset / 1440.
+            jdelta = time_value - self.tzoffset / 1440.
         else:
             raise ValueError('unsupported time units')
         jd = self._jd0 + jdelta
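Why the two hunks must carry opposite signs, reduced to bare arithmetic (`to_value`/`to_jdelta` are hypothetical helpers mirroring the date2num/num2date directions): with one direction adding the offset and the other subtracting it, the round trip is exact; the pre-1.3.0 code had both signs flipped, so conversions drifted by twice the offset.

```python
def to_value(jdelta_days, tzoffset_min, per_day):
    # date2num direction after the fix: convert to units, ADD tz offset.
    return jdelta_days * per_day + tzoffset_min * per_day / 1440.

def to_jdelta(value, tzoffset_min, per_day):
    # num2date direction after the fix: SUBTRACT tz offset, convert back.
    return value / per_day - tzoffset_min / 1440.

v = to_value(2.5, 90, 24.)      # 2.5 days in hours, offset UTC+01:30
print(to_jdelta(v, 90, 24.))    # round trip recovers 2.5
```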
@@ -916,8 +923,13 @@ cdef _parse_timezone(tzstring):
     if tzstring is None:
         return 0
     m = TIMEZONE_REGEX.match(tzstring)
-    prefix, hours, minutes = m.groups()
-    hours, minutes = int(hours), int(minutes)
+    prefix, hours, minutes1, minutes2 = m.groups()
+    hours = int(hours)
+# Note: Minutes don't have to be specified in tzstring, 
+# so if the group is not found it means minutes is 0.
+# Also, due to the timezone regex definition, there are two mutually
+# exclusive groups that might hold the minutes value, so check both.
+    minutes = int(minutes1) if minutes1 is not None else int(minutes2) if minutes2 is not None else 0
     if prefix == "-":
         hours = -hours
         minutes = -minutes
diff --git a/setup.cfg b/setup.cfg
index d95d728..d4f522a 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -5,8 +5,6 @@
 # will be used to determine the locations of required libraries.
 # Usually, nothing else is needed.
 use_ncconfig=True
-# use cython to compile netCDF4.pyx (if cython is available).
-use_cython=True
 # path to nc-config script (use if not found in unix PATH).
 #ncconfig=/usr/local/bin/nc-config 
 [directories]
diff --git a/setup.cfg.template b/setup.cfg.template
deleted file mode 100644
index d95d728..0000000
--- a/setup.cfg.template
+++ /dev/null
@@ -1,50 +0,0 @@
-# Rename this file to setup.cfg to set build options.
-# Follow instructions below for editing.
-[options]
-# if true, the nc-config script (installed with netcdf 4.1.2 and higher)
-# will be used to determine the locations of required libraries.
-# Usually, nothing else is needed.
-use_ncconfig=True
-# use cython to compile netCDF4.pyx (if cython is available).
-use_cython=True
-# path to nc-config script (use if not found in unix PATH).
-#ncconfig=/usr/local/bin/nc-config 
-[directories]
-#
-# If nc-config doesn't do the trick, you can specify the locations
-# of the libraries and headers manually below
-#
-# uncomment and set to netCDF install location.
-# Include files should be located in netCDF4_dir/include and
-# the library should be located in netCDF4_dir/lib.
-# If the libraries and include files are installed in separate locations,
-# use netCDF4_libdir and netCDF4_incdir to specify the locations
-# separately.  
-#netCDF4_dir = /usr/local
-# uncomment and set to HDF5 install location.
-# Include files should be located in HDF5_dir/include and
-# the library should be located in HDF5_dir/lib.
-# If the libraries and include files are installed in separate locations,
-# use HDF5_libdir and HDF5_incdir to specify the locations
-# separately.  
-#HDF5_dir = /usr/local
-# if HDF5 was built with szip support as a static lib,
-# uncomment and set to szip lib install location.
-# If the libraries and include files are installed in separate locations,
-# use szip_libdir and szip_incdir.
-#szip_dir = /usr/local
-# if netcdf lib was build statically with HDF4 support,
-# uncomment and set to hdf4 lib (libmfhdf and libdf) nstall location.
-# If the libraries and include files are installed in separate locations,
-# use hdf4_libdir and hdf4_incdir.
-#hdf4_dir = /usr/local
-# if netcdf lib was build statically with HDF4 support,
-# uncomment and set to jpeg lib install location (hdf4 needs jpeg).
-# If the libraries and include files are installed in separate locations,
-# use jpeg_libdir and jpeg_incdir.
-#jpeg_dir = /usr/local
-# if netcdf lib was build statically with OpenDAP support,
-# uncomment and set to curl lib install location.
-# If the libraries and include files are installed in separate locations,
-# use curl_libdir and curl_incdir.
-#curl_dir = /usr/local
diff --git a/setup.py b/setup.py
index 4de3b0e..119dfec 100644
--- a/setup.py
+++ b/setup.py
@@ -2,9 +2,10 @@ import os, sys, subprocess
 import os.path as osp
 from setuptools import setup, Extension
 from distutils.dist import Distribution
+
 setuptools_extra_kwargs = {
-    "install_requires":  ["numpy>=1.7"],
-    "setup_requires":  ['setuptools>=18.0',"cython>=0.19"],
+    "install_requires": ["numpy>=1.7"],
+    "setup_requires": ['setuptools>=18.0', "cython>=0.19"],
     "entry_points": {
         'console_scripts': [
             'ncinfo = netCDF4.utils:ncinfo',
@@ -16,31 +17,29 @@ setuptools_extra_kwargs = {
 
 if sys.version_info[0] < 3:
     import ConfigParser as configparser
+
     open_kwargs = {}
 else:
     import configparser
-    open_kwargs = {'encoding':'utf-8'}
+
+    open_kwargs = {'encoding': 'utf-8'}
+
 
 def check_hdf5version(hdf5_includedir):
     try:
-        f = open(os.path.join(hdf5_includedir,'H5pubconf-64.h'),**open_kwargs)
+        f = open(os.path.join(hdf5_includedir, 'H5public.h'), **open_kwargs)
     except IOError:
-        try:
-            f = open(os.path.join(hdf5_includedir,'H5pubconf-32.h'),**open_kwargs)
-        except IOError:
-            try:
-                f = open(os.path.join(hdf5_includedir,'H5pubconf.h'),**open_kwargs)
-            except IOError:
-                return None
+        return None
     hdf5_version = None
     for line in f:
-        if line.startswith('#define H5_VERSION'):
-            hdf5_version = line.split()[2]
+        if line.startswith('#define H5_VERS_INFO'):
+            hdf5_version = line.split('"')[1]
     return hdf5_version
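The simplified version check above boils down to one string split; a self-contained sketch (the sample header line is illustrative, though H5public.h does define H5_VERS_INFO in this shape):

```python
def hdf5_version(header_text):
    # H5public.h carries a line such as
    #   #define H5_VERS_INFO "HDF5 library version: 1.8.17"
    # and the quoted part is the version string.
    for line in header_text.splitlines():
        if line.startswith('#define H5_VERS_INFO'):
            return line.split('"')[1]
    return None

sample = '#define H5_VERS_INFO "HDF5 library version: 1.8.17"'
print(hdf5_version(sample))
```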
 
+
 def check_ifnetcdf4(netcdf4_includedir):
     try:
-        f = open(os.path.join(netcdf4_includedir,'netcdf.h'),**open_kwargs)
+        f = open(os.path.join(netcdf4_includedir, 'netcdf.h'), **open_kwargs)
     except IOError:
         return False
     isnetcdf4 = False
@@ -49,6 +48,7 @@ def check_ifnetcdf4(netcdf4_includedir):
             isnetcdf4 = True
     return isnetcdf4
 
+
 def check_api(inc_dirs):
     has_rename_grp = False
     has_nc_inq_path = False
@@ -75,9 +75,10 @@ def check_api(inc_dirs):
                 has_cdf5_format = True
         break
 
-    return has_rename_grp, has_nc_inq_path, has_nc_inq_format_extended,\
+    return has_rename_grp, has_nc_inq_path, has_nc_inq_format_extended, \
            has_cdf5_format, has_nc_open_mem
 
+
 def getnetcdfvers(libdirs):
     """
     Get the version string for the first netcdf lib found in libdirs.
@@ -91,14 +92,13 @@ def getnetcdfvers(libdirs):
     elif sys.platform.startswith('cygwin'):
         bindirs = []
         for d in libdirs:
-            bindirs.append(os.path.dirname(d)+'/bin')
+            bindirs.append(os.path.dirname(d) + '/bin')
         regexp = re.compile(r'^cygnetcdf-\d.dll')
     elif sys.platform.startswith('darwin'):
         regexp = re.compile(r'^libnetcdf.dylib')
     else:
         regexp = re.compile(r'^libnetcdf.so')
 
-
     if sys.platform.startswith('cygwin'):
         dirs = bindirs
     else:
@@ -107,7 +107,8 @@ def getnetcdfvers(libdirs):
         try:
             candidates = [x for x in os.listdir(d) if regexp.match(x)]
             if len(candidates) != 0:
-                candidates.sort(key=lambda x: len(x))   # Prefer libfoo.so to libfoo.so.X.Y.Z
+                candidates.sort(
+                    key=lambda x: len(x))  # Prefer libfoo.so to libfoo.so.X.Y.Z
                 path = os.path.abspath(os.path.join(d, candidates[0]))
             lib = ctypes.cdll.LoadLibrary(path)
             inq_libvers = lib.nc_inq_libvers
@@ -115,10 +116,11 @@ def getnetcdfvers(libdirs):
             vers = lib.nc_inq_libvers()
             return vers.split()[0]
         except Exception:
-            pass   # We skip invalid entries, because that's what the C compiler does
+            pass  # We skip invalid entries, because that's what the C compiler does
 
     return None
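The "Prefer libfoo.so to libfoo.so.X.Y.Z" comment in getnetcdfvers relies on a length sort, which is easy to see in isolation:

```python
# Sorting candidate shared-library names by length puts the
# unversioned symlink first, so it is the one ctypes loads.
cands = ['libnetcdf.so.11.0.0', 'libnetcdf.so.11', 'libnetcdf.so']
cands.sort(key=lambda x: len(x))
print(cands[0])  # libnetcdf.so
```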
 
+
 HDF5_dir = os.environ.get('HDF5_DIR')
 HDF5_incdir = os.environ.get('HDF5_INCDIR')
 HDF5_libdir = os.environ.get('HDF5_LIBDIR')
@@ -148,60 +150,96 @@ if USE_SETUPCFG is not None:
 else:
     USE_SETUPCFG = True
 
-
 setup_cfg = 'setup.cfg'
 # contents of setup.cfg will override env vars, unless
 # USE_SETUPCFG evaluates to True. Exception is use_ncconfig,
 # which does not take precedence over USE_NCCONFIG env var.
 ncconfig = None
 use_ncconfig = None
-use_cython = True
 if USE_SETUPCFG and os.path.exists(setup_cfg):
     sys.stdout.write('reading from setup.cfg...\n')
     config = configparser.SafeConfigParser()
     config.read(setup_cfg)
-    try: HDF5_dir = config.get("directories", "HDF5_dir")
-    except: pass
-    try: HDF5_libdir = config.get("directories", "HDF5_libdir")
-    except: pass
-    try: HDF5_incdir = config.get("directories", "HDF5_incdir")
-    except: pass
-    try: netCDF4_dir = config.get("directories", "netCDF4_dir")
-    except: pass
-    try: netCDF4_libdir = config.get("directories", "netCDF4_libdir")
-    except: pass
-    try: netCDF4_incdir = config.get("directories", "netCDF4_incdir")
-    except: pass
-    try: szip_dir = config.get("directories", "szip_dir")
-    except: pass
-    try: szip_libdir = config.get("directories", "szip_libdir")
-    except: pass
-    try: szip_incdir = config.get("directories", "szip_incdir")
-    except: pass
-    try: hdf4_dir = config.get("directories", "hdf4_dir")
-    except: pass
-    try: hdf4_libdir = config.get("directories", "hdf4_libdir")
-    except: pass
-    try: hdf4_incdir = config.get("directories", "hdf4_incdir")
-    except: pass
-    try: jpeg_dir = config.get("directories", "jpeg_dir")
-    except: pass
-    try: jpeg_libdir = config.get("directories", "jpeg_libdir")
-    except: pass
-    try: jpeg_incdir = config.get("directories", "jpeg_incdir")
-    except: pass
-    try: curl_dir = config.get("directories", "curl_dir")
-    except: pass
-    try: curl_libdir = config.get("directories", "curl_libdir")
-    except: pass
-    try: curl_incdir = config.get("directories", "curl_incdir")
-    except: pass
-    try: use_ncconfig = config.getboolean("options", "use_ncconfig")
-    except: pass
-    try: ncconfig = config.get("options", "ncconfig")
-    except: pass
-    try: use_cython = config.getboolean("options", "use_cython")
-    except: pass
+    try:
+        HDF5_dir = config.get("directories", "HDF5_dir")
+    except:
+        pass
+    try:
+        HDF5_libdir = config.get("directories", "HDF5_libdir")
+    except:
+        pass
+    try:
+        HDF5_incdir = config.get("directories", "HDF5_incdir")
+    except:
+        pass
+    try:
+        netCDF4_dir = config.get("directories", "netCDF4_dir")
+    except:
+        pass
+    try:
+        netCDF4_libdir = config.get("directories", "netCDF4_libdir")
+    except:
+        pass
+    try:
+        netCDF4_incdir = config.get("directories", "netCDF4_incdir")
+    except:
+        pass
+    try:
+        szip_dir = config.get("directories", "szip_dir")
+    except:
+        pass
+    try:
+        szip_libdir = config.get("directories", "szip_libdir")
+    except:
+        pass
+    try:
+        szip_incdir = config.get("directories", "szip_incdir")
+    except:
+        pass
+    try:
+        hdf4_dir = config.get("directories", "hdf4_dir")
+    except:
+        pass
+    try:
+        hdf4_libdir = config.get("directories", "hdf4_libdir")
+    except:
+        pass
+    try:
+        hdf4_incdir = config.get("directories", "hdf4_incdir")
+    except:
+        pass
+    try:
+        jpeg_dir = config.get("directories", "jpeg_dir")
+    except:
+        pass
+    try:
+        jpeg_libdir = config.get("directories", "jpeg_libdir")
+    except:
+        pass
+    try:
+        jpeg_incdir = config.get("directories", "jpeg_incdir")
+    except:
+        pass
+    try:
+        curl_dir = config.get("directories", "curl_dir")
+    except:
+        pass
+    try:
+        curl_libdir = config.get("directories", "curl_libdir")
+    except:
+        pass
+    try:
+        curl_incdir = config.get("directories", "curl_incdir")
+    except:
+        pass
+    try:
+        use_ncconfig = config.getboolean("options", "use_ncconfig")
+    except:
+        pass
+    try:
+        ncconfig = config.get("options", "ncconfig")
+    except:
+        pass
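The repeated try/except blocks above all follow one pattern; a hypothetical compaction (`get_opt` is not in setup.py, just an illustration of the design choice that every setup.cfg option is optional):

```python
import configparser  # 'ConfigParser' under Python 2, as in setup.py

def get_opt(config, section, option, default=None):
    # Every option is optional: any lookup failure keeps the default.
    try:
        return config.get(section, option)
    except Exception:
        return default

cfg = configparser.ConfigParser()
cfg.read_string("[directories]\nHDF5_dir = /opt/hdf5\n")
print(get_opt(cfg, "directories", "HDF5_dir"))   # set option
print(get_opt(cfg, "directories", "szip_dir"))   # missing -> None
```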
 
 # make sure USE_NCCONFIG from environment takes
 # precedence over use_ncconfig from setup.cfg (issue #341).
@@ -215,70 +253,112 @@ if USE_NCCONFIG:
     # if NETCDF4_DIR env var is set, look for nc-config in NETCDF4_DIR/bin.
     if ncconfig is None:
         if netCDF4_dir is not None:
-            ncconfig = os.path.join(netCDF4_dir,'bin/nc-config')
-        else: # otherwise, just hope it's in the users PATH.
+            ncconfig = os.path.join(netCDF4_dir, 'bin/nc-config')
+        else:  # otherwise, just hope it's in the users PATH.
             ncconfig = 'nc-config'
     try:
-        retcode = subprocess.call([ncconfig,'--libs'], stdout=subprocess.PIPE)
+        retcode = subprocess.call([ncconfig, '--libs'], stdout=subprocess.PIPE)
     except:
         retcode = 1
 else:
     retcode = 1
 
 try:
-    HAS_PKG_CONFIG = subprocess.call(['pkg-config','--libs', 'hdf5'], stdout=subprocess.PIPE) == 0
+    HAS_PKG_CONFIG = subprocess.call(['pkg-config', '--libs', 'hdf5'],
+                                     stdout=subprocess.PIPE) == 0
 except OSError:
     HAS_PKG_CONFIG = False
 
-if not retcode: # Try nc-config.
+def _populate_hdf5_info(dirstosearch, inc_dirs, libs, lib_dirs):
+    global HDF5_incdir, HDF5_dir, HDF5_libdir
+
+    if HAS_PKG_CONFIG:
+        dep = subprocess.Popen(['pkg-config', '--cflags', 'hdf5'],
+                               stdout=subprocess.PIPE).communicate()[0]
+        inc_dirs.extend([str(i[2:].decode()) for i in dep.split() if
+                         i[0:2].decode() == '-I'])
+        dep = subprocess.Popen(['pkg-config', '--libs', 'hdf5'],
+                               stdout=subprocess.PIPE).communicate()[0]
+        libs.extend(
+            [str(l[2:].decode()) for l in dep.split() if l[0:2].decode() == '-l'])
+        lib_dirs.extend(
+            [str(l[2:].decode()) for l in dep.split() if l[0:2].decode() == '-L'])
+        dep = subprocess.Popen(['pkg-config', '--cflags', 'hdf5'],
+                               stdout=subprocess.PIPE).communicate()[0]
+        inc_dirs.extend(
+            [str(i[2:].decode()) for i in dep.split() if i[0:2].decode() == '-I'])
+    else:
+        if HDF5_incdir is None and HDF5_dir is None:
+            sys.stdout.write("""
+    HDF5_DIR environment variable not set, checking some standard locations ..\n""")
+            for direc in dirstosearch:
+                sys.stdout.write('checking %s ...\n' % direc)
+                hdf5_version = check_hdf5version(os.path.join(direc, 'include'))
+                if hdf5_version is None:
+                    continue
+                else:
+                    HDF5_dir = direc
+                    HDF5_incdir = os.path.join(direc, 'include')
+                    sys.stdout.write('%s found in %s\n' %
+                                    (hdf5_version,HDF5_dir))
+                    break
+            if HDF5_dir is None:
+                raise ValueError('did not find HDF5 headers')
+        else:
+            if HDF5_incdir is None:
+                HDF5_incdir = os.path.join(HDF5_dir, 'include')
+            hdf5_version = check_hdf5version(HDF5_incdir)
+            if hdf5_version is None:
+                raise ValueError('did not find HDF5 headers in %s' % HDF5_incdir)
+            else:
+                sys.stdout.write('%s found in %s\n' %
+                                (hdf5_version,HDF5_dir))
+
+        if HDF5_libdir is None and HDF5_dir is not None:
+            HDF5_libdir = os.path.join(HDF5_dir, 'lib')
+
+        if HDF5_libdir is not None: lib_dirs.append(HDF5_libdir)
+        if HDF5_incdir is not None: inc_dirs.append(HDF5_incdir)
+
+        libs.extend(['hdf5_hl', 'hdf5'])
+
+
+dirstosearch = [os.path.expanduser('~'), '/usr/local', '/sw', '/opt',
+                '/opt/local', '/usr']
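How _populate_hdf5_info dissects pkg-config output, sketched as a pure function (`split_flags` is illustrative; the sample flag string is made up):

```python
def split_flags(pkgconfig_output):
    # -I tokens are include dirs, -l tokens libraries, -L tokens
    # library dirs; the prefix is stripped from each.
    toks = pkgconfig_output.split()
    return ([t[2:] for t in toks if t.startswith('-I')],
            [t[2:] for t in toks if t.startswith('-l')],
            [t[2:] for t in toks if t.startswith('-L')])

inc, libs, libdirs = split_flags('-I/usr/include/hdf5 -L/usr/lib -lhdf5_hl -lhdf5')
print(inc, libs, libdirs)
```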
+
+if not retcode:  # Try nc-config.
     sys.stdout.write('using nc-config ...\n')
-    dep=subprocess.Popen([ncconfig,'--libs'],stdout=subprocess.PIPE).communicate()[0]
-    libs = [str(l[2:].decode()) for l in dep.split() if l[0:2].decode() == '-l' ]
-    lib_dirs = [str(l[2:].decode()) for l in dep.split() if l[0:2].decode() == '-L' ]
-    dep=subprocess.Popen([ncconfig,'--cflags'],stdout=subprocess.PIPE).communicate()[0]
-    inc_dirs = [str(i[2:].decode()) for i in dep.split() if i[0:2].decode() == '-I']
-elif HAS_PKG_CONFIG: # Try pkg-config.
+    dep = subprocess.Popen([ncconfig, '--libs'],
+                           stdout=subprocess.PIPE).communicate()[0]
+    libs = [str(l[2:].decode()) for l in dep.split() if l[0:2].decode() == '-l']
+    lib_dirs = [str(l[2:].decode()) for l in dep.split() if
+                l[0:2].decode() == '-L']
+    dep = subprocess.Popen([ncconfig, '--cflags'],
+                           stdout=subprocess.PIPE).communicate()[0]
+    inc_dirs = [str(i[2:].decode()) for i in dep.split() if
+                i[0:2].decode() == '-I']
+
+    _populate_hdf5_info(dirstosearch, inc_dirs, libs, lib_dirs)
+elif HAS_PKG_CONFIG:  # Try pkg-config.
     sys.stdout.write('using pkg-config ...\n')
-    dep=subprocess.Popen(['pkg-config','--libs','netcdf'],stdout=subprocess.PIPE).communicate()[0]
-    libs = [str(l[2:].decode()) for l in dep.split() if l[0:2].decode() == '-l' ]
-    lib_dirs = [str(l[2:].decode()) for l in dep.split() if l[0:2].decode() == '-L' ]
-    dep=subprocess.Popen(['pkg-config','--cflags', 'hdf5'],stdout=subprocess.PIPE).communicate()[0]
-    inc_dirs = [str(i[2:].decode()) for i in dep.split() if i[0:2].decode() == '-I']
-    dep=subprocess.Popen(['pkg-config','--libs','hdf5'],stdout=subprocess.PIPE).communicate()[0]
-    libs.extend([str(l[2:].decode()) for l in dep.split() if l[0:2].decode() == '-l' ])
-    lib_dirs.extend([str(l[2:].decode()) for l in dep.split() if l[0:2].decode() == '-L' ])
-    dep=subprocess.Popen(['pkg-config','--cflags', 'hdf5'],stdout=subprocess.PIPE).communicate()[0]
-    inc_dirs.extend([str(i[2:].decode()) for i in dep.split() if i[0:2].decode() == '-I'])
+    dep = subprocess.Popen(['pkg-config', '--libs', 'netcdf'],
+                           stdout=subprocess.PIPE).communicate()[0]
+    libs = [str(l[2:].decode()) for l in dep.split() if l[0:2].decode() == '-l']
+    lib_dirs = [str(l[2:].decode()) for l in dep.split() if
+                l[0:2].decode() == '-L']
+
+    inc_dirs = []
+    _populate_hdf5_info(dirstosearch, inc_dirs, libs, lib_dirs)
 # If nc-config and pkg-config both didn't work (it won't on Windows), fall back on brute force method.
 else:
-    dirstosearch =  [os.path.expanduser('~'), '/usr/local', '/sw', '/opt', '/opt/local', '/usr']
+    lib_dirs = []
+    inc_dirs = []
+    libs = []
 
-    if HDF5_incdir is None and HDF5_dir is None:
-        sys.stdout.write("""
-HDF5_DIR environment variable not set, checking some standard locations ..\n""")
-        for direc in dirstosearch:
-            sys.stdout.write('checking %s ...\n' % direc)
-            hdf5_version = check_hdf5version(os.path.join(direc, 'include'))
-            if hdf5_version is None or hdf5_version[1:6] < '1.8.0':
-                continue
-            else:
-                HDF5_dir = direc
-                HDF5_incdir = os.path.join(direc, 'include')
-                sys.stdout.write('HDF5 found in %s\n' % HDF5_dir)
-                break
-        if HDF5_dir is None:
-            raise ValueError('did not find HDF5 headers')
-    else:
-        if HDF5_incdir is None:
-            HDF5_incdir = os.path.join(HDF5_dir, 'include')
-        hdf5_version = check_hdf5version(HDF5_incdir)
-        if hdf5_version is None:
-            raise ValueError('did not find HDF5 headers in %s' % HDF5_incdir)
-        elif hdf5_version[1:6] < '1.8.0':
-            raise ValueError('HDF5 version >= 1.8.0 is required')
+    _populate_hdf5_info(dirstosearch, inc_dirs, libs, lib_dirs)
 
     if netCDF4_incdir is None and netCDF4_dir is None:
-        sys.stdout.write( """
+        sys.stdout.write("""
 NETCDF4_DIR environment variable not set, checking standard locations.. \n""")
         for direc in dirstosearch:
             sys.stdout.write('checking %s ...\n' % direc)
@@ -297,23 +377,19 @@ NETCDF4_DIR environment variable not set, checking standard locations.. \n""")
             netCDF4_incdir = os.path.join(netCDF4_dir, 'include')
         isnetcdf4 = check_ifnetcdf4(netCDF4_incdir)
         if not isnetcdf4:
-            raise ValueError('did not find netCDF version 4 headers %s' % netCDF4_incdir)
-
-    if HDF5_libdir is None and HDF5_dir is not None:
-        HDF5_libdir = os.path.join(HDF5_dir, 'lib')
+            raise ValueError(
+                'did not find netCDF version 4 headers %s' % netCDF4_incdir)
 
     if netCDF4_libdir is None and netCDF4_dir is not None:
         netCDF4_libdir = os.path.join(netCDF4_dir, 'lib')
 
-    if sys.platform=='win32':
-        libs = ['netcdf', 'hdf5_hl', 'hdf5', 'zlib']
+    if sys.platform == 'win32':
+        libs.extend(['netcdf', 'zlib'])
     else:
-        libs = ['netcdf', 'hdf5_hl', 'hdf5', 'z']
+        libs.extend(['netcdf', 'z'])
 
-    if netCDF4_libdir is not None: lib_dirs = [netCDF4_libdir]
-    if HDF5_libdir is not None: lib_dirs.append(HDF5_libdir)
-    if netCDF4_incdir is not None: inc_dirs = [netCDF4_incdir]
-    if HDF5_incdir is not None: inc_dirs.append(HDF5_incdir)
+    if netCDF4_libdir is not None: lib_dirs.append(netCDF4_libdir)
+    if netCDF4_incdir is not None: inc_dirs.append(netCDF4_incdir)
 
     # add szip to link if desired.
     if szip_libdir is None and szip_dir is not None:
@@ -353,7 +429,7 @@ NETCDF4_DIR environment variable not set, checking standard locations.. \n""")
         lib_dirs.append(curl_libdir)
         inc_dirs.append(curl_incdir)
 
-if sys.platform=='win32':
+if sys.platform == 'win32':
     runtime_lib_dirs = []
 else:
     runtime_lib_dirs = lib_dirs
@@ -361,11 +437,12 @@ else:
 # Do not require numpy for just querying the package
 # Taken from the h5py setup file.
 if any('--' + opt in sys.argv for opt in Distribution.display_option_names +
-       ['help-commands', 'help']) or sys.argv[1] == 'egg_info':
+        ['help-commands', 'help']) or sys.argv[1] == 'egg_info':
     pass
 else:
     # append numpy include dir.
     import numpy
+
     inc_dirs.append(numpy.get_include())
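The guard above avoids importing numpy for metadata-only invocations; the idea reduces to a simple argv check. The option set below is a representative subset of `Distribution.display_option_names`, not the full list:

```python
def needs_numpy(argv):
    # Metadata-only invocations (help output, egg_info) must succeed without
    # numpy installed, so the numpy include dir is appended only otherwise.
    display_opts = {'--help', '--help-commands', '--version', '--name', '--url'}
    if display_opts & set(argv[1:]):
        return False
    return not (len(argv) > 1 and argv[1] == 'egg_info')
```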
 
 # get netcdf library version.
@@ -387,7 +464,8 @@ if 'sdist' not in sys.argv[1:] and 'clean' not in sys.argv[1:]:
         os.remove(netcdf4_src_c)
     # this determines whether renameGroup and filepath methods will work.
     has_rename_grp, has_nc_inq_path, has_nc_inq_format_extended, \
-    has_cdf5_format, has_nc_open_mem = check_api(inc_dirs)
+        has_cdf5_format, has_nc_open_mem = check_api(inc_dirs)
+
     f = open(osp.join('include', 'constants.pyx'), 'w')
     if has_rename_grp:
         sys.stdout.write('netcdf lib has group rename capability\n')
@@ -407,7 +485,8 @@ if 'sdist' not in sys.argv[1:] and 'clean' not in sys.argv[1:]:
         sys.stdout.write('netcdf lib has nc_inq_format_extended function\n')
         f.write('DEF HAS_NC_INQ_FORMAT_EXTENDED = 1\n')
     else:
-        sys.stdout.write('netcdf lib does not have nc_inq_format_extended function\n')
+        sys.stdout.write(
+            'netcdf lib does not have nc_inq_format_extended function\n')
         f.write('DEF HAS_NC_INQ_FORMAT_EXTENDED = 0\n')
 
     if has_nc_open_mem:
@@ -426,40 +505,42 @@ if 'sdist' not in sys.argv[1:] and 'clean' not in sys.argv[1:]:
 
     f.close()
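The block above probes the installed netcdf library and records each result as a Cython compile-time constant; the generation step amounts to the following (the feature names and values shown are examples, not the full probed set):

```python
# Each probed capability becomes a compile-time DEF constant written to
# include/constants.pyx, so the Cython source can conditionally compile
# code paths such as renameGroup and filepath.
features = {'HAS_RENAME_GRP': True, 'HAS_NC_OPEN_MEM': False}
lines = ['DEF %s = %d\n' % (name, int(flag))
         for name, flag in sorted(features.items())]
```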
     ext_modules = [Extension("netCDF4._netCDF4",
-                            [netcdf4_src_root + '.pyx'],
-                            libraries=libs,
-                            library_dirs=lib_dirs,
-                            include_dirs=inc_dirs+['include'],
-                            runtime_library_dirs=runtime_lib_dirs),
-                  Extension('netcdftime._netcdftime', ['netcdftime/_netcdftime.pyx'])]
+                             [netcdf4_src_root + '.pyx'],
+                             libraries=libs,
+                             library_dirs=lib_dirs,
+                             include_dirs=inc_dirs + ['include'],
+                             runtime_library_dirs=runtime_lib_dirs),
+                   Extension('netcdftime._netcdftime',
+                             ['netcdftime/_netcdftime.pyx'])]
 else:
     ext_modules = None
 
-setup(name = "netCDF4",
-  cmdclass = cmdclass,
-  version = "1.2.9",
-  long_description = "netCDF version 4 has many features not found in earlier versions of the library, such as hierarchical groups, zlib compression, multiple unlimited dimensions, and new data types.  It is implemented on top of HDF5.  This module implements most of the new features, and can read and write netCDF files compatible with older versions of the library.  The API is modelled after Scientific.IO.NetCDF, and should be familiar to users of that module.\n\nThis project has a `Sub [...]
-  author            = "Jeff Whitaker",
-  author_email      = "jeffrey.s.whitaker at noaa.gov",
-  url               = "http://github.com/Unidata/netcdf4-python",
-  download_url      = "http://python.org/pypi/netCDF4",
-  platforms         = ["any"],
-  license           = "OSI Approved",
-  description = "Provides an object-oriented python interface to the netCDF version 4 library.",
-  keywords = ['numpy','netcdf','data','science','network','oceanography','meteorology','climate'],
-  classifiers = ["Development Status :: 3 - Alpha",
-                 "Programming Language :: Python :: 2",
-                 "Programming Language :: Python :: 2.6",
-                 "Programming Language :: Python :: 2.7",
-                 "Programming Language :: Python :: 3",
-                 "Programming Language :: Python :: 3.3",
-                 "Programming Language :: Python :: 3.4",
-                 "Programming Language :: Python :: 3.5",
-                 "Intended Audience :: Science/Research",
-                 "License :: OSI Approved",
-                 "Topic :: Software Development :: Libraries :: Python Modules",
-                 "Topic :: System :: Archiving :: Compression",
-                 "Operating System :: OS Independent"],
-  packages = ['netcdftime', 'netCDF4'],
-  ext_modules = ext_modules,
-  **setuptools_extra_kwargs)
+setup(name="netCDF4",
+      cmdclass=cmdclass,
+      version="1.3.0",
+      long_description="netCDF version 4 has many features not found in earlier versions of the library, such as hierarchical groups, zlib compression, multiple unlimited dimensions, and new data types.  It is implemented on top of HDF5.  This module implements most of the new features, and can read and write netCDF files compatible with older versions of the library.  The API is modelled after Scientific.IO.NetCDF, and should be familiar to users of that module.\n\nThis project has a `S [...]
+      author="Jeff Whitaker",
+      author_email="jeffrey.s.whitaker at noaa.gov",
+      url="http://github.com/Unidata/netcdf4-python",
+      download_url="http://python.org/pypi/netCDF4",
+      platforms=["any"],
+      license="OSI Approved",
+      description="Provides an object-oriented python interface to the netCDF version 4 library.",
+      keywords=['numpy', 'netcdf', 'data', 'science', 'network', 'oceanography',
+                'meteorology', 'climate'],
+      classifiers=["Development Status :: 3 - Alpha",
+                   "Programming Language :: Python :: 2",
+                   "Programming Language :: Python :: 2.6",
+                   "Programming Language :: Python :: 2.7",
+                   "Programming Language :: Python :: 3",
+                   "Programming Language :: Python :: 3.3",
+                   "Programming Language :: Python :: 3.4",
+                   "Programming Language :: Python :: 3.5",
+                   "Intended Audience :: Science/Research",
+                   "License :: OSI Approved",
+                   "Topic :: Software Development :: Libraries :: Python Modules",
+                   "Topic :: System :: Archiving :: Compression",
+                   "Operating System :: OS Independent"],
+      packages=['netcdftime', 'netCDF4'],
+      ext_modules=ext_modules,
+      **setuptools_extra_kwargs)
diff --git a/test/tst_compound_alignment.py b/test/tst_compound_alignment.py
index fa25495..0e1c369 100644
--- a/test/tst_compound_alignment.py
+++ b/test/tst_compound_alignment.py
@@ -1,4 +1,4 @@
-""" This illustrates a bug when a structured array is extracted from a netCDF4.Variable using the slicing operation. 
+""" This illustrates a bug when a structured array is extracted from a netCDF4.Variable using the slicing operation.
 
 Bug is observed with EPD 7.3-1 and 7.3-2 (64-bit)
 """
@@ -72,7 +72,7 @@ cells     = numpy.array([ (387, 289, 65.64321899414062, -167.90093994140625, 355
         (395, 291, 65.70501708984375, -168.0037078857422, 3535, -10136, 8939, -16617, 6, 34129, 1, 0, 211, 587, 521, 310, 202, 76, 50, 1495, 2367, 3067, 2738, 3014, 6259, 12580, 6585, 17570, 7971, 8892, 935, 550, 701, 3061, 2505, 3297, 3223, 2541, 3409, 340, 340, 9979, 10894, 7916, 7332, 7873, 8761, 14838, 16775, 17160, 20953, 13, 6, 6, 15, 15, 15, 15, 0, 10, 5, 7, 8, 4, 5, 4, 7, 0, 0, 11, 12, 15, 4, 8, 3, 2, 6, 3, 15, 15, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 28, 28, 6, 6, 6, 3, 6, 0, 0, 2, 0,  [...]
         (396, 288, 65.72332000732422, -167.9132080078125, 3564, -10143, 8942, -16608, 5, 34233, 2, 0, 193, 492, 506, 293, 149, 65528, 23, 1499, 2334, 3021, 2614, 2881, 5710, 11442, 5980, 15879, 7044, 7474, 823, 504, 637, 3017, 2504, 3268, 3212, 2535, 3451, 347, 347, 10063, 11038, 7836, 7335, 7844, 8742, 14915, 16850, 17220, 20894, 13, 8, 6, 15, 15, 15, 15, 0, 10, 5, 8, 8, 4, 5, 4, 7, 0, 0, 13, 14, 15, 5, 12, 2, 2, 6, 3, 15, 15, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 28, 28, 6, 6, 6, 0, 6, 0, 0, 2 [...]
         (396, 289, 65.72077178955078, -167.94512939453125, 3554, -10146, 8941, -16611, 6, 34198, 2, 0, 195, 501, 510, 300, 157, 65528, 28, 1481, 2334, 3013, 2621, 2904, 5781, 11557, 6051, 16093, 7147, 7624, 847, 504, 637, 3017, 2504, 3253, 3205, 2490, 3410, 332, 332, 10017, 10938, 7821, 7291, 7805, 8707, 14853, 16850, 17207, 21072, 13, 8, 6, 15, 15, 15, 15, 0, 10, 5, 8, 8, 4, 5, 4, 7, 0, 0, 13, 14, 15, 5, 12, 2, 2, 6, 3, 15, 15, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 28, 28, 6, 6, 6, 0, 6, 0, 124 [...]
-        (396, 290, 65.71821594238281, -167.9770050048828, 3545, -10149, 8941, -16614, 9, 34164, 1, 0, 200, 526, 511, 301, 170, 65528, 35, 1480, 2350, 3029, 2645, 2928, 5907, 11842, 6208, 16528, 7384, 7988, 870, 527, 661, 3054, 2504, 3291, 3235, 2490, 3424, 354, 354, 10039, 10988, 7958, 7395, 7902, 8811, 14853, 16836, 17231, 20852, 13, 7, 6, 15, 15, 15, 15, 0, 10, 5, 8, 8, 4, 5, 4, 7, 0, 0, 12, 13, 15, 5, 12, 2, 2, 6, 3, 15, 15, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 28, 28, 6, 6, 6, 0, 6, 0, 0, 2 [...]
+        (396, 290, 65.71821594238281, -167.9770050048828, 3545, -10149, 8941, -16614, 9, 34164, 1, 0, 200, 526, 511, 301, 170, 65528, 35, 1480, 2350, 3029, 2645, 2928, 5907, 11842, 6208, 16528, 7384, 7988, 870, 527, 661, 3054, 2504, 3291, 3235, 2490, 3424, 354, 354, 10039, 10988, 7958, 7395, 7902, 8811, 14853, 16836, 17231, 20852, 13, 7, 6, 15, 15, 15, 15, 0, 10, 5, 8, 8, 4, 5, 4, 7, 0, 0, 12, 13, 15, 5, 12, 2, 2, 6, 3, 15, 15, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 28, 28, 6, 6, 6, 0, 6, 0, 0, 2 [...]
       dtype=[('mxd03_granule_row', '<i2'), ('mxd03_granule_column', '<i2'), ('mxd03_latitude', '<f4'), ('mxd03_longitude', '<f4'), ('mxd03_sensor_zenith', '<i2'), ('mxd03_sensor_azimuth', '<i2'), ('mxd03_solar_zenith', '<i2'), ('mxd03_solar_azimuth', '<i2'), ('mxd03_height', '<i2'), ('mxd03_range', '<u2'), ('mxd03_land_sea_mask', '|u1'), ('mxd03_gflags', '|u1'), ('mxd02_band_1A', '<u2'), ('mxd02_band_2A', '<u2'), ('mxd02_band_3A', '<u2'), ('mxd02_band_4A', '<u2'), ('mxd02_band_5A', '<u2' [...]
 
 FILE_NAME = tempfile.NamedTemporaryFile(suffix='.nc', delete=False).name
@@ -98,8 +98,8 @@ class CompoundAlignTestCase(unittest.TestCase):
     def runTest(self):
         f = netCDF4.Dataset(self.file, 'r')
         new_cells = f.variables["cells"][:]
-        assert new_cells.shape == cells.shape 
-        assert new_cells.dtype.names == cells.dtype.names
+        assert new_cells.shape == cells.shape
+        assert sorted(new_cells.dtype.names) == sorted(cells.dtype.names)
         for name in cells.dtype.names:
             assert cells[name].dtype.name == new_cells[name].dtype.name
             assert cells[name].shape == new_cells[name].shape
diff --git a/test/tst_compoundvar.py b/test/tst_compoundvar.py
index ef9ea0e..1ad87f8 100644
--- a/test/tst_compoundvar.py
+++ b/test/tst_compoundvar.py
@@ -6,11 +6,9 @@ from netCDF4 import Dataset, CompoundType
 import numpy as np
 from numpy.testing import assert_array_equal, assert_array_almost_equal
 
-
 # test compound data types.
 
 FILE_NAME = tempfile.NamedTemporaryFile(suffix='.nc', delete=False).name
-#FILE_NAME = 'test.nc'
 DIM_NAME = 'phony_dim'
 GROUP_NAME = 'phony_group'
 VAR_NAME = 'phony_compound_var'
@@ -20,11 +18,19 @@ TYPE_NAME3 = 'cmp3'
 TYPE_NAME4 = 'cmp4'
 TYPE_NAME5 = 'cmp5'
 DIM_SIZE=3
+# unaligned data types (note they are nested)
 dtype1=np.dtype([('i', 'i2'), ('j', 'i8')])
 dtype2=np.dtype([('x', 'f4',), ('y', 'f8',(3,2))])
 dtype3=np.dtype([('xx', dtype1), ('yy', dtype2)])
 dtype4=np.dtype([('xxx',dtype3),('yyy','f8', (4,))])
 dtype5=np.dtype([('x1', dtype1), ('y1', dtype2)])
+# aligned data types
+dtype1a = np.dtype({'names':['i','j'],'formats':['<i2','<i8']},align=True)
+dtype2a = np.dtype({'names':['x','y'],'formats':['<f4',('<f8', (3, 2))]},align=True)
+dtype3a = np.dtype({'names':['xx','yy'],'formats':[dtype1a,dtype2a]},align=True)
+dtype4a = np.dtype({'names':['xxx','yyy'],'formats':[dtype3a,('f8', (4,))]},align=True)
+dtype5a = np.dtype({'names':['x1','y1'],'formats':[dtype1a,dtype2a]},align=True)
+
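The new aligned dtypes above exist because the HDF5 compound layout is C-aligned; `align=True` inserts padding that the packed form lacks. A quick check of the difference for the simplest pair:

```python
import numpy as np

# Packed layout: fields butt up against each other (2 + 8 = 10 bytes).
packed = np.dtype([('i', 'i2'), ('j', 'i8')])
# C-aligned layout: 'j' is padded out to an 8-byte boundary (16 bytes total).
aligned = np.dtype({'names': ['i', 'j'], 'formats': ['<i2', '<i8']}, align=True)
```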
 data = np.zeros(DIM_SIZE,dtype4)
 data['xxx']['xx']['i']=1
 data['xxx']['xx']['j']=2
@@ -60,6 +66,22 @@ class VariablesTestCase(unittest.TestCase):
         vv = g.createVariable(VAR_NAME,cmptype5, DIM_NAME)
         v[:] = data
         vv[:] = datag
+        # try reading the data back before the file is closed
+        dataout = v[:]
+        dataoutg = vv[:]
+        assert (cmptype4 == dtype4a) # data type should be aligned
+        assert (dataout.dtype == dtype4a) # data type should be aligned
+        assert(list(f.cmptypes.keys()) ==\
+               [TYPE_NAME1,TYPE_NAME2,TYPE_NAME3,TYPE_NAME4,TYPE_NAME5])
+        assert_array_equal(dataout['xxx']['xx']['i'],data['xxx']['xx']['i'])
+        assert_array_equal(dataout['xxx']['xx']['j'],data['xxx']['xx']['j'])
+        assert_array_almost_equal(dataout['xxx']['yy']['x'],data['xxx']['yy']['x'])
+        assert_array_almost_equal(dataout['xxx']['yy']['y'],data['xxx']['yy']['y'])
+        assert_array_almost_equal(dataout['yyy'],data['yyy'])
+        assert_array_equal(dataoutg['x1']['i'],datag['x1']['i'])
+        assert_array_equal(dataoutg['x1']['j'],datag['x1']['j'])
+        assert_array_almost_equal(dataoutg['y1']['x'],datag['y1']['x'])
+        assert_array_almost_equal(dataoutg['y1']['y'],datag['y1']['y'])
         f.close()
 
     def tearDown(self):
@@ -75,6 +97,8 @@ class VariablesTestCase(unittest.TestCase):
         vv = g.variables[VAR_NAME]
         dataout = v[:]
         dataoutg = vv[:]
+        # make sure data type is aligned
+        assert (f.cmptypes['cmp4'] == dtype4a)
         assert(list(f.cmptypes.keys()) ==\
                [TYPE_NAME1,TYPE_NAME2,TYPE_NAME3,TYPE_NAME4,TYPE_NAME5])
         assert_array_equal(dataout['xxx']['xx']['i'],data['xxx']['xx']['i'])
diff --git a/test/tst_filepath.py b/test/tst_filepath.py
index 84835dd..79ef9a7 100644
--- a/test/tst_filepath.py
+++ b/test/tst_filepath.py
@@ -1,4 +1,5 @@
-import os
+import os, sys, shutil
+import tempfile
 import unittest
 import netCDF4
 
@@ -11,5 +12,16 @@ class test_filepath(unittest.TestCase):
     def test_filepath(self):
         assert self.nc.filepath() == str(self.netcdf_file)
 
+    def test_filepath_with_non_ascii_characters(self):
+        # create nc-file in a filepath using a cp1252 string
+        tmpdir = tempfile.mkdtemp()
+        filepath = os.path.join(tmpdir,b'Pl\xc3\xb6n.nc'.decode('cp1252'))
+        nc = netCDF4.Dataset(filepath,'w',encoding='cp1252')
+        filepatho = nc.filepath(encoding='cp1252')
+        assert filepath == filepatho
+        assert filepath.encode('cp1252') == filepatho.encode('cp1252')
+        nc.close()
+        shutil.rmtree(tmpdir)
+
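The test above builds a non-ASCII path by decoding fixed bytes as cp1252 and checks that `filepath(encoding='cp1252')` returns it intact; the string-level roundtrip it relies on is just:

```python
# These particular bytes decode under cp1252 (every byte used is defined in
# that codepage), and re-encoding must reproduce them exactly for the
# filepath comparison in the test to be meaningful.
raw = b'Pl\xc3\xb6n.nc'
name = raw.decode('cp1252')
```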
 if __name__ == '__main__':
     unittest.main()
diff --git a/test/tst_netcdftime.py b/test/tst_netcdftime.py
index c925c94..f4b31e8 100644
--- a/test/tst_netcdftime.py
+++ b/test/tst_netcdftime.py
@@ -1,7 +1,7 @@
 from netcdftime import utime, JulianDayFromDate, DateFromJulianDay
 from netcdftime import datetime as datetimex
 from netcdftime import DatetimeNoLeap, DatetimeAllLeap, Datetime360Day, DatetimeJulian, \
-    DatetimeGregorian, DatetimeProlepticGregorian
+    DatetimeGregorian, DatetimeProlepticGregorian, _parse_date
 from netCDF4 import Dataset, num2date, date2num, date2index, num2date
 import copy
 import numpy
@@ -226,9 +226,9 @@ class netcdftimeTestCase(unittest.TestCase):
         # check timezone offset
         d = datetime(2012, 2, 29, 15)
         # mixed_tz is -6 hours from UTC, mixed is UTC so
-        # difference in elapsed time is 6 hours.
+        # difference in elapsed time is -6 hours.
         assert(self.cdftime_mixed_tz.date2num(
-            d) - self.cdftime_mixed.date2num(d) == 6)
+            d) - self.cdftime_mixed.date2num(d) == -6)
 
         # Check comparisons with Python datetime types
 
@@ -391,15 +391,15 @@ class netcdftimeTestCase(unittest.TestCase):
         # date after gregorian switch, python datetime used
         date = datetime(1682,10,15) # assumed UTC
         num = date2num(date,units)
-        # UTC is 7 hours ahead of units, so num should be 7
-        assert (num == 7)
+        # UTC is 7 hours ahead of units, so num should be -7
+        assert (num == -7)
         assert (num2date(num, units) == date)
         units = 'hours since 1482-10-15 -07:00 UTC'
         # date before gregorian switch, netcdftime datetime used
         date = datetime(1482,10,15)
         num = date2num(date,units)
         date2 = num2date(num, units)
-        assert (num == 7)
+        assert (num == -7)
         assert (date2.year == date.year)
         assert (date2.month == date.month)
         assert (date2.day == date.day)
@@ -483,6 +483,28 @@ class netcdftimeTestCase(unittest.TestCase):
         assert (d.month == 1)
         assert (d.day == 1)
         assert (d.hour == 0)
+        # issue 685: wrong time zone conversion
+        # 'The following times all refer to the same moment: "18:30Z", "22:30+04", "1130-0700", and "15:00-03:30"'
+        # (https://en.wikipedia.org/w/index.php?title=ISO_8601&oldid=787811367#Time_offsets_from_UTC)
+        # test num2date
+        utc_date = datetime(2000,1,1,18,30)
+        for units in ("hours since 2000-01-01 22:30+04:00", "hours since 2000-01-01 11:30-07:00", "hours since 2000-01-01 15:00-03:30"):
+            d = num2date(0, units, calendar="standard")
+            assert(numpy.abs((d-utc_date).total_seconds()) < 1.e-3)
+            # also test with negative values to cover 2nd code path
+            d = num2date(-1, units, calendar="standard")
+            assert(numpy.abs((d - \
+                (utc_date-timedelta(hours=1))).total_seconds()) < 1.e-3)
+
+            n = date2num(utc_date, units, calendar="standard")
+            # n should always be 0 as all units refer to the same point in time
+            self.assertEqual(n, 0)
+        # explicitly test 2nd code path for date2num
+        units = "hours since 2000-01-01 22:30+04:00"
+        n = date2num(utc_date, units, calendar="julian")
+        # n should always be 0 as all units refer to the same point in time
+        assert_almost_equal(n, 0)
+
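The Wikipedia quote exercised in the hunk above can be verified directly with stdlib parsing (`datetime.fromisoformat`, available since Python 3.7, accepts `+hh:mm` offsets):

```python
from datetime import datetime

# all four ISO 8601 strings denote the same instant, 18:30 UTC
moments = [
    "2000-01-01T18:30:00+00:00",
    "2000-01-01T22:30:00+04:00",
    "2000-01-01T11:30:00-07:00",
    "2000-01-01T15:00:00-03:30",
]
parsed = [datetime.fromisoformat(s) for s in moments]
```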
 
 
 class TestDate2index(unittest.TestCase):
@@ -931,5 +953,40 @@ class DateTime(unittest.TestCase):
         for func in [not_comparable_1, not_comparable_2, not_comparable_3, not_comparable_4]:
             self.assertRaises(TypeError, func)
 
+class issue17TestCase(unittest.TestCase):
+    """Regression tests for issue #17/#669."""
+    # issue 17 / 699: timezone formats not supported correctly
+    # valid timezone formats are: +-hh, +-hh:mm, +-hhmm
+
+    def setUp(self):
+        pass
+
+    def test_parse_date_tz(self):
+        "Test timezone parsing in _parse_date"
+
+        # these should succeed and are ISO8601 compliant
+        expected_parsed_date = (2017, 5, 1, 0, 0, 0, 60.0)
+        for datestr in ("2017-05-01 00:00+01:00", "2017-05-01 00:00+0100", "2017-05-01 00:00+01"):
+            d = _parse_date(datestr)
+            assert_equal(d, expected_parsed_date)
+        # some more tests with non-zero minutes, should all be ISO compliant and work
+        expected_parsed_date = (2017, 5, 1, 0, 0, 0, 85.0)
+        for datestr in ("2017-05-01 00:00+01:25", "2017-05-01 00:00+0125"):
+            d = _parse_date(datestr)
+            assert_equal(d, expected_parsed_date)
+        # these are NOT ISO8601 compliant and should not even be parseable but will be parsed with timezone anyway
+        # because, due to support of other legacy time formats, they are difficult to reject
+        # ATTENTION: only the hours part of this will be parsed, single-digit minutes will be ignored!
+        expected_parsed_date = (2017, 5, 1, 0, 0, 0, 60.0)
+        for datestr in ("2017-05-01 00:00+01:0", "2017-05-01 00:00+01:", "2017-05-01 00:00+01:5"):
+            d = _parse_date(datestr)
+            assert_equal(d, expected_parsed_date)
+        # these should not even be parseable as datestrings but are parseable anyway with ignored timezone
+        # this is because the module also supports some legacy, non-standard time strings
+        expected_parsed_date = (2017, 5, 1, 0, 0, 0, 0.0)
+        for datestr in ("2017-05-01 00:00+1",):
+            d = _parse_date(datestr)
+            assert_equal(d, expected_parsed_date)
+
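The valid formats enumerated above (`+-hh`, `+-hh:mm`, `+-hhmm`) reduce to a single pattern. A sketch of such a parser for the ISO-compliant forms only (`tz_offset_minutes` is a hypothetical helper, not the `_parse_date` implementation, and it deliberately ignores the malformed legacy variants the test also probes):

```python
import re

# matches a trailing +-hh, +-hh:mm or +-hhmm timezone designator
_TZ = re.compile(r'([+-])(\d{2}):?(\d{2})?$')

def tz_offset_minutes(datestr):
    m = _TZ.search(datestr)
    if m is None:
        return 0.0  # no recognizable timezone designator -> treat as UTC
    sign = 1 if m.group(1) == '+' else -1
    return sign * (int(m.group(2)) * 60 + int(m.group(3) or 0))
```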
 if __name__ == '__main__':
     unittest.main()
diff --git a/test/tst_types.py b/test/tst_types.py
index e673c3b..53da815 100644
--- a/test/tst_types.py
+++ b/test/tst_types.py
@@ -37,6 +37,9 @@ class PrimitiveTypesTestCase(unittest.TestCase):
         v2 = file.createVariable('issue273', NP.dtype('S1'), 'n2',\
                 fill_value='\x00')
         v2[:] = issue273_data
+        v3 = file.createVariable('issue707',NP.int8,'n2')
+        v3.setncattr('missing_value',255)
+        v3[:]=-1
         file.close()
 
     def tearDown(self):
@@ -83,6 +86,10 @@ class PrimitiveTypesTestCase(unittest.TestCase):
         else:
             assert(v2._FillValue == u'') # python 2
         assert(str(issue273_data) == str(v2[:]))
+        # issue 707 (don't apply missing_value if cast to variable type is
+        # unsafe)
+        v3 = file.variables['issue707']
+        assert_array_equal(v3[:],-1*NP.ones(n2dim,v3.dtype))
         file.close()
 
 if __name__ == '__main__':
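The fix being tested above skips `missing_value` when it cannot be safely cast to the variable's dtype. A range check equivalent in spirit to that safety test, using the same values as the `issue707` variable (the library's actual check is its own, not this sketch):

```python
import numpy as np

missing_value = 255
var_dtype = np.dtype('i1')  # int8, as in the 'issue707' variable
info = np.iinfo(var_dtype)
# 255 lies outside int8's range [-128, 127], so the cast is unsafe and
# the missing_value attribute must not be used to mask the data
safe = info.min <= missing_value <= info.max
```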
diff --git a/test/tst_utils.py b/test/tst_utils.py
index 8b67ca7..7c6e3f0 100644
--- a/test/tst_utils.py
+++ b/test/tst_utils.py
@@ -61,7 +61,10 @@ class TestgetStartCountStride(unittest.TestCase):
         # this one should be converted to a slice
         elem = [slice(None), [1,3,5], 8]
         start, count, stride, put_ind = _StartCountStride(elem, (50, 6, 10))
-        assert_equal(put_ind[...,1].squeeze(), slice(None,None,None))
+        # pull request #683 now does not convert integer sequences to strided
+        # slices.
+        #assert_equal(put_ind[...,1].squeeze(), slice(None,None,None))
+        assert_equal(put_ind[...,1].squeeze(), [0,1,2])
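The behavior change from pull request #683, noted above, only shows up in the bookkeeping: for the data values themselves, the integer sequence and the strided slice it used to be converted to agree:

```python
import numpy as np

a = np.arange(60).reshape(10, 6)
# the sequence [1, 3, 5] selects the same data as the strided slice 1:6:2;
# after PR #683 only the internal put/take indices differ
fancy = a[:, [1, 3, 5]]
strided = a[:, 1:6:2]
```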
 
 
     def test_multiple_sequences(self):
@@ -181,7 +184,33 @@ class TestgetStartCountStride(unittest.TestCase):
         assert_equal(count, 0)
         assert_equal(_out_array_shape(count), (0,))
 
-
+    def test_ellipsis(self):
+        elem=(Ellipsis, slice(1, 4))
+        start, count, stride, put_ind = _StartCountStride(elem, (22,25,4))
+        assert_equal(start[0,0,0], [0, 0, 1])
+        assert_equal(count[0,0,0], (22, 25, 3))
+        assert_equal(put_ind[0,0,0], (slice(None), slice(None), slice(None)))
+
+        elem=(Ellipsis, [15,16,17,18,19], slice(None), slice(None))
+        start, count, stride, put_ind = _StartCountStride(elem, (2,10,20,10,10))
+        assert_equal(start[0,0,0,0,0], [0, 0, 15, 0, 0])
+        assert_equal(count[0,0,0,0,0], (2, 10, 5, 10, 10))
+        assert_equal(put_ind[0,0,0,0,0], (slice(None), slice(None), slice(None), slice(None), slice(None)))
+        
+        try:
+            elem=(Ellipsis, [15,16,17,18,19], slice(None))
+            start, count, stride, put_ind = _StartCountStride(elem, (2,10,20,10,10))
+            assert_equal(None, 'Should throw an exception')
+        except IndexError as e:
+            assert_equal(str(e), "integer index exceeds dimension size")
+            
+        try:
+            elem=(Ellipsis, [15,16,17,18,19], Ellipsis)
+            start, count, stride, put_ind = _StartCountStride(elem, (2,10, 20,10,10))
+            assert_equal(None, 'Should throw an exception')
+        except IndexError as e:
+            assert_equal(str(e), "At most one ellipsis allowed in a slicing expression")
+            
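The new tests above exercise Ellipsis handling in `_StartCountStride`; the core transformation they assume (a sketch, not the library code) is expanding a single Ellipsis into enough full slices to cover the variable's rank, and rejecting more than one:

```python
def expand_ellipsis(elem, ndim):
    # replace a single Ellipsis with enough full slices to cover ndim axes
    if elem.count(Ellipsis) > 1:
        raise IndexError("At most one ellipsis allowed in a slicing expression")
    if Ellipsis in elem:
        i = elem.index(Ellipsis)
        pad = (slice(None),) * (ndim - len(elem) + 1)
        elem = elem[:i] + pad + elem[i + 1:]
    return elem
```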
 class TestsetStartCountStride(unittest.TestCase):
 
     def test_basic(self):
@@ -189,7 +218,7 @@ class TestsetStartCountStride(unittest.TestCase):
         grp = FakeGroup({'x':False, 'y':False, 'time':True})
 
         elem=(slice(None), slice(None), 1)
-        start, count, stride, take_ind = _StartCountStride(elem, (22, 25, 1), ['x', 'y', 'time'], grp, (22,25))
+        start, count, stride, take_ind = _StartCountStride(elem, (22, 25, 1), ['x', 'y', 'time'], grp, (22,25), put=True)
         assert_equal(start[0][0][0], [0, 0, 1])
         assert_equal(count[0][0][0], (22, 25, 1))
         assert_equal(take_ind[0][0][0], (slice(None), slice(None), -1))
@@ -205,7 +234,7 @@ class TestsetStartCountStride(unittest.TestCase):
         grp = FakeGroup({'x':False, 'y':False})
 
         elem=([0,4,5], slice(20, None))
-        start, count, stride, take_ind = _StartCountStride(elem, (22, 25), ['x', 'y'], grp, (3,5))
+        start, count, stride, take_ind = _StartCountStride(elem, (22, 25), ['x', 'y'], grp, (3,5), put=True)
         assert_equal(start[0][0], (0, 20))
         assert_equal(start[1][0], (4, 20))
         assert_equal(start[2][0], (5, 20))
@@ -215,12 +244,11 @@ class TestsetStartCountStride(unittest.TestCase):
         assert_equal(take_ind[1][0], (1, slice(None)))
         assert_equal(take_ind[2][0], (2, slice(None)))
 
-
     def test_booleans(self):
         grp = FakeGroup({'x':False, 'y':False, 'z':False})
 
         elem=([0,4,5], np.array([False, True, False, True, True]), slice(None))
-        start, count, stride, take_ind = _StartCountStride(elem, (10, 5, 12), ['x', 'y', 'z'], grp, (3, 3, 12))
+        start, count, stride, take_ind = _StartCountStride(elem, (10, 5, 12), ['x', 'y', 'z'], grp, (3, 3, 12), put=True)
         assert_equal(start[0][0][0], (0, 1, 0))
         assert_equal(start[1][0][0], (4, 1, 0))
         assert_equal(start[2][0][0], (5, 1, 0))
@@ -244,15 +272,52 @@ class TestsetStartCountStride(unittest.TestCase):
         assert_equal(take_ind[2][0][0], (2, slice(None), slice(None)))
 
 
-        elem = (slice(None, None, 2), slice(None), slice(None))
-        start, count, stride, take_ind = _StartCountStride(elem, (0, 6, 7),\
-                ['time', 'x', 'y'], grp, (10, 6, 7),put=True)
-        assert_equal(start[0][0][0], (0,0,0))
-        assert_equal(count[0][0][0], (5, 6, 7))
-        assert_equal(stride[0][0][0], (2, 1, 1))
-        assert_equal(take_ind[0][0][0], 3*(slice(None),))
-
+        # pull request #683 broke this, since _StartCountStride now uses
+        # Dimension.__len__.
+        #elem = (slice(None, None, 2), slice(None), slice(None))
+        #start, count, stride, take_ind = _StartCountStride(elem, (0, 6, 7),\
+        #        ['time', 'x', 'y'], grp, (10, 6, 7),put=True)
+        #assert_equal(start[0][0][0], (0,0,0))
+        #assert_equal(count[0][0][0], (5, 6, 7))
+        #assert_equal(stride[0][0][0], (2, 1, 1))
+        #assert_equal(take_ind[0][0][0], 3*(slice(None),))
+     
+    def test_ellipsis(self):
+        grp = FakeGroup({'x':False, 'y':False, 'time':True})
 
+        elem=(Ellipsis, slice(1, 4))
+        start, count, stride, take_ind = _StartCountStride(elem, (22,25,1),\
+            ['x', 'y', 'time'], grp, (22,25,3), put=True)
+        assert_equal(start[0,0,0], [0, 0, 1])
+        assert_equal(count[0,0,0], (22, 25, 3))
+        assert_equal(take_ind[0,0,0], (slice(None), slice(None), slice(None)))
+        
+        grp = FakeGroup({'time':True, 'h':False, 'z':False, 'y':False, 'x':False})
+
+        elem=(Ellipsis, [15,16,17,18,19], slice(None), slice(None))
+        start, count, stride, take_ind = _StartCountStride(elem, (2,10,20,10,10),\
+            ['time', 'h', 'z', 'y', 'x'], grp, (2,10,5,10,10), put=True)
+        assert_equal(start[0,0,0,0,0], [0, 0, 15, 0, 0])
+        assert_equal(count[0,0,0,0,0], [2, 10, 5, 10, 10])
+        assert_equal(stride[0,0,0,0,0], [1, 1, 1, 1, 1])
+        assert_equal(take_ind[0,0,0,0,0], (slice(None), slice(None), slice(None), slice(None), slice(None)))
+        
+        try:
+            elem=(Ellipsis, [15,16,17,18,19], slice(None))
+            start, count, stride, take_ind = _StartCountStride(elem, (2,10,20,10,10),\
+               ['time', 'z', 'y', 'x'], grp, (2,10,5,10,10), put=True)
+            assert_equal(None, 'Should throw an exception')
+        except IndexError as e:
+            assert_equal(str(e), "integer index exceeds dimension size")
+            
+        try:
+            elem=(Ellipsis, [15,16,17,18,19], Ellipsis)
+            start, count, stride, take_ind = _StartCountStride(elem, (2,10, 20,10,10),\
+               ['time', 'z', 'y', 'x'], grp, (2,10,5,10,10), put=True)
+            assert_equal(None, 'Should throw an exception')
+        except IndexError as e:
+            assert_equal(str(e), "At most one ellipsis allowed in a slicing expression")
+       
 class FakeGroup(object):
     """Create a fake group instance by passing a dictionary of booleans
     keyed by dimension name."""

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/pkg-grass/netcdf4-python.git



More information about the Pkg-grass-devel mailing list