[Git][debian-gis-team/satpy][upstream] New upstream version 0.19.1

Antonio Valentino gitlab at salsa.debian.org
Mon Jan 13 21:40:21 GMT 2020



Antonio Valentino pushed to branch upstream at Debian GIS Project / satpy


Commits:
6c017186 by Antonio Valentino at 2020-01-13T21:25:15+00:00
New upstream version 0.19.1
- - - - -


27 changed files:

- .git_archival.txt
- .travis.yml
- AUTHORS.md
- CHANGELOG.md
- appveyor.yml
- doc/source/dev_guide/custom_reader.rst
- satpy/etc/readers/ahi_hsd.yaml
- satpy/etc/readers/avhrr_l1b_aapp.yaml
- satpy/etc/readers/olci_l2.yaml
- satpy/etc/readers/viirs_edr_flood.yaml
- satpy/readers/_geos_area.py
- satpy/readers/aapp_l1b.py
- satpy/readers/abi_base.py
- satpy/readers/ami_l1b.py
- satpy/readers/tropomi_l2.py
- satpy/readers/utils.py
- satpy/readers/yaml_reader.py
- satpy/tests/reader_tests/test_abi_l1b.py
- satpy/tests/reader_tests/test_abi_l2_nc.py
- satpy/tests/reader_tests/test_ami_l1b.py
- satpy/tests/reader_tests/test_glm_l2.py
- satpy/tests/reader_tests/test_tropomi_l2.py
- satpy/tests/test_yaml_reader.py
- satpy/tests/writer_tests/test_cf.py
- satpy/tests/writer_tests/test_scmi.py
- satpy/writers/cf_writer.py
- satpy/writers/scmi.py


Changes:

=====================================
.git_archival.txt
=====================================
@@ -1 +1 @@
-ref-names: HEAD -> master, tag: v0.19.0
\ No newline at end of file
+ref-names: HEAD -> master, tag: v0.19.1
\ No newline at end of file


=====================================
.travis.yml
=====================================
@@ -3,7 +3,7 @@ env:
     global:
         # Set defaults to avoid repeating in most cases
         - PYTHON_VERSION=$TRAVIS_PYTHON_VERSION
-        - NUMPY_VERSION=stable
+        - NUMPY_VERSION=1.17
         - MAIN_CMD='python setup.py'
         - CONDA_DEPENDENCIES='xarray!=0.13.0 dask distributed toolz Cython sphinx cartopy pillow matplotlib scipy pyyaml pyproj pyresample coveralls coverage codecov behave netcdf4 h5py h5netcdf gdal rasterio imageio pyhdf mock libtiff geoviews zarr six python-eccodes'
         - PIP_DEPENDENCIES='trollsift trollimage pyspectral pyorbital libtiff'


=====================================
AUTHORS.md
=====================================
@@ -27,8 +27,8 @@ The following people have made contributions to this project:
 - [David Hoese (djhoese)](https://github.com/djhoese)
 - [Marc Honnorat (honnorat)](https://github.com/honnorat)
 - [Mikhail Itkin (mitkin)](https://github.com/mitkin)
-- [JohannesSMHI (JohannesSMHI)](https://github.com/JohannesSMHI)
 - [Tommy Jasmin (tommyjasmin)](https://github.com/tommyjasmin)
+- [Johannes Johansson (JohannesSMHI)](https://github.com/JohannesSMHI)
 - [Sauli Joro (sjoro)](https://github.com/sjoro)
 - [Janne Kotro (jkotro)](https://github.com/jkotro)
 - [Ralph Kuehn (ralphk11)](https://github.com/ralphk11)


=====================================
CHANGELOG.md
=====================================
@@ -1,3 +1,36 @@
+## Version 0.19.1 (2020/01/10)
+
+### Issues Closed
+
+* [Issue 1030](https://github.com/pytroll/satpy/issues/1030) - Geostationary padding results in wrong area definition for AHI mesoscale sectors. ([PR 1037](https://github.com/pytroll/satpy/pull/1037))
+* [Issue 1029](https://github.com/pytroll/satpy/issues/1029) - NetCDF (CF) writer doesn't include semi_minor_axis/semi_major_axis for new versions of pyproj ([PR 1040](https://github.com/pytroll/satpy/pull/1040))
+* [Issue 1023](https://github.com/pytroll/satpy/issues/1023) - RTD "Edit on Github" broken in "latest" documentation
+
+In this release 3 issues were closed.
+
+### Pull Requests Merged
+
+#### Bugs fixed
+
+* [PR 1040](https://github.com/pytroll/satpy/pull/1040) - Fix geostationary axis handling in CF writer ([1029](https://github.com/pytroll/satpy/issues/1029))
+* [PR 1037](https://github.com/pytroll/satpy/pull/1037) - Fix segment handling for non-FLDK sectors in the AHI HSD reader ([1030](https://github.com/pytroll/satpy/issues/1030))
+* [PR 1036](https://github.com/pytroll/satpy/pull/1036) - Fix ABI L1b/L2 time dimension causing issues with newer xarray
+* [PR 1034](https://github.com/pytroll/satpy/pull/1034) - Fix AMI geolocation being off by 1 pixel
+* [PR 1033](https://github.com/pytroll/satpy/pull/1033) - Fix avhrr_l1b_aapp reader not including standard_name metadata
+* [PR 1031](https://github.com/pytroll/satpy/pull/1031) - Fix tropomi_l2 reader not using y and x dimension names
+
+#### Features added
+
+* [PR 1035](https://github.com/pytroll/satpy/pull/1035) - Add additional Sentinel 3 OLCI 2 datasets
+* [PR 1027](https://github.com/pytroll/satpy/pull/1027) - Update SCMI writer and VIIRS EDR Flood reader to work for pre-tiled data
+
+#### Documentation changes
+
+* [PR 1032](https://github.com/pytroll/satpy/pull/1032) - Add documentation about y and x dimensions for custom readers
+
+In this release 9 pull requests were closed.
+
+
 ## Version 0.19.0 (2019/12/30)
 
 ### Issues Closed


=====================================
appveyor.yml
=====================================
@@ -11,7 +11,7 @@ environment:
     - PYTHON: "C:\\Python37_64"
       PYTHON_VERSION: "3.7"
       PYTHON_ARCH: "64"
-      NUMPY_VERSION: "stable"
+      NUMPY_VERSION: "1.16"
 
 install:
     - "git clone --depth 1 git://github.com/astropy/ci-helpers.git"


=====================================
doc/source/dev_guide/custom_reader.rst
=====================================
@@ -466,6 +466,15 @@ needs to implement a few methods:
    successful, containing the data and :ref:`metadata <dataset_metadata>` of the
    loaded dataset, or return None if the loading was unsuccessful.
 
+   The DataArray should at least have a ``y`` dimension. For data covering
+   a 2D region on the Earth, there should be at least a ``y`` and ``x``
+   dimension. This applies to
+   non-gridded data like that of a polar-orbiting satellite instrument. The
+   latitude dimension is typically named ``y`` and longitude named ``x``.
+   This may require renaming dimensions from the file; see the
+   :meth:`xarray.DataArray.rename` method for more information and its use
+   in the example below.
+
  - the ``get_area_def`` method, that takes as single argument the
    :class:`~satpy.dataset.DatasetID` for which we want
    the area. It should return a :class:`~pyresample.geometry.AreaDefinition`
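
A minimal sketch of the renaming the added documentation describes, assuming
hypothetical file dimension names 'rows' and 'columns' standing in for
whatever names the file format actually uses:

    import numpy as np
    import xarray as xr

    # data as decoded from a file, with format-specific dimension names
    data = xr.DataArray(np.zeros((2, 3)), dims=('rows', 'columns'))
    # rename to the y/x dimensions Satpy expects for 2D earth-covering data
    data = data.rename({'rows': 'y', 'columns': 'x'})
    assert data.dims == ('y', 'x')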


=====================================
satpy/etc/readers/ahi_hsd.yaml
=====================================
@@ -287,82 +287,66 @@ datasets:
 
 file_types:
   hsd_b01:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B01_{area}_R10_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B01_{area}_R10_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10
   hsd_b02:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B02_{area}_R10_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B02_{area}_R10_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10
   hsd_b03:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B03_{area}_R05_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B03_{area}_R05_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10
   hsd_b04:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B04_{area}_R10_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B04_{area}_R10_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10
   hsd_b05:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B05_{area}_R20_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B05_{area}_R20_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10
   hsd_b06:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B06_{area}_R20_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B06_{area}_R20_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10
   hsd_b07:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B07_{area}_R20_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B07_{area}_R20_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10
   hsd_b08:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B08_{area}_R20_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B08_{area}_R20_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10
   hsd_b09:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B09_{area}_R20_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B09_{area}_R20_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10
   hsd_b10:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B10_{area}_R20_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B10_{area}_R20_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10
   hsd_b11:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B11_{area}_R20_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B11_{area}_R20_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10
   hsd_b12:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B12_{area}_R20_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B12_{area}_R20_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10
   hsd_b13:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B13_{area}_R20_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B13_{area}_R20_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10
   hsd_b14:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B14_{area}_R20_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B14_{area}_R20_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10
   hsd_b15:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B15_{area}_R20_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B15_{area}_R20_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10
   hsd_b16:
-    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler ''
+    file_reader: !!python/name:satpy.readers.ahi_hsd.AHIHSDFileHandler
     file_patterns: ['HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B16_{area}_R20_S{segment:2d}{total_segments:2d}.DAT',
                     'HS_{platform_shortname}_{start_time:%Y%m%d_%H%M}_B16_{area}_R20_S{segment:2d}{total_segments:2d}.DAT.bz2']
-    expected_segments: 10


=====================================
satpy/etc/readers/avhrr_l1b_aapp.yaml
=====================================
@@ -92,6 +92,8 @@ datasets:
             - longitude
             - latitude
         file_type: avhrr_aapp_l1b
+        standard_name: solar_zenith_angle
+        units: degrees
 
     sensor_zenith_angle:
         name: sensor_zenith_angle
@@ -100,6 +102,8 @@ datasets:
             - longitude
             - latitude
         file_type: avhrr_aapp_l1b
+        standard_name: sensor_zenith_angle
+        units: degrees
 
     sun_sensor_azimuth_difference_angle:
         name: sun_sensor_azimuth_difference_angle
@@ -108,18 +112,21 @@ datasets:
             - longitude
             - latitude
         file_type: avhrr_aapp_l1b
+        units: degrees
 
     latitude:
         name: latitude
         resolution: 1050
         file_type: avhrr_aapp_l1b
         standard_name: latitude
+        units: degrees_north
 
     longitude:
         name: longitude
         resolution: 1050
         file_type: avhrr_aapp_l1b
         standard_name: longitude
+        units: degrees_east
 
 file_types:
     avhrr_aapp_l1b:


=====================================
satpy/etc/readers/olci_l2.yaml
=====================================
@@ -16,6 +16,15 @@ file_types:
     esa_l2_chl_oc4me:
         file_reader: !!python/name:satpy.readers.olci_nc.NCOLCI2
         file_patterns: ['{mission_id:3s}_OL_2_{datatype_id:_<6s}_{start_time:%Y%m%dT%H%M%S}_{end_time:%Y%m%dT%H%M%S}_{creation_time:%Y%m%dT%H%M%S}_{duration:4d}_{cycle:3d}_{relative_orbit:3d}_{frame:4d}_{centre:3s}_{mode:1s}_{timeliness:2s}_{collection:3s}.SEN3/chl_oc4me.nc']
+    esa_l2_iop_nn:
+        file_reader: !!python/name:satpy.readers.olci_nc.NCOLCI2
+        file_patterns: ['{mission_id:3s}_OL_2_{datatype_id:_<6s}_{start_time:%Y%m%dT%H%M%S}_{end_time:%Y%m%dT%H%M%S}_{creation_time:%Y%m%dT%H%M%S}_{duration:4d}_{cycle:3d}_{relative_orbit:3d}_{frame:4d}_{centre:3s}_{mode:1s}_{timeliness:2s}_{collection:3s}.SEN3/iop_nn.nc']
+    esa_l2_trsp:
+        file_reader: !!python/name:satpy.readers.olci_nc.NCOLCI2
+        file_patterns: ['{mission_id:3s}_OL_2_{datatype_id:_<6s}_{start_time:%Y%m%dT%H%M%S}_{end_time:%Y%m%dT%H%M%S}_{creation_time:%Y%m%dT%H%M%S}_{duration:4d}_{cycle:3d}_{relative_orbit:3d}_{frame:4d}_{centre:3s}_{mode:1s}_{timeliness:2s}_{collection:3s}.SEN3/trsp.nc']
+    esa_l2_tsm_nn:
+        file_reader: !!python/name:satpy.readers.olci_nc.NCOLCI2
+        file_patterns: ['{mission_id:3s}_OL_2_{datatype_id:_<6s}_{start_time:%Y%m%dT%H%M%S}_{end_time:%Y%m%dT%H%M%S}_{creation_time:%Y%m%dT%H%M%S}_{duration:4d}_{cycle:3d}_{relative_orbit:3d}_{frame:4d}_{centre:3s}_{mode:1s}_{timeliness:2s}_{collection:3s}.SEN3/tsm_nn.nc']
     esa_l2_wqsf:
         file_reader: !!python/name:satpy.readers.olci_nc.NCOLCI2
         file_patterns: ['{mission_id:3s}_OL_2_{datatype_id:_<6s}_{start_time:%Y%m%dT%H%M%S}_{end_time:%Y%m%dT%H%M%S}_{creation_time:%Y%m%dT%H%M%S}_{duration:4d}_{cycle:3d}_{relative_orbit:3d}_{frame:4d}_{centre:3s}_{mode:1s}_{timeliness:2s}_{collection:3s}.SEN3/wqsf.nc']
@@ -339,6 +348,42 @@ datasets:
     file_type: esa_l2_chl_nn
     nc_key: CHL_NN
 
+  iop_nn:
+    name: iop_nn
+    sensor: olci
+    resolution: 300
+    calibration:
+      reflectance:
+        standard_name: cdm_absorption_coefficient
+        units: "lg(re m-l)"
+    coordinates: [longitude, latitude]
+    file_type: esa_l2_iop_nn
+    nc_key: ADG443_NN
+
+  trsp:
+    name: trsp
+    sensor: olci
+    resolution: 300
+    calibration:
+      reflectance:
+        standard_name: diffuse_attenuation_coefficient
+        units: "lg(re m-l)"
+    coordinates: [longitude, latitude]
+    file_type: esa_l2_trsp
+    nc_key: KD490_M07
+
+  tsm_nn:
+    name: tsm_nn
+    sensor: olci
+    resolution: 300
+    calibration:
+      reflectance:
+        standard_name: total_suspended_matter_concentration
+        units: "lg(re g.m-3)"
+    coordinates: [longitude, latitude]
+    file_type: esa_l2_tsm_nn
+    nc_key: TSM_NN
+
   wqsf:
     name: wqsf
     sensor: olci
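
The new datasets can then be requested like any other OLCI Level 2 product; a
hedged sketch, assuming the 'olci_l2' reader name and local .SEN3 directories
matching the file patterns above:

    from glob import glob
    from satpy import Scene

    # hypothetical input files; the glob must match the SEN3 patterns above
    scn = Scene(reader='olci_l2', filenames=glob('S3A_OL_2_WFR*.SEN3/*.nc'))
    scn.load(['iop_nn', 'trsp', 'tsm_nn'])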


=====================================
satpy/etc/readers/viirs_edr_flood.yaml
=====================================
@@ -10,6 +10,7 @@ file_types:
         file_patterns:
             - 'WATER_VIIRS_Prj_SVI_{platform_shortname}_d{start_time:%Y%m%d_t%H%M%S%f}_e{end_time:%H%M%S%f}_b{orbit:5d}_{source:8s}_{dim0:d}_{dim1:d}_01.hdf'
             - 'WATER_VIIRS_Prj_SVI_{platform_shortname}_d{start_time:%Y%m%d_t%H%M%S%f}_e{end_time:%H%M%S%f}_b{orbit:5d}_{source:8s}_{aoi:3s}_{dim0:d}_{dim1:d}_01.hdf'
+            - 'WATER_COM_VIIRS_Prj_SVI_d{start_time:%Y%m%d}_d{end_time:%Y%m%d}_{dim0:d}_{dim1:d}_{unknown1:2d}_{total_days:3d}day_{tile_num:3d}.hdf'
 
 datasets:
     water_detection:
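
The added entry is an ordinary trollsift pattern; a small sketch of how a
made-up filename for the multi-day composite product would be parsed:

    from trollsift import parse

    pattern = ('WATER_COM_VIIRS_Prj_SVI_d{start_time:%Y%m%d}_d{end_time:%Y%m%d}_'
               '{dim0:d}_{dim1:d}_{unknown1:2d}_{total_days:3d}day_{tile_num:3d}.hdf')
    # hypothetical filename shaped to match the pattern added above
    fname = 'WATER_COM_VIIRS_Prj_SVI_d20200101_d20200105_3000_5000_01_005day_001.hdf'
    info = parse(pattern, fname)
    assert info['total_days'] == 5 and info['tile_num'] == 1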


=====================================
satpy/readers/_geos_area.py
=====================================
@@ -79,7 +79,7 @@ def get_area_extent(pdict):
     # count starts at 1
     cols = 1 - 0.5
 
-    if (pdict['scandir'] == 'S2N'):
+    if pdict['scandir'] == 'S2N':
         lines = 0.5 - 1
         scanmult = -1
     else:


=====================================
satpy/readers/aapp_l1b.py
=====================================
@@ -56,14 +56,17 @@ PLATFORM_NAMES = {4: 'NOAA-15',
 
 
 def create_xarray(arr):
+    """Create xarray DataArray from numpy array."""
     res = da.from_array(arr, chunks=(CHUNK_SIZE, CHUNK_SIZE))
     res = xr.DataArray(res, dims=['y', 'x'])
     return res
 
 
 class AVHRRAAPPL1BFile(BaseFileHandler):
+    """Reader for AVHRR L1B files created from the AAPP software."""
 
     def __init__(self, filename, filename_info, filetype_info):
+        """Initialize object information by reading the input file."""
         super(AVHRRAAPPL1BFile, self).__init__(filename, filename_info,
                                                filetype_info)
         self.channels = {i: None for i in AVHRR_CHANNEL_NAMES}
@@ -89,23 +92,20 @@ class AVHRRAAPPL1BFile(BaseFileHandler):
 
     @property
     def start_time(self):
+        """Get the time of the first observation."""
         return datetime(self._data['scnlinyr'][0], 1, 1) + timedelta(
             days=int(self._data['scnlindy'][0]) - 1,
             milliseconds=int(self._data['scnlintime'][0]))
 
     @property
     def end_time(self):
+        """Get the time of the final observation."""
         return datetime(self._data['scnlinyr'][-1], 1, 1) + timedelta(
             days=int(self._data['scnlindy'][-1]) - 1,
             milliseconds=int(self._data['scnlintime'][-1]))
 
-    def shape(self):
-        # return self._data.shape
-        return self._shape
-
     def get_dataset(self, key, info):
         """Get a dataset from the file."""
-
         if key.name in CHANNEL_NAMES:
             dataset = self.calibrate(key)
         elif key.name in ['longitude', 'latitude']:
@@ -128,6 +128,9 @@ class AVHRRAAPPL1BFile(BaseFileHandler):
         dataset.attrs.update({'platform_name': self.platform_name,
                               'sensor': self.sensor})
         dataset.attrs.update(key.to_dict())
+        for meta_key in ('standard_name', 'units'):
+            if meta_key in info:
+                dataset.attrs.setdefault(meta_key, info[meta_key])
 
         if not self._shape:
             self._shape = dataset.shape
@@ -135,8 +138,7 @@ class AVHRRAAPPL1BFile(BaseFileHandler):
         return dataset
 
     def read(self):
-        """Read the data.
-        """
+        """Read the data."""
         tic = datetime.now()
         with open(self.filename, "rb") as fp_:
             header = np.memmap(fp_, dtype=_HEADERTYPE, mode="r", shape=(1, ))
@@ -148,10 +150,7 @@ class AVHRRAAPPL1BFile(BaseFileHandler):
         self._data = data
 
     def get_angles(self, angle_id):
-        """Get sun-satellite viewing angles"""
-
-        tic = datetime.now()
-
+        """Get sun-satellite viewing angles."""
         sunz40km = self._data["ang"][:, :, 0] * 1e-2
         satz40km = self._data["ang"][:, :, 1] * 1e-2
         azidiff40km = self._data["ang"][:, :, 2] * 1e-2
@@ -177,15 +176,10 @@ class AVHRRAAPPL1BFile(BaseFileHandler):
                 (rows1km, cols1km), along_track_order, cross_track_order)
             self.sunz, self.satz, self.azidiff = satint.interpolate()
 
-            logger.debug("Interpolate sun-sat angles: time %s",
-                         str(datetime.now() - tic))
-
         return create_xarray(getattr(self, ANGLES[angle_id]))
 
     def navigate(self):
-        """Return the longitudes and latitudes of the scene.
-        """
-        tic = datetime.now()
+        """Get the longitudes and latitudes of the scene."""
         lons40km = self._data["pos"][:, :, 1] * 1e-4
         lats40km = self._data["pos"][:, :, 0] * 1e-4
 
@@ -209,16 +203,12 @@ class AVHRRAAPPL1BFile(BaseFileHandler):
                 (lons40km, lats40km), (rows40km, cols40km), (rows1km, cols1km),
                 along_track_order, cross_track_order)
             self.lons, self.lats = satint.interpolate()
-            logger.debug("Navigation time %s", str(datetime.now() - tic))
 
     def calibrate(self,
                   dataset_id,
                   pre_launch_coeffs=False,
                   calib_coeffs=None):
-        """Calibrate the data
-        """
-        tic = datetime.now()
-
+        """Calibrate the data."""
         if calib_coeffs is None:
             calib_coeffs = {}
 
@@ -268,9 +258,6 @@ class AVHRRAAPPL1BFile(BaseFileHandler):
 
         ds.attrs['units'] = units[dataset_id.calibration]
         ds.attrs.update(dataset_id._asdict())
-
-        logger.debug("Calibration time %s", str(datetime.now() - tic))
-
         return ds
 
 
@@ -458,13 +445,13 @@ def _vis_calibrate(data,
                    pre_launch_coeffs=False,
                    calib_coeffs=None,
                    mask=False):
-    """Visible channel calibration only.
+    """Calibrate visible channel data.
+
+    ``calib_type`` in counts, reflectance, radiance.
 
-    *calib_type* in count, reflectance, radiance
     """
     # Calibration count to albedo, the calibration is performed separately for
     # two value ranges.
-
     if calib_type not in ['counts', 'radiance', 'reflectance']:
         raise ValueError('Calibration ' + calib_type + ' unknown!')
 
@@ -525,10 +512,11 @@ def _vis_calibrate(data,
 
 
 def _ir_calibrate(header, data, irchn, calib_type, mask=False):
-    """IR calibration
-    *calib_type* in brightness_temperature, radiance, count
-    """
+    """Calibrate for IR bands.
 
+    ``calib_type`` in brightness_temperature, radiance, count
+
+    """
     count = data["hrpt"][:, :, irchn + 2].astype(np.float)
 
     if calib_type == 0:
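
A small sketch of the metadata fallback added to get_dataset above: values the
file handler already set win, and the YAML-declared standard_name and units
only fill the gaps:

    # 'info' mirrors the dataset metadata declared in avhrr_l1b_aapp.yaml
    info = {'standard_name': 'solar_zenith_angle', 'units': 'degrees'}
    attrs = {'units': 'deg'}  # pretend the file handler already set units
    for meta_key in ('standard_name', 'units'):
        if meta_key in info:
            attrs.setdefault(meta_key, info[meta_key])
    assert attrs == {'units': 'deg', 'standard_name': 'solar_zenith_angle'}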


=====================================
satpy/readers/abi_base.py
=====================================
@@ -53,7 +53,7 @@ class NC_ABI_BASE(BaseFileHandler):
                                       mask_and_scale=False,
                                       chunks={'lon': CHUNK_SIZE, 'lat': CHUNK_SIZE}, )
 
-        if 't' in self.nc.dims:
+        if 't' in self.nc.dims or 't' in self.nc.coords:
             self.nc = self.nc.rename({'t': 'time'})
         platform_shortname = filename_info['platform_shortname']
         self.platform_name = PLATFORM_NAMES.get(platform_shortname)
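
The extended check matters when 't' is a scalar coordinate, which never shows
up in the dataset's dims; a minimal illustration:

    import numpy as np
    import xarray as xr

    ds = xr.Dataset({'Rad': (('y', 'x'), np.zeros((2, 2)))},
                    coords={'t': np.datetime64('2017-09-20T17:30:40')})
    assert 't' in ds.coords and 't' not in ds.dims
    ds = ds.rename({'t': 'time'})  # now caught by the 'or' branch above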


=====================================
satpy/readers/ami_l1b.py
=====================================
@@ -25,7 +25,7 @@ import xarray as xr
 import dask.array as da
 import pyproj
 
-from satpy.readers._geos_area import make_ext, get_area_definition
+from satpy.readers._geos_area import get_area_definition, get_area_extent
 from pyspectral.blackbody import blackbody_wn_rad2temp as rad2temp
 from satpy.readers.file_handlers import BaseFileHandler
 from satpy import CHUNK_SIZE
@@ -85,26 +85,19 @@ class AMIL1bNetCDF(BaseFileHandler):
         obs_mode = self.nc.attrs['observation_mode']
         resolution = self.nc.attrs['channel_spatial_resolution']
 
+        # Example offset: 11000.5
+        # the 'get_area_extent' will handle this half pixel for us
         pdict['cfac'] = self.nc.attrs['cfac']
         pdict['coff'] = self.nc.attrs['coff']
-        pdict['lfac'] = self.nc.attrs['lfac']
+        pdict['lfac'] = -self.nc.attrs['lfac']
         pdict['loff'] = self.nc.attrs['loff']
-
-        # AMI grid appears offset, we can not use the standard get_area_extent
-        bit_shift = 2**16
-        ll_x = (0 - pdict['coff'] - 0.5) * bit_shift / pdict['cfac']
-        ll_y = -(0 - pdict['loff'] - 0.5) * bit_shift / pdict['lfac']
-        ur_x = (pdict['ncols'] - pdict['coff'] + 0.5) * bit_shift / pdict['cfac']
-        ur_y = -(pdict['nlines'] - pdict['loff'] + 0.5) * bit_shift / pdict['lfac']
-
-        area_extent = make_ext(ll_x, ur_x, ll_y, ur_y, pdict['h'])
-
+        pdict['scandir'] = 'N2S'
         pdict['a_name'] = 'ami_geos_{}'.format(obs_mode.lower())
         pdict['a_desc'] = 'AMI {} Area at {} resolution'.format(obs_mode, resolution)
         pdict['p_id'] = 'ami_fixed_grid'
 
+        area_extent = get_area_extent(pdict)
         fg_area_def = get_area_definition(pdict, area_extent)
-
         return fg_area_def
 
     def get_orbital_parameters(self):
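
A hedged sketch of the shared extent computation now used above; the numbers
are made up, and the 'h' key is an assumption, but the other keys appear in
the pdict built by the reader:

    from satpy.readers._geos_area import get_area_extent

    pdict = {'cfac': 45729231, 'lfac': -45729231,  # note the negated lfac
             'coff': 2750.5, 'loff': 2750.5,  # half-pixel handled internally
             'ncols': 5500, 'nlines': 5500,
             'scandir': 'N2S', 'h': 35786023.0}  # 'h': assumed height key
    area_extent = get_area_extent(pdict)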


=====================================
satpy/readers/tropomi_l2.py
=====================================
@@ -145,6 +145,15 @@ class TROPOMIL2FileHandler(NetCDF4FileHandler):
 
         return metadata
 
+    def _rename_dims(self, data_arr):
+        """Normalize dimension names with the rest of Satpy."""
+        dims_dict = {}
+        if 'ground_pixel' in data_arr.dims:
+            dims_dict['ground_pixel'] = 'x'
+        if 'scanline' in data_arr.dims:
+            dims_dict['scanline'] = 'y'
+        return data_arr.rename(dims_dict)
+
     def get_dataset(self, ds_id, ds_info):
         """Get dataset."""
         logger.debug("Getting data for: %s", ds_id.name)
@@ -154,4 +163,5 @@ class TROPOMIL2FileHandler(NetCDF4FileHandler):
         fill = data.attrs.pop('_FillValue')
         data = data.squeeze()
         data = data.where(data != fill)
+        data = self._rename_dims(data)
         return data


=====================================
satpy/readers/utils.py
=====================================
@@ -144,8 +144,8 @@ def get_geostationary_bounding_box(geos_area, nb_points=50):
 
     # generate points around the north hemisphere in satellite projection
     # make it a bit smaller so that we stay inside the valid area
-    x = np.cos(np.linspace(-np.pi, 0, nb_points / 2)) * (xmax - 0.001)
-    y = -np.sin(np.linspace(-np.pi, 0, nb_points / 2)) * (ymax - 0.001)
+    x = np.cos(np.linspace(-np.pi, 0, nb_points // 2)) * (xmax - 0.001)
+    y = -np.sin(np.linspace(-np.pi, 0, nb_points // 2)) * (ymax - 0.001)
 
     # clip the projection coordinates to fit the area extent of geos_area
     ll_x, ll_y, ur_x, ur_y = (np.array(geos_area.area_extent) /
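
The fix avoids handing a float to numpy.linspace: in Python 3, nb_points / 2
is true division, and newer NumPy rejects a float sample count:

    import numpy as np

    nb_points = 50
    # nb_points / 2 == 25.0 (a float); newer numpy raises TypeError for it
    x = np.cos(np.linspace(-np.pi, 0, nb_points // 2))  # // keeps an int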


=====================================
satpy/readers/yaml_reader.py
=====================================
@@ -860,9 +860,36 @@ def _load_area_def(dsid, file_handlers):
 class GEOSegmentYAMLReader(FileYAMLReader):
     """Reader for segmented geostationary data.
 
-    This reader pads the data to full geostationary disk.
+    This reader pads the data to full geostationary disk if necessary.
+
+    This reader uses an optional ``pad_data`` keyword argument that can be
+    passed to :meth:`Scene.load` to control if padding is done (True by
+    default). Passing ``pad_data=False`` will return data unpadded.
+
+    When using this class in a reader's YAML configuration, segmented file
+    types (files that may have multiple segments) should specify an extra
+    ``expected_segments`` piece of file_type metadata. This tells this reader
+    how many total segments it should expect when padding data. Alternatively,
+    the file patterns for a file type can include a ``total_segments``
+    field which will be used if ``expected_segments`` is not defined. This
+    will default to 1 segment.
+
     """
 
+    def create_filehandlers(self, filenames, fh_kwargs=None):
+        """Create file handler objects and determine expected segments for each."""
+        created_fhs = super(GEOSegmentYAMLReader, self).create_filehandlers(
+            filenames, fh_kwargs=fh_kwargs)
+
+        # add "expected_segments" information
+        for fhs in created_fhs.values():
+            for fh in fhs:
+                # check the filename for total_segments parameter as a fallback
+                ts = fh.filename_info.get('total_segments', 1)
+                # if the YAML has segments explicitly specified then use that
+                fh.filetype_info.setdefault('expected_segments', ts)
+        return created_fhs
+
     @staticmethod
     def _load_dataset(dsid, ds_info, file_handlers, dim='y', pad_data=True):
         """Load only a piece of the dataset."""
@@ -932,8 +959,7 @@ def _stack_area_defs(area_def_dict):
 def _pad_later_segments_area(file_handlers, dsid):
     """Pad area definitions for missing segments that are later in sequence than the first available."""
     seg_size = None
-    expected_segments = file_handlers[0].filetype_info.get(
-        'expected_segments', 1)
+    expected_segments = file_handlers[0].filetype_info['expected_segments']
     available_segments = [int(fh.filename_info.get('segment', 1)) for
                           fh in file_handlers]
     area_defs = {}
@@ -990,11 +1016,13 @@ def _find_missing_segments(file_handlers, ds_info, dsid):
     failure = True
     counter = 1
     expected_segments = 1
+    # get list of file handlers in segment order
+    # (ex. first segment, second segment, etc)
     handlers = sorted(file_handlers, key=lambda x: x.filename_info.get('segment', 1))
     projectable = None
     for fh in handlers:
         if fh.filetype_info['file_type'] in ds_info['file_type']:
-            expected_segments = fh.filetype_info.get('expected_segments', 1)
+            expected_segments = fh.filetype_info['expected_segments']
 
         while int(fh.filename_info.get('segment', 1)) > counter:
             slice_list.append(None)
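
A small sketch of the precedence implemented in create_filehandlers above:
the YAML-declared expected_segments wins over a total_segments field parsed
from the filename, which in turn wins over the default of 1:

    filename_info = {'total_segments': 3}  # parsed from the file pattern
    filetype_info = {'expected_segments': 2}  # declared in the reader YAML
    ts = filename_info.get('total_segments', 1)
    filetype_info.setdefault('expected_segments', ts)
    assert filetype_info['expected_segments'] == 2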


=====================================
satpy/tests/reader_tests/test_abi_l1b.py
=====================================
@@ -27,35 +27,6 @@ except ImportError:
     import mock
 
 
-class FakeDataset(object):
-    """Act like an xarray Dataset object for testing."""
-
-    def __init__(self, info, attrs, dims=None):
-        """Set properties to mimic a Dataset object."""
-        for var_name, var_data in list(info.items()):
-            if isinstance(var_data, np.ndarray):
-                info[var_name] = xr.DataArray(var_data)
-        self.info = info
-        self.attrs = attrs
-        self.dims = dims or tuple()
-
-    def __getitem__(self, key):
-        """Get the info for the fake data."""
-        return self.info[key]
-
-    def __contains__(self, key):
-        """Check if key is in the fake data."""
-        return key in self.info
-
-    def rename(self, *args, **kwargs):
-        """Allow for dimension renaming."""
-        return self
-
-    def close(self):
-        """Pretend to close."""
-        return
-
-
 class Test_NC_ABI_L1B_Base(unittest.TestCase):
     """Common setup for NC_ABI_L1B tests."""
 
@@ -81,16 +52,18 @@ class Test_NC_ABI_L1B_Base(unittest.TestCase):
                     'units': 'W m-2 um-1 sr-1'
                 }
             )
-        rad['time'] = time
-        rad['x_image'] = x_image
-        rad['y_image'] = y_image
+        rad.coords['t'] = time
+        rad.coords['x_image'] = x_image
+        rad.coords['y_image'] = y_image
         x__ = xr.DataArray(
             range(5),
             attrs={'scale_factor': 2., 'add_offset': -1.},
+            dims=('x',)
         )
         y__ = xr.DataArray(
             range(2),
             attrs={'scale_factor': -2., 'add_offset': 1.},
+            dims=('y',)
         )
         proj = xr.DataArray(
             [],
@@ -103,30 +76,38 @@ class Test_NC_ABI_L1B_Base(unittest.TestCase):
                 'sweep_angle_axis': u'x'
             }
         )
-        yaw_flip = xr.DataArray([1])
-        xr_.open_dataset.return_value = FakeDataset({
-            'Rad': rad,
-            'band_id': np.array(8),
-            'x': x__,
-            'y': y__,
-            'x_image': x_image,
-            'y_image': y_image,
-            'goes_imager_projection': proj,
-            'yaw_flip_flag': yaw_flip,
-            "planck_fk1": np.array(13432.1),
-            "planck_fk2": np.array(1497.61),
-            "planck_bc1": np.array(0.09102),
-            "planck_bc2": np.array(0.99971),
-            "esun": np.array(2017),
-            "nominal_satellite_subpoint_lat": np.array(0.0),
-            "nominal_satellite_subpoint_lon": np.array(-89.5),
-            "nominal_satellite_height": np.array(35786.02),
-            "earth_sun_distance_anomaly_in_AU": np.array(0.99)},
-            {
+        fake_dataset = xr.Dataset(
+            data_vars={
+                'Rad': rad,
+                'band_id': np.array(8),
+                # 'x': x__,
+                # 'y': y__,
+                'x_image': x_image,
+                'y_image': y_image,
+                'goes_imager_projection': proj,
+                'yaw_flip_flag': np.array([1]),
+                "planck_fk1": np.array(13432.1),
+                "planck_fk2": np.array(1497.61),
+                "planck_bc1": np.array(0.09102),
+                "planck_bc2": np.array(0.99971),
+                "esun": np.array(2017),
+                "nominal_satellite_subpoint_lat": np.array(0.0),
+                "nominal_satellite_subpoint_lon": np.array(-89.5),
+                "nominal_satellite_height": np.array(35786.02),
+                "earth_sun_distance_anomaly_in_AU": np.array(0.99)
+            },
+            coords={
+                't': rad.coords['t'],
+                'x': x__,
+                'y': y__,
+
+            },
+            attrs={
                 "time_coverage_start": "2017-09-20T17:30:40.8Z",
                 "time_coverage_end": "2017-09-20T17:41:17.5Z",
-            }, dims=('y', 'x'))
-
+            },
+        )
+        xr_.open_dataset.return_value = fake_dataset
         self.reader = NC_ABI_L1B('filename',
                                  {'platform_shortname': 'G16', 'observation_type': 'Rad',
                                   'scene_abbr': 'C', 'scan_mode': 'M3'},
@@ -173,6 +154,11 @@ class Test_NC_ABI_L1B(Test_NC_ABI_L1B_Base):
                'units': 'W m-2 um-1 sr-1'}
 
         self.assertDictEqual(res.attrs, exp)
+        # we remove any time dimension information
+        self.assertNotIn('t', res.coords)
+        self.assertNotIn('t', res.dims)
+        self.assertNotIn('time', res.coords)
+        self.assertNotIn('time', res.dims)
 
     def test_bad_calibration(self):
         """Test that asking for a bad calibration fails."""


=====================================
satpy/tests/reader_tests/test_abi_l2_nc.py
=====================================
@@ -19,7 +19,6 @@
 import sys
 import numpy as np
 import xarray as xr
-from .test_abi_l1b import FakeDataset
 
 if sys.version_info < (2, 7):
     import unittest2 as unittest
@@ -53,10 +52,12 @@ class Test_NC_ABI_L2_base(unittest.TestCase):
         x__ = xr.DataArray(
             [0, 1],
             attrs={'scale_factor': 2., 'add_offset': -1.},
+            dims=('x',),
         )
         y__ = xr.DataArray(
             [0, 1],
             attrs={'scale_factor': -2., 'add_offset': 1.},
+            dims=('y',),
         )
 
         ht_da = xr.DataArray(np.array([2, -1, -32768, 32767]).astype(np.int16).reshape((2, 2)),
@@ -67,23 +68,24 @@ class Test_NC_ABI_L2_base(unittest.TestCase):
                                     '_Unsigned': 'True',
                                     'units': 'm'},)
 
-        xr_.open_dataset.return_value = FakeDataset({
-            'goes_imager_projection': proj,
-            'x': x__,
-            'y': y__,
-            'HT': ht_da,
-            "nominal_satellite_subpoint_lat": np.array(0.0),
-            "nominal_satellite_subpoint_lon": np.array(-89.5),
-            "nominal_satellite_height": np.array(35786020.),
-            "spatial_resolution": "10km at nadir",
+        fake_dataset = xr.Dataset(
+            data_vars={
+                'goes_imager_projection': proj,
+                'x': x__,
+                'y': y__,
+                'HT': ht_da,
+                "nominal_satellite_subpoint_lat": np.array(0.0),
+                "nominal_satellite_subpoint_lon": np.array(-89.5),
+                "nominal_satellite_height": np.array(35786020.),
+                "spatial_resolution": "10km at nadir",
+
             },
-            {
+            attrs={
                 "time_coverage_start": "2017-09-20T17:30:40.8Z",
                 "time_coverage_end": "2017-09-20T17:41:17.5Z",
-            },
-            dims=('y', 'x'),
+            }
         )
-
+        xr_.open_dataset.return_value = fake_dataset
         self.reader = NC_ABI_L2('filename',
                                 {'platform_shortname': 'G16', 'observation_type': 'HT',
                                  'scan_mode': 'M3'},
@@ -168,17 +170,23 @@ class Test_NC_ABI_L2_area_latlon(unittest.TestCase):
         x__ = xr.DataArray(
             [0, 1],
             attrs={'scale_factor': 2., 'add_offset': -1.},
+            dims=('lon',),
         )
         y__ = xr.DataArray(
             [0, 1],
             attrs={'scale_factor': -2., 'add_offset': 1.},
+            dims=('lat',),
+        )
+        fake_dataset = xr.Dataset(
+            data_vars={
+                'goes_lat_lon_projection': proj,
+                'geospatial_lat_lon_extent': proj_ext,
+                'lon': x__,
+                'lat': y__,
+                'RSR': xr.DataArray(np.ones((2, 2)), dims=('lat', 'lon')),
+            },
         )
-        xr_.open_dataset.return_value = FakeDataset({
-            'goes_lat_lon_projection': proj,
-            'geospatial_lat_lon_extent': proj_ext,
-            'lon': x__,
-            'lat': y__,
-            'RSR': np.ones((2, 2))}, {}, dims=('lon', 'lat'))
+        xr_.open_dataset.return_value = fake_dataset
 
         self.reader = NC_ABI_L2('filename',
                                 {'platform_shortname': 'G16', 'observation_type': 'RSR',


=====================================
satpy/tests/reader_tests/test_ami_l1b.py
=====================================
@@ -199,7 +199,7 @@ class TestAMIL1bNetCDF(TestAMIL1bNetCDFBase):
         self.assertEqual(call_args[4], self.reader.nc.attrs['number_of_columns'])
         self.assertEqual(call_args[5], self.reader.nc.attrs['number_of_lines'])
         np.testing.assert_allclose(call_args[6],
-                                   [-5511523.904082, -5511523.904082, 5511022.902, 5511022.902])
+                                   [-5511022.902, -5511022.902, 5511022.902, 5511022.902])
 
     def test_get_dataset_vis(self):
         """Test get visible calibrated data."""


=====================================
satpy/tests/reader_tests/test_glm_l2.py
=====================================
@@ -28,35 +28,6 @@ except ImportError:
     import mock
 
 
-class FakeDataset(object):
-    """Act like an xarray Dataset object for testing."""
-
-    def __init__(self, info, attrs, dims=None):
-        """Set properties to mimic a Dataset object."""
-        for var_name, var_data in list(info.items()):
-            if isinstance(var_data, np.ndarray):
-                info[var_name] = xr.DataArray(var_data)
-        self.info = info
-        self.attrs = attrs
-        self.dims = dims or tuple()
-
-    def __getitem__(self, key):
-        """Get the info for the fake data."""
-        return self.info[key]
-
-    def __contains__(self, key):
-        """Check if key is in the fake data."""
-        return key in self.info
-
-    def rename(self, *args, **kwargs):
-        """Allow for dimension renaming."""
-        return self
-
-    def close(self):
-        """Pretend to close."""
-        return
-
-
 def setup_fake_dataset():
     """Create a fake dataset to avoid opening a file."""
     # flash_extent_density
@@ -79,10 +50,12 @@ def setup_fake_dataset():
     x__ = xr.DataArray(
         range(5),
         attrs={'scale_factor': 2., 'add_offset': -1.},
+        dims=('x',),
     )
     y__ = xr.DataArray(
         range(2),
         attrs={'scale_factor': -2., 'add_offset': 1.},
+        dims=('y',),
     )
     proj = xr.DataArray(
         [],
@@ -95,20 +68,22 @@ def setup_fake_dataset():
             'sweep_angle_axis': u'x'
         }
     )
-    fake_dataset = FakeDataset({
-        'flash_extent_density': fed,
-        'x': x__,
-        'y': y__,
-        'goes_imager_projection': proj,
-        "nominal_satellite_subpoint_lat": np.array(0.0),
-        "nominal_satellite_subpoint_lon": np.array(-89.5),
-        "nominal_satellite_height": np.array(35786.02)
-    },
-        {
+    fake_dataset = xr.Dataset(
+        data_vars={
+            'flash_extent_density': fed,
+            'x': x__,
+            'y': y__,
+            'goes_imager_projection': proj,
+            "nominal_satellite_subpoint_lat": np.array(0.0),
+            "nominal_satellite_subpoint_lon": np.array(-89.5),
+            "nominal_satellite_height": np.array(35786.02)
+        },
+        attrs={
             "time_coverage_start": "2017-09-20T17:30:40Z",
             "time_coverage_end": "2017-09-20T17:41:17Z",
             "spatial_resolution": "2km at nadir",
-        }, dims=('y', 'x'))
+        }
+    )
     return fake_dataset
 
 


=====================================
satpy/tests/reader_tests/test_tropomi_l2.py
=====================================
@@ -77,7 +77,7 @@ class FakeNetCDF4FileHandlerTL2(FakeNetCDF4FileHandler):
             for key, val in file_content.items():
                 if isinstance(val, np.ndarray):
                     if val.ndim > 1:
-                        file_content[key] = DataArray(val, dims=('y', 'x'))
+                        file_content[key] = DataArray(val, dims=('scanline', 'ground_pixel'))
                     else:
                         file_content[key] = DataArray(val)
             file_content['PRODUCT/latitude'].attrs['_FillValue'] = -999.0
@@ -139,6 +139,8 @@ class TestTROPOMIL2Reader(unittest.TestCase):
             self.assertEqual(d.attrs['sensor'], 'TROPOMI')
             self.assertIn('area', d.attrs)
             self.assertIsNotNone(d.attrs['area'])
+            self.assertIn('y', d.dims)
+            self.assertIn('x', d.dims)
 
     def test_load_so2(self):
         """Load SO2 dataset"""
@@ -155,6 +157,8 @@ class TestTROPOMIL2Reader(unittest.TestCase):
             self.assertEqual(d.attrs['platform_shortname'], 'S5P')
             self.assertIn('area', d.attrs)
             self.assertIsNotNone(d.attrs['area'])
+            self.assertIn('y', d.dims)
+            self.assertIn('x', d.dims)
 
 
 def suite():


=====================================
satpy/tests/test_yaml_reader.py
=====================================
@@ -607,6 +607,48 @@ class TestGEOSegmentYAMLReader(unittest.TestCase):
         GEOSegmentYAMLReader.__bases__ = (MagicMock, )
         self.reader = GEOSegmentYAMLReader()
 
+    def test_get_expected_segments(self):
+        """Test that expected segments can come from the filename."""
+        from satpy.readers.yaml_reader import GEOSegmentYAMLReader
+        cfh = MagicMock()
+        # Hacky: This is setting an attribute on the MagicMock *class*
+        #        not on a MagicMock instance
+        GEOSegmentYAMLReader.__bases__[0].create_filehandlers = cfh
+
+        fake_fh = MagicMock()
+        fake_fh.filename_info = {}
+        fake_fh.filetype_info = {}
+        cfh.return_value = {'ft1': [fake_fh]}
+        reader = GEOSegmentYAMLReader()
+        # default (1)
+        created_fhs = reader.create_filehandlers(['fake.nc'])
+        es = created_fhs['ft1'][0].filetype_info['expected_segments']
+        self.assertEqual(es, 1)
+
+        # YAML defined for each file type
+        fake_fh.filetype_info['expected_segments'] = 2
+        created_fhs = reader.create_filehandlers(['fake.nc'])
+        es = created_fhs['ft1'][0].filetype_info['expected_segments']
+        self.assertEqual(es, 2)
+
+        # defined both in the filename and the YAML metadata
+        # YAML has priority
+        fake_fh.filename_info = {'total_segments': 3}
+        fake_fh.filetype_info = {'expected_segments': 2}
+        created_fhs = reader.create_filehandlers(['fake.nc'])
+        es = created_fhs['ft1'][0].filetype_info['expected_segments']
+        self.assertEqual(es, 2)
+
+        # defined in the filename
+        fake_fh.filename_info = {'total_segments': 3}
+        fake_fh.filetype_info = {}
+        created_fhs = reader.create_filehandlers(['fake.nc'])
+        es = created_fhs['ft1'][0].filetype_info['expected_segments']
+        self.assertEqual(es, 3)
+
+        # undo the hacky-ness
+        del GEOSegmentYAMLReader.__bases__[0].create_filehandlers
+
     @patch('satpy.readers.yaml_reader.FileYAMLReader._load_dataset')
     @patch('satpy.readers.yaml_reader.xr')
     @patch('satpy.readers.yaml_reader._find_missing_segments')


=====================================
satpy/tests/writer_tests/test_cf.py
=====================================
@@ -743,6 +743,15 @@ class TestCFWriter(unittest.TestCase):
         import pyresample.geometry
         from satpy.writers.cf_writer import area2gridmapping
 
+        def _gm_matches(gmapping, expected):
+            """Assert that all keys in ``expected`` match the values in ``gmapping``."""
+            for attr_key, attr_val in expected.attrs.items():
+                test_val = gmapping.attrs[attr_key]
+                if attr_val is None or isinstance(attr_val, str):
+                    self.assertEqual(test_val, attr_val)
+                else:
+                    np.testing.assert_almost_equal(test_val, attr_val, decimal=3)
+
         ds_base = xr.DataArray(data=[[1, 2], [3, 4]], dims=('y', 'x'), coords={'y': [1, 2], 'x': [3, 4]},
                                attrs={'name': 'var1'})
 
@@ -759,8 +768,8 @@ class TestCFWriter(unittest.TestCase):
             area_extent=[-1, -1, 1, 1])
         geos_expected = xr.DataArray(data=0,
                                      attrs={'perspective_point_height': h,
-                                            'latitude_of_projection_origin': None,
-                                            'longitude_of_projection_origin': None,
+                                            'latitude_of_projection_origin': 0,
+                                            'longitude_of_projection_origin': 0,
                                             'grid_mapping_name': 'geostationary',
                                             'semi_major_axis': a,
                                             'semi_minor_axis': b,
@@ -772,7 +781,7 @@ class TestCFWriter(unittest.TestCase):
         res, grid_mapping = area2gridmapping(ds)
 
         self.assertEqual(res.attrs['grid_mapping'], 'geos')
-        self.assertEqual(grid_mapping, geos_expected)
+        _gm_matches(grid_mapping, geos_expected)
 
         # b) Projection does not have a corresponding CF representation (COSMO)
         cosmo7 = pyresample.geometry.AreaDefinition(
@@ -784,8 +793,6 @@ class TestCFWriter(unittest.TestCase):
             width=597, height=510,
             area_extent=[-1812933, -1003565, 814056, 1243448]
         )
-        proj_str = '+proj=ob_tran +ellps=WGS84 +lat_0=46.0 +lon_0=4.535 +o_proj=stere +o_lat_p=90.0 +o_lon_p=-5.465'
-        cosmo_expected = xr.DataArray(data=0, attrs={'name': 'proj4', 'proj4': proj_str})
 
         ds = ds_base.copy()
         ds.attrs['area'] = cosmo7
@@ -801,31 +808,25 @@ class TestCFWriter(unittest.TestCase):
             self.assertEqual(proj_dict['proj'], 'ob_tran')
             self.assertEqual(proj_dict['o_proj'], 'stere')
             self.assertEqual(proj_dict['ellps'], 'WGS84')
-            self.assertEqual(grid_mapping, cosmo_expected)
+            self.assertEqual(grid_mapping.attrs['name'], 'proj4')
 
         # c) Projection Transverse Mercator
         lat_0 = 36.5
         lon_0 = 15.0
-        lat_ts = 36.5
 
         tmerc = pyresample.geometry.AreaDefinition(
             area_id='tmerc',
             description='tmerc',
             proj_id='tmerc',
-            projection={'proj': 'tmerc', 'ellps': 'WGS84', 'lat_0': 36.5, 'lon_0': 15.0, 'lat_ts': 36.5},
+            projection={'proj': 'tmerc', 'ellps': 'WGS84', 'lat_0': 36.5, 'lon_0': 15.0},
             width=2, height=2,
             area_extent=[-1, -1, 1, 1])
 
         tmerc_expected = xr.DataArray(data=0,
-                                      attrs={'azimuth_of_central_line': 'alpha',
-                                             'latitude_of_projection_origin': lat_0,
-                                             'longitude_of_projection_origin': lon_0,
-                                             'latitude_of_meridian_ts': lat_ts,
+                                      attrs={'latitude_of_projection_origin': lat_0,
+                                             'longitude_of_central_meridian': lon_0,
                                              'grid_mapping_name': 'transverse_mercator',
-                                             'reference_ellipsoid_name': ('ellps', 'WGS84'),
-                                             'prime_meridian_name': ('pm', 'Greenwich'),
-                                             'horizontal_datum_name': ('datum', 'unknown'),
-                                             'geographic_crs_name': 'unknown',
+                                             'reference_ellipsoid_name': 'WGS84',
                                              'false_easting': 0.,
                                              'false_northing': 0.,
                                              'name': 'tmerc'})
@@ -834,7 +835,67 @@ class TestCFWriter(unittest.TestCase):
         ds.attrs['area'] = tmerc
         res, grid_mapping = area2gridmapping(ds)
         self.assertEqual(res.attrs['grid_mapping'], 'tmerc')
-        self.assertEqual(grid_mapping, tmerc_expected)
+        _gm_matches(grid_mapping, tmerc_expected)
+
+        # d) Projection that has a representation but no explicit a/b
+        h = 35785831.
+        geos = pyresample.geometry.AreaDefinition(
+            area_id='geos',
+            description='geos',
+            proj_id='geos',
+            projection={'proj': 'geos', 'h': h, 'datum': 'WGS84', 'ellps': 'GRS80'},
+            width=2, height=2,
+            area_extent=[-1, -1, 1, 1])
+        geos_expected = xr.DataArray(data=0,
+                                     attrs={'perspective_point_height': h,
+                                            'latitude_of_projection_origin': 0,
+                                            'longitude_of_projection_origin': 0,
+                                            'grid_mapping_name': 'geostationary',
+                                            'semi_major_axis': 6378137.0,
+                                            'semi_minor_axis': 6356752.314,
+                                            'sweep_axis': None,
+                                            'name': 'geos'})
+
+        ds = ds_base.copy()
+        ds.attrs['area'] = geos
+        res, grid_mapping = area2gridmapping(ds)
+
+        self.assertEqual(res.attrs['grid_mapping'], 'geos')
+        _gm_matches(grid_mapping, geos_expected)
+
+        # e) oblique Mercator
+        area = pyresample.geometry.AreaDefinition(
+            area_id='omerc_otf',
+            description='On-the-fly omerc area',
+            proj_id='omerc',
+            projection={'alpha': '9.02638777018478', 'ellps': 'WGS84', 'gamma': '0', 'k': '1',
+                        'lat_0': '-0.256794486098476', 'lonc': '13.7888658224205',
+                        'proj': 'omerc', 'units': 'm'},
+            width=2837,
+            height=5940,
+            area_extent=[-1460463.0893, 3455291.3877, 1538407.1158, 9615788.8787]
+        )
+
+        omerc_dict = {'name': 'omerc',
+                      'azimuth_of_central_line': 9.02638777018478,
+                      'false_easting': 0.,
+                      'false_northing': 0.,
+                      'gamma': 0,
+                      'geographic_crs_name': "unknown",
+                      'grid_mapping_name': "oblique_mercator",
+                      'horizontal_datum_name': "unknown",
+                      'latitude_of_projection_origin': -0.256794486098476,
+                      'longitude_of_projection_origin': 13.7888658224205,
+                      'prime_meridian_name': "Greenwich",
+                      'reference_ellipsoid_name': "WGS84"}
+        omerc_expected = xr.DataArray(data=0, attrs=omerc_dict)
+
+        ds = ds_base.copy()
+        ds.attrs['area'] = area
+        res, grid_mapping = area2gridmapping(ds)
+
+        self.assertEqual(res.attrs['grid_mapping'], 'omerc')
+        _gm_matches(grid_mapping, omerc_expected)
 
     def test_area2lonlat(self):
         """Test the conversion from areas to lon/lat."""


=====================================
satpy/tests/writer_tests/test_scmi.py
=====================================
@@ -15,20 +15,15 @@
 #
 # You should have received a copy of the GNU General Public License along with
 # satpy.  If not, see <http://www.gnu.org/licenses/>.
-"""Tests for the SCMI writer
-"""
+"""Tests for the SCMI writer."""
 import os
-import sys
 from glob import glob
 from datetime import datetime, timedelta
 
 import numpy as np
 import dask.array as da
 
-if sys.version_info < (2, 7):
-    import unittest2 as unittest
-else:
-    import unittest
+import unittest
 
 
 class TestSCMIWriter(unittest.TestCase):
@@ -121,6 +116,7 @@ class TestSCMIWriter(unittest.TestCase):
 
     def test_basic_lettered_tiles(self):
         """Test creating a lettered grid."""
+        import xarray as xr
         from satpy.writers.scmi import SCMIWriter
         from xarray import DataArray
         from pyresample.geometry import AreaDefinition
@@ -151,6 +147,54 @@ class TestSCMIWriter(unittest.TestCase):
         w.save_datasets([ds], sector_id='LCC', source_name="TESTS", tile_count=(3, 3), lettered_grid=True)
         all_files = glob(os.path.join(self.base_dir, 'TESTS_AII*.nc'))
         self.assertEqual(len(all_files), 16)
+        for fn in all_files:
+            nc = xr.open_dataset(fn, mask_and_scale=False)
+            # geolocation coordinates should be monotonically increasing by 1
+            np.testing.assert_equal(np.diff(nc['x']), 1)
+            np.testing.assert_equal(np.diff(nc['y']), 1)
+            assert nc.attrs['start_date_time'] == now.strftime('%Y-%m-%dT%H:%M:%S')
+
+    def test_lettered_tiles_sector_ref(self):
+        """Test creating a lettered grid using the sector as reference."""
+        import xarray as xr
+        from satpy.writers.scmi import SCMIWriter
+        from xarray import DataArray
+        from pyresample.geometry import AreaDefinition
+        from pyresample.utils import proj4_str_to_dict
+        w = SCMIWriter(base_dir=self.base_dir, compress=True)
+        area_def = AreaDefinition(
+            'test',
+            'test',
+            'test',
+            proj4_str_to_dict('+proj=lcc +datum=WGS84 +ellps=WGS84 +lon_0=-95. '
+                              '+lat_0=25 +lat_1=25 +units=m +no_defs'),
+            1000,
+            2000,
+            (-1000000., -1500000., 1000000., 1500000.),
+        )
+        now = datetime(2018, 1, 1, 12, 0, 0)
+        ds = DataArray(
+            da.from_array(np.linspace(0., 1., 2000000, dtype=np.float32).reshape((2000, 1000)), chunks=500),
+            attrs=dict(
+                name='test_ds',
+                platform_name='PLAT',
+                sensor='SENSOR',
+                units='1',
+                area=area_def,
+                start_time=now,
+                end_time=now + timedelta(minutes=20))
+        )
+        w.save_datasets([ds], sector_id='LCC', source_name="TESTS",
+                        lettered_grid=True, use_sector_reference=True,
+                        use_end_time=True)
+        all_files = glob(os.path.join(self.base_dir, 'TESTS_AII*.nc'))
+        self.assertEqual(len(all_files), 16)
+        for fn in all_files:
+            nc = xr.open_dataset(fn, mask_and_scale=False)
+            # geolocation coordinates should be monotonically increasing by 1
+            np.testing.assert_equal(np.diff(nc['x']), 1)
+            np.testing.assert_equal(np.diff(nc['y']), 1)
+            assert nc.attrs['start_date_time'] == (now + timedelta(minutes=20)).strftime('%Y-%m-%dT%H:%M:%S')
 
     def test_lettered_tiles_no_fit(self):
         """Test creating a lettered grid with no data."""
@@ -263,7 +307,7 @@ class TestSCMIWriter(unittest.TestCase):
 
 
 def suite():
-    """The test suite for this writer's tests."""
+    """Create test suite for this writer's tests."""
     loader = unittest.TestLoader()
     mysuite = unittest.TestSuite()
     mysuite.addTest(loader.loadTestsFromTestCase(TestSCMIWriter))

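A note for anyone inspecting the generated tiles by hand: the assertions above pass ``mask_and_scale=False`` to look at the raw packed values, but with xarray's defaults the stored ``scale_factor``/``add_offset`` are applied automatically and the coordinates come back in projection meters. A sketch with a hypothetical tile filename (real names come from the pattern in ``writers/scmi.yaml``):

    import xarray as xr

    # hypothetical filename for one lettered tile produced above
    nc = xr.open_dataset('TESTS_AII_PLAT_SENSOR_test_ds_LCC_TA01.nc')
    # mask_and_scale defaults to True, so x/y are unpacked to meters
    print(nc['x'].values[:3])
    print(float(nc['x'].diff('x')[0]))  # one pixel width in meters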

=====================================
satpy/writers/cf_writer.py
=====================================
@@ -142,15 +142,10 @@ CF_VERSION = 'CF-1.7'
 def tmerc2cf(area):
     """Return the cf grid mapping for the tmerc projection."""
     proj_dict = area.proj_dict
-    args = dict(azimuth_of_central_line=proj_dict.get('alpha'),
-                latitude_of_projection_origin=proj_dict.get('lat_0'),
-                longitude_of_projection_origin=proj_dict.get('lon_0'),
-                latitude_of_meridian_ts=proj_dict.get('lat_ts'),
+    args = dict(latitude_of_projection_origin=proj_dict.get('lat_0'),
+                longitude_of_central_meridian=proj_dict.get('lon_0'),
                 grid_mapping_name='transverse_mercator',
                 reference_ellipsoid_name=proj_dict.get('ellps', 'WGS84'),
-                prime_meridian_name=proj_dict.get('pm', 'Greenwich'),
-                horizontal_datum_name=proj_dict.get('datum', 'unknown'),
-                geographic_crs_name='unknown',
                 false_easting=0.,
                 false_northing=0.
                 )
@@ -185,13 +180,15 @@ def omerc2cf(area):
 
 def geos2cf(area):
     """Return the cf grid mapping for the geos projection."""
+    from pyresample.utils import proj4_radius_parameters
     proj_dict = area.proj_dict
+    a, b = proj4_radius_parameters(proj_dict)
     args = dict(perspective_point_height=proj_dict.get('h'),
-                latitude_of_projection_origin=proj_dict.get('lat_0'),
-                longitude_of_projection_origin=proj_dict.get('lon_0'),
+                latitude_of_projection_origin=proj_dict.get('lat_0', 0),
+                longitude_of_projection_origin=proj_dict.get('lon_0', 0),
                 grid_mapping_name='geostationary',
-                semi_major_axis=proj_dict.get('a'),
-                semi_minor_axis=proj_dict.get('b'),
+                semi_major_axis=a,
+                semi_minor_axis=b,
                 sweep_axis=proj_dict.get('sweep'),
                 )
     return args

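The switch to ``proj4_radius_parameters`` above is what closes the semi-axis issue from the changelog: it resolves ``a`` and ``b`` even when the PROJ dict only names an ellipsoid or datum instead of giving radii explicitly. A minimal sketch:

    from pyresample.utils import proj4_radius_parameters

    # no explicit 'a'/'b'; the radii are derived from the named ellipsoid
    a, b = proj4_radius_parameters({'proj': 'geos', 'ellps': 'WGS84',
                                    'h': 35785831.0, 'lon_0': 0.0})
    print(a, b)  # approximately 6378137.0 and 6356752.3 for WGS84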

=====================================
satpy/writers/scmi.py
=====================================
@@ -15,8 +15,9 @@
 #
 # You should have received a copy of the GNU General Public License along with
 # satpy.  If not, see <http://www.gnu.org/licenses/>.
-"""The SCMI AWIPS writer is used to create AWIPS compatible tiled NetCDF4
-files. The Advanced Weather Interactive Processing System (AWIPS) is a
+"""The SCMI AWIPS writer is used to create AWIPS-compatible tiled NetCDF4 files.
+
+The Advanced Weather Interactive Processing System (AWIPS) is a
 program used by the United States National Weather Service (NWS) and others
 to view
 different forms of weather imagery. Sectorized Cloud and Moisture Imagery
@@ -58,6 +59,40 @@ bandwidth, and space.
 Any tiles (numbered or lettered) not containing any valid data are not
 created.
 
+Updating tiles
+--------------
+
+There are some input data cases where we want to put new data in a tile
+file written by a previous execution. An example is a pre-tiled input dataset
+that is processed one tile at a time. One input tile may map to one or more
+output SCMI tiles, but may not perfectly align with the SCMI tile, leaving
+empty/unused space in the SCMI tile. The next input tile may be able to fill
+in that empty space and should be allowed to write the "new" data to the file.
+This is the default behavior of the SCMI writer. In cases where data overlaps
+the existing data in the tile, the newer data has priority.
+
+Shifting Lettered Grids
+-----------------------
+
+Due to the static nature of the lettered grids, there is sometimes a
+need to shift the locations of these tiles by up to 0.5 pixels in
+each dimension to align with the data being processed. This means that the
+tiles for a 1000m resolution grid may be shifted up to 500m in each direction
+from the original definition of the lettered "sector". This can cause
+differences in the location of the tiles between executions depending on the
+locations of the input data. In the worst case tile A01 from one execution
+could be shifted up to 1 grid cell from tile A01 in another execution (one
+is shifted 0.5 pixels to the left, the other is shifted 0.5 to the right).
+
+This shifting makes the calculations for generating tiles easier and
+more accurate. By default, the lettered tile locations are changed to match
+the location of the data. This works well when output tiles will not be
+updated (see above) in future processing. In cases where output tiles will be
+filled in or updated with more data the ``use_sector_reference`` keyword
+argument can be set to ``True`` to tell the SCMI writer to shift the data's
+geolocation by up to 0.5 pixels in each dimension instead of shifting the
+lettered tile locations.
+
 """
 import os
 import logging
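The update behavior described in the new docstring can be pictured as two writer calls over the same output directory; the second call reopens the existing tile files and only fills pixels that are still empty. A rough sketch, where ``first_half`` and ``second_half`` stand in for two partial-coverage DataArray inputs:

    from satpy.writers.scmi import SCMIWriter

    w = SCMIWriter(base_dir='/tmp/tiles', compress=True)
    # first granule covers only part of the lettered sector
    w.save_datasets([first_half], sector_id='LCC', source_name='TESTS',
                    lettered_grid=True, use_sector_reference=True)
    # second granule fills remaining pixels of the same tile files;
    # where both have data, the newer data wins
    w.save_datasets([second_half], sector_id='LCC', source_name='TESTS',
                    lettered_grid=True, use_sector_reference=True)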
@@ -114,6 +149,11 @@ XYFactors = namedtuple('XYFactors', ['mx', 'bx', 'my', 'by'])
 
 
 def fix_awips_file(fn):
+    """Hack the NetCDF4 files to work around NetCDF-Java bugs in the version used by AWIPS.
+
+    This should not be needed for new versions of AWIPS.
+
+    """
     # hack to get files created by new NetCDF library
     # versions to be read by AWIPS buggy java version
     # of NetCDF
@@ -126,8 +166,11 @@ def fix_awips_file(fn):
 
 
 class NumberedTileGenerator(object):
+    """Helper class to generate per-tile metadata for numbered tiles."""
+
     def __init__(self, area_definition,
                  tile_shape=None, tile_count=None):
+        """Initialize and generate tile information for this sector/grid for later use."""
         self.area_definition = area_definition
         self._rows = self.area_definition.y_size
         self._cols = self.area_definition.x_size
@@ -143,6 +186,7 @@ class NumberedTileGenerator(object):
         self._tile_cache = []
 
     def _get_tile_properties(self, tile_shape, tile_count):
+        """Generate tile information for numbered tiles."""
         if tile_shape is not None:
             tile_shape = (int(min(tile_shape[0], self._rows)), int(min(tile_shape[1], self._cols)))
             tile_count = (int(np.ceil(self._rows / float(tile_shape[0]))),
@@ -167,6 +211,7 @@ class NumberedTileGenerator(object):
         self.x, self.y = self._get_xy_arrays()
 
     def _get_xy_arrays(self):
+        """Get the overall X/Y coordinate variable arrays."""
         gd = self.area_definition
         ts = self.tile_shape
         tc = self.tile_count
@@ -209,7 +254,7 @@ class NumberedTileGenerator(object):
         return np.ma.masked_array(x), np.ma.masked_array(y)
 
     def _get_xy_scaling_parameters(self):
-        """Get the X/Y coordinate limits for the full resulting image"""
+        """Get the X/Y coordinate limits for the full resulting image."""
         gd = self.area_definition
         bx = self.x.min()
         mx = gd.pixel_size_x
@@ -218,15 +263,18 @@ class NumberedTileGenerator(object):
         return mx, bx, my, by
 
     def _tile_number(self, ty, tx):
+        """Get tile number from tile row/column."""
         # e.g.
         # 001 002 003 004
         # 005 006 ...
         return ty * self.tile_count[1] + tx + 1
 
     def _tile_identifier(self, ty, tx):
+        """Get tile identifier for numbered tiles."""
         return "T{:03d}".format(self._tile_number(ty, tx))
 
     def _generate_tile_info(self):
+        """Get numbered tile metadata."""
         x = self.x
         y = self.y
         ts = self.tile_shape
@@ -261,6 +309,7 @@ class NumberedTileGenerator(object):
                 yield tile_info
 
     def __call__(self, data):
+        """Provide simple call interface for getting tile metadata."""
         if self._tile_cache:
             tile_infos = self._tile_cache
         else:
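For reference, the numbered-tile naming implemented by ``_tile_number``/``_tile_identifier`` above is a plain row-major, 1-based index that is easy to reproduce by hand:

    def tile_number(ty, tx, tile_count):
        # row-major, 1-based; same arithmetic as _tile_number above
        return ty * tile_count[1] + tx + 1

    # a 2-row by 4-column tiling
    for ty in range(2):
        print([tile_number(ty, tx, (2, 4)) for tx in range(4)])
    # [1, 2, 3, 4]
    # [5, 6, 7, 8]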
@@ -275,27 +324,30 @@ class NumberedTileGenerator(object):
 
 
 class LetteredTileGenerator(NumberedTileGenerator):
+    """Helper class to generate per-tile metadata for lettered tiles."""
+
     def __init__(self, area_definition, extents,
                  cell_size=(2000000, 2000000),
-                 num_subtiles=None):
+                 num_subtiles=None, use_sector_reference=False):
+        """Initialize tile information for later generation."""
         # (row subtiles, col subtiles)
         self.num_subtiles = num_subtiles or (2, 2)
         self.cell_size = cell_size  # (row tile height, col tile width)
         # x/y
         self.ll_extents = extents[:2]  # (x min, y min)
         self.ur_extents = extents[2:]  # (x max, y max)
+        self.use_sector_reference = use_sector_reference
         super(LetteredTileGenerator, self).__init__(area_definition)
 
     def _get_tile_properties(self, tile_shape, tile_count):
+        """Calculate tile information for this particular sector/grid."""
         # ignore tile_shape and tile_count
         # they come from the base class, but aren't used here
         del tile_shape, tile_count
 
         # get original image's X/Y
         ad = self.area_definition
-        x, y = ad.get_proj_coords()
-        x = x[0].squeeze()  # all rows should have the same coordinates
-        y = y[:, 0].squeeze()  # all columns should have the same coordinates
+        x, y = ad.get_proj_vectors()
 
         ll_xy = self.ll_extents
         ur_xy = self.ur_extents
@@ -312,11 +364,21 @@ class LetteredTileGenerator(NumberedTileGenerator):
         # X/Y are centers of pixels, adjust by half a pixel to get upper-left pixel corner
         shift_x = float(ul_xy[0] - (x.min() - cw / 2.)) % cw  # could be negative
         shift_y = float(ul_xy[1] - (y.max() + ch / 2.)) % ch  # could be negative
-        LOG.debug("Adjusting lettered grid by ({}, {}) so it better matches data X/Y".format(shift_x, shift_y))
-        ul_xy = (ul_xy[0] - shift_x, ul_xy[1] - shift_y)  # outer edge of grid
-        # always keep the same distance between the extents
-        ll_xy = (ul_xy[0], ll_xy[1] - shift_y)
-        ur_xy = (ur_xy[0] - shift_x, ul_xy[1])
+        # if we're really close to 0 then don't worry about it
+        if abs(shift_x) < 1e-10 or abs(shift_x - cw) < 1e-10:
+            shift_x = 0
+        if abs(shift_y) < 1e-10 or abs(shift_y - ch) < 1e-10:
+            shift_y = 0
+        if self.use_sector_reference:
+            LOG.debug("Adjusting X/Y by ({}, {}) so it better matches lettered grid".format(shift_x, shift_y))
+            x = x + shift_x
+            y = y + shift_y
+        else:
+            LOG.debug("Adjusting lettered grid by ({}, {}) so it better matches data X/Y".format(shift_x, shift_y))
+            ul_xy = (ul_xy[0] - shift_x, ul_xy[1] - shift_y)  # outer edge of grid
+            # always keep the same distance between the extents
+            ll_xy = (ul_xy[0], ll_xy[1] - shift_y)
+            ur_xy = (ur_xy[0] - shift_x, ul_xy[1])
 
         fcs_y, fcs_x = (np.ceil(float(cs[0]) / st[0]), np.ceil(float(cs[1]) / st[1]))
         # need X/Y for *whole* tiles
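The epsilon checks added above guard against floating-point noise in the modulo: an offset that is effectively zero can come back as a value just under one full cell, which would otherwise shift the grid by nearly a whole pixel. A small sketch of that failure mode:

    cw = 1000.0                      # cell width in meters
    grid_edge = -1000000.0
    data_edge = -1000000.0 + 1e-11   # equal up to floating-point noise
    shift_x = (grid_edge - data_edge) % cw
    print(shift_x)                   # ~999.99999999999, not a real shift
    if abs(shift_x) < 1e-10 or abs(shift_x - cw) < 1e-10:
        shift_x = 0                  # snap back to "no shift"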
@@ -331,7 +393,7 @@ class LetteredTileGenerator(NumberedTileGenerator):
         num_pixels_y = int(np.floor(fcs_y / ch))
         # NOTE: this does not change the *total* number of columns/rows that
         # will be produced. This is important because otherwise the number
-        # of alpha tiles could depend on the input data which is not what we
+        # of lettered tiles could depend on the input data which is not what we
         # want
         fcs_x = num_pixels_x * cw
         fcs_y = num_pixels_y * ch
@@ -359,17 +421,18 @@ class LetteredTileGenerator(NumberedTileGenerator):
         self.max_row = max_row
         self.ul_xy = ul_xy
         self.mx = cw
-        self.bx = ul_xy[0]
+        self.bx = ul_xy[0] + cw / 2.0  # X represents the center of the pixel
         self.my = -ch
-        self.by = ul_xy[1]
+        self.by = ul_xy[1] - ch / 2.0  # Y represents the center of the pixel
         self.x = x
         self.y = y
 
     def _get_xy_scaling_parameters(self):
-        """Get the X/Y coordinate limits for the full resulting image"""
+        """Get the X/Y coordinate limits for the full resulting image."""
         return self.mx, self.bx, self.my, self.by
 
     def _tile_identifier(self, ty, tx):
+        """Get tile identifier (name) for a particular tile row/column."""
         st = self.num_subtiles
         ttc = self.total_tile_count
         alpha_num = int((ty // st[0]) * (ttc[1] // st[1]) + (tx // st[1]))
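To make the lettered naming concrete: the letter comes from which group of subtiles a tile falls in, and the two-digit suffix numbers the subtiles within that letter. A worked sketch, where the within-letter numbering is assumed row-major (matching the "A01"/"A02" top row, "A03"/"A04" second row layout documented for ``save_datasets`` below):

    import string

    def tile_id(ty, tx, st=(2, 2), ttc=(4, 8)):
        # st: subtiles per letter (rows, cols); ttc: total tile count
        alpha_num = (ty // st[0]) * (ttc[1] // st[1]) + (tx // st[1])
        # assumed: subtiles numbered row-major within each letter
        tile_num = (ty % st[0]) * st[1] + (tx % st[1]) + 1
        return "T{}{:02d}".format(string.ascii_uppercase[alpha_num], tile_num)

    print(tile_id(0, 0))  # TA01
    print(tile_id(0, 1))  # TA02
    print(tile_id(1, 0))  # TA03
    print(tile_id(0, 2))  # TB01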
@@ -378,6 +441,7 @@ class LetteredTileGenerator(NumberedTileGenerator):
         return "T{}{:02d}".format(alpha, tile_num)
 
     def _generate_tile_info(self):
+        """Create generator of individual tile metadata."""
         if self._tile_cache:
             for tile_info in self._tile_cache:
                 yield tile_info
@@ -431,8 +495,11 @@ class LetteredTileGenerator(NumberedTileGenerator):
 
 
 class SCMIDatasetDecisionTree(DecisionTree):
-    # Fields used to match a product object to it's correct configuration
+    """Load AWIPS-specific metadata from YAML configuration."""
+
     def __init__(self, decision_dicts, **kwargs):
+        """Initialize decision tree with specific keys to look for."""
+        # Fields used to match a product object to its correct configuration
         attrs = kwargs.pop('attrs',
                            ["name",
                             "standard_name",
@@ -446,16 +513,14 @@ class SCMIDatasetDecisionTree(DecisionTree):
 
 
 class AttributeHelper(object):
-    """
-    helper object which wraps around a HimawariScene to provide SCMI attributes
-    """
+    """Helper object which wraps around metadata to provide SCMI attributes."""
+
     def __init__(self, ds_info):
+        """Initialize metadata for future attribute collection."""
         self.ds_info = ds_info
 
     def apply_attributes(self, nc, table, prefix=''):
-        """
-        apply fixed attributes, or look up attributes needed and apply them
-        """
+        """Apply fixed attributes or look up attributes needed and apply them."""
         for name, value in sorted(table.items()):
             if name in nc.ncattrs():
                 LOG.debug('already have a value for %s' % name)
@@ -473,25 +538,28 @@ class AttributeHelper(object):
                     LOG.info('no routine matching %s' % funcname)
 
     def _scene_time(self):
+        """Get default start time of this observation."""
         return self.ds_info["start_time"] + timedelta(minutes=int(os.environ.get("DEBUG_TIME_SHIFT", 0)))
 
-    def _product_name(self):
-        return self.ds_info["name"]
-
     def _global_product_name(self):
-        return self._product_name()
+        """Get default global product name attribute."""
+        return self.ds_info["name"]
 
     def _global_pixel_x_size(self):
+        """Get default global x size attribute."""
         return self.ds_info["area"].pixel_size_x / 1000.
 
     def _global_pixel_y_size(self):
+        """Get default global y size attribute."""
         return self.ds_info["area"].pixel_size_y / 1000.
 
     def _global_start_date_time(self):
+        """Get default global start time attribute."""
         when = self._scene_time()
         return when.strftime('%Y-%m-%dT%H:%M:%S')
 
     def _global_production_location(self):
+        """Get default global production_location attribute."""
         org = os.environ.get('ORGANIZATION', None)
         if org is not None:
             return org
@@ -502,45 +570,66 @@ class AttributeHelper(object):
 
 
 class NetCDFWriter(object):
-    """
-    Write a basic NetCDF4 file with header data mapped to global attributes, and BT/ALB/RAD variables
-    FUTURE: optionally add time dimension (CF)
-    FUTURE: optionally add zenith and azimuth angles
+    """Write a basic AWIPS-compatible NetCDF4 SCMI file representing one "tile" of data."""
 
-    """
     _kind = None  # 'albedo', 'brightness_temp'
     _band = None
     _include_fgf = True
     _fill_value = 0
-    row_dim_name, col_dim_name = 'y', 'x'
-    y_var_name, x_var_name = 'y', 'x'
     image_var_name = 'data'
     fgf_y = None
     fgf_x = None
     projection = None
 
-    def __init__(self, filename, include_fgf=True, ds_info=None, compress=False):
+    def __init__(self, filename, include_fgf=True, ds_info=None, compress=False,
+                 is_geographic=False):
+        """Initialize variable and dimension names and metadata helper objects."""
         self._nc = None
         self.filename = filename
         self._include_fgf = include_fgf
         self._compress = compress
         self.helper = AttributeHelper(ds_info)
         self.image_data = None
+        self.is_geographic = is_geographic
+        self.exists = os.path.isfile(self.filename)
+        if self.is_geographic:
+            self.row_dim_name = 'lat'
+            self.col_dim_name = 'lon'
+            self.y_var_name = 'lat'
+            self.x_var_name = 'lon'
+        else:
+            self.row_dim_name = 'y'
+            self.col_dim_name = 'x'
+            self.y_var_name = 'y'
+            self.x_var_name = 'x'
 
     @property
     def nc(self):
+        """Access the NetCDF file object, opening the file on first access."""
         if self._nc is None:
-            self._nc = Dataset(self.filename, 'w')
+            self._nc = Dataset(self.filename, 'r+' if self.exists else 'w')
         return self._nc
 
     def create_dimensions(self, lines, columns):
+        """Create NetCDF dimensions."""
         # Create Dimensions
+        if self.exists:
+            LOG.debug("Skipping creating dimensions because file already exists.")
+            return
         _nc = self.nc
         _nc.createDimension(self.row_dim_name, lines)
         _nc.createDimension(self.col_dim_name, columns)
 
     def create_variables(self, bitdepth, fill_value, scale_factor=None, add_offset=None,
                          valid_min=None, valid_max=None):
+        """Create data and geolocation NetCDF variables."""
+        if self.exists:
+            LOG.debug("Skipping creating variables because file already exists.")
+            self.image_data = self.nc[self.image_var_name]
+            self.fgf_y = self.nc[self.y_var_name]
+            self.fgf_x = self.nc[self.x_var_name]
+            return
+
         fgf_coords = "%s %s" % (self.y_var_name, self.x_var_name)
 
         self.image_data = self.nc.createVariable(self.image_var_name,
@@ -560,6 +649,7 @@ class NetCDFWriter(object):
 
     def apply_data_attributes(self, bitdepth, scale_factor, add_offset,
                               valid_min=None, valid_max=None):
+        """Assign various data variable metadata."""
         # NOTE: grid_mapping is set by `set_projection_attrs`
         self.image_data.scale_factor = np.float32(scale_factor)
         self.image_data.add_offset = np.float32(add_offset)
@@ -597,22 +687,36 @@ class NetCDFWriter(object):
         else:
             self.image_data.standard_name = self.helper.ds_info.get("standard_name") or ''
 
-    def set_fgf(self, x, mx, bx, y, my, by, units='meters', downsample_factor=1):
+    def set_fgf(self, x, mx, bx, y, my, by, units=None, downsample_factor=1):
+        """Assign geolocation x/y variable metadata."""
+        if self.exists:
+            LOG.debug("Skipping setting FGF variable attributes because file already exists.")
+            return
+
         # assign values before scale factors to avoid implicit scale reversal
         LOG.debug('y variable shape is {}'.format(self.fgf_y.shape))
         self.fgf_y.scale_factor = np.float64(my * float(downsample_factor))
         self.fgf_y.add_offset = np.float64(by)
-        self.fgf_y.units = units
-        self.fgf_y.standard_name = "projection_y_coordinate"
+        if self.is_geographic:
+            self.fgf_y.units = units if units is not None else 'degrees_north'
+            self.fgf_y.standard_name = "latitude"
+        else:
+            self.fgf_y.units = units if units is not None else 'meters'
+            self.fgf_y.standard_name = "projection_y_coordinate"
         self.fgf_y[:] = y
 
         self.fgf_x.scale_factor = np.float64(mx * float(downsample_factor))
         self.fgf_x.add_offset = np.float64(bx)
-        self.fgf_x.units = units
-        self.fgf_x.standard_name = "projection_x_coordinate"
+        if self.is_geographic:
+            self.fgf_x.units = units if units is not None else 'degrees_east'
+            self.fgf_x.standard_name = "longitude"
+        else:
+            self.fgf_x.units = units if units is not None else 'meters'
+            self.fgf_x.standard_name = "projection_x_coordinate"
         self.fgf_x[:] = x
 
     def set_image_data(self, data):
+        """Write image variable data."""
         LOG.debug('writing image data')
         if not hasattr(data, 'mask'):
             data = np.ma.masked_array(data, np.isnan(data))
@@ -620,7 +724,10 @@ class NetCDFWriter(object):
         self.image_data[:, :] = np.require(data, dtype=np.float32)
 
     def set_projection_attrs(self, area_id, proj4_info):
-        """Assign projection attributes per GRB standard"""
+        """Assign projection attributes per GRB standard."""
+        if self.exists:
+            LOG.debug("Skipping setting projection attributes because file already exists.")
+            return
         proj4_info['a'], proj4_info['b'] = proj4_radius_parameters(proj4_info)
         if proj4_info["proj"] == "geos":
             p = self.projection = self.nc.createVariable("fixedgrid_projection", 'i4')
@@ -654,6 +761,12 @@ class NetCDFWriter(object):
             p.grid_mapping_name = "mercator"
             p.standard_parallel = proj4_info.get('lat_ts', proj4_info.get('lat_0', 0.0))
             p.longitude_of_projection_origin = proj4_info.get("lon_0", 0.0)
+        # AWIPS 2 Doesn't actually support this yet
+        # elif proj4_info['proj'] in ['latlong', 'longlat', 'lonlat', 'latlon']:
+        #     p = self.projection = self._nc.createVariable("latitude_longitude_projection", 'i4')
+        #     self.image_data.grid_mapping = "latitude_longitude_projection"
+        #     p.short_name = area_id
+        #     p.grid_mapping_name = 'latitude_longitude'
         else:
             raise ValueError("SCMI can not handle projection '{}'".format(proj4_info['proj']))
 
@@ -665,6 +778,11 @@ class NetCDFWriter(object):
     def set_global_attrs(self, physical_element, awips_id, sector_id,
                          creating_entity, total_tiles, total_pixels,
                          tile_row, tile_column, tile_height, tile_width, creator=None):
+        """Assign NetCDF global attributes."""
+        if self.exists:
+            LOG.debug("Skipping setting global attributes because file already exists.")
+            return
+
         self.nc.Conventions = "CF-1.7"
         if creator is None:
             from satpy import __version__
@@ -689,6 +807,7 @@ class NetCDFWriter(object):
         self.helper.apply_attributes(self.nc, SCMI_GLOBAL_ATT, '_global_')
 
     def close(self):
+        """Close the NetCDF file if created."""
         if self._nc is not None:
             self._nc.sync()
             self._nc.close()
@@ -701,8 +820,11 @@ class NetCDFWrapper(object):
     This makes it possible to do SCMI writing with dask's delayed `da.store` function.
 
     """
+
     def __init__(self, filename, sector_id, ds_info, awips_info,
-                 xy_factors, tile_info, compress=False, fix_awips=False):
+                 xy_factors, tile_info, compress=False, fix_awips=False,
+                 update_existing=True):
+        """Assign instance attributes for later use."""
         self.filename = filename
         self.sector_id = sector_id
         self.ds_info = ds_info
@@ -711,6 +833,8 @@ class NetCDFWrapper(object):
         self.xy_factors = xy_factors
         self.compress = compress
         self.fix_awips = fix_awips
+        self.update_existing = update_existing
+        self.exists = os.path.isfile(self.filename)
 
     def __setitem__(self, key, data):
         """Write an entire tile to a file."""
@@ -722,13 +846,28 @@ class NetCDFWrapper(object):
         awips_info = self.awips_info
         tile_info = self.tile_info
         area_def = ds_info['area']
+        if hasattr(area_def, 'crs'):
+            is_geographic = area_def.crs.is_geographic
+        else:
+            is_geographic = Proj(area_def.proj_dict).is_latlong()
+        nc = NetCDFWriter(self.filename, ds_info=self.ds_info,
+                          compress=self.compress,
+                          is_geographic=is_geographic)
+
         LOG.debug("Scaling %s data to fit in netcdf file...", ds_info["name"])
         bit_depth = ds_info.get("bit_depth", 16)
         valid_min = ds_info.get('valid_min')
-        if valid_min is None:
+        if valid_min is None and self.update_existing and self.exists:
+            # reuse the valid_min that was previously computed
+            valid_min = nc.nc['data'].valid_min
+        elif valid_min is None:
             valid_min = np.nanmin(data)
+
         valid_max = ds_info.get('valid_max')
-        if valid_max is None:
+        if valid_max is None and self.update_existing and self.exists:
+            # reuse the valid_max that was previously computed
+            valid_max = nc.nc['data'].valid_max
+        elif valid_max is None:
             valid_max = np.nanmax(data)
 
         LOG.debug("Using product valid min {} and valid max {}".format(valid_min, valid_max))
@@ -740,10 +879,8 @@ class NetCDFWrapper(object):
 
         tmp_tile = np.empty(tile_info.tile_shape, dtype=data.dtype)
         tmp_tile[:] = np.nan
-        tmp_tile[tile_info.tile_slices] = data
 
         LOG.info("Writing tile '%s' to '%s'", self.tile_info[2], self.filename)
-        nc = NetCDFWriter(self.filename, ds_info=self.ds_info, compress=self.compress)
         LOG.debug("Creating dimensions...")
         nc.create_dimensions(tmp_tile.shape[0], tmp_tile.shape[1])
         LOG.debug("Creating variables...")
@@ -757,19 +894,29 @@ class NetCDFWrapper(object):
                             tmp_tile.shape[0], tmp_tile.shape[1])
         LOG.debug("Creating projection attributes...")
         nc.set_projection_attrs(area_def.area_id, area_def.proj_dict)
+        LOG.debug("Writing X/Y navigation data...")
+        mx, bx, my, by = self.xy_factors
+        nc.set_fgf(tile_info.x, mx, bx, tile_info.y, my, by)
+
+        tmp_tile[tile_info.tile_slices] = data
+        if self.exists and self.update_existing:
+            # use existing data where possible
+            existing_data = nc.nc['data'][:]
+            # where we don't have new data but we also have good existing data
+            old_mask = np.isnan(tmp_tile) & ~existing_data.mask
+            tmp_tile[old_mask] = existing_data[old_mask]
+
         LOG.debug("Writing image data...")
         np.clip(tmp_tile, valid_min, valid_max, out=tmp_tile)
         nc.set_image_data(tmp_tile)
-        LOG.debug("Writing X/Y navigation data...")
-        mx, bx, my, by = self.xy_factors
-        nc.set_fgf(tile_info.x, mx, bx, tile_info.y, my, by, units='meters')
         nc.close()
 
-        if self.fix_awips:
+        if self.fix_awips and not self.exists:
             fix_awips_file(self.filename)
 
     def _calc_factor_offset(self, data=None, dtype=np.int16, bitdepth=None,
                             min=None, max=None, num_fills=1, flag_meanings=False):
+        """Compute the NetCDF variable scale factor and add offset."""
         if num_fills > 1:
             raise NotImplementedError("More than one fill value is not implemented yet")
 
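The merge with existing tile data above is a simple masked update: old values are kept only where the new tile is empty and the old value is valid. An isolated numpy sketch of the same logic:

    import numpy as np

    new = np.array([[np.nan, 2.0], [3.0, np.nan]])
    existing = np.ma.masked_invalid([[1.0, np.nan], [np.nan, np.nan]])
    # keep good existing values only where the new tile has no data
    old_mask = np.isnan(new) & ~existing.mask
    new[old_mask] = existing[old_mask]
    print(new)  # [[1. 2.], [3. nan]]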
@@ -815,7 +962,19 @@ class NetCDFWrapper(object):
 
 
 class SCMIWriter(Writer):
+    """Writer for AWIPS NetCDF4 SCMI files.
+
+    These files are **not** the official GOES-R style files, but rather a
+    custom "Polar SCMI" file scheme originally developed at the University
+    of Wisconsin - Madison, Space Science and Engineering Center (SSEC) for
+    use by the CSPP Polar2Grid project. Despite the name these files should
+    support data from polar-orbitting satellites (after resampling) and
+    geostationary satellites in single band (luminance) or RGB image format.
+
+    """
+
     def __init__(self, compress=False, fix_awips=False, **kwargs):
+        """Initialize writer and decision trees."""
         super(SCMIWriter, self).__init__(default_config_filename="writers/scmi.yaml", **kwargs)
         self.keep_intermediate = False
         self.overwrite_existing = True
@@ -828,13 +987,14 @@ class SCMIWriter(Writer):
 
     @property
     def enhancer(self):
-        """Lazy loading of enhancements only if needed."""
+        """Get lazy loaded enhancer object only if needed."""
         if self._enhancer is None:
             self._enhancer = Enhancer(ppp_config_dir=self.ppp_config_dir)
         return self._enhancer
 
     @classmethod
     def separate_init_kwargs(cls, kwargs):
+        """Separate initialization keyword arguments from saving keyword arguments."""
         # FUTURE: Don't pass Scene.save_datasets kwargs to init and here
         init_kwargs, kwargs = super(SCMIWriter, cls).separate_init_kwargs(
             kwargs)
@@ -845,6 +1005,7 @@ class SCMIWriter(Writer):
         return init_kwargs, kwargs
 
     def _fill_sector_info(self):
+        """Convert sector extents if needed."""
         for sector_info in self.scmi_sectors.values():
             p = Proj(sector_info['projection'])
             if 'lower_left_xy' in sector_info:
@@ -857,6 +1018,14 @@ class SCMIWriter(Writer):
                 sector_info['upper_right_xy'] = p(*sector_info['upper_right_lonlat'])
 
     def _get_sector_info(self, sector_id, lettered_grid):
+        """Get metadata for the current sector if configured.
+
+        This is not necessary for numbered grids. If found, the sector info
+        will provide the overall tile layout for this grid/sector. This allows
+        for consistent tile numbering/naming regardless of where the data being
+        converted actually is.
+
+        """
         try:
             sector_info = self.scmi_sectors[sector_id]
         except KeyError:
@@ -866,7 +1035,10 @@ class SCMIWriter(Writer):
                 sector_info = None
         return sector_info
 
-    def _get_tile_generator(self, area_def, lettered_grid, sector_id, num_subtiles, tile_size, tile_count):
+    def _get_tile_generator(self, area_def, lettered_grid, sector_id,
+                            num_subtiles, tile_size, tile_count,
+                            use_sector_reference=False):
+        """Get the appropriate tile generator class for lettered or numbered tiles."""
         sector_info = self._get_sector_info(sector_id, lettered_grid)
         # Create a tile generator for this grid definition
         if lettered_grid:
@@ -874,6 +1046,8 @@ class SCMIWriter(Writer):
                 area_def,
                 sector_info['lower_left_xy'] + sector_info['upper_right_xy'],
                 cell_size=sector_info['resolution'],
+                num_subtiles=num_subtiles,
+                use_sector_reference=use_sector_reference,
                 )
         else:
             tile_gen = NumberedTileGenerator(
@@ -884,6 +1058,7 @@ class SCMIWriter(Writer):
         return tile_gen
 
     def _get_awips_info(self, ds_info, source_name=None, physical_element=None):
+        """Get metadata for this product when shown in AWIPS if configured in the YAML file."""
         try:
             awips_info = self.scmi_datasets.find_match(**ds_info).copy()
             awips_info['awips_id'] = "AWIPS_" + ds_info['name']
@@ -922,6 +1097,7 @@ class SCMIWriter(Writer):
         return area_datasets
 
     def _split_rgbs(self, ds):
+        """Split a single RGB dataset into multiple single-band datasets."""
         for component in 'RGB':
             band_data = ds.sel(bands=component)
             band_data.attrs['name'] += '_{}'.format(component)
@@ -930,6 +1106,7 @@ class SCMIWriter(Writer):
             yield band_data
 
     def _enhance_and_split_rgbs(self, datasets):
+        """Handle multi-band images by splitting them into separate products."""
         new_datasets = []
         for ds in datasets:
             if ds.ndim == 2:
@@ -947,10 +1124,12 @@ class SCMIWriter(Writer):
         return new_datasets
 
     def save_dataset(self, dataset, **kwargs):
+        """Save a single DataArray to one or more NetCDF4 SCMI files."""
         LOG.warning("For best performance use `save_datasets`")
         return self.save_datasets([dataset], **kwargs)
 
     def get_filename(self, area_def, tile_info, sector_id, **kwargs):
+        """Generate the output NetCDF filename from metadata."""
         # format the filename
         kwargs["start_time"] += timedelta(minutes=int(os.environ.get("DEBUG_TIME_SHIFT", 0)))
         return super(SCMIWriter, self).get_filename(
@@ -962,36 +1141,98 @@ class SCMIWriter(Writer):
             **kwargs)
 
     def check_tile_exists(self, output_filename):
+        """Check if the tile file already exists and raise an error or log accordingly."""
         if os.path.isfile(output_filename):
             if not self.overwrite_existing:
                 LOG.error("AWIPS file already exists: %s", output_filename)
                 raise RuntimeError("AWIPS file already exists: %s" % (output_filename,))
             else:
-                LOG.warning("AWIPS file already exists, will overwrite: %s", output_filename)
+                LOG.info("AWIPS file already exists, will update with new data: %s", output_filename)
 
     def save_datasets(self, datasets, sector_id=None,
                       source_name=None, filename=None,
                       tile_count=(1, 1), tile_size=None,
                       lettered_grid=False, num_subtiles=None,
+                      use_end_time=False, use_sector_reference=False,
                       compute=True, **kwargs):
+        """Write a series of DataArray objects to multiple NetCDF4 SCMI files.
+
+        Args:
+            datasets (iterable): Series of gridded :class:`~xarray.DataArray`
+                objects with the necessary metadata to be converted to a valid
+                SCMI product file.
+            sector_id (str): Name of the region or sector that the provided
+                data is on. This name will be written to the NetCDF file and
+                will be used as the sector in the AWIPS client. For lettered
+                grids this name should match the name configured in the writer
+                YAML. This is required but is defined as a keyword argument
+                for better error handling in Satpy.
+            source_name (str): Name of producer of these files (ex. "SSEC").
+                This name is used to create the output filename.
+            filename (str): Filename format pattern to be filled in with
+                dataset metadata for each tile. See YAML configuration file
+                for default.
+            tile_count (tuple): For numbered tiles only, how many tile rows
+                and tile columns to produce. Defaults to ``(1, 1)``, a single
+                giant tile. Either ``tile_count``, ``tile_size``, or
+                ``lettered_grid`` should be specified.
+            tile_size (tuple): For numbered tiles only, how many pixels each
+                tile should be. This takes precedence over ``tile_count`` if
+                specified. Either ``tile_count``, ``tile_size``, or
+                ``lettered_grid`` should be specified.
+            lettered_grid (bool): Whether to use a preconfigured grid and
+                label tiles with letters and numbers instead of only numbers.
+                For example, tiles will be named "A01", "A02", "B01", and so
+                on in the first row of data and continue on to "A03", "A04",
+                and "B03" in the default case where ``num_subtiles`` is (2, 2).
+                Letters start in the upper-left corner and will go from A up to
+                Z, if necessary.
+            num_subtiles (tuple): For lettered tiles only, how many rows and
+                columns to split each lettered tile into. By default 2 rows
+                and 2 columns will be created. For example, the tile for
+                letter "A" will have "A01" and "A02" in the top row and "A03"
+                and "A04" in the second row.
+            use_end_time (bool): Instead of using the ``start_time`` for the
+                product filename and time written to the file, use the
+                ``end_time``. This is useful for multi-day composites where
+                the ``end_time`` is a better representation of what data is
+                in the file.
+            use_sector_reference (bool): For lettered tiles only, whether to
+                shift the data locations to align with the preconfigured
+                grid's pixels. By default this is False, meaning that the
+                grid's tiles will be shifted to align with the data locations.
+                If True, the data is shifted. At most the data will be shifted
+                by 0.5 pixels. See :mod:`satpy.writers.scmi` for more
+                information.
+            compute (bool): Compute and write the output immediately using
+                dask. Defaults to ``True``.
+
+        """
         if sector_id is None:
             raise TypeError("Keyword 'sector_id' is required")
 
         area_datasets = self._group_by_area(datasets)
         sources_targets = []
-        for area_id, (area_def, ds_list) in area_datasets.items():
-            tile_gen = self._get_tile_generator(area_def, lettered_grid, sector_id, num_subtiles, tile_size, tile_count)
+        for area_def, ds_list in area_datasets.values():
+            tile_gen = self._get_tile_generator(
+                area_def, lettered_grid, sector_id, num_subtiles, tile_size,
+                tile_count, use_sector_reference=use_sector_reference)
             for dataset in self._enhance_and_split_rgbs(ds_list):
                 LOG.info("Preparing product %s to be written to AWIPS SCMI NetCDF file", dataset.attrs["name"])
                 awips_info = self._get_awips_info(dataset.attrs, source_name=source_name)
                 for tile_info, tmp_tile in tile_gen(dataset):
                     # make sure this entire tile is loaded as one single array
                     tmp_tile.data = tmp_tile.data.rechunk(tmp_tile.shape)
+                    ds_info = dataset.attrs.copy()
+                    if use_end_time:
+                        # replace start_time with end_time for multi-day composites
+                        ds_info['start_time'] = ds_info['end_time']
+
                     output_filename = filename or self.get_filename(area_def, tile_info, sector_id,
                                                                     source_name=awips_info['source_name'],
-                                                                    **dataset.attrs)
+                                                                    **ds_info)
                     self.check_tile_exists(output_filename)
-                    nc_wrapper = NetCDFWrapper(output_filename, sector_id, dataset.attrs, awips_info,
+                    nc_wrapper = NetCDFWrapper(output_filename, sector_id, ds_info, awips_info,
                                                tile_gen.xy_factors, tile_info,
                                                compress=self.compress, fix_awips=self.fix_awips)
                     sources_targets.append((tmp_tile.data, nc_wrapper))
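Putting the documented keywords together, typical end-user access goes through a Satpy ``Scene`` rather than instantiating the writer directly. A hedged sketch; the reader name and input filename are placeholders:

    from satpy import Scene

    # any gridded input works; ABI L1b is just an example
    scn = Scene(reader='abi_l1b', filenames=['OR_ABI-L1b-RadC-M6C13_G16.nc'])
    scn.load(['C13'])
    scn.save_datasets(writer='scmi', sector_id='LCC', source_name='SSEC',
                      lettered_grid=True, num_subtiles=(2, 2),
                      use_sector_reference=True)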
@@ -1083,6 +1324,7 @@ def _create_debug_array(sector_info, num_subtiles, font_path='Verdana.ttf'):
 
 
 def draw_rectangle(draw, coordinates, outline=None, fill=None, width=1):
+    """Draw a simple rectangle into a numpy array image."""
     for i in range(width):
         rect_start = (coordinates[0] + i, coordinates[1] + i)
         rect_end = (coordinates[2] - i, coordinates[3] - i)
@@ -1090,6 +1332,7 @@ def draw_rectangle(draw, coordinates, outline=None, fill=None, width=1):
 
 
 def create_debug_lettered_tiles(init_args, create_args):
+    """Create SCMI files with tile identifiers "burned" into the image data for debugging."""
     import xarray as xr
     create_args['lettered_grid'] = True
     create_args['num_subtiles'] = (2, 2)  # default, don't use command line argument
@@ -1101,10 +1344,9 @@ def create_debug_lettered_tiles(init_args, create_args):
     area_def, arr = _create_debug_array(sector_info, create_args['num_subtiles'])
 
     now = datetime.utcnow()
-    product = xr.DataArray(arr, attrs=dict(
-        mask=np.isnan(arr),
+    product = xr.DataArray(da.from_array(arr, chunks='auto'), attrs=dict(
         name='debug_{}'.format(sector_id),
-        platform='DEBUG',
+        platform_name='DEBUG',
         sensor='TILES',
         start_time=now,
         end_time=now,
@@ -1122,6 +1364,7 @@ def create_debug_lettered_tiles(init_args, create_args):
 
 
 def add_backend_argument_groups(parser):
+    """Add command line arguments for this writer used for debugging."""
     group_1 = parser.add_argument_group(title="Backend Initialization")
     group_1.add_argument("--backend-configs", nargs="*", dest="backend_configs",
                          help="alternative backend configuration files")
@@ -1150,6 +1393,7 @@ def add_backend_argument_groups(parser):
 
 
 def main():
+    """Command line interface mimicking CSPP Polar2Grid."""
     import argparse
     parser = argparse.ArgumentParser(description="Create SCMI AWIPS compatible NetCDF files")
     subgroups = add_backend_argument_groups(parser)



View it on GitLab: https://salsa.debian.org/debian-gis-team/satpy/commit/6c0171860ada87a028922943f3ca2556c476a920
