[med-svn] [Git][python-team/packages/tifffile][master] 4 commits: New upstream version 20210201

Ole Streicher gitlab at salsa.debian.org
Sun Feb 7 09:36:41 GMT 2021



Ole Streicher pushed to branch master at Debian Python Team / packages / tifffile


Commits:
4a3583fa by Ole Streicher at 2021-02-07T10:33:50+01:00
New upstream version 20210201
- - - - -
21d978db by Ole Streicher at 2021-02-07T10:33:51+01:00
Update upstream source from tag 'upstream/20210201'

Update to upstream version '20210201'
with Debian dir 71a85b5c1bbd1f73b9279bea062a8f779f657c3b
- - - - -
39fc4881 by Ole Streicher at 2021-02-07T10:34:23+01:00
Rediff patches

- - - - -
0ba37453 by Ole Streicher at 2021-02-07T10:34:33+01:00
Update changelog for 20210201-1 release

- - - - -


12 changed files:

- CHANGES.rst
- PKG-INFO
- README.rst
- debian/changelog
- debian/patches/Disable-tests-that-require-remote-files.patch
- debian/patches/Don-t-install-lsm2bin.patch
- setup.py
- tests/conftest.py
- tests/test_tifffile.py
- tifffile.egg-info/PKG-INFO
- tifffile.egg-info/requires.txt
- tifffile/tifffile.py


Changes:

=====================================
CHANGES.rst
=====================================
@@ -1,7 +1,20 @@
 Revisions
 ---------
+2021.2.1
+    Pass 4384 tests.
+    Fix multi-threaded access of ZarrTiffStores using same TiffFile instance.
+    Use fallback zlib and lzma codecs with imagecodecs lite builds.
+    Open Olympus and Panasonic RAW files for parsing, albeit not supported.
+    Support X2 and X4 differencing found in DNG.
+    Support reading JPEG_LOSSY compression found in DNG.
+2021.1.14
+    Try ImageJ series if OME series fails (#54)
+    Add option to use pages as chunks in ZarrFileStore (experimental).
+    Fix reading from file objects with no readinto function.
+2021.1.11
+    Fix test errors on PyPy.
+    Fix decoding bitorder with imagecodecs >= 2021.1.11.
 2021.1.8
-    Pass 4376 tests.
     Decode float24 using imagecodecs >= 2021.1.8.
     Consolidate reading of segments if possible.
 2020.12.8

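For context on the "fallback zlib and lzma codecs" line in the 2021.2.1 entry above: tifffile can now decompress DEFLATE- and LZMA-compressed segments using the standard library when an imagecodecs "lite" build lacks those codecs. A minimal sketch of that pattern — the helper names are illustrative, not tifffile's actual API:

```python
import zlib
import lzma

# Stand-ins for imagecodecs.zlib_decode / lzma_decode, mimicking the
# fallback tifffile wires up internally when a "lite" imagecodecs build
# does not provide these codecs (names are illustrative only).
def zlib_encode(data, level=6, out=None):
    return zlib.compress(data, level)

def zlib_decode(data, out=None):
    return zlib.decompress(data)

def lzma_decode(data, out=None):
    return lzma.decompress(data)

# Round-trip a fake image segment through the stdlib fallback.
segment = zlib_encode(b'\x00\x01\x02\x03' * 16)
assert zlib_decode(segment) == b'\x00\x01\x02\x03' * 16
```

Since DEFLATE (COMPRESSION 8/32946) and LZMA (COMPRESSION 34925) map directly onto stdlib codecs, no binary dependency is strictly required for them, unlike LZW or JPEG.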

=====================================
PKG-INFO
=====================================
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: tifffile
-Version: 2021.1.8
+Version: 2021.2.1
 Summary: Read and write TIFF(r) files
 Home-page: https://www.lfd.uci.edu/~gohlke/
 Author: Christoph Gohlke
@@ -51,7 +51,7 @@ Description: Read and write TIFF(r) files
         
         :License: BSD 3-Clause
         
-        :Version: 2021.1.8
+        :Version: 2021.2.1
         
         Requirements
         ------------
@@ -60,7 +60,7 @@ Description: Read and write TIFF(r) files
         
         * `CPython 3.7.9, 3.8.7, 3.9.1 64-bit <https://www.python.org>`_
         * `Numpy 1.19.5 <https://pypi.org/project/numpy/>`_
-        * `Imagecodecs 2021.1.8 <https://pypi.org/project/imagecodecs/>`_
+        * `Imagecodecs 2021.1.28 <https://pypi.org/project/imagecodecs/>`_
           (required only for encoding or decoding LZW, JPEG, etc.)
         * `Matplotlib 3.3.3 <https://pypi.org/project/matplotlib/>`_
           (required only for plotting)
@@ -71,8 +71,21 @@ Description: Read and write TIFF(r) files
         
         Revisions
         ---------
+        2021.2.1
+            Pass 4384 tests.
+            Fix multi-threaded access of ZarrTiffStores using same TiffFile instance.
+            Use fallback zlib and lzma codecs with imagecodecs lite builds.
+            Open Olympus and Panasonic RAW files for parsing, albeit not supported.
+            Support X2 and X4 differencing found in DNG.
+            Support reading JPEG_LOSSY compression found in DNG.
+        2021.1.14
+            Try ImageJ series if OME series fails (#54)
+            Add option to use pages as chunks in ZarrFileStore (experimental).
+            Fix reading from file objects with no readinto function.
+        2021.1.11
+            Fix test errors on PyPy.
+            Fix decoding bitorder with imagecodecs >= 2021.1.11.
         2021.1.8
-            Pass 4376 tests.
             Decode float24 using imagecodecs >= 2021.1.8.
             Consolidate reading of segments if possible.
         2020.12.8

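The 2021.2.1 entry's "Fix multi-threaded access of ZarrTiffStores using same TiffFile instance" describes a classic shared-handle race: a seek() from one thread can interleave with another thread's read(). A self-contained sketch of the locking pattern (illustrative only; tifffile synchronizes its own FileHandle internally):

```python
import io
import threading

class SharedReader:
    """Serve positioned reads from one shared stream, safely."""

    def __init__(self, data):
        self._fh = io.BytesIO(data)
        self._lock = threading.Lock()

    def read_at(self, offset, size):
        # Without the lock, another thread could move the file position
        # between seek() and read(), returning bytes of the wrong chunk.
        with self._lock:
            self._fh.seek(offset)
            return self._fh.read(size)

reader = SharedReader(bytes(range(256)))
results = []

def worker(offset):
    results.append((offset, reader.read_at(offset, 4)))

threads = [threading.Thread(target=worker, args=(o,)) for o in (0, 64, 128)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each worker always gets the four bytes at its own offset, regardless of scheduling order.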

=====================================
README.rst
=====================================
@@ -41,7 +41,7 @@ For command line usage run ``python -m tifffile --help``
 
 :License: BSD 3-Clause
 
-:Version: 2021.1.8
+:Version: 2021.2.1
 
 Requirements
 ------------
@@ -50,7 +50,7 @@ This release has been tested with the following requirements and dependencies
 
 * `CPython 3.7.9, 3.8.7, 3.9.1 64-bit <https://www.python.org>`_
 * `Numpy 1.19.5 <https://pypi.org/project/numpy/>`_
-* `Imagecodecs 2021.1.8 <https://pypi.org/project/imagecodecs/>`_
+* `Imagecodecs 2021.1.28 <https://pypi.org/project/imagecodecs/>`_
   (required only for encoding or decoding LZW, JPEG, etc.)
 * `Matplotlib 3.3.3 <https://pypi.org/project/matplotlib/>`_
   (required only for plotting)
@@ -61,8 +61,21 @@ This release has been tested with the following requirements and dependencies
 
 Revisions
 ---------
+2021.2.1
+    Pass 4384 tests.
+    Fix multi-threaded access of ZarrTiffStores using same TiffFile instance.
+    Use fallback zlib and lzma codecs with imagecodecs lite builds.
+    Open Olympus and Panasonic RAW files for parsing, albeit not supported.
+    Support X2 and X4 differencing found in DNG.
+    Support reading JPEG_LOSSY compression found in DNG.
+2021.1.14
+    Try ImageJ series if OME series fails (#54)
+    Add option to use pages as chunks in ZarrFileStore (experimental).
+    Fix reading from file objects with no readinto function.
+2021.1.11
+    Fix test errors on PyPy.
+    Fix decoding bitorder with imagecodecs >= 2021.1.11.
 2021.1.8
-    Pass 4376 tests.
     Decode float24 using imagecodecs >= 2021.1.8.
     Consolidate reading of segments if possible.
 2020.12.8


=====================================
debian/changelog
=====================================
@@ -1,3 +1,10 @@
+tifffile (20210201-1) unstable; urgency=medium
+
+  * New upstream version 20210201
+  * Rediff patches
+
+ -- Ole Streicher <olebole at debian.org>  Sun, 07 Feb 2021 10:34:29 +0100
+
 tifffile (20210108-1) unstable; urgency=medium
 
   * New upstream version 20210108


=====================================
debian/patches/Disable-tests-that-require-remote-files.patch
=====================================
@@ -7,10 +7,10 @@ Subject: Disable tests that require remote files
  1 file changed, 10 insertions(+), 2 deletions(-)
 
 diff --git a/tests/test_tifffile.py b/tests/test_tifffile.py
-index 91d2912..661bb7b 100644
+index a586331..b075a97 100644
 --- a/tests/test_tifffile.py
 +++ b/tests/test_tifffile.py
-@@ -174,7 +174,7 @@ from tifffile.tifffile import (
+@@ -175,7 +175,7 @@ from tifffile.tifffile import (
  )
  
  # skip certain tests
@@ -19,7 +19,7 @@ index 91d2912..661bb7b 100644
  SKIP_EXTENDED = False
  SKIP_PUBLIC = False  # skip public files
  SKIP_PRIVATE = False  # skip private files
-@@ -441,6 +441,7 @@ def test_issue_imread_kwargs_legacy():
+@@ -445,6 +445,7 @@ def test_issue_imread_kwargs_legacy():
  
  
  @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
@@ -27,7 +27,7 @@ index 91d2912..661bb7b 100644
  def test_issue_infinite_loop():
      """Test infinite loop reading more than two tags of same code in IFD."""
      # Reported by D. Hughes on 2019.7.26
-@@ -1481,6 +1482,7 @@ def test_class_omexml_fail(shape, storedshape, dtype, axes, error):
+@@ -1519,6 +1520,7 @@ def test_class_omexml_fail(shape, storedshape, dtype, axes, error):
      ],
  )
  @pytest.mark.parametrize('metadata', ('axes', None))
@@ -35,7 +35,7 @@ index 91d2912..661bb7b 100644
  def test_class_omexml(axes, autoaxes, shape, storedshape, dimorder, metadata):
      """Test OmeXml class."""
      dtype = 'uint8'
-@@ -1570,6 +1572,7 @@ def test_class_omexml(axes, autoaxes, shape, storedshape, dimorder, metadata):
+@@ -1608,6 +1610,7 @@ def test_class_omexml(axes, autoaxes, shape, storedshape, dimorder, metadata):
          ),
      ],
  )
@@ -43,7 +43,7 @@ index 91d2912..661bb7b 100644
  def test_class_omexml_modulo(axes, shape, storedshape, sizetzc, dimorder):
      """Test OmeXml class with modulo dimensions."""
      dtype = 'uint8'
-@@ -1583,6 +1586,7 @@ def test_class_omexml_modulo(axes, shape, storedshape, sizetzc, dimorder):
+@@ -1621,6 +1624,7 @@ def test_class_omexml_modulo(axes, shape, storedshape, sizetzc, dimorder):
      assert_valid_omexml(omexml)
  
  
@@ -51,15 +51,15 @@ index 91d2912..661bb7b 100644
  def test_class_omexml_attributes():
      """Test OmeXml class with attributes and elements."""
      from uuid import uuid1  # noqa: delayed import
-@@ -1621,6 +1625,7 @@ def test_class_omexml_attributes():
-     assert 'TheC="0" TheZ="2" TheT="0" PositionZ="4.0"' in omexml
+@@ -1661,6 +1665,7 @@ def test_class_omexml_attributes():
+     assert '\n  ' in str(omexml)
  
  
 +@pytest.mark.skip(reason="remote connection not available")
  def test_class_omexml_multiimage():
      """Test OmeXml class with multiple images."""
      omexml = OmeXml(description='multiimage')
-@@ -2514,7 +2519,7 @@ def test_func_pformat_xml():
+@@ -2554,7 +2559,7 @@ def test_func_pformat_xml():
      )
  
      assert pformat(value, height=8, width=60) == (
@@ -68,7 +68,7 @@ index 91d2912..661bb7b 100644
  <Dimap_Document name="band2.dim">
   <Metadata_Id>
    <METADATA_FORMAT version="2.12.1">DIMAP</METADATA_FORMAT>
-@@ -2730,6 +2735,7 @@ def assert_filehandle(fh, offset=0):
+@@ -2772,6 +2777,7 @@ def assert_filehandle(fh, offset=0):
          )
  
  
@@ -76,7 +76,7 @@ index 91d2912..661bb7b 100644
  def test_filehandle_seekable():
      """Test FileHandle must be seekable."""
      import ssl
-@@ -11638,6 +11644,7 @@ def test_write_imagej_raw():
+@@ -11762,6 +11768,7 @@ def test_write_imagej_raw():
          ((2, 3, 4, 5, 6, 7, 32, 32, 3), 'TQCPZRYXS'),
      ],
  )
@@ -84,7 +84,7 @@ index 91d2912..661bb7b 100644
  def test_write_ome(shape, axes):
      """Test write OME-TIFF format."""
      metadata = {'axes': axes} if axes is not None else {}
-@@ -11810,6 +11817,7 @@ def test_write_ome_methods(method):
+@@ -11934,6 +11941,7 @@ def test_write_ome_methods(method):
  
  
  @pytest.mark.parametrize('contiguous', [True, False])


=====================================
debian/patches/Don-t-install-lsm2bin.patch
=====================================
@@ -8,7 +8,7 @@ This seems not an end-user script, and we don't have a manpage for it
  1 file changed, 2 deletions(-)
 
 diff --git a/setup.py b/setup.py
-index 11f4aa7..f0e10b1 100644
+index 058f7eb..e5527d7 100644
 --- a/setup.py
 +++ b/setup.py
 @@ -106,8 +106,6 @@ setup(


=====================================
setup.py
=====================================
@@ -82,11 +82,11 @@ setup(
     python_requires='>=3.7',
     install_requires=[
         'numpy>=1.15.1',
-        # 'imagecodecs>=2021.1.8',
+        # 'imagecodecs>=2021.1.11',
     ],
     extras_require={
         'all': [
-            'imagecodecs>=2021.1.8',
+            'imagecodecs>=2021.1.11',
             'matplotlib>=3.2',
             'lxml',
             # 'zarr>=2.5.0'


=====================================
tests/conftest.py
=====================================
@@ -3,6 +3,11 @@
 import os
 import sys
 
+if os.environ.get('VSCODE_CWD'):
+    # work around pytest not using PYTHONPATH in VSCode
+    sys.path.insert(
+        0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
+    )
 
 if os.environ.get('SKIP_CODECS', None):
     sys.modules['imagecodecs'] = None

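The `SKIP_CODECS` hook in conftest.py above relies on documented Python import machinery: setting a `sys.modules` entry to `None` makes any later import of that name raise `ImportError`, which lets the test suite simulate a missing imagecodecs without uninstalling it. Demonstrated here with a stand-in stdlib module:

```python
import sys

# Block 'colorsys' the same way conftest.py blocks 'imagecodecs':
# a None entry in sys.modules halts the import with ImportError.
sys.modules['colorsys'] = None

try:
    import colorsys  # noqa: F401
    blocked = False
except ImportError:
    blocked = True

# Undo the block so the rest of the process can import the module again.
del sys.modules['colorsys']
```

This works even if the module was imported earlier in the process, because the import system consults `sys.modules` before anything else.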

=====================================
tests/test_tifffile.py
=====================================
@@ -42,7 +42,7 @@ Private data files are not available due to size and copyright restrictions.
 
 :License: BSD 3-Clause
 
-:Version: 2021.1.8
+:Version: 2021.2.1
 
 """
 
@@ -51,6 +51,7 @@ import datetime
 import glob
 import json
 import math
+import mmap
 import os
 import pathlib
 import random
@@ -181,8 +182,9 @@ SKIP_PRIVATE = False  # skip private files
 SKIP_VALIDATE = True  # skip validate written files with jhove
 SKIP_CODECS = False
 SKIP_ZARR = False
+SKIP_PYPY = 'PyPy' in sys.version
 SKIP_BE = sys.byteorder == 'big'
-REASON = 'just skip it'
+REASON = 'skipped'
 
 if sys.maxsize < 2 ** 32:
     SKIP_LARGE = True
@@ -329,13 +331,13 @@ def assert_decode_method(page, image=None):
         assert image.reshape(page.shaped)[index] == strile[0, 0, 0, 0]
 
 
-def assert_aszarr_method(obj, image=None, **kwargs):
+def assert_aszarr_method(obj, image=None, chunkmode=None, **kwargs):
     """Assert aszarr returns same data as asarray."""
     if SKIP_ZARR:
         return
     if image is None:
         image = obj.asarray(**kwargs)
-    with obj.aszarr(**kwargs) as store:
+    with obj.aszarr(chunkmode=chunkmode, **kwargs) as store:
         data = zarr.open(store, mode='r')
         if isinstance(data, zarr.Group):
             data = data[0]
@@ -349,7 +351,9 @@ class TempFileName:
     def __init__(self, name=None, ext='.tif', remove=False):
         self.remove = remove or TEMP_DIR == tempfile.gettempdir()
         if not name:
-            self.name = tempfile.NamedTemporaryFile(prefix='test_').name
+            fh = tempfile.NamedTemporaryFile(prefix='test_')
+            self.name = fh.name
+            fh.close()
         else:
             self.name = os.path.join(TEMP_DIR, f'test_{name}{ext}')
 
@@ -1041,6 +1045,40 @@ def test_issue_write_separated():
             assert_array_equal(page.asarray(), extrasample)
 
 
+@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
+def test_issue_mmap():
+    """Test reading from mmap object with no readinto function."""
+    fname = public_file('OME/bioformats-artificial/4D-series.ome.tiff')
+    with open(fname, 'rb') as fh:
+        mm = mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ)
+        assert_array_equal(imread(mm), imread(fname))
+        mm.close()
+
+
+@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
+def test_issue_micromanager(caplog):
+    """Test fallback to ImageJ metadata if OME series fails."""
+    # https://github.com/cgohlke/tifffile/issues/54
+    # https://forum.image.sc/t/47567/9
+    # OME-XML does not contain reference to master file
+    # file has corrupt MicroManager DisplaySettings metadata
+    fname = private_file(
+        'OME/'
+        'image_stack_tpzc_50tp_2p_5z_3c_512k_1_MMStack_2-Pos001_000.ome.tif'
+    )
+    with TiffFile(fname) as tif:
+        assert len(tif.pages) == 750
+        assert len(tif.series) == 1
+        assert 'OME series: not an ome-tiff master file' in caplog.text
+        assert tif.is_micromanager
+        assert tif.is_ome
+        assert tif.is_imagej
+        assert tif.micromanager_metadata['DisplaySettings'] is None
+        assert 'read_json: invalid JSON' in caplog.text
+        series = tif.series[0]
+        assert series.shape == (50, 5, 3, 256, 256)
+
+
 ###############################################################################
 
 # Test specific functions and classes
@@ -1306,13 +1344,13 @@ def test_class_tifftags():
 def test_class_tifftagregistry():
     """Test TiffTagRegistry."""
     tags = TIFF.TAGS
-    assert len(tags) == 624
+    assert len(tags) == 632
     assert tags[11] == 'ProcessingSoftware'
     assert tags['ProcessingSoftware'] == 11
     assert tags.getall(11) == ['ProcessingSoftware']
     assert tags.getall('ProcessingSoftware') == [11]
     tags.add(11, 'ProcessingSoftware')
-    assert len(tags) == 624
+    assert len(tags) == 632
 
     # one code with two names
     assert 34853 in tags
@@ -1325,7 +1363,7 @@ def test_class_tifftagregistry():
     assert tags.getall('GPSTag') == [34853]
 
     del tags[34853]
-    assert len(tags) == 622
+    assert len(tags) == 630
     assert 34853 not in tags
     assert 'GPSTag' not in tags
     assert 'OlympusSIS2' not in tags
@@ -1351,7 +1389,7 @@ def test_class_tifftagregistry():
     assert tags.getall(41483) == ['FlashEnergy']
 
     del tags['FlashEnergy']
-    assert len(tags) == 622
+    assert len(tags) == 630
     assert 37387 not in tags
     assert 41483 not in tags
     assert 'FlashEnergy' not in tags
@@ -1612,13 +1650,15 @@ def test_class_omexml_attributes():
 
     omexml = OmeXml(**metadata)
     omexml.addimage('uint16', (3, 32, 32, 3), (3, 1, 1, 32, 32, 3), **metadata)
+    xml = omexml.tostring()
+    assert uuid in xml
+    assert 'SignificantBits="12"' in xml
+    assert 'SamplesPerPixel="3" Name="ChannelName"' in xml
+    assert 'TheC="0" TheZ="2" TheT="0" PositionZ="4.0"' in xml
+    if SKIP_PYPY:
+        pytest.xfail('lxml bug?')
+    assert_valid_omexml(xml)
     assert '\n  ' in str(omexml)
-    omexml = omexml.tostring()
-    assert_valid_omexml(omexml)
-    assert uuid in omexml
-    assert 'SignificantBits="12"' in omexml
-    assert 'SamplesPerPixel="3" Name="ChannelName"' in omexml
-    assert 'TheC="0" TheZ="2" TheT="0" PositionZ="4.0"' in omexml
 
 
 def test_class_omexml_multiimage():
@@ -2581,7 +2621,7 @@ def test_func_create_output_asarray(out, key):
     """Test create_output function in context of asarray."""
     data = random_data('uint16', (5, 219, 301))
 
-    with TempFileName('out') as fname:
+    with TempFileName(f'out_{key}_{out}') as fname:
         imwrite(fname, data)
         # assert file
         with TiffFile(fname) as tif:
@@ -2631,7 +2671,9 @@ def test_func_create_output_asarray(out, key):
                 del image
             elif out == 'name':
                 # memmap in specified file
-                with TempFileName('out', ext='.memmap') as fileout:
+                with TempFileName(
+                    f'out_{key}_{out}', ext='.memmap'
+                ) as fileout:
                     image = obj.asarray(out=fileout)
                     assert isinstance(image, numpy.core.memmap)
                     assert_array_equal(dat, image)
@@ -3237,6 +3279,56 @@ def test_read_gimp_f2():
         assert__str__(tif)
 
 
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
+def test_read_dng_jpeglossy():
+    """Test read JPEG_LOSSY in DNG."""
+    fname = private_file('DNG/Adobe DNG Converter.dng')
+    with TiffFile(fname) as tif:
+        assert len(tif.pages) == 1
+        assert len(tif.series) == 6
+        for series in tif.series:
+            image = series.asarray()
+            assert_aszarr_method(series, image)
+        assert__str__(tif)
+
+
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
+@pytest.mark.parametrize('fp', ['fp16', 'fp24', 'fp32'])
+def test_read_dng_floatpredx2(fp):
+    """Test read FLOATINGPOINTX2 predictor in DNG."""
+    # <https://raw.pixls.us/data/Canon/EOS%205D%20Mark%20III/>
+    fname = private_file(f'DNG/fpx2/hdrmerge-bayer-{fp}-w-pred-deflate.dng')
+    with TiffFile(fname) as tif:
+        assert len(tif.pages) == 1
+        assert len(tif.series) == 3
+        page = tif.pages[0].pages[0]
+        assert page.compression == ADOBE_DEFLATE
+        assert page.photometric == CFA
+        assert page.imagewidth == 5920
+        assert page.imagelength == 3950
+        assert page.sampleformat == 3
+        assert page.bitspersample == int(fp[2:])
+        assert page.samplesperpixel == 1
+        assert page.predictor == 34894
+        if fp == 'fp24':
+            with pytest.raises(NotImplementedError):
+                image = page.asarray()
+        else:
+            image = page.asarray()
+            assert_aszarr_method(page, image)
+        assert__str__(tif)
+
+
+@pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
+@pytest.mark.parametrize('fname', ['sample1.orf', 'sample1.rw2'])
+def test_read_rawformats(fname, caplog):
+    """Test parse unsupported RAW formats."""
+    fname = private_file(f'RAWformats/{fname}')
+    with TiffFile(fname) as tif:
+        assert 'RAW format' in caplog.text
+        assert__str__(tif)
+
+
 @pytest.mark.skipif(SKIP_PRIVATE, reason=REASON)
 def test_read_iss_vista():
     """Test read bogus imagedepth tag by ISS Vista."""
@@ -6613,6 +6705,7 @@ def test_read_ome_multifile():
         assert__str__(tif)
         # test aszarr
         assert_aszarr_method(tif, data)
+        assert_aszarr_method(tif, data, chunkmode='page')
         del data
         # assert other files are still closed after ZarrFileStore.close
         for page in tif.series[0].pages:
@@ -6664,6 +6757,7 @@ def test_read_ome_multifile_missing(caplog):
         assert data.dtype.name == 'uint8'
         assert data[1, 42, 9, 426, 272] == 123
         assert_aszarr_method(tif, data)
+        assert_aszarr_method(tif, data, chunkmode='page')
         del data
         assert__str__(tif)
 
@@ -6700,6 +6794,7 @@ def test_read_ome_rgb():
         assert data.dtype.name == 'uint8'
         assert data[1, 158, 428] == 253
         assert_aszarr_method(tif, data)
+        assert_aszarr_method(tif, data, chunkmode='page')
         assert__str__(tif)
 
 
@@ -6733,6 +6828,7 @@ def test_read_ome_samplesperpixel():
         assert data.dtype.name == 'uint8'
         assert tuple(data[5, :, 191, 449]) == (253, 0, 28)
         assert_aszarr_method(tif, data)
+        assert_aszarr_method(tif, data, chunkmode='page')
         assert__str__(tif)
 
 
@@ -6768,6 +6864,7 @@ def test_read_ome_float_modulo_attributes():
         assert data.dtype.name == 'uint16'
         assert data[1, 158, 428] == 51
         assert_aszarr_method(tif, data)
+        assert_aszarr_method(tif, data, chunkmode='page')
         assert__str__(tif)
 
 
@@ -6803,6 +6900,7 @@ def test_read_ome_cropped(caplog):
         assert data.dtype.name == 'uint16'
         assert data[4, 9, 1, 175, 123] == 9605
         assert_aszarr_method(tif, data)
+        assert_aszarr_method(tif, data, chunkmode='page')
         del data
         assert__str__(tif)
 
@@ -6836,6 +6934,7 @@ def test_read_ome_corrupted_page(caplog):
         assert data.dtype.name == 'uint16'
         assert tuple(data[:, 2684, 2684]) == (496, 657, 7106, 469)
         assert_aszarr_method(tif, data)
+        assert_aszarr_method(tif, data, chunkmode='page')
         del data
         assert__str__(tif)
 
@@ -6946,6 +7045,7 @@ def test_read_ome_jpeg2000_be():
         assert data.dtype.name == 'uint16'
         assert data[0, 0] == 1904
         assert_aszarr_method(page, data)
+        assert_aszarr_method(page, data, chunkmode='page')
         assert__str__(tif)
 
 
@@ -6983,6 +7083,7 @@ def test_read_andor_light_sheet_512p():
         assert data.dtype.name == 'uint16'
         assert round(abs(data[50, 256, 256] - 703), 7) == 0
         assert_aszarr_method(tif, data)
+        assert_aszarr_method(tif, data, chunkmode='page')
         assert__str__(tif, 0)
 
 
@@ -7019,6 +7120,7 @@ def test_read_nih_morph():
         assert data.dtype.name == 'uint8'
         assert data[195, 144] == 41
         assert_aszarr_method(tif, data)
+        assert_aszarr_method(tif, data, chunkmode='page')
         assert__str__(tif)
 
 
@@ -7093,6 +7195,7 @@ def test_read_nih_scala_media():
         assert data.dtype.name == 'uint8'
         assert data[35, 35, 65] == 171
         assert_aszarr_method(tif, data)
+        assert_aszarr_method(tif, data, chunkmode='page')
         assert__str__(tif)
 
 
@@ -7135,6 +7238,7 @@ def test_read_imagej_rrggbb():
         assert tuple(data[:, 15, 15]) == (812, 1755, 648)
         assert_decode_method(page)
         assert_aszarr_method(series, data)
+        assert_aszarr_method(series, data, chunkmode='page')
         assert__str__(tif, 0)
 
 
@@ -7173,6 +7277,7 @@ def test_read_imagej_focal1():
         assert data.dtype.name == 'uint8'
         assert data[102, 216, 212] == 120
         assert_aszarr_method(series, data)
+        assert_aszarr_method(series, data, chunkmode='page')
         assert__str__(tif, 0)
 
 
@@ -7208,6 +7313,7 @@ def test_read_imagej_hela_cells():
         assert data.dtype.name == 'uint16'
         assert tuple(data[255, 336]) == (440, 378, 298)
         assert_aszarr_method(series, data)
+        assert_aszarr_method(series, data, chunkmode='page')
         assert__str__(tif)
 
 
@@ -7242,6 +7348,7 @@ def test_read_imagej_flybrain():
         assert data.dtype.name == 'uint8'
         assert tuple(data[18, 108, 97]) == (165, 157, 0)
         assert_aszarr_method(series, data)
+        assert_aszarr_method(series, data, chunkmode='page')
         assert__str__(tif)
 
 
@@ -7289,6 +7396,7 @@ def test_read_imagej_confocal_series():
         assert tif.pages.pages[2] == 8001073
         assert tif.pages.pages[-1] == 8008687
         assert_aszarr_method(series, data)
+        assert_aszarr_method(series, data, chunkmode='page')
         assert__str__(tif)
 
 
@@ -7516,6 +7624,7 @@ def test_read_imagej_invalid_metadata(caplog):
         assert data.dtype.name == 'uint16'
         assert data[94, 34] == 1257
         assert_aszarr_method(series, data)
+        assert_aszarr_method(series, data, chunkmode='page')
         assert__str__(tif)
         del data
 
@@ -7590,6 +7699,7 @@ def test_read_fluoview_lsp1_v_laser():
         assert data.dtype.name == 'uint16'
         assert round(abs(data[1, 36, 128, 128] - 824), 7) == 0
         assert_aszarr_method(series, data)
+        assert_aszarr_method(series, data, chunkmode='page')
         assert__str__(tif)
 
 
@@ -7658,6 +7768,7 @@ def test_read_metaseries():
         assert data.dtype.name == 'uint16'
         assert data[256, 256] == 1917
         assert_aszarr_method(series, data)
+        assert_aszarr_method(series, data, chunkmode='page')
         assert__str__(tif)
 
 
@@ -7698,6 +7809,7 @@ def test_read_metaseries_g4d7r():
         assert round(abs(data[512, 2856] - 4095), 7) == 0
         if not SKIP_LARGE:
             assert_aszarr_method(series, data)
+            assert_aszarr_method(series, data, chunkmode='page')
         assert__str__(tif)
 
 
@@ -7751,6 +7863,7 @@ def test_read_mdgel_rat():
         assert data.dtype.name == 'float32'
         assert round(abs(data[260, 740] - 399.1728515625), 7) == 0
         assert_aszarr_method(series, data)
+        assert_aszarr_method(series, data, chunkmode='page')
         assert__str__(tif)
 
 
@@ -7785,6 +7898,7 @@ def test_read_mediacy_imagepro():
         assert data.dtype.name == 'uint8'
         assert round(abs(data[120, 34] - 4), 7) == 0
         assert_aszarr_method(series, data)
+        assert_aszarr_method(series, data, chunkmode='page')
         assert__str__(tif)
 
 
@@ -7810,6 +7924,7 @@ def test_read_pilatus_100k():
         assert attr['Tau'] == 1.991e-07
         assert attr['Silicon'] == 0.000320
         assert_aszarr_method(page)
+        assert_aszarr_method(page, chunkmode='page')
         assert__str__(tif)
 
 
@@ -7866,6 +7981,7 @@ def test_read_epics_attrib():
             tags['epicsTSSec'], tags['epicsTSNsec']
         ) == datetime.datetime(2015, 6, 2, 11, 31, 56, 103746)
         assert_aszarr_method(page)
+        assert_aszarr_method(page, chunkmode='page')
         assert__str__(tif)
 
 
@@ -7995,6 +8111,7 @@ def test_read_qpi():
         assert image.dtype == 'uint8'
         assert image[300, 400, 1] == 48
         assert_aszarr_method(tif, image, series=1)
+        assert_aszarr_method(tif, image, series=1, chunkmode='page')
         assert__str__(tif)
 
 
@@ -8028,6 +8145,7 @@ def test_read_philips():
         assert image.shape == (2789, 2677, 3)
         assert image[300, 400, 1] == 206
         assert_aszarr_method(series, image, level=5)
+        assert_aszarr_method(series, image, level=5, chunkmode='page')
         assert__str__(tif)
 
 
@@ -8098,6 +8216,7 @@ def test_read_sis():
         assert sis['name'] == 'Hela-Zellen'
         assert sis['magnification'] == 60.0
         assert_aszarr_method(tif, data)
+        assert_aszarr_method(tif, data, chunkmode='page')
         assert__str__(tif)
 
 
@@ -8867,6 +8986,7 @@ def test_write_truncate():
             data = tif.asarray()
             assert data.shape == shape
             assert_aszarr_method(tif, data)
+            assert_aszarr_method(tif, data, chunkmode='page')
             assert__str__(tif)
 
 
@@ -8891,6 +9011,7 @@ def test_write_is_shaped():
             assert page.is_shaped
             assert page.description == descr
             assert_aszarr_method(page)
+            assert_aszarr_method(page, chunkmode='page')
             assert__str__(tif)
 
 
@@ -9599,7 +9720,7 @@ def test_write_rowsperstrip():
     """Test write rowsperstrip without compression."""
     data = WRITE_DATA
     with TempFileName('rowsperstrip') as fname:
-        imwrite(fname, data, rowsperstrip=32, contiguous=False, metadata=False)
+        imwrite(fname, data, rowsperstrip=32, contiguous=False, metadata=None)
         assert_valid_tiff(fname)
         with TiffFile(fname) as tif:
             assert len(tif.pages) == 1
@@ -9713,6 +9834,7 @@ def test_write_pixel():
             image = tif.asarray()
             assert_array_equal(data, image)
             assert_aszarr_method(tif, image)
+            assert_aszarr_method(tif, image, chunkmode='page')
             assert__str__(tif)
 
 
@@ -9757,6 +9879,7 @@ def test_write_2d_as_rgb():
             image = tif.asarray()
             assert_array_equal(data, image)
             assert_aszarr_method(tif, image)
+            assert_aszarr_method(tif, image, chunkmode='page')
             assert__str__(tif)
 
 
@@ -10930,6 +11053,7 @@ def test_write_volumetric_striped_png():
             image = tif.asarray()
             assert_array_equal(data, image)
             assert_aszarr_method(tif, image)
+            assert_aszarr_method(tif, image, chunkmode='page')
             assert__str__(tif)
 
 

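The new `test_issue_mmap` above exercises the 2021.1.14 fix for "reading from file objects with no readinto function": `mmap` objects expose `read()` but not `readinto()`. The fallback amounts to reading into a temporary and copying. A sketch under assumed names (tifffile's FileHandle implements this internally):

```python
import io

def readinto_compat(fh, buffer):
    """Fill ``buffer`` from ``fh``, using readinto() when available."""
    readinto = getattr(fh, 'readinto', None)
    if readinto is not None:
        return readinto(buffer)
    # Fallback for objects that only provide read(): copy into the buffer.
    data = fh.read(len(buffer))
    buffer[: len(data)] = data
    return len(data)

class ReadOnly:
    """A minimal file-like object without readinto(), like mmap."""

    def __init__(self, data):
        self._stream = io.BytesIO(data)

    def read(self, size=-1):
        return self._stream.read(size)

buf = bytearray(4)
n = readinto_compat(ReadOnly(b'II*\x00rest'), buf)
assert n == 4 and bytes(buf) == b'II*\x00'  # little-endian TIFF magic
```

The same call works unchanged on objects that do have `readinto()`, such as `io.BytesIO`, so callers need no type checks.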

=====================================
tifffile.egg-info/PKG-INFO
=====================================
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: tifffile
-Version: 2021.1.8
+Version: 2021.2.1
 Summary: Read and write TIFF(r) files
 Home-page: https://www.lfd.uci.edu/~gohlke/
 Author: Christoph Gohlke
@@ -51,7 +51,7 @@ Description: Read and write TIFF(r) files
         
         :License: BSD 3-Clause
         
-        :Version: 2021.1.8
+        :Version: 2021.2.1
         
         Requirements
         ------------
@@ -60,7 +60,7 @@ Description: Read and write TIFF(r) files
         
         * `CPython 3.7.9, 3.8.7, 3.9.1 64-bit <https://www.python.org>`_
         * `Numpy 1.19.5 <https://pypi.org/project/numpy/>`_
-        * `Imagecodecs 2021.1.8 <https://pypi.org/project/imagecodecs/>`_
+        * `Imagecodecs 2021.1.28 <https://pypi.org/project/imagecodecs/>`_
           (required only for encoding or decoding LZW, JPEG, etc.)
         * `Matplotlib 3.3.3 <https://pypi.org/project/matplotlib/>`_
           (required only for plotting)
@@ -71,8 +71,21 @@ Description: Read and write TIFF(r) files
         
         Revisions
         ---------
+        2021.2.1
+            Pass 4384 tests.
+            Fix multi-threaded access of ZarrTiffStores using same TiffFile instance.
+            Use fallback zlib and lzma codecs with imagecodecs lite builds.
+            Open Olympus and Panasonic RAW files for parsing, albeit not supported.
+            Support X2 and X4 differencing found in DNG.
+            Support reading JPEG_LOSSY compression found in DNG.
+        2021.1.14
+            Try ImageJ series if OME series fails (#54)
+            Add option to use pages as chunks in ZarrFileStore (experimental).
+            Fix reading from file objects with no readinto function.
+        2021.1.11
+            Fix test errors on PyPy.
+            Fix decoding bitorder with imagecodecs >= 2021.1.11.
         2021.1.8
-            Pass 4376 tests.
             Decode float24 using imagecodecs >= 2021.1.8.
             Consolidate reading of segments if possible.
         2020.12.8


=====================================
tifffile.egg-info/requires.txt
=====================================
@@ -1,6 +1,6 @@
 numpy>=1.15.1
 
 [all]
-imagecodecs>=2021.1.8
+imagecodecs>=2021.1.11
 matplotlib>=3.2
 lxml


=====================================
tifffile/tifffile.py
=====================================
@@ -71,7 +71,7 @@ For command line usage run ``python -m tifffile --help``
 
 :License: BSD 3-Clause
 
-:Version: 2021.1.8
+:Version: 2021.2.1
 
 Requirements
 ------------
@@ -80,7 +80,7 @@ This release has been tested with the following requirements and dependencies
 
 * `CPython 3.7.9, 3.8.7, 3.9.1 64-bit <https://www.python.org>`_
 * `Numpy 1.19.5 <https://pypi.org/project/numpy/>`_
-* `Imagecodecs 2021.1.8 <https://pypi.org/project/imagecodecs/>`_
+* `Imagecodecs 2021.1.28 <https://pypi.org/project/imagecodecs/>`_
   (required only for encoding or decoding LZW, JPEG, etc.)
 * `Matplotlib 3.3.3 <https://pypi.org/project/matplotlib/>`_
   (required only for plotting)
@@ -91,8 +91,21 @@ This release has been tested with the following requirements and dependencies
 
 Revisions
 ---------
+2021.2.1
+    Pass 4384 tests.
+    Fix multi-threaded access of ZarrTiffStores using same TiffFile instance.
+    Use fallback zlib and lzma codecs with imagecodecs lite builds.
+    Open Olympus and Panasonic RAW files for parsing, albeit not supported.
+    Support X2 and X4 differencing found in DNG.
+    Support reading JPEG_LOSSY compression found in DNG.
+2021.1.14
+    Try ImageJ series if OME series fails (#54).
+    Add option to use pages as chunks in ZarrFileStore (experimental).
+    Fix reading from file objects with no readinto function.
+2021.1.11
+    Fix test errors on PyPy.
+    Fix decoding bitorder with imagecodecs >= 2021.1.11.
 2021.1.8
-    Pass 4376 tests.
     Decode float24 using imagecodecs >= 2021.1.8.
     Consolidate reading of segments if possible.
 2020.12.8
@@ -614,48 +627,59 @@ as numpy or zarr arrays:
 
 """
 
-__version__ = '2021.1.8'
+__version__ = '2021.2.1'
 
 __all__ = (
-    'imwrite',
-    'imread',
-    'imshow',
-    'memmap',
-    'lsm2bin',
-    'TiffFileError',
+    'OmeXml',
+    'OmeXmlError',
+    'TIFF',
     'TiffFile',
-    'TiffWriter',
-    'TiffReader',
-    'TiffSequence',
+    'TiffFileError',
+    'TiffFrame',
     'TiffPage',
     'TiffPageSeries',
-    'TiffFrame',
+    'TiffReader',
+    'TiffSequence',
     'TiffTag',
-    'TIFF',
-    'OmeXmlError',
-    'OmeXml',
+    'TiffTags',
+    'TiffWriter',
+    'ZarrFileStore',
+    'ZarrStore',
+    'ZarrTiffStore',
+    'imread',
+    'imshow',
+    'imwrite',
+    'lsm2bin',
+    'memmap',
     'read_micromanager_metadata',
     'read_scanimage_metadata',
+    'tiffcomment',
     # utility classes and functions used by oiffile, czifile, etc.
+    'FileCache',
     'FileHandle',
     'FileSequence',
     'Timer',
+    'askopenfilename',
+    'astype',
+    'create_output',
+    'enumarg',
+    'enumstr',
+    'format_size',
     'lazyattr',
+    'matlabstr2py',
     'natural_sorted',
+    'nullfunc',
+    'parse_kwargs',
+    'pformat',
+    'product',
+    'repeat_nd',
+    'reshape_axes',
+    'reshape_nd',
+    'squeeze_axes',
     'stripnull',
     'transpose_axes',
-    'squeeze_axes',
-    'create_output',
-    'repeat_nd',
-    'format_size',
-    'astype',
-    'product',
-    'xml2dict',
-    'pformat',
-    'nullfunc',
     'update_kwargs',
-    'parse_kwargs',
-    'askopenfilename',
+    'xml2dict',
     '_app_show',
     # deprecated
     'imsave',
@@ -1128,7 +1152,7 @@ class TiffWriter:
         """Write numpy ndarray to a series of TIFF pages.
 
         The ND image data are written to a series of TIFF pages/IFDs.
-        By default, metadata in JSON, ImageJ or OME-XML format are written
+        By default, metadata in JSON, ImageJ, or OME-XML format are written
         to the ImageDescription tag of the first page to describe the series
         such that the image data can later be read back as a ndarray of same
         shape.
@@ -1141,10 +1165,10 @@ class TiffWriter:
         If 'shape' and 'dtype' are specified instead of 'data', an empty array
         is saved. This option cannot be used with compression, predictors,
         packed integers, bilevel images, or multiple tiles.
-        If 'shape', 'dtype', and 'tile' are specified, 'data' must be a
+        If 'shape', 'dtype', and 'tile' are specified, 'data' must be an
         iterable of all tiles in the image.
-        If 'shape' and 'dtype' are specified but not 'tile', 'data' must be a
-        iterable of all single planes in the image.
+        If 'shape', 'dtype', and 'data' are specified but not 'tile', 'data'
+        must be an iterable of all single planes in the image.
         Image data are written uncompressed in one strip per plane by default.
         Dimensions larger than 2 to 4 (depending on photometric mode, planar
         configuration, and volumetric mode) are flattened and saved as separate
@@ -1558,7 +1582,16 @@ class TiffWriter:
             predictortag = 1
 
         if predictor:
-            if compressiontag in (7, 33003, 33005, 34712, 34933, 34934, 50001):
+            if compressiontag in (
+                7,
+                33003,
+                33005,
+                34712,
+                34892,
+                34933,
+                34934,
+                50001,
+            ):
                 # disable predictor for JPEG, JPEG2000, WEBP, PNG, JPEGXR
                 predictor = False
             elif datadtype.kind in 'iu':
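The expanded guard above adds the DNG lossy-JPEG tag (34892) to the set of compressions for which the TIFF predictor is disabled. A simplified sketch of that check, restricted to the integer-predictor case (the names here are hypothetical helpers, not tifffile API):

```python
# Compression schemes that embed their own transforms; horizontal
# differencing is skipped for them: JPEG (7), JPEG2000 (33003/33005/34712),
# DNG lossy JPEG (34892), PNG (34933), JPEGXR (34934), WEBP (50001).
NO_PREDICTOR_TAGS = frozenset({7, 33003, 33005, 34712, 34892, 34933, 34934, 50001})

def predictor_allowed(compressiontag, dtype_kind):
    """Return whether horizontal differencing may precede compression."""
    if compressiontag in NO_PREDICTOR_TAGS:
        return False
    # horizontal differencing applies to integer samples; floating-point
    # data uses the float predictor instead (not modeled here)
    return dtype_kind in 'iu'
```

This mirrors the `compressiontag in (...)` / `datadtype.kind in 'iu'` branches above, but only as an illustration of the rule.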
@@ -2892,6 +2925,13 @@ class TiffFile:
                     self.tiff = TIFF.NDPI_LE
                 else:
                     self.tiff = TIFF.CLASSIC_LE
+            elif version == 0x55 or version == 0x4F52 or version == 0x5352:
+                # Panasonic or Olympus RAW
+                log_warning(f'RAW format 0x{version:04X} not supported')
+                if byteorder == '>':
+                    self.tiff = TIFF.CLASSIC_BE
+                else:
+                    self.tiff = TIFF.CLASSIC_LE
             else:
                 raise TiffFileError(f'invalid TIFF version {version}')
 
@@ -3084,12 +3124,12 @@ class TiffFile:
             result.shape = (-1,) + pages[0].shape
         return result
 
-    def aszarr(self, key=None, series=None, level=None):
+    def aszarr(self, key=None, series=None, level=None, **kwargs):
         """Return image data from selected TIFF page(s) as zarr storage."""
         if not self.pages:
             raise NotImplementedError('empty zarr arrays not supported')
         if key is None and series is None:
-            return self.series[0].aszarr(level=level)
+            return self.series[0].aszarr(level=level, **kwargs)
         if series is None:
             pages = self.pages
         else:
@@ -3098,10 +3138,10 @@ class TiffFile:
             except (KeyError, TypeError):
                 pass
             if key is None:
-                return series.aszarr(level=level)
+                return series.aszarr(level=level, **kwargs)
             pages = series.pages
         if isinstance(key, (int, numpy.integer)):
-            return pages[key].aszarr()
+            return pages[key].aszarr(**kwargs)
         raise TypeError('key must be an integer index')
 
     @lazyattr
@@ -3136,6 +3176,12 @@ class TiffFile:
         ):
             if getattr(self, 'is_' + name, False):
                 series = getattr(self, '_series_' + name)()
+                if not series and name == 'ome' and self.is_imagej:
+                    # try ImageJ series if OME series fails.
+                    # clear pages cache since _series_ome() might leave some
+                    # frames without keyframe
+                    self.pages._clear()
+                    continue
                 break
         self.pages.useframes = useframes
         self.pages.keyframe = keyframe
@@ -5808,8 +5854,8 @@ class TiffPage:
                 data = numpy.pad(data, padwidth, constant_values=nodata)
                 return data, data.shape
 
-        if self.compression in (6, 7):
-            # COMPRESSION.JPEG needs special handling
+        if self.compression in (6, 7, 34892):
+            # JPEG needs special handling
             if self.fillorder == 2:
                 log_warning(
                     f'TiffPage {self.index}: disabling LSB2MSB for JPEG'
@@ -5925,6 +5971,10 @@ class TiffPage:
 
         elif self.bitspersample == 24 and dtype.char == 'f':
             # float24
+            if unpredict is not None:
+                # floatpred_decode requires numpy.float24, which does not exist
+                raise NotImplementedError('unpredicting float24 not supported')
+
             def unpack(data, byteorder=self.parent.byteorder):
                 # return numpy.float32 array from float24
                 return float24_decode(data, byteorder)
@@ -5945,7 +5995,7 @@ class TiffPage:
                     data, shape = pad(data, shape)
                 return data, index, shape
             if self.fillorder == 2:
-                data = bitorder_decode(data, out=data)
+                data = bitorder_decode(data)
             if decompress is not None:
                 # TODO: calculate correct size for packed integers
                 size = shape[0] * shape[1] * shape[2] * shape[3]
@@ -5977,7 +6027,7 @@ class TiffPage:
             _fullsize = keyframe.is_tiled
 
         decodeargs = {'_fullsize': bool(_fullsize)}
-        if keyframe.compression in (6, 7):  # COMPRESSION.JPEG
+        if keyframe.compression in (6, 7, 34892):  # JPEG
             decodeargs['jpegtables'] = self.jpegtables
 
         def decode(args, decodeargs=decodeargs, keyframe=keyframe, func=func):
@@ -6014,7 +6064,7 @@ class TiffPage:
     def asarray(self, out=None, squeeze=True, lock=None, maxworkers=None):
         """Read image data from file and return as numpy array.
 
-        Raise ValueError if format is unsupported.
+        Raise ValueError if format is not supported.
 
         Parameters
         ----------
@@ -6348,6 +6398,7 @@ class TiffPage:
             33003,
             33005,
             34712,
+            34892,
             34933,
             34934,
             50001,
@@ -7688,7 +7739,7 @@ class TiffPageSeries:
     offset : int or None
         Position of image data in file if memory-mappable, else None.
     levels : list of TiffPageSeries
-        Pyramid levels.
+        Pyramid levels. levels[0] is 'self'.
 
     """
 
@@ -7872,10 +7923,14 @@ class ZarrStore(MutableMapping):
 
     """
 
-    def __init__(self, fillvalue=None):
+    def __init__(self, fillvalue=None, chunkmode=None):
         """Initialize ZarrStore."""
         self._store = {}
         self._fillvalue = 0 if fillvalue is None else fillvalue
+        if chunkmode is None:
+            self._chunkmode = TIFF.CHUNKMODE(0)
+        else:
+            self._chunkmode = enumarg(TIFF.CHUNKMODE, chunkmode)
 
     def __enter__(self):
         return self
@@ -7997,20 +8052,31 @@ class ZarrTiffStore(ZarrStore):
     """Zarr storage interface to image data in TiffPage or TiffPageSeries."""
 
     def __init__(
-        self, arg, level=None, fillvalue=None, lock=None, _openfiles=None
+        self,
+        arg,
+        level=None,
+        chunkmode=None,
+        fillvalue=None,
+        lock=None,
+        _openfiles=None,
     ):
         """Initialize Zarr storage from TiffPage or TiffPageSeries."""
-        super().__init__(fillvalue=fillvalue)
+        super().__init__(fillvalue=fillvalue, chunkmode=chunkmode)
 
-        if lock is None:
-            lock = threading.RLock()
+        if self._chunkmode not in (0, 2):
+            raise NotImplementedError(f'{self._chunkmode!r} not implemented')
 
-        self._filecache = FileCache(size=_openfiles, lock=lock)
         self._transform = getattr(arg, 'transform', None)
         self._data = getattr(arg, 'levels', [TiffPageSeries([arg])])
         if level is not None:
             self._data = [self._data[level]]
 
+        if lock is None:
+            fh = self._data[0].keyframe.parent._master.filehandle
+            fh.lock = True
+            lock = fh.lock
+        self._filecache = FileCache(size=_openfiles, lock=lock)
+
         if len(self._data) > 1:
             # multiscales
             self._store['.zgroup'] = ZarrStore._json({'zarr_format': 2})
@@ -8033,7 +8099,10 @@ class ZarrTiffStore(ZarrStore):
             for level, series in enumerate(self._data):
                 shape = series.shape
                 dtype = series.dtype
-                chunks = series.keyframe.chunks
+                if self._chunkmode:
+                    chunks = series.keyframe.shape
+                else:
+                    chunks = series.keyframe.chunks
                 self._store[f'{level}/.zarray'] = ZarrStore._json(
                     {
                         'zarr_format': 2,
@@ -8050,7 +8119,10 @@ class ZarrTiffStore(ZarrStore):
             series = self._data[0]
             shape = series.shape
             dtype = series.dtype
-            chunks = series.keyframe.chunks
+            if self._chunkmode:
+                chunks = series.keyframe.shape
+            else:
+                chunks = series.keyframe.chunks
             self._store['.zattrs'] = ZarrStore._json({})
             self._store['.zarray'] = ZarrStore._json(
                 {
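For context, the `.zarray` documents written above follow the zarr v2 array-metadata layout; with chunkmode enabled, `chunks` becomes the full keyframe shape so that each TIFF page is served as one zarr chunk. A minimal sketch (the field values are illustrative, not the exact set tifffile emits):

```python
import json

def zarray_metadata(shape, dtype, chunks, fillvalue=0):
    """Minimal zarr v2 '.zarray' document for an uncompressed store."""
    return json.dumps({
        'zarr_format': 2,
        'shape': list(shape),
        'chunks': list(chunks),   # page shape when chunkmode is PAGE
        'dtype': dtype,
        'compressor': None,       # chunks are decoded before zarr sees them
        'fill_value': fillvalue,
        'order': 'C',
        'filters': None,
    })
```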
@@ -8073,10 +8145,24 @@ class ZarrTiffStore(ZarrStore):
         """Return chunk from file."""
         keyframe, page, chunkindex, offset, bytecount = self._parse_key(key)
 
+        if self._chunkmode:
+            chunks = keyframe.shape
+        else:
+            chunks = keyframe.chunks
+
         if page is None or offset == 0 or bytecount == 0:
-            return ZarrStore._empty_chunk(
-                keyframe.chunks, keyframe.dtype, self._fillvalue
+            chunk = ZarrStore._empty_chunk(
+                chunks, keyframe.dtype, self._fillvalue
             )
+            if self._transform is not None:
+                chunk = self._transform(chunk)
+            return chunk
+
+        if self._chunkmode and offset is None:
+            chunk = page.asarray(lock=self._filecache.lock)  # maxworkers=1 ?
+            if self._transform is not None:
+                chunk = self._transform(chunk)
+            return chunk
 
         chunk = self._filecache.read(page.parent.filehandle, offset, bytecount)
 
@@ -8088,7 +8174,7 @@ class ZarrTiffStore(ZarrStore):
         if self._transform is not None:
             chunk = self._transform(chunk)
 
-        if chunk.size != product(keyframe.chunks):
+        if chunk.size != product(chunks):
             raise RuntimeError
         return chunk  # .tobytes()
 
@@ -8101,7 +8187,7 @@ class ZarrTiffStore(ZarrStore):
         else:
             series = self._data[0]
         keyframe = series.keyframe
-        pageindex, chunkindex = ZarrTiffStore._indices(key, series)
+        pageindex, chunkindex = self._indices(key, series)
         if pageindex > 0 and len(series) == 1:
             # truncated ImageJ, STK, or shaped
             if series.offset is None:
@@ -8111,6 +8197,14 @@ class ZarrTiffStore(ZarrStore):
                 return keyframe, None, chunkindex, 0, 0
             offset = pageindex * page.size * page.dtype.itemsize
             offset += page.dataoffsets[chunkindex]
+            if self._chunkmode:
+                bytecount = page.size * page.dtype.itemsize
+                return keyframe, page, chunkindex, offset, bytecount
+        elif self._chunkmode:
+            page = series[pageindex]
+            if page is None:
+                return keyframe, None, None, 0, 0
+            return keyframe, page, None, None, None
         else:
             page = series[pageindex]
             if page is None:
@@ -8119,48 +8213,15 @@ class ZarrTiffStore(ZarrStore):
         bytecount = page.databytecounts[chunkindex]
         return keyframe, page, chunkindex, offset, bytecount
 
-    @staticmethod
-    def _chunks(chunks, shape):
-        """Return chunks with same length as shape."""
-        ndim = len(shape)
-        if ndim == 0:
-            return ()  # empty array
-        if 0 in shape:
-            return (1,) * ndim
-        newchunks = []
-        i = ndim - 1
-        j = len(chunks) - 1
-        while True:
-            if j < 0:
-                newchunks.append(1)
-                i -= 1
-            elif shape[i] > 1 and chunks[j] > 1:
-                newchunks.append(chunks[j])
-                i -= 1
-                j -= 1
-            elif shape[i] == chunks[j]:  # both 1
-                newchunks.append(1)
-                i -= 1
-                j -= 1
-            elif shape[i] == 1:
-                newchunks.append(1)
-                i -= 1
-            elif chunks[j] == 1:
-                newchunks.append(1)
-                j -= 1
-            else:
-                raise RuntimeError
-            if i < 0 or ndim == len(newchunks):
-                break
-        # assert ndim == len(newchunks)
-        return tuple(newchunks[::-1])
-
-    @staticmethod
-    def _indices(key, series):
+    def _indices(self, key, series):
         """Return page and strile indices from zarr chunk index."""
         keyframe = series.keyframe
         indices = [int(i) for i in key.split('.')]
         assert len(indices) == len(series.shape)
+        if self._chunkmode:
+            chunked = (1,) * len(keyframe.shape)
+        else:
+            chunked = keyframe.chunked
         p = 1
         for i, s in enumerate(series.shape[::-1]):
             p *= s
@@ -8174,14 +8235,14 @@ class ZarrTiffStore(ZarrStore):
         else:
             raise RuntimeError
         if len(strile_chunked) == len(keyframe.shape):
-            strile_chunked = keyframe.chunked
+            strile_chunked = chunked
         else:
             # get strile_chunked including singleton dimensions
             i = len(strile_indices) - 1
             j = len(keyframe.shape) - 1
             while True:
                 if strile_chunked[i] == keyframe.shape[j]:
-                    strile_chunked[i] = keyframe.chunked[j]
+                    strile_chunked[i] = chunked[j]
                     i -= 1
                     j -= 1
                 elif strile_chunked[i] == 1:
@@ -8190,7 +8251,7 @@ class ZarrTiffStore(ZarrStore):
                     raise RuntimeError('shape does not match page shape')
                 if i < 0 or j < 0:
                     break
-            assert product(strile_chunked) == product(keyframe.chunked)
+            assert product(strile_chunked) == product(chunked)
         if len(frames_indices) > 0:
             frameindex = int(
                 numpy.ravel_multi_index(frames_indices, frames_chunked)
@@ -8205,13 +8266,52 @@ class ZarrTiffStore(ZarrStore):
             strileindex = 0
         return frameindex, strileindex
 
+    @staticmethod
+    def _chunks(chunks, shape):
+        """Return chunks with same length as shape."""
+        ndim = len(shape)
+        if ndim == 0:
+            return ()  # empty array
+        if 0 in shape:
+            return (1,) * ndim
+        newchunks = []
+        i = ndim - 1
+        j = len(chunks) - 1
+        while True:
+            if j < 0:
+                newchunks.append(1)
+                i -= 1
+            elif shape[i] > 1 and chunks[j] > 1:
+                newchunks.append(chunks[j])
+                i -= 1
+                j -= 1
+            elif shape[i] == chunks[j]:  # both 1
+                newchunks.append(1)
+                i -= 1
+                j -= 1
+            elif shape[i] == 1:
+                newchunks.append(1)
+                i -= 1
+            elif chunks[j] == 1:
+                newchunks.append(1)
+                j -= 1
+            else:
+                raise RuntimeError
+            if i < 0 or ndim == len(newchunks):
+                break
+        # assert ndim == len(newchunks)
+        return tuple(newchunks[::-1])
+
 
 class ZarrFileStore(ZarrStore):
     """Zarr storage interface to image data in TiffSequence."""
 
-    def __init__(self, arg, fillvalue=None, **kwargs):
+    def __init__(self, arg, fillvalue=None, chunkmode=None, **kwargs):
         """Initialize Zarr storage from FileSequence."""
-        super().__init__(fillvalue=fillvalue)
+        super().__init__(fillvalue=fillvalue, chunkmode=chunkmode)
+
+        if self._chunkmode not in (0, 3):
+            raise NotImplementedError(f'{self._chunkmode!r} not implemented')
 
         if not isinstance(arg, FileSequence):
             raise TypeError('not a FileSequence')
@@ -8732,7 +8832,14 @@ class FileHandle:
         if result.nbytes != nbytes:
             raise ValueError('size mismatch')
 
-        n = self._fh.readinto(result)
+        try:
+            n = self._fh.readinto(result)
+        except AttributeError:
+            result[:] = numpy.frombuffer(self._fh.read(nbytes), dtype).reshape(
+                result.shape
+            )
+            n = nbytes
+
         if n != nbytes:
             raise ValueError(f'failed to read {nbytes} bytes')
 
@@ -9031,7 +9138,8 @@ class FileCache:
     def read(self, filehandle, offset, bytecount, whence=0):
         """Return bytes read from binary file."""
         with self.lock:
-            if filehandle not in self.files:
+            b = filehandle not in self.files
+            if b:
                 if filehandle.closed:
                     filehandle.open()
                     self.files[filehandle] = 0
@@ -9041,7 +9149,8 @@ class FileCache:
                 self.past.append(filehandle)
             filehandle.seek(offset, whence)
             data = filehandle.read(bytecount)
-            self._trim()
+            if b:
+                self._trim()
         return data
 
     def _trim(self):
@@ -10427,7 +10536,7 @@ class TIFF:
                 (50649, 'CR2Unknown2'),
                 (50656, 'CR2CFAPattern'),
                 (50674, 'LercParameters'),  # ESRI 50674 .. 50677
-                (50706, 'DNGVersion'),  # DNG 50706 .. 51112
+                (50706, 'DNGVersion'),  # DNG 50706 .. 51114
                 (50707, 'DNGBackwardVersion'),
                 (50708, 'UniqueCameraModel'),
                 (50709, 'LocalizedCameraModel'),
@@ -10531,10 +10640,18 @@ class TIFF:
                 (51110, 'DefaultBlackRender'),
                 (51111, 'NewRawImageDigest'),
                 (51112, 'RawToPreviewGain'),
-                (51125, 'DefaultUserCrop'),
+                (51113, 'CacheBlob'),
+                (51114, 'CacheVersion'),
                 (51123, 'MicroManagerMetadata'),
+                (51125, 'DefaultUserCrop'),
                 (51159, 'ZIFmetadata'),  # Objective Pathology Services
                 (51160, 'ZIFannotations'),  # Objective Pathology Services
+                (51177, 'DepthFormat'),
+                (51178, 'DepthNear'),
+                (51179, 'DepthFar'),
+                (51180, 'DepthUnits'),
+                (51181, 'DepthMeasureType'),
+                (51182, 'EnhanceParams'),
                 (59932, 'Padding'),
                 (59933, 'OffsetSchema'),
                 # Reusable Tags 65000-65535
@@ -10543,7 +10660,7 @@ class TIFF:
                 # (65000, 'OwnerName'),
                 # (65001, 'SerialNumber'),
                 # (65002, 'Lens'),
-                # (65024, 'KDC_IFD'),
+                # (65024, 'KodakKDCPrivateIFD'),
                 # (65100, 'RawFile'),
                 # (65101, 'Converter'),
                 # (65102, 'WhiteBalance'),
@@ -10825,6 +10942,8 @@ class TIFF:
             NONE = 1
             INCH = 2
             CENTIMETER = 3
+            MILLIMETER = 4  # DNG
+            MICROMETER = 5  # DNG
 
             def __bool__(self):
                 return self != 1
@@ -11049,6 +11168,34 @@ class TIFF:
                         codec = imagecodecs.delta_encode
                     elif key == 3:
                         codec = imagecodecs.floatpred_encode
+                    elif key == 34892:
+
+                        def codec(data, axis=-1, out=None):
+                            return imagecodecs.delta_encode(
+                                data, axis=axis, out=out, dist=2
+                            )
+
+                    elif key == 34893:
+
+                        def codec(data, axis=-1, out=None):
+                            return imagecodecs.delta_encode(
+                                data, axis=axis, out=out, dist=4
+                            )
+
+                    elif key == 34894:
+
+                        def codec(data, axis=-1, out=None):
+                            return imagecodecs.floatpred_encode(
+                                data, axis=axis, out=out, dist=2
+                            )
+
+                    elif key == 34895:
+
+                        def codec(data, axis=-1, out=None):
+                            return imagecodecs.floatpred_encode(
+                                data, axis=axis, out=out, dist=4
+                            )
+
                     else:
                         raise KeyError(f'{key} is not a valid PREDICTOR')
                 except AttributeError:
@@ -11078,6 +11225,34 @@ class TIFF:
                         codec = imagecodecs.delta_decode
                     elif key == 3:
                         codec = imagecodecs.floatpred_decode
+                    elif key == 34892:
+
+                        def codec(data, axis=-1, out=None):
+                            return imagecodecs.delta_decode(
+                                data, axis=axis, out=out, dist=2
+                            )
+
+                    elif key == 34893:
+
+                        def codec(data, axis=-1, out=None):
+                            return imagecodecs.delta_decode(
+                                data, axis=axis, out=out, dist=4
+                            )
+
+                    elif key == 34894:
+
+                        def codec(data, axis=-1, out=None):
+                            return imagecodecs.floatpred_decode(
+                                data, axis=axis, out=out, dist=2
+                            )
+
+                    elif key == 34895:
+
+                        def codec(data, axis=-1, out=None):
+                            return imagecodecs.floatpred_decode(
+                                data, axis=axis, out=out, dist=4
+                            )
+
                     else:
                         raise KeyError(f'{key} is not a valid PREDICTOR')
                 except AttributeError:
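The X2/X4 predictors registered above (PREDICTOR keys 34892/34893 for delta, 34894/34895 for float prediction) are plain differencing with a stride: each value is differenced against the one `dist` positions back. A pure-Python sketch of the idea on bytes (the real codecs operate on typed arrays via imagecodecs):

```python
def delta_encode(data, dist=1):
    """Difference each byte against the byte `dist` positions earlier."""
    return bytes(
        (data[i] - (data[i - dist] if i >= dist else 0)) & 0xFF
        for i in range(len(data))
    )

def delta_decode(data, dist=1):
    """Invert delta_encode by running sums with the same stride."""
    out = bytearray(data)
    for i in range(dist, len(out)):
        out[i] = (out[i] + out[i - dist]) & 0xFF
    return bytes(out)
```

With `dist=2` this is X2 differencing, with `dist=4` X4, which is why the closures above simply bind `dist` onto the generic delta codecs.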
@@ -11115,16 +11290,23 @@ class TIFF:
                             and imagecodecs.DEFLATE
                         ):
                             codec = imagecodecs.deflate_encode
-                        else:
+                        elif imagecodecs.ZLIB:
                             codec = imagecodecs.zlib_encode
+                        else:
+                            codec = zlib_encode
                     elif key == 32773:
                         codec = imagecodecs.packbits_encode
                     elif key == 33003 or key == 33005 or key == 34712:
                         codec = imagecodecs.jpeg2k_encode
                     elif key == 34887:
                         codec = imagecodecs.lerc_encode
+                    elif key == 34892:
+                        codec = imagecodecs.jpeg8_encode  # DNG lossy
                     elif key == 34925:
-                        codec = imagecodecs.lzma_encode
+                        if imagecodecs.LZMA:
+                            codec = imagecodecs.lzma_encode
+                        else:
+                            codec = lzma_encode
                     elif key == 34933:
                         codec = imagecodecs.png_encode
                     elif key == 34934:
@@ -11175,18 +11357,23 @@ class TIFF:
                             and imagecodecs.DEFLATE
                         ):
                             codec = imagecodecs.deflate_decode
-                        else:
+                        elif imagecodecs.ZLIB:
                             codec = imagecodecs.zlib_decode
+                        else:
+                            codec = zlib_decode
                     elif key == 32773:
                         codec = imagecodecs.packbits_decode
-                    # elif key == 34892:
-                    #     codec = imagecodecs.jpeg_decode  # DNG lossy
                     elif key == 33003 or key == 33005 or key == 34712:
                         codec = imagecodecs.jpeg2k_decode
                     elif key == 34887:
                         codec = imagecodecs.lerc_decode
+                    elif key == 34892:
+                        codec = imagecodecs.jpeg8_decode  # DNG lossy
                     elif key == 34925:
-                        codec = imagecodecs.lzma_decode
+                        if imagecodecs.LZMA:
+                            codec = imagecodecs.lzma_decode
+                        else:
+                            codec = lzma_decode
                     elif key == 34933:
                         codec = imagecodecs.png_decode
                     elif key == 34934:
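The ZLIB and LZMA branches above now fall back to pure-Python codecs when an imagecodecs "lite" build lacks the native ones, tested via the `imagecodecs.ZLIB` / `imagecodecs.LZMA` flags. A sketch of that selection pattern with a hypothetical `pick_zlib_decoder` helper (not tifffile API):

```python
import zlib

def pick_zlib_decoder(imagecodecs=None):
    """Prefer the native imagecodecs codec; otherwise use stdlib zlib."""
    if imagecodecs is not None and getattr(imagecodecs, 'ZLIB', False):
        return imagecodecs.zlib_decode
    def zlib_decode(data, out=None):
        # stdlib fallback; ignores the out= argument the native codec accepts
        return zlib.decompress(data)
    return zlib_decode

decode = pick_zlib_decoder(None)
payload = b'tifffile' * 64
assert decode(zlib.compress(payload, 6)) == payload
```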
@@ -12264,6 +12451,15 @@ class TIFF:
 
         return max(multiprocessing.cpu_count() // 2, 1)
 
+    def CHUNKMODE():
+        class CHUNKMODE(enum.IntEnum):
+            NONE = 0
+            PLANE = 1
+            PAGE = 2
+            FILE = 3
+
+        return CHUNKMODE
+
 
 def read_tags(
     fh, byteorder, offsetsize, tagnames, customtags=None, maxifds=None
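The new CHUNKMODE enum above selects how zarr chunks map onto TIFF structures: NONE serves one chunk per strip or tile, PAGE one chunk per page/IFD (ZarrTiffStore), FILE one chunk per file (ZarrFileStore). A sketch of how the `chunkmode` argument might be coerced, mirroring what `enumarg` does (the coercion helper here is hypothetical):

```python
import enum

class CHUNKMODE(enum.IntEnum):
    NONE = 0   # one zarr chunk per TIFF strip or tile (default)
    PLANE = 1  # reserved
    PAGE = 2   # one zarr chunk per TIFF page/IFD
    FILE = 3   # one zarr chunk per file in a sequence

def coerce_chunkmode(value):
    """Accept None, an int, an enum member, or a case-insensitive name."""
    if value is None:
        return CHUNKMODE(0)
    if isinstance(value, str):
        return CHUNKMODE[value.upper()]
    return CHUNKMODE(value)
```

Because NONE is 0, the stores can simply test truthiness (`if self._chunkmode:`) to switch between strile-sized and page- or file-sized chunks.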
@@ -12448,6 +12644,7 @@ def read_json(fh, byteorder, dtype, count, offsetsize):
         return json.loads(stripnull(data).decode())
     except ValueError:
         log_warning('read_json: invalid JSON')
+    return None
 
 
 def read_mm_header(fh, byteorder, dtype, count, offsetsize):
@@ -13863,28 +14060,40 @@ def float24_decode(data, byteorder):
     raise NotImplementedError('float24_decode')
 
 
-if imagecodecs is None:
-    import lzma
+def zlib_encode(data, level=None, out=None):
+    """Compress Zlib DEFLATE."""
     import zlib
 
-    def zlib_encode(data, level=6, out=None):
-        """Compress Zlib DEFLATE."""
-        return zlib.compress(data, level)
+    return zlib.compress(data, 6 if level is None else level)
 
-    def zlib_decode(data, out=None):
-        """Decompress Zlib DEFLATE."""
-        return zlib.decompress(data)
 
-    def lzma_encode(data, level=None, out=None):
-        """Compress LZMA."""
-        return lzma.compress(data)
+def zlib_decode(data, out=None):
+    """Decompress Zlib DEFLATE."""
+    import zlib
+
+    return zlib.decompress(data)
+
+
+def lzma_encode(data, level=None, out=None):
+    """Compress LZMA."""
+    import lzma
+
+    return lzma.compress(data)
 
-    def lzma_decode(data, out=None):
-        """Decompress LZMA."""
-        return lzma.decompress(data)
 
-    def delta_encode(data, axis=-1, out=None):
+def lzma_decode(data, out=None):
+    """Decompress LZMA."""
+    import lzma
+
+    return lzma.decompress(data)
+
+
+if imagecodecs is None:
+
+    def delta_encode(data, axis=-1, dist=1, out=None):
         """Encode Delta."""
+        if dist != 1:
+            raise NotImplementedError(f'dist {dist} not implemented')
         if isinstance(data, (bytes, bytearray)):
             data = numpy.frombuffer(data, dtype='u1')
             diff = numpy.diff(data, axis=0)
@@ -13903,8 +14112,10 @@ if imagecodecs is None:
             return diff.view(dtype)
         return diff
 
-    def delta_decode(data, axis=-1, out=None):
+    def delta_decode(data, axis=-1, dist=1, out=None):
         """Decode Delta."""
+        if dist != 1:
+            raise NotImplementedError(f'dist {dist} not implemented')
         if out is not None and not out.flags.writeable:
             out = None
         if isinstance(data, (bytes, bytearray)):



View it on GitLab: https://salsa.debian.org/python-team/packages/tifffile/-/compare/a1d28d52f8c492fd747aa158cc84e23080a67b9d...0ba374534ceccfac425531ddb36b7893533502bc
