[med-svn] [Git][python-team/packages/tifffile][upstream] New upstream version 20201208
Ole Streicher
gitlab at salsa.debian.org
Fri Dec 11 08:50:11 GMT 2020
Ole Streicher pushed to branch upstream at Debian Python Team / packages / tifffile
Commits:
11e8b78e by Ole Streicher at 2020-12-11T09:44:47+01:00
New upstream version 20201208
- - - - -
7 changed files:
- CHANGES.rst
- PKG-INFO
- README.rst
- tests/test_tifffile.py
- tifffile.egg-info/PKG-INFO
- tifffile/tifffile.py
- tifffile/tifffile_geodb.py
Changes:
=====================================
CHANGES.rst
=====================================
@@ -1,7 +1,16 @@
Revisions
---------
+2020.12.8
+ Pass 4376 tests.
+ Fix corrupted ImageDescription in multi shaped series if buffer too small.
+ Fix libtiff warning that ImageDescription contains null byte in value.
+ Fix reading invalid files using JPEG compression with palette colorspace.
+2020.12.4
+ Fix reading some JPEG compressed CFA images.
+ Make index of SubIFDs a tuple.
+ Pass through FileSequence.imread arguments in imread.
+ Do not apply regex flags to FileSequence axes patterns (breaking).
2020.11.26
- Pass 4372 tests.
Add option to pass axes metadata to ImageJ writer.
Pad incomplete tiles passed to TiffWriter.write (#38).
Split TiffTag constructor (breaking).
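The entry "Do not apply regex flags to FileSequence axes patterns (breaking)"
means that string patterns are no longer compiled with re.IGNORECASE |
re.VERBOSE behind the scenes; flags now have to be given inline (the built-in
'axes' pattern gains a '(?ix)' prefix further below) or by passing a
pre-compiled pattern. A minimal sketch of the new usage, assuming hypothetical
files named like 'image_C0_Z00.tif':

    import re
    import tifffile

    # Flags must be explicit now; groups follow the (axis letter)(index)
    # pairing that FileSequence expects.
    pattern = re.compile(r'_(C)(\d+)_(Z)(\d+)', re.IGNORECASE)

    with tifffile.TiffSequence('image_*.tif', pattern=pattern) as seq:
        print(seq.axes, seq.shape)   # e.g. 'CZ', (2, 5)
        data = seq.asarray()         # stack the files into one array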
=====================================
PKG-INFO
=====================================
@@ -1,6 +1,6 @@
Metadata-Version: 2.1
Name: tifffile
-Version: 2020.11.26
+Version: 2020.12.8
Summary: Read and write TIFF(r) files
Home-page: https://www.lfd.uci.edu/~gohlke/
Author: Christoph Gohlke
@@ -51,28 +51,37 @@ Description: Read and write TIFF(r) files
:License: BSD 3-Clause
- :Version: 2020.11.26
+ :Version: 2020.12.8
Requirements
------------
This release has been tested with the following requirements and dependencies
(other versions may work):
- * `CPython 3.7.9, 3.8.6, 3.9.0 64-bit <https://www.python.org>`_
+ * `CPython 3.7.9, 3.8.6, 3.9.1 64-bit <https://www.python.org>`_
* `Numpy 1.19.4 <https://pypi.org/project/numpy/>`_
* `Imagecodecs 2020.5.30 <https://pypi.org/project/imagecodecs/>`_
(required only for encoding or decoding LZW, JPEG, etc.)
* `Matplotlib 3.3.3 <https://pypi.org/project/matplotlib/>`_
(required only for plotting)
- * `Lxml 4.6.1 <https://pypi.org/project/lxml/>`_
+ * `Lxml 4.6.2 <https://pypi.org/project/lxml/>`_
(required only for validating and printing XML)
- * `Zarr 2.5.0 <https://pypi.org/project/zarr/>`_
+ * `Zarr 2.6.1 <https://pypi.org/project/zarr/>`_
(required only for opening zarr storage)
Revisions
---------
+ 2020.12.8
+ Pass 4376 tests.
+ Fix corrupted ImageDescription in multi shaped series if buffer too small.
+ Fix libtiff warning that ImageDescription contains null byte in value.
+ Fix reading invalid files using JPEG compression with palette colorspace.
+ 2020.12.4
+ Fix reading some JPEG compressed CFA images.
+ Make index of SubIFDs a tuple.
+ Pass through FileSequence.imread arguments in imread.
+ Do not apply regex flags to FileSequence axes patterns (breaking).
2020.11.26
- Pass 4372 tests.
Add option to pass axes metadata to ImageJ writer.
Pad incomplete tiles passed to TiffWriter.write (#38).
Split TiffTag constructor (breaking).
@@ -298,6 +307,7 @@ Description: Read and write TIFF(r) files
Other tools for inspecting and manipulating TIFF files:
* `tifftools <https://github.com/DigitalSlideArchive/tifftools>`_
+ * `Tyf <https://github.com/Moustikitos/tyf>`_
References
----------
=====================================
README.rst
=====================================
@@ -41,28 +41,37 @@ For command line usage run ``python -m tifffile --help``
:License: BSD 3-Clause
-:Version: 2020.11.26
+:Version: 2020.12.8
Requirements
------------
This release has been tested with the following requirements and dependencies
(other versions may work):
-* `CPython 3.7.9, 3.8.6, 3.9.0 64-bit <https://www.python.org>`_
+* `CPython 3.7.9, 3.8.6, 3.9.1 64-bit <https://www.python.org>`_
* `Numpy 1.19.4 <https://pypi.org/project/numpy/>`_
* `Imagecodecs 2020.5.30 <https://pypi.org/project/imagecodecs/>`_
(required only for encoding or decoding LZW, JPEG, etc.)
* `Matplotlib 3.3.3 <https://pypi.org/project/matplotlib/>`_
(required only for plotting)
-* `Lxml 4.6.1 <https://pypi.org/project/lxml/>`_
+* `Lxml 4.6.2 <https://pypi.org/project/lxml/>`_
(required only for validating and printing XML)
-* `Zarr 2.5.0 <https://pypi.org/project/zarr/>`_
+* `Zarr 2.6.1 <https://pypi.org/project/zarr/>`_
(required only for opening zarr storage)
Revisions
---------
+2020.12.8
+ Pass 4376 tests.
+ Fix corrupted ImageDescription in multi shaped series if buffer too small.
+ Fix libtiff warning that ImageDescription contains null byte in value.
+ Fix reading invalid files using JPEG compression with palette colorspace.
+2020.12.4
+ Fix reading some JPEG compressed CFA images.
+ Make index of SubIFDs a tuple.
+ Pass through FileSequence.imread arguments in imread.
+ Do not apply regex flags to FileSequence axes patterns (breaking).
2020.11.26
- Pass 4372 tests.
Add option to pass axes metadata to ImageJ writer.
Pad incomplete tiles passed to TiffWriter.write (#38).
Split TiffTag constructor (breaking).
@@ -288,6 +297,7 @@ Some libraries are using tifffile to write OME-TIFF files:
Other tools for inspecting and manipulating TIFF files:
* `tifftools <https://github.com/DigitalSlideArchive/tifftools>`_
+* `Tyf <https://github.com/Moustikitos/tyf>`_
References
----------
=====================================
tests/test_tifffile.py
=====================================
@@ -42,7 +42,7 @@ Private data files are not available due to size and copyright restrictions.
:License: BSD 3-Clause
-:Version: 2020.11.26
+:Version: 2020.12.8
"""
@@ -54,6 +54,7 @@ import math
import os
import pathlib
import random
+import re
import struct
import sys
import tempfile
@@ -173,17 +174,19 @@ from tifffile.tifffile import (
)
# skip certain tests
-SKIP_HUGE = False
+SKIP_LARGE = False # skip tests requiring large memory
SKIP_EXTENDED = False
SKIP_PUBLIC = False # skip public files
SKIP_PRIVATE = False # skip private files
SKIP_VALIDATE = True # skip validate written files with jhove
SKIP_CODECS = False
SKIP_ZARR = False
-SKIP_32BIT = sys.maxsize < 2 ** 32
SKIP_BE = sys.byteorder == 'big'
REASON = 'just skip it'
+if sys.maxsize < 2 ** 32:
+ SKIP_LARGE = True
+
MINISBLACK = TIFF.PHOTOMETRIC.MINISBLACK
MINISWHITE = TIFF.PHOTOMETRIC.MINISWHITE
RGB = TIFF.PHOTOMETRIC.RGB
@@ -250,22 +253,22 @@ def config():
)
-def data_file(pathname, base):
+def data_file(pathname, base, expand=True):
"""Return path to test file(s)."""
path = os.path.join(base, *pathname.split('/'))
- if any(i in path for i in '*?'):
+ if expand and any(i in path for i in '*?'):
return glob.glob(path)
return path
-def private_file(pathname, base=PRIVATE_DIR):
+def private_file(pathname, base=PRIVATE_DIR, expand=True):
"""Return path to private test file(s)."""
- return data_file(pathname, base)
+ return data_file(pathname, base, expand=expand)
-def public_file(pathname, base=PUBLIC_DIR):
+def public_file(pathname, base=PUBLIC_DIR, expand=True):
"""Return path to public test file(s)."""
- return data_file(pathname, base)
+ return data_file(pathname, base, expand=expand)
def random_data(dtype, shape):
@@ -398,7 +401,7 @@ def test_issue_imread_kwargs():
for image in data:
tif.write(image) # create 5 series
assert_valid_tiff(fname)
- image = imread(fname) # reads first series
+ image = imread(fname, pattern=None) # reads first series
assert_array_equal(image, data[0])
image = imread(fname, is_shaped=False) # reads all pages
assert_array_equal(image, data)
@@ -463,6 +466,23 @@ def test_issue_jpeg_ia():
assert__str__(tif)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
+def test_issue_jpeg_palette():
+ """Test invalid JPEG compressed intensity image with palette."""
+ # https://forum.image.sc/t/viv-and-avivator/45999/24
+ fname = private_file('issues/FL_cells.ome.tif')
+ with TiffFile(fname) as tif:
+ page = tif.pages[0]
+ assert page.compression == JPEG
+ assert page.colormap is not None
+ data = tif.asarray()
+ assert data.shape == (4, 1024, 1024)
+ assert data.dtype == 'uint8'
+ assert data[2, 512, 512] == 10
+ assert_aszarr_method(tif, data)
+ assert__str__(tif)
+
+
def test_issue_specific_pages():
"""Test read second page."""
data = random_data('uint8', (3, 21, 31))
@@ -710,13 +730,21 @@ def test_issue_imagej_grascalemode():
assert__str__(tif)
-def test_issue_valueoffset():
+@pytest.mark.parametrize('byteorder', ['>', '<'])
+def test_issue_valueoffset(byteorder):
"""Test read TiffTag.valueoffsets."""
unpack = struct.unpack
- data = random_data('uint16', (2, 19, 31))
+ data = random_data(byteorder + 'u2', (2, 19, 31))
software = 'test_tifffile'
- with TempFileName('valueoffset') as fname:
- imwrite(fname, data, software=software, photometric='minisblack')
+ bo = {'>': 'be', '<': 'le'}[byteorder]
+ with TempFileName(f'valueoffset_{bo}') as fname:
+ imwrite(
+ fname,
+ data,
+ software=software,
+ photometric='minisblack',
+ extratags=[(65535, 3, 2, (21, 22), True)],
+ )
with TiffFile(fname, _useframes=True) as tif:
with open(fname, 'rb') as fh:
page = tif.pages[0]
@@ -724,8 +752,11 @@ def test_issue_valueoffset():
fh.seek(page.tags['ImageLength'].valueoffset)
assert (
page.imagelength
- == unpack(tif.byteorder + 'H', fh.read(2))[0]
+ == unpack(tif.byteorder + 'I', fh.read(4))[0]
)
+ # two inline values
+ fh.seek(page.tags[65535].valueoffset)
+ assert unpack(tif.byteorder + 'H', fh.read(2))[0] == 21
# separate value
fh.seek(page.tags['Software'].valueoffset)
assert page.software == bytes2str(fh.read(13))
@@ -2495,7 +2526,7 @@ def test_func_pformat_xml():
)
-@pytest.mark.skipif(SKIP_PRIVATE or SKIP_32BIT or SKIP_HUGE, reason=REASON)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_func_lsm2bin():
"""Test lsm2bin function."""
# Convert LSM to BIN
@@ -3968,7 +3999,7 @@ def test_read_lzw_12bit_table():
assert__str__(tif)
-@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS or SKIP_32BIT, reason=REASON)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS or SKIP_LARGE, reason=REASON)
def test_read_lzw_large_buffer():
"""Test read LZW compression which requires large buffer."""
# https://github.com/groupdocs-viewer/GroupDocs.Viewer-for-.NET-MVC-App
@@ -4116,9 +4147,7 @@ def test_read_jpeg12_mandril():
assert__str__(tif)
-@pytest.mark.skipif(
- SKIP_PRIVATE or SKIP_CODECS or SKIP_HUGE or SKIP_32BIT, reason=REASON
-)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS or SKIP_LARGE, reason=REASON)
def test_read_jpeg_lsb2msb():
"""Test read huge tiled, JPEG compressed, with lsb2msb specified.
@@ -4313,6 +4342,33 @@ def test_read_zstd():
assert__str__(tif)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
+def test_read_dng():
+ """Test read JPEG compressed CFA image in SubIFD."""
+ fname = private_file('DNG/IMG_0793.DNG')
+ with TiffFile(fname) as tif:
+ assert len(tif.pages) == 1
+ assert len(tif.series) == 2
+ page = tif.pages[0]
+ assert page.index == 0
+ assert page.shape == (640, 852, 3)
+ assert page.bitspersample == 8
+ data = page.asarray()
+ assert_aszarr_method(tif, data)
+ page = tif.pages[0].pages[0]
+ assert page.is_tiled
+ assert page.index == (0, 0)
+ assert page.compression == JPEG
+ assert page.photometric == CFA
+ assert page.shape == (3024, 4032)
+ assert page.bitspersample == 16
+ assert page.tags['CFARepeatPatternDim'].value == (2, 2)
+ assert page.tags['CFAPattern'].value == b'\x00\x01\x01\x02'
+ data = page.asarray()
+ assert_aszarr_method(tif.series[1], data)
+ assert__str__(tif)
+
+
@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_read_cfa():
"""Test read 14-bit uncompressed and JPEG compressed CFA image."""
@@ -4519,9 +4575,7 @@ def test_read_lena_be_rgb48():
assert__str__(tif)
-@pytest.mark.skipif(
- SKIP_PRIVATE or SKIP_EXTENDED or SKIP_32BIT or SKIP_HUGE, reason=REASON
-)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_EXTENDED or SKIP_LARGE, reason=REASON)
def test_read_huge_ps5_memmap():
"""Test read 30000x30000 float32 contiguous."""
fname = private_file('large/huge_ps5.tif')
@@ -4556,7 +4610,7 @@ def test_read_huge_ps5_memmap():
assert__str__(tif)
-@pytest.mark.skipif(SKIP_PUBLIC or SKIP_EXTENDED or SKIP_HUGE, reason=REASON)
+@pytest.mark.skipif(SKIP_PUBLIC or SKIP_EXTENDED or SKIP_LARGE, reason=REASON)
def test_read_movie():
"""Test read 30000 pages, uint16."""
fname = public_file('tifffile/movie.tif')
@@ -4600,7 +4654,7 @@ def test_read_movie():
assert__str__(tif, 0)
-@pytest.mark.skipif(SKIP_PUBLIC or SKIP_EXTENDED, reason=REASON)
+@pytest.mark.skipif(SKIP_PUBLIC or SKIP_EXTENDED or SKIP_LARGE, reason=REASON)
def test_read_movie_memmap():
"""Test read 30000 pages memory-mapped."""
fname = public_file('tifffile/movie.tif')
@@ -4617,9 +4671,7 @@ def test_read_movie_memmap():
assert__str__(tif, 0)
-@pytest.mark.skipif(
- SKIP_PUBLIC or SKIP_EXTENDED or SKIP_32BIT or SKIP_HUGE, reason=REASON
-)
+@pytest.mark.skipif(SKIP_PUBLIC or SKIP_EXTENDED or SKIP_LARGE, reason=REASON)
def test_read_100000_pages_movie():
"""Test read 100000x64x64 big endian in memory."""
fname = public_file('tifffile/100000_pages.tif')
@@ -4658,7 +4710,7 @@ def test_read_100000_pages_movie():
assert__str__(tif, 0)
-@pytest.mark.skipif(SKIP_PUBLIC or SKIP_EXTENDED, reason=REASON)
+@pytest.mark.skipif(SKIP_PUBLIC or SKIP_LARGE, reason=REASON)
def test_read_chart_bl():
"""Test read 13228x18710, 1 bit, no bitspersample tag."""
fname = public_file('tifffile/chart_bl.tif')
@@ -4687,13 +4739,12 @@ def test_read_chart_bl():
assert data.dtype.name == 'bool'
assert data[0, 0] is numpy.bool_(True)
assert data[5000, 5000] is numpy.bool_(False)
- assert_aszarr_method(tif, data)
+ if not SKIP_LARGE:
+ assert_aszarr_method(tif, data)
assert__str__(tif)
-@pytest.mark.skipif(
- SKIP_PRIVATE or SKIP_EXTENDED or SKIP_32BIT or SKIP_HUGE, reason=REASON
-)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_EXTENDED or SKIP_LARGE, reason=REASON)
def test_read_srtm_20_13():
"""Test read 6000x6000 int16 GDAL."""
fname = private_file('large/srtm_20_13.tif')
@@ -4731,7 +4782,7 @@ def test_read_srtm_20_13():
@pytest.mark.skipif(
- SKIP_PRIVATE or SKIP_CODECS or SKIP_EXTENDED, reason=REASON
+ SKIP_PRIVATE or SKIP_CODECS or SKIP_EXTENDED or SKIP_LARGE, reason=REASON
)
def test_read_gel_scan():
"""Test read 6976x4992x3 uint8 LZW."""
@@ -4946,7 +4997,7 @@ def test_read_tiles():
assert_array_equal(tile, next(segments)[0])
-@pytest.mark.skipif(SKIP_PRIVATE or SKIP_32BIT, reason=REASON)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_read_lsm_mosaic():
"""Test read LSM: PTZCYX (Mosaic mode), two areas, 32 samples, >4 GB."""
# LSM files are little endian with two series, one of which is reduced RGB
@@ -4996,7 +5047,7 @@ def test_read_lsm_mosaic():
assert__str__(tif, 0)
-@pytest.mark.skipif(SKIP_PRIVATE or SKIP_HUGE, reason=REASON)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_read_lsm_carpet():
"""Test read LSM: ZCTYX (time series x-y), 72000 pages."""
# reads very slowly, ensure colormap is not applied
@@ -5265,9 +5316,7 @@ def test_read_lsm_earpax2isl11():
assert__str__(tif)
-@pytest.mark.skipif(
- SKIP_PUBLIC or SKIP_CODECS or SKIP_HUGE or SKIP_32BIT, reason=REASON
-)
+@pytest.mark.skipif(SKIP_PUBLIC or SKIP_CODECS or SKIP_LARGE, reason=REASON)
def test_read_lsm_mb231paxgfp_060214():
"""Test read LSM with many LZW compressed pages."""
# TZCYX (Stack mode), (60, 31, 2, 512, 512), 3720
@@ -5603,7 +5652,7 @@ def test_read_stk_10xcalib():
assert__str__(tif)
-@pytest.mark.skipif(SKIP_PRIVATE or SKIP_32BIT or SKIP_EXTENDED, reason=REASON)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_read_stk_112508h100():
"""Test read MetaMorph STK large time-series."""
fname = private_file('stk/112508h100.stk')
@@ -5677,7 +5726,7 @@ def test_read_ndpi_cmu1():
@pytest.mark.skipif(
- SKIP_PRIVATE or SKIP_CODECS or SKIP_32BIT or SKIP_EXTENDED, reason=REASON
+ SKIP_PRIVATE or SKIP_CODECS or SKIP_LARGE or SKIP_EXTENDED, reason=REASON
)
def test_read_ndpi_cmu2():
"""Test read Hamamatsu NDPI slide, JPEG."""
@@ -5831,7 +5880,7 @@ def test_read_svs_jp2k_33003_1():
assert__str__(tif)
-@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS or SKIP_HUGE, reason=REASON)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS or SKIP_LARGE, reason=REASON)
def test_read_scn_collection():
"""Test read Leica SCN slide, JPEG."""
# collection of 43 CZYX images
@@ -5902,7 +5951,7 @@ def test_read_scanimage_no_framedata():
assert__str__(tif)
-@pytest.mark.skipif(SKIP_PRIVATE or SKIP_HUGE or SKIP_32BIT, reason=REASON)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_read_scanimage_2gb():
"""Test read ScanImage non-BigTIFF > 2 GB.
@@ -6521,7 +6570,7 @@ def test_read_ome_zen_2chzt():
assert__str__(tif, 0)
-@pytest.mark.skipif(SKIP_PUBLIC or SKIP_32BIT, reason=REASON)
+@pytest.mark.skipif(SKIP_PUBLIC or SKIP_LARGE, reason=REASON)
def test_read_ome_multifile():
"""Test read OME CTZYX series in 86 files."""
# (2, 43, 10, 512, 512) CTZYX uint8 in 86 files, 10 pages each
@@ -6575,7 +6624,7 @@ def test_read_ome_multifile():
# self.assertTrue(page.parent.filehandle._fh)
-@pytest.mark.skipif(SKIP_PRIVATE or SKIP_32BIT, reason=REASON)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_read_ome_multifile_missing(caplog):
"""Test read OME referencing missing files."""
# (2, 43, 10, 512, 512) CTZYX uint8, 85 files missing
@@ -6755,7 +6804,7 @@ def test_read_ome_cropped(caplog):
assert__str__(tif)
-@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS or SKIP_LARGE, reason=REASON)
def test_read_ome_corrupted_page(caplog):
"""Test read OME with corrupted but not referenced page."""
# https://forum.image.sc/t/qupath-0-2-0-not-able-to-open-ome-tiff/23821/3
@@ -7388,7 +7437,7 @@ def test_read_imagej_fluorescentcells():
assert__str__(tif)
-@pytest.mark.skipif(SKIP_PUBLIC or SKIP_32BIT or SKIP_EXTENDED, reason=REASON)
+@pytest.mark.skipif(SKIP_PUBLIC or SKIP_LARGE or SKIP_EXTENDED, reason=REASON)
def test_read_imagej_100000_pages():
"""Test read ImageJ with 100000 pages."""
# 100000x64x64
@@ -7541,7 +7590,7 @@ def test_read_fluoview_lsp1_v_laser():
assert__str__(tif)
-@pytest.mark.skipif(SKIP_PRIVATE or SKIP_HUGE or SKIP_32BIT, reason=REASON)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_read_fluoview_120816_bf_f0000():
"""Test read FluoView TZYX."""
fname = private_file('fluoview/120816_bf_f0000.tif')
@@ -7644,7 +7693,8 @@ def test_read_metaseries_g4d7r():
assert data.shape == (12113, 13453)
assert data.dtype.name == 'uint16'
assert round(abs(data[512, 2856] - 4095), 7) == 0
- assert_aszarr_method(series, data)
+ if not SKIP_LARGE:
+ assert_aszarr_method(series, data)
assert__str__(tif)
@@ -10984,10 +11034,10 @@ def test_write_volumetric_striped_contig_rgb_empty():
assert__str__(tif)
-def test_write_multiple_save():
- """Test append pages."""
+def test_write_contiguous():
+ """Test contiguous mode."""
data = random_data('uint8', (5, 4, 219, 301, 3))
- with TempFileName('append') as fname:
+ with TempFileName('write_contiguous') as fname:
with TiffWriter(fname, bigtiff=True) as tif:
for i in range(data.shape[0]):
tif.write(data[i], contiguous=True)
@@ -10995,6 +11045,8 @@ def test_write_multiple_save():
with TiffFile(fname) as tif:
assert tif.is_bigtiff
assert len(tif.pages) == 20
+ # check metadata is updated in-place
+ assert tif.pages[0].tags[270].valueoffset < tif.pages[1].offset
for page in tif.pages:
assert page.is_contiguous
assert page.planarconfig == CONTIG
@@ -11008,7 +11060,7 @@ def test_write_multiple_save():
assert__str__(tif)
-@pytest.mark.skipif(SKIP_32BIT or SKIP_HUGE, reason=REASON)
+@pytest.mark.skipif(SKIP_LARGE, reason=REASON)
def test_write_3gb():
"""Test write 3 GB no-BigTiff file."""
# https://github.com/blink1073/tifffile/issues/47
@@ -11021,7 +11073,7 @@ def test_write_3gb():
assert not tif.is_bigtiff
-@pytest.mark.skipif(SKIP_32BIT or SKIP_HUGE, reason=REASON)
+@pytest.mark.skipif(SKIP_LARGE, reason=REASON)
def test_write_bigtiff():
"""Test write 5GB BigTiff file."""
data = numpy.empty((640, 1024, 1024), dtype='float64')
@@ -11514,7 +11566,7 @@ def test_write_imagej_append():
assert__str__(tif)
-@pytest.mark.skipif(SKIP_32BIT or SKIP_HUGE, reason=REASON)
+@pytest.mark.skipif(SKIP_LARGE, reason=REASON)
def test_write_imagej_raw():
"""Test write ImageJ 5 GB raw file."""
data = numpy.empty((1280, 1, 1024, 1024), dtype='float32')
@@ -12138,9 +12190,7 @@ def test_sequence_zip_container():
assert_array_equal(data, imread(container=fname))
-@pytest.mark.skipif(
- SKIP_PRIVATE or SKIP_HUGE or SKIP_CODECS or SKIP_32BIT, reason=REASON
-)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE or SKIP_CODECS, reason=REASON)
def test_sequence_wells_axesorder():
"""Test FileSequence with well plates and axes reorder."""
ptrn = r'(?:_(z)_(\d+)).*_(?P<p>[a-z])(?P<a>\d+)(?:_(s)(\d))(?:_(w)(\d))'
@@ -12161,6 +12211,31 @@ def test_sequence_wells_axesorder():
assert_array_equal(data, zarr.open(store, mode='r'))
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
+def test_sequence_tiled():
+ """Test FileSequence with tiled OME-TIFFs."""
+ # Dataset from https://github.com/tlambert03/tifffolder/issues/2
+ ptrn = re.compile(
+ r'\[(?P<U>\d+) x (?P<V>\d+)\].*(C)(\d+).*(Z)(\d+)', re.IGNORECASE
+ )
+ fnames = private_file('TiffSequenceTiled/*.tif', expand=False)
+ tifs = TiffSequence(fnames, pattern=ptrn)
+ assert len(tifs) == 60
+ assert tifs.shape == (2, 3, 2, 5)
+ assert tifs.axes == 'UVCZ'
+ data = tifs.asarray(is_ome=False)
+ assert isinstance(data, numpy.ndarray)
+ assert data.flags['C_CONTIGUOUS']
+ assert data.shape == (2, 3, 2, 5, 2560, 2160)
+ assert data.dtype == 'uint16'
+ assert data[1, 2, 1, 3, 1024, 1024] == 596
+ if not SKIP_ZARR:
+ with tifs.aszarr(is_ome=False) as store:
+ assert_array_equal(
+ data[1, 2, 1, 3:5], zarr.open(store, mode='r')[1, 2, 1, 3:5]
+ )
+
+
@pytest.mark.skipif(SKIP_PRIVATE or SKIP_CODECS, reason=REASON)
def test_sequence_imread():
"""Test TiffSequence with imagecodecs.imread."""
@@ -12249,7 +12324,7 @@ def test_depend_czifile():
assert data[0, 0, 52, 182, 182, 0] == 10
-@pytest.mark.skipif(SKIP_PRIVATE or SKIP_32BIT or SKIP_HUGE, reason=REASON)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_depend_czi2tif():
"""Test czifile.czi2tif."""
from czifile.czifile import CziFile, czi2tif
@@ -12270,7 +12345,7 @@ def test_depend_czi2tif():
assert_valid_tiff(tif)
-@pytest.mark.skipif(SKIP_PRIVATE or SKIP_32BIT or SKIP_HUGE, reason=REASON)
+@pytest.mark.skipif(SKIP_PRIVATE or SKIP_LARGE, reason=REASON)
def test_depend_czi2tif_airy():
"""Test czifile.czi2tif with AiryScan."""
from czifile.czifile import czi2tif
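The new test_read_dng above exercises "Make index of SubIFDs a tuple": pages
reached through the SubIFDs tag now report a tuple index such as (0, 0)
instead of a plain integer. A minimal sketch of how this surfaces through the
public API, assuming a DNG-style file with a full-resolution CFA image stored
in a SubIFD (the file name is a placeholder):

    import tifffile

    with tifffile.TiffFile('IMG_0001.DNG') as tif:
        page = tif.pages[0]
        print(page.index)            # top-level page: plain int, 0
        for subpage in page.pages:   # pages stored in the SubIFDs tag
            print(subpage.index)     # now a tuple, e.g. (0, 0)
            cfa = subpage.asarray()  # decode the SubIFD image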
=====================================
tifffile.egg-info/PKG-INFO
=====================================
@@ -1,6 +1,6 @@
Metadata-Version: 2.1
Name: tifffile
-Version: 2020.11.26
+Version: 2020.12.8
Summary: Read and write TIFF(r) files
Home-page: https://www.lfd.uci.edu/~gohlke/
Author: Christoph Gohlke
@@ -51,28 +51,37 @@ Description: Read and write TIFF(r) files
:License: BSD 3-Clause
- :Version: 2020.11.26
+ :Version: 2020.12.8
Requirements
------------
This release has been tested with the following requirements and dependencies
(other versions may work):
- * `CPython 3.7.9, 3.8.6, 3.9.0 64-bit <https://www.python.org>`_
+ * `CPython 3.7.9, 3.8.6, 3.9.1 64-bit <https://www.python.org>`_
* `Numpy 1.19.4 <https://pypi.org/project/numpy/>`_
* `Imagecodecs 2020.5.30 <https://pypi.org/project/imagecodecs/>`_
(required only for encoding or decoding LZW, JPEG, etc.)
* `Matplotlib 3.3.3 <https://pypi.org/project/matplotlib/>`_
(required only for plotting)
- * `Lxml 4.6.1 <https://pypi.org/project/lxml/>`_
+ * `Lxml 4.6.2 <https://pypi.org/project/lxml/>`_
(required only for validating and printing XML)
- * `Zarr 2.5.0 <https://pypi.org/project/zarr/>`_
+ * `Zarr 2.6.1 <https://pypi.org/project/zarr/>`_
(required only for opening zarr storage)
Revisions
---------
+ 2020.12.8
+ Pass 4376 tests.
+ Fix corrupted ImageDescription in multi shaped series if buffer too small.
+ Fix libtiff warning that ImageDescription contains null byte in value.
+ Fix reading invalid files using JPEG compression with palette colorspace.
+ 2020.12.4
+ Fix reading some JPEG compressed CFA images.
+ Make index of SubIFDs a tuple.
+ Pass through FileSequence.imread arguments in imread.
+ Do not apply regex flags to FileSequence axes patterns (breaking).
2020.11.26
- Pass 4372 tests.
Add option to pass axes metadata to ImageJ writer.
Pad incomplete tiles passed to TiffWriter.write (#38).
Split TiffTag constructor (breaking).
@@ -298,6 +307,7 @@ Description: Read and write TIFF(r) files
Other tools for inspecting and manipulating TIFF files:
* `tifftools <https://github.com/DigitalSlideArchive/tifftools>`_
+ * `Tyf <https://github.com/Moustikitos/tyf>`_
References
----------
=====================================
tifffile/tifffile.py
=====================================
@@ -71,28 +71,37 @@ For command line usage run ``python -m tifffile --help``
:License: BSD 3-Clause
-:Version: 2020.11.26
+:Version: 2020.12.8
Requirements
------------
This release has been tested with the following requirements and dependencies
(other versions may work):
-* `CPython 3.7.9, 3.8.6, 3.9.0 64-bit <https://www.python.org>`_
+* `CPython 3.7.9, 3.8.6, 3.9.1 64-bit <https://www.python.org>`_
* `Numpy 1.19.4 <https://pypi.org/project/numpy/>`_
* `Imagecodecs 2020.5.30 <https://pypi.org/project/imagecodecs/>`_
(required only for encoding or decoding LZW, JPEG, etc.)
* `Matplotlib 3.3.3 <https://pypi.org/project/matplotlib/>`_
(required only for plotting)
-* `Lxml 4.6.1 <https://pypi.org/project/lxml/>`_
+* `Lxml 4.6.2 <https://pypi.org/project/lxml/>`_
(required only for validating and printing XML)
-* `Zarr 2.5.0 <https://pypi.org/project/zarr/>`_
+* `Zarr 2.6.1 <https://pypi.org/project/zarr/>`_
(required only for opening zarr storage)
Revisions
---------
+2020.12.8
+ Pass 4376 tests.
+ Fix corrupted ImageDescription in multi shaped series if buffer too small.
+ Fix libtiff warning that ImageDescription contains null byte in value.
+ Fix reading invalid files using JPEG compression with palette colorspace.
+2020.12.4
+ Fix reading some JPEG compressed CFA images.
+ Make index of SubIFDs a tuple.
+ Pass through FileSequence.imread arguments in imread.
+ Do not apply regex flags to FileSequence axes patterns (breaking).
2020.11.26
- Pass 4372 tests.
Add option to pass axes metadata to ImageJ writer.
Pad incomplete tiles passed to TiffWriter.write (#38).
Split TiffTag constructor (breaking).
@@ -318,6 +327,7 @@ Some libraries are using tifffile to write OME-TIFF files:
Other tools for inspecting and manipulating TIFF files:
* `tifftools <https://github.com/DigitalSlideArchive/tifftools>`_
+* `Tyf <https://github.com/Moustikitos/tyf>`_
References
----------
@@ -596,7 +606,7 @@ as numpy or zarr arrays:
"""
-__version__ = '2020.11.26'
+__version__ = '2020.12.8'
__all__ = (
'imwrite',
@@ -688,7 +698,7 @@ def imread(files=None, aszarr=False, **kwargs):
zarr storage instead of numpy array (experimental).
kwargs : dict
Parameters 'name', 'offset', 'size', and 'is_' flags are passed to
- TiffFile().
+ TiffFile or TiffSequence.imread.
The 'pattern', 'sort', 'container', and 'axesorder' parameters are
passed to TiffSequence().
Other parameters are passed to the asarray or aszarr functions.
@@ -734,8 +744,8 @@ def imread(files=None, aszarr=False, **kwargs):
)
kwargs['key'] = kwargs.pop('pages')
- if not kwargs_seq:
- if isinstance(files, str) and any(i in files for i in '?*'):
+ if kwargs_seq.get('container', None) is None:
+ if isinstance(files, str) and ('*' in files or '?' in files):
files = glob.glob(files)
if not files:
raise ValueError('no files found')
@@ -754,8 +764,8 @@ def imread(files=None, aszarr=False, **kwargs):
with TiffSequence(files, **kwargs_seq) as imseq:
if aszarr:
- return imseq.aszarr(**kwargs)
- return imseq.asarray(**kwargs)
+ return imseq.aszarr(**kwargs, **kwargs_file)
+ return imseq.asarray(**kwargs, **kwargs_file)
def imwrite(file, data=None, shape=None, dtype=None, **kwargs):
@@ -1054,7 +1064,7 @@ class TiffWriter:
if append:
self._fh = FileHandle(file, mode='r+b', size=0)
- self._fh.seek(0, 2)
+ self._fh.seek(0, os.SEEK_END)
else:
self._fh = FileHandle(file, mode='wb', size=0)
self._fh.write({'<': b'II', '>': b'MM'}[byteorder])
@@ -1428,6 +1438,8 @@ class TiffWriter:
)
elif metadata is not None:
self._write_image_description()
+ # description might have been appended to file
+ fh.seek(0, os.SEEK_END)
if self._subifds:
if self._truncate or truncate:
@@ -1861,7 +1873,7 @@ class TiffWriter:
count = len(value)
if code == 270:
self._descriptiontag = TiffTag(270, 2, count, None, 0, 0)
- rawcount = value.rfind(b'\x00\x00')
+ rawcount = value.find(b'\x00\x00')
if rawcount < 0:
rawcount = count
else:
@@ -1974,21 +1986,19 @@ class TiffWriter:
self._colormap is not None,
**self._metadata,
)
+ description += '\x00' * 64 # add buffer for in-place update
elif metadata or metadata == {}:
if self._truncate:
self._metadata.update(truncated=True)
description = json_description(inputshape, **self._metadata)
+ description += '\x00' * 16 # add buffer for in-place update
# elif metadata is None and self._truncate:
# raise ValueError('cannot truncate without writing metadata')
else:
description = None
+
if description is not None:
description = description.encode('ascii')
- if not self._ome:
- # add 64 bytes buffer
- # the description might be updated later inplace with the
- # final shape
- description += b'\x00' * 64
addtag(270, 2, 0, description, writeonce=True)
del description
@@ -2372,14 +2382,14 @@ class TiffWriter:
ifdsize += 1
# write IFD later when strip/tile bytecounts and offsets are known
- fh.seek(ifdsize, 1)
+ fh.seek(ifdsize, os.SEEK_CUR)
# write image data
dataoffset = fh.tell()
if align is None:
align = 16
skip = (align - (dataoffset % align)) % align
- fh.seek(skip, 1)
+ fh.seek(skip, os.SEEK_CUR)
dataoffset += skip
if contiguous:
if data is None:
@@ -4433,11 +4443,14 @@ class TiffFile:
info = [info]
info.append('\n'.join(str(s) for s in self.series))
if detail >= 3:
- info.extend(
- TiffPage.__str__(p, detail=detail, width=width)
- for p in self.pages
- if p is not None
- )
+ for p in self.pages:
+ if p is None:
+ continue
+ info.append(TiffPage.__str__(p, detail=detail, width=width))
+ for s in p.pages:
+ info.append(
+ TiffPage.__str__(s, detail=detail, width=width)
+ )
elif self.series:
info.extend(
TiffPage.__str__(s.pages[0], detail=detail, width=width)
@@ -4742,13 +4755,11 @@ class TiffFile:
@lazyattr
def micromanager_metadata(self):
- """Return consolidated MicroManager metadata as dict."""
+ """Return MicroManager non-TIFF settings from file as dict."""
if not self.is_micromanager:
return None
# from file header
return read_micromanager_metadata(self._fh)
- # from MicroManagerMetadata tag
- # result.update(self.pages[0].tags[51123].value)
@lazyattr
def scanimage_metadata(self):
@@ -4787,7 +4798,7 @@ class TiffPages:
"""
- def __init__(self, parent):
+ def __init__(self, parent, index=None):
"""Initialize instance and read first TiffPage from file.
If parent is a TiffFile, the file position must be at an offset to an
@@ -4803,6 +4814,7 @@ class TiffPages:
self._keyframe = None # page that is currently used as keyframe
self._cache = False # do not cache frames or pages (if not keyframe)
self._nextpageoffset = None
+ self._index = (index,) if isinstance(index, int) else index
if isinstance(parent, TiffFile):
# read offset to first page from current file position
@@ -4835,9 +4847,11 @@ class TiffPages:
self._indexed = True
return
+ pageindex = 0 if self._index is None else self._index + (0,)
+
# read and cache first page
fh.seek(offset)
- page = TiffPage(self.parent, index=0)
+ page = TiffPage(self.parent, index=pageindex)
self.pages.append(page)
self._keyframe = page
if self._nextpageoffset is None:
@@ -4937,8 +4951,11 @@ class TiffPages:
keyframe = self._keyframe
for i, page in enumerate(pages):
if isinstance(page, (int, numpy.integer)):
+ pageindex = i if self._index is None else self._index + (i,)
fh.seek(page)
- page = self._tiffpage(self.parent, index=i, keyframe=keyframe)
+ page = self._tiffpage(
+ self.parent, index=pageindex, keyframe=keyframe
+ )
pages[i] = page
self._cached = True
@@ -4965,13 +4982,16 @@ class TiffPages:
for index, offset in enumerate(
range(page1.offset + delta, filesize, delta)
):
- d = (index + 2) * delta
+ pageindex = index + 2
+ d = pageindex * delta
offsets = tuple(i + d for i in page.dataoffsets)
offset = offset if offset < 2 ** 31 - 1 else None
+ if self._index is not None:
+ pageindex = self._index + (pageindex,)
pages.append(
TiffFrame(
parent=page.parent,
- index=index + 2,
+ index=pageindex,
offset=offset,
offsets=offsets,
bytecounts=page.databytecounts,
@@ -5153,8 +5173,9 @@ class TiffPages:
raise RuntimeError('page hash mismatch')
return page
+ pageindex = key if self._index is None else self._index + (key,)
self._seek(key)
- page = tiffpage(self.parent, index=key, keyframe=self._keyframe)
+ page = tiffpage(self.parent, index=pageindex, keyframe=self._keyframe)
if validate and validate != page.hash:
raise RuntimeError('page hash mismatch')
if self._cache or cache:
@@ -5318,7 +5339,7 @@ class TiffPage:
if tiff.version == 42 and tiff.offsetsize == 8:
# patch offsets/values for 64-bit NDPI file
tagsize = 16
- fh.seek(8, 1)
+ fh.seek(8, os.SEEK_CUR)
ext = fh.read(4 * tagno) # high bits
data = b''.join(
data[i * 12 : i * 12 + 12] + ext[i * 4 : i * 4 + 4]
@@ -5596,7 +5617,8 @@ class TiffPage:
def decode(*args, **kwargs):
raise ValueError(
f'TiffPage {self.index}: data type not supported: '
- f'{self.sampleformat}{self.bitspersample}'
+ f'SampleFormat {self.sampleformat}, '
+ f'{self.bitspersample}-bit'
)
return cache(decode)
@@ -5690,16 +5712,19 @@ class TiffPage:
data = data[:size]
if data.size == size:
# complete tile
- data.shape = shape
+ # data might be non-contiguous; cannot reshape inplace
+ data = data.reshape(shape)
else:
# data fills remaining space
# found in some JPEG/PNG compressed tiles
try:
- data.shape = (
- min(imdepth - indices[1], shape[0]),
- min(imlength - indices[2], shape[1]),
- min(imwidth - indices[3], shape[2]),
- samples,
+ data = data.reshape(
+ (
+ min(imdepth - indices[1], shape[0]),
+ min(imlength - indices[2], shape[1]),
+ min(imwidth - indices[3], shape[2]),
+ samples,
+ )
)
except ValueError:
# incomplete tile; see gdal issue #1179
@@ -5797,7 +5822,7 @@ class TiffPage:
elif self.photometric == 2:
if self.planarconfig == 1:
colorspace = outcolorspace = 2 # RGB
- elif self.photometric > 2:
+ elif self.photometric > 3:
outcolorspace = TIFF.PHOTOMETRIC(self.photometric).value
def decode(
@@ -6292,21 +6317,40 @@ class TiffPage:
"""Return sequence of sub-pages (SubIFDs)."""
if 330 not in self.tags:
return ()
- return TiffPages(self)
+ return TiffPages(self, index=self.index)
@lazyattr
def maxworkers(self):
- """Return maximum number of threads for decoding strips or tiles."""
- if self.is_contiguous:
- return 1
+ """Return maximum number of threads for decoding segments.
+
+ Return 0 to disable multi-threading also for stacking pages.
+
+ """
+ if self.is_contiguous or self.dtype is None:
+ return 0
+ if self.compression in (
+ 6,
+ 7,
+ 33003,
+ 33005,
+ 34712,
+ 34933,
+ 34934,
+ 50001,
+ ):
+ # image codecs
+ return min(TIFF.MAXWORKERS, len(self.dataoffsets))
+ bytecount = product(self.chunks) * self.dtype.itemsize
+ if bytecount < 2048:
+ # disable multi-threading for small segments
+ return 0
+ if self.compression != 1 or self.fillorder != 1 or self.predictor != 1:
+ if self.compression == 5 and bytecount < 16384:
+ # disable multi-threading for small LZW compressed segments
+ return 0
if len(self.dataoffsets) < 4:
return 1
- if 0 < self.databytecounts[0] < 512:
- return 1
if self.compression != 1 or self.fillorder != 1 or self.predictor != 1:
- if self.compression == 5 and self.databytecounts[0] < 8192:
- # disable multi-threading for small LZW compressed segments
- return 1
if imagecodecs is not None:
return min(TIFF.MAXWORKERS, len(self.dataoffsets))
return 2 # optimum for large number of uncompressed tiles
@@ -6875,7 +6919,7 @@ class TiffFrame:
)
is_mdgel = False
- pages = None
+ pages = ()
# tags = {}
def __init__(
@@ -7250,7 +7294,8 @@ class TiffTag:
TIFF.DATA_FORMATS.
The new packed value is appended to the file if it is longer than the
- old value. The old value is zeroed.
+ old value. The old value is zeroed. The file position is left where it
+ was.
"""
if self.offset < 8 or self.valueoffset < 8:
@@ -7338,7 +7383,7 @@ class TiffTag:
fh.write(struct.pack(tiff.tagformat2, count, packedvalue))
else:
# inline -> separate: append to file
- fh.seek(0, 2)
+ fh.seek(0, os.SEEK_END)
valueoffset = fh.tell()
if valueoffset % 2:
# value offset must begin on a word boundary
@@ -7374,7 +7419,7 @@ class TiffTag:
if erase:
fh.seek(self.valueoffset)
fh.write(b'\x00' * oldsize)
- fh.seek(0, 2)
+ fh.seek(0, os.SEEK_END)
valueoffset = fh.tell()
if valueoffset % 2:
# value offset must begin on a word boundary
@@ -8595,7 +8640,7 @@ class FileHandle:
if self._size is None:
pos = self._fh.tell()
- self._fh.seek(self._offset, 2)
+ self._fh.seek(self._offset, os.SEEK_END)
self._size = self._fh.tell()
self._fh.seek(pos)
@@ -8712,7 +8757,7 @@ class FileHandle:
"""Append size bytes to file. Position must be at end of file."""
if size < 1:
return
- self._fh.seek(size - 1, 1)
+ self._fh.seek(size - 1, os.SEEK_CUR)
self._fh.write(b'\x00')
def write_array(self, data):
@@ -9469,13 +9514,15 @@ class OmeXml:
from lxml import etree # noqa: delayed import
parser = etree.XMLParser(remove_blank_text=True)
- xml = etree.fromstring(xml, parser)
+ tree = etree.fromstring(xml, parser)
xml = etree.tostring(
- xml, encoding='utf-8', pretty_print=True, xml_declaration=True
- )
- return xml.decode()
- except Exception:
- return xml
+ tree, encoding='utf-8', pretty_print=True, xml_declaration=True
+ ).decode()
+ except Exception as exc:
+ warnings.warn(f'OmeXml.__str__: {exc}', UserWarning)
+ except ImportError:
+ pass
+ return xml
@staticmethod
def _escape(value):
@@ -9586,11 +9633,11 @@ class OmeXml:
if _schema and _schema[0] is not None:
if omexml.startswith('<?xml'):
omexml = omexml.split('>', 1)[-1]
- xml = etree.fromstring(omexml)
+ tree = etree.fromstring(omexml)
if assert_:
- _schema[0].assert_(xml)
+ _schema[0].assert_(tree)
return True
- return _schema[0].validate(xml)
+ return _schema[0].validate(tree)
return None
@@ -10684,7 +10731,7 @@ class TIFF:
def PLANARCONFIG():
class PLANARCONFIG(enum.IntEnum):
- CONTIG = 1
+ CONTIG = 1 # CHUNKY
SEPARATE = 2
return PLANARCONFIG
@@ -11127,7 +11174,7 @@ class TIFF:
def FILE_PATTERNS():
# predefined FileSequence patterns
return {
- 'axes': r"""
+ 'axes': r"""(?ix)
# matches Olympus OIF and Leica TIFF series
_?(?:(q|l|p|a|c|t|x|y|z|ch|tp)(\d{1,4}))
_?(?:(q|l|p|a|c|t|x|y|z|ch|tp)(\d{1,4}))?
@@ -12538,7 +12585,7 @@ def read_cz_lsminfo(fh, byteorder, dtype, count, offsetsize):
magic_number, structure_size = struct.unpack('<II', fh.read(8))
if magic_number not in (50350412, 67127628):
raise ValueError('invalid CZ_LSMINFO structure')
- fh.seek(-8, 1)
+ fh.seek(-8, os.SEEK_CUR)
if structure_size < numpy.dtype(TIFF.CZ_LSMINFO).itemsize:
# adjust structure according to structure_size
@@ -13966,7 +14013,8 @@ def parse_filenames(files, pattern, axesorder=None):
"""
if not pattern:
raise ValueError('invalid pattern')
- pattern = re.compile(pattern, re.IGNORECASE | re.VERBOSE)
+ if isinstance(pattern, str):
+ pattern = re.compile(pattern)
def parse(fname, pattern=pattern):
# return axes and indices from file name
@@ -14377,7 +14425,7 @@ def stack_pages(pages, out=None, maxworkers=None, **kwargs):
# auto-detect
page_maxworkers = page0.maxworkers
maxworkers = min(npages, TIFF.MAXWORKERS)
- if maxworkers == 1 or page0.is_contiguous:
+ if maxworkers == 1 or page_maxworkers < 1:
maxworkers = page_maxworkers = 1
elif npages < 3:
maxworkers = 1
@@ -14388,9 +14436,6 @@ def stack_pages(pages, out=None, maxworkers=None, **kwargs):
and page0.predictor == 1
):
maxworkers = 1
- elif page0.compression == 5 and page0.databytecounts[0] < 8192:
- # disable for small LZW compressed segments
- maxworkers = page_maxworkers = 1
else:
page_maxworkers = 1
elif maxworkers == 1:
@@ -15099,12 +15144,12 @@ def pformat_xml(xml):
if not isinstance(xml, bytes):
xml = xml.encode()
- xml = etree.parse(io.BytesIO(xml))
+ tree = etree.parse(io.BytesIO(xml))
xml = etree.tostring(
- xml,
+ tree,
pretty_print=True,
xml_declaration=True,
- encoding=xml.docinfo.encoding,
+ encoding=tree.docinfo.encoding,
)
xml = bytes2str(xml)
except Exception:
=====================================
tifffile/tifffile_geodb.py
=====================================
@@ -1839,12 +1839,14 @@ class Datum(enum.IntEnum):
Reseau_National_Belge_1972 = 6313
Deutsche_Hauptdreiecksnetz = 6314
Conakry_1905 = 6315
+ Dealul_Piscului_1930 = 6316
+ Dealul_Piscului_1970 = 6317
+
WGS72 = 6322
WGS72_Transit_Broadcast_Ephemeris = 6324
WGS84 = 6326
Ancienne_Triangulation_Francaise = 6901
Nord_de_Guerre = 6902
- Dealul_Piscului_1970 = 6317
class ModelType(enum.IntEnum):
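The "Pass through FileSequence.imread arguments in imread" change, together
with the updated imread docstring above, means that per-file options such as
the 'is_' flags are now forwarded to the underlying reader when imread
operates on a file sequence. A minimal sketch under that assumption; the file
and glob names are hypothetical:

    import tifffile

    # A glob makes imread read the files as a FileSequence; per-file flags
    # such as is_ome are forwarded to the reader of each file.
    tiles = tifffile.imread('tile_*.ome.tif', is_ome=False)

    # For a single file written with shaped (JSON) metadata, is_shaped=False
    # disables shaped-series interpretation and returns all pages.
    pages = tifffile.imread('stack.tif', is_shaped=False)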
View it on GitLab: https://salsa.debian.org/python-team/packages/tifffile/-/commit/11e8b78e3da187f647701ce7f602826f4a21a4b9