[med-svn] [Git][med-team/python-biom-format][master] 2 commits: Patch: Python 3.10 support.

Andreas Tille (@tille) gitlab at salsa.debian.org
Tue Nov 23 05:37:29 GMT 2021



Andreas Tille pushed to branch master at Debian Med / python-biom-format


Commits:
84e0ed50 by Stefano Rivera at 2021-11-22T17:48:13-04:00
Patch: Python 3.10 support.

- - - - -
3bf950b5 by Andreas Tille at 2021-11-23T05:37:13+00:00
Merge branch 'python3.10' into 'master'

Patch: Python 3.10 support.

See merge request med-team/python-biom-format!1
- - - - -


3 changed files:

- debian/changelog
- + debian/patches/python3.10.patch
- debian/patches/series


Changes:

=====================================
debian/changelog
=====================================
@@ -1,3 +1,9 @@
+python-biom-format (2.1.10-2.1) UNRELEASED; urgency=medium
+
+  * Patch: Python 3.10 support.
+
+ -- Stefano Rivera <stefanor at debian.org>  Mon, 22 Nov 2021 17:47:55 -0400
+
 python-biom-format (2.1.10-2) unstable; urgency=medium
 
   * Fix watchfile to detect new versions on github


=====================================
debian/patches/python3.10.patch
=====================================
@@ -0,0 +1,2377 @@
+From: Nicola Soranzo <nicola.soranzo at gmail.com>
+Date: Mon, 22 Nov 2021 17:45:13 -0400
+Subject: Add support for Python 3.10 (#865)
+
+* Add support for Python 3.10
+
+Also:
+- Clean up pre-3.6 code with pyupgrade
+- Add requirement for `>=3.6` to `setup.cfg`
+
+* Also run the `build` job on Python 3.10
+
+* remove skbio from build
+
+* remove unneeded packages
+
+* Re-add flake8 for `make test`
+
+* Remove unused Travis configuration
+
+* restore six / future for the moment
+
+* Remove six and future for good
+
+* Use conda-forge
+
+* Fix test_from_hdf5_custom_parsers on h5py >=3.0
+
+* Fix "VLEN strings do not support embedded NULLs"
+
+Fix the following traceback with h5py 3.1.0:
+
+```
+test_table.py:1025: in test_to_hdf5_missing_metadata_sample
+    t.to_hdf5(h5, 'tests')
+../table.py:4563: in to_hdf5
+    formatter[category](grp, category, md, compression)
+../table.py:315: in general_formatter
+    compression=compression)
+/usr/share/miniconda/envs/env_name/lib/python3.6/site-packages/h5py/_hl/group.py:148: in create_dataset
+    dsid = dataset.make_new_dset(group, shape, dtype, data, name, **kwds)
+/usr/share/miniconda/envs/env_name/lib/python3.6/site-packages/h5py/_hl/dataset.py:140: in make_new_dset
+    dset_id.write(h5s.ALL, h5s.ALL, data)
+h5py/_objects.pyx:54: in h5py._objects.with_phil.wrapper
+    ???
+h5py/_objects.pyx:55: in h5py._objects.with_phil.wrapper
+    ???
+h5py/h5d.pyx:232: in h5py.h5d.DatasetID.write
+    ???
+h5py/_proxy.pyx:145: in h5py._proxy.dset_rw
+    ???
+h5py/_conv.pyx:443: in h5py._conv.str2vlen
+    ???
+h5py/_conv.pyx:94: in h5py._conv.generic_converter
+    ???
+_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
+
+>   ???
+E   ValueError: VLEN strings do not support embedded NULLs
+```
+
+* Quotes
+
+* Require pytest >=6.2.4 for Python 3.10
+
+* Replace `@npt.dec.skipif()` with `@pytest.mark.skipif()`
+
+Co-authored-by: Daniel McDonald <d3mcdonald at eng.ucsd.edu>
+
+Origin: upstream, https://github.com/biocore/biom-format/pull/865
+---
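[Editor's note] The "VLEN strings do not support embedded NULLs" failure above comes from `general_formatter`, which used `'\0'` as a placeholder for missing metadata values; h5py 3.x rejects NUL bytes inside variable-length strings, so the patch switches the placeholder to an empty string. A minimal sketch of the failure and the fix, assuming h5py >= 3 (`example.h5` is just a scratch file name):

```python
import h5py

vlen_str = h5py.special_dtype(vlen=str)

with h5py.File('example.h5', 'w') as h5:
    # On h5py >= 3, a NUL byte inside a variable-length string raises
    # the error from the traceback above:
    #   h5.create_dataset('bad', data=['\0'.encode('utf8')], dtype=vlen_str)
    #   ValueError: VLEN strings do not support embedded NULLs
    #
    # The patch therefore stores missing metadata as an empty string:
    missing = ''
    h5.create_dataset('ok', data=[missing.encode('utf8')], dtype=vlen_str)
```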
+ .travis.yml                                 |  31 ---
+ biom/_filter.pyx                            |   3 +-
+ biom/_subsample.pyx                         |   1 -
+ biom/_transform.pyx                         |   1 -
+ biom/cli/__init__.py                        |   1 -
+ biom/cli/installation_informer.py           |   3 +-
+ biom/cli/metadata_adder.py                  |   5 +-
+ biom/cli/table_converter.py                 |   5 +-
+ biom/cli/table_head.py                      |   1 -
+ biom/cli/table_normalizer.py                |   1 -
+ biom/cli/table_subsetter.py                 |   5 +-
+ biom/cli/table_summarizer.py                |   1 -
+ biom/cli/table_validator.py                 |  27 +-
+ biom/cli/uc_processor.py                    |   5 +-
+ biom/cli/util.py                            |   1 -
+ biom/err.py                                 |   2 +-
+ biom/exception.py                           |   4 +-
+ biom/parse.py                               |  59 ++--
+ biom/table.py                               | 203 +++++++-------
+ biom/tests/test_cli/test_subset_table.py    |  22 +-
+ biom/tests/test_cli/test_table_converter.py | 177 +++++++-----
+ biom/tests/test_cli/test_validate_table.py  |  19 +-
+ biom/tests/test_err.py                      |   4 +-
+ biom/tests/test_parse.py                    |  13 +-
+ biom/tests/test_table.py                    | 414 ++++++++++++++--------------
+ biom/tests/test_util.py                     |  12 +-
+ biom/util.py                                |  10 +-
+ doc/conf.py                                 |  11 +-
+ setup.cfg                                   |   4 +
+ setup.py                                    |  25 +-
+ 30 files changed, 542 insertions(+), 528 deletions(-)
+ delete mode 100644 .travis.yml
+
+diff --git a/.travis.yml b/.travis.yml
+deleted file mode 100644
+index 35c386b..0000000
+--- a/.travis.yml
++++ /dev/null
+@@ -1,31 +0,0 @@
+-# Modified from https://github.com/biocore/scikit-bio
+-language: python
+-env:
+-  - PYTHON_VERSION=3.6 WITH_DOCTEST=True
+-  - PYTHON_VERSION=3.7 WITH_DOCTEST=True
+-  - PYTHON_VERSION=3.8 WITH_DOCTEST=True
+-before_install:
+-  - wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
+-  - chmod +x miniconda.sh
+-  - ./miniconda.sh -b
+-  - export PATH=/home/travis/miniconda3/bin:$PATH
+-install:
+-  - conda create --yes -n env_name python=$PYTHON_VERSION pip click numpy "scipy>=1.3.1" pep8 flake8 coverage future six "pandas>=0.20.0" nose h5py>=2.2.0 cython
+-  - source activate env_name
+-  - if [ ${PYTHON_VERSION} = "3.7" ]; then pip install sphinx==1.2.2 "docutils<0.14"; fi
+-  - pip install coveralls
+-  - pip install anndata
+-  - pip install -e . --no-deps
+-script:
+-  - make test
+-  - biom show-install-info
+-  - if [ ${PYTHON_VERSION} = "3.7" ]; then make -C doc html; fi
+-  # we can only validate the tables if we have H5PY
+-  - for table in examples/*hdf5.biom; do echo ${table}; biom validate-table -i ${table}; done
+-  # validate JSON formatted tables
+-  - for table in examples/*table.biom; do echo ${table}; biom validate-table -i ${table}; done;
+-  - python biom/assets/exercise_api.py examples/rich_sparse_otu_table_hdf5.biom sample
+-  - python biom/assets/exercise_api.py examples/rich_sparse_otu_table_hdf5.biom observation
+-  - sh biom/assets/exercise_cli.sh
+-after_success:
+-  - coveralls
+diff --git a/biom/_filter.pyx b/biom/_filter.pyx
+index e66f10f..12ee562 100644
+--- a/biom/_filter.pyx
++++ b/biom/_filter.pyx
+@@ -6,10 +6,9 @@
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # -----------------------------------------------------------------------------
+ 
+-from __future__ import division, print_function
+ 
+ from itertools import compress
+-from collections import Iterable
++from collections.abc import Iterable
+ from types import FunctionType
+ 
+ import numpy as np
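[Editor's note] The `collections` import move above is the change that actually breaks on Python 3.10: the ABC aliases (`Iterable`, `Hashable`, ...) were deprecated in Python 3.3 and removed in 3.10, leaving only `collections.abc`. A small illustration (the try/except fallback is shown for context only; the patch simply uses the new location):

```python
try:
    from collections.abc import Hashable, Iterable  # Python 3.3+
except ImportError:  # pre-3.3 interpreters only
    from collections import Hashable, Iterable

print(isinstance([1, 2, 3], Iterable))  # True
print(isinstance((1, 2, 3), Hashable))  # True
```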
+diff --git a/biom/_subsample.pyx b/biom/_subsample.pyx
+index 13aeff7..e59e778 100644
+--- a/biom/_subsample.pyx
++++ b/biom/_subsample.pyx
+@@ -6,7 +6,6 @@
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # -----------------------------------------------------------------------------
+ 
+-from __future__ import division
+ 
+ import numpy as np
+ cimport numpy as cnp
+diff --git a/biom/_transform.pyx b/biom/_transform.pyx
+index 03ab65b..8a15018 100644
+--- a/biom/_transform.pyx
++++ b/biom/_transform.pyx
+@@ -6,7 +6,6 @@
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # -----------------------------------------------------------------------------
+ 
+-from __future__ import division
+ 
+ import numpy as np
+ cimport numpy as cnp
+diff --git a/biom/cli/__init__.py b/biom/cli/__init__.py
+index ee4beaa..73cd488 100644
+--- a/biom/cli/__init__.py
++++ b/biom/cli/__init__.py
+@@ -6,7 +6,6 @@
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # ----------------------------------------------------------------------------
+ 
+-from __future__ import division
+ 
+ from importlib import import_module
+ 
+diff --git a/biom/cli/installation_informer.py b/biom/cli/installation_informer.py
+index 7e0c755..3fb9e4c 100644
+--- a/biom/cli/installation_informer.py
++++ b/biom/cli/installation_informer.py
+@@ -6,7 +6,6 @@
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # -----------------------------------------------------------------------------
+ 
+-from __future__ import division
+ 
+ import sys
+ 
+@@ -118,4 +117,4 @@ def _format_info(info, title):
+ 
+ 
+ def _get_max_length(info):
+-    return max([len(e[0]) for e in info])
++    return max(len(e[0]) for e in info)
+diff --git a/biom/cli/metadata_adder.py b/biom/cli/metadata_adder.py
+index 3913151..4012344 100644
+--- a/biom/cli/metadata_adder.py
++++ b/biom/cli/metadata_adder.py
+@@ -6,7 +6,6 @@
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # -----------------------------------------------------------------------------
+ 
+-from __future__ import division
+ 
+ import click
+ 
+@@ -82,11 +81,11 @@ def add_metadata(input_fp, output_fp, sample_metadata_fp,
+     """
+     table = load_table(input_fp)
+     if sample_metadata_fp is not None:
+-        sample_metadata_f = open(sample_metadata_fp, 'U')
++        sample_metadata_f = open(sample_metadata_fp)
+     else:
+         sample_metadata_f = None
+     if observation_metadata_fp is not None:
+-        observation_metadata_f = open(observation_metadata_fp, 'U')
++        observation_metadata_f = open(observation_metadata_fp)
+     else:
+         observation_metadata_f = None
+     if sc_separated is not None:
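[Editor's note] Dropping the `'U'` flag is safe because Python 3 text mode already uses universal newlines (`\r\n` and `\r` are translated to `\n`); the flag has no effect on Python 3 and was removed outright in Python 3.11. A sketch, where `metadata.tsv` is a hypothetical input file:

```python
# open(path) in Python 3 means text mode with universal newlines,
# i.e. exactly what open(path, 'U') used to mean on Python 2
with open('metadata.tsv') as f:
    for line in f:
        print(line.rstrip('\n'))
```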
+diff --git a/biom/cli/table_converter.py b/biom/cli/table_converter.py
+index 21388d4..02b5d34 100644
+--- a/biom/cli/table_converter.py
++++ b/biom/cli/table_converter.py
+@@ -6,7 +6,6 @@
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # -----------------------------------------------------------------------------
+ 
+-from __future__ import division
+ 
+ import click
+ 
+@@ -113,12 +112,12 @@ def convert(input_fp, output_fp, sample_metadata_fp, observation_metadata_fp,
+ 
+     table = load_table(input_fp)
+     if sample_metadata_fp is not None:
+-        with open(sample_metadata_fp, 'U') as f:
++        with open(sample_metadata_fp) as f:
+             sample_metadata_f = MetadataMap.from_file(f)
+     else:
+         sample_metadata_f = None
+     if observation_metadata_fp is not None:
+-        with open(observation_metadata_fp, 'U') as f:
++        with open(observation_metadata_fp) as f:
+             observation_metadata_f = MetadataMap.from_file(f)
+     else:
+         observation_metadata_f = None
+diff --git a/biom/cli/table_head.py b/biom/cli/table_head.py
+index bb03e34..682be6e 100644
+--- a/biom/cli/table_head.py
++++ b/biom/cli/table_head.py
+@@ -6,7 +6,6 @@
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # ----------------------------------------------------------------------------
+ 
+-from __future__ import division
+ 
+ import click
+ 
+diff --git a/biom/cli/table_normalizer.py b/biom/cli/table_normalizer.py
+index d209306..1f05a14 100755
+--- a/biom/cli/table_normalizer.py
++++ b/biom/cli/table_normalizer.py
+@@ -8,7 +8,6 @@
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # ----------------------------------------------------------------------------
+ 
+-from __future__ import division
+ 
+ import click
+ 
+diff --git a/biom/cli/table_subsetter.py b/biom/cli/table_subsetter.py
+index aac3415..e091da8 100644
+--- a/biom/cli/table_subsetter.py
++++ b/biom/cli/table_subsetter.py
+@@ -6,7 +6,6 @@
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # -----------------------------------------------------------------------------
+ 
+-from __future__ import division
+ 
+ import click
+ 
+@@ -60,10 +59,10 @@ def subset_table(input_hdf5_fp, input_json_fp, axis, ids, output_fp):
+ 
+     """
+     if input_json_fp is not None:
+-        with open(input_json_fp, 'U') as f:
++        with open(input_json_fp) as f:
+             input_json_fp = f.read()
+ 
+-    with open(ids, 'U') as f:
++    with open(ids) as f:
+         ids = []
+         for line in f:
+             if not line.startswith('#'):
+diff --git a/biom/cli/table_summarizer.py b/biom/cli/table_summarizer.py
+index 18f5edc..edf30dc 100644
+--- a/biom/cli/table_summarizer.py
++++ b/biom/cli/table_summarizer.py
+@@ -6,7 +6,6 @@
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # -----------------------------------------------------------------------------
+ 
+-from __future__ import division
+ 
+ from operator import itemgetter
+ import locale
+diff --git a/biom/cli/table_validator.py b/biom/cli/table_validator.py
+index 09e6639..8b16c5d 100644
+--- a/biom/cli/table_validator.py
++++ b/biom/cli/table_validator.py
+@@ -1,5 +1,4 @@
+ #!/usr/bin/env python
+-# -*- coding: utf-8 -*-
+ # -----------------------------------------------------------------------------
+ # Copyright (c) 2011-2017, The BIOM Format Development Team.
+ #
+@@ -8,7 +7,6 @@
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # -----------------------------------------------------------------------------
+ 
+-from __future__ import division
+ import json
+ import sys
+ from datetime import datetime
+@@ -61,15 +59,21 @@ def _validate_table(input_fp, format_version=None):
+ 
+ 
+ # Refactor in the future. Also need to address #664
+-class TableValidator(object):
++class TableValidator:
+ 
+     FormatURL = "http://biom-format.org"
+-    TableTypes = set(['otu table', 'pathway table', 'function table',
+-                      'ortholog table', 'gene table', 'metabolite table',
+-                      'taxon table'])
+-    MatrixTypes = set(['sparse', 'dense'])
++    TableTypes = {
++        'otu table',
++        'pathway table',
++        'function table',
++        'ortholog table',
++        'gene table',
++        'metabolite table',
++        'taxon table',
++    }
++    MatrixTypes = {'sparse', 'dense'}
+     ElementTypes = {'int': int, 'str': str, 'float': float, 'unicode': str}
+-    HDF5FormatVersions = set([(2, 0), (2, 0, 0), (2, 1), (2, 1, 0)])
++    HDF5FormatVersions = {(2, 0), (2, 0, 0), (2, 1), (2, 1, 0)}
+ 
+     def run(self, **kwargs):
+         is_json = not is_hdf5_file(kwargs['table'])
+@@ -102,7 +106,7 @@ class TableValidator(object):
+                     sys.exit(1)
+                 return self._validate_hdf5(**kwargs)
+             else:
+-                raise IOError("h5py is not installed, can only validate JSON "
++                raise OSError("h5py is not installed, can only validate JSON "
+                               "tables")
+ 
+     def __call__(self, table, format_version=None):
+@@ -448,11 +452,10 @@ class TableValidator(object):
+ 
+     def _valid_format(self, table_json):
+         """Format must be the expected version"""
+-        formal = "Biological Observation Matrix %s" % self._format_version
++        formal = f"Biological Observation Matrix {self._format_version}"
+ 
+         if table_json['format'] not in [formal, self._format_version]:
+-            return "Invalid format '%s', must be '%s'" % (table_json['format'],
+-                                                          self._format_version)
++            return f"Invalid format '{table_json['format']}', must be '{self._format_version}'"  # noqa: E501
+         else:
+             return ''
+ 
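[Editor's note] Two idioms recur throughout this hunk: set literals replace `set([...])`, and `OSError` replaces `IOError`, which has been a plain alias since Python 3.3 (PEP 3151). For instance:

```python
# A set literal avoids the throwaway list that set([...]) builds first
MatrixTypes = {'sparse', 'dense'}
assert MatrixTypes == set(['sparse', 'dense'])

# IOError and OSError are the same class on Python 3.3+
assert IOError is OSError
```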
+diff --git a/biom/cli/uc_processor.py b/biom/cli/uc_processor.py
+index 8dc2c0a..0eeb522 100644
+--- a/biom/cli/uc_processor.py
++++ b/biom/cli/uc_processor.py
+@@ -6,7 +6,6 @@
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # ----------------------------------------------------------------------------
+ 
+-from __future__ import division
+ 
+ import click
+ 
+@@ -44,9 +43,9 @@ def from_uc(input_fp, output_fp, rep_set_fp):
+     $ biom from-uc -i in.uc -o out.biom --rep-set-fp rep-set.fna
+ 
+     """
+-    input_f = open(input_fp, 'U')
++    input_f = open(input_fp)
+     if rep_set_fp is not None:
+-        rep_set_f = open(rep_set_fp, 'U')
++        rep_set_f = open(rep_set_fp)
+     else:
+         rep_set_f = None
+     table = _from_uc(input_f, rep_set_f)
+diff --git a/biom/cli/util.py b/biom/cli/util.py
+index 9906363..b350d84 100644
+--- a/biom/cli/util.py
++++ b/biom/cli/util.py
+@@ -6,7 +6,6 @@
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # ----------------------------------------------------------------------------
+ 
+-from __future__ import division
+ 
+ import biom.util
+ import biom.parse
+diff --git a/biom/err.py b/biom/err.py
+index 0cbcf0a..013875e 100644
+--- a/biom/err.py
++++ b/biom/err.py
+@@ -122,7 +122,7 @@ def _create_error_states(msg, callback, exception):
+             'print': lambda x: stdout.write(msg + '\n')}
+ 
+ 
+-class ErrorProfile(object):
++class ErrorProfile:
+     """An error profile
+ 
+     The error profile defines the types of errors that can be optionally
+diff --git a/biom/exception.py b/biom/exception.py
+index 0969ac4..e4dc7ac 100644
+--- a/biom/exception.py
++++ b/biom/exception.py
+@@ -37,13 +37,13 @@ class DisjointIDError(BiomException):
+ 
+ class UnknownAxisError(TableException):
+     def __init__(self, axis):
+-        super(UnknownAxisError, self).__init__()
++        super().__init__()
+         self.args = ("Unknown axis '%s'." % axis,)
+ 
+ 
+ class UnknownIDError(TableException):
+     def __init__(self, missing_id, axis):
+-        super(UnknownIDError, self).__init__()
++        super().__init__()
+         self.args = ("The %s ID '%s' could not be found in the BIOM table." %
+                      (axis, missing_id),)
+ 
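[Editor's note] The zero-argument `super()` used here is Python 3 syntax: the interpreter supplies the enclosing class and instance, so `super(UnknownAxisError, self)` is redundant. A self-contained illustration of the rewritten class:

```python
class TableException(Exception):
    pass


class UnknownAxisError(TableException):
    def __init__(self, axis):
        super().__init__()  # Python 3 zero-argument form
        self.args = ("Unknown axis '%s'." % axis,)


print(UnknownAxisError('row'))  # Unknown axis 'row'
```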
+diff --git a/biom/parse.py b/biom/parse.py
+index ad29f02..e329c57 100644
+--- a/biom/parse.py
++++ b/biom/parse.py
+@@ -8,10 +8,8 @@
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # ----------------------------------------------------------------------------
+ 
+-from __future__ import division
+ 
+ import numpy as np
+-from future.utils import string_types
+ import io
+ import h5py
+ 
+@@ -33,26 +31,27 @@ __maintainer__ = "Daniel McDonald"
+ __email__ = "daniel.mcdonald at colorado.edu"
+ 
+ MATRIX_ELEMENT_TYPE = {'int': int, 'float': float, 'unicode': str,
+-                       u'int': int, u'float': float, u'unicode': str}
++                       'int': int, 'float': float, 'unicode': str}
+ 
+ QUOTE = '"'
+-JSON_OPEN = set(["[", "{"])
+-JSON_CLOSE = set(["]", "}"])
+-JSON_SKIP = set([" ", "\t", "\n", ","])
+-JSON_START = set(
+-    ["0",
+-     "1",
+-     "2",
+-     "3",
+-     "4",
+-     "5",
+-     "6",
+-     "7",
+-     "8",
+-     "9",
+-     "{",
+-     "[",
+-     '"'])
++JSON_OPEN = {"[", "{"}
++JSON_CLOSE = {"]", "}"}
++JSON_SKIP = {" ", "\t", "\n", ","}
++JSON_START = {
++    "0",
++    "1",
++    "2",
++    "3",
++    "4",
++    "5",
++    "6",
++    "7",
++    "8",
++    "9",
++    "{",
++    "[",
++    '"',
++}
+ 
+ 
+ def direct_parse_key(biom_str, key):
+@@ -160,7 +159,7 @@ def direct_slice_data(biom_str, to_keep, axis):
+     elif axis == 'sample':
+         new_data = _direct_slice_data_sparse_samp(data_fields, to_keep)
+ 
+-    return '"data": %s, "shape": %s' % (new_data, new_shape)
++    return f'"data": {new_data}, "shape": {new_shape}'
+ 
+ 
+ def strip_f(x):
+@@ -170,13 +169,13 @@ def strip_f(x):
+ def _remap_axis_sparse_obs(rcv, lookup):
+     """Remap a sparse observation axis"""
+     row, col, value = list(map(strip_f, rcv.split(',')))
+-    return "%s,%s,%s" % (lookup[row], col, value)
++    return f"{lookup[row]},{col},{value}"
+ 
+ 
+ def _remap_axis_sparse_samp(rcv, lookup):
+     """Remap a sparse sample axis"""
+     row, col, value = list(map(strip_f, rcv.split(',')))
+-    return "%s,%s,%s" % (row, lookup[col], value)
++    return f"{row},{lookup[col]},{value}"
+ 
+ 
+ def _direct_slice_data_sparse_obs(data, to_keep):
+@@ -187,7 +186,7 @@ def _direct_slice_data_sparse_obs(data, to_keep):
+     """
+     # interogate all the datas
+     new_data = []
+-    remap_lookup = dict([(str(v), i) for i, v in enumerate(sorted(to_keep))])
++    remap_lookup = {str(v): i for i, v in enumerate(sorted(to_keep))}
+     for rcv in data.split('],'):
+         r, c, v = strip_f(rcv).split(',')
+         if r in remap_lookup:
+@@ -204,7 +203,7 @@ def _direct_slice_data_sparse_samp(data, to_keep):
+     # could do sparse obs/samp in one forloop, but then theres the
+     # expense of the additional if-statement in the loop
+     new_data = []
+-    remap_lookup = dict([(str(v), i) for i, v in enumerate(sorted(to_keep))])
++    remap_lookup = {str(v): i for i, v in enumerate(sorted(to_keep))}
+     for rcv in data.split('],'):
+         r, c, v = rcv.split(',')
+         if c in remap_lookup:
+@@ -236,7 +235,7 @@ def get_axis_indices(biom_str, to_keep, axis):
+ 
+     axis_data = json.loads("{%s}" % axis_data)
+ 
+-    all_ids = set([v['id'] for v in axis_data[axis_key]])
++    all_ids = {v['id'] for v in axis_data[axis_key]}
+     if not to_keep.issubset(all_ids):
+         raise KeyError("Not all of the to_keep ids are in biom_str!")
+ 
+@@ -480,8 +479,8 @@ class MetadataMap(dict):
+         if hasattr(lines, "upper"):
+             # Try opening if a string was passed
+             try:
+-                lines = open(lines, 'U')
+-            except IOError:
++                lines = open(lines)
++            except OSError:
+                 raise BiomParseException("A string was passed that doesn't "
+                                          "refer to an accessible filepath.")
+ 
+@@ -565,7 +564,7 @@ class MetadataMap(dict):
+ 
+         {'Sample1': {'Treatment': 'Fast'}, 'Sample2': {'Treatment': 'Control'}}
+         """
+-        super(MetadataMap, self).__init__(mapping)
++        super().__init__(mapping)
+ 
+ 
+ def generatedby():
+@@ -593,7 +592,7 @@ def biom_meta_to_string(metadata, replace_str=':'):
+     # Note that since ';' and '|' are used as seperators we must replace them
+     # if they exist
+ 
+-    if isinstance(metadata, string_types):
++    if isinstance(metadata, str):
+         return metadata.replace(';', replace_str)
+     elif isinstance(metadata, list):
+         transtab = bytes.maketrans(';|', ''.join([replace_str, replace_str]))
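[Editor's note] With Python 2 support gone there is a single text type, so `future.utils.string_types` collapses to plain `str`. A trimmed sketch of `biom_meta_to_string` as patched (the list branch is elided here; see the hunk above):

```python
def biom_meta_to_string(metadata, replace_str=':'):
    # isinstance(x, str) replaces isinstance(x, string_types)
    if isinstance(metadata, str):
        return metadata.replace(';', replace_str)
    return metadata  # list handling elided

print(biom_meta_to_string('k__Bacteria;p__Firmicutes'))  # k__Bacteria:p__Firmicutes
```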
+diff --git a/biom/table.py b/biom/table.py
+index 33a4d78..8e691fd 100644
+--- a/biom/table.py
++++ b/biom/table.py
+@@ -1,5 +1,4 @@
+ #!/usr/bin/env python
+-# -*- coding: utf-8 -*-
+ """
+ BIOM Table (:mod:`biom.table`)
+ ==============================
+@@ -172,7 +171,6 @@ Bacteria; Bacteroidetes   1.0 1.0 0.0 1.0
+ # The full license is in the file COPYING.txt, distributed with this software.
+ # -----------------------------------------------------------------------------
+ 
+-from __future__ import division
+ import numpy as np
+ import scipy.stats
+ from copy import deepcopy
+@@ -180,16 +178,13 @@ from datetime import datetime
+ from json import dumps
+ from functools import reduce, partial
+ from operator import itemgetter, or_
+-from future.builtins import zip
+-from future.utils import viewitems
+-from collections import defaultdict, Hashable, Iterable
++from collections import defaultdict
++from collections.abc import Hashable, Iterable
+ from numpy import ndarray, asarray, zeros, newaxis
+ from scipy.sparse import (coo_matrix, csc_matrix, csr_matrix, isspmatrix,
+                           vstack, hstack)
+ import pandas as pd
+ import re
+-import six
+-from future.utils import string_types as _future_string_types
+ from biom.exception import (TableException, UnknownAxisError, UnknownIDError,
+                             DisjointIDError)
+ from biom.util import (get_biom_format_version_string,
+@@ -202,15 +197,6 @@ from ._transform import _transform
+ from ._subsample import _subsample
+ 
+ 
+-if not six.PY3:
+-    string_types = list(_future_string_types)
+-    string_types.append(str)
+-    string_types.append(unicode)  # noqa
+-    string_types = tuple(string_types)
+-else:
+-    string_types = _future_string_types
+-
+-
+ __author__ = "Daniel McDonald"
+ __copyright__ = "Copyright 2011-2020, The BIOM Format Development Team"
+ __credits__ = ["Daniel McDonald", "Jai Ram Rideout", "Greg Caporaso",
+@@ -224,7 +210,7 @@ __email__ = "daniel.mcdonald at colorado.edu"
+ 
+ 
+ MATRIX_ELEMENT_TYPE = {'int': int, 'float': float, 'unicode': str,
+-                       u'int': int, u'float': float, u'unicode': str}
++                       'int': int, 'float': float, 'unicode': str}
+ 
+ 
+ def _identify_bad_value(dtype, fields):
+@@ -280,7 +266,7 @@ def general_formatter(grp, header, md, compression):
+     name = 'metadata/%s' % header
+     dtypes = [type(m[header]) for m in md]
+ 
+-    if set(dtypes).issubset(set(string_types)):
++    if set(dtypes).issubset({str}):
+         grp.create_dataset(name, shape=shape,
+                            dtype=H5PY_VLEN_STR,
+                            data=[m[header].encode('utf8') for m in md],
+@@ -293,16 +279,16 @@ def general_formatter(grp, header, md, compression):
+         for dt, m in zip(dtypes, md):
+             val = m[header]
+             if val is None:
+-                val = '\0'
++                val = ''
+                 dt = str
+ 
+-            if dt in string_types:
++            if dt == str:
+                 val = val.encode('utf8')
+ 
+             formatted.append(val)
+             dtypes_used.append(dt)
+ 
+-        if set(dtypes_used).issubset(set(string_types)):
++        if set(dtypes_used).issubset({str}):
+             dtype_to_use = H5PY_VLEN_STR
+         else:
+             dtype_to_use = None
+@@ -380,7 +366,7 @@ def vlen_list_of_str_formatter(grp, header, md, compression):
+         compression=compression)
+ 
+ 
+-class Table(object):
++class Table:
+ 
+     """The (canonically pronounced 'teh') Table.
+ 
+@@ -831,7 +817,7 @@ class Table(object):
+         """
+         metadata = self.metadata(axis=axis)
+         if metadata is not None:
+-            for id_, md_entry in viewitems(md):
++            for id_, md_entry in md.items():
+                 if self.exists(id_, axis=axis):
+                     idx = self.index(id_, axis=axis)
+                     metadata[idx].update(md_entry)
+@@ -839,10 +825,10 @@ class Table(object):
+             ids = self.ids(axis=axis)
+             if axis == 'sample':
+                 self._sample_metadata = tuple(
+-                    [md[id_] if id_ in md else None for id_ in ids])
++                    md[id_] if id_ in md else None for id_ in ids)
+             elif axis == 'observation':
+                 self._observation_metadata = tuple(
+-                    [md[id_] if id_ in md else None for id_ in ids])
++                    md[id_] if id_ in md else None for id_ in ids)
+             else:
+                 raise UnknownAxisError(axis)
+         self._cast_metadata()
+@@ -1547,9 +1533,9 @@ class Table(object):
+         """
+         return id in self._index(axis=axis)
+ 
+-    def delimited_self(self, delim=u'\t', header_key=None, header_value=None,
++    def delimited_self(self, delim='\t', header_key=None, header_value=None,
+                        metadata_formatter=str,
+-                       observation_column_name=u'#OTU ID', direct_io=None):
++                       observation_column_name='#OTU ID', direct_io=None):
+         """Return self as a string in a delimited form
+ 
+         Default str output for the Table is just row/col ids and table data
+@@ -1595,12 +1581,13 @@ class Table(object):
+                     "You need to specify both header_key and header_value")
+ 
+         if header_value:
+-            output = [u'# Constructed from biom file',
+-                      u'%s%s%s\t%s' % (observation_column_name, delim,
+-                                       samp_ids, header_value)]
++            output = [
++                '# Constructed from biom file',
++                f'{observation_column_name}{delim}{samp_ids}\t{header_value}'
++            ]
+         else:
+             output = ['# Constructed from biom file',
+-                      '%s%s%s' % (observation_column_name, delim, samp_ids)]
++                      f'{observation_column_name}{delim}{samp_ids}']
+ 
+         if direct_io is not None:
+             direct_io.writelines([i+"\n" for i in output])
+@@ -1616,7 +1603,7 @@ class Table(object):
+             if header_key and obs_metadata is not None:
+                 md = obs_metadata[self._obs_index[obs_id]]
+                 md_out = metadata_formatter(md.get(header_key, None))
+-                output_row = u'%s%s%s\t%s%s' % \
++                output_row = '%s%s%s\t%s%s' % \
+                     (obs_id, delim, str_obs_vals, md_out, end_line)
+ 
+                 if direct_io is None:
+@@ -1624,12 +1611,12 @@ class Table(object):
+                 else:
+                     direct_io.write(output_row)
+             else:
+-                output_row = u'%s%s%s%s' % \
++                output_row = '%s%s%s%s' % \
+                             (obs_id, delim, str_obs_vals, end_line)
+                 if direct_io is None:
+                     output.append(output_row)
+                 else:
+-                    direct_io.write((output_row))
++                    direct_io.write(output_row)
+ 
+         return '\n'.join(output)
+ 
+@@ -2324,7 +2311,7 @@ class Table(object):
+ 
+         md = self.metadata(axis=self._invert_axis(axis))
+ 
+-        for part, (ids, values, metadata) in viewitems(partitions):
++        for part, (ids, values, metadata) in partitions.items():
+             if axis == 'sample':
+                 data = self._conv_to_self_type(values, transpose=True)
+                 samp_ids = ids
+@@ -2589,11 +2576,11 @@ class Table(object):
+ 
+             if include_collapsed_metadata:
+                 # reassociate pathway information
+-                for k, i in sorted(viewitems(idx_lookup), key=itemgetter(1)):
++                for k, i in sorted(idx_lookup.items(), key=itemgetter(1)):
+                     collapsed_md.append({one_to_many_md_key: new_md[k]})
+ 
+             # get the new sample IDs
+-            collapsed_ids = [k for k, i in sorted(viewitems(idx_lookup),
++            collapsed_ids = [k for k, i in sorted(idx_lookup.items(),
+                                                   key=itemgetter(1))]
+ 
+             # convert back to self type
+@@ -3968,17 +3955,11 @@ html
+         shape = h5grp.attrs['shape']
+         type_ = None if h5grp.attrs['type'] == '' else h5grp.attrs['type']
+ 
+-        if isinstance(id_, six.binary_type):
+-            if six.PY3:
+-                id_ = id_.decode('ascii')
+-            else:
+-                id_ = str(id_)
++        if isinstance(id_, bytes):
++            id_ = id_.decode('ascii')
+ 
+-        if isinstance(type_, six.binary_type):
+-            if six.PY3:
+-                type_ = type_.decode('ascii')
+-            else:
+-                type_ = str(type_)
++        if isinstance(type_, bytes):
++            type_ = type_.decode('ascii')
+ 
+         def axis_load(grp):
+             """Loads all the data of the given group"""
+@@ -3996,7 +3977,7 @@ html
+ 
+             # fetch ID specific metadata
+             md = [{} for i in range(len(ids))]
+-            for category, dset in viewitems(grp['metadata']):
++            for category, dset in grp['metadata'].items():
+                 parse_f = parser[category]
+                 data = dset[:]
+                 for md_dict, data_row in zip(md, data):
+@@ -4065,8 +4046,9 @@ html
+             # load the subset of the data
+             idx = samp_idx if axis == 'sample' else obs_idx
+             keep = np.where(idx)[0]
+-            indptr_indices = sorted([(h5_indptr[i], h5_indptr[i+1])
+-                                     for i in keep])
++            indptr_indices = sorted(
++                (h5_indptr[i], h5_indptr[i+1]) for i in keep
++            )
+             # Create the new indptr
+             indptr_subset = np.array([end - start
+                                       for start, end in indptr_indices])
+@@ -4596,28 +4578,28 @@ html
+         str
+             A JSON-formatted string representing the biom table
+         """
+-        if not isinstance(generated_by, string_types):
++        if not isinstance(generated_by, str):
+             raise TableException("Must specify a generated_by string")
+ 
+         # Fill in top-level metadata.
+         if direct_io:
+-            direct_io.write(u'{')
+-            direct_io.write(u'"id": "%s",' % str(self.table_id))
++            direct_io.write('{')
++            direct_io.write('"id": "%s",' % str(self.table_id))
+             direct_io.write(
+-                u'"format": "%s",' %
++                '"format": "%s",' %
+                 get_biom_format_version_string((1, 0)))  # JSON table -> 1.0.0
+             direct_io.write(
+-                u'"format_url": "%s",' %
++                '"format_url": "%s",' %
+                 get_biom_format_url_string())
+-            direct_io.write(u'"generated_by": "%s",' % generated_by)
+-            direct_io.write(u'"date": "%s",' % datetime.now().isoformat())
++            direct_io.write('"generated_by": "%s",' % generated_by)
++            direct_io.write('"date": "%s",' % datetime.now().isoformat())
+         else:
+-            id_ = u'"id": "%s",' % str(self.table_id)
+-            format_ = u'"format": "%s",' % get_biom_format_version_string(
++            id_ = '"id": "%s",' % str(self.table_id)
++            format_ = '"format": "%s",' % get_biom_format_version_string(
+                 (1, 0))  # JSON table -> 1.0.0
+-            format_url = u'"format_url": "%s",' % get_biom_format_url_string()
+-            generated_by = u'"generated_by": "%s",' % generated_by
+-            date = u'"date": "%s",' % datetime.now().isoformat()
++            format_url = '"format_url": "%s",' % get_biom_format_url_string()
++            generated_by = '"generated_by": "%s",' % generated_by
++            date = '"date": "%s",' % datetime.now().isoformat()
+ 
+         # Determine if we have any data in the matrix, and what the shape of
+         # the matrix is.
+@@ -4635,30 +4617,30 @@ html
+ 
+         # Determine the type of elements the matrix is storing.
+         if isinstance(test_element, int):
+-            matrix_element_type = u"int"
++            matrix_element_type = "int"
+         elif isinstance(test_element, float):
+-            matrix_element_type = u"float"
+-        elif isinstance(test_element, string_types):
+-            matrix_element_type = u"str"
++            matrix_element_type = "float"
++        elif isinstance(test_element, str):
++            matrix_element_type = "str"
+         else:
+             raise TableException("Unsupported matrix data type.")
+ 
+         # Fill in details about the matrix.
+         if direct_io:
+             direct_io.write(
+-                u'"matrix_element_type": "%s",' %
++                '"matrix_element_type": "%s",' %
+                 matrix_element_type)
+-            direct_io.write(u'"shape": [%d, %d],' % (num_rows, num_cols))
++            direct_io.write('"shape": [%d, %d],' % (num_rows, num_cols))
+         else:
+-            matrix_element_type = u'"matrix_element_type": "%s",' % \
++            matrix_element_type = '"matrix_element_type": "%s",' % \
+                 matrix_element_type
+-            shape = u'"shape": [%d, %d],' % (num_rows, num_cols)
++            shape = '"shape": [%d, %d],' % (num_rows, num_cols)
+ 
+         # Fill in the table type
+         if self.type is None:
+-            type_ = u'"type": null,'
++            type_ = '"type": null,'
+         else:
+-            type_ = u'"type": "%s",' % self.type
++            type_ = '"type": "%s",' % self.type
+ 
+         if direct_io:
+             direct_io.write(type_)
+@@ -4666,24 +4648,26 @@ html
+         # Fill in details about the rows in the table and fill in the matrix's
+         # data. BIOM 2.0+ is now only sparse
+         if direct_io:
+-            direct_io.write(u'"matrix_type": "sparse",')
+-            direct_io.write(u'"data": [')
++            direct_io.write('"matrix_type": "sparse",')
++            direct_io.write('"data": [')
+         else:
+-            matrix_type = u'"matrix_type": "sparse",'
+-            data = [u'"data": [']
++            matrix_type = '"matrix_type": "sparse",'
++            data = ['"data": [']
+ 
+         max_row_idx = len(self.ids(axis='observation')) - 1
+         max_col_idx = len(self.ids()) - 1
+-        rows = [u'"rows": [']
++        rows = ['"rows": [']
+         have_written = False
+         for obs_index, obs in enumerate(self.iter(axis='observation')):
+             # i'm crying on the inside
+             if obs_index != max_row_idx:
+-                rows.append(u'{"id": %s, "metadata": %s},' % (dumps(obs[1]),
+-                                                              dumps(obs[2])))
++                rows.append(
++                    f'{{"id": {dumps(obs[1])}, "metadata": {dumps(obs[2])}}},'
++                )
+             else:
+-                rows.append(u'{"id": %s, "metadata": %s}],' % (dumps(obs[1]),
+-                                                               dumps(obs[2])))
++                rows.append(
++                    f'{{"id": {dumps(obs[1])}, "metadata": {dumps(obs[2])}}}],'
++                )
+ 
+             # turns out its a pain to figure out when to place commas. the
+             # simple work around, at the expense of a little memory
+@@ -4692,55 +4676,66 @@ html
+             built_row = []
+             for col_index, val in enumerate(obs[0]):
+                 if float(val) != 0.0:
+-                    built_row.append(u"[%d,%d,%r]" % (obs_index, col_index,
+-                                                      val))
++                    built_row.append(
++                        "[%d,%d,%r]" % (obs_index, col_index, val)
++                    )
+             if built_row:
+                 # if we have written a row already, its safe to add a comma
+                 if have_written:
+                     if direct_io:
+-                        direct_io.write(u',')
++                        direct_io.write(',')
+                     else:
+-                        data.append(u',')
++                        data.append(',')
+                 if direct_io:
+-                    direct_io.write(u','.join(built_row))
++                    direct_io.write(','.join(built_row))
+                 else:
+-                    data.append(u','.join(built_row))
++                    data.append(','.join(built_row))
+ 
+                 have_written = True
+ 
+         # finalize the data block
+         if direct_io:
+-            direct_io.write(u"],")
++            direct_io.write("],")
+         else:
+-            data.append(u"],")
++            data.append("],")
+ 
+         # Fill in details about the columns in the table.
+-        columns = [u'"columns": [']
++        columns = ['"columns": [']
+         for samp_index, samp in enumerate(self.iter()):
+             if samp_index != max_col_idx:
+-                columns.append(u'{"id": %s, "metadata": %s},' % (
++                columns.append('{{"id": {}, "metadata": {}}},'.format(
+                     dumps(samp[1]), dumps(samp[2])))
+             else:
+-                columns.append(u'{"id": %s, "metadata": %s}]' % (
++                columns.append('{{"id": {}, "metadata": {}}}]'.format(
+                     dumps(samp[1]), dumps(samp[2])))
+ 
+-        if rows[0] == u'"rows": [' and len(rows) == 1:
++        if rows[0] == '"rows": [' and len(rows) == 1:
+             # empty table case
+-            rows = [u'"rows": [],']
+-            columns = [u'"columns": []']
++            rows = ['"rows": [],']
++            columns = ['"columns": []']
+ 
+-        rows = u''.join(rows)
+-        columns = u''.join(columns)
++        rows = ''.join(rows)
++        columns = ''.join(columns)
+ 
+         if direct_io:
+             direct_io.write(rows)
+             direct_io.write(columns)
+-            direct_io.write(u'}')
++            direct_io.write('}')
+         else:
+-            return u"{%s}" % ''.join([id_, format_, format_url, matrix_type,
+-                                      generated_by, date, type_,
+-                                      matrix_element_type, shape,
+-                                      u''.join(data), rows, columns])
++            return "{%s}" % ''.join([
++                id_,
++                format_,
++                format_url,
++                matrix_type,
++                generated_by,
++                date,
++                type_,
++                matrix_element_type,
++                shape,
++                ''.join(data),
++                rows,
++                columns,
++            ])
+ 
+     @staticmethod
+     def from_adjacency(lines):
+@@ -5090,7 +5085,7 @@ html
+         >>> with open("result.tsv", "w") as f:
+                 table.to_tsv(direct_io=f)
+         """
+-        return self.delimited_self(u'\t', header_key, header_value,
++        return self.delimited_self('\t', header_key, header_value,
+                                    metadata_formatter,
+                                    observation_column_name,
+                                    direct_io=direct_io)
+@@ -5341,7 +5336,7 @@ def dict_to_sparse(data, dtype=float, shape=None):
+     rows = []
+     cols = []
+     vals = []
+-    for (r, c), v in viewitems(data):
++    for (r, c), v in data.items():
+         rows.append(r)
+         cols.append(c)
+         vals.append(v)
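[Editor's note] The same pattern closes out the table.py changes: on Python 3, `dict.items()` already returns a lazy view, which is all that `future.utils.viewitems` ever provided, so the wrapper is dropped everywhere. For example:

```python
data = {(0, 0): 1.0, (1, 2): 4.0}

# items() is a view object, not a materialized list, on Python 3
for (r, c), v in data.items():
    print(r, c, v)
```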
+diff --git a/biom/tests/test_cli/test_subset_table.py b/biom/tests/test_cli/test_subset_table.py
+index bd38c18..9848a69 100644
+--- a/biom/tests/test_cli/test_subset_table.py
++++ b/biom/tests/test_cli/test_subset_table.py
+@@ -9,7 +9,7 @@
+ import os
+ import unittest
+ 
+-import numpy.testing as npt
++import pytest
+ 
+ from biom.cli.table_subsetter import _subset_table
+ from biom.parse import parse_biom_table
+@@ -55,24 +55,24 @@ class TestSubsetTable(unittest.TestCase):
+             _subset_table(json_table_str=self.biom_str1, hdf5_biom='foo',
+                           axis='sample', ids=['f2', 'f4'])
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_subset_samples_hdf5(self):
+         """Correctly subsets samples in a hdf5 table"""
+         cwd = os.getcwd()
+         if '/' in __file__:
+             os.chdir(__file__.rsplit('/', 1)[0])
+         obs = _subset_table(hdf5_biom='test_data/test.biom', axis='sample',
+-                            ids=[u'Sample1', u'Sample2', u'Sample3'],
++                            ids=['Sample1', 'Sample2', 'Sample3'],
+                             json_table_str=None)
+         os.chdir(cwd)
+         obs = obs[0]
+         self.assertEqual(len(obs.ids()), 3)
+         self.assertEqual(len(obs.ids(axis='observation')), 5)
+-        self.assertTrue(u'Sample1' in obs.ids())
+-        self.assertTrue(u'Sample2' in obs.ids())
+-        self.assertTrue(u'Sample3' in obs.ids())
++        self.assertTrue('Sample1' in obs.ids())
++        self.assertTrue('Sample2' in obs.ids())
++        self.assertTrue('Sample3' in obs.ids())
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_subset_observations_hdf5(self):
+         """Correctly subsets samples in a hdf5 table"""
+         cwd = os.getcwd()
+@@ -80,15 +80,15 @@ class TestSubsetTable(unittest.TestCase):
+             os.chdir(__file__.rsplit('/', 1)[0])
+         obs = _subset_table(hdf5_biom='test_data/test.biom',
+                             axis='observation',
+-                            ids=[u'GG_OTU_1', u'GG_OTU_3', u'GG_OTU_5'],
++                            ids=['GG_OTU_1', 'GG_OTU_3', 'GG_OTU_5'],
+                             json_table_str=None)
+         os.chdir(cwd)
+         obs = obs[0]
+         self.assertEqual(len(obs.ids()), 4)
+         self.assertEqual(len(obs.ids(axis='observation')), 3)
+-        self.assertTrue(u'GG_OTU_1' in obs.ids(axis='observation'))
+-        self.assertTrue(u'GG_OTU_3' in obs.ids(axis='observation'))
+-        self.assertTrue(u'GG_OTU_5' in obs.ids(axis='observation'))
++        self.assertTrue('GG_OTU_1' in obs.ids(axis='observation'))
++        self.assertTrue('GG_OTU_3' in obs.ids(axis='observation'))
++        self.assertTrue('GG_OTU_5' in obs.ids(axis='observation'))
+ 
+ 
+ biom1 = ('{"id": "None","format": "Biological Observation Matrix 1.0.0",'
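[Editor's note] The test changes swap NumPy's nose-era `@npt.dec.skipif(cond, msg=...)` for pytest's marker, which spells the message `reason=` and needs no nose at runtime (the nose-based decorators were deprecated and have since been removed from `numpy.testing`). A minimal sketch with a stand-in flag:

```python
import pytest

HAVE_H5PY = True  # stand-in for biom.util.HAVE_H5PY

@pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
def test_needs_h5py():
    assert True
```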
+diff --git a/biom/tests/test_cli/test_table_converter.py b/biom/tests/test_cli/test_table_converter.py
+index a0a7dc1..af20013 100644
+--- a/biom/tests/test_cli/test_table_converter.py
++++ b/biom/tests/test_cli/test_table_converter.py
+@@ -74,7 +74,7 @@ class TableConverterTests(TestCase):
+         self.assertEqual(len(obs.ids(axis='observation')), 14)
+         self.assertNotEqual(obs.metadata(), None)
+         self.assertNotEqual(obs.metadata(axis='observation'), None)
+-        self.assertEqual(obs.metadata()[obs.index(u'p2', u'sample')],
++        self.assertEqual(obs.metadata()[obs.index('p2', 'sample')],
+                          {'foo': 'c;b;a'})
+         self.assertEqual(obs.metadata()[obs.index('not16S.1', 'sample')],
+                          {'foo': 'b;c;d'})
+@@ -127,41 +127,73 @@ class TableConverterTests(TestCase):
+         obs = load_table(self.output_filepath)
+         exp = Table(np.array([[0., 1.], [6., 6.], [6., 1.],
+                               [1., 4.], [0., 2.]]),
+-                    observation_ids=[u'GG_OTU_1', u'GG_OTU_2', u'GG_OTU_3',
+-                                     u'GG_OTU_4', u'GG_OTU_5'],
+-                    sample_ids=[u'skin', u'gut'],
++                    observation_ids=[
++                        'GG_OTU_1',
++                        'GG_OTU_2',
++                        'GG_OTU_3',
++                        'GG_OTU_4',
++                        'GG_OTU_5',
++                    ],
++                    sample_ids=['skin', 'gut'],
+                     observation_metadata=[
+-                        {u'taxonomy': [u'k__Bacteria', u'p__Proteobacteria',
+-                                       u'c__Gammaproteobacteria',
+-                                       u'o__Enterobacteriales',
+-                                       u'f__Enterobacteriaceae',
+-                                       u'g__Escherichia', u's__']},
+-                        {u'taxonomy': [u'k__Bacteria', u'p__Cyanobacteria',
+-                                       u'c__Nostocophycideae',
+-                                       u'o__Nostocales', u'f__Nostocaceae',
+-                                       u'g__Dolichospermum', u's__']},
+-                        {u'taxonomy': [u'k__Archaea', u'p__Euryarchaeota',
+-                                       u'c__Methanomicrobia',
+-                                       u'o__Methanosarcinales',
+-                                       u'f__Methanosarcinaceae',
+-                                       u'g__Methanosarcina', u's__']},
+-                        {u'taxonomy': [u'k__Bacteria', u'p__Firmicutes',
+-                                       u'c__Clostridia', u'o__Halanaerobiales',
+-                                       u'f__Halanaerobiaceae',
+-                                       u'g__Halanaerobium',
+-                                       u's__Halanaerobiumsaccharolyticum']},
+-                        {u'taxonomy': [u'k__Bacteria', u'p__Proteobacteria',
+-                                       u'c__Gammaproteobacteria',
+-                                       u'o__Enterobacteriales',
+-                                       u'f__Enterobacteriaceae',
+-                                       u'g__Escherichia', u's__']}],
++                        {'taxonomy': [
++                            'k__Bacteria',
++                            'p__Proteobacteria',
++                            'c__Gammaproteobacteria',
++                            'o__Enterobacteriales',
++                            'f__Enterobacteriaceae',
++                            'g__Escherichia',
++                            's__',
++                        ]},
++                        {'taxonomy': [
++                            'k__Bacteria',
++                            'p__Cyanobacteria',
++                            'c__Nostocophycideae',
++                            'o__Nostocales',
++                            'f__Nostocaceae',
++                            'g__Dolichospermum',
++                            's__',
++                        ]},
++                        {'taxonomy': [
++                            'k__Archaea',
++                            'p__Euryarchaeota',
++                            'c__Methanomicrobia',
++                            'o__Methanosarcinales',
++                            'f__Methanosarcinaceae',
++                            'g__Methanosarcina',
++                            's__',
++                        ]},
++                        {'taxonomy': [
++                            'k__Bacteria',
++                            'p__Firmicutes',
++                            'c__Clostridia',
++                            'o__Halanaerobiales',
++                            'f__Halanaerobiaceae',
++                            'g__Halanaerobium',
++                            's__Halanaerobiumsaccharolyticum',
++                        ]},
++                        {'taxonomy': [
++                            'k__Bacteria',
++                            'p__Proteobacteria',
++                            'c__Gammaproteobacteria',
++                            'o__Enterobacteriales',
++                            'f__Enterobacteriaceae',
++                            'g__Escherichia',
++                            's__',
++                        ]}],
+                     sample_metadata=[
+-                        {u'collapsed_ids': [u'Sample4', u'Sample5',
+-                                            u'Sample6']},
+-                        {u'collapsed_ids': [u'Sample1', u'Sample2',
+-                                            u'Sample3']}
+-                        ],
+-                    type=u'OTU table')
++                        {'collapsed_ids': [
++                            'Sample4',
++                            'Sample5',
++                            'Sample6',
++                        ]},
++                        {'collapsed_ids': [
++                            'Sample1',
++                            'Sample2',
++                            'Sample3',
++                        ]}
++                    ],
++                    type='OTU table')
+         self.assertEqual(obs, exp)
+ 
+     def test_json_to_hdf5_collapsed_metadata(self):
+@@ -176,42 +208,51 @@ class TableConverterTests(TestCase):
+                               [0., 0., 1., 4., 0., 2.],
+                               [5., 1., 0., 2., 3., 1.],
+                               [0., 1., 2., 0., 0., 0.]]),
+-                    observation_ids=[u'p__Firmicutes', u'p__Euryarchaeota',
+-                                     u'p__Cyanobacteria',
+-                                     u'p__Proteobacteria'],
+-                    sample_ids=[u'Sample1', u'Sample2', u'Sample3',
+-                                u'Sample4', u'Sample5', u'Sample6'],
++                    observation_ids=[
++                        'p__Firmicutes',
++                        'p__Euryarchaeota',
++                        'p__Cyanobacteria',
++                        'p__Proteobacteria',
++                    ],
++                    sample_ids=[
++                        'Sample1',
++                        'Sample2',
++                        'Sample3',
++                        'Sample4',
++                        'Sample5',
++                        'Sample6',
++                    ],
+                     observation_metadata=[
+-                        {u'collapsed_ids': [u'GG_OTU_4']},
+-                        {u'collapsed_ids': [u'GG_OTU_3']},
+-                        {u'collapsed_ids': [u'GG_OTU_2']},
+-                        {u'collapsed_ids': [u'GG_OTU_1', u'GG_OTU_5']}],
++                        {'collapsed_ids': ['GG_OTU_4']},
++                        {'collapsed_ids': ['GG_OTU_3']},
++                        {'collapsed_ids': ['GG_OTU_2']},
++                        {'collapsed_ids': ['GG_OTU_1', 'GG_OTU_5']}],
+                     sample_metadata=[
+-                        {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                         u'BarcodeSequence': u'CGCTTATCGAGA',
+-                         u'Description': u'human gut',
+-                         u'BODY_SITE': u'gut'},
+-                        {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                         u'BarcodeSequence': u'CATACCAGTAGC',
+-                         u'Description': u'human gut',
+-                         u'BODY_SITE': u'gut'},
+-                        {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                         u'BarcodeSequence': u'CTCTCTACCTGT',
+-                         u'Description': u'human gut',
+-                         u'BODY_SITE': u'gut'},
+-                        {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                         u'BarcodeSequence': u'CTCTCGGCCTGT',
+-                         u'Description': u'human skin',
+-                         u'BODY_SITE': u'skin'},
+-                        {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                         u'BarcodeSequence': u'CTCTCTACCAAT',
+-                         u'Description': u'human skin',
+-                         u'BODY_SITE': u'skin'},
+-                        {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                         u'BarcodeSequence': u'CTAACTACCAAT',
+-                         u'Description': u'human skin',
+-                         u'BODY_SITE': u'skin'}],
+-                    type=u'OTU table')
++                        {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                         'BarcodeSequence': 'CGCTTATCGAGA',
++                         'Description': 'human gut',
++                         'BODY_SITE': 'gut'},
++                        {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                         'BarcodeSequence': 'CATACCAGTAGC',
++                         'Description': 'human gut',
++                         'BODY_SITE': 'gut'},
++                        {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                         'BarcodeSequence': 'CTCTCTACCTGT',
++                         'Description': 'human gut',
++                         'BODY_SITE': 'gut'},
++                        {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                         'BarcodeSequence': 'CTCTCGGCCTGT',
++                         'Description': 'human skin',
++                         'BODY_SITE': 'skin'},
++                        {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                         'BarcodeSequence': 'CTCTCTACCAAT',
++                         'Description': 'human skin',
++                         'BODY_SITE': 'skin'},
++                        {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                         'BarcodeSequence': 'CTAACTACCAAT',
++                         'Description': 'human skin',
++                         'BODY_SITE': 'skin'}],
++                    type='OTU table')
+ 
+         self.assertEqual(obs, exp)
+ 
+diff --git a/biom/tests/test_cli/test_validate_table.py b/biom/tests/test_cli/test_validate_table.py
+index cf7558f..a454d53 100644
+--- a/biom/tests/test_cli/test_validate_table.py
++++ b/biom/tests/test_cli/test_validate_table.py
+@@ -1,5 +1,4 @@
+ #!/usr/bin/env python
+-# -*- coding: utf-8 -*-
+ # -----------------------------------------------------------------------------
+ # Copyright (c) 2011-2017, The BIOM Format Development Team.
+ #
+@@ -23,7 +22,7 @@ from unittest import TestCase, main
+ from shutil import copy
+ 
+ import numpy as np
+-import numpy.testing as npt
++import pytest
+ 
+ from biom.cli.table_validator import TableValidator
+ from biom.util import HAVE_H5PY
+@@ -56,7 +55,7 @@ class TableValidatorTests(TestCase):
+         for f in self.to_remove:
+             os.remove(f)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_valid_hdf5_metadata_v210(self):
+         exp = {'valid_table': True, 'report_lines': []}
+         obs = self.cmd(table=self.hdf5_file_valid,
+@@ -66,11 +65,11 @@ class TableValidatorTests(TestCase):
+                        format_version='2.1')
+         self.assertEqual(obs, exp)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_valid_hdf5_metadata_v200(self):
+         pass  # omitting, not a direct way to test at this time using the repo
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_valid_hdf5(self):
+         """Test a valid HDF5 table"""
+         exp = {'valid_table': True,
+@@ -79,7 +78,7 @@ class TableValidatorTests(TestCase):
+         obs = self.cmd(table=self.hdf5_file_valid)
+         self.assertEqual(obs, exp)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_invalid_hdf5(self):
+         """Test an invalid HDF5 table"""
+         exp = {'valid_table': False,
+@@ -290,7 +289,7 @@ class TableValidatorTests(TestCase):
+         obs = self.cmd._valid_matrix_element_type(table)
+         self.assertTrue(len(obs) == 0)
+ 
+-        table['matrix_element_type'] = u'int'
++        table['matrix_element_type'] = 'int'
+         obs = self.cmd._valid_matrix_element_type(table)
+         self.assertTrue(len(obs) == 0)
+ 
+@@ -298,7 +297,7 @@ class TableValidatorTests(TestCase):
+         obs = self.cmd._valid_matrix_element_type(table)
+         self.assertTrue(len(obs) == 0)
+ 
+-        table['matrix_element_type'] = u'float'
++        table['matrix_element_type'] = 'float'
+         obs = self.cmd._valid_matrix_element_type(table)
+         self.assertTrue(len(obs) == 0)
+ 
+@@ -306,7 +305,7 @@ class TableValidatorTests(TestCase):
+         obs = self.cmd._valid_matrix_element_type(table)
+         self.assertTrue(len(obs) == 0)
+ 
+-        table['matrix_element_type'] = u'str'
++        table['matrix_element_type'] = 'str'
+         obs = self.cmd._valid_matrix_element_type(table)
+         self.assertTrue(len(obs) == 0)
+ 
+@@ -314,7 +313,7 @@ class TableValidatorTests(TestCase):
+         obs = self.cmd._valid_matrix_element_type(table)
+         self.assertTrue(len(obs) > 0)
+ 
+-        table['matrix_element_type'] = u'asd'
++        table['matrix_element_type'] = 'asd'
+         obs = self.cmd._valid_matrix_element_type(table)
+         self.assertTrue(len(obs) > 0)
+ 
+diff --git a/biom/tests/test_err.py b/biom/tests/test_err.py
+index 339ecd5..67ae1af 100644
+--- a/biom/tests/test_err.py
++++ b/biom/tests/test_err.py
+@@ -102,9 +102,9 @@ class ErrorProfileTests(TestCase):
+ 
+     def test_state(self):
+         self.ep.state = {'all': 'ignore'}
+-        self.assertEqual(set(self.ep._state.values()), set(['ignore']))
++        self.assertEqual(set(self.ep._state.values()), {'ignore'})
+         self.ep.state = {'empty': 'call'}
+-        self.assertEqual(set(self.ep._state.values()), set(['ignore', 'call']))
++        self.assertEqual(set(self.ep._state.values()), {'ignore', 'call'})
+         self.assertEqual(self.ep.state['empty'], 'call')
+ 
+         with self.assertRaises(KeyError):
+diff --git a/biom/tests/test_parse.py b/biom/tests/test_parse.py
+index 9d4a77a..28da0b6 100644
+--- a/biom/tests/test_parse.py
++++ b/biom/tests/test_parse.py
+@@ -15,6 +15,7 @@ from unittest import TestCase, main
+ 
+ import numpy as np
+ import numpy.testing as npt
++import pytest
+ 
+ from biom.parse import (generatedby, MetadataMap, parse_biom_table, parse_uc,
+                         load_table)
+@@ -170,7 +171,7 @@ class ParseTests(TestCase):
+         self.assertEqual(tab.metadata(), None)
+         self.assertEqual(tab.metadata(axis='observation'), None)
+ 
+-        tablestring = u'''{
++        tablestring = '''{
+             "id":null,
+             "format": "Biological Observation Matrix 0.9.1-dev",
+             "format_url": "http://biom-format.org",
+@@ -274,7 +275,7 @@ class ParseTests(TestCase):
+         obs = Table.from_adjacency(''.join(lines))
+         self.assertEqual(obs, exp)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_parse_biom_table_hdf5(self):
+         """Make sure we can parse a HDF5 table through the same loader"""
+         cwd = os.getcwd()
+@@ -283,7 +284,7 @@ class ParseTests(TestCase):
+         Table.from_hdf5(h5py.File('test_data/test.biom'))
+         os.chdir(cwd)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_load_table_filepath(self):
+         cwd = os.getcwd()
+         if '/' in __file__[1:]:
+@@ -291,7 +292,7 @@ class ParseTests(TestCase):
+         load_table('test_data/test.biom')
+         os.chdir(cwd)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_load_table_inmemory(self):
+         cwd = os.getcwd()
+         if '/' in __file__[1:]:
+@@ -337,7 +338,7 @@ class ParseTests(TestCase):
+         t_json = parse_biom_table(t_json_stringio)
+         self.assertEqual(t, t_json)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_parse_biom_table_with_hdf5(self):
+         """tests for parse_biom_table when we have h5py"""
+         # We will round-trip the HDF5 file to several different formats, and
+@@ -448,7 +449,7 @@ K00507	0.0	0.0	Metabolism; Lipid Metabolism; Biosynthesis of unsaturated fatt\
+ y acids|Organismal Systems; Endocrine System; PPAR signaling pathway
+ """
+ 
+-biom_minimal_sparse = u"""
++biom_minimal_sparse = """
+     {
+         "id":null,
+         "format": "Biological Observation Matrix v0.9",
+diff --git a/biom/tests/test_table.py b/biom/tests/test_table.py
+index fe33b69..2403c17 100644
+--- a/biom/tests/test_table.py
++++ b/biom/tests/test_table.py
+@@ -12,7 +12,6 @@ from tempfile import NamedTemporaryFile
+ from unittest import TestCase, main
+ from io import StringIO
+ 
+-import six
+ import numpy.testing as npt
+ import numpy as np
+ from scipy.sparse import lil_matrix, csr_matrix, csc_matrix
+@@ -628,12 +627,12 @@ class TableTests(TestCase):
+             obs = general_parser(test)
+             self.assertEqual(obs, exp)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_from_hdf5_non_hdf5_file_or_group(self):
+         with self.assertRaises(ValueError):
+             Table.from_hdf5(10)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_from_hdf5_empty_md(self):
+         """Parse a hdf5 formatted BIOM table w/o metadata"""
+         cwd = os.getcwd()
+@@ -645,10 +644,10 @@ class TableTests(TestCase):
+         self.assertTrue(t._sample_metadata is None)
+         self.assertTrue(t._observation_metadata is None)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_from_hdf5_custom_parsers(self):
+         def parser(item):
+-            return item.upper()
++            return general_parser(item).upper()
+         parse_fs = {'BODY_SITE': parser}
+ 
+         cwd = os.getcwd()
+@@ -661,13 +660,13 @@ class TableTests(TestCase):
+         for m in t.metadata():
+             self.assertIn(m['BODY_SITE'], (b'GUT', b'SKIN'))
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_from_hdf5_issue_731(self):
+         t = Table.from_hdf5(h5py.File('test_data/test.biom'))
+         self.assertTrue(isinstance(t.table_id, str))
+         self.assertTrue(isinstance(t.type, str))
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_from_hdf5(self):
+         """Parse a hdf5 formatted BIOM table"""
+         cwd = os.getcwd()
+@@ -676,72 +675,83 @@ class TableTests(TestCase):
+         t = Table.from_hdf5(h5py.File('test_data/test.biom'))
+         os.chdir(cwd)
+ 
+-        npt.assert_equal(t.ids(), (u'Sample1', u'Sample2', u'Sample3',
+-                                   u'Sample4', u'Sample5', u'Sample6'))
++        npt.assert_equal(t.ids(), ('Sample1', 'Sample2', 'Sample3',
++                                   'Sample4', 'Sample5', 'Sample6'))
+         npt.assert_equal(t.ids(axis='observation'),
+-                         (u'GG_OTU_1', u'GG_OTU_2', u'GG_OTU_3',
+-                          u'GG_OTU_4', u'GG_OTU_5'))
+-        exp_obs_md = ({u'taxonomy': [u'k__Bacteria',
+-                                     u'p__Proteobacteria',
+-                                     u'c__Gammaproteobacteria',
+-                                     u'o__Enterobacteriales',
+-                                     u'f__Enterobacteriaceae',
+-                                     u'g__Escherichia',
+-                                     u's__']},
+-                      {u'taxonomy': [u'k__Bacteria',
+-                                     u'p__Cyanobacteria',
+-                                     u'c__Nostocophycideae',
+-                                     u'o__Nostocales',
+-                                     u'f__Nostocaceae',
+-                                     u'g__Dolichospermum',
+-                                     u's__']},
+-                      {u'taxonomy': [u'k__Archaea',
+-                                     u'p__Euryarchaeota',
+-                                     u'c__Methanomicrobia',
+-                                     u'o__Methanosarcinales',
+-                                     u'f__Methanosarcinaceae',
+-                                     u'g__Methanosarcina',
+-                                     u's__']},
+-                      {u'taxonomy': [u'k__Bacteria',
+-                                     u'p__Firmicutes',
+-                                     u'c__Clostridia',
+-                                     u'o__Halanaerobiales',
+-                                     u'f__Halanaerobiaceae',
+-                                     u'g__Halanaerobium',
+-                                     u's__Halanaerobiumsaccharolyticum']},
+-                      {u'taxonomy': [u'k__Bacteria',
+-                                     u'p__Proteobacteria',
+-                                     u'c__Gammaproteobacteria',
+-                                     u'o__Enterobacteriales',
+-                                     u'f__Enterobacteriaceae',
+-                                     u'g__Escherichia',
+-                                     u's__']})
++                         ('GG_OTU_1', 'GG_OTU_2', 'GG_OTU_3',
++                          'GG_OTU_4', 'GG_OTU_5'))
++        exp_obs_md = (
++            {'taxonomy': [
++                'k__Bacteria',
++                'p__Proteobacteria',
++                'c__Gammaproteobacteria',
++                'o__Enterobacteriales',
++                'f__Enterobacteriaceae',
++                'g__Escherichia',
++                's__',
++            ]},
++            {'taxonomy': [
++                'k__Bacteria',
++                'p__Cyanobacteria',
++                'c__Nostocophycideae',
++                'o__Nostocales',
++                'f__Nostocaceae',
++                'g__Dolichospermum',
++                's__',
++            ]},
++            {'taxonomy': [
++                'k__Archaea',
++                'p__Euryarchaeota',
++                'c__Methanomicrobia',
++                'o__Methanosarcinales',
++                'f__Methanosarcinaceae',
++                'g__Methanosarcina',
++                's__',
++            ]},
++            {'taxonomy': [
++                'k__Bacteria',
++                'p__Firmicutes',
++                'c__Clostridia',
++                'o__Halanaerobiales',
++                'f__Halanaerobiaceae',
++                'g__Halanaerobium',
++                's__Halanaerobiumsaccharolyticum',
++            ]},
++            {'taxonomy': [
++                'k__Bacteria',
++                'p__Proteobacteria',
++                'c__Gammaproteobacteria',
++                'o__Enterobacteriales',
++                'f__Enterobacteriaceae',
++                'g__Escherichia',
++                's__',
++            ]})
+         self.assertEqual(t._observation_metadata, exp_obs_md)
+ 
+-        exp_samp_md = ({u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                        u'BarcodeSequence': u'CGCTTATCGAGA',
+-                        u'Description': u'human gut',
+-                        u'BODY_SITE': u'gut'},
+-                       {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                        u'BarcodeSequence': u'CATACCAGTAGC',
+-                        u'Description': u'human gut',
+-                        u'BODY_SITE': u'gut'},
+-                       {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                        u'BarcodeSequence': u'CTCTCTACCTGT',
+-                        u'Description': u'human gut',
+-                        u'BODY_SITE': u'gut'},
+-                       {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                        u'BarcodeSequence': u'CTCTCGGCCTGT',
+-                        u'Description': u'human skin',
+-                        u'BODY_SITE': u'skin'},
+-                       {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                        u'BarcodeSequence': u'CTCTCTACCAAT',
+-                        u'Description': u'human skin',
+-                        u'BODY_SITE': u'skin'},
+-                       {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                        u'BarcodeSequence': u'CTAACTACCAAT',
+-                        u'Description': u'human skin',
+-                        u'BODY_SITE': u'skin'})
++        exp_samp_md = ({'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                        'BarcodeSequence': 'CGCTTATCGAGA',
++                        'Description': 'human gut',
++                        'BODY_SITE': 'gut'},
++                       {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                        'BarcodeSequence': 'CATACCAGTAGC',
++                        'Description': 'human gut',
++                        'BODY_SITE': 'gut'},
++                       {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                        'BarcodeSequence': 'CTCTCTACCTGT',
++                        'Description': 'human gut',
++                        'BODY_SITE': 'gut'},
++                       {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                        'BarcodeSequence': 'CTCTCGGCCTGT',
++                        'Description': 'human skin',
++                        'BODY_SITE': 'skin'},
++                       {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                        'BarcodeSequence': 'CTCTCTACCAAT',
++                        'Description': 'human skin',
++                        'BODY_SITE': 'skin'},
++                       {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                        'BarcodeSequence': 'CTAACTACCAAT',
++                        'Description': 'human skin',
++                        'BODY_SITE': 'skin'})
+         self.assertEqual(t._sample_metadata, exp_samp_md)
+ 
+         exp = [np.array([0., 0., 1., 0., 0., 0.]),
+@@ -751,7 +761,7 @@ class TableTests(TestCase):
+                np.array([0., 1., 1., 0., 0., 0.])]
+         npt.assert_equal(list(t.iter_data(axis="observation")), exp)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_from_hdf5_sample_subset_no_metadata(self):
+         """Parse a sample subset of a hdf5 formatted BIOM table"""
+         samples = [b'Sample2', b'Sample4', b'Sample6']
+@@ -779,10 +789,10 @@ class TableTests(TestCase):
+                np.array([1., 0., 0.])]
+         npt.assert_equal(list(t.iter_data(axis='observation')), exp)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_from_hdf5_sample_subset(self):
+         """Parse a sample subset of a hdf5 formatted BIOM table"""
+-        samples = [u'Sample2', u'Sample4', u'Sample6']
++        samples = ['Sample2', 'Sample4', 'Sample6']
+ 
+         cwd = os.getcwd()
+         if '/' in __file__:
+@@ -790,51 +800,60 @@ class TableTests(TestCase):
+         t = Table.from_hdf5(h5py.File('test_data/test.biom'), ids=samples)
+         os.chdir(cwd)
+ 
+-        npt.assert_equal(t.ids(), [u'Sample2', u'Sample4', u'Sample6'])
++        npt.assert_equal(t.ids(), ['Sample2', 'Sample4', 'Sample6'])
+         npt.assert_equal(t.ids(axis='observation'),
+-                         [u'GG_OTU_2', u'GG_OTU_3', u'GG_OTU_4', u'GG_OTU_5'])
+-        exp_obs_md = ({u'taxonomy': [u'k__Bacteria',
+-                                     u'p__Cyanobacteria',
+-                                     u'c__Nostocophycideae',
+-                                     u'o__Nostocales',
+-                                     u'f__Nostocaceae',
+-                                     u'g__Dolichospermum',
+-                                     u's__']},
+-                      {u'taxonomy': [u'k__Archaea',
+-                                     u'p__Euryarchaeota',
+-                                     u'c__Methanomicrobia',
+-                                     u'o__Methanosarcinales',
+-                                     u'f__Methanosarcinaceae',
+-                                     u'g__Methanosarcina',
+-                                     u's__']},
+-                      {u'taxonomy': [u'k__Bacteria',
+-                                     u'p__Firmicutes',
+-                                     u'c__Clostridia',
+-                                     u'o__Halanaerobiales',
+-                                     u'f__Halanaerobiaceae',
+-                                     u'g__Halanaerobium',
+-                                     u's__Halanaerobiumsaccharolyticum']},
+-                      {u'taxonomy': [u'k__Bacteria',
+-                                     u'p__Proteobacteria',
+-                                     u'c__Gammaproteobacteria',
+-                                     u'o__Enterobacteriales',
+-                                     u'f__Enterobacteriaceae',
+-                                     u'g__Escherichia',
+-                                     u's__']})
++                         ['GG_OTU_2', 'GG_OTU_3', 'GG_OTU_4', 'GG_OTU_5'])
++        exp_obs_md = (
++            {'taxonomy': [
++                'k__Bacteria',
++                'p__Cyanobacteria',
++                'c__Nostocophycideae',
++                'o__Nostocales',
++                'f__Nostocaceae',
++                'g__Dolichospermum',
++                's__',
++            ]},
++            {'taxonomy': [
++                'k__Archaea',
++                'p__Euryarchaeota',
++                'c__Methanomicrobia',
++                'o__Methanosarcinales',
++                'f__Methanosarcinaceae',
++                'g__Methanosarcina',
++                's__',
++            ]},
++            {'taxonomy': [
++                'k__Bacteria',
++                'p__Firmicutes',
++                'c__Clostridia',
++                'o__Halanaerobiales',
++                'f__Halanaerobiaceae',
++                'g__Halanaerobium',
++                's__Halanaerobiumsaccharolyticum',
++            ]},
++            {'taxonomy': [
++                'k__Bacteria',
++                'p__Proteobacteria',
++                'c__Gammaproteobacteria',
++                'o__Enterobacteriales',
++                'f__Enterobacteriaceae',
++                'g__Escherichia',
++                's__',
++            ]})
+         self.assertEqual(t._observation_metadata, exp_obs_md)
+ 
+-        exp_samp_md = ({u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                        u'BarcodeSequence': u'CATACCAGTAGC',
+-                        u'Description': u'human gut',
+-                        u'BODY_SITE': u'gut'},
+-                       {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                        u'BarcodeSequence': u'CTCTCGGCCTGT',
+-                        u'Description': u'human skin',
+-                        u'BODY_SITE': u'skin'},
+-                       {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                        u'BarcodeSequence': u'CTAACTACCAAT',
+-                        u'Description': u'human skin',
+-                        u'BODY_SITE': u'skin'})
++        exp_samp_md = ({'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                        'BarcodeSequence': 'CATACCAGTAGC',
++                        'Description': 'human gut',
++                        'BODY_SITE': 'gut'},
++                       {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                        'BarcodeSequence': 'CTCTCGGCCTGT',
++                        'Description': 'human skin',
++                        'BODY_SITE': 'skin'},
++                       {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                        'BarcodeSequence': 'CTAACTACCAAT',
++                        'Description': 'human skin',
++                        'BODY_SITE': 'skin'})
+         self.assertEqual(t._sample_metadata, exp_samp_md)
+ 
+         exp = [np.array([1., 2., 1.]),
+@@ -843,7 +862,7 @@ class TableTests(TestCase):
+                np.array([1., 0., 0.])]
+         npt.assert_equal(list(t.iter_data(axis='observation')), exp)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_from_hdf5_observation_subset_no_metadata(self):
+         """Parse a observation subset of a hdf5 formatted BIOM table"""
+         observations = [b'GG_OTU_1', b'GG_OTU_3', b'GG_OTU_5']
+@@ -871,10 +890,10 @@ class TableTests(TestCase):
+                np.array([0, 1., 1., 0., 0, 0.])]
+         npt.assert_equal(list(t.iter_data(axis='observation')), exp)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_from_hdf5_observation_subset(self):
+         """Parse a observation subset of a hdf5 formatted BIOM table"""
+-        observations = [u'GG_OTU_1', u'GG_OTU_3', u'GG_OTU_5']
++        observations = ['GG_OTU_1', 'GG_OTU_3', 'GG_OTU_5']
+ 
+         cwd = os.getcwd()
+         if '/' in __file__:
+@@ -883,49 +902,56 @@ class TableTests(TestCase):
+                             ids=observations, axis='observation')
+         os.chdir(cwd)
+ 
+-        npt.assert_equal(t.ids(), [u'Sample2', u'Sample3', u'Sample4',
+-                                   u'Sample6'])
++        npt.assert_equal(t.ids(), ['Sample2', 'Sample3', 'Sample4',
++                                   'Sample6'])
+         npt.assert_equal(t.ids(axis='observation'),
+-                         [u'GG_OTU_1', u'GG_OTU_3', u'GG_OTU_5'])
+-        exp_obs_md = ({u'taxonomy': [u'k__Bacteria',
+-                                     u'p__Proteobacteria',
+-                                     u'c__Gammaproteobacteria',
+-                                     u'o__Enterobacteriales',
+-                                     u'f__Enterobacteriaceae',
+-                                     u'g__Escherichia',
+-                                     u's__']},
+-                      {u'taxonomy': [u'k__Archaea',
+-                                     u'p__Euryarchaeota',
+-                                     u'c__Methanomicrobia',
+-                                     u'o__Methanosarcinales',
+-                                     u'f__Methanosarcinaceae',
+-                                     u'g__Methanosarcina',
+-                                     u's__']},
+-                      {u'taxonomy': [u'k__Bacteria',
+-                                     u'p__Proteobacteria',
+-                                     u'c__Gammaproteobacteria',
+-                                     u'o__Enterobacteriales',
+-                                     u'f__Enterobacteriaceae',
+-                                     u'g__Escherichia',
+-                                     u's__']})
++                         ['GG_OTU_1', 'GG_OTU_3', 'GG_OTU_5'])
++        exp_obs_md = (
++            {'taxonomy': [
++                'k__Bacteria',
++                'p__Proteobacteria',
++                'c__Gammaproteobacteria',
++                'o__Enterobacteriales',
++                'f__Enterobacteriaceae',
++                'g__Escherichia',
++                's__',
++            ]},
++            {'taxonomy': [
++                'k__Archaea',
++                'p__Euryarchaeota',
++                'c__Methanomicrobia',
++                'o__Methanosarcinales',
++                'f__Methanosarcinaceae',
++                'g__Methanosarcina',
++                's__',
++            ]},
++            {'taxonomy': [
++                'k__Bacteria',
++                'p__Proteobacteria',
++                'c__Gammaproteobacteria',
++                'o__Enterobacteriales',
++                'f__Enterobacteriaceae',
++                'g__Escherichia',
++                's__',
++            ]})
+         self.assertEqual(t._observation_metadata, exp_obs_md)
+ 
+-        exp_samp_md = ({u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                        u'BarcodeSequence': u'CATACCAGTAGC',
+-                        u'Description': u'human gut',
+-                        u'BODY_SITE': u'gut'},
+-                       {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                        u'BarcodeSequence': u'CTCTCTACCTGT',
+-                        u'Description': u'human gut',
+-                        u'BODY_SITE': u'gut'},
+-                       {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                        u'BarcodeSequence': u'CTCTCGGCCTGT',
+-                        u'Description': u'human skin',
+-                        u'BODY_SITE': u'skin'},
+-                       {u'LinkerPrimerSequence': u'CATGCTGCCTCCCGTAGGAGT',
+-                        u'BarcodeSequence': u'CTAACTACCAAT',
+-                        u'Description': u'human skin',
+-                        u'BODY_SITE': u'skin'})
++        exp_samp_md = ({'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                        'BarcodeSequence': 'CATACCAGTAGC',
++                        'Description': 'human gut',
++                        'BODY_SITE': 'gut'},
++                       {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                        'BarcodeSequence': 'CTCTCTACCTGT',
++                        'Description': 'human gut',
++                        'BODY_SITE': 'gut'},
++                       {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                        'BarcodeSequence': 'CTCTCGGCCTGT',
++                        'Description': 'human skin',
++                        'BODY_SITE': 'skin'},
++                       {'LinkerPrimerSequence': 'CATGCTGCCTCCCGTAGGAGT',
++                        'BarcodeSequence': 'CTAACTACCAAT',
++                        'Description': 'human skin',
++                        'BODY_SITE': 'skin'})
+         self.assertEqual(t._sample_metadata, exp_samp_md)
+ 
+         exp = [np.array([0., 1., 0., 0.]),
+@@ -933,7 +959,7 @@ class TableTests(TestCase):
+                np.array([1., 1., 0., 0.])]
+         npt.assert_equal(list(t.iter_data(axis='observation')), exp)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_from_hdf5_subset_error(self):
+         """hdf5 biom table parse throws error with invalid parameters"""
+         cwd = os.getcwd()
+@@ -952,7 +978,7 @@ class TableTests(TestCase):
+                             axis='observation')
+         os.chdir(cwd)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_from_hdf5_empty_table(self):
+         """HDF5 biom parse successfully loads an empty table"""
+         cwd = os.getcwd()
+@@ -967,7 +993,7 @@ class TableTests(TestCase):
+         self.assertEqual(t._sample_metadata, None)
+         npt.assert_equal(list(t.iter_data(axis='observation')), [])
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_to_hdf5_empty_table(self):
+         """Successfully writes an empty OTU table in HDF5 format"""
+         # Create an empty OTU table
+@@ -977,7 +1003,7 @@ class TableTests(TestCase):
+             t.to_hdf5(h5, 'tests')
+             h5.close()
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_to_hdf5_empty_table_bug_619(self):
+         """Successfully writes an empty OTU table in HDF5 format"""
+         t = example_table.filter({}, axis='observation', inplace=False)
+@@ -992,7 +1018,7 @@ class TableTests(TestCase):
+             t.to_hdf5(h5, 'tests')
+             h5.close()
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_to_hdf5_missing_metadata_observation(self):
+         # exercises a vlen_list
+         t = Table(np.array([[0, 1], [2, 3]]), ['a', 'b'], ['c', 'd'],
+@@ -1007,8 +1033,8 @@ class TableTests(TestCase):
+                          ({'taxonomy': None},
+                           {'taxonomy': ['foo', 'baz']}))
+ 
+-    #@npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
+-    @npt.dec.skipif(False is False, msg='This test fails under Debian and is ignored as long this is not clarified')
++    #@pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
++    @pytest.mark.skip(reason='This test fails under Debian and is ignored as long as this is not clarified')
+     def test_to_hdf5_missing_metadata_sample(self):
+         # exercises general formatter
+         t = Table(np.array([[0, 1], [2, 3]]), ['a', 'b'], ['c', 'd'], None,
+@@ -1023,7 +1049,7 @@ class TableTests(TestCase):
+                          ({'dat': ''},
+                           {'dat': 'foo'}))
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_to_hdf5_inconsistent_metadata_categories_observation(self):
+         t = Table(np.array([[0, 1], [2, 3]]), ['a', 'b'], ['c', 'd'],
+                   [{'taxonomy_A': 'foo; bar'},
+@@ -1031,15 +1057,11 @@ class TableTests(TestCase):
+ 
+         with NamedTemporaryFile() as tmpfile:
+             with h5py.File(tmpfile.name, 'w') as h5:
+-                if six.PY3:
+-                    with self.assertRaisesRegex(ValueError,
+-                                                'inconsistent metadata'):
+-                        t.to_hdf5(h5, 'tests')
+-                else:
+-                    with self.assertRaises(ValueError):
+-                        t.to_hdf5(h5, 'tests')
+-
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++                with self.assertRaisesRegex(ValueError,
++                                            'inconsistent metadata'):
++                    t.to_hdf5(h5, 'tests')
++
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_to_hdf5_inconsistent_metadata_categories_sample(self):
+         t = Table(np.array([[0, 1], [2, 3]]), ['a', 'b'], ['c', 'd'],
+                   None,
+@@ -1048,15 +1070,11 @@ class TableTests(TestCase):
+ 
+         with NamedTemporaryFile() as tmpfile:
+             with h5py.File(tmpfile.name, 'w') as h5:
+-                if six.PY3:
+-                    with self.assertRaisesRegex(ValueError,
+-                                                'inconsistent metadata'):
+-                        t.to_hdf5(h5, 'tests')
+-                else:
+-                    with self.assertRaises(ValueError):
+-                        t.to_hdf5(h5, 'tests')
+-
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++                with self.assertRaisesRegex(ValueError,
++                                            'inconsistent metadata'):
++                    t.to_hdf5(h5, 'tests')
++
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_to_hdf5_malformed_taxonomy(self):
+         t = Table(np.array([[0, 1], [2, 3]]), ['a', 'b'], ['c', 'd'],
+                   [{'taxonomy': 'foo; bar'},
+@@ -1070,7 +1088,7 @@ class TableTests(TestCase):
+                          ({'taxonomy': ['foo', 'bar']},
+                           {'taxonomy': ['foo', 'baz']}))
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_to_hdf5_general_fallback_to_list(self):
+         st_rich = Table(self.vals,
+                         ['1', '2'], ['a', 'b'],
+@@ -1081,7 +1099,7 @@ class TableTests(TestCase):
+             h5 = h5py.File(tmpfile.name, 'w')
+             st_rich.to_hdf5(h5, 'tests')
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_to_hdf5_custom_formatters(self):
+         self.st_rich = Table(self.vals,
+                              ['1', '2'], ['a', 'b'],
+@@ -1117,7 +1135,7 @@ class TableTests(TestCase):
+                 self.assertEqual(m1['barcode'].lower(), m2['barcode'])
+             h5.close()
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_to_hdf5(self):
+         """Write a file"""
+         with NamedTemporaryFile() as tmpfile:
+@@ -3013,7 +3031,7 @@ class SparseTableTests(TestCase):
+         """
+         a = np.array([[2, 1, 2, 1, 8, 6, 3, 3, 5, 5], ]).T
+         dt = Table(data=a, sample_ids=['S1', ],
+-                   observation_ids=['OTU{:02d}'.format(i) for i in range(10)])
++                   observation_ids=[f'OTU{i:02d}' for i in range(10)])
+         actual = set()
+         for i in range(1000):
+             obs = dt.subsample(35)
+@@ -3031,7 +3049,7 @@ class SparseTableTests(TestCase):
+         """
+         a = np.array([[2, 1, 2, 1, 8, 6, 3, 3, 5, 5], ]).T
+         dt = Table(data=a, sample_ids=['S1', ],
+-                   observation_ids=['OTU{:02d}'.format(i) for i in range(10)])
++                   observation_ids=[f'OTU{i:02d}' for i in range(10)])
+         actual = set()
+         for i in range(1000):
+             obs = dt.subsample(35, with_replacement=True)
+@@ -3361,7 +3379,7 @@ class SparseTableTests(TestCase):
+ 
+         # two partitions, (a, c, e) and (b, d, f)
+         def partition_f(id_, md):
+-            return id_ in set(['b', 'd', 'f'])
++            return id_ in {'b', 'd', 'f'}
+ 
+         def collapse_f(t, axis):
+             return np.asarray([np.median(v) for v in t.iter_data(dense=True)])
+@@ -3934,13 +3952,9 @@ class SparseTableTests(TestCase):
+     def test_extract_data_from_tsv_badvalue_complaint(self):
+         tsv = ['#OTU ID\ta\tb', '1\t2\t3', '2\tfoo\t6']
+ 
+-        if six.PY3:
+-            msg = "Invalid value on line 2, column 1, value foo"
+-            with self.assertRaisesRegex(TypeError, msg):
+-                Table._extract_data_from_tsv(tsv, dtype=int)
+-        else:
+-            with self.assertRaises(TypeError):
+-                Table._extract_data_from_tsv(tsv, dtype=int)
++        msg = "Invalid value on line 2, column 1, value foo"
++        with self.assertRaisesRegex(TypeError, msg):
++            Table._extract_data_from_tsv(tsv, dtype=int)
+ 
+     def test_bin_samples_by_metadata(self):
+         """Yield tables binned by sample metadata"""
+@@ -4252,7 +4266,7 @@ class SupportTests2(TestCase):
+         self.assertEqual((obs != exp).sum(), 0)
+ 
+ 
+-legacy_otu_table1 = u"""# some comment goes here
++legacy_otu_table1 = """# some comment goes here
+ #OTU id\tFing\tKey\tNA\tConsensus Lineage
+ 0\t19111\t44536\t42 \tBacteria; Actinobacteria; Actinobacteridae; Propioniba\
+ cterineae; Propionibacterium
+@@ -4265,7 +4279,7 @@ ae; Corynebacteriaceae
+ aphylococcaceae
+ 4\t589\t2074\t34\tBacteria; Cyanobacteria; Chloroplasts; vectors
+ """
+-legacy_otu_table_bad_metadata = u"""# some comment goes here
++legacy_otu_table_bad_metadata = """# some comment goes here
+ #OTU id\tFing\tKey\tNA\tConsensus Lineage
+ 0\t19111\t44536\t42 \t
+ 1\t1216\t3500\t6\tBacteria; Firmicutes; Alicyclobacillaceae; Bacilli; La\
+@@ -4280,7 +4294,7 @@ extract_tsv_bug = """#OTU ID	s1	s2	taxonomy
+ 1	123	32\t
+ 2	315	3	k__test;p__test
+ 3	0	22	k__test;p__test"""
+-otu_table1 = u"""# Some comment
++otu_table1 = """# Some comment
+ #OTU ID\tFing\tKey\tNA\tConsensus Lineage
+ 0\t19111\t44536\t42\tBacteria; Actinobacteria; Actinobacteridae; \
+ Propionibacterineae; Propionibacterium
+diff --git a/biom/tests/test_util.py b/biom/tests/test_util.py
+index fa0ebff..e0eba37 100644
+--- a/biom/tests/test_util.py
++++ b/biom/tests/test_util.py
+@@ -1,5 +1,4 @@
+ #!/usr/bin/env python
+-# -*- coding: utf-8 -*-
+ # -----------------------------------------------------------------------------
+ # Copyright (c) 2011-2017, The BIOM Format Development Team.
+ #
+@@ -15,6 +14,7 @@ from unittest import TestCase, main
+ 
+ import numpy as np
+ import numpy.testing as npt
++import pytest
+ 
+ from biom.table import Table
+ from biom.parse import parse_biom_table, load_table
+@@ -252,7 +252,7 @@ class UtilTests(TestCase):
+         tmp_f.write('foo\n')
+         tmp_f.flush()
+ 
+-        obs = safe_md5(open(tmp_f.name, 'r'))
++        obs = safe_md5(open(tmp_f.name))
+         self.assertEqual(obs, exp)
+ 
+         obs = safe_md5(['foo\n'])
+@@ -261,7 +261,7 @@ class UtilTests(TestCase):
+         # unsupported type raises TypeError
+         self.assertRaises(TypeError, safe_md5, 42)
+ 
+-    @npt.dec.skipif(HAVE_H5PY is False, msg='H5PY is not installed')
++    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
+     def test_biom_open_hdf5(self):
+         with biom_open(get_data_path('test.biom')) as f:
+             self.assertTrue(isinstance(f, h5py.File))
+@@ -277,7 +277,7 @@ class UtilTests(TestCase):
+                 pass
+         self.assertTrue("is empty and can't be parsed" in str(e.exception))
+ 
+-    @npt.dec.skipif(HAVE_H5PY, msg='Can only be tested without H5PY')
++    @pytest.mark.skipif(HAVE_H5PY, reason='Can only be tested without H5PY')
+     def test_biom_open_hdf5_no_h5py(self):
+         with self.assertRaises(RuntimeError):
+             with biom_open(get_data_path('test.biom')):
+@@ -289,12 +289,12 @@ class UtilTests(TestCase):
+ 
+     def test_load_table_gzip_unicode(self):
+         t = load_table(get_data_path('bad_table.txt.gz'))
+-        self.assertEqual(u's__Cortinarius grosmornënsis',
++        self.assertEqual('s__Cortinarius grosmornënsis',
+                          t.metadata('otu1', 'observation')['taxonomy'])
+ 
+     def test_load_table_unicode(self):
+         t = load_table(get_data_path('bad_table.txt'))
+-        self.assertEqual(u's__Cortinarius grosmornënsis',
++        self.assertEqual('s__Cortinarius grosmornënsis',
+                          t.metadata('otu1', 'observation')['taxonomy'])
+ 
+     def test_is_hdf5_file(self):
+diff --git a/biom/util.py b/biom/util.py
+index 6f2695f..469b023 100644
+--- a/biom/util.py
++++ b/biom/util.py
+@@ -1,5 +1,4 @@
+ #!/usr/bin/env python
+-# -*- coding: utf-8 -*-
+ # ----------------------------------------------------------------------------
+ # Copyright (c) 2011-2020, The BIOM Format Development Team.
+ #
+@@ -98,8 +97,7 @@ def get_biom_format_version_string(version=None):
+     if version is None:
+         return "Biological Observation Matrix 1.0.0"
+     else:
+-        return "Biological Observation Matrix %s.%s.0" % (version[0],
+-                                                          version[1])
++        return f"Biological Observation Matrix {version[0]}.{version[1]}.0"
+ 
+ 
+ def get_biom_format_url_string():
+@@ -204,7 +202,7 @@ def prefer_self(x, y):
+ 
+ def index_list(item):
+     """Takes a list and returns {l[idx]:idx}"""
+-    return dict([(id_, idx) for idx, id_ in enumerate(item)])
++    return {id_: idx for idx, id_ in enumerate(item)}
+ 
+ 
+ def load_biom_config():
+@@ -278,7 +276,7 @@ def parse_biom_config_files(biom_config_files):
+     for biom_config_file in biom_config_files:
+         try:
+             results.update(parse_biom_config_file(biom_config_file))
+-        except IOError:
++        except OSError:
+             pass
+ 
+     return results
+@@ -428,7 +426,7 @@ def biom_open(fp, permission='r'):
+ 
+     """
+     if permission not in ['r', 'w', 'U', 'rb', 'wb']:
+-        raise IOError("Unknown mode: %s" % permission)
++        raise OSError("Unknown mode: %s" % permission)
+ 
+     opener = functools.partial(io.open, encoding='utf-8')
+     mode = permission
+diff --git a/doc/conf.py b/doc/conf.py
+index 5b856ce..cf50527 100644
+--- a/doc/conf.py
++++ b/doc/conf.py
+@@ -1,4 +1,3 @@
+-# -*- coding: utf-8 -*-
+ #
+ # BIOM documentation build configuration file, created by
+ # sphinx-quickstart on Mon Feb 13 08:46:01 2012.
+@@ -58,8 +57,8 @@ source_suffix = '.rst'
+ master_doc = 'index'
+ 
+ # General information about the project.
+-project = u'biom-format'
+-copyright = u'2011-2020 The BIOM Format Development Team'
++project = 'biom-format'
++copyright = '2011-2021 The BIOM Format Development Team'
+ 
+ # The version info for the project you're documenting, acts as replacement for
+ # |version| and |release|, also used in various other places throughout the
+@@ -195,8 +194,7 @@ htmlhelp_basename = 'BIOMdoc'
+ # Grouping the document tree into LaTeX files. List of tuples
+ # (source start file, target name, title, author, documentclass [howto/manual]).
+ latex_documents = [
+-    ('index', 'BIOM.tex', u'BIOM Documentation',
+-     u'The BIOM Project', 'manual'),
++    ('index', 'BIOM.tex', 'BIOM Documentation', 'The BIOM Project', 'manual'),
+ ]
+ 
+ # The name of an image file (relative to this directory) to place at the top of
+@@ -228,8 +226,7 @@ latex_documents = [
+ # One entry per manual page. List of tuples
+ # (source start file, name, description, authors, manual section).
+ man_pages = [
+-    ('index', 'biom', u'BIOM Documentation',
+-     [u'The BIOM Project'], 1)
++    ('index', 'biom', 'BIOM Documentation', ['The BIOM Project'], 1)
+ ]
+ 
+ # Add the 'copybutton' javascript, to hide/show the prompt in code
+diff --git a/setup.cfg b/setup.cfg
+index d5639ae..cea11b2 100644
+--- a/setup.cfg
++++ b/setup.cfg
+@@ -1,4 +1,8 @@
+ [aliases]
+ test=pytest
++
+ [flake8]
+ exclude=biom/tests/long_lines.py
++
++[options]
++python_requires = >=3.6
+diff --git a/setup.py b/setup.py
+index 3e22097..7854bff 100644
+--- a/setup.py
++++ b/setup.py
+@@ -1,5 +1,4 @@
+ #!/usr/bin/env python
+-# -*- coding: utf-8 -*-
+ 
+ # ----------------------------------------------------------------------------
+ # Copyright (c) 2011-2020, The BIOM Format Development Team.
+@@ -90,6 +89,8 @@ classes = """
+     Programming Language :: Python :: 3.6
+     Programming Language :: Python :: 3.7
+     Programming Language :: Python :: 3.8
++    Programming Language :: Python :: 3.9
++    Programming Language :: Python :: 3.10
+     Programming Language :: Python :: Implementation :: CPython
+     Operating System :: OS Independent
+     Operating System :: POSIX :: Linux
+@@ -110,10 +111,15 @@ extensions = [Extension("biom._filter",
+                         include_dirs=[np.get_include()])]
+ extensions = cythonize(extensions)
+ 
+-install_requires = ["click", "numpy >= 1.9.2", "future >= 0.16.0",
+-                    "scipy >= 1.3.1", 'pandas >= 0.20.0',
+-                    "six >= 1.10.0", "cython >= 0.29", "h5py",
+-                    "cython"]
++install_requires = [
++    "click",
++    "numpy >= 1.9.2",
++    "scipy >= 1.3.1",
++    'pandas >= 0.20.0',
++    "cython >= 0.29",
++    "h5py",
++    "cython"
++]
+ 
+ if sys.version_info[0] < 3:
+     raise SystemExit("Python 2.7 is no longer supported")
+@@ -130,10 +136,11 @@ setup(name='biom-format',
+       maintainer_email=__email__,
+       url='http://www.biom-format.org',
+       packages=find_packages(),
+-      tests_require=['pytest < 5.3.4',
+-                     'pytest-cov',
+-                     'flake8',
+-                     'nose'],
++      tests_require=[
++          'pytest>=6.2.4',
++          'pytest-cov',
++          'flake8',
++      ],
+       include_package_data=True,
+       ext_modules=extensions,
+       include_dirs=[np.get_include()],


=====================================
debian/patches/series
=====================================
@@ -8,3 +8,4 @@ posix_shell.patch
 sphinx_add_javascript.patch
 fix_test.patch
 ignore_failing_hdf5_test.patch
+python3.10.patch


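As an aside for readers of the patch: the bulk of the test-suite churn above is a mechanical move from numpy's `numpy.testing.dec.skipif` decorator (deprecated and later removed from numpy) to `pytest.mark.skipif`, renaming `msg=` to `reason=` along the way. Below is a minimal, self-contained sketch of the new pattern; the `HAVE_H5PY` probe is a stand-in for `biom.util.HAVE_H5PY`, not code from the patch:

    import importlib.util

    import pytest

    # Stand-in for biom.util.HAVE_H5PY: True when h5py is importable.
    HAVE_H5PY = importlib.util.find_spec('h5py') is not None


    @pytest.mark.skipif(HAVE_H5PY is False, reason='H5PY is not installed')
    def test_biom_open_hdf5():
        import h5py  # only executed when h5py is installed
        assert h5py.__version__

With a boolean condition, pytest insists on a `reason=` string, which is why every converted decorator in the patch spells it out.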

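Most of the remaining one-line hunks are mechanical Python-3-only modernisations: `u''` prefixes dropped, `set([...])` turned into set literals, `dict([...])` into dict comprehensions, and `str.format`/`%` formatting into f-strings. A rough illustration of the idioms involved (assumed examples, not lines from the patch):

    # set literal replaces set([...]) (cf. the partition_f hunk)
    partition = {'b', 'd', 'f'}

    # dict comprehension replaces dict([...]) (cf. biom.util.index_list)
    items = ['GG_OTU_1', 'GG_OTU_2', 'GG_OTU_3']
    index = {id_: idx for idx, id_ in enumerate(items)}

    # f-string replaces 'OTU{:02d}'.format(i)
    observation_ids = [f'OTU{i:02d}' for i in range(10)]

    assert index['GG_OTU_2'] == 1
    assert observation_ids[7] == 'OTU07'

These rewrites are behavior-preserving on Python 3.6 and newer, which is consistent with the `python_requires = >=3.6` floor the patch adds to setup.cfg.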
View it on GitLab: https://salsa.debian.org/med-team/python-biom-format/-/compare/2e059febd3b6fc69cb2e3f96828760a29447eee1...3bf950b5ac51b0bc20c27f6474dea4633bd25436

