[med-svn] [Git][med-team/python-pyani][upstream] New upstream version 0.2.11

Nilesh Patra (@nilesh) gitlab@salsa.debian.org
Fri Jul 2 14:27:15 BST 2021



Nilesh Patra pushed to branch upstream at Debian Med / python-pyani


Commits:
786cc7fc by Nilesh Patra at 2021-07-02T13:01:31+00:00
New upstream version 0.2.11
- - - - -


12 changed files:

- + .circleci/config.yml
- .gitignore
- .pre-commit-config.yaml
- README.md
- pyani/__init__.py
- pyani/anib.py
- pyani/anim.py
- requirements-dev.txt
- setup.py
- tests/test_anib.py
- tests/test_anim.py
- tests/test_tetra.py


Changes:

=====================================
.circleci/config.yml
=====================================
@@ -0,0 +1,113 @@
+# Python CircleCI 2.0 configuration file
+#
+# Check https://circleci.com/docs/2.0/language-python/ for more details
+#
+# This config file is for the version_0_2 branch of pyani, and caches
+# dependencies specific to this branch
+version: 2.1
+
+orbs:
+  codecov: codecov/codecov@1.0.5
+
+workflows:
+  version: 2.1
+  test:
+    jobs:
+      - test-3_9
+      - test-3_8
+      - test-3_7
+      - test-3_6
+  weekly:
+    triggers:
+      - schedule:
+          cron: "0 0 * * 6"
+          filters:
+            branches:
+              only:
+                - master
+    jobs:
+      - test-3_9
+      - test-3_8
+      - test-3_7
+      - test-3_6
+
+jobs:
+  test-3_7: &test-template
+    docker:
+      - image: circleci/python:3.7.10
+
+    working_directory: ~/repo
+
+    steps:
+      - checkout
+
+      - restore_cache:
+          keys:
+            - pyani-v0_2-dependencies-pip-{{ .Branch }}-{{ checksum "requirements.txt" }}-{{ checksum "requirements-dev.txt" }}-{{ checksum "requirements-pip.txt" }}
+            - pyani-v0_2-dependencies-pip-
+
+      - run:
+          name: update apt-get
+          command: |
+            sudo apt-get update
+      - run:
+          name: install third-party tools
+          command: |
+            sudo apt-get install csh mummer ncbi-blast+
+      - run:
+          name: install legacy BLAST
+          command: |
+            curl -o legacy_blast.tar.gz ftp://ftp.ncbi.nlm.nih.gov/blast/executables/legacy.NOTSUPPORTED/2.2.26/blast-2.2.26-x64-linux.tar.gz
+            tar -zxf legacy_blast.tar.gz
+            echo 'export PATH=$PWD/blast-2.2.26/bin:$PATH' >> $BASH_ENV
+            source $BASH_ENV
+
+      - run:
+          name: install dependencies
+          command: |
+            python3 -m venv venv
+            . venv/bin/activate
+            pip install --upgrade pip setuptools virtualenv
+            pip install -r requirements.txt
+            pip install -r requirements-dev.txt
+            pip install -r requirements-pip.txt
+
+      - save_cache:
+          paths:
+            - ./venv
+          key: pyani-v0_2-dependencies-pip-{{ .Branch }}-{{ checksum "requirements.txt" }}-{{ checksum "requirements-dev.txt" }}-{{ checksum "requirements-pip.txt" }}
+
+      - run:
+          name: install pyani
+          command: |
+            . venv/bin/activate
+            pip install -e .  # Install the repo as a package
+
+      - run:
+          name: run tests
+          command: |
+            . venv/bin/activate
+            pytest -v --cov=pyani --cov-report xml:.coverage.xml
+
+      - store_artifacts:
+          path: tests/test_output
+          destination: tests/test_output
+      - store_artifacts:
+          path: .coverage.xml
+          destination: .coverage.xml
+
+      - codecov/upload:
+          file: .coverage.xml
+
+  test-3_6:
+    <<: *test-template
+    docker:
+      - image: circleci/python:3.6.9
+  test-3_8:
+    <<: *test-template
+    docker:
+      - image: circleci/python:3.8.9
+  test-3_9:
+    <<: *test-template
+    docker:
+      - image: circleci/python:3.9.5


=====================================
.gitignore
=====================================
@@ -44,6 +44,7 @@ output/
 # Coverage test output
 cover/
 .coverage
+.coverage.xml
 
 # iPython/Jupyter/R
 .ipynb_checkpoints
@@ -63,4 +64,7 @@ venv-*
 issue*/
 
 # VS Code configs
-.vscode
\ No newline at end of file
+.vscode
+
+# Static test configs
+.mypy_cache/
\ No newline at end of file


=====================================
.pre-commit-config.yaml
=====================================
@@ -1,10 +1,21 @@
+
 repos:
--   repo: https://github.com/ambv/black
-    rev: stable
+-   repo: https://github.com/psf/black
+    rev: 21.5b1
     hooks:
     - id: black
-      language_version: python3.6
+      language_version: python3
+-   repo: https://github.com/asottile/blacken-docs
+    rev: v1.10.0
+    hooks:
+    -   id: blacken-docs
+        additional_dependencies: [black]
 -   repo: https://github.com/pre-commit/pre-commit-hooks
     rev: v1.2.3
     hooks:
-    - id: flake8
\ No newline at end of file
+    - id: flake8
+    - id: check-case-conflict
+    - id: check-docstring-first
+    - id: check-json
+    - id: check-merge-conflict
+#    - id: check-yaml
\ No newline at end of file


=====================================
README.md
=====================================
@@ -1,4 +1,8 @@
-# README.md (pyani)
+# `pyani`
+
+Whole-genome classification using Average Nucleotide Identity
+
+-----
 
 [![pyani PyPi version](https://img.shields.io/pypi/v/pyani.svg "PyPi version")](https://pypi.python.org/pypi/pyani)
 [![pyani licence](https://img.shields.io/pypi/l/pyani.svg "PyPi licence")](https://github.com/widdowquinn/pyani/blob/master/LICENSE)
@@ -6,52 +10,96 @@
 [![pyani codecov.io coverage](https://img.shields.io/codecov/c/github/widdowquinn/pyani/master.svg)](https://codecov.io/github/widdowquinn/pyani)
 [![pyani Docker Pulls](https://img.shields.io/docker/pulls/leightonpritchard/average_nucleotide_identity.svg)](https://hub.docker.com/r/leightonpritchard/average_nucleotide_identity)
 
+-----
+
+`pyani` is maintained by:
+
+- [Leighton Pritchard](https://pureportal.strath.ac.uk/en/persons/leighton-pritchard)
+- [Bailey Harrington](https://pureportal.strath.ac.uk/en/persons/bailey-ann-harrington)
+
+and we are grateful to all who have contributed to this software:
+
+- [Balázs Brankovics](https://github.com/b-brankovics)
+- [Peter Cock](https://github.com/peterjc)
+- [Robert Davey](https://github.com/froggleston)
+- [Özcan Esen](https://github.com/ozcan)
+- [Nick Waters](https://github.com/nickp60)
+- [@ytanzaw](https://github.com/ytanizaw)
+
+-----
+
+## Table of Contents
+
 <!-- TOC -->
 
-- [Overview](#overview)
-- [Installation](#installation)
-- [Docker images](#docker-images)
-- [Testing `pyani`](#testing-pyani)
-- [Running `pyani`](#running-pyani)
-    - [Script: <a name="average_nucleotide_identity.py">`average_nucleotide_identity.py`</a>](#script-a-nameaverage_nucleotide_identitypyaverage_nucleotide_identitypya)
-    - [Script: <a name="genbank_get_genomes_by_taxon.py">`genbank_get_genomes_by_taxon.py`</a>](#script-a-namegenbank_get_genomes_by_taxonpygenbank_get_genomes_by_taxonpya)
-- [DEPENDENCIES](#dependencies)
-    - [For ANI analysis](#for-ani-analysis)
-    - [For graphical output](#for-graphical-output)
-- [Method and Output Description](#method-and-output-description)
+- [`pyani`](#pyani)
+  - [Table of Contents](#table-of-contents)
+  - [Overview](#overview)
+  - [Installation](#installation)
+    - [`conda`](#conda)
+    - [`pip`](#pip)
+    - [Docker images](#docker-images)
+      - [NOTE](#note)
+    - [Installing from the repository/source code](#installing-from-the-repositorysource-code)
+      - [IMPORTANT NOTICE](#important-notice)
+      - [Obtain source code](#obtain-source-code)
+        - [Direct download](#direct-download)
+        - [Clone the repository using `git`](#clone-the-repository-using-git)
+      - [Installation From Source](#installation-from-source)
+      - [Third-party tools](#third-party-tools)
+  - [How To Run `pyani`](#how-to-run-pyani)
+    - [Script: <a name="average_nucleotide_identity.py">`average_nucleotide_identity.py`</a>](#script-average_nucleotide_identitypy)
+    - [Script: <a name="genbank_get_genomes_by_taxon.py">`genbank_get_genomes_by_taxon.py`</a>](#script-genbank_get_genomes_by_taxonpy)
+  - [Testing `pyani`](#testing-pyani)
+  - [Method and Output Description](#method-and-output-description)
     - [Average Nucleotide Identity (ANI)](#average-nucleotide-identity-ani)
-- [Developer notes](#developer-notes)
+  - [Developer notes](#developer-notes)
     - [Code Style and Pre-Commit Hooks](#code-style-and-pre-commit-hooks)
-- [Licensing](#licensing)
+  - [Licensing](#licensing)
 
 <!-- /TOC -->
 
+-----
+
 ## Overview
-`pyani` is a Python3 module that provides support for calculating average nucleotide identity (ANI) and related measures for whole genome comparisons, and rendering relevant graphical summary output. Where available, it takes advantage of multicore systems, and can integrate with [SGE/OGE](http://gridscheduler.sourceforge.net/)-type job schedulers for the sequence comparisons.
 
-`pyani` installs the following scripts into the `$PATH`:
+`pyani` is a program that calculates average nucleotide identity (ANI) and related measures for whole genome comparisons, and renders graphical summary output. Where available, it takes advantage of multicore systems, and can integrate with [SGE/OGE](http://gridscheduler.sourceforge.net/)-type job schedulers for the sequence comparisons.
+
+**This `README.md` applies only to v0.2 of `pyani`.**
+
+The following scripts should be visible in your `$PATH` after installation:
+
+- `average_nucleotide_identity.py` that enables command-line ANI analysis.
+- `genbank_get_genomes_by_taxon.py` that downloads publicly-available genomes from NCBI.
+- `delta_filter_wrapper.py` is a helper script required to run delta-filter on SGE/OGE systems.
 
-* `average_nucleotide_identity.py` that enables command-line ANI analysis.
-* `genbank_get_genomes_by_taxon.py` that downloads publicly-available genomes from NCBI.
-* `delta_filter_wrapper.py` is a helper script required to run delta-filter on SGE/OGE systems.
+-----
 
 ## Installation
 
-The easiest way to install `pyani` is to use `pip3`:
+The easiest way to install `pyani` v0.2 is to use `conda` or `pip`. `conda` is recommended for the simplest installation of third-party tool dependencies (`mummer` and `BLAST`/`BLAST+`).
+
+### `conda`
+
+You will need to install the `bioconda` channel, following instructions at [https://bioconda.github.io/user/install.html](https://bioconda.github.io/user/install.html). Then, to create a new environment for `pyani` and install the program, issue the following commands:
 
 ```bash
-pip3 install pyani
+conda create --name pyani_env python=3.8 -y
+conda activate pyani_env
+conda install pyani
 ```
 
-From version 0.1.3.2 onwards, this should also install all the required Python package dependencies. Prior to this version (i.e. 0.1.3.1 and earlier), you can acquire these dependencies with `pip -r`, and pointing at `requirements.txt` from this repository:
+### `pip`
+
+`pip` will install `pyani` and its Python dependencies, but not the third-party tools.
 
 ```bash
-pip3 install -r requirements.txt
+pip3 install pyani
 ```
 
-## Docker images
+### Docker images
 
-`pyani`'s scripts are also provided as [Docker](https://www.docker.com/) images, that can be run locally as containers. To use these images, first install Docker, then to run the corresponding scripts issue either:
+`pyani` v0.2 scripts are also provided as [Docker](https://www.docker.com/) images that can be run locally as containers on any operating system which supports Docker. To use these images, first install [Docker](https://www.docker.com/). Then, to run the corresponding scripts, issue either:
 
 ```bash
 docker run -v ${PWD}:/host_dir leightonpritchard/average_nucleotide_identity
@@ -63,20 +111,82 @@ or
 docker run -v ${PWD}:/host_dir leightonpritchard/genbank_get_genomes_by_taxon
 ```
 
+#### NOTE
+
 The `-v ${PWD}:/host_dir` argument enables the Docker container to see the current directory. Without this argument, the container will not be able to see your input files, or write output data.
 
+### Installing from the repository/source code
 
-## Testing `pyani`
+If you wish to install `pyani` v0.2 from source code, you will need to download this code from GitHub either directly, or by cloning the repository.
+
+Both methods will create a directory called `pyani` which contains the source code for v0.2.
+
+#### IMPORTANT NOTICE
+
+Please note that v0.2 is a **MAINTENANCE-ONLY RELEASE** and is found on the branch called `version_0_2`. Bugs and security issues will be fixed as we become aware of them, but no new features will be added. All new feature development now takes place under v0.3 (the `master` branch on the repository).
+
+#### Obtain source code
+
+You can obtain the source for `pyani` either by downloading the latest release as a compressed archive, or by cloning the `version_0_2` branch.
 
-`pyani` includes tests that can be run with `nosetest` (including coverage measurement using `coverage.py`) with the following command, executed from the repository root directory:
+##### Direct download
+
+Download the source as `.zip` or `.tar.gz` from:
+
+- [https://github.com/widdowquinn/pyani/releases/tag/v0.2.10](https://github.com/widdowquinn/pyani/releases/tag/v0.2.10)
+
+Extract the archived file using the appropriate commands (or by double-clicking in your file explorer/finder):
+
+```bash
+unzip v0.2.10.zip
+```
+
+or
 
 ```bash
-nosetests --cover-erase --cover-package=pyani --cover-html --with-coverage
+tar -zxvf v0.2.10.tar.gz
 ```
 
-Coverage output will be placed (by default) in the `cover` subdirectory, and can be loaded into the web browser.
+##### Clone the repository using `git`
 
-## Running `pyani`
+Download the current `pyani` repository with `git clone`, and change branch to `version_0_2`:
+
+```bash
+git clone git@github.com:widdowquinn/pyani.git
+cd pyani
+git checkout version_0_2
+```
+
+#### Installation From Source
+
+Change directory to `pyani` and use the `python setup.py` setuptools commands to install the package and scripts. This will **not** install the third-party tools `BLAST`/`BLAST+` and `mummer`.
+
+```bash
+cd pyani
+python setup.py build
+python setup.py install
+```
+
+#### Third-party tools
+
+Three alignment packages are required if you wish to use all of `pyani`'s methods: `mummer`, `BLAST+`, and legacy `BLAST`.
+
+**NOTE: the legacy BLAST executables provided by NCBI will not run on macOS Big Sur; we do not provide executables for this tool.**
+
+The simplest route to obtaining these tools is to use `conda`/`bioconda`:
+
+```bash
+conda install mummer blast legacy-blast -y
+```
+
+But they can also be installed by following instructions from the tools' own websites.
+
+- **BLAST+** (for `anib`) [ftp://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/LATEST/](ftp://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/LATEST/)
+- **legacy BLAST** (for `aniblastall`) [ftp://ftp.ncbi.nlm.nih.gov/blast/executables/release/LATEST/](ftp://ftp.ncbi.nlm.nih.gov/blast/executables/release/LATEST/)
+- **MUMmer** (for `anim`) [http://mummer.sourceforge.net/](http://mummer.sourceforge.net/)
+
+-----
+
+## How To Run `pyani`
 
 ### Script: <a name="average_nucleotide_identity.py">`average_nucleotide_identity.py`</a>
 
@@ -110,10 +220,13 @@ usage: average_nucleotide_identity.py [-h] [-o OUTDIRNAME] [-i INDIRNAME] [-v]
 Example data and output can be found in the directory `test_ani_data`. The data are chromosomes of four isolates of *Caulobacter*. Basic analyses can be performed with the command lines:
 
 ```bash
-$ average_nucleotide_identity.py -i tests/test_ani_data/ -o tests/test_ANIm_output -m ANIm -g
-$ average_nucleotide_identity.py -i tests/test_ani_data/ -o tests/test_ANIb_output -m ANIb -g
-$ average_nucleotide_identity.py -i tests/test_ani_data/ -o tests/test_ANIblastall_output -m ANIblastall -g
-$ average_nucleotide_identity.py -i tests/test_ani_data/ -o tests/test_TETRA_output -m TETRA -g
+average_nucleotide_identity.py -i tests/test_ani_data/ -o tests/test_ANIm_output -m ANIm -g
+
+average_nucleotide_identity.py -i tests/test_ani_data/ -o tests/test_ANIb_output -m ANIb -g
+
+average_nucleotide_identity.py -i tests/test_ani_data/ -o tests/test_ANIblastall_output -m ANIblastall -g
+
+average_nucleotide_identity.py -i tests/test_ani_data/ -o tests/test_TETRA_output -m TETRA -g
 ```
 
 The graphical output below, supporting assignment of `NC_002696` and `NC_011916` to the same species (*C.crescentus*), and the other two isolates to distinct species (`NC_014100`:*C.segnis*; `NC_010338`:*C.* sp K31), was generated with the command-line:
@@ -125,7 +238,6 @@ average_nucleotide_identity.py -v -i tests/test_ani_data/ \
     --labels tests/test_ani_data/labels.tab
 ```
 
-
 ![ANIm percentage identity for *Caulobacter* test data](tests/test_ani_data/ANIm_percentage_identity.png "ANIm percentage identity")
 ![ANIm alignment coverage for *Caulobacter* test data](tests/test_ani_data/ANIm_alignment_coverage.png "ANIm alignment coverage")
 ![ANIm alignment length for *Caulobacter* test data](tests/test_ani_data/ANIm_alignment_lengths.png "ANIm alignment length")
@@ -174,30 +286,17 @@ INFO: Done.
 
 The number of attempted retries for each download, and the size of a batch download can be modified. By default, the script will attempt 20 download retries, and obtain sequences in batches of 10000.
 
+-----
 
-## DEPENDENCIES
-
-Note that Python package dependencies should automatically be installed if you are using version 0.1.3.2 or greater, and installing with `pip install pyani`.
-
-For earlier versions, you can satisfy dependencies by using `pip install -r requirements.txt` (using the `requirements.txt` file in this repository).
-
-### For ANI analysis
-
-* **Biopython** <http://www.biopython.org>
-* **NumPy** <http://www.numpy.org/>
-* **pandas** <http://pandas.pydata.org/>
-* **SciPy** <http://www.scipy.org/>
-
-#### Alignment tools
+## Testing `pyani`
 
-* **BLAST+** executable in the `$PATH`, or available on the command line (required for **ANIb** analysis) <ftp://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/LATEST/>
-* **legacy BLAST** executable in the `$PATH` or available on the command line (required for **ANIblastall** analysis) <ftp://ftp.ncbi.nlm.nih.gov/blast/executables/release/LATEST/>
-* **MUMmer** executables in the $PATH, or available on the command line (required for **ANIm** analysis) <http://mummer.sourceforge.net/>
+The `pyani` repository includes tests that can be run with `pytest` (including coverage measurement using `pytest-cov`) with the following command, executed from the repository root directory:
 
-### For graphical output
+```bash
+pytest -v
+```
 
-* **matplotlib** <http://matplotlib.org/>
-* **seaborn** <https://github.com/mwaskom/seaborn>
+-----
 
 ## Method and Output Description
 
@@ -205,12 +304,12 @@ For earlier versions, you can satisfy dependencies by using `pip install -r requ
 
 This module calculates Average Nucleotide Identity (ANI) according to one of a number of alternative methods described in, e.g.
 
-* Richter M, Rossello-Mora R (2009) Shifting the genomic gold standard for the prokaryotic species definition. Proc Natl Acad Sci USA 106: 19126-19131. doi:10.1073/pnas.0906412106. (ANI1020, ANIm, ANIb)
-* Goris J, Konstantinidis KT, Klappenbach JA, Coenye T, Vandamme P, et al. (2007) DNA-DNA hybridization values and their relationship to whole-genome sequence similarities. Int J Syst Evol Micr 57: 81-91. doi:10.1099/ijs.0.64483-0.
+- Richter M, Rossello-Mora R (2009) Shifting the genomic gold standard for the prokaryotic species definition. Proc Natl Acad Sci USA 106: 19126-19131. doi:10.1073/pnas.0906412106. (ANI1020, ANIm, ANIb)
+- Goris J, Konstantinidis KT, Klappenbach JA, Coenye T, Vandamme P, et al. (2007) DNA-DNA hybridization values and their relationship to whole-genome sequence similarities. Int J Syst Evol Micr 57: 81-91. doi:10.1099/ijs.0.64483-0.
 
-ANI is proposed to be the appropriate *in silico* substitute for DNA-DNA 
-hybridisation (DDH), and so useful for delineating species boundaries. A 
-typical percentage threshold for species boundary in the literature is 95% 
+ANI is proposed to be the appropriate *in silico* substitute for DNA-DNA
+hybridisation (DDH), and so useful for delineating species boundaries. A
+typical percentage threshold for species boundary in the literature is 95%
 ANI (e.g. Richter et al. 2009).
 
 All ANI methods follow the basic algorithm:
@@ -220,19 +319,19 @@ All ANI methods follow the basic algorithm:
 
 Methods differ on: (1) what alignment algorithm is used, and the choice of parameters (this affects the aligned region boundaries); (2) what the input is for alignment (typically either fragments of fixed size, or the most complete assembly available).
 
-* **ANIm**: uses MUMmer (NUCmer) to align the input sequences.
-* **ANIb**: uses BLASTN+ to align 1020nt fragments of the input sequences
-* **ANIblastall**: uses legacy BLASTN to align 1020nt fragments of the input sequences
-* **TETRA**: calculates tetranucleotide frequencies of each input sequence
+- **ANIm**: uses MUMmer (NUCmer) to align the input sequences.
+- **ANIb**: uses BLASTN+ to align 1020nt fragments of the input sequences
+- **ANIblastall**: uses legacy BLASTN to align 1020nt fragments of the input sequences
+- **TETRA**: calculates tetranucleotide frequencies of each input sequence
 
-The algorithms takes as input correctly-formatted FASTA multiple sequence files. All sequences for a single organism should be contained in only one sequence file. Although it is possible to provide new labels for each input genome, for rendering graphical output, the names of these files are used for identification so it is best to name 
+The algorithm takes as input correctly-formatted FASTA multiple sequence files. All sequences for a single organism should be contained in only one sequence file. Although it is possible to provide new labels for each input genome, for rendering graphical output, the names of these files are used for identification, so it is best to name
 them sensibly.
 
 Output is written to a named directory. The output files differ depending on the chosen ANI method.
 
-* **ANIm**: MUMmer/NUCmer .delta files, describing each pairwise sequence alignment. Output as tab-separated plain text format tables describing: alignment coverage; total alignment lengths; similarity errors; and percentage identity (ANIm).
-* **ANIb** and **ANIblastall**: FASTA sequences describing 1020nt fragments of each input sequence; BLAST nucleotide databases - one for each set of fragments; and BLASTN output files (tab-separated tabular format plain text) - one for each pairwise comparison of input sequences. Output as tab-separated plain text tables describing: alignment coverage; total alignment lengths; similarity errors; and percentage identity (ANIb or ANIblastall).
-* **TETRA**: Tab-separated plain text files describing the Pearson correlations between Z-score distributions for each tetranucleotide in each input sequence (TETRA).
+- **ANIm**: MUMmer/NUCmer .delta files, describing each pairwise sequence alignment. Output as tab-separated plain text format tables describing: alignment coverage; total alignment lengths; similarity errors; and percentage identity (ANIm).
+- **ANIb** and **ANIblastall**: FASTA sequences describing 1020nt fragments of each input sequence; BLAST nucleotide databases - one for each set of fragments; and BLASTN output files (tab-separated tabular format plain text) - one for each pairwise comparison of input sequences. Output as tab-separated plain text tables describing: alignment coverage; total alignment lengths; similarity errors; and percentage identity (ANIb or ANIblastall).
+- **TETRA**: Tab-separated plain text files describing the Pearson correlations between Z-score distributions for each tetranucleotide in each input sequence (TETRA).
 
 If graphical output is chosen, the output directory will also contain PDF, PNG and EPS files representing the various output measures as a heatmap with row and column dendrograms. Other output formats (e.g. SVG) can be specified with the `--gformat` argument.
 
@@ -243,7 +342,7 @@ The `pyani` package is presented at [`GitHub`](https://github.com/widdowquinn/py
 - `master` is the source code underpinning the most recent/current release of `pyani`. It will (almost) always be in sync with the latest release found at [https://github.com/widdowquinn/pyani/releases](https://github.com/widdowquinn/pyani/releases). The only time this code should not be in sync with the release is when there are modifications to documentation, or immediately preceding a release.
 - `development` is the current bleeding-edge version of `pyani`. It should (almost) always be in a working and usable condition, but may not be complete and/or some features may be missing or still under development.
 
-### Code Style and Pre-Commit Hooks
+### Code Style and Pre-Commit Hooks
 
 The source code for `pyani` is expected to conform to `flake8` linting, and `black` code styling. These are enforced as pre-commit hooks using the `pre-commit` package (included in `requirements.txt`).
 
@@ -255,44 +354,47 @@ To enable pre-commit checks in the codebase on your local machine, execute the f
 pre-commit install
 ```
 
+-----
+
 ## Licensing
 
 Unless otherwise indicated, all code is subject to the following agreement:
 
-    (c) The James Hutton Institute 2014-2019
-    Author: Leighton Pritchard
-
-    Contact: leighton.pritchard@hutton.ac.uk
-
-    Address: 
-    Leighton Pritchard,
-    Information and Computational Sciences,
-    James Hutton Institute,
-    Errol Road,
-    Invergowrie,
-    Dundee,
-    DD6 9LH,
-    Scotland,
-    UK
-
-The MIT License
-
-Copyright (c) 2014-2019 The James Hutton Institute
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in
-all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-THE SOFTWARE.
+>(c) The James Hutton Institute 2014-2019
+>(c) The University of Strathclyde 2019-2021
+>Author: Leighton Pritchard
+>
+>Contact: leighton.pritchard@strath.ac.uk
+>
+>Address:
+>Leighton Pritchard
+>Strathclyde Institute for Pharmacy and Biomedical Sciences
+>University of Strathclyde
+>161 Cathedral Street
+>Glasgow
+>Scotland
+>G4 0RE
+>UK
+>
+>The MIT License
+>
+>Copyright (c) 2014-2019 The James Hutton Institute
+>Copyright (c) 2019-2021 The University of Strathclyde
+>
+>Permission is hereby granted, free of charge, to any person obtaining a copy
+>of this software and associated documentation files (the "Software"), to deal
+>in the Software without restriction, including without limitation the rights
+>to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+>copies of the Software, and to permit persons to whom the Software is
+>furnished to do so, subject to the following conditions:
+>
+>The above copyright notice and this permission notice shall be included in
+>all copies or substantial portions of the Software.
+>
+>THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+>IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+>FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+>AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+>LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+>OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+>THE SOFTWARE.


=====================================
pyani/__init__.py
=====================================
@@ -1,3 +1,3 @@
 # python package version
 # should match r"^__version__ = '(?P<version>[^']+)'$" for setup.py
-__version__ = "0.2.10"
+__version__ = "0.2.11"


=====================================
pyani/anib.py
=====================================
@@ -506,6 +506,7 @@ def parse_blast_tab(filename, fraglengths, identity, coverage, mode="ANIb"):
             "ppos",
             "blast_gaps",
         ]
+
     # We may receive an empty BLASTN output file, if there are no significant
     # regions of homology. This causes pandas to throw an error on CSV import.
     # To get past this, we create an empty dataframe with the appropriate
@@ -513,25 +514,28 @@ def parse_blast_tab(filename, fraglengths, identity, coverage, mode="ANIb"):
     try:
         data = pd.read_csv(filename, header=None, sep="\t", index_col=0)
         data.columns = columns
-    except pd.io.common.EmptyDataError:
+    except pd.errors.EmptyDataError:
         data = pd.DataFrame(columns=columns)
     # Add new column for fragment length, only for BLASTALL
     if mode == "ANIblastall":
         data["qlen"] = pd.Series(
             [qfraglengths[idx] for idx in data.index], index=data.index
         )
+
     # Add new columns for recalculated alignment length, proportion, and
     # percentage identity
     data["ani_alnlen"] = data["blast_alnlen"] - data["blast_gaps"]
     data["ani_alnids"] = data["ani_alnlen"] - data["blast_mismatch"]
     data["ani_coverage"] = data["ani_alnlen"] / data["qlen"]
     data["ani_pid"] = data["ani_alnids"] / data["qlen"]
+
     # Filter rows on 'ani_coverage' > 0.7, 'ani_pid' > 0.3
     filtered = data[(data["ani_coverage"] > coverage) & (data["ani_pid"] > identity)]
     # Dedupe query hits, so we only take the best hit
     filtered = filtered.groupby(filtered.index).first()
     # Replace NaNs with zero
     filtered = filtered.fillna(value=0)  # Needed if no matches
+
     # The ANI value is then the mean percentage identity.
     # We report total alignment length and the number of similarity errors
     # (mismatches and gaps), as for ANIm
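
The comments added in this hunk describe how `parse_blast_tab()` reduces a table of BLAST hits to an ANI estimate: filter on coverage and identity thresholds, keep one best hit per query fragment, then take the mean percentage identity. A small self-contained pandas sketch of that filter-and-average idea (it reuses the column names and default thresholds from the hunk, but is not pyani's own function):

```python
import pandas as pd


def mean_ani(data: pd.DataFrame, coverage: float = 0.7, identity: float = 0.3) -> float:
    """Sketch of the filter/deduplicate/average step described above."""
    # Keep only hits that exceed the coverage and identity thresholds
    kept = data[(data["ani_coverage"] > coverage) & (data["ani_pid"] > identity)]
    # Retain a single (first/best) hit per query fragment
    kept = kept.groupby(kept.index).first()
    # The ANI estimate is the mean percentage identity of the retained hits
    return float(kept["ani_pid"].mean())
```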


=====================================
pyani/anim.py
=====================================
@@ -155,7 +155,7 @@ def parse_delta(filename):
     each as a tuple.
     """
     aln_length, sim_errors = 0, 0
-    for line in [l.strip().split() for l in open(filename, "r").readlines()]:
+    for line in [_.strip().split() for _ in open(filename, "r").readlines()]:
         if line[0] == "NUCMER" or line[0].startswith(">"):  # Skip headers
             continue
         # We only process lines with seven columns:
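
For context on the hunk above: `parse_delta()` walks a NUCmer `.delta`/`.filter` file and totals the alignment length and similarity errors. A hypothetical usage sketch, assuming (per the docstring fragment above) that the function returns those two values; the file path is illustrative, and the identity relation shown is the conventional ANIm formulation rather than a quote of pyani's code:

```python
from pyani import anim

# Hypothetical delta-filter result for one pairwise comparison
aln_length, sim_errors = anim.parse_delta(
    "tests/test_output/anim/nucmer_output/file1_vs_file2.filter"
)
# Conventional ANIm-style identity estimate over the aligned regions
percentage_identity = 1.0 - sim_errors / aln_length
print(aln_length, sim_errors, percentage_identity)
```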


=====================================
requirements-dev.txt
=====================================
@@ -2,6 +2,7 @@ black
 flake8
 nose
 pytest
+pytest-cov
 setuptools
 twine
 wheel


=====================================
setup.py
=====================================
@@ -1,3 +1,6 @@
+# -*- coding: utf-8 -*-
+"""Setup script for package distribution."""
+
 # try using distribute or setuptools or distutils.
 try:
     import distribute_setup
@@ -33,7 +36,10 @@ setuptools.setup(
     version=version,
     author="Leighton Pritchard",
     author_email="leighton.pritchard@hutton.ac.uk",
-    description="pyani provides a package and script for calculation of genome-scale average nucleotide identity.",
+    description=(
+        "pyani provides a package and script for calculation of "
+        "genome-scale average nucleotide identity."
+    ),
     long_description=long_description,
     long_description_content_type="text/markdown",
     license="MIT",


=====================================
tests/test_anib.py
=====================================
@@ -56,7 +56,7 @@ import unittest
 import pandas as pd
 
 from nose.tools import assert_equal, nottest
-from pandas.util.testing import assert_frame_equal
+from pandas.testing import assert_frame_equal
 
 from pyani import anib, pyani_files
 


=====================================
tests/test_anim.py
=====================================
@@ -56,10 +56,10 @@ import unittest
 
 import pandas as pd
 
-from nose.tools import (assert_equal,)
-from pandas.util.testing import (assert_frame_equal,)
+from nose.tools import assert_equal
+from pandas.testing import assert_frame_equal
 
-from pyani import (anim, pyani_files)
+from pyani import anim, pyani_files
 
 
 class TestNUCmerCmdline(unittest.TestCase):
@@ -69,42 +69,82 @@ class TestNUCmerCmdline(unittest.TestCase):
     def setUp(self):
         """Set parameters for tests."""
         # Basic NUCmer and delta-filter command-line targets
-        self.ntgt = ' '.join(["nucmer --mum -p",
-                              "tests/test_output/anim/nucmer_output/file1_vs_file2",
-                              "file1.fna file2.fna"])
-        self.ntgtmax = ' '.join(["nucmer --maxmatch -p",
-                                 "tests/test_output/anim/nucmer_output/file1_vs_file2",
-                                 "file1.fna file2.fna"])
-        self.ftgt = ' '.join(["delta_filter_wrapper.py delta-filter -1",
-                              "tests/test_output/anim/nucmer_output/file1_vs_file2.delta",
-                              "tests/test_output/anim/nucmer_output/file1_vs_file2.filter"])
+        self.ntgt = " ".join(
+            [
+                "nucmer --mum -p",
+                "tests/test_output/anim/nucmer_output/file1_vs_file2",
+                "file1.fna file2.fna",
+            ]
+        )
+        self.ntgtmax = " ".join(
+            [
+                "nucmer --maxmatch -p",
+                "tests/test_output/anim/nucmer_output/file1_vs_file2",
+                "file1.fna file2.fna",
+            ]
+        )
+        self.ftgt = " ".join(
+            [
+                "delta_filter_wrapper.py delta-filter -1",
+                "tests/test_output/anim/nucmer_output/file1_vs_file2.delta",
+                "tests/test_output/anim/nucmer_output/file1_vs_file2.filter",
+            ]
+        )
         self.files = ["file1", "file2", "file3", "file4"]
-        self.ncmdlist = ['nucmer --mum -p ./nucmer_output/file1_vs_file2 file1 file2',
-                         'nucmer --mum -p ./nucmer_output/file1_vs_file3 file1 file3',
-                         'nucmer --mum -p ./nucmer_output/file1_vs_file4 file1 file4',
-                         'nucmer --mum -p ./nucmer_output/file2_vs_file3 file2 file3',
-                         'nucmer --mum -p ./nucmer_output/file2_vs_file4 file2 file4',
-                         'nucmer --mum -p ./nucmer_output/file3_vs_file4 file3 file4']
-        self.fcmdlist = [' '.join(['delta_filter_wrapper.py delta-filter -1',
-                                   './nucmer_output/file1_vs_file2.delta',
-                                   './nucmer_output/file1_vs_file2.filter']),
-                         ' '.join(['delta_filter_wrapper.py delta-filter -1',
-                                   './nucmer_output/file1_vs_file3.delta',
-                                   './nucmer_output/file1_vs_file3.filter']),
-                         ' '.join(['delta_filter_wrapper.py delta-filter -1',
-                                   './nucmer_output/file1_vs_file4.delta',
-                                   './nucmer_output/file1_vs_file4.filter']),
-                         ' '.join(['delta_filter_wrapper.py delta-filter -1',
-                                   './nucmer_output/file2_vs_file3.delta',
-                                   './nucmer_output/file2_vs_file3.filter']),
-                         ' '.join(['delta_filter_wrapper.py delta-filter -1',
-                                   './nucmer_output/file2_vs_file4.delta',
-                                   './nucmer_output/file2_vs_file4.filter']),
-                         ' '.join(['delta_filter_wrapper.py delta-filter -1',
-                                   './nucmer_output/file3_vs_file4.delta',
-                                   './nucmer_output/file3_vs_file4.filter'])]
-        self.outdir = os.path.join('tests', 'test_output', 'anim')
-        self.indir = os.path.join('tests', 'test_input', 'anim')
+        self.ncmdlist = [
+            "nucmer --mum -p ./nucmer_output/file1_vs_file2 file1 file2",
+            "nucmer --mum -p ./nucmer_output/file1_vs_file3 file1 file3",
+            "nucmer --mum -p ./nucmer_output/file1_vs_file4 file1 file4",
+            "nucmer --mum -p ./nucmer_output/file2_vs_file3 file2 file3",
+            "nucmer --mum -p ./nucmer_output/file2_vs_file4 file2 file4",
+            "nucmer --mum -p ./nucmer_output/file3_vs_file4 file3 file4",
+        ]
+        self.fcmdlist = [
+            " ".join(
+                [
+                    "delta_filter_wrapper.py delta-filter -1",
+                    "./nucmer_output/file1_vs_file2.delta",
+                    "./nucmer_output/file1_vs_file2.filter",
+                ]
+            ),
+            " ".join(
+                [
+                    "delta_filter_wrapper.py delta-filter -1",
+                    "./nucmer_output/file1_vs_file3.delta",
+                    "./nucmer_output/file1_vs_file3.filter",
+                ]
+            ),
+            " ".join(
+                [
+                    "delta_filter_wrapper.py delta-filter -1",
+                    "./nucmer_output/file1_vs_file4.delta",
+                    "./nucmer_output/file1_vs_file4.filter",
+                ]
+            ),
+            " ".join(
+                [
+                    "delta_filter_wrapper.py delta-filter -1",
+                    "./nucmer_output/file2_vs_file3.delta",
+                    "./nucmer_output/file2_vs_file3.filter",
+                ]
+            ),
+            " ".join(
+                [
+                    "delta_filter_wrapper.py delta-filter -1",
+                    "./nucmer_output/file2_vs_file4.delta",
+                    "./nucmer_output/file2_vs_file4.filter",
+                ]
+            ),
+            " ".join(
+                [
+                    "delta_filter_wrapper.py delta-filter -1",
+                    "./nucmer_output/file3_vs_file4.delta",
+                    "./nucmer_output/file3_vs_file4.filter",
+                ]
+            ),
+        ]
+        self.outdir = os.path.join("tests", "test_output", "anim")
+        self.indir = os.path.join("tests", "test_input", "anim")
 
     def test_single_cmd_generation(self):
         """generate single abstract NUCmer/delta-filter command-line.
@@ -112,15 +152,16 @@ class TestNUCmerCmdline(unittest.TestCase):
         Tests that a single NUCmer/delta-filter command-line pair is
         produced correctly
         """
-        cmds = anim.construct_nucmer_cmdline("file1.fna", "file2.fna",
-                                             outdir=self.outdir)
+        cmds = anim.construct_nucmer_cmdline(
+            "file1.fna", "file2.fna", outdir=self.outdir
+        )
         assert_equal(cmds, (self.ntgt, self.ftgt))
 
     def test_maxmatch_cmd_generation(self):
         """generate NUCmer command line with maxmatch."""
-        ncmd, fcmd = anim.construct_nucmer_cmdline("file1.fna", "file2.fna",
-                                                   outdir=self.outdir,
-                                                   maxmatch=True)
+        ncmd, fcmd = anim.construct_nucmer_cmdline(
+            "file1.fna", "file2.fna", outdir=self.outdir, maxmatch=True
+        )
         assert_equal(ncmd, self.ntgtmax)
 
     def test_multi_cmd_generation(self):
@@ -136,14 +177,14 @@ class TestNUCmerCmdline(unittest.TestCase):
 
         Tests that the correct dependency graph and naming scheme is produced.
         """
-        joblist = anim.generate_nucmer_jobs(self.files,
-                                            jobprefix="test")
+        joblist = anim.generate_nucmer_jobs(self.files, jobprefix="test")
         assert_equal(len(joblist), 6)
         for idx, job in enumerate(joblist):
             assert_equal(job.name, "test_%06d-f" % idx)  # filter job name
-            assert_equal(len(job.dependencies), 1)       # has NUCmer job
-            assert_equal(job.dependencies[0].name,
-                         "test_%06d-n" % idx)            # NUCmer job name
+            assert_equal(len(job.dependencies), 1)  # has NUCmer job
+            assert_equal(
+                job.dependencies[0].name, "test_%06d-n" % idx
+            )  # NUCmer job name
 
 
 class TestDeltafileProcessing(unittest.TestCase):
@@ -152,18 +193,20 @@ class TestDeltafileProcessing(unittest.TestCase):
 
     def setUp(self):
         """Set parameters for tests."""
-        self.indir = os.path.join('tests', 'test_input', 'anim')
-        self.seqdir = os.path.join('tests', 'test_input', 'sequences')
-        self.deltafile = os.path.join(self.indir, 'test.delta')
-        self.deltadir = os.path.join(self.indir, 'deltadir')
-        self.df_pid = pd.DataFrame([[1.000000, 0.850994, 0.999974, 0.867940],
-                                    [0.850994, 1.000000, 0.851074, 0.852842],
-                                    [0.999974, 0.851074, 1.000000, 0.867991],
-                                    [0.867940, 0.852842, 0.867991, 1.000000]],
-                                   columns=['NC_002696',  'NC_010338',
-                                            'NC_011916',  'NC_014100'],
-                                   index=['NC_002696',  'NC_010338',
-                                          'NC_011916',  'NC_014100'])
+        self.indir = os.path.join("tests", "test_input", "anim")
+        self.seqdir = os.path.join("tests", "test_input", "sequences")
+        self.deltafile = os.path.join(self.indir, "test.delta")
+        self.deltadir = os.path.join(self.indir, "deltadir")
+        self.df_pid = pd.DataFrame(
+            [
+                [1.000000, 0.850994, 0.999974, 0.867940],
+                [0.850994, 1.000000, 0.851074, 0.852842],
+                [0.999974, 0.851074, 1.000000, 0.867991],
+                [0.867940, 0.852842, 0.867991, 1.000000],
+            ],
+            columns=["NC_002696", "NC_010338", "NC_011916", "NC_014100"],
+            index=["NC_002696", "NC_010338", "NC_011916", "NC_014100"],
+        )
 
     def test_deltafile_import(self):
         """parses NUCmer .delta/.filter file."""
@@ -175,5 +218,7 @@ class TestDeltafileProcessing(unittest.TestCase):
         seqfiles = pyani_files.get_fasta_files(self.seqdir)
         orglengths = pyani_files.get_sequence_lengths(seqfiles)
         result = anim.process_deltadir(self.deltadir, orglengths)
-        assert_frame_equal(result.percentage_identity.sort_index(1).sort_index(),
-                           self.df_pid.sort_index(1).sort_index())
+        assert_frame_equal(
+            result.percentage_identity.sort_index(1).sort_index(),
+            self.df_pid.sort_index(1).sort_index(),
+        )


=====================================
tests/test_tetra.py
=====================================
@@ -57,10 +57,10 @@ import unittest
 
 import pandas as pd
 
-from nose.tools import (assert_equal, assert_false, assert_true)
-from pandas.util.testing import (assert_frame_equal,)
+from nose.tools import assert_equal, assert_false, assert_true
+from pandas.testing import assert_frame_equal
 
-from pyani import (tetra, )
+from pyani import tetra
 
 
 def ordered(obj):
@@ -78,22 +78,23 @@ class TestTETRA(unittest.TestCase):
 
     def setUp(self):
         """Define parameters and values for tests."""
-        self.indir = os.path.join('tests', 'test_input', 'tetra')
-        self.tgtdir = os.path.join('tests', 'test_targets', 'tetra')
-        self.seqdir = os.path.join('tests', 'test_input', 'sequences')
-        self.infile = os.path.join(self.seqdir, 'NC_002696.fna')
-        self.infiles = [os.path.join(self.seqdir, fname) for fname in
-                        os.listdir(self.seqdir)]
+        self.indir = os.path.join("tests", "test_input", "tetra")
+        self.tgtdir = os.path.join("tests", "test_targets", "tetra")
+        self.seqdir = os.path.join("tests", "test_input", "sequences")
+        self.infile = os.path.join(self.seqdir, "NC_002696.fna")
+        self.infiles = [
+            os.path.join(self.seqdir, fname) for fname in os.listdir(self.seqdir)
+        ]
 
     def test_tetraclean(self):
         """detects unambiguous IUPAC symbols correctly."""
-        assert_false(tetra.tetra_clean('ACGTYACGTACNGTACGWTACGT'))
-        assert_true(tetra.tetra_clean('ACGTACGTACGTACGTACGTAC'))
+        assert_false(tetra.tetra_clean("ACGTYACGTACNGTACGWTACGT"))
+        assert_true(tetra.tetra_clean("ACGTACGTACGTACGTACGTAC"))
 
     def test_zscore(self):
         """TETRA Z-score calculated correctly."""
         tetra_z = tetra.calculate_tetra_zscore(self.infile)
-        with open(os.path.join(self.tgtdir, 'zscore.json'), 'r') as ifh:
+        with open(os.path.join(self.tgtdir, "zscore.json"), "r") as ifh:
             target = json.load(ifh)
         assert_equal(ordered(tetra_z), ordered(target))
 
@@ -101,6 +102,7 @@ class TestTETRA(unittest.TestCase):
         """TETRA correlation calculated correctly."""
         infiles = ordered(self.infiles)[:2]  # only test a single correlation
         corr = tetra.calculate_correlations(tetra.calculate_tetra_zscores(infiles))
-        target = pd.read_csv(os.path.join(self.tgtdir, 'correlation.tab'), sep='\t',
-                             index_col=0)
+        target = pd.read_csv(
+            os.path.join(self.tgtdir, "correlation.tab"), sep="\t", index_col=0
+        )
         assert_frame_equal(corr, target)



View it on GitLab: https://salsa.debian.org/med-team/python-pyani/-/commit/786cc7fc01bbde84ff26329fcfe3d7d009599183
