[med-svn] [Git][med-team/paleomix][master] 6 commits: routine-update: Fix watchfile to detect new versions on github

Andreas Tille (@tille) gitlab at salsa.debian.org
Wed Aug 25 15:15:53 BST 2021



Andreas Tille pushed to branch master at Debian Med / paleomix


Commits:
447955c6 by Andreas Tille at 2021-08-25T16:09:50+02:00
routine-update: Fix watchfile to detect new versions on github

- - - - -
f9367bda by Andreas Tille at 2021-08-25T16:09:53+02:00
New upstream version 1.3.3
- - - - -
44c71f1a by Andreas Tille at 2021-08-25T16:09:53+02:00
routine-update: New upstream version

- - - - -
6def934b by Andreas Tille at 2021-08-25T16:09:58+02:00
Update upstream source from tag 'upstream/1.3.3'

Update to upstream version '1.3.3'
with Debian dir 3e10f6f7136d57966c62abad8710204c0adbe33c
- - - - -
17bee7e8 by Andreas Tille at 2021-08-25T16:09:58+02:00
routine-update: Standards-Version: 4.6.0

- - - - -
ee8247d9 by Andreas Tille at 2021-08-25T16:14:15+02:00
routine-update: Ready to upload to unstable

- - - - -


16 changed files:

- CHANGES.md
- debian/changelog
- debian/control
- debian/watch
- docs/bam_pipeline/configuration.rst
- docs/bam_pipeline/overview.rst
- docs/bam_pipeline/requirements.rst
- docs/bam_pipeline/usage.rst
- docs/conf.py
- docs/installation.rst
- docs/other_tools.rst
- docs/yaml.rst
- paleomix/__init__.py
- paleomix/nodes/picard.py
- paleomix/pipelines/bam/parts/summary.py
- paleomix_environment.yaml


Changes:

=====================================
CHANGES.md
=====================================
@@ -1,5 +1,14 @@
 # Changelog
 
+## [1.3.3] - 2021-04-06
+
+### Fixed
+  - Fixed regression in BAM pipeline summary node, causing failures if there were
+    zero hits or reads.
+  - Fixed BAM validation always being run in big-genome mode, resulting in some checks
+    being disabled despite being applicable.
+
+
 ## [1.3.2] - 2020-09-03
 
 ### Added
@@ -720,7 +729,8 @@ the (partially) updated documentation now hosted on ReadTheDocs.
   - Switching to more traditional version-number tracking.
 
 
-[Unreleased]: https://github.com/MikkelSchubert/paleomix/compare/v1.3.2...HEAD
+[Unreleased]: https://github.com/MikkelSchubert/paleomix/compare/v1.3.3...HEAD
+[1.3.3]: https://github.com/MikkelSchubert/paleomix/compare/v1.3.2...v1.3.3
 [1.3.2]: https://github.com/MikkelSchubert/paleomix/compare/v1.3.1...v1.3.2
 [1.3.1]: https://github.com/MikkelSchubert/paleomix/compare/v1.3.0...v1.3.1
 [1.3.0]: https://github.com/MikkelSchubert/paleomix/compare/v1.2.14...v1.3.0


=====================================
debian/changelog
=====================================
@@ -1,8 +1,14 @@
-paleomix (1.3.2-2) UNRELEASED; urgency=medium
+paleomix (1.3.3-1) unstable; urgency=medium
 
+  [ Étienne Mollier ]
   * Add myself to uploaders.
 
- -- Étienne Mollier <etienne.mollier at mailoo.org>  Mon, 09 Nov 2020 15:36:24 +0100
+  [ Andreas Tille ]
+  * Fix watchfile to detect new versions on github (routine-update)
+  * New upstream version
+  * Standards-Version: 4.6.0 (routine-update)
+
+ -- Andreas Tille <tille at debian.org>  Wed, 25 Aug 2021 16:10:14 +0200
 
 paleomix (1.3.2-1) unstable; urgency=medium
 


=====================================
debian/control
=====================================
@@ -19,7 +19,7 @@ Build-Depends: debhelper-compat (= 13),
                rsync,
                examl,
                picard-tools
-Standards-Version: 4.5.0
+Standards-Version: 4.6.0
 Vcs-Browser: https://salsa.debian.org/med-team/paleomix
 Vcs-Git: https://salsa.debian.org/med-team/paleomix.git
 Homepage: https://geogenetics.ku.dk/publications/paleomix


=====================================
debian/watch
=====================================
@@ -1,3 +1,3 @@
 version=4
 
-https://github.com/MikkelSchubert/paleomix/releases .*/archive/v(\d[\d.-]+)\.(?:tar(?:\.gz|\.bz2)?|tgz)
+https://github.com/MikkelSchubert/paleomix/tags (?:.*?/)?v?(\d[\d.]*)\.tar\.gz
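The rewritten pattern targets the tags page rather than the releases page. One way to sanity-check such a pattern is to test it against a tarball path of the shape GitHub serves for tags; the path below is illustrative (not fetched from the repository), and the Perl-style `\d` and non-greedy group in the real watch file are approximated here with a POSIX ERE so plain `grep -E` can run it:

```shell
# ERE approximation of the new debian/watch regex; uscan itself applies
# the PCRE from the watch file to hrefs on the GitHub tags page.
pattern='^(.*/)?v?([0-9][0-9.]*)\.tar\.gz$'

# Illustrative tag-tarball path of the kind GitHub serves.
url='MikkelSchubert/paleomix/archive/refs/tags/v1.3.3.tar.gz'

if printf '%s\n' "$url" | grep -qE "$pattern"; then
    echo "matched"
else
    echo "no match"
fi
```

For the path above this prints `matched`; non-tarball links (e.g. `.zip`) fall through to `no match`.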


=====================================
docs/bam_pipeline/configuration.rst
=====================================
@@ -7,15 +7,15 @@ Configuring the BAM pipeline
 
 The BAM pipeline supports a number of command-line options (see `paleomix bam run --help`). These options may be set directly on the command-line (e.g. using `--max-threads 16`), but it is also possible to set default values for such options.
 
-This is accomplished by writing options in `~/.paleomix/bam_pipeline.ini`::
+This is accomplished by writing options in `~/.paleomix/bam_pipeline.ini`, such as::
 
     max-threads = 16
-    log-level = warning
-    jar-root = /home/username/install/jar_root
+    bowtie2-max-threads = 1
     bwa-max-threads = 1
-    temp-root = /tmp/username/bam_pipeline
+    jar-root = /home/username/install/jar_root
     jre-option = -Xmx4g
-    bowtie2-max-threads = 1
+    log-level = warning
+    temp-root = /tmp/username/bam_pipeline
 
 Options in the configuration file correspond directly to command-line options for the BAM pipeline, with leading dashes removed. For example, the command-line option `--max-threads` becomes `max-threads` in the configuration file.
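The options shown above map one-to-one onto command-line flags with the leading dashes removed. A self-contained sketch of creating such a file (a temporary directory stands in for `~/.paleomix`, and the values are the illustrative ones from the example):

```shell
# Write a BAM pipeline configuration file as described above.
# mktemp -d is used so this sketch does not touch a real ~/.paleomix.
confdir="$(mktemp -d)"
cat > "$confdir/bam_pipeline.ini" <<'EOF'
max-threads = 16
bowtie2-max-threads = 1
bwa-max-threads = 1
log-level = warning
temp-root = /tmp/username/bam_pipeline
EOF

# Each line corresponds to one command-line option, e.g. --max-threads.
grep -c ' = ' "$confdir/bam_pipeline.ini"
```

The final `grep -c` prints `5`, one per configured option.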
 


=====================================
docs/bam_pipeline/overview.rst
=====================================
@@ -18,17 +18,17 @@ During a typical analyses, the BAM pipeline will proceed through the following s
 
     2. The records of the resulting BAM are updated using `samtools fixmate` to ensure that PE reads contain the correct information about the mate read.
 
-    3. The BAM is sorted using `samtools sort`, indexed using `samtools index`, and validated using `Picard ValidateSamFile`.
+    3. The BAM is sorted using `samtools sort`, indexed using `samtools index`, and validated using Picard `ValidateSamFile`.
 
-    4. Finally, the records are updated using `samtools calm` to ensure consistent reporting of the number of mismatches relative to the reference genome (BAM tag 'NM').
+    4. Finally, the records are updated using `samtools calmd` to ensure consistent reporting of the number of mismatches relative to the reference genome (BAM tag 'NM').
 
-4. Filtering of duplicates, recaluation (rescaling) of quality scores, and validation
+4. Filtering of duplicates, recalculation (rescaling) of quality scores, and validation
 
     1. If enabled, PCR duplicates are filtered using Picard `MarkDuplicates` for SE and PE reads and using `paleomix rmdup_collapsed` for collapsed reads (see the :ref:`other_tools` section). PCR filtering is carried out per library.
 
     2. If mapDamage based rescaling of quality scores is enabled, quality scores of bases that are potentially the result of *post-mortem* DNA damage are recalculated using a damage model built using mapDamage2.0 [Jonsson2013]_.
 
-    3. The resulting BAMs are indexed and validated using `Picard ValidateSamFile`. Mapped reads at each position of the alignments are compared using the query name, sequence, and qualities. If a match is found, it is assumed to represent a duplication of input data (see :ref:`troubleshooting_bam`).
+    3. The resulting BAMs are indexed and validated using Picard `ValidateSamFile`. Mapped reads at each position of the alignments are compared using the query name, sequence, and qualities. If a match is found, it is assumed to represent a duplication of input data (see :ref:`troubleshooting_bam`).
 
 5. Generation of final BAMs
 
@@ -38,6 +38,6 @@ During a typical analyses, the BAM pipeline will proceed through the following s
 
     1. If the `Summary` feature is enabled, a single summary table is generated for each target. This table summarizes the input data in terms of the raw number of reads, the number of reads following filtering / collapsing, the fraction of reads mapped to each prefix, the fraction of reads filtered as duplicates, and more.
 
-    2. Coverage statistics and depth histograms are calculated for the intermediate and final BAM files using `paleomix coverage` and `paleomix depths` if enabled. Statistics are calculated genome-wide and for any regions of interest specified by the user.
+    2. Coverage statistics and depth histograms are calculated for the intermediate and final BAM files using `paleomix coverage` and `paleomix depths`, if enabled. Statistics are calculated genome-wide and for any regions of interest specified by the user.
 
-    3. If mapDamage plotting or modeling is enabled, mapDamage plots are generated; if rescaling is enabled, a model of the post-mortem DNA damage is also generated.
+    3. If mapDamage is enabled, mapDamage plots are generated; if modeling or rescaling is enabled, a model of the post-mortem DNA damage is also generated.


=====================================
docs/bam_pipeline/requirements.rst
=====================================
@@ -11,12 +11,12 @@ In addition to the requirements listed in the :ref:`installation` section, the B
 * `SAMTools`_ v1.3.1 [Li2009b]_
 * `Picard Tools`_ v1.137
 
-The Picard Tools JAR-file (picard.jar) is expected to be located in ~/install/jar_root/ by default, but this behavior may be changed using either the --jar-root command-line option, or via the global configuration file (see section :ref:`bam_configuration`)::
+The Picard Tools JAR-file (`picard.jar`) is expected to be located in `~/install/jar_root` by default, but this behavior may be changed using either the `--jar-root` command-line option, or via the global configuration file (see section :ref:`bam_configuration`)::
 
     $ mkdir -p ~/install/jar_root
     $ wget -O ~/install/jar_root/picard.jar https://github.com/broadinstitute/picard/releases/download/2.23.3/picard.jar
 
-Running Picard requires a Jave Runtime Environment (JRE). Please refer to your ditro's documentation for how to install a JRE.
+Running Picard requires a Java Runtime Environment (i.e. the `java` command). Please refer to your distro's documentation for how to install a JRE.
 
 Furthermore, one or both of the following sequence aligners must be installed:
 
@@ -47,7 +47,7 @@ Testing the pipeline
 An example project is included with the BAM pipeline, and it is recommended to run this project in order to verify that the pipeline and required applications have been correctly installed. See the :ref:`examples` section for a description of how to run this example project.
 
 .. Note::
-    The example project does not carry out rescaling using mapDamage by default. If you wish to test that the requirements for this feature have been installed correctly, then change the following line
+    The example project does not carry out rescaling using mapDamage by default. If you wish to test that the requirements for the mapDamage rescaling feature have been installed correctly, then change the following line
 
     .. code-block:: yaml
 


=====================================
docs/bam_pipeline/usage.rst
=====================================
@@ -6,25 +6,25 @@ Pipeline usage
 
 The following describes, step by step, the process of setting up a project for mapping FASTQ reads against a reference sequence using the BAM pipeline. For a detailed description of the configuration file (makefile) used by the BAM pipeline, please refer to the section :ref:`bam_makefile`, and for a detailed description of the files generated by the pipeline, please refer to the section :ref:`bam_filestructure`.
 
-The BAM pipeline is invoked using either the 'paleomix' command, which offers access to all tools included with PALEOMIX (see section :ref:`other_tools`). For the purpose of these instructions, we will make use of a tiny FASTQ data set included with PALEOMIX pipeline, consisting of synthetic FASTQ reads simulated against the human mitochondrial genome. To follow along, first create a local copy of the BAM pipeline example data:
+The BAM pipeline is invoked using the `paleomix` command, which offers access to the pipelines and to other tools included with PALEOMIX (see section :ref:`other_tools`). For the purpose of these instructions, we will make use of a tiny FASTQ data set included with the PALEOMIX pipeline, consisting of synthetic FASTQ reads simulated against the human mitochondrial genome. To follow along, first create a local copy of the example data-set:
 
 .. code-block:: bash
 
     $ paleomix bam example .
 
-This will create a folder named 'bam_pipeline' in the current folder, which contain the example FASTQ reads and a 'makefile' showcasing various features of the BAM pipeline ('makefile.yaml'). We will make use of a subset of the data, but we will not make use of the makefile. The data we will use consists of 3 simulated ancient DNA libraries (independent amplifications), for which either one or two lanes have been simulated:
+This will create a folder named `bam_pipeline` in the current folder, which contains the example FASTQ reads and a 'makefile' showcasing various features of the BAM pipeline (`makefile.yaml`). We will make use of a subset of the data, but we will not make use of the makefile. The data we will use consists of 3 simulated ancient DNA libraries (independent amplifications), for which either one or two lanes have been simulated:
 
-+-------------+------+------+-----------------------------+
-| Library     | Lane | Type | Files                       |
-+-------------+------+------+-----------------------------+
-| ACGATA      |    1 |   PE | data/ACGATA\_L1\_*.fastq.gz |
-+-------------+------+------+-----------------------------+
-| GCTCTG      |    1 |   SE | data/GCTCTG\_L1\_*.fastq.gz |
-+-------------+------+------+-----------------------------+
-| TGCTCA      |    1 |   SE | data/TGCTCA\_L1\_*.fastq.gz |
-+-------------+------+------+-----------------------------+
-|             |    2 |   PE | data/TGCTCA\_L2\_*.fastq.gz |
-+-------------+------+------+-----------------------------+
+  +-------------+------+------+-----------------------------+
+  | Library     | Lane | Type | Files                       |
+  +-------------+------+------+-----------------------------+
+  | ACGATA      |    1 |   PE | data/ACGATA\_L1\_*.fastq.gz |
+  +-------------+------+------+-----------------------------+
+  | GCTCTG      |    1 |   SE | data/GCTCTG\_L1\_*.fastq.gz |
+  +-------------+------+------+-----------------------------+
+  | TGCTCA      |    1 |   SE | data/TGCTCA\_L1\_*.fastq.gz |
+  +-------------+------+------+-----------------------------+
+  |             |    2 |   PE | data/TGCTCA\_L2\_*.fastq.gz |
+  +-------------+------+------+-----------------------------+
 
 
 .. warning::
@@ -43,9 +43,9 @@ To start a new project, we must first generate a makefile template using the fol
 .. code-block:: bash
 
     $ cd bam_pipeline/
-    $ paleomix bam mkfile > makefile.yaml
+    $ paleomix bam makefile > makefile.yaml
 
-Once you open the resulting file ('makefile.yaml') in your text editor of choice, you will find that BAM pipeline makefiles are split into 3 major sections, representing 1) the default options used for processing the data; 2) the reference genomes against which reads are to be mapped; and 3) sets of input files for one or more samples which is to be processed.
+Once you open the resulting file (`makefile.yaml`) in your text editor of choice, you will find that BAM pipeline makefiles are split into three major sections, representing 1) the default options; 2) the reference genomes against which reads are to be mapped; and 3) the input files for the samples which are to be processed.
 
 In a typical project, we will need to review the default options, add one or more reference genomes which we wish to target, and list the input data to be processed.
 
@@ -53,44 +53,48 @@ In a typical project, we will need to review the default options, add one or mor
 Default options
 ^^^^^^^^^^^^^^^
 
-The makefile starts with an "Options" section, which is applied to every set of input-files in the makefile unless explicitly overwritten for a given sample (this is described in the :ref:`bam_makefile` section). For most part, the default values should be suitable for a given project, but special attention should be paid to the following options (colons indicates subsections):
+The makefile starts with an `Options` section, which is applied to every set of input-files in the makefile unless explicitly overwritten for a given sample (this is described in the :ref:`bam_makefile` section). For the most part, the default values should be suitable for any given project, but special attention should be paid to the following options (double colons are used to separate subsections):
 
-**Options\:Platform**
+**Options \:\: Platform**
 
-    The sequencing platform used to generate the sequencing data; this information is recorded in the resulting BAM file, and may be used by downstream tools. The `SAM/BAM specification`_ the valid platforms, which currently include 'CAPILLARY', 'HELICOS', 'ILLUMINA', 'IONTORRENT', 'LS454', 'ONT', 'PACBIO', and 'SOLID'.
+    The sequencing platform used to generate the sequencing data; this information is recorded in the resulting BAM file, and may be used by downstream tools. The `SAM/BAM specification`_ lists the valid platforms, which currently include `CAPILLARY`, `HELICOS`, `ILLUMINA`, `IONTORRENT`, `LS454`, `ONT`, `PACBIO`, and `SOLID`.
 
-**Options\:QualityOffset**
+**Options \:\: QualityOffset**
 
-    The QualityOffset option refers to the starting ASCII value used to encode `Phred quality-scores`_ in user-provided FASTQ files, with the possible values of 33, 64, and 'Solexa'. For most modern data, this will be 33, corresponding to ASCII characters in the range '!' to 'J'. Older data is often encoded using the offset 64, corresponding to ASCII characters in the range '@' to 'h', and more rarely using Solexa quality-scores, which represent a different scheme than Phred scores, and which occupy the range of ASCII values from ';' to 'h'. For a visual representation of this, refer to the Wikipedia article linked above.
+    The QualityOffset option refers to the starting ASCII value used to encode `Phred quality-scores`_ in user-provided FASTQ files, with the possible values of 33, 64, and `Solexa`. For most modern data, this will be 33, corresponding to ASCII characters in the range `!` to `J`. Older data is often encoded using the offset 64, corresponding to ASCII characters in the range `@` to `h`, and more rarely using Solexa quality-scores, which represent a different scheme than Phred scores, and which occupy the range of ASCII values from `;` to `h`. For a visual representation of this, refer to the `Phred quality-scores`_ page.
 
 .. warning::
 
-    By default, the adapter trimming software used by PALEOMIX expects quality-scores no higher than 41, corresponding to the ASCII character 'J' when encoded using offset 33. If the input-data contains quality-scores higher greater than this value, then it is necessary to specify the maximum value using the '--qualitymax' command-line option. See below.
+    By default, the adapter trimming software used by PALEOMIX expects quality-scores no greater than 41, corresponding to the ASCII character `J` when encoded using offset 33. If the input-data contains quality-scores greater than this value, then it is necessary to specify the maximum value using the `--qualitymax` command-line option. See below.
 
 .. warning::
 
-    Presently, quality-offsets other than 33 are not supported when using the BWA 'mem' or the BWA 'bwasw' algorithms. To use these algorithms with quality-offset 64 data, it is therefore necessary to first convert these data to offset 33. This can be accomplished using the `seqtk`_ tool.
+    Presently, quality-offsets other than 33 are not supported when using the BWA `mem` or the BWA `bwasw` algorithms. To use these algorithms with quality-offset 64 data, it is therefore necessary to first convert these data to offset 33. This can be accomplished using the `seqtk`_ tool.
 
-**Options\:AdapterRemoval\:--adapter1**
-**Options\:AdapterRemoval\:--adapter2**
+**Options \:\: AdapterRemoval \:\: --adapter1** and **Options \:\: AdapterRemoval \:\: --adapter2**
 
-These two options are used to specify the adapter sequences used to identify and trim reads that contain adapter contamination using AdapterRemoval. Thus, the sequence provided for --adapter1 is expected to be found in the mate 1 reads, and the sequence specified for --adapter2 is expected to be found in the mate 2 reads. In both cases, these should be specified as in the orientation that appear in these files (i.e. it should be possible to grep the files for these, assuming that the reads were long enough, and treating Ns as wildcards). It is very important that these be specified correctly. Please refer to the `AdapterRemoval documentation`_ for more information.
+These two options are used to specify the adapter sequences used to identify and trim reads that contain adapter contamination using AdapterRemoval. Thus, the sequence provided for `--adapter1` is expected to be found in the mate 1 reads, and the sequence specified for `--adapter2` is expected to be found in the mate 2 reads. In both cases, these should be specified in the orientation in which they appear in these files (i.e. it should be possible to grep the files for these sequences, assuming that the reads were long enough, if you treat Ns as wildcards).
 
 
-**Aligners\:Program**
+.. warning::
+
+  It is very important that the correct adapter sequences are used. Please refer to the `AdapterRemoval documentation`_ for more information and for help identifying the adapters for paired-end reads.
+
+
+**Aligners \:\: Program**
 
-    The short read alignment program to use to map the (trimmed) reads to the reference genome. Currently, users many choose between 'BWA' and 'Bowtie2', with additional options available for each program.
+    The short read alignment program used to map the (trimmed) reads to the reference genome. Currently, users may choose between `BWA` and `Bowtie2`, with additional options available for each program.
 
-**Aligners\:BWA\:MinQuality** and **Aligners\:Bowtie2\:MinQuality**
+**Aligners \:\: \* \:\: MinQuality**
 
-    The minimum mapping quality of hits to retain during the mapping process. If this option is set to a non-zero value, any hits with a mapping quality below this value are removed from the resulting BAM file (this option does not apply to unmapped reads). If the final BAM should contain all reads in the input files, this option must be set to 0, and the 'FilterUnmappedReads' option set to 'no'.
+    The minimum mapping quality of hits to retain during the mapping process. If this option is set to a non-zero value, any hits with a mapping quality below this value are removed from the resulting BAM file (this option does not apply to unmapped reads). If the final BAM should contain all reads in the input files, this option must be set to 0, and the `FilterUnmappedReads` option set to `no`.
 
-**Aligners\:BWA\:UseSeed**
+**Aligners \:\: BWA \:\: UseSeed**
 
-    Enable/disable the use of a seed region when mapping reads using the BWA 'backtrack' alignment algorithm (the default). Disabling this option may yield some improvements in the alignment of highly damaged ancient DNA, at the cost of significantly increasing the running time. As such, this option is not recommended for modern samples [Schubert2012]_.
+    Enable/disable the use of a seed region when mapping reads using the BWA `backtrack` alignment algorithm (the default). Disabling this option may yield some improvements in the alignment of highly damaged ancient DNA, at the cost of significantly increasing the running time. As such, this option is not recommended for modern samples [Schubert2012]_.
 
 
-For the purpose of the example project, we need only change a few options. Since the reads were simulated using an Phred score offset of 33, there is no need to change the 'QualityOffset' option, and since the simulated adapter sequences matches the adapters that AdapterRemoval searches for by default, so we do not need to set eiter of '--adapter1' or '--adapter2'. We will, however, use the default mapping program (BWA) and algorithm ('backtrack'), but change the minimum mapping quality to 30 (corresponding to an error probability of 0.001). Changing the minimum quality is accomplished by locating the 'Aligners' section of the makefile, and changing the 'MinQuality' value from 0 to 30 (line 12):
+For the purpose of the example project, we need only change a few options. Since the reads were simulated using a Phred score offset of 33, there is no need to change the `QualityOffset` option, and since the simulated adapter sequences match the adapters that AdapterRemoval searches for by default, we do not need to set either of `--adapter1` or `--adapter2`. We will, however, use the default mapping program (BWA) and algorithm (`backtrack`), but change the minimum mapping quality to 30 (corresponding to an error probability of 0.001). Changing the minimum quality is accomplished by locating the `Aligners` section of the makefile, and changing the `MinQuality` value from 0 to 30 (line 40):
 
 .. code-block:: yaml
     :emphasize-lines: 12
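As a side note to the `QualityOffset` values discussed above, the offset arithmetic can be sketched directly in the shell; Q40 is chosen because it falls within the documented range of both encodings:

```shell
# Encode a Phred score under offsets 33 and 64 (see QualityOffset above).
score=40
for offset in 33 64; do
    # Convert the numeric ASCII code to its character via an octal escape.
    char="$(printf "\\$(printf '%03o' $((score + offset)))")"
    printf 'offset %s encodes Q%s as %s\n' "$offset" "$score" "$char"
done
```

This prints `offset 33 encodes Q40 as I` and `offset 64 encodes Q40 as h`, consistent with the `!`..`J` and `@`..`h` ranges noted above.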
@@ -116,7 +120,7 @@ For the purpose of the example project, we need only change a few options. Since
         # have few errors (sets "-l"). See http://pmid.us/22574660
         UseSeed: yes
 
-Since the data we will be mapping represents (simulated) ancient DNA, we will furthermore set the UseSeed option to 'no' (line 18), in order to recover a small additional amount of alignments during mapping (c.f. [Schubert2012]_):
+Since the data we will be mapping represents (simulated) ancient DNA, we will furthermore set the UseSeed option to `no` (line 55), in order to recover a small additional amount of alignments during mapping (see [Schubert2012]_):
 
 .. code-block:: yaml
     :emphasize-lines: 18
@@ -148,24 +152,24 @@ Once this is done, we can proceed to specify the location of the reference genom
 Reference genomes (prefixes)
 ----------------------------
 
-Mapping is carried out using one or more reference genomes (or other sequences) in the form of FASTA files, which are indexed for use in read mapping (automatically, by the pipeline) using either the "bwa index" or "bowtie2-build" commands. Since sequence alignment index are generated at the location of these files, reference genomes are also referred to as "prefixes" in the documentation. In other words, using BWA as an example, the PALEOMIX pipeline will generate a index (prefix) of the reference genome using a command corresponding to the following, for BWA:
+Mapping is carried out using one or more reference genomes (or other sequences) in the form of FASTA files, which are indexed for use in read mapping (automatically, by the pipeline) using either the `bwa index` or `bowtie2-build` commands. Since sequence alignment indexes are generated at the location of these files, reference genomes are also referred to as "prefixes" in the documentation. In other words, using BWA as an example, the PALEOMIX pipeline will generate an index (prefix) of the reference genome using a command corresponding to the following:
 
 .. code-block:: bash
 
-    $ bwa index prefixes/my_genome.fa
+    $ bwa index prefixes/my_genome.fasta
 
-In addition to the BWA / Bowtie2 index, several other related files are also automatically generated, including a FASTA index file (.fai), which are required for various operations of the pipeline. These are similarly located at the same folder as the reference FASTA file. For a more detailed description, please refer to the :ref:`bam_filestructure` section.
+In addition to the BWA / Bowtie2 index, several other related files are also automatically generated, including a FASTA index file (`.fai`); these files are required for various operations of the pipeline and are placed in the same folder as the reference FASTA file. For a more detailed description, please refer to the :ref:`bam_filestructure` section.
 
 .. warning::
     Since the pipeline automatically carries out indexing of the FASTA files, it therefore requires write-access to the folder containing the FASTA files. If this is not possible, one may simply create a local folder containing symbolic links to the original FASTA file(s), and point the makefile to this location. All automatically generated files will then be placed in this location.
 
-Specifying which FASTA file to align sequences is accomplished by listing these in the "Prefixes" section in the makefile. For example, assuming that we had a FASTA file named "my\_genome.fasta" which is located in the folder "my\_prefixes", the following might be used::
+Specifying which FASTA files to align sequences against is accomplished by listing these in the `Prefixes` section in the makefile. For example, assuming that we had a FASTA file named `my\_genome.fasta` located in the `my\_prefixes` folder, the following might be used::
 
     Prefixes:
       my_genome:
         Path: my_prefixes/my_genome.fasta
 
-The name of the prefix (here 'my\_genome') will be used to name the resulting files and in various tables that are generated by the pipeline. Typical names include 'hg19', 'EquCab20', and other standard abbreviations for reference genomes, accession numbers, and the like. Multiple prefixes can be specified, but each name MUST be unique::
+The name of the prefix (here `my\_genome`) will be used to name the resulting files and in various tables that are generated by the pipeline. Typical names include `hg19`, `EquCab20`, and other standard abbreviations for reference genomes, accession numbers, and the like. Multiple prefixes can be specified, but each name MUST be unique::
 
     Prefixes:
       my_genome:
@@ -173,7 +177,7 @@ The name of the prefix (here 'my\_genome') will be used to name the resulting fi
       my_other_genome:
         Path: my_prefixes/my_other_genome.fasta
 
-In the case of this example project, we will be mapping our data against the revised Cambridge Reference Sequence (rCRS) for the human mitochondrial genome, which is included in examples folder under 'prefixes', as a file named 'rCRS.fasta'. To add it to the makefile, locate the 'Prefixes' section located below the 'Options' section, and update it as described above (lines 5 and 7):
+In the case of this example project, we will be mapping our data against the revised Cambridge Reference Sequence (rCRS) for the human mitochondrial genome, which is included in the examples folder under `prefixes`, as a file named `rCRS.fasta`. To add it to the makefile, locate the `Prefixes` section below the `Options` section, and update it as described above (lines 115 and 119):
 
 .. code-block:: yaml
     :emphasize-lines: 6,10
@@ -191,13 +195,13 @@ In the case of this example project, we will be mapping our data against the rev
         # recommended (e.g. /path/to/Human_g1k_v37.fasta should be named 'Human_g1k_v37').
         Path: prefixes/rCRS.fasta
 
-Once this is done, we may specify the input data that we wish the pipeline to process for us.
+Once this is done, we may specify the input data that we want the pipeline to process.
 
 
 Specifying read data
 --------------------
 
-A single makefile may be used to process one or more samples, to generate one or more BAM files and supplementary statistics. In this project we will only deal with a single sample, which we accomplish by adding creating our own section at the end of the makefile. The first step is to determine the name for the files generated by the BAM pipeline. Specifically, we will specify a name which is prefixed to all output generated for our sample (here named 'MyFilename'), by adding the following line to the end of the makefile:
+A single makefile may be used to process one or more samples, to generate one or more BAM files and supplementary statistics. In this project we will only deal with a single sample, which we accomplish by creating our own section at the end of the makefile. The first step is to determine the name for the files generated by the BAM pipeline. Specifically, we will specify a name which is prefixed to all output generated for our sample (here named `MyFilename`), by adding the following line to the end of the makefile:
 
 .. code-block:: yaml
     :linenos:
@@ -217,21 +221,21 @@ This first name, or grouping, is referred to as the target, and typically corres
     MyFilename:
       MySample:
 
-Similarly, we need to specify the name of each library in our dataset. By convention, I often use the index used to construct the library as the library name (which allows for easy identification), but any name may be used for a library, provided that it unique to that sample. As described near the start of this document, we are dealing with 3 libraries:
+Similarly, we need to specify the name of each library in our data set. By convention, I often use the index used to construct the library as the library name (which allows for easy identification), but any name may be used for a library, provided that it is unique to that sample. As described near the start of this document, we are dealing with 3 libraries:
 
-+-------------+------+------+-----------------------------+
-| Library     | Lane | Type | Fiels                       |
-+-------------+------+------+-----------------------------+
-| ACGATA      |    1 |   PE | data/ACGATA\_L1\_*.fastq.gz |
-+-------------+------+------+-----------------------------+
-| GCTCTG      |    1 |   SE | data/GCTCTG\_L1\_*.fastq.gz |
-+-------------+------+------+-----------------------------+
-| TGCTCA      |    1 |   SE | data/TGCTCA\_L1\_*.fastq.gz |
-+-------------+------+------+-----------------------------+
-|             |    2 |   PE | data/TGCTCA\_L2\_*.fastq.gz |
-+-------------+------+------+-----------------------------+
+  +-------------+------+------+-----------------------------+
+  | Library     | Lane | Type | Files                       |
+  +-------------+------+------+-----------------------------+
+  | ACGATA      |    1 |   PE | data/ACGATA\_L1\_*.fastq.gz |
+  +-------------+------+------+-----------------------------+
+  | GCTCTG      |    1 |   SE | data/GCTCTG\_L1\_*.fastq.gz |
+  +-------------+------+------+-----------------------------+
+  | TGCTCA      |    1 |   SE | data/TGCTCA\_L1\_*.fastq.gz |
+  +-------------+------+------+-----------------------------+
+  |             |    2 |   PE | data/TGCTCA\_L2\_*.fastq.gz |
+  +-------------+------+------+-----------------------------+
 
-It is important to correctly specify the libraries, since the pipeline will not only use this information for summary statistics and record it in the resulting BAM files, but will also carry out filtering of PCR duplicates (and other analyses) on a per-library basis. Wrongly grouping together data will therefore result in a loss of useful alignments wrongly identified as PCR duplicates, or, similarly, in the inclusion of reads that should have been filtered as PCR duplicates. The library names are added below the name of the sample ('MySample'), in a similar manner to the sample itself:
+It is important to correctly specify the libraries, since the pipeline will not only use this information for summary statistics and record it in the resulting BAM files, but will also carry out filtering of PCR duplicates (and other analyses) on a per-library basis. Wrongly grouping together data will therefore result in a loss of useful alignments wrongly identified as PCR duplicates, or, similarly, in the inclusion of reads that should have been filtered as PCR duplicates. The library names are added below the name of the sample (`MySample`), in a similar manner to the sample itself:
 
 .. code-block:: yaml
     :linenos:
@@ -246,7 +250,7 @@ It is important to correctly specify the libraries, since the pipeline will not
 
         TGCTCA:
 
-The final step involves specifying the location of the raw FASTQ reads that should be processed for each library, and consists of specifying one or more "lanes" of reads, each of which must be given a unique name. For single-end reads, this is accomplished simply by providing a path (with optional wildcards) to the location of the file(s). For example, for lane 1 of library ACGATA, the files are located at data/ACGATA\_L1\_*.fastq.gz:
+The final step involves specifying the location of the raw FASTQ reads that should be processed for each library, and consists of specifying one or more "lanes" of reads, each of which must be given a unique name. For single-end reads, this is accomplished simply by providing a path (with optional wildcards) to the location of the file(s). For example, for lane 1 of library ACGATA, the files are located at `data/ACGATA\_L1\_*.fastq.gz`:
 
 .. code-block:: bash
 
@@ -286,7 +290,7 @@ Specifying the location of paired-end data is slightly more complex, since the p
     data/ACGATA_L1_R2_03.fastq.gz
     data/ACGATA_L1_R2_04.fastq.gz
 
-Knowing how that the files contain a number specifying which file in a pair they correspond to, we can then construct a path that includes the keyword '{Pair}' in place of that number. For the above example, that path would therefore be 'data/ACGATA\_L1\_R{Pair}_*.fastq.gz' (corresponding to 'data/ACGATA\_L1\_R[12]_*.fastq.gz'):
+Knowing that the files contain a number specifying which file in a pair they correspond to, we can then construct a path that includes the keyword `{Pair}` in place of that number. For the above example, that path would therefore be `data/ACGATA\_L1\_R{Pair}_*.fastq.gz` (corresponding to `data/ACGATA\_L1\_R[12]_*.fastq.gz`):
 
 .. code-block:: yaml
     :linenos:
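As a short aside on the hunk above, the `{Pair}` substitution it documents can be illustrated with a hypothetical Python sketch (the pipeline's own implementation may differ; the filenames are those of the example project):

```python
from glob import glob

# Hypothetical sketch of '{Pair}' templates: the mate number (1 or 2)
# replaces '{Pair}' in the path template before wildcard expansion.
template = "data/ACGATA_L1_R{Pair}_*.fastq.gz"
for mate in (1, 2):
    pattern = template.format(Pair=mate)
    # glob() returns the matching files, if any exist on disk
    print(pattern, "->", sorted(glob(pattern)))
```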
@@ -469,7 +473,7 @@ With this makefile in hand, the pipeline may be executed using the following com
 
     $ paleomix bam run makefile.yaml
 
-The pipeline will run as many simultaneous processes as there are cores in the current system, but this behavior may be changed by using the '--max-threads' command-line option. Use the '--help' command-line option to view additional options available when running the pipeline. By default, output files are placed in the same folder as the makefile, but this behavior may be changed by setting the '--destination' command-line option. For this projects, these files include the following:
+The pipeline will run as many simultaneous processes as there are cores in the current system, but this behavior may be changed by using the `--max-threads` command-line option. Use the `--help` command-line option to view additional options available when running the pipeline. By default, output files are placed in the same folder as the makefile, but this behavior may be changed by setting the `--destination` command-line option. For this project, these files include the following:
 
 .. code-block:: bash
 
@@ -480,10 +484,10 @@ The pipeline will run as many simultaneous processes as there are cores in the c
     MyFilename.rCRS.mapDamage
     MyFilename.summary
 
-The files include a table of the average coverage, a histogram of the per-site coverage (depths), a folder containing one set of mapDamage plots per library, and the final BAM file and its index (the .bai file), as well as a table summarizing the entire analysis. For a more detailed description of the files generated by the pipeline, please refer to the :ref:`bam_filestructure` section; should problems occur during the execution of the pipeline, then please verify that the makefile is correctly filled out as described above, and refer to the :ref:`troubleshooting_bam` section.
+The files include a table of the average coverage, a histogram of the per-site coverage (depths), a folder containing one set of mapDamage plots per library, and the final BAM file and its index (the `.bai` file), as well as a table summarizing the entire analysis. For a more detailed description of the files generated by the pipeline, please refer to the :ref:`bam_filestructure` section; should problems occur during the execution of the pipeline, then please verify that the makefile is correctly filled out as described above, and refer to the :ref:`troubleshooting_bam` section.
 
 .. note::
-    The first item, 'MyFilename', is a folder containing intermediate files generated while running the pipeline, required due to the many steps involved in a typical analyses, and which also allows for the pipeline to resume should the process be interrupted. This folder will typically take up 3-4x the disk-space used by the final BAM file(s), and can safely be removed once the pipeline has run to completion, in order to reduce disk-usage.
+    The first item, `MyFilename`, is a folder containing intermediate files generated while running the pipeline, required due to the many steps involved in a typical analysis, and which also allows the pipeline to resume should the process be interrupted. This folder will typically take up 3-4x the disk-space used by the final BAM file(s), and can safely be removed once the pipeline has run to completion, in order to reduce disk-usage.
 
 
 .. _SAM/BAM specification: http://samtools.sourceforge.net/SAM1.pdf


=====================================
docs/conf.py
=====================================
@@ -26,7 +26,7 @@ author = "Mikkel Schubert"
 # The short X.Y version
 version = "1.3"
 # The full version, including alpha/beta/rc tags
-release = "1.3.2"
+release = "1.3.3"
 
 
 # -- General configuration ---------------------------------------------------


=====================================
docs/installation.rst
=====================================
@@ -7,7 +7,7 @@ Installation
 
 The following instructions will install PALEOMIX for the current user, but does not include specific programs required by the pipelines. For pipeline specific instructions, refer to the requirements sections for the :ref:`BAM <bam_requirements>`, the :ref:`Phylogentic <phylo_requirements>`, and the :ref:`Zonkey <zonkey_requirements>` pipeline.
 
-The recommended way of installing PALEOMIX is by use of the `pip`_ package manager for Python 3. If `pip` is not installed, then please consult the documentation for your operating system. For Debian based operating systems, `pip` may be installed as follows::
+The recommended way of installing PALEOMIX is by use of the `pip`_ package manager for Python 3. If pip is not installed, then please consult the documentation for your operating system. For Debian based operating systems, pip may be installed as follows::
 
     $ sudo apt-get install python3-pip
 
@@ -51,16 +51,14 @@ Once `venv` is installed, creation of a virtual environment and installation of
 .. parsed-literal::
 
     $ python3 -m venv venv
-    $ source ./venv/bin/activate
-    $ (venv) pip install paleomix==\ |release|
-    $ (venv) deactivate
+    $ ./venv/bin/pip install paleomix==\ |release|
 
-Following successful completion of these commands, the PALEOMIX tools will be accessible in the `./venv/bin/` folder. However, as this folder also contains a copy of Python itself, it is not recommended to add it to your `PATH`. Instead, simply link the `paleomix` commands to a folder in your `PATH`. This can be accomplished as follows::
+Following successful completion of these commands, the `paleomix` executable will be accessible in the `./venv/bin/` folder. However, as this folder also contains a copy of Python itself, it is not recommended to add it to your `PATH`. Instead, simply link the `paleomix` executable to a folder in your `PATH`. This can be accomplished as follows::
 
     $ mkdir -p ~/.local/bin/
     $ ln -s ${PWD}/venv/bin/paleomix ~/.local/bin/
 
-If ~/.local/bin is not already in your PATH, then it can be added as follows:
+If ~/.local/bin is not already in your PATH, then it can be added as follows::
 
     $ echo 'export PATH=~/.local/bin:$PATH' >> ~/.bashrc
 
@@ -72,15 +70,13 @@ Upgrade an existing installation of PALEOMIX, installed using the methods descri
 
     $ pip install --upgrade paleomix
 
-To upgrade an installation a self-contained installation, activate the environment before calling `pip`::
+To upgrade a self-contained installation, simply call the `pip` executable in that environment::
 
-    $ source ./venv/bin/activate
-    $ (paleomix) pip install --upgrade paleomix
-    $ (paleomix) deactivate
+    $ ./venv/bin/pip install --upgrade paleomix
 
 
 Conda installation
--------------------
+------------------
 
 To have a completely contained environment that includes all software dependencies, you can create a `conda`_ environment.
 
@@ -100,19 +96,13 @@ You can now activate the paleomix environment with::
 
     $ conda activate paleomix
 
-Next, install PALEOMIX in the activated environment using pip:
-
-.. parsed-literal::
-
-    $ (paleomix) pip install paleomix==\ |release|
-
 PALEOMIX requires that the Picard JAR file can be found in a specific location, so we can symlink the versions in your conda environment into the correct place::
 
     $ (paleomix) mkdir -p ~/install/jar_root/
-    $ (paleomix) ln -s ~/miniconda*/envs/paleomix/share/picard-*/picard.jar ~/install/jar_root/
+    $ (paleomix) ln -s ~/*conda*/envs/paleomix/share/picard-*/picard.jar ~/install/jar_root/
 
 .. note::
-    If you installed miniconda in a different location, then you can obtain the location of the `paleomix` environment by running `conda env list`.
+    If you installed conda in a different location, then you can obtain the location of the `paleomix` environment by running `conda env list`.
 
 Once completed, you can test the environment works correctly using the pipeline test commands described in :ref:`examples`.
 


=====================================
docs/other_tools.rst
=====================================
@@ -3,7 +3,7 @@
 Other tools
 ===========
 
-On top of the pipelines described in the major sections of the documentation, the pipeline comes bundled with several other, smaller tools, all accessible via the 'paleomix' command. These tools are (briefly) described in this section.
+On top of the pipelines described in the major sections of the documentation, the pipeline comes bundled with several other, smaller tools, all accessible via the `paleomix` command. These tools are (briefly) described in this section.
 
 
 paleomix coverage
@@ -21,15 +21,15 @@ Calculates a depth histogram for a BAM file, either for the entire genome or for
 paleomix rmdup_collapsed
 ------------------------
 
-Filters PCR duplicates for merged/collapsed paired-ended reads, such as those generated by AdapterRemoval with the --collapse option enabled. Unlike `SAMtools rmdup` or `Picard MarkDuplicates`, this tool identifies duplicates based on both the 5' and the 3' alignment coordinates of individual reads.
+Filters PCR duplicates for merged/collapsed paired-ended reads, such as those generated by AdapterRemoval with the `--collapse` option enabled. Unlike `SAMtools rmdup` or `Picard MarkDuplicates`, this tool identifies duplicates based on both the 5' and the 3' alignment coordinates of individual reads.
 
 paleomix vcf_filter
 -------------------
 
-Quality filters for VCF records, similar to 'vcfutils.pl varFilter'.
+Quality filters for VCF records, similar to `vcfutils.pl varFilter`.
 
 
 paleomix vcf_to_fasta
 ---------------------
 
-The 'paleomix vcf\_to\_fasta' command is used to generate FASTA sequences from a VCF file, based either on a set of BED coordinates provided by the user, or for the entire genome covered by the VCF file. By default, heterzyous SNPs are represented using IUPAC codes.
+The `paleomix vcf\_to\_fasta` command is used to generate FASTA sequences from a VCF file, based either on a set of BED coordinates provided by the user, or for the entire genome covered by the VCF file. By default, heterozygous SNPs are represented using IUPAC codes.


=====================================
docs/yaml.rst
=====================================
@@ -7,17 +7,17 @@ YAML usage in PALEOMIX
 `YAML`_ is a simple markup language adopted for use in configuration files by pipelines included in PALEOMIX. YAML was chosen because it is a plain-text format that is easy to read and write by hand. Since YAML files are plain-text, they may be edited using any standard text editors, with the following caveats:
 
 * YAML exclusively uses spaces for indentation, not tabs; attempting to use tabs in YAML files will cause failures when the file is read by the pipelines.
-* YAML is case-sensitive; an option such as 'QualityOffset' is not the same as 'qualityoffset'.
-* It is strongly recommended that all files be named using the '.yaml' file-extension; setting the extension helps ensure proper handling by editors that natively support the YAML format.
+* YAML is case-sensitive; an option such as `QualityOffset` is not the same as `qualityoffset`.
+* It is strongly recommended that all files be named using the `.yaml` file-extension; setting the extension helps ensure proper handling by editors that natively support the YAML format.
 
-Only a subset of YAML features are actually used by PALEOMIX, which are described below. These include **mappings**, by which values are identified by names; **lists** of values; and **numbers**, **text-strings**, and **true** / **false** values, typically representing program options, file-paths, and the like. In addition, comments prefixed by the hash-sign (#) are frequently used to provide documentation.
+Only a subset of YAML features are actually used by PALEOMIX, which are described below. These include **mappings**, by which values are identified by names; **lists** of values; and **numbers**, **text-strings**, and **true** / **false** values, typically representing program options, file-paths, and the like. In addition, comments prefixed by the hash-sign (`#`) are frequently used to provide documentation.
 
 
 
 Comments
 --------
 
-Comments are specified by prefixing unquoted text with the hash-sign (#); all comments are ignored, and have no effect on the operation of the program. Comments are used solely to document the YAML files used by the pipelines::
+Comments are specified by prefixing unquoted text with the hash-sign (`#`); all comments are ignored, and have no effect on the operation of the program. Comments are used solely to document the YAML files used by the pipelines::
 
     # This is a comment; the next line contains both a value and a comment:
     123  # Comments may be placed on the same line as values.
@@ -63,7 +63,7 @@ And similarly, the following values are all interpreted as *false*::
     no
     off
 
-Template files included with the pipelines mostly use 'yes' and 'no', but either of the above corresponding values may be used. Note however that none of these values are quoted: If single or double-quotations were used, then these vales would be read as text rather than truth-values, as described next.
+Template files included with the pipelines mostly use `yes` and `no`, but either of the above corresponding values may be used. Note however that none of these values are quoted: if single or double-quotations were used, then these values would be read as text rather than truth-values, as described next.
 
 
 Text (strings)
@@ -90,7 +90,7 @@ For most part it is not necessary to use quotation marks, and the above could in
 
     /path/to/my/files/reads.fastq
 
-However, it is important to make sure that values that are intended to be used strings are not mis-interpreted as a different type of value. For example, without the quotation marks the following values would be interpreted as numbers or truth-values::
+However, it is important to make sure that values that are intended to be used as strings are not misinterpreted as a different type of value. For example, without the quotation marks the following values would be interpreted as numbers or truth-values::
 
     "true"
 
@@ -137,7 +137,7 @@ Mappings can be nested any number of times, which is used in this manner to crea
         Option1: /path/to/file.fastq
         Option2: no
 
-Note that the two mappings belonging to the 'Option' mapping are both indented the same number of spaces, which is what allows the program to figure out which values belong to what label. It is therefore important to keep indentation consistent.
+Note that the two mappings belonging to the `Option` mapping are both indented the same number of spaces, which is what allows the program to figure out which values belong to what label. It is therefore important to keep indentation consistent.
 
 Lists of values
 ---------------


=====================================
paleomix/__init__.py
=====================================
@@ -21,5 +21,5 @@
 # SOFTWARE.
 #
 
-__version_info__ = (1, 3, 2)
+__version_info__ = (1, 3, 3)
 __version__ = "%i.%i.%i" % __version_info__


=====================================
paleomix/nodes/picard.py
=====================================
@@ -61,7 +61,7 @@ class ValidateBAMNode(PicardNode):
         builder = picard_command(config, "ValidateSamFile")
         _set_max_open_files(builder, "MAX_OPEN_TEMP_FILES")
 
-        if True or big_genome_mode:
+        if big_genome_mode:
             self._configure_for_big_genome(config, builder)
 
         builder.set_option("I", "%(IN_BAM)s", sep="=")
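The one-line fix above removes a leftover debugging guard: `True or x` short-circuits to `True` for any `x`, so the big-genome configuration was applied to every BAM regardless of the flag, which is the "BAM validation always being run in big-genome mode" bug noted in the changelog. A minimal sketch of the logic (the `select_profile*` helpers are hypothetical, not PALEOMIX functions):

```python
def select_profile(big_genome_mode):
    # Buggy variant: 'True or big_genome_mode' always evaluates to True,
    # so the big-genome branch is taken unconditionally.
    if True or big_genome_mode:
        return "big-genome"
    return "standard"


def select_profile_fixed(big_genome_mode):
    # Fixed variant: the branch is only taken when the flag is set.
    if big_genome_mode:
        return "big-genome"
    return "standard"
```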


=====================================
paleomix/pipelines/bam/parts/summary.py
=====================================
@@ -125,7 +125,12 @@ class SummaryTableNode(Node):
         for (_, prefix) in sorted(self._prefixes.items()):
             stats = genomes[prefix["Name"]]
             rows.append(
-                (prefix["Name"], stats["NContigs"], stats["Size"], prefix["Path"],)
+                (
+                    prefix["Name"],
+                    stats["NContigs"],
+                    stats["Size"],
+                    prefix["Path"],
+                )
             )
 
         for line in text.padded_table(rows):
@@ -248,19 +253,19 @@ class SummaryTableNode(Node):
                 total_reads = subtables.get("reads", {}).get("seq_retained_reads", 0)
                 genome_size = genomes[tblname]["Size"]
 
-                subtable["hits_raw_frac(%s)" % tblname] = total_hits / (
+                subtable["hits_raw_frac(%s)" % tblname] = total_hits / float(
                     total_reads or "NaN"
                 )
-                subtable["hits_unique_frac(%s)" % tblname] = total_uniq / (
+                subtable["hits_unique_frac(%s)" % tblname] = total_uniq / float(
                     total_reads or "NaN"
                 )
-                subtable["hits_clonality(%s)" % tblname] = 1 - total_uniq / (
+                subtable["hits_clonality(%s)" % tblname] = 1 - total_uniq / float(
                     total_hits or "NaN"
                 )
-                subtable["hits_length(%s)" % tblname] = total_nts / (
+                subtable["hits_length(%s)" % tblname] = total_nts / float(
                     total_uniq or "NaN"
                 )
-                subtable["hits_coverage(%s)" % tblname] = total_nts / (
+                subtable["hits_coverage(%s)" % tblname] = total_nts / float(
                     genome_size or "NaN"
                 )
 
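The `float(...)` wrappers above are what fix the changelog's zero-hits regression: in Python 3, `total_reads or "NaN"` evaluates to the string `"NaN"` when the count is zero, and dividing an integer by a string raises `TypeError`. Wrapping the fallback in `float()` yields a real NaN, so empty inputs produce NaN fractions instead of a crash. A minimal sketch of the idiom (`safe_frac` is a hypothetical helper, not part of PALEOMIX):

```python
import math


def safe_frac(numerator, denominator):
    # 'denominator or "NaN"' falls back to the string "NaN" when the
    # denominator is zero; float() turns that fallback into a real NaN,
    # so the division yields NaN rather than raising TypeError.
    return numerator / float(denominator or "NaN")


print(safe_frac(10, 5))  # 2.0
print(safe_frac(10, 0))  # nan
```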


=====================================
paleomix_environment.yaml
=====================================
@@ -7,7 +7,7 @@ dependencies:
   - adapterremoval>=2.2.0
   - samtools>=1.3.0
   - picard>=1.137
-  - bowtie2>=2.3.0
+  - bowtie2>=2.3.0,<2.4.0
   - bwa>=0.7.15
   - mapdamage2>=2.2.1
   - r-base
@@ -16,4 +16,4 @@ dependencies:
   - r-gam
   - r-inline
   - pip:
-    - paleomix==1.3.2
+    - paleomix==1.3.3



View it on GitLab: https://salsa.debian.org/med-team/paleomix/-/compare/5fd23c0144f51a5b82a5beb1bcd67eef4fd2ad04...ee8247d9190b6217892cba9a37476e2c92024913
