[med-svn] [Git][med-team/srst2][master] 10 commits: Use 2to3 to port from Python2 to Python3

Andreas Tille gitlab at salsa.debian.org
Wed Dec 11 15:09:58 GMT 2019



Andreas Tille pushed to branch master at Debian Med / srst2


Commits:
6fc82fcc by Andreas Tille at 2019-12-11T14:22:34Z
Use 2to3 to port from Python2 to Python3

- - - - -
668c7961 by Andreas Tille at 2019-12-11T14:26:35Z
Use Python3 in packaging

- - - - -
7c84fa63 by Andreas Tille at 2019-12-11T14:27:51Z
routine-update: debhelper-compat 12

- - - - -
8d9e4aa1 by Andreas Tille at 2019-12-11T14:28:04Z
routine-update: Standards-Version: 4.4.1

- - - - -
bd975a47 by Andreas Tille at 2019-12-11T14:28:04Z
R-U: Trailing whitespace in debian/rules

- - - - -
5aa85cb7 by Andreas Tille at 2019-12-11T14:28:04Z
R-U: autopkgtest: s/ADTTMP/AUTOPKGTEST_TMP/g

- - - - -
3987fe8b by Andreas Tille at 2019-12-11T14:29:01Z
Set upstream metadata fields: Repository, Repository-Browse.

- - - - -
403e62cd by Andreas Tille at 2019-12-11T14:32:57Z
Use buildsystem=pybuild

- - - - -
8e3509ba by Andreas Tille at 2019-12-11T15:05:24Z
Use markdown instead of python-markdown

- - - - -
7ea98208 by Andreas Tille at 2019-12-11T15:09:39Z
Add TODO

- - - - -


9 changed files:

- debian/changelog
- − debian/compat
- debian/control
- + debian/patches/2to3.patch
- debian/patches/series
- debian/rules
- debian/tests/control
- debian/tests/run-unit-test
- debian/upstream/metadata


Changes:

=====================================
debian/changelog
=====================================
@@ -1,3 +1,55 @@
+srst2 (0.2.0-7) UNRELEASED; urgency=medium
+
+  * Use 2to3 to port from Python2 to Python3
+    Closes: #938560
+  * debhelper-compat 12
+  * Standards-Version: 4.4.1
+  * Remove trailing whitespace in debian/rules
+  * autopkgtest: s/ADTTMP/AUTOPKGTEST_TMP/g
+  * Set upstream metadata fields: Repository, Repository-Browse.
+  * Use markdown instead of python-markdown
+  
+  TODO:
+cd tests && python3 test_slurm_srst2.py && python3 test_srst2.py
+...........
+----------------------------------------------------------------------
+Ran 11 tests in 0.024s
+
+OK
+...........EE....
+======================================================================
+ERROR: test_get_pileup_with_defaults (__main__.TestMPileup)
+----------------------------------------------------------------------
+Traceback (most recent call last):
+  File "/usr/lib/python3/dist-packages/mock/mock.py", line 1330, in patched
+    return func(*args, **keywargs)
+  File "test_srst2.py", line 294, in test_get_pileup_with_defaults
+    'bowtie_sam_mod', 'fasta', 'pileup')
+  File "/build/srst2-0.2.0/scripts/srst2.py", line 735, in get_pileup
+    if args.threads > 1 and samtools_v1:
+TypeError: '>' not supported between instances of 'MagicMock' and 'int'
+
+======================================================================
+ERROR: test_get_pileup_with_overides (__main__.TestMPileup)
+----------------------------------------------------------------------
+Traceback (most recent call last):
+  File "/usr/lib/python3/dist-packages/mock/mock.py", line 1330, in patched
+    return func(*args, **keywargs)
+  File "test_srst2.py", line 242, in test_get_pileup_with_overides
+    'bowtie_sam_mod', 'fasta', 'pileup')
+  File "/build/srst2-0.2.0/scripts/srst2.py", line 735, in get_pileup
+    if args.threads > 1 and samtools_v1:
+TypeError: '>' not supported between instances of 'MagicMock' and 'int'
+
+----------------------------------------------------------------------
+Ran 17 tests in 0.042s
+
+FAILED (errors=2)
+make[1]: *** [debian/rules:34: override_dh_auto_test] Error 1
+
+
+ -- Andreas Tille <tille at debian.org>  Wed, 11 Dec 2019 15:20:13 +0100
+
 srst2 (0.2.0-6) unstable; urgency=medium
 
   * Respect DEB_BUILD_OPTIONS in override_dh_auto_test
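
[Editor's note on the TODO above: the two failing tests compare a bare MagicMock
attribute against an integer. Python 2 allowed ordering comparisons between
arbitrary types, but Python 3 raises TypeError, which is exactly the
`'>' not supported between instances of 'MagicMock' and 'int'` seen in the
traceback. A minimal sketch of the failure and one possible fix, using
hypothetical names rather than the actual srst2 test code:]

```python
from unittest import mock

# An unconfigured MagicMock attribute cannot be ordered against an int
# in Python 3; this reproduces the TypeError from the test log above.
args = mock.MagicMock()
try:
    args.threads > 1
except TypeError as e:
    print("comparison failed:", e)

# Giving the mocked attribute a concrete numeric value restores the
# behaviour the Python 2 tests relied on.
args.threads = 4
print(args.threads > 1)  # True
```

[A likely fix in the tests themselves would be to set `args.threads` to a real
integer on the mock object before calling `get_pileup` — an assumption, since
the test code is not shown here.]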


=====================================
debian/compat deleted
=====================================
@@ -1 +0,0 @@
-11


=====================================
debian/control
=====================================
@@ -3,31 +3,31 @@ Maintainer: Debian Med Packaging Team <debian-med-packaging at lists.alioth.debian.
 Uploaders: Andreas Tille <tille at debian.org>
 Section: science
 Priority: optional
-Build-Depends: debhelper (>= 11~),
+Build-Depends: debhelper-compat (= 12),
                dh-python,
-               python-all,
-               python-setuptools,
-               python-markdown,
+               python3-all,
+               python3-setuptools,
+               markdown,
                dos2unix,
-               python-mock,
-               python-scipy,
-               bowtie2,
-               samtools
-Standards-Version: 4.2.1
+               python3-mock <!nocheck>,
+               python3-scipy <!nocheck>,
+               bowtie2 <!nocheck>,
+               samtools <!nocheck>
+Standards-Version: 4.4.1
 Vcs-Browser: https://salsa.debian.org/med-team/srst2
 Vcs-Git: https://salsa.debian.org/med-team/srst2.git
 Homepage: https://katholt.github.io/srst2/
 
 Package: srst2
 Architecture: any
-Depends: ${python:Depends},
+Depends: ${python3:Depends},
          ${misc:Depends},
          bowtie2,
          cd-hit,
          samtools,
-         python-scipy,
-         python-biopython
-Recommends: python-rpy2
+         python3-scipy,
+         python3-biopython
+Recommends: python3-rpy2
 Description: Short Read Sequence Typing for Bacterial Pathogens
  This program is designed to take Illumina sequence data, a MLST database
  and/or a database of gene sequences (e.g. resistance genes, virulence
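
[Editor's note: most of the 2to3.patch below is mechanical 2to3 output —
`print` statements become function calls (with `end=' '` replacing the
Python 2 trailing comma), `urlparse` imports move to `urllib.parse`, and
`dict.keys()` views are wrapped in `list()` before in-place sorting. An
illustrative sketch of these conversions, not taken from srst2:]

```python
from urllib.parse import urlparse  # Python 2's urlparse module moved here

counts = {"ST131": 3, "ST10": 1}

# dict.keys() returns a view in Python 3, so 2to3 wraps it in list()
# before the in-place .sort() the Python 2 code used.
sts = list(counts.keys())
sts.sort()
print(sts)  # ['ST10', 'ST131']

# print is a function now; end=' ' suppresses the newline the way a
# trailing comma did in Python 2.
print("host:", end=' ')
print(urlparse("https://pubmlst.org/data/").netloc)
```

[Note that `getmlst.py` still imports `urllib2` after the patch, so the port
below may need a follow-up to move that import to `urllib.request`.]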


=====================================
debian/patches/2to3.patch
=====================================
@@ -0,0 +1,775 @@
+Description: Use 2to3 to port from Python2 to Python3
+Bug-Debian: https://bugs.debian.org/938560
+Author: Andreas Tille <tille at debian.org>
+Last-Update: Wed, 11 Dec 2019 15:20:13 +0100
+Bug-Upstream: https://github.com/katholt/srst2/issues/122
+
+--- a/README.md
++++ b/README.md
+@@ -70,7 +70,7 @@ Current release - v0.2.0 - July 28, 2016
+ -----
+ 
+ Dependencies:
+-* python (v2.7.5 or later)
++* python3
+ * scipy, numpy   http://www.scipy.org/install.html
+ * bowtie2 (v2.1.0 or later)   http://bowtie-bio.sourceforge.net/bowtie2/index.shtml
+ * SAMtools v0.1.18   https://sourceforge.net/projects/samtools/files/samtools/0.1.18/ (NOTE: later versions can be used, but better results are obtained with v0.1.18, especially at low read depths (<20x))
+@@ -171,7 +171,7 @@ Updates in v0.1.3
+ 
+ ### 1 - Install dependencies
+ 
+-* python (v2.7.5)
++* python3
+ * scipy http://www.scipy.org/install.html
+ * bowtie2 v2.1.0 http://bowtie-bio.sourceforge.net/bowtie2/index.shtml
+ * SAMtools v0.1.18 https://sourceforge.net/projects/samtools/files/samtools/0.1.18/ (NOTE 0.1.19 DOES NOT WORK)
+@@ -541,13 +541,13 @@ IN ADDITION TO THE NOVEL ALLELES FILE OU
+ For genes:
+ 
+ ```
+-python srst2/scripts/consensus_alignment.py --in *.all_consensus_alleles.fasta --pre test --type gene
++python3 srst2/scripts/consensus_alignment.py --in *.all_consensus_alleles.fasta --pre test --type gene
+ ```
+ 
+ For mlst:
+ 
+ ```
+-python srst2/scripts/consensus_alignment.py --in *.all_consensus_alleles.fasta --pre test --type mlst --mlst_delimiter _
++python3 srst2/scripts/consensus_alignment.py --in *.all_consensus_alleles.fasta --pre test --type mlst --mlst_delimiter _
+ ```
+ 
+ # More basic usage examples
+@@ -787,7 +787,7 @@ cdhit-est -i rawseqs.fasta -o rawseqs_cd
+ 2 - Parse the cluster output and tabulate the results, check for inconsistencies between gene names and the sequence clusters, and generate individual fasta files for each cluster to facilitate further checking:
+ 
+ ```
+-python cdhit_to_csv.py --cluster_file rawseqs_cdhit90.clstr --infasta raw_sequences.fasta --outfile rawseqs_clustered.csv
++python3 cdhit_to_csv.py --cluster_file rawseqs_cdhit90.clstr --infasta raw_sequences.fasta --outfile rawseqs_clustered.csv
+ ```
+ 
+ For comparing gene names to cluster assignments, this script assumes very basic nomenclature of the form gene-allele, ie a gene symbol followed by '-' followed by some more specific allele designation. E.g. adk-1, blaCTX-M-15. The full name of the gene (adk-1, blaCTX-M-15) will be stored as the allele, and the bit before the '-' will be stored as the name of the gene cluster (adk, blaCTX). This won't always give you exactly what you want, because there really are no standards for gene nomenclature! But it will work for many cases, and you can always modify the script if you need to parse names in a different way. Note though that this only affects how sensible the gene cluster nomenclature is going to be in your srst2 results, and will not affect the behaviour of the clustering (which is purely sequence based using cdhit) or srst2 (which will assign a top scoring allele per cluster, the cluster name may not be perfect but the full allele name will always be reported anyway).
+@@ -820,13 +820,13 @@ To type these virulence genes using SRST
+ 1 - Extract virulence genes by genus from the main VFDB file, `CP_VFs.ffn`:
+ 
+ ```
+-python VFDBgenus.py --infile CP_VFs.ffn --genus Clostridium
++python3 VFDBgenus.py --infile CP_VFs.ffn --genus Clostridium
+ ```
+ 
+ or, to get all availabel genera in separate files:
+ 
+ ```
+-python VFDBgenus.py --infile CP_VFs.ffn
++python3 VFDBgenus.py --infile CP_VFs.ffn
+ ```
+ 
+ 2 - Run CD-HIT to cluster the sequences for this genus, at 90% nucleotide identity:
+@@ -838,13 +838,13 @@ cd-hit -i Clostridium.fsa -o Clostridium
+ 3 - Parse the cluster output and tabulate the results using the specific Virulence gene DB compatible script:
+ 
+ ```
+-python VFDB_cdhit_to_csv.py --cluster_file Clostridium_cdhit90.clstr --infile Clostridium.fsa --outfile Clostridium_cdhit90.csv
++python3 VFDB_cdhit_to_csv.py --cluster_file Clostridium_cdhit90.clstr --infile Clostridium.fsa --outfile Clostridium_cdhit90.csv
+ ```
+ 
+ 4 - Convert the resulting csv table to a SRST2-compatible sequence database using:
+ 
+ ```
+-python csv_to_gene_db.py -t Clostridium_cdhit90.csv -o Clostridium_VF_clustered.fasta -s 5
++python3 csv_to_gene_db.py -t Clostridium_cdhit90.csv -o Clostridium_VF_clustered.fasta -s 5
+ ```
+     
+ The output file, `Clostridium_VF_clustered.fasta`, should now be ready to use with srst2 (`--gene_db Clostridium_VF_clustered.fasta`).
+--- a/database_clustering/README.md
++++ b/database_clustering/README.md
+@@ -59,7 +59,7 @@ cdhit-est -i rawseqs.fasta -o rawseqs_cd
+ 
+ 2 - Parse the cluster output and tabulate the results, check for inconsistencies between gene names and the sequence clusters, and generate individual fasta files for each cluster to facilitate further checking:
+ 
+-python cdhit_to_csv.py --cluster_file rawseqs_cdhit90.clstr --infasta raw_sequences.fasta --outfile rawseqs_clustered.csv
++python3 cdhit_to_csv.py --cluster_file rawseqs_cdhit90.clstr --infasta raw_sequences.fasta --outfile rawseqs_clustered.csv
+ 
+ For comparing gene names to cluster assignments, this script assumes very basic nomenclature of the form gene-allele, ie a gene symbol followed by '-' followed by some more specific allele designation. E.g. adk-1, blaCTX-M-15. The full name of the gene (adk-1, blaCTX-M-15) will be stored as the allele, and the bit before the '-' will be stored as the name of the gene cluster (adk, blaCTX). This won't always give you exactly what you want, because there really are no standards for gene nomenclature! But it will work for many cases, and you can always modify the script if you need to parse names in a different way. Note though that this only affects how sensible the gene cluster nomenclature is going to be in your srst2 results, and will not affect the behaviour of the clustering (which is purely sequence based using cdhit) or srst2 (which will assign a top scoring allele per cluster, the cluster name may not be perfect but the full allele name will always be reported anyway).
+ 
+@@ -89,11 +89,11 @@ To type these virulence genes using SRST
+ 
+ gunzip VFDB_setB_nt.fas.gz
+ 
+-python VFDBgenus.py --infile VFDB_setB_nt.fas --genus Clostridium
++python3 VFDBgenus.py --infile VFDB_setB_nt.fas --genus Clostridium
+ 
+ or, to get all available genera in separate files:
+ 
+-python VFDBgenus.py --infile VFDB_setB_nt.fas
++python3 VFDBgenus.py --infile VFDB_setB_nt.fas
+ 
+ - Run CD-HIT to cluster the sequences for this genus, at 90% nucleotide identity:
+ 
+@@ -101,8 +101,8 @@ cd-hit-est -i Clostridium.fsa -o Clostri
+ 
+ - Parse the cluster output and tabulate the results using the specific Virulence gene DB compatible script:
+ 
+-python VFDB_cdhit_to_csv.py --cluster_file Clostridium_cdhit90.clstr --infile Clostridium.fsa --outfile Clostridium_cdhit90.csv
++python3 VFDB_cdhit_to_csv.py --cluster_file Clostridium_cdhit90.clstr --infile Clostridium.fsa --outfile Clostridium_cdhit90.csv
+ 
+ - Convert the resulting csv table to a SRST2-comptaible sequence database using:
+ 
+-python csv_to_gene_db.py -t Clostridium_cdhit90.csv -o Clostridium_VF_clustered.fasta -s 5
++python3 csv_to_gene_db.py -t Clostridium_cdhit90.csv -o Clostridium_VF_clustered.fasta -s 5
+--- a/database_clustering/align_plot_tree_min3.py
++++ b/database_clustering/align_plot_tree_min3.py
+@@ -14,7 +14,7 @@ for f in os.listdir(fasta_directory):
+ 	if not f.startswith("."):
+             if f.endswith(".fsa") or f.endswith(".fasta"):
+ 		files.append('"'+os.path.join(fasta_directory,f)+'"') # need to surround with " " due to () in filenames
+-print "Number of input files:", len(files)
++print("Number of input files:", len(files))
+ 
+ # Run muscle on each fasta file to produce an alignment for each
+ # Save the alignment file names so that they can be used in R ape
+--- a/database_clustering/csv_to_gene_db.py
++++ b/database_clustering/csv_to_gene_db.py
+@@ -45,18 +45,18 @@ if __name__ == "__main__":
+ 	if options.output_file == "":
+ 		DoError("Please specify output fasta file using -o")
+ 	if options.seq_col != "":
+-		print "Reading DNA sequences from table, column" + options.seq_col
++		print("Reading DNA sequences from table, column" + options.seq_col)
+ 		seqid_col = int(options.seq_col)
+ 	elif options.fasta_file != "":
+ 		if options.headers_col == "":
+ 			DoError("Please specify which column of the table contains identifiers that match the headers in the fasta file")
+ 		seqs_file_col = int(options.headers_col)
+-		print "Reading DNA sequences from fasta file: " + options.fasta_file
++		print("Reading DNA sequences from fasta file: " + options.fasta_file)
+ 		for record in SeqIO.parse(open(options.fasta_file, "r"), "fasta"):
+ 			input_seqs[record.id] = record.seq
+ 			
+ 	else:
+-		print DoError("Where are the sequences? If they are in the table, specify which column using -s. Otherwise provide a fasta file of sequence using -f and specify which column contains sequence identifiers that match the fasta headers, using -h")
++		print(DoError("Where are the sequences? If they are in the table, specify which column using -s. Otherwise provide a fasta file of sequence using -f and specify which column contains sequence identifiers that match the fasta headers, using -h"))
+ 
+ 	# read contents of a table and print as fasta
+ 	f = file(options.table_file,"r")
+@@ -82,7 +82,7 @@ if __name__ == "__main__":
+ 				if seqs_file_id in input_seqs:
+ 					record = SeqRecord(input_seqs[seqs_file_id],id=db_id, description=db_id)
+ 				else:
+-					print "Warning, couldn't find a sequence in the fasta file matching this id: " + seqs_file_id
++					print("Warning, couldn't find a sequence in the fasta file matching this id: " + seqs_file_id)
+ 				
+ 			else:
+ 				"??"
+--- a/database_clustering/get_all_vfdb.sh
++++ b/database_clustering/get_all_vfdb.sh
+@@ -1,7 +1,7 @@
+ #!/bin/bash
+ #this is a utility bash script that automates generation of all the VFDB gene databases for use with srst2.py
+-#script assumes you already have python, and cd-hit installed somewhere on the $PATH
+-#this script MUST be in the same folder as all the other database_clustering python scripts
++#script assumes you already have python3, and cd-hit installed somewhere on the $PATH
++#this script MUST be in the same folder as all the other database_clustering python3 scripts
+ #example usage:
+ #/srst2/database_clustering/get_all_vfdb.sh ./CP_VFs.ffn ./VFDB
+ 
+@@ -12,7 +12,7 @@ fi
+ 
+ VFDBFILE=$(readlink -e $1)
+ OUTPUTFOLDER=$2
+-#get the srst2/database_clustering folder where all the other python scripts live side-by-side with this one
++#get the srst2/database_clustering folder where all the other python3 scripts live side-by-side with this one
+ DBCLUSTERINGSCRIPTFOLDER=$(dirname $(readlink -e $0))
+ 
+ #if the specified output folder doesn't exist, then create it
+@@ -22,7 +22,7 @@ fi
+ cd ${OUTPUTFOLDER}
+ 
+ #extract virulence genes from all available genera into separate files
+-python ${DBCLUSTERINGSCRIPTFOLDER}/VFDBgenus.py --infile ${VFDBFILE}
++python3 ${DBCLUSTERINGSCRIPTFOLDER}/VFDBgenus.py --infile ${VFDBFILE}
+ 
+ #loop over each genus' *.fsa file and generate the gene database fasta file
+ for FSAFILE in *.fsa; do
+@@ -36,10 +36,10 @@ for FSAFILE in *.fsa; do
+   cd-hit -i ${FILENAME} -o ${GENUS}/${GENUS}_cdhit90 -c 0.9 > ${GENUS}/${GENUS}_cdhit90.stdout
+ 
+   #Parse the cluster output and tabulate the results using the specific Virulence gene DB compatible script:
+-  python ${DBCLUSTERINGSCRIPTFOLDER}/VFDB_cdhit_to_csv.py --cluster_file ${GENUS}/${GENUS}_cdhit90.clstr --infile ${FILENAME} --outfile ${GENUS}/${GENUS}_cdhit90.csv
++  python3 ${DBCLUSTERINGSCRIPTFOLDER}/VFDB_cdhit_to_csv.py --cluster_file ${GENUS}/${GENUS}_cdhit90.clstr --infile ${FILENAME} --outfile ${GENUS}/${GENUS}_cdhit90.csv
+ 
+   #Convert the resulting csv table to a SRST2-compatible sequence
+-  python ${DBCLUSTERINGSCRIPTFOLDER}/csv_to_gene_db.py -t ${GENUS}/${GENUS}_cdhit90.csv -o ${GENUS}/${GENUS}_VF_clustered.fasta -s 5
++  python3 ${DBCLUSTERINGSCRIPTFOLDER}/csv_to_gene_db.py -t ${GENUS}/${GENUS}_cdhit90.csv -o ${GENUS}/${GENUS}_VF_clustered.fasta -s 5
+ 
+   #move the original *.fsa file to the created genus subfolder
+   mv ${FILENAME} ${GENUS}/${FILENAME}
+--- a/database_clustering/get_genus_vfdb.sh
++++ b/database_clustering/get_genus_vfdb.sh
+@@ -1,6 +1,6 @@
+ #!/bin/bash
+ #this is a utility bash script that automates generation of a VFDB gene database for a specified genus for use with srst2.py
+-#script assumes you already have python, and cd-hit installed somewhere on the $PATH
++#script assumes you already have python3, and cd-hit installed somewhere on the $PATH
+ #example usage:
+ #/srst2/database_clustering/get_genus_vfdb.sh ./CP_VFs.ffn Bacillus ./VFDB
+ 
+@@ -12,7 +12,7 @@ fi
+ VFDBFILE=$(readlink -e $1)
+ GENUS=$2
+ OUTPUTFOLDER=$3
+-#get the srst2/database_clustering folder where all the other python scripts live side-by-side with this one
++#get the srst2/database_clustering folder where all the other python3 scripts live side-by-side with this one
+ DBCLUSTERINGSCRIPTFOLDER=$(dirname $(readlink -e $0))
+ 
+ #if the specified output folder doesn't exist, then create it
+@@ -24,13 +24,13 @@ cd ${OUTPUTFOLDER}
+ echo Generating virulence gene database for ${GENUS}
+ FILENAME=${GENUS}.fsa
+ #extract virulence genes from all available genera into separate files
+-python ${DBCLUSTERINGSCRIPTFOLDER}/VFDBgenus.py --infile ${VFDBFILE} --genus ${GENUS}
++python3 ${DBCLUSTERINGSCRIPTFOLDER}/VFDBgenus.py --infile ${VFDBFILE} --genus ${GENUS}
+ 
+ #Run CD-HIT to cluster the sequences for this genus, at 90% nucleotide identity
+ cd-hit -i ${FILENAME} -o ${GENUS}_cdhit90 -c 0.9 > ${GENUS}_cdhit90.stdout
+ 
+ #Parse the cluster output and tabulate the results using the specific Virulence gene DB compatible script:
+-python ${DBCLUSTERINGSCRIPTFOLDER}/VFDB_cdhit_to_csv.py --cluster_file ${GENUS}_cdhit90.clstr --infile ${FILENAME} --outfile ${GENUS}_cdhit90.csv
++python3 ${DBCLUSTERINGSCRIPTFOLDER}/VFDB_cdhit_to_csv.py --cluster_file ${GENUS}_cdhit90.clstr --infile ${FILENAME} --outfile ${GENUS}_cdhit90.csv
+ 
+ #Convert the resulting csv table to a SRST2-compatible sequence
+-python ${DBCLUSTERINGSCRIPTFOLDER}/csv_to_gene_db.py -t ${GENUS}_cdhit90.csv -o ${GENUS}_VF_clustered.fasta -s 5
++python3 ${DBCLUSTERINGSCRIPTFOLDER}/csv_to_gene_db.py -t ${GENUS}_cdhit90.csv -o ${GENUS}_VF_clustered.fasta -s 5
+--- a/scripts/analyseSRST2.py
++++ b/scripts/analyseSRST2.py
+@@ -1,6 +1,6 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+-# Python Version 2.7.5
++# Python3 Version
+ #
+ # Basic analysis of SRST2 output
+ # Authors - Kathryn Holt (kholt at unimelb.edu.au)
+@@ -62,7 +62,7 @@ def compile_results(args,mlst_results,db
+ 				if mlst_cols == 0:
+ 					mlst_header_string = test_string
+ 			else:
+-				test_string = mlst_result[mlst_result.keys()[0]] # no header line?
++				test_string = mlst_result[list(mlst_result.keys())[0]] # no header line?
+ 			test_string_split = test_string.split("\t")
+ 			this_mlst_cols = len(test_string)
+ 			
+@@ -107,7 +107,7 @@ def compile_results(args,mlst_results,db
+ 					if variable not in variable_list:
+ 						variable_list.append(variable)
+ 						
+-	print variable_list
++	print(variable_list)
+ 						
+ 	if "Sample" in sample_list:
+ 		sample_list.remove("Sample")
+@@ -177,8 +177,8 @@ def compile_results(args,mlst_results,db
+ 	
+ 	# log ST counts
+ 	if len(mlst_results_master) > 0:
+-		logging.info("Detected " + str(len(st_counts.keys())) + " STs: ")
+-		sts = st_counts.keys()
++		logging.info("Detected " + str(len(list(st_counts.keys()))) + " STs: ")
++		sts = list(st_counts.keys())
+ 		sts.sort()
+ 		for st in sts:
+ 			logging.info("ST" + st + "\t" + str(st_counts[st]))
+--- a/scripts/getmlst.py
++++ b/scripts/getmlst.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ '''
+ Download MLST datasets from this site: http://pubmlst.org/data/ by
+@@ -23,7 +23,7 @@ from argparse import ArgumentParser
+ import xml.dom.minidom as xml
+ import urllib2 as url
+ import re, os, glob
+-from urlparse import urlparse
++from urllib.parse import urlparse
+ 
+ def parse_args():
+ 	parser = ArgumentParser(description='Download MLST datasets by species'
+@@ -125,12 +125,12 @@ def main():
+ 		if info != None:
+ 			found_species.append(info)
+ 	if len(found_species) == 0:
+-		print "No species matched your query."
++		print("No species matched your query.")
+ 		exit(1)
+ 	if len(found_species) > 1:
+-		print "The following {} species match your query, please be more specific:".format(len(found_species))
++		print("The following {} species match your query, please be more specific:".format(len(found_species)))
+ 		for info in found_species:
+-			print info.name
++			print(info.name)
+ 		exit(2)
+ 
+ 	assert len(found_species) == 1
+@@ -176,22 +176,22 @@ def main():
+ 	log_file.close()
+ 	species_all_fasta_file.close()
+ 
+-	print "\n  For SRST2, remember to check what separator is being used in this allele database"
++	print("\n  For SRST2, remember to check what separator is being used in this allele database")
+ 	head = os.popen('head -n 1 ' + species_all_fasta_filename).read().rstrip()
+ 	m = re.match('>(.*)([_-])(\d*)',head).groups()
+ 	if len(m)==3:
+-		print
+-		print "  Looks like --mlst_delimiter '" + m[1] + "'"
+-		print
+-		print "  " + head + "  --> -->  ",
+-		print m
+-	print 
+-	print "  Suggested srst2 command for use with this MLST database:"
+-	print
+-	print "    srst2 --output test --input_pe *.fastq.gz --mlst_db " + species_name_underscores + '.fasta',
+-	print "--mlst_definitions " + format(profile_filename),
+-	print "--mlst_delimiter '" + m[1] + "'"
+-	print
++		print()
++		print("  Looks like --mlst_delimiter '" + m[1] + "'")
++		print()
++		print("  " + head + "  --> -->  ", end=' ')
++		print(m)
++	print() 
++	print("  Suggested srst2 command for use with this MLST database:")
++	print()
++	print("    srst2 --output test --input_pe *.fastq.gz --mlst_db " + species_name_underscores + '.fasta', end=' ')
++	print("--mlst_definitions " + format(profile_filename), end=' ')
++	print("--mlst_delimiter '" + m[1] + "'")
++	print()
+ 
+ 
+ if __name__ == '__main__':
+--- a/scripts/qsub_srst2.py
++++ b/scripts/qsub_srst2.py
+@@ -1,4 +1,4 @@
+-#!/usr/local/Modules/modulefiles/tools/python/2.7.6/bin/python2.7
++#!/usr/bin/python3
+ '''
+ This script generates SRST2 jobs for the Grid Engine (qsub) scheduling system
+ (http://gridscheduler.sourceforge.net/). It allows many samples to be processed in parallel. After
+@@ -98,7 +98,7 @@ def read_file_sets(args):
+ 						(baseName,read) = m.groups()
+ 						reverse_reads[baseName] = fastq
+ 					else:
+-						print "Could not determine forward/reverse read status for input file " + fastq
++						print("Could not determine forward/reverse read status for input file " + fastq)
+ 			else:
+ 				# matches default Illumina file naming format, e.g. m.groups() = ('samplename', '_S1', '_L001', '_R1', '_001')
+ 				baseName, read = m.groups()[0], m.groups()[3]
+@@ -107,8 +107,8 @@ def read_file_sets(args):
+ 				elif read == "_R2":
+ 					reverse_reads[baseName] = fastq
+ 				else:
+-					print "Could not determine forward/reverse read status for input file " + fastq
+-					print "  this file appears to match the MiSeq file naming convention (samplename_S1_L001_[R1]_001), but we were expecting [R1] or [R2] to designate read as forward or reverse?"
++					print("Could not determine forward/reverse read status for input file " + fastq)
++					print("  this file appears to match the MiSeq file naming convention (samplename_S1_L001_[R1]_001), but we were expecting [R1] or [R2] to designate read as forward or reverse?")
+ 					fileSets[file_name_before_ext] = fastq
+ 					num_single_readsets += 1
+ 		# store in pairs
+@@ -119,17 +119,17 @@ def read_file_sets(args):
+ 			else:
+ 				fileSets[sample] = [forward_reads[sample]] # no reverse found
+ 				num_single_readsets += 1
+-				print 'Warning, could not find pair for read:' + forward_reads[sample]
++				print('Warning, could not find pair for read:' + forward_reads[sample])
+ 		for sample in reverse_reads:
+ 			if sample not in fileSets:
+ 				fileSets[sample] = reverse_reads[sample] # no forward found
+ 				num_single_readsets += 1
+-				print 'Warning, could not find pair for read:' + reverse_reads[sample]
++				print('Warning, could not find pair for read:' + reverse_reads[sample])
+ 
+ 	if num_paired_readsets > 0:
+-		print 'Total paired readsets found:' + str(num_paired_readsets)
++		print('Total paired readsets found:' + str(num_paired_readsets))
+ 	if num_single_readsets > 0:
+-		print 'Total single reads found:' + str(num_single_readsets)
++		print('Total single reads found:' + str(num_single_readsets))
+ 
+ 	return fileSets
+ 
+@@ -139,7 +139,7 @@ class CommandError(Exception):
+ def run_command(command, **kwargs):
+ 	'Execute a shell command and check the exit status and any O/S exceptions'
+ 	command_str = ' '.join(command)
+-	print 'Running: {}'.format(command_str)
++	print('Running: {}'.format(command_str))
+ 	try:
+ 		exit_status = call(command, **kwargs)
+ 	except OSError as e:
+@@ -209,9 +209,9 @@ def bowtie_index(fasta_files):
+ 	for fasta in fasta_files:
+ 		built_index = fasta + '.1.bt2'
+ 		if os.path.exists(built_index):
+-			print 'Bowtie 2 index for {} is already built...'.format(fasta)
++			print('Bowtie 2 index for {} is already built...'.format(fasta))
+ 		else:
+-			print 'Building bowtie2 index for {}...'.format(fasta)
++			print('Building bowtie2 index for {}...'.format(fasta))
+ 			run_command([get_bowtie_execs()[1], fasta, fasta])
+ 
+ def get_samtools_exec():
+@@ -229,9 +229,9 @@ def samtools_index(fasta_files):
+ 	for fasta in fasta_files:
+ 		built_index = fasta + '.fai'
+ 		if os.path.exists(built_index):
+-			print 'Samtools index for {} is already built...'.format(fasta)
++			print('Samtools index for {} is already built...'.format(fasta))
+ 		else:
+-			print 'Building samtools faidx index for {}...'.format(fasta)
++			print('Building samtools faidx index for {}...'.format(fasta))
+ 			run_command([get_samtools_exec(), 'faidx', fasta])
+ 
+ def main():
+@@ -283,7 +283,7 @@ def main():
+ 		cmd += " " + args.other_args
+ 
+ 		# print and run command
+-		print cmd
++		print(cmd)
+ 		echo_for_cmd = ["echo", "-e", "%s" % cmd] # we need this for the Popen pipe
+ 		echocmdproc = subprocess.Popen(echo_for_cmd, stdout=subprocess.PIPE)
+ 		out = subprocess.check_output("qsub", stdin=echocmdproc.stdout)
+--- a/scripts/slurm_srst2.py
++++ b/scripts/slurm_srst2.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ '''
+ This script generates SRST2 jobs for the SLURM scheduling system (http://slurm.schedmd.com/). It
+ allows many samples to be processed in parallel. After they all complete, the results can be
+@@ -102,7 +102,7 @@ def read_file_sets(args):
+ 						(baseName,read) = m.groups()
+ 						reverse_reads[baseName] = fastq
+ 					else:
+-						print "Could not determine forward/reverse read status for input file " + fastq
++						print("Could not determine forward/reverse read status for input file " + fastq)
+ 			else:
+ 				# matches default Illumina file naming format, e.g. m.groups() = ('samplename', '_S1', '_L001', '_R1', '_001')
+ 				baseName, read = m.groups()[0], m.groups()[3]
+@@ -111,8 +111,8 @@ def read_file_sets(args):
+ 				elif read == "_R2":
+ 					reverse_reads[baseName] = fastq
+ 				else:
+-					print "Could not determine forward/reverse read status for input file " + fastq
+-					print "  this file appears to match the MiSeq file naming convention (samplename_S1_L001_[R1]_001), but we were expecting [R1] or [R2] to designate read as forward or reverse?"
++					print("Could not determine forward/reverse read status for input file " + fastq)
++					print("  this file appears to match the MiSeq file naming convention (samplename_S1_L001_[R1]_001), but we were expecting [R1] or [R2] to designate read as forward or reverse?")
+ 					fileSets[file_name_before_ext] = fastq
+ 					num_single_readsets += 1
+ 		# store in pairs
+@@ -123,17 +123,17 @@ def read_file_sets(args):
+ 			else:
+ 				fileSets[sample] = [forward_reads[sample]] # no reverse found
+ 				num_single_readsets += 1
+-				print 'Warning, could not find pair for read:' + forward_reads[sample]
++				print('Warning, could not find pair for read:' + forward_reads[sample])
+ 		for sample in reverse_reads:
+ 			if sample not in fileSets:
+ 				fileSets[sample] = reverse_reads[sample] # no forward found
+ 				num_single_readsets += 1
+-				print 'Warning, could not find pair for read:' + reverse_reads[sample]
++				print('Warning, could not find pair for read:' + reverse_reads[sample])
+ 
+ 	if num_paired_readsets > 0:
+-		print 'Total paired readsets found:' + str(num_paired_readsets)
++		print('Total paired readsets found:' + str(num_paired_readsets))
+ 	if num_single_readsets > 0:
+-		print 'Total single reads found:' + str(num_single_readsets)
++		print('Total single reads found:' + str(num_single_readsets))
+ 
+ 	return fileSets
+ 
+@@ -143,7 +143,7 @@ class CommandError(Exception):
+ def run_command(command, **kwargs):
+ 	'Execute a shell command and check the exit status and any O/S exceptions'
+ 	command_str = ' '.join(command)
+-	print 'Running: {}'.format(command_str)
++	print('Running: {}'.format(command_str))
+ 	try:
+ 		exit_status = call(command, **kwargs)
+ 	except OSError as e:
+@@ -213,9 +213,9 @@ def bowtie_index(fasta_files):
+ 	for fasta in fasta_files:
+ 		built_index = fasta + '.1.bt2'
+ 		if os.path.exists(built_index):
+-			print 'Bowtie 2 index for {} is already built...'.format(fasta)
++			print('Bowtie 2 index for {} is already built...'.format(fasta))
+ 		else:
+-			print 'Building bowtie2 index for {}...'.format(fasta)
++			print('Building bowtie2 index for {}...'.format(fasta))
+ 			run_command([get_bowtie_execs()[1], fasta, fasta])
+ 
+ def get_samtools_exec():
+@@ -233,9 +233,9 @@ def samtools_index(fasta_files):
+ 	for fasta in fasta_files:
+ 		built_index = fasta + '.fai'
+ 		if os.path.exists(built_index):
+-			print 'Samtools index for {} is already built...'.format(fasta)
++			print('Samtools index for {} is already built...'.format(fasta))
+ 		else:
+-			print 'Building samtools faidx index for {}...'.format(fasta)
++			print('Building samtools faidx index for {}...'.format(fasta))
+ 			run_command([get_samtools_exec(), 'faidx', fasta])
+ 
+ def main():
+@@ -292,10 +292,10 @@ def main():
+ 		cmd += " " + args.other_args
+ 
+ 		# print and run command
+-		print cmd
+-		print ''
++		print(cmd)
++		print('')
+ 		os.system('echo "' + cmd + '" | sbatch')
+-		print ''
++		print('')
+ 
+ if __name__ == '__main__':
+ 	main()
+--- a/scripts/srst2.py
++++ b/scripts/srst2.py
+@@ -1,7 +1,7 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ # SRST2 - Short Read Sequence Typer (v2)
+-# Python Version 2.7.5
++# Python3 Version 
+ #
+ # Authors - Michael Inouye (minouye at unimelb.edu.au), Harriet Dashnow (h.dashnow at gmail.com),
+ #	Kathryn Holt (kholt at unimelb.edu.au), Bernie Pope (bjpope at unimelb.edu.au)
+@@ -30,7 +30,7 @@ from itertools import groupby
+ from operator import itemgetter
+ from collections import OrderedDict
+ try:
+-	from version import srst2_version
++	from .version import srst2_version
+ except:
+ 	srst2_version = "version unknown"
+ 
+@@ -306,8 +306,8 @@ def parse_fai(fai_file,db_type,delimiter
+ 				gene_clusters.append(gene_cluster)
+ 
+ 	if len(delimiter_check) > 0:
+-		print "Warning! MLST delimiter is " + delimiter + " but these genes may violate the pattern and cause problems:"
+-		print ",".join(delimiter_check)
++		print("Warning! MLST delimiter is " + delimiter + " but these genes may violate the pattern and cause problems:")
++		print(",".join(delimiter_check))
+ 
+ 	return size, gene_clusters, unique_gene_symbols, unique_allele_symbols, gene_cluster_symbols
+ 
+@@ -505,12 +505,12 @@ def score_alleles(args, mapping_files_pr
+ 		avg_depth_allele, coverage_allele, mismatch_allele, indel_allele, missing_allele,
+ 		size_allele, next_to_del_depth_allele, run_type,unique_gene_symbols, unique_allele_symbols):
+ 	# sort into hash for each gene locus
+-	depth_by_gene = group_allele_dict_by_gene(dict( (allele,val) for (allele,val) in avg_depth_allele.items() \
++	depth_by_gene = group_allele_dict_by_gene(dict( (allele,val) for (allele,val) in list(avg_depth_allele.items()) \
+ 			if (run_type == "mlst") or (coverage_allele[allele] > args.min_coverage) ),
+ 			run_type,args,
+ 			unique_gene_symbols,unique_allele_symbols)
+ 	stat_depth_by_gene = dict(
+-			(gene,max(alleles.values())) for (gene,alleles) in depth_by_gene.items()
++			(gene,max(alleles.values())) for (gene,alleles) in list(depth_by_gene.items())
+ 			)
+ 	allele_to_gene = dict_of_dicts_inverted_ind(depth_by_gene)
+ 
+@@ -553,7 +553,7 @@ def score_alleles(args, mapping_files_pr
+ 			# Fit linear model to observed Pval distribution vs expected Pval distribution (QQ plot)
+ 			pvals.sort(reverse=True)
+ 			len_obs_pvals = len(pvals)
+-			exp_pvals = range(1, len_obs_pvals + 1)
++			exp_pvals = list(range(1, len_obs_pvals + 1))
+ 			exp_pvals2 = [-log(float(ep) / (len_obs_pvals + 1), 10) for ep in exp_pvals]
+ 
+ 			# Slope is score
+@@ -698,7 +698,7 @@ def run_bowtie(mapping_files_pre,sample_
+ 		try:
+ 			command += ['-u',str(int(args.stop_after))]
+ 		except ValueError:
+-			print "WARNING. You asked to stop after mapping '" + args.stop_after + "' reads. I don't understand this, and will map all reads. Please speficy an integer with --stop_after or leave this as default to map 1 million reads."
++			print("WARNING. You asked to stop after mapping '" + args.stop_after + "' reads. I don't understand this, and will map all reads. Please speficy an integer with --stop_after or leave this as default to map 1 million reads.")
+ 
+ 	if args.other:
+ 		x = args.other
+@@ -806,12 +806,12 @@ def calculate_ST(allele_scores, ST_db, g
+ 		try:
+ 			clean_st = ST_db[allele_string]
+ 		except KeyError:
+-			print "This combination of alleles was not found in the sequence type database:",
+-			print sample_name,
++			print("This combination of alleles was not found in the sequence type database:", end=' ')
++			print(sample_name, end=' ')
+ 			for gene in allele_scores:
+ 				(allele,diffs,depth_problems,divergence) = allele_scores[gene]
+-				print allele,
+-			print
++				print(allele, end=' ')
++			print()
+ 			clean_st = "NF"
+ 	else:
+ 		clean_st = "ND"
+@@ -848,7 +848,7 @@ def parse_ST_database(ST_filename,gene_n
+ 	ST_db = {} # key = allele string, value = ST
+ 	gene_names = []
+ 	num_gene_cols_expected = len(gene_names_from_fai)
+-	print "Attempting to read " + str(num_gene_cols_expected) + " loci from ST database " + ST_filename
++	print("Attempting to read " + str(num_gene_cols_expected) + " loci from ST database " + ST_filename)
+ 	with open(ST_filename) as f:
+ 		count = 0
+ 		for line in f:
+@@ -858,23 +858,23 @@ def parse_ST_database(ST_filename,gene_n
+ 				gene_names = line_split[1:min(num_gene_cols_expected+1,len(line_split))]
+ 				for g in gene_names_from_fai:
+ 					if g not in gene_names:
+-						print "Warning: gene " + g + " in database file isn't among the columns in the ST definitions: " + ",".join(gene_names)
+-						print " Any sequences with this gene identifer from the database will not be included in typing."
++						print("Warning: gene " + g + " in database file isn't among the columns in the ST definitions: " + ",".join(gene_names))
++						print(" Any sequences with this gene identifer from the database will not be included in typing.")
+ 						if len(line_split) == num_gene_cols_expected+1:
+ 							gene_names.pop() # we read too many columns
+ 							num_gene_cols_expected -= 1
+ 				for g in gene_names:
+ 					if g not in gene_names_from_fai:
+-						print "Warning: gene " + g + " in ST definitions file isn't among those in the database " + ",".join(gene_names_from_fai)
+-						print " This will result in all STs being called as unknown (but allele calls will be accurate for other loci)."
++						print("Warning: gene " + g + " in ST definitions file isn't among those in the database " + ",".join(gene_names_from_fai))
++						print(" This will result in all STs being called as unknown (but allele calls will be accurate for other loci).")
+ 			else:
+ 				ST = line_split[0]
+-				if ST not in ST_db.values():
++				if ST not in list(ST_db.values()):
+ 					ST_string = " ".join(line_split[1:num_gene_cols_expected+1])
+ 					ST_db[ST_string] = ST
+ 				else:
+-					print "Warning: this ST is not unique in the ST definitions file: " + ST
+-		print "Read ST database " + ST_filename + " successfully"
++					print("Warning: this ST is not unique in the ST definitions file: " + ST)
++		print("Read ST database " + ST_filename + " successfully")
+ 		return (ST_db, gene_names)
+ 
+ def get_allele_name_from_db(allele,run_type,args,unique_allele_symbols=False,unique_cluster_symbols=False):
+@@ -936,7 +936,7 @@ def group_allele_dict_by_gene(by_allele,
+ 
+ def dict_of_dicts_inverted_ind(dd):
+ 	res = dict()
+-	for (key,val) in dd.items():
++	for (key,val) in list(dd.items()):
+ 		res.update(dict((key2,key) for key2 in val))
+ 	return res
+ 
+@@ -946,7 +946,7 @@ def parse_scores(run_type,args,scores, h
+ 					unique_cluster_symbols,unique_allele_symbols, pileup_file):
+ 
+ 	# sort into hash for each gene locus
+-	scores_by_gene = group_allele_dict_by_gene(dict( (allele,val) for (allele,val) in scores.items() \
++	scores_by_gene = group_allele_dict_by_gene(dict( (allele,val) for (allele,val) in list(scores.items()) \
+ 			if coverage_allele[allele] > args.min_coverage ),
+ 			run_type,args,
+ 			unique_cluster_symbols,unique_allele_symbols)
+@@ -957,7 +957,7 @@ def parse_scores(run_type,args,scores, h
+ 	for gene in scores_by_gene:
+ 
+ 		gene_hash = scores_by_gene[gene]
+-		scores_sorted = sorted(gene_hash.iteritems(),key=operator.itemgetter(1)) # sort by score
++		scores_sorted = sorted(iter(gene_hash.items()),key=operator.itemgetter(1)) # sort by score
+ 		(top_allele,top_score) = scores_sorted[0]
+ 
+ 		# check if depth is adequate for confident call
+@@ -1462,7 +1462,7 @@ def map_fileSet_to_db(args, sample_name,
+ 			logging.info("Printing all MLST scores to " + scores_output_file)
+ 			scores_output = file(scores_output_file, 'w')
+ 			scores_output.write("Allele\tScore\tAvg_depth\tEdge1_depth\tEdge2_depth\tPercent_coverage\tSize\tMismatches\tIndels\tTruncated_bases\tDepthNeighbouringTruncation\tMmaxMAF\n")
+-			for allele in scores.keys():
++			for allele in list(scores.keys()):
+ 				score = scores[allele]
+ 				scores_output.write('\t'.join([allele, str(score), str(avg_depth_allele[allele]), \
+ 					str(hash_edge_depth[allele][0]), str(hash_edge_depth[allele][1]), \
+@@ -1547,7 +1547,7 @@ def compile_results(args,mlst_results,db
+ 				if mlst_cols == 0:
+ 					mlst_header_string = test_string
+ 			else:
+-				test_string = mlst_result[mlst_result.keys()[0]] # no header line?
++				test_string = mlst_result[list(mlst_result.keys())[0]] # no header line?
+ 			test_string_split = test_string.split("\t")
+ 			this_mlst_cols = len(test_string_split)
+ 			if (mlst_cols == 0) or (mlst_cols == this_mlst_cols):
+@@ -1637,8 +1637,8 @@ def compile_results(args,mlst_results,db
+ 
+ 	# log ST counts
+ 	if len(mlst_results_master) > 0:
+-		logging.info("Detected " + str(len(st_counts.keys())) + " STs: ")
+-		sts = st_counts.keys()
++		logging.info("Detected " + str(len(list(st_counts.keys()))) + " STs: ")
++		sts = list(st_counts.keys())
+ 		sts.sort()
+ 		for st in sts:
+ 			logging.info("ST" + st + "\t" + str(st_counts[st]))
+@@ -1656,9 +1656,9 @@ def main():
+ 		if not os.path.exists(output_dir):
+ 			try:
+ 				os.makedirs(output_dir)
+-				print "Created directory " + output_dir + " for output"
++				print("Created directory " + output_dir + " for output")
+ 			except:
+-				print "Error. Specified output as " + args.output + " however the directory " + output_dir + " does not exist and our attempt to create one failed."
++				print("Error. Specified output as " + args.output + " however the directory " + output_dir + " does not exist and our attempt to create one failed.")
+ 
+ 	if args.log is True:
+ 		logfile = args.output + ".log"
+@@ -1703,9 +1703,9 @@ def main():
+ 		if not args.mlst_definitions:
+ 
+ 			# print warning to screen to alert user, may want to stop and restart
+-			print "Warning, MLST allele sequences were provided without ST definitions:"
+-			print " allele sequences: " + str(args.mlst_db)
+-			print " these will be mapped and scored, but STs can not be calculated"
++			print("Warning, MLST allele sequences were provided without ST definitions:")
++			print(" allele sequences: " + str(args.mlst_db))
++			print(" these will be mapped and scored, but STs can not be calculated")
+ 
+ 			# log
+ 			logging.info("Warning, MLST allele sequences were provided without ST definitions:")
+--- a/setup.py
++++ b/setup.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ from distutils.core import setup
+ 
+--- a/tests/test_slurm_srst2.py
++++ b/tests/test_slurm_srst2.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import os
+ import sys
+--- a/tests/test_srst2.py
++++ b/tests/test_srst2.py
+@@ -1,11 +1,11 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import os
+ import sys
+ import unittest
+ 
+ from mock import MagicMock, patch
+-from StringIO import StringIO
++from io import StringIO
+ 
+ sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'scripts')))
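The 2to3 hunks above are mostly mechanical, but two classes of change deserve a note: in Python 3, `dict.items()`/`dict.keys()` return views, so 2to3 defensively wraps them in `list()` (strictly needed only where the code sorts the result or mutates the dict while iterating, as in `sts = list(st_counts.keys()); sts.sort()`), and `StringIO` moved from the `StringIO` module into `io`. One Python 2 idiom the automated port does not catch is the `file()` builtin left in `map_fileSet_to_db` (`scores_output = file(scores_output_file, 'w')`); `file` was removed in Python 3 and would need to become `open()`. A minimal standalone sketch of these changes (illustration only, not srst2 code):

```python
from io import StringIO  # Python 3: was "from StringIO import StringIO"

st_counts = {"258": 4, "131": 7, "10": 2}

# dict.keys() is a view in Python 3 and has no .sort();
# the list() wrapper added by 2to3 keeps the old pattern working.
sts = list(st_counts.keys())
sts.sort()

# file() no longer exists in Python 3; open() (or StringIO for an
# in-memory buffer, as here) replaces it.
scores_output = StringIO()
for st in sts:
    scores_output.write("ST" + st + "\t" + str(st_counts[st]) + "\n")

# lexicographic sort: ST10, ST131, ST258
print(scores_output.getvalue(), end="")
```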
+ 


=====================================
debian/patches/series
=====================================
@@ -2,3 +2,4 @@ check_command_line_arguments.patch
 add_usr_bin_python_to_scripts.patch
 fix_test.patch
 fix_grep_call.patch
+2to3.patch


=====================================
debian/rules
=====================================
@@ -5,20 +5,20 @@
 include /usr/share/dpkg/default.mk
 
 %:
-	dh $@ --with python2
+	dh $@ --with python3 --buildsystem=pybuild
 
 override_dh_auto_build:
 	dh_auto_build
-	markdown_py -f README.html README.md
-	markdown_py -f README_database_clustering.html database_clustering/README.md
+	markdown README.md > README.html
+	markdown database_clustering/README.md > database_clustering_README.html
 
 override_dh_install:
 	dh_install
 	mv debian/$(DEB_SOURCE)/usr/lib/*/dist-packages/$(DEB_SOURCE)/* debian/$(DEB_SOURCE)/usr/share/$(DEB_SOURCE)
 	rm -rf debian/*/usr/bin/*.py debian/$(DEB_SOURCE)/usr/lib/*/dist-packages/
-	# fix line endings to make sure Python interpreter will be found
+	# fix line endings to make sure Python3 interpreter will be found
 	find debian/*/usr/share -name "VFDB*" -exec dos2unix \{\} \;
-	sed -i '1s:^#!/usr/local.*python[.0-9]*$$:#!/usr/bin/python:' debian/$(DEB_SOURCE)/usr/share/$(DEB_SOURCE)/qsub_srst2.py
+	sed -i '1s:^#!/usr/local.*python[.0-9]*$$:#!/usr/bin/python3:' debian/$(DEB_SOURCE)/usr/share/$(DEB_SOURCE)/qsub_srst2.py
 
 override_dh_fixperms:
 	dh_fixperms
@@ -31,13 +31,9 @@ override_dh_fixperms:
 
 override_dh_auto_test:
 ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
-	cd tests && python test_slurm_srst2.py && python test_srst2.py 
+	cd tests && python3 test_slurm_srst2.py && python3 test_srst2.py
 endif
 
 override_dh_installdocs:
 	dh_installdocs
 	sed -i "s?sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__).*scripts.*?sys.path.append(os.path.abspath('/usr/share/$(DEB_SOURCE)'))?" debian/$(DEB_SOURCE)/usr/share/doc/$(DEB_SOURCE)/tests/*.py
-
-# README.html contains a really maintained changelog
-#override_dh_installchangelogs:
-#	dh_installchangelogs CHANGES.txt
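The sed one-liner above rewrites a hard-coded `/usr/local` shebang on the first line of qsub_srst2.py to the packaged interpreter. For readability, the same substitution sketched in Python (the regex mirrors the sed address and pattern; this is an illustration, not part of the package):

```python
import re

# Mirrors sed '1s:^#!/usr/local.*python[.0-9]*$:#!/usr/bin/python3:'
# i.e. rewrite line 1 only, and only if it is a /usr/local python shebang.
SHEBANG_RE = re.compile(r"^#!/usr/local.*python[.0-9]*$")

def fix_shebang(text):
    lines = text.split("\n")
    if lines:
        lines[0] = SHEBANG_RE.sub("#!/usr/bin/python3", lines[0])
    return "\n".join(lines)

print(fix_shebang("#!/usr/local/bin/python2.7\nprint('hi')\n"))
```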


=====================================
debian/tests/control
=====================================
@@ -1,3 +1,3 @@
 Tests: run-unit-test
-Depends: @, python-mock
+Depends: @, python3-mock
 Restrictions: allow-stderr


=====================================
debian/tests/run-unit-test
=====================================
@@ -2,13 +2,13 @@
 
 pkg=srst2
 
-if [ "$ADTTMP" = "" ] ; then
-  ADTTMP=`mktemp -d /tmp/${pkg}-test.XXXXXX`
+if [ "$AUTOPKGTEST_TMP" = "" ] ; then
+  AUTOPKGTEST_TMP=`mktemp -d /tmp/${pkg}-test.XXXXXX`
 fi
-cd $ADTTMP
-cp /usr/share/doc/$pkg/tests/* $ADTTMP
+cd $AUTOPKGTEST_TMP
+cp /usr/share/doc/$pkg/tests/* $AUTOPKGTEST_TMP
 find . -name "*.gz" -exec gunzip \{\} \;
 for runtest in *.py ; do
-    python $runtest
+    python3 $runtest
 done
 # rm -rf *
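The updated autopkgtest script runs `python3 $runtest` for each file in a plain `sh` loop, so a test that fails early in the loop does not stop the run, and only the last command's exit status survives. A hypothetical fail-fast equivalent in Python (illustration only, not part of the package):

```python
import subprocess
import sys
from pathlib import Path

# Run every unpacked test file, stopping at the first failure so the
# overall exit status reflects any failing test, not just the last one.
for runtest in sorted(Path(".").glob("*.py")):
    result = subprocess.run([sys.executable, str(runtest)])
    if result.returncode != 0:
        sys.exit(result.returncode)
```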


=====================================
debian/upstream/metadata
=====================================
@@ -1,26 +1,28 @@
 Reference:
- Author: >
-  Michael Inouye and Harriet Dashnow and Lesley-Ann Raven and
-  Mark B Schultz and Bernard J Pope and Takehiro Tomita and Justin Zobel and
-  Kathryn E Holt
- Title: >
-  SRST2: Rapid genomic surveillance for public health and
-  hospital microbiology labs
- Journal: Genome Medicine
- Year: 2014
- Volume: 6
- Number: 11
- Pages: 90
- DOI: 10.1186/s13073-014-0090-6
- PMID: 25422674
- URL: http://www.genomemedicine.com/content/6/11/90
- eprint: http://www.genomemedicine.com/content/pdf/s13073-014-0090-6.pdf
+  Author: >
+    Michael Inouye and Harriet Dashnow and Lesley-Ann Raven and
+    Mark B Schultz and Bernard J Pope and Takehiro Tomita and Justin Zobel and
+    Kathryn E Holt
+  Title: >
+    SRST2: Rapid genomic surveillance for public health and
+    hospital microbiology labs
+  Journal: Genome Medicine
+  Year: 2014
+  Volume: 6
+  Number: 11
+  Pages: 90
+  DOI: 10.1186/s13073-014-0090-6
+  PMID: 25422674
+  URL: http://www.genomemedicine.com/content/6/11/90
+  eprint: http://www.genomemedicine.com/content/pdf/s13073-014-0090-6.pdf
 Registry:
- - Name: OMICtools
-   Entry: OMICS_12777
- - Name: conda:bioconda
-   Entry: srst2
- - Name: bio.tools
-   Entry: NA
- - Name: SciCrunch
-   Entry: NA
+- Name: OMICtools
+  Entry: OMICS_12777
+- Name: conda:bioconda
+  Entry: srst2
+- Name: bio.tools
+  Entry: NA
+- Name: SciCrunch
+  Entry: NA
+Repository: https://github.com/katholt/srst2.git
+Repository-Browse: https://github.com/katholt/srst2



View it on GitLab: https://salsa.debian.org/med-team/srst2/compare/be915ca9eb95cd43c3a40dfc05c4c55bddaa396b...7ea98208ce2f4b5fb2c3237f49d4e6cacfa170d8




