[med-svn] [Git][med-team/pbh5tools][master] 14 commits: Imported Upstream version 0.8.0+dfsg
Andreas Tille
gitlab at salsa.debian.org
Sat Dec 7 08:35:54 GMT 2019
Andreas Tille pushed to branch master at Debian Med / pbh5tools
Commits:
8611b48f by Afif Elghraoui at 2015-11-15T11:02:36Z
Imported Upstream version 0.8.0+dfsg
- - - - -
491db261 by Andreas Tille at 2018-07-13T13:45:49Z
New upstream version 0.8.0+git20170929.58d54ff+dfsg
- - - - -
d2a82d5d by Andreas Tille at 2019-12-07T08:09:59Z
routine-update: New upstream version
- - - - -
7385d38e by Andreas Tille at 2019-12-07T08:10:00Z
New upstream version 0.8.0+git20181212.9fa8fc4+dfsg
- - - - -
8c20fd8c by Andreas Tille at 2019-12-07T08:10:08Z
Update upstream source from tag 'upstream/0.8.0+git20181212.9fa8fc4+dfsg'
Update to upstream version '0.8.0+git20181212.9fa8fc4+dfsg'
with Debian dir 23e4fc91e5a3f564af1869e80ec766531903b2f7
- - - - -
52308172 by Andreas Tille at 2019-12-07T08:10:08Z
routine-update: debhelper-compat 12
- - - - -
b2dcbefe by Andreas Tille at 2019-12-07T08:10:12Z
routine-update: Standards-Version: 4.4.1
- - - - -
ee109406 by Andreas Tille at 2019-12-07T08:10:13Z
routine-update: Secure URI in copyright format
- - - - -
a73a1d97 by Andreas Tille at 2019-12-07T08:10:47Z
R-U: Respect DEB_BUILD_OPTIONS in override_dh_auto_test target
- - - - -
d7fa9593 by Andreas Tille at 2019-12-07T08:10:47Z
R-U: Build-Depends: s/python-sphinx/python3-sphinx/
- - - - -
8519addf by Andreas Tille at 2019-12-07T08:10:48Z
Remove patch enable-nosetests, which is missing from debian/patches/series.
Fixes lintian: patch-file-present-but-not-mentioned-in-series
See https://lintian.debian.org/tags/patch-file-present-but-not-mentioned-in-series.html for more details.
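A quick local check that the tag is gone after rebuilding the package (a sketch;
lintian's -T/--tags option limits the report to the named tag, and the .changes
path is only illustrative):

    lintian -T patch-file-present-but-not-mentioned-in-series ../pbh5tools_*.changes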
- - - - -
0f61803d by Andreas Tille at 2019-12-07T08:29:29Z
Use 2to3 to convert from Python2 to Python3
- - - - -
dea55103 by Andreas Tille at 2019-12-07T08:31:01Z
Add myself to Uploaders
- - - - -
7c969cce by Andreas Tille at 2019-12-07T08:34:41Z
Convert packaging to Python3
- - - - -
12 changed files:
- debian/changelog
- − debian/compat
- debian/control
- debian/copyright
- + debian/patches/2to3.patch
- − debian/patches/enable-nosetests
- debian/patches/multiarch-module-path.patch
- debian/patches/series
- debian/python-pbh5tools.NEWS → debian/python3-pbh5tools.NEWS
- debian/python-pbh5tools.examples → debian/python3-pbh5tools.examples
- debian/rules
- doc/index.rst
Changes:
=====================================
debian/changelog
=====================================
@@ -1,3 +1,20 @@
+pbh5tools (0.8.0+git20181212.9fa8fc4+dfsg-2) UNRELEASED; urgency=medium
+
+ * Afif Elghraoui removed himself from Uploaders
+ * Add myself to Uploaders
+ * New upstream version
+ * debhelper-compat 12
+ * Standards-Version: 4.4.1
+ * Secure URI in copyright format
+ * Respect DEB_BUILD_OPTIONS in override_dh_auto_test target
+ * Build-Depends: s/python-sphinx/python3-sphinx/
+ * Remove patches enable-nosetests that are missing from
+ debian/patches/series.
+ * Use 2to3 to convert from Python2 to Python3
+ Closes: #937256
+
+ -- Andreas Tille <tille at debian.org> Sat, 07 Dec 2019 09:23:30 +0100
+
pbh5tools (0.8.0+git20170929.58d54ff+dfsg-1) unstable; urgency=medium
* Team upload.
=====================================
debian/compat deleted
=====================================
@@ -1 +0,0 @@
-11
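This deletion pairs with the debhelper-compat (= 12) build dependency introduced in
debian/control below: the compat level must be declared exactly once, so once it
moves into Build-Depends a leftover debian/compat file would make debhelper refuse
to build. A trivial sanity check in the unpacked source (sketch):

    grep -n 'debhelper-compat' debian/control    # expect: debhelper-compat (= 12)
    test ! -e debian/compat && echo "debian/compat dropped"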
=====================================
debian/control
=====================================
@@ -1,17 +1,18 @@
Source: pbh5tools
Maintainer: Debian Med Packaging Team <debian-med-packaging at lists.alioth.debian.org>
+Uploaders: Andreas Tille <tille at debian.org>
Section: science
Priority: optional
-Build-Depends: debhelper (>= 11~),
+Build-Depends: debhelper-compat (= 12),
dh-python,
- python,
- python-setuptools,
- python-pbcore,
- python-sphinx,
- python-nose,
- python-h5py,
- python-cram
-Standards-Version: 4.1.5
+ python3,
+ python3-setuptools,
+ python3-pbcore,
+ python3-sphinx,
+ python3-nose <!nocheck>,
+ python3-h5py <!nocheck>,
+ python3-cram <!nocheck>
+Standards-Version: 4.4.1
Vcs-Browser: https://salsa.debian.org/med-team/pbh5tools
Vcs-Git: https://salsa.debian.org/med-team/pbh5tools.git
Homepage: https://github.com/PacificBiosciences/pbh5tools
@@ -19,9 +20,9 @@ Homepage: https://github.com/PacificBiosciences/pbh5tools
Package: pbh5tools
Architecture: all
Depends: ${misc:Depends},
- ${python:Depends},
- python-pbh5tools (>= ${binary:Version}),
- python-pkg-resources
+ ${python3:Depends},
+ python3-pbh5tools (>= ${binary:Version}),
+ python3-pkg-resources
Description: tools for manipulating Pacific Biosciences HDF5 files
This package provides functionality for manipulating and extracting data
from cmp.h5 and bas.h5 files produced by the Pacific Biosciences sequencers.
@@ -30,20 +31,20 @@ Description: tools for manipulating Pacific Biosciences HDF5 files
.
This package is part of the SMRTAnalysis suite.
-Package: python-pbh5tools
+Package: python3-pbh5tools
Architecture: any
Section: python
Depends: ${shlibs:Depends},
${misc:Depends},
- ${python:Depends}
-Description: tools for manipulating Pacific Biosciences HDF5 files -- Python 2 library
+ ${python3:Depends}
+Description: tools for manipulating Pacific Biosciences HDF5 files -- Python 3 library
This package provides functionality for manipulating and extracting data
from cmp.h5 and bas.h5 files produced by the Pacific Biosciences sequencers.
cmp.h5 files contain alignment information while bas.h5 files contain
base-call information.
.
pbh5tools is part of the SMRTAnalysis suite. This package provides the
- Python 2 backend library
+ Python 3 backend library
Package: python-pbh5tools-doc
Architecture: all
=====================================
debian/copyright
=====================================
@@ -1,4 +1,4 @@
-Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: pbh5tools
Source: https://github.com/PacificBiosciences/pbh5tools
Files-Excluded: doc/pacbio-theme/*
=====================================
debian/patches/2to3.patch
=====================================
@@ -0,0 +1,841 @@
+Description: Use 2to3 to convert from Python2 to Python3
+Bug-Debian: https://bugs.debian.org/937256
+Author: Andreas Tille <tille at debian.org>
+Last-Update: Sat, 07 Dec 2019 09:23:30 +0100
+
+--- a/Makefile
++++ b/Makefile
+@@ -4,17 +4,17 @@ SHELL = /bin/bash -e
+ all: build install
+
+ build:
+- python setup.py build --executable="/usr/bin/env python"
++ python3 setup.py build --executable="/usr/bin/python3"
+
+ bdist:
+- python setup.py build --executable="/usr/bin/env python"
+- python setup.py bdist --formats=egg
++ python3 setup.py build --executable="/usr/bin/python3"
++ python3 setup.py bdist --formats=egg
+
+ install:
+- python setup.py install
++ python3 setup.py install
+
+ develop:
+- python setup.py develop
++ python3 setup.py develop
+
+ test: examples
+ find tests -name "*.py" | xargs nosetests -v
+--- a/bin/bash5tools.py
++++ b/bin/bash5tools.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ #################################################################################
+ # Copyright (c) 2011-2013, Pacific Biosciences of California, Inc.
+ #
+@@ -121,12 +121,12 @@ class BasH5ToolsRunner(PBToolRunner):
+ inBasH5 = BasH5Reader(self.args.inFile)
+
+ if not inBasH5.hasConsensusBasecalls and self.args.readType == "ccs":
+- print "Input file %s contains no CCS reads." % self.args.inFile
++ print("Input file %s contains no CCS reads." % self.args.inFile)
+ sys.exit(-1)
+
+ if not inBasH5.hasRawBasecalls and self.args.readType in ["unrolled", "subreads"]:
+- print "Input file %s contains no %s reads" % (self.args.inFile,
+- self.args.readType)
++ print("Input file %s contains no %s reads" % (self.args.inFile,
++ self.args.readType))
+ sys.exit(-1)
+
+ movieName = inBasH5.movieName
+@@ -145,7 +145,7 @@ class BasH5ToolsRunner(PBToolRunner):
+ elif inBasH5.hasConsensusBasecalls:
+ readType = 'ccs'
+ else:
+- print "Input bas.h5 file has neither CCS nor subread data"
++ print("Input bas.h5 file has neither CCS nor subread data")
+ sys.exit(-1)
+ else:
+ readType = self.args.readType
+--- a/bin/cmph5tools.py
++++ b/bin/cmph5tools.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ #################################################################################
+ # Copyright (c) 2011-2013, Pacific Biosciences of California, Inc.
+ #
+@@ -204,23 +204,23 @@ class CmpH5ToolsRunner(PBMultiToolRunner
+ outFile = self.args.outCsv)
+
+ elif cmd == 'listMetrics':
+- print '--- Metrics:'
+- print "\t\n".join(DocumentedMetric.list())
+- print '\n--- Statistics:'
+- print "\t\n".join(DocumentedStatistic.list())
++ print('--- Metrics:')
++ print("\t\n".join(DocumentedMetric.list()))
++ print('\n--- Statistics:')
++ print("\t\n".join(DocumentedStatistic.list()))
+
+ elif cmd == 'equal':
+ ret = cmpH5Equal(self.args.inCmp1, self.args.inCmp2)
+ if not ret[0]:
+- print >> sys.stderr, ret[1]
++ print(ret[1], file=sys.stderr)
+ return 1
+ else:
+ return 0
+
+ elif cmd == 'summarize':
+ for inCmp in self.args.inCmps:
+- print "".join(["-"] * 40)
+- print cmpH5Summarize(inCmp)
++ print("".join(["-"] * 40))
++ print(cmpH5Summarize(inCmp))
+
+ elif cmd == 'validate':
+ if cmpH5Validate(self.args.inCmp):
+--- a/doc/conf.py
++++ b/doc/conf.py
+@@ -40,8 +40,8 @@ source_suffix = '.rst'
+ master_doc = 'index'
+
+ # General information about the project.
+-project = u'pbh5tools'
+-copyright = u'2011, devnet at pacificbiosciences.com'
++project = 'pbh5tools'
++copyright = '2011, devnet at pacificbiosciences.com'
+
+ # The version info for the project you're documenting, acts as replacement for
+ # |version| and |release|, also used in various other places throughout the
+@@ -182,8 +182,8 @@ latex_elements = {
+ # Grouping the document tree into LaTeX files. List of tuples
+ # (source start file, target name, title, author, documentclass [howto/manual]).
+ latex_documents = [
+- ('index', 'pbh5tools.tex', u'pbh5tools Documentation',
+- u'devnet at pacificbiosciences.com', 'manual'),
++ ('index', 'pbh5tools.tex', 'pbh5tools Documentation',
++ 'devnet at pacificbiosciences.com', 'manual'),
+ ]
+
+ # The name of an image file (relative to this directory) to place at the top of
+@@ -212,8 +212,8 @@ latex_documents = [
+ # One entry per manual page. List of tuples
+ # (source start file, name, description, authors, manual section).
+ man_pages = [
+- ('index', 'pbh5tools', u'pbh5tools Documentation',
+- [u'devnet at pacificbiosciences.com'], 1)
++ ('index', 'pbh5tools', 'pbh5tools Documentation',
++ ['devnet at pacificbiosciences.com'], 1)
+ ]
+
+ # If true, show URL addresses after external links.
+@@ -226,7 +226,7 @@ man_pages = [
+ # (source start file, target name, title, author,
+ # dir menu entry, description, category)
+ texinfo_documents = [
+- ('index', 'pbh5tools', u'pbh5tools Documentation', u'devnet at pacificbiosciences.com',
++ ('index', 'pbh5tools', 'pbh5tools Documentation', 'devnet at pacificbiosciences.com',
+ 'pbh5tools', 'One line description of project.', 'Miscellaneous'),
+ ]
+
+--- a/pbh5tools/CmpH5Compare.py
++++ b/pbh5tools/CmpH5Compare.py
+@@ -37,7 +37,7 @@ import h5py as H5
+ from pbcore.io import CmpH5Reader
+ from pbh5tools.PBH5ToolsException import PBH5ToolsException
+ from pbh5tools.Metrics import *
+-from mlab import rec2csv, rec2txt
++from .mlab import rec2csv, rec2txt
+
+ def cmpH5Equal(inCmp1, inCmp2):
+ """Compare two cmp.h5 files for equality. Here equality means the
+@@ -63,7 +63,7 @@ def cmpH5Summarize(inCmp, movieSummary =
+ tstr = "filename: %s\nversion: %s\nn reads: %d\nn refs: " + \
+ "%d\nn movies: %d\nn bases: %d\navg rl: %d\navg acc: %g"
+
+- rl,acc,mov = zip(*[(r.readLength,r.accuracy,r.movieInfo[0]) for r in reader ])
++ rl,acc,mov = list(zip(*[(r.readLength,r.accuracy,r.movieInfo[0]) for r in reader ]))
+
+ summaryStr = (tstr % (os.path.basename(reader.file.filename), reader.version, len(reader),
+ len(reader.referenceInfoTable), len(set(mov)), NP.sum(rl),
+--- a/pbh5tools/CmpH5Format.py
++++ b/pbh5tools/CmpH5Format.py
+@@ -66,10 +66,10 @@ class CmpH5Format:
+ self.STROBE_NUMBER, self.MOLECULE_ID, self.READ_START, self.READ_END,
+ self.MAP_QV, self.N_MATCHES, self.N_MISMATCHES, self.N_INSERTIONS,
+ self.N_DELETIONS, self.OFFSET_BEGIN, self.OFFSET_END, self.N_BACK,
+- self.N_OVERLAP) = range(0, 22)
++ self.N_OVERLAP) = list(range(0, 22))
+
+ self.extraTables = ['/'.join([self.ALN_INFO, x]) for x in
+- cmpH5[self.ALN_INFO].keys()
++ list(cmpH5[self.ALN_INFO].keys())
+ if not x == self.ALN_INDEX_NAME]
+ # sorting
+ self.INDEX_ATTR = "Index"
+--- a/pbh5tools/CmpH5Merge.py
++++ b/pbh5tools/CmpH5Merge.py
+@@ -37,6 +37,7 @@ import numpy as NP
+ from pbh5tools.PBH5ToolsException import PBH5ToolsException
+ from pbh5tools.CmpH5Format import CmpH5Format
+ from pbh5tools.CmpH5Utils import copyAttributes, deleteAttrIfExists
++from functools import reduce
+
+ def makeRefName(rID):
+ return "ref%06d" % rID
+@@ -176,8 +177,7 @@ def cmpH5Merge(inFiles, outFile, referen
+ # check for consistency of things like barcode and edna/z score
+ # datasets.
+ hasBarcode = all([ fmt.BARCODE_INFO in z for z in inCmps ])
+- extraDatasets = [set(filter(lambda x : not x == fmt.ALN_INDEX_NAME,
+- z[fmt.ALN_INFO].keys())) for z in inCmps ]
++ extraDatasets = [set([x for x in list(z[fmt.ALN_INFO].keys()) if not x == fmt.ALN_INDEX_NAME]) for z in inCmps ]
+ extraDatasets = reduce(set.intersection, extraDatasets)
+
+ def filterPrint(x):
+@@ -186,7 +186,7 @@ def cmpH5Merge(inFiles, outFile, referen
+ return False
+ else:
+ return True
+- inCmps = filter(filterPrint, inCmps)
++ inCmps = list(filter(filterPrint, inCmps))
+
+ if not len(inCmps):
+ raise PBH5ToolsException("merge", "No non-empty files to merge.")
+@@ -217,11 +217,11 @@ def cmpH5Merge(inFiles, outFile, referen
+
+ # we are going to map the ref ids into the globaly unique
+ # refInfoIDs.
+- refIDMap = dict(zip(cmpH5[fmt.REF_GROUP_ID].value,
+- cmpH5[fmt.REF_GROUP_INFO_ID].value))
++ refIDMap = dict(list(zip(cmpH5[fmt.REF_GROUP_ID].value,
++ cmpH5[fmt.REF_GROUP_INFO_ID].value)))
+
+- refPathMap = dict(zip(cmpH5[fmt.REF_GROUP_INFO_ID].value,
+- [os.path.basename(k) for k in cmpH5[fmt.REF_GROUP_PATH]]))
++ refPathMap = dict(list(zip(cmpH5[fmt.REF_GROUP_INFO_ID].value,
++ [os.path.basename(k) for k in cmpH5[fmt.REF_GROUP_PATH]])))
+
+ # make a map from this cmpH5's movies to the new movie ID.
+ movieMap = {}
+@@ -233,7 +233,7 @@ def cmpH5Merge(inFiles, outFile, referen
+ raise PBH5ToolsException("merge", "Error processing movies.")
+
+ for rID in refInfoIDs:
+- if rID not in refIDMap.values():
++ if rID not in list(refIDMap.values()):
+ logging.info("Skipping reference with no reads.")
+ continue
+ if selectedReferences is not None:
+@@ -243,7 +243,7 @@ def cmpH5Merge(inFiles, outFile, referen
+
+ # compute new reference ID.
+ aIdx = cmpH5[fmt.ALN_INDEX].value
+- refID = {x:y for y,x in refIDMap.iteritems()}[rID]
++ refID = {x:y for y,x in refIDMap.items()}[rID]
+ refName = makeRefName(rID)
+
+ # which reads go to this reference.
+@@ -259,15 +259,15 @@ def cmpH5Merge(inFiles, outFile, referen
+
+ # make a map between old and new IDs
+ uAlnIDs = NP.unique(aIdx[:,fmt.ALN_ID])
+- alnIDMap = dict(zip(uAlnIDs, NP.array(range(0, len(uAlnIDs))) +
+- alnIDBegin))
++ alnIDMap = dict(list(zip(uAlnIDs, NP.array(list(range(0, len(uAlnIDs)))) +
++ alnIDBegin)))
+ alnGroup = {k:v for k,v in zip(cmpH5[fmt.ALN_GROUP_ID].value,
+ cmpH5[fmt.ALN_GROUP_PATH].value) if \
+ k in uAlnIDs}
+ newAlnGroup = [(alnIDMap[k],
+ "/%s/%s-%d" % (refName, os.path.basename(alnGroup[k]),
+ alnIDMap[k]),
+- alnGroup[k]) for k in alnGroup.keys()]
++ alnGroup[k]) for k in list(alnGroup.keys())]
+
+ # Set the new ALN_ID vals in the ALN_INDEX.
+ aIdx[:,fmt.ALN_ID] = NP.array([alnIDMap[aIdx[i,fmt.ALN_ID]] for i in
+@@ -318,7 +318,7 @@ def cmpH5Merge(inFiles, outFile, referen
+ dtype = inCmps[0][fmt.REF_GROUP_INFO_ID].dtype)
+
+ # reset the IDs
+- outCmp[fmt.ALN_INDEX][:,fmt.ID] = range(1, outCmp[fmt.ALN_INDEX].shape[0] + 1)
++ outCmp[fmt.ALN_INDEX][:,fmt.ID] = list(range(1, outCmp[fmt.ALN_INDEX].shape[0] + 1))
+ # reset the molecule IDs
+ outCmp[fmt.ALN_INDEX][:,fmt.MOLECULE_ID] = \
+ ((NP.max(outCmp[fmt.ALN_INDEX][:,fmt.MOLECULE_ID]) *
+@@ -328,7 +328,7 @@ def cmpH5Merge(inFiles, outFile, referen
+ # close the sucker.
+ outCmp.close()
+
+- except Exception, e:
++ except Exception as e:
+ try:
+ # remove the file as it won't be correct
+ if os.path.exists(outFile):
+--- a/pbh5tools/CmpH5Select.py
++++ b/pbh5tools/CmpH5Select.py
+@@ -61,7 +61,7 @@ def cmpH5Select(inCmpFile, outCmp, idxs
+ where = where,
+ groupBy = groupBy,
+ groupByCsv = groupByCsv )
+- keys = idxVecs.keys()
++ keys = list(idxVecs.keys())
+
+ ## XXX: Should the resultant files be sorted?
+ if len(keys) == 1:
+@@ -81,7 +81,7 @@ def doSelect(inCmpFile, outCmpFile, idxs
+ ids = outCmp[fmt.ALN_INDEX][:,alnIdxID]
+ nds = '/'.join([groupName, idName])
+ msk = NP.array([x in ids for x in inCmp[nds].value]) # got to be an NP.array
+- for dsName in inCmp[groupName].keys():
++ for dsName in list(inCmp[groupName].keys()):
+ copyDataset('/'.join([groupName, dsName]), inCmp, outCmp,
+ msk, fmt)
+
+@@ -102,12 +102,12 @@ def doSelect(inCmpFile, outCmpFile, idxs
+
+ # copy over the AlnIndex and other AlnInfo elements
+ # correpsonding to idxs to new file.
+- for dsName in inCmp[fmt.ALN_INFO].keys():
++ for dsName in list(inCmp[fmt.ALN_INFO].keys()):
+ copyDataset('/'.join([fmt.ALN_INFO, dsName]), inCmp, outCmp, idxs, fmt)
+
+ # reset the ALN_ID
+ outCmp[fmt.ALN_INDEX][:,fmt.ID] = \
+- NP.array(range(1, outCmp[fmt.ALN_INDEX].shape[0] + 1))
++ NP.array(list(range(1, outCmp[fmt.ALN_INDEX].shape[0] + 1)))
+
+ # trim the other datasets
+ trimDataset(fmt.ALN_GROUP, fmt.ALN_ID, inCmp, outCmp, fmt)
+@@ -115,7 +115,7 @@ def doSelect(inCmpFile, outCmpFile, idxs
+ # trimDataset(fmt.MOVIE_INFO, fmt.MOVIE_ID, inCmp, outCmp, fmt)
+ # copy Ref,Movie dataset whole
+ for groupName in [fmt.REF_GROUP,fmt.MOVIE_INFO]:
+- for dsName in inCmp[groupName].keys():
++ for dsName in list(inCmp[groupName].keys()):
+ copyDataset('/'.join([groupName,dsName]), inCmp, outCmp, None, fmt)
+
+ # other groups will go over whole hog
+@@ -124,7 +124,7 @@ def doSelect(inCmpFile, outCmpFile, idxs
+ copyGroup(fmt.BARCODE_INFO, inCmp, outCmp)
+
+ # now we copy over the actual data
+- for i in xrange(0, outCmp[fmt.ALN_GROUP_ID].shape[0]):
++ for i in range(0, outCmp[fmt.ALN_GROUP_ID].shape[0]):
+ # figure out what reads are in this group.
+ agID = outCmp[fmt.ALN_GROUP_ID][i]
+ agPT = outCmp[fmt.ALN_GROUP_PATH][i]
+@@ -134,13 +134,13 @@ def doSelect(inCmpFile, outCmpFile, idxs
+ offEnd = alnIdx[whReads, fmt.OFFSET_END]
+ totalSize = NP.sum((offEnd - offBegin) + 1) # 0 in between
+
+- for dsName in inCmp[agPT].keys():
++ for dsName in list(inCmp[agPT].keys()):
+ fullPath = '/'.join([agPT, dsName])
+ newDs = outCmp.create_dataset(fullPath, shape = (totalSize,),
+ dtype = inCmp[fullPath].dtype)
+ origDs = inCmp[fullPath]
+ cs = 0
+- for j in xrange(0, len(whReads)):
++ for j in range(0, len(whReads)):
+ newEnd = cs + offEnd[j] - offBegin[j]
+ newDs[cs:newEnd] = origDs[offBegin[j]:offEnd[j]]
+ outCmp[fmt.ALN_INDEX][whReads[j],fmt.OFFSET_BEGIN] = cs
+@@ -158,7 +158,7 @@ def doSelect(inCmpFile, outCmpFile, idxs
+ logging.debug("Closing output cmp.h5 file.")
+ outCmp.close()
+
+- except Exception, e:
++ except Exception as e:
+ logging.exception(e)
+ try:
+ os.remove(outCmpFile)
+--- a/pbh5tools/CmpH5Sort.py
++++ b/pbh5tools/CmpH5Sort.py
+@@ -42,6 +42,7 @@ from pbh5tools.PBH5ToolsException import
+ from pbh5tools.CmpH5Format import CmpH5Format
+
+ import pbcore.io.rangeQueries as RQ
++from functools import reduce
+
+ def numberWithinRange(s, e, vec):
+ """
+@@ -51,7 +52,7 @@ def numberWithinRange(s, e, vec):
+ """
+ lI = RQ.leftmostBinSearch(vec, s)
+ rI = RQ.rightmostBinSearch(vec, e)
+- return(len(filter(lambda x : s <= x < e, vec[lI:rI])))
++ return(len([x for x in vec[lI:rI] if s <= x < e]))
+
+
+ def computeIndices(tStart, tEnd):
+@@ -145,7 +146,7 @@ def __pathExists(h5, path):
+ try:
+ h5[path]
+ return True
+- except Exception, E:
++ except Exception as E:
+ return False
+
+ def __repackDataArrays(cH5, format, fixedMem = False, maxDatasetSize = 2**31 - 1):
+@@ -153,7 +154,7 @@ def __repackDataArrays(cH5, format, fixe
+ Flatten read groups according to an indexed cmp.h5 file.
+ """
+ alnGroups = [x for x in cH5[format.ALN_GROUP_PATH]]
+- pulseDatasets = [cH5[x].keys() for x in alnGroups]
++ pulseDatasets = [list(cH5[x].keys()) for x in alnGroups]
+ uPulseDatasets = sorted(list(reduce(lambda x,y: set.union(set(x), set(y)), pulseDatasets)))
+
+ # check to make sure that all aln groups have the same datasets -
+@@ -170,13 +171,13 @@ def __repackDataArrays(cH5, format, fixe
+ raise PBH5ToolsException("sort", "Datasets must agree:\n" + ",".join(spd) +
+ "\nvs\n" + ",".join(uPulseDatasets))
+
+- readGroupPaths = dict(zip(cH5[format.ALN_GROUP_ID],
+- [x for x in cH5[format.ALN_GROUP_PATH]]))
+- refGroupPaths = dict(zip(cH5[format.REF_GROUP_ID],
+- [x for x in cH5[format.REF_GROUP_PATH]]))
+- uPDAndType = dict(zip(uPulseDatasets,
+- [cH5[readGroupPaths.values()[0]][z].dtype
+- for z in uPulseDatasets]))
++ readGroupPaths = dict(list(zip(cH5[format.ALN_GROUP_ID],
++ [x for x in cH5[format.ALN_GROUP_PATH]])))
++ refGroupPaths = dict(list(zip(cH5[format.REF_GROUP_ID],
++ [x for x in cH5[format.REF_GROUP_PATH]])))
++ uPDAndType = dict(list(zip(uPulseDatasets,
++ [cH5[list(readGroupPaths.values())[0]][z].dtype
++ for z in uPulseDatasets])))
+
+ ## XXX : this needs to be augmented with some saftey on not
+ ## loading too much data. - set a bound on the number of elts in
+@@ -185,7 +186,7 @@ def __repackDataArrays(cH5, format, fixe
+
+ def getData(read, ds, start, end):
+ key = "/".join((readGroupPaths[read[format.ALN_ID]], ds))
+- if not pdsCache.has_key(key):
++ if key not in pdsCache:
+ logging.debug("Cacheing: %s" % key)
+ h5DS = cH5[key]
+ smallEnoughToFitInMemory = ((h5DS.len() * h5DS.dtype.itemsize)/1024**3) < 12
+@@ -200,7 +201,7 @@ def __repackDataArrays(cH5, format, fixe
+
+ def chooseAlnGroupNames(gID, readBlocks, start = 1):
+ rGroup = cH5[refGroupPaths[gID]]
+- currentGroups = rGroup.keys()
++ currentGroups = list(rGroup.keys())
+ pref = 'rg' + str(start) + '-'
+ newNames = [ pref + str(i) for i,j in enumerate(readBlocks) ]
+ if any([ nn in currentGroups for nn in newNames ]):
+@@ -217,7 +218,7 @@ def __repackDataArrays(cH5, format, fixe
+ currentAlnID = 1
+ refGroupAlnGroups = []
+
+- for row in xrange(0, offsets.shape[0]):
++ for row in range(0, offsets.shape[0]):
+ logging.info("Processing reference: %d of %d" %
+ (row + 1, offsets.shape[0]))
+
+@@ -235,7 +236,7 @@ def __repackDataArrays(cH5, format, fixe
+ readBlocks = []
+ if tSize >= maxDatasetSize:
+ lastStart = 0
+- for i in xrange(0, len(clens)):
++ for i in range(0, len(clens)):
+ if clens[i]-clens[lastStart] >= maxDatasetSize:
+ readBlocks.append((lastStart, i))
+ lastStart = i
+@@ -259,7 +260,7 @@ def __repackDataArrays(cH5, format, fixe
+ newDS = NP.zeros(dsLength, dtype = uPDAndType[pulseDataset])
+
+ currentStart = 0
+- for readIdx in xrange(readBlock[0], readBlock[1]):
++ for readIdx in range(readBlock[0], readBlock[1]):
+ read = reads[readIdx, ]
+ gStart, gEnd = currentStart, currentStart + alens[readIdx]
+ newDS[gStart:gEnd] = getData(read, pulseDataset,
+@@ -271,7 +272,7 @@ def __repackDataArrays(cH5, format, fixe
+ newGroup.create_dataset(pulseDataset, data = newDS,
+ dtype = uPDAndType[pulseDataset],
+ maxshape = (None, ), chunks = (16384,))
+- logging.debug("flushing:" + ",".join(pdsCache.keys()))
++ logging.debug("flushing:" + ",".join(list(pdsCache.keys())))
+ pdsCache = {}
+
+
+@@ -279,7 +280,7 @@ def __repackDataArrays(cH5, format, fixe
+ ## once you have moved all data for a readBlock you can change
+ ## the offsets.
+ currentStart = 0
+- for readIdx in xrange(readBlock[0], readBlock[1]):
++ for readIdx in range(readBlock[0], readBlock[1]):
+ read = reads[readIdx, ]
+ gStart, gEnd = currentStart, currentStart + alens[readIdx]
+ reads[readIdx, format.OFFSET_BEGIN] = gStart
+@@ -299,21 +300,21 @@ def __repackDataArrays(cH5, format, fixe
+ refGroupAlnGroups.append("/".join((refGroupPaths[groupID], ngn)))
+
+ ## re-identify them.
+- sAI[:,format.ID] = range(1, sAI.shape[0] + 1)
++ sAI[:,format.ID] = list(range(1, sAI.shape[0] + 1))
+ assert(len(refGroupAlnGroups) == currentAlnID - 1)
+
+ logging.info("Writing new AlnGroupPath values.")
+ del(cH5[format.ALN_GROUP_PATH])
+ del(cH5[format.ALN_GROUP_ID])
+- cH5.create_dataset(format.ALN_GROUP_PATH, data = map(str, refGroupAlnGroups),
++ cH5.create_dataset(format.ALN_GROUP_PATH, data = list(map(str, refGroupAlnGroups)),
+ ## XXX : unicode.
+ dtype = H5.special_dtype(vlen = str), maxshape = (None,),
+ chunks = (256,))
+- cH5.create_dataset(format.ALN_GROUP_ID, data = range(1, currentAlnID),
++ cH5.create_dataset(format.ALN_GROUP_ID, data = list(range(1, currentAlnID)),
+ dtype = "int32", maxshape = (None,), chunks = (256,))
+ logging.info("Wrote new AlnGroupPath values.")
+
+- for rg in readGroupPaths.values():
++ for rg in list(readGroupPaths.values()):
+ ## this should never be false, however, MC has produced
+ ## files in the past where there are duplicate paths with
+ ## different IDs and therefore you'll error here (probably
+@@ -379,7 +380,7 @@ def cmpH5Sort(inFile, outFile, tmpDir, d
+ logging.info("Read cmp.h5 with version %s" % format.VERSION)
+
+ aI = cH5[format.ALN_INDEX]
+- originalAttrs = aI.attrs.items()
++ originalAttrs = list(aI.attrs.items())
+
+ ## empty is a special case. In general, h5py handles
+ ## zero-length slices poorly and therefore I don't want to
+@@ -396,8 +397,8 @@ def cmpH5Sort(inFile, outFile, tmpDir, d
+ # scope of a cmp.h5 file. We need to map format.REF_ID to its
+ # REF_INFO_ID which is stable over merge/sort/split
+ # operations.
+- refIdToInfoId = dict(zip(cH5[format.REF_GROUP_ID].value,
+- cH5[format.REF_GROUP_INFO_ID]))
++ refIdToInfoId = dict(list(zip(cH5[format.REF_GROUP_ID].value,
++ cH5[format.REF_GROUP_INFO_ID])))
+ refInfoIds = NP.array([refIdToInfoId[i] for i in aI[:,format.REF_ID]])
+ aord = NP.lexsort([aI[:,format.TARGET_END], aI[:,format.TARGET_START],
+ refInfoIds])
+@@ -482,7 +483,7 @@ def cmpH5Sort(inFile, outFile, tmpDir, d
+ maxshape = tuple([None for x in eTable.shape]))
+ logging.info("Sorted dataset: %s" % extraTable)
+ logging.info("Writing attributes")
+- for k in originalAttrs.keys():
++ for k in list(originalAttrs.keys()):
+ # this block is necessary because I need to
+ # convert the dataset into a string dataset from
+ # object because an apparent h5py limitation.
+@@ -499,7 +500,7 @@ def cmpH5Sort(inFile, outFile, tmpDir, d
+ ## set this flag for before.
+ success = True
+
+- except Exception, E:
++ except Exception as E:
+ logging.error(E)
+ logging.exception(E)
+
+--- a/pbh5tools/CmpH5Stats.py
++++ b/pbh5tools/CmpH5Stats.py
+@@ -31,11 +31,11 @@
+ import numpy as NP
+ import sys
+
+-from mlab import rec2csv, rec2txt
++from .mlab import rec2csv, rec2txt
+ from pbh5tools.Metrics import *
+
+ def prettyPrint(res):
+- print rec2txt(res, padding = 20, precision = 2)
++ print(rec2txt(res, padding = 20, precision = 2))
+
+ def makeTblFromInput(exprStr, defaultValue):
+ if not exprStr:
+--- a/pbh5tools/CmpH5Utils.py
++++ b/pbh5tools/CmpH5Utils.py
+@@ -47,10 +47,10 @@ def deleteIfExists(ds, nm):
+ del ds[nm]
+
+ def copyAttributes(inDs, outDs):
+- for k in inDs.attrs.keys():
++ for k in list(inDs.attrs.keys()):
+ logging.debug("copying attribute: %s" % k)
+ elt = inDs.attrs[k]
+- if isinstance(elt, basestring):
++ if isinstance(elt, str):
+ # h5py wants to simplify things down, so I think that this
+ # is a possibility.
+ # preserve numpy string type if possible
+--- a/pbh5tools/Metrics.py
++++ b/pbh5tools/Metrics.py
+@@ -144,7 +144,7 @@ def processClass(cls, name, bases, dct):
+ ignoreRes = ['^Default', '^Metric$', '^Statistic$', '^Factor$',
+ '^FactorStatistic']
+
+- if not any(map(lambda x : re.match(x, name), ignoreRes)):
++ if not any([re.match(x, name) for x in ignoreRes]):
+ if '__init__' in dct:
+ # if it has an init it takes arguments which define the
+ # metric.
+@@ -179,7 +179,7 @@ class DocumentedMetric(type):
+
+ @staticmethod
+ def list():
+- return filter(lambda x : x, DocumentedMetric.Metrics)
++ return [x for x in DocumentedMetric.Metrics if x]
+
+ class DocumentedStatistic(type):
+ Statistics = []
+@@ -190,10 +190,9 @@ class DocumentedStatistic(type):
+
+ @staticmethod
+ def list():
+- return filter(lambda x : x, DocumentedStatistic.Statistics)
++ return [x for x in DocumentedStatistic.Statistics if x]
+
+-class Statistic(Expr):
+- __metaclass__ = DocumentedStatistic
++class Statistic(Expr, metaclass=DocumentedStatistic):
+ def __init__(self, metric):
+ self.metric = metric
+
+@@ -208,9 +207,7 @@ class Statistic(Expr):
+ def __str__(self):
+ return self.__class__.__name__ + '(' + str(self.metric) + ')'
+
+-class Metric(Expr):
+- __metaclass__ = DocumentedMetric
+-
++class Metric(Expr, metaclass=DocumentedMetric):
+ def eval(self, cmpH5, idx):
+ return self.produce(cmpH5, idx)
+
+@@ -232,7 +229,7 @@ class Tbl(object):
+ for a in self.cols:
+ yield (a, self.cols[a])
+ def eval(self, cmpH5, idx):
+- return [(a, self.cols[a].eval(cmpH5, idx)) for a in self.cols.keys()]
++ return [(a, self.cols[a].eval(cmpH5, idx)) for a in list(self.cols.keys())]
+
+ def split(x, f):
+ # I'm thinking it is faster to do the allocation of the NP array
+@@ -240,15 +237,15 @@ def split(x, f):
+ assert(len(x) == len(f))
+ levels = NP.unique(f)
+ counts = {k:0 for k in levels}
+- for i in xrange(0, len(x)):
++ for i in range(0, len(x)):
+ counts[f[i]] += 1
+- results = { k:NP.zeros(v, dtype = int) for k,v in counts.items() }
+- for i in xrange(0, len(x)):
++ results = { k:NP.zeros(v, dtype = int) for k,v in list(counts.items()) }
++ for i in range(0, len(x)):
+ k = f[i]
+ results[k][counts[k] - 1] = x[i]
+ counts[k] -= 1
+ # reverse it.
+- return { k:v[::-1] for k,v in results.items() }
++ return { k:v[::-1] for k,v in list(results.items()) }
+
+
+ def toRecArray(res):
+@@ -349,9 +346,7 @@ class Round(Statistic):
+ ## Additionally, FactorStatistics can be computed using a group by -
+ ## it is only the case that you need this in a where where you need
+ ## this new concept.
+-class ByFactor(Metric):
+- __metaclass__ = DocumentedMetric
+-
++class ByFactor(Metric, metaclass=DocumentedMetric):
+ def __init__(self, metric, factor, statistic):
+ self.metric = metric
+ self.factor = factor
+@@ -359,9 +354,9 @@ class ByFactor(Metric):
+
+ def produce(self, cmpH5, idx):
+ r = self.metric.eval(cmpH5, idx)
+- fr = split(range(len(idx)), self.factor.eval(cmpH5, idx))
++ fr = split(list(range(len(idx))), self.factor.eval(cmpH5, idx))
+ res = NP.zeros(len(idx), dtype = NP.int)
+- for v in fr.values():
++ for v in list(fr.values()):
+ res[v] = self.statistic.f(r[v])
+ return res
+
+@@ -468,8 +463,8 @@ class _MoleculeId(Factor):
+
+ class _MoleculeName(Factor):
+ def produce(self, cmpH5, idx):
+- molecules = zip(cmpH5.alignmentIndex['MovieID'][idx],
+- cmpH5.alignmentIndex['HoleNumber'][idx])
++ molecules = list(zip(cmpH5.alignmentIndex['MovieID'][idx],
++ cmpH5.alignmentIndex['HoleNumber'][idx]))
+ return NP.array(['%s_%s' % (m,h) for m,h in molecules])
+
+ class _Strand(Factor):
+@@ -553,15 +548,15 @@ DefaultSortBy = Tbl(alignmentIdx =
+ def query(reader, what = DefaultWhat, where = DefaultWhere,
+ groupBy = DefaultGroupBy, groupByCsv = None,
+ sortBy = DefaultSortBy, limit = None):
+- idxs = NP.where(where.eval(reader, range(0, len(reader))))[0]
++ idxs = NP.where(where.eval(reader, list(range(0, len(reader)))))[0]
+ if groupByCsv:
+ groupBy = groupCsv(groupByCsv, idxs, reader)
+ else:
+ groupBy = groupBy.eval(reader, idxs)
+ results = {}
+
+- for k,v in split(idxs, groupBy).items():
++ for k,v in list(split(idxs, groupBy).items()):
+ sortVals = sortBy.eval(reader, v)
+- sortIdxs = v[NP.lexsort(map(lambda z : z[1], sortVals)[::-1])][:limit]
++ sortIdxs = v[NP.lexsort([z[1] for z in sortVals][::-1])][:limit]
+ results[k] = what.eval(reader, sortIdxs)
+ return results
+--- a/pbh5tools/cbook.py
++++ b/pbh5tools/cbook.py
+@@ -35,7 +35,7 @@ from the Python Cookbook -- hence the na
+ """
+ def is_string_like(obj):
+ 'Return True if *obj* looks like a string'
+- if isinstance(obj, (str, unicode)):
++ if isinstance(obj, str):
+ return True
+ # numpy strings are subclass of str, ma strings are not
+ if ma.isMaskedArray(obj):
+--- a/pbh5tools/mlab.py
++++ b/pbh5tools/mlab.py
+@@ -47,7 +47,7 @@ A collection of helper methods for numpy
+ """
+
+ import csv, os, copy
+-import cbook
++from . import cbook
+ import numpy as np
+
+ # a series of classes for describing the format intentions of various rec views
+@@ -320,7 +320,7 @@ def rec2txt(r, header=None, padding=3, p
+
+
+ if ntype==np.int or ntype==np.int16 or ntype==np.int32 or ntype==np.int64 or ntype==np.int8 or ntype==np.int_:
+- length = max(len(colname),np.max(map(len,map(str,column))))
++ length = max(len(colname),np.max(list(map(len,list(map(str,column))))))
+ return 1, length+padding, "%d" # right justify
+
+ # JDH: my powerbook does not have np.float96 using np 1.3.0
+@@ -337,7 +337,7 @@ def rec2txt(r, header=None, padding=3, p
+ """
+ if ntype==np.float or ntype==np.float32 or ntype==np.float64 or (hasattr(np, 'float96') and (ntype==np.float96)) or ntype==np.float_:
+ fmt = "%." + str(precision) + "f"
+- length = max(len(colname),np.max(map(len,map(lambda x:fmt%x,column))))
++ length = max(len(colname),np.max(list(map(len,[fmt%x for x in column]))))
+ return 1, length+padding, fmt # right justify
+
+
+@@ -345,7 +345,7 @@ def rec2txt(r, header=None, padding=3, p
+ length = max(len(colname),column.itemsize)
+ return 1, length+padding, "%s" # left justify // JHB changed the 0 to a 1
+
+- return 0, max(len(colname),np.max(map(len,map(str,column))))+padding, "%s"
++ return 0, max(len(colname),np.max(list(map(len,list(map(str,column))))))+padding, "%s"
+
+ if header is None:
+ header = r.dtype.names
+--- a/tests/cram/bash5tools.t
++++ b/tests/cram/bash5tools.t
+@@ -3,8 +3,8 @@
+
+ Set up some vars ...
+
+- $ INH5=`python -c "from pbcore import data; print data.getBasH5s()[0]"`
+- $ MOVIENAME=$(basename `python -c "from pbcore import data; print data.getBasH5s()[0][:-7]"`)
++ $ INH5=`python3 -c "from pbcore import data; print(data.getBasH5s()[0])"`
++ $ MOVIENAME=$(basename `python3 -c "from pbcore import data; print(data.getBasH5s()[0][:-7])"`)
+ $ CMD="bash5tools.py $INH5"
+
+ $ $CMD --readType=ccs
+--- a/tests/cram/merge.t
++++ b/tests/cram/merge.t
+@@ -1,4 +1,4 @@
+- $ export INH5=`python -c "from pbcore import data ; print data.getCmpH5()"`
++ $ export INH5=`python3 -c "from pbcore import data ; print(data.getCmpH5())"`
+ $ cmph5tools.py select $INH5 --outFile left.cmp.h5 --idx 0 1 2 3
+ $ echo $?
+ 0
+--- a/tests/cram/select.t
++++ b/tests/cram/select.t
+@@ -1,7 +1,7 @@
+ $ . $TESTDIR/portability.sh
+
+ Set up basic commands
+- $ INH5=`python -c "from pbcore import data ; print data.getCmpH5()"`
++ $ INH5=`python3 -c "from pbcore import data ; print(data.getCmpH5())"`
+ $ CMD="cmph5tools.py stats $INH5"
+ Test basic output
+ $ $CMD --what "ReadStart" --limit 5
+--- a/tests/cram/sort.t
++++ b/tests/cram/sort.t
+@@ -1,4 +1,4 @@
+- $ export INH5=`python -c "from pbcore import data ; print data.getCmpH5()"`
++ $ export INH5=`python3 -c "from pbcore import data ; print(data.getCmpH5())"`
+ $ cmph5tools.py sort --deep --outFile tmp.cmp.h5 $INH5
+ $ echo $?
+ 0
+@@ -11,7 +11,7 @@
+ $ cmph5tools.py sort --outFile ftmp.cmp.h5 $INH5
+ $ echo $?
+ 0
+- $ python -c "from pbcore.io import CmpH5Reader; a = CmpH5Reader('tmp.cmp.h5'); b = CmpH5Reader('ftmp.cmp.h5'); print(all([a[i] == b[i] for i in xrange(len(a))]));"
+ $ python3 -c "from pbcore.io import CmpH5Reader; a = CmpH5Reader('tmp.cmp.h5'); b = CmpH5Reader('ftmp.cmp.h5'); print(all([a[i] == b[i] for i in range(len(a))]));"
+ True
+ $ cmph5tools.py sort --outFile ptmp.cmp.h5 --deep --usePythonIndexer $INH5
+ $ echo $?
+--- a/tests/cram/stats.t
++++ b/tests/cram/stats.t
+@@ -1,7 +1,7 @@
+ $ . $TESTDIR/portability.sh
+
+ Set up inputs and basic command string.
+- $ INH5=`python -c "from pbcore import data ; print data.getCmpH5()"`
++ $ INH5=`python3 -c "from pbcore import data ; print(data.getCmpH5())"`
+ $ CMD="cmph5tools.py stats $INH5"
+
+ Print Readlength to stdout
+--- a/tests/cram/valid.t
++++ b/tests/cram/valid.t
+@@ -1,2 +1,2 @@
+- $ export INH5=`python -c "from pbcore import data ; print data.getCmpH5()"`
++ $ export INH5=`python3 -c "from pbcore import data ; print(data.getCmpH5())"`
+ $ cmph5tools.py validate $INH5
+--- a/tests/test_cmph5lib_CmpH5Sort.py
++++ b/tests/test_cmph5lib_CmpH5Sort.py
+@@ -11,19 +11,19 @@ import pbcore.io.rangeQueries as RQ
+ from pbcore.io import CmpH5Reader
+
+ def brute_force_number_in_range(s, e, vec):
+- return(len(filter(lambda x : s <= x < e, vec)))
++ return(len([x for x in vec if s <= x < e]))
+
+ def generate_positions(size, coverage, lScale = 50):
+ NN = int(size*coverage)
+ tS = random.randint(0, size, NN)
+- tE = tS + array(map(int, random.exponential(lScale, NN) + 1))
++ tE = tS + array(list(map(int, random.exponential(lScale, NN) + 1)))
+ ar = array([tS, tE]).transpose()
+ ar = ar[lexsort((tE, tS)),]
+ return(ar)
+
+ def brute_force_search(tStart, tEnd, nBack, nOverlap, start, end):
+ toKeep = array([False]*len(tStart))
+- res = array(range(0, len(tStart)))
++ res = array(list(range(0, len(tStart))))
+
+ for i in range(0, len(tStart)):
+ # four cases to deal with.
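For reference, a patch of this shape is normally generated mechanically rather than
written by hand: run 2to3 over the upstream tree and capture the result as a quilt
patch (a few hunks above, such as the Makefile and the cram .t files, are manual
python -> python3 edits on top of that). A minimal sketch; the exact invocation used
here may have differed:

    2to3 --write --nobackups bin pbh5tools tests doc/conf.py
    git diff > debian/patches/2to3.patch    # or "quilt refresh" when working under quilt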
=====================================
debian/patches/enable-nosetests deleted
=====================================
@@ -1,20 +0,0 @@
-Description: Configure nosetests to run with setup.py test
-Author: Afif Elghraoui <afif at ghraoui.name>
-Forwarded: not-needed
-Last-Update: 2015-08-12
---- python-pbh5tools.orig/setup.py
-+++ python-pbh5tools/setup.py
-@@ -26,6 +26,7 @@
- 'bin/cmph5tools.py'],
- packages = find_packages("."),
- package_dir = {'':'.'},
-+ test_suite = 'nose.collector',
- ext_modules=[Extension('pbh5tools/ci', ['pbh5tools/ci.c'],
- extra_compile_args=["-O3","-shared"])],
- zip_safe = False,
---- /dev/null
-+++ python-pbh5tools/setup.cfg
-@@ -0,0 +1,3 @@
-+[nosetests]
-+verbosity=2
-+where=tests
=====================================
debian/patches/multiarch-module-path.patch
=====================================
@@ -2,9 +2,9 @@ Author: Afif Elghraoui
Last-Update: 2016-03-20 00:45:22 -0700
Description: Enable multiarch modules
---- python-pbh5tools.orig/pbh5tools/Indexer.py
-+++ python-pbh5tools/pbh5tools/Indexer.py
-@@ -32,10 +32,11 @@
+--- a/pbh5tools/Indexer.py
++++ b/pbh5tools/Indexer.py
+@@ -32,10 +32,11 @@ import ctypes
import os
import numpy
import pkg_resources
=====================================
debian/patches/series
=====================================
@@ -6,3 +6,4 @@ ignore-cram-results.patch
fix-type-in-test.patch
multiarch-module-path.patch
# enable-nosetests
+2to3.patch
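debian/patches/series controls the order in which quilt applies the patches, which
is why 2to3.patch must be listed here and why the stray enable-nosetests file was
removed above. A minimal sketch of applying and refreshing the stack in an unpacked
source tree (standard quilt usage):

    export QUILT_PATCHES=debian/patches
    quilt push -a     # apply every patch listed in series, in order
    quilt refresh     # regenerate the topmost patch after editing files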
=====================================
debian/python-pbh5tools.NEWS → debian/python3-pbh5tools.NEWS
=====================================
=====================================
debian/python-pbh5tools.examples → debian/python3-pbh5tools.examples
=====================================
=====================================
debian/rules
=====================================
@@ -8,7 +8,7 @@ BINDIR=$(CURDIR)/debian/pbh5tools/usr/bin
export PYBUILD_NAME = pbh5tools
%:
- LC_ALL=C.UTF-8 dh $@ --with python2 --buildsystem=pybuild
+ LC_ALL=C.UTF-8 dh $@ --with python3 --buildsystem=pybuild
override_dh_auto_build:
dh_auto_build
@@ -19,13 +19,15 @@ override_dh_install:
# We do this here rather than in .install files so that debhelper
# takes care of the #!/usr/bin/env lines for us
mkdir -p $(BINDIR)
- mv debian/python-pbh5tools/usr/bin/bash5tools.py $(BINDIR)/bash5tools
- mv debian/python-pbh5tools/usr/bin/cmph5tools.py $(BINDIR)/cmph5tools
+ mv debian/python3-pbh5tools/usr/bin/bash5tools.py $(BINDIR)/bash5tools
+ mv debian/python3-pbh5tools/usr/bin/cmph5tools.py $(BINDIR)/cmph5tools
override_dh_auto_test:
+ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
PYBUILD_SYSTEM=custom \
PYBUILD_TEST_ARGS="PATH=$(CURDIR)/build/scripts-2.7:$$PATH $(MAKE) test" \
dh_auto_test
+endif
override_dh_auto_clean:
dh_auto_clean
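The new ifeq guard is the conventional way to honour DEB_BUILD_OPTIONS=nocheck in an
override target: when nocheck is set, the test suite is skipped entirely. Combined
with the <!nocheck> markers in debian/control it allows a test-free build, for
example (generic dpkg usage, not specific to this package):

    DEB_BUILD_OPTIONS=nocheck DEB_BUILD_PROFILES=nocheck dpkg-buildpackage -us -uc -b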
=====================================
doc/index.rst
=====================================
@@ -28,6 +28,14 @@ To install ``pbh5tools``, run the following command from the ``pbh5tools`` root
python setup.py install
+If you do not have `root` or `sudo` permissions, you can install locally by:
+
+1. Installing pysam, numpy, Cython, and h5py to your home directory.
+ pip install --user --upgrade numpy h5py pysam cython
+2. Running
+ python setup.py install --user
+
+
####################
Tool: bash5tools.py
####################
@@ -80,10 +88,10 @@ Usage
Examples
--------
-Extracting all Raw reads from ``input.bas.h5`` without any filtering
-and exporting to FASTA (``myreads.fasta``): ::
+Extracting all subreads reads from ``input.bas.h5`` without any filtering
+and exporting to a FASTA file named ``myreads.fasta``: ::
- python bash5tools.py input.bas.h5 --outFilePrefix myreads --outType fasta --readType Raw
+ python bash5tools.py --outFilePrefix myreads --outType fasta --readType subreads input.bas.h5
Extracting all CCS reads from ``input.bas.h5`` that have read lengths
larger than 100 and exporting to FASTQ (``myreads.fastq``): ::
View it on GitLab: https://salsa.debian.org/med-team/pbh5tools/compare/b1343d6bd06db3eb08e689a0732a2f1e54469026...7c969ccec60aa22eb7735e988967e18e2fbc0a0a