[med-svn] [Git][med-team/tophat][master] 9 commits: Use 2to3 to port to Python3
Andreas Tille
gitlab at salsa.debian.org
Fri Oct 11 10:09:32 BST 2019
Andreas Tille pushed to branch master at Debian Med / tophat
Commits:
536bdf52 by Andreas Tille at 2019-10-11T07:58:38Z
Use 2to3 to port to Python3
- - - - -
acc496f8 by Andreas Tille at 2019-10-11T08:47:09Z
debhelper-compat 12
- - - - -
cb0a23d8 by Andreas Tille at 2019-10-11T08:47:15Z
Standards-Version: 4.4.1
- - - - -
3bfee215 by Andreas Tille at 2019-10-11T08:47:15Z
Remove trailing whitespace in debian/changelog
- - - - -
0115fe4c by Andreas Tille at 2019-10-11T08:47:16Z
Remove trailing whitespace in debian/control
- - - - -
32d6ceb1 by Andreas Tille at 2019-10-11T08:47:20Z
Set upstream metadata fields: Bug-Submit.
- - - - -
621a5618 by Andreas Tille at 2019-10-11T08:47:21Z
Remove obsolete fields Name, Contact from debian/upstream/metadata.
- - - - -
69afa00f by Andreas Tille at 2019-10-11T08:47:21Z
Rely on pre-initialized dpkg-architecture variables.
Fixes lintian: debian-rules-sets-dpkg-architecture-variable
See https://lintian.debian.org/tags/debian-rules-sets-dpkg-architecture-variable.html for more details.
- - - - -
f14dd955 by Andreas Tille at 2019-10-11T09:08:31Z
More 2to3 stuff, d/control changed from python to python3
- - - - -
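(The port in commits 536bdf52 and f14dd955 was generated with the stock 2to3 converter, per the commit messages. A minimal sketch of an equivalent programmatic invocation — the file path is just this package's layout, and running without -w prints the diff instead of rewriting in place:)

    import sys
    from lib2to3.main import main

    # Same effect as the "2to3 -w src/tophat.py" command line: apply the
    # standard fixer package and rewrite the file in place so the result
    # can be captured as a quilt patch.
    sys.exit(main("lib2to3.fixes", ["-w", "src/tophat.py"]))
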
7 changed files:
- debian/changelog
- − debian/compat
- debian/control
- + debian/patches/2to3.patch
- debian/patches/series
- debian/rules
- debian/upstream/metadata
Changes:
=====================================
debian/changelog
=====================================
@@ -1,3 +1,17 @@
+tophat (2.1.1+dfsg1-3) UNRELEASED; urgency=medium
+
+ * Use 2to3 to port to Python3
+ Closes: #938677
+ * debhelper-compat 12
+ * Standards-Version: 4.4.1
+ * Remove trailing whitespace in debian/changelog
+ * Remove trailing whitespace in debian/control
+ * Set upstream metadata fields: Bug-Submit.
+ * Remove obsolete fields Name, Contact from debian/upstream/metadata.
+ * Rely on pre-initialized dpkg-architecture variables.
+
+ -- Andreas Tille <tille at debian.org> Fri, 11 Oct 2019 09:54:16 +0200
+
tophat (2.1.1+dfsg1-2) unstable; urgency=medium
* Fix Makefile.am
@@ -46,9 +60,9 @@ tophat (2.1.1+dfsg-4) unstable; urgency=medium
tophat (2.1.1+dfsg-3) unstable; urgency=medium
* Team upload.
-
+
[ Nadiya Sitdykova ]
- * Add autopkgtest test-suite
+ * Add autopkgtest test-suite
[ Andreas Tille ]
* debhelper 10 (use --no-parallel to avoid build problems)
@@ -103,9 +117,9 @@ tophat (2.0.13+dfsg-1) unstable; urgency=medium
[ Andreas Tille ]
* New upstream version
* Adapted patches
-
+
[ Alexandre Mestiashvili]
- * debian/patches/hardening4samtols.patch: add hardening flags for
+ * debian/patches/hardening4samtols.patch: add hardening flags for
embedded copy of samtools
-- Alexandre Mestiashvili <alex at biotec.tu-dresden.de> Sun, 26 Oct 2014 15:53:02 +0100
@@ -137,9 +151,9 @@ tophat (2.0.12+dfsg-1) unstable; urgency=medium
[ Alexandre Mestiashvili ]
* d/watch: new url
* Imported Upstream version 2.0.12+dfsg
- * d/patches/fix_build_w_seqan1.4.patch: patch from Manuel Holtgrewe
+ * d/patches/fix_build_w_seqan1.4.patch: patch from Manuel Holtgrewe
resolving #733352.
- removed const_ness_part1.patch as the new patch solves the problem
+ removed const_ness_part1.patch as the new patch solves the problem
completely
Closes: #733352
* debian/patches/fix_includes_path.patch: exclude SeqAn-1.3 from configure.ac
@@ -167,7 +181,7 @@ tophat (2.0.10-1) unstable; urgency=low
tophat (2.0.9-1) unstable; urgency=low
* d/get-orig-source: place tarballs to ../tarballs/
- * d/control: use canonical vcs fields, Standards-Version: 3.9.4
+ * d/control: use canonical vcs fields, Standards-Version: 3.9.4
* d/changelog: updated source field
* refreshed patches
* Imported Upstream version 2.0.9
@@ -204,7 +218,7 @@ tophat (2.0.6-1) unstable; urgency=low
tophat (2.0.5-1) unstable; urgency=low
[ Alexandre Mestiashvili ]
- * Imported Upstream version 2.0.5
+ * Imported Upstream version 2.0.5
-- Alexandre Mestiashvili <alex at biotec.tu-dresden.de> Fri, 05 Oct 2012 10:56:26 +0200
@@ -289,14 +303,14 @@ tophat (1.3.3-1) UNRELEASED; urgency=low
TODO:
* src/SeqAn-1.2 should be excluded
- - tophat depends on seqan library which already exists in debian
-
+ - tophat depends on seqan library which already exists in debian
+
[Alexandre Mestiashvili]
- * New upstream release
+ * New upstream release
* Removed dh-make template from watch file
- * Added initial copyright data , removed templates , added Source
+ * Added initial copyright data , removed templates , added Source
* debian/compat version 8
- * debian/control
+ * debian/control
- Maintainer: Debian Med Packaging Team
<debian-med-packaging at lists.alioth.debian.org>
- DM-Upload-Allowed: yes
@@ -306,9 +320,9 @@ tophat (1.3.3-1) UNRELEASED; urgency=low
- added build-dependency quilt
- added seqan-dev as dependency
* debian/rules
- - removed dh-make template
+ - removed dh-make template
* debian/control Added python dependency to binary package
- * debian/control Description shouldn't start with package name
+ * debian/control Description shouldn't start with package name
* debian/rules removed quilt patch management
* Added DEP3 headers to patches.
* Added Pre-Depends: dpkg (>= 1.15.6) (xz compression) Fixed syntax-error in debian/copyright
=====================================
debian/compat deleted
=====================================
@@ -1 +0,0 @@
-11
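(The deletion above is not lost functionality: commit acc496f8 moves the compat level into debian/control as a "Build-Depends: debhelper-compat (= 12)" declaration — visible in the next hunk — which is the current replacement for a standalone debian/compat file, bumping the level from 11 to 12 at the same time.)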
=====================================
debian/control
=====================================
@@ -5,16 +5,16 @@ Uploaders: Carlos Borroto <carlos.borroto at gmail.com>,
Andreas Tille <tille at debian.org>
Section: science
Priority: optional
-Build-Depends: debhelper (>= 11~),
+Build-Depends: debhelper-compat (= 12),
libbam-dev,
zlib1g-dev,
- python,
- seqan-dev (>= 1.4),
+ python3,
+ seqan-dev,
libboost-system-dev,
libboost-thread-dev,
help2man,
bowtie
-Standards-Version: 4.2.0
+Standards-Version: 4.4.1
Vcs-Browser: https://salsa.debian.org/med-team/tophat
Vcs-Git: https://salsa.debian.org/med-team/tophat.git
Homepage: http://ccb.jhu.edu/software/tophat
@@ -23,14 +23,14 @@ Package: tophat
Architecture: any
Depends: ${shlibs:Depends},
${misc:Depends},
- python,
+ python3,
bowtie2 | bowtie,
samtools (>= 1.5)
Suggests: cufflinks
Description: fast splice junction mapper for RNA-Seq reads
TopHat aligns RNA-Seq reads to mammalian-sized genomes using the ultra
high-throughput short read aligner Bowtie, and then analyzes the
- mapping results to identify splice junctions between exons.
+ mapping results to identify splice junctions between exons.
TopHat is a collaborative effort between the University of Maryland
Center for Bioinformatics and Computational Biology and the
University of California, Berkeley Departments of Mathematics and
=====================================
debian/patches/2to3.patch
=====================================
@@ -0,0 +1,1081 @@
+Description: Use 2to3 to port to Python3
+Bug-Debian: https://bugs.debian.org/938677
+Author: Andreas Tille <tille at debian.org>
+Last-Update: Fri, 11 Oct 2019 09:54:16 +0200
+
+--- a/src/tophat.py
++++ b/src/tophat.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+
+ # encoding: utf-8
+ """
+@@ -195,11 +195,11 @@ GFF_T_VER = 209 #GFF parser version
+
+ #mapping types:
+
+-_reads_vs_G, _reads_vs_T, _segs_vs_G, _segs_vs_J = range(1,5)
++_reads_vs_G, _reads_vs_T, _segs_vs_G, _segs_vs_J = list(range(1,5))
+
+ # execution resuming stages (for now, execution can be resumed only for stages
+ # after the pre-filter and transcriptome searches):
+-_stage_prep, _stage_map_start, _stage_map_segments, _stage_find_juncs, _stage_juncs_db, _stage_map2juncs, _stage_tophat_reports, _stage_alldone = range(1,9)
++_stage_prep, _stage_map_start, _stage_map_segments, _stage_find_juncs, _stage_juncs_db, _stage_map2juncs, _stage_tophat_reports, _stage_alldone = list(range(1,9))
+ stageNames = ["start", "prep_reads", "map_start", "map_segments", "find_juncs", "juncs_db", "map2juncs", "tophat_reports", "alldone"]
+ # 0 1 2 3 4 5 6 7 , 8
+ runStages = dict([(stageNames[st], st) for st in range(0, 9)])
+@@ -214,7 +214,7 @@ def getResumeStage(rlog):
+ #first line must be the actual tophat command used
+ thcmd=None
+ try:
+- thcmd = flog.next()
++ thcmd = next(flog)
+ except StopIteration:
+ die("Error: cannot resume, run.log is empty.")
+ oldargv=thcmd.split()
+@@ -257,7 +257,7 @@ def doResume(odir):
+ best_stage = r0stage
+ best_argv = r0argv[:]
+ if best_stage == _stage_alldone:
+- print >> sys.stderr, "Nothing to resume."
++ print("Nothing to resume.", file=sys.stderr)
+ sys.exit(1)
+
+ global resumeStage
+@@ -266,7 +266,7 @@ def doResume(odir):
+
+ def setRunStage(stnum):
+ global currentStage
+- print >> run_log, "#>"+stageNames[stnum]+":"
++ print("#>"+stageNames[stnum]+":", file=run_log)
+ currentStage = stnum
+
+ def init_logger(log_fname):
+@@ -1000,7 +1000,7 @@ class TopHatParams:
+ "b2-score-min=",
+ "b2-D=",
+ "b2-R="])
+- except getopt.error, msg:
++ except getopt.error as msg:
+ raise Usage(msg)
+
+ self.splice_constraints.parse_options(opts)
+@@ -1023,7 +1023,7 @@ class TopHatParams:
+ # option processing
+ for option, value in opts:
+ if option in ("-v", "--version"):
+- print "TopHat v%s" % (get_version())
++ print("TopHat v%s" % (get_version()))
+ sys.exit(0)
+ if option in ("-h", "--help"):
+ raise Usage(use_message)
+@@ -1224,9 +1224,9 @@ def th_log(out_str):
+ tophat_logger.info(out_str)
+
+ def th_logp(out_str=""):
+- print >> sys.stderr, out_str
++ print(out_str, file=sys.stderr)
+ if tophat_log:
+- print >> tophat_log, out_str
++ print(out_str, file=tophat_log)
+
+ def die(msg=None):
+ if msg is not None:
+@@ -1253,7 +1253,7 @@ def prepare_output_dir():
+ else:
+ try:
+ os.makedirs(tmp_dir)
+- except OSError, o:
++ except OSError as o:
+ die("\nError creating directory %s (%s)" % (tmp_dir, o))
+
+
+@@ -1287,7 +1287,7 @@ def check_bowtie_index(idx_prefix, is_bo
+ os.path.exists(idx_rev_1) and \
+ os.path.exists(idx_rev_2):
+ if os.path.exists(idx_prefix + ".1.ebwt") and os.path.exists(idx_prefix + ".1.bt2"):
+- print >> sys.stderr, bwtbotherr
++ print(bwtbotherr, file=sys.stderr)
+ return
+ else:
+ if is_bowtie2:
+@@ -1317,7 +1317,7 @@ def check_bowtie_index(idx_prefix, is_bo
+ os.path.exists(bwtidx_env+idx_rev_1) and \
+ os.path.exists(bwtidx_env+idx_rev_2):
+ if os.path.exists(bwtidx_env + idx_prefix + ".1.ebwt") and os.path.exists(bwtidx_env + idx_prefix + ".1.bt2"):
+- print >> sys.stderr, bwtbotherr
++ print(bwtbotherr, file=sys.stderr)
+ return
+ else:
+ die(bwtidxerr)
+@@ -1350,7 +1350,7 @@ def bowtie_idx_to_fa(idx_prefix, is_bowt
+ die(fail_str+"Error: bowtie-inspect returned an error\n"+log_tail(logging_dir + "bowtie_inspect_recons.log"))
+
+ # Bowtie not found
+- except OSError, o:
++ except OSError as o:
+ if o.errno == errno.ENOTDIR or o.errno == errno.ENOENT:
+ die(fail_str+"Error: bowtie-inspect not found on this system. Did you forget to include it in your PATH?")
+
+@@ -1406,7 +1406,7 @@ def get_bowtie_version():
+ while len(bowtie_version)<4:
+ bowtie_version.append(0)
+ return bowtie_version
+- except OSError, o:
++ except OSError as o:
+ errmsg=fail_str+str(o)+"\n"
+ if o.errno == errno.ENOTDIR or o.errno == errno.ENOENT:
+ errmsg+="Error: bowtie not found on this system"
+@@ -1471,7 +1471,7 @@ def get_index_sam_header(params, idx_pre
+ else:
+ preamble.append(line)
+
+- print >> bowtie_sam_header_file, "@HD\tVN:1.0\tSO:coordinate"
++ print("@HD\tVN:1.0\tSO:coordinate", file=bowtie_sam_header_file)
+ if read_params.read_group_id and read_params.sample_id:
+ rg_str = "@RG\tID:%s\tSM:%s" % (read_params.read_group_id,
+ read_params.sample_id)
+@@ -1490,20 +1490,20 @@ def get_index_sam_header(params, idx_pre
+ if read_params.seq_platform:
+ rg_str += "\tPL:%s" % read_params.seq_platform
+
+- print >> bowtie_sam_header_file, rg_str
++ print(rg_str, file=bowtie_sam_header_file)
+
+ if not params.keep_fasta_order:
+ sq_dict_lines.sort(lambda x,y: cmp(x[0],y[0]))
+
+ for [name, line] in sq_dict_lines:
+- print >> bowtie_sam_header_file, line
+- print >> bowtie_sam_header_file, "@PG\tID:TopHat\tVN:%s\tCL:%s" % (get_version(), run_cmd)
++ print(line, file=bowtie_sam_header_file)
++ print("@PG\tID:TopHat\tVN:%s\tCL:%s" % (get_version(), run_cmd), file=bowtie_sam_header_file)
+
+ bowtie_sam_header_file.close()
+ temp_sam_header_file.close()
+ return bowtie_sam_header_filename
+
+- except OSError, o:
++ except OSError as o:
+ errmsg=fail_str+str(o)+"\n"
+ if o.errno == errno.ENOTDIR or o.errno == errno.ENOENT:
+ errmsg+="Error: bowtie not found on this system"
+@@ -1559,7 +1559,7 @@ def get_samtools_version():
+ samtools_version_arr.append(0)
+
+ return version_match.group(), samtools_version_arr
+- except OSError, o:
++ except OSError as o:
+ errmsg=fail_str+str(o)+"\n"
+ if o.errno == errno.ENOTDIR or o.errno == errno.ENOENT:
+ errmsg+="Error: samtools not found on this system"
+@@ -1668,7 +1668,7 @@ class FastxReader:
+ if seq_len != qstrlen :
+ raise ValueError("Length mismatch between sequence and quality strings "+ \
+ "for %s (%i vs %i)." % (seqid, seq_len, qstrlen))
+- except ValueError, err:
++ except ValueError as err:
+ die("\nError encountered parsing file "+self.fname+":\n "+str(err))
+ #return the record
+ self.numrecords+=1
+@@ -1705,7 +1705,7 @@ class FastxReader:
+ if seq_len < 3:
+ raise ValueError("Read %s too short (%i)." \
+ % (seqid, seq_len))
+- except ValueError, err:
++ except ValueError as err:
+ die("\nError encountered parsing fasta file "+self.fname+"\n "+str(err))
+ #return the record and continue
+ self.numrecords+=1
+@@ -1744,7 +1744,7 @@ def fa_write(fhandle, seq_id, seq):
+ """
+ line_len = 60
+ fhandle.write(">" + seq_id + "\n")
+- for i in xrange(len(seq) / line_len + 1):
++ for i in range(len(seq) / line_len + 1):
+ start = i * line_len
+ #end = (i+1) * line_len if (i+1) * line_len < len(seq) else len(seq)
+ if (i+1) * line_len < len(seq):
+@@ -2085,7 +2085,7 @@ def prep_reads(params, l_reads_list, l_q
+ if not use_bam: shell_cmd += ' >' +kept_reads_filename
+ retcode = None
+ try:
+- print >> run_log, shell_cmd
++ print(shell_cmd, file=run_log)
+ if do_use_zpacker:
+ filter_proc = subprocess.Popen(prep_cmd,
+ stdout=subprocess.PIPE,
+@@ -2108,7 +2108,7 @@ def prep_reads(params, l_reads_list, l_q
+ if retcode:
+ die(fail_str+"Error running 'prep_reads'\n"+log_tail(log_fname))
+
+- except OSError, o:
++ except OSError as o:
+ errmsg=fail_str+str(o)
+ die(errmsg+"\n"+log_tail(log_fname))
+
+@@ -2197,7 +2197,7 @@ def bowtie(params,
+ os.remove(unmapped_reads_fifo)
+ try:
+ os.mkfifo(unmapped_reads_fifo)
+- except OSError, o:
++ except OSError as o:
+ die(fail_str+"Error at mkfifo("+unmapped_reads_fifo+'). '+str(o))
+
+ # Launch Bowtie
+@@ -2231,7 +2231,7 @@ def bowtie(params,
+ unm_zipcmd=[ params.system_params.zipper ]
+ unm_zipcmd.extend(params.system_params.zipper_opts)
+ unm_zipcmd+=['-c']
+- print >> run_log, ' '.join(unm_zipcmd)+' < '+ unmapped_reads_fifo + ' > '+ unmapped_reads_out + ' & '
++ print(' '.join(unm_zipcmd)+' < '+ unmapped_reads_fifo + ' > '+ unmapped_reads_out + ' & ', file=run_log)
+ fifo_pid=os.fork()
+ if fifo_pid==0:
+ def on_sig_exit(sig, func=None):
+@@ -2440,7 +2440,7 @@ def bowtie(params,
+ bowtie_proc.stdout.close()
+ pipeline_proc = fix_order_proc
+
+- print >> run_log, shellcmd
++ print(shellcmd, file=run_log)
+ retcode = None
+ if pipeline_proc:
+ pipeline_proc.communicate()
+@@ -2457,7 +2457,7 @@ def bowtie(params,
+ pass
+ if retcode:
+ die(fail_str+"Error running:\n"+shellcmd)
+- except OSError, o:
++ except OSError as o:
+ die(fail_str+"Error: "+str(o))
+
+ # Success
+@@ -2491,7 +2491,7 @@ def get_gtf_juncs(gff_annotation):
+
+ gtf_juncs_cmd=[prog_path("gtf_juncs"), gff_annotation]
+ try:
+- print >> run_log, " ".join(gtf_juncs_cmd), " > "+gtf_juncs_out_name
++ print(" ".join(gtf_juncs_cmd), " > "+gtf_juncs_out_name, file=run_log)
+ retcode = subprocess.call(gtf_juncs_cmd,
+ stderr=gtf_juncs_log,
+ stdout=gtf_juncs_out)
+@@ -2503,7 +2503,7 @@ def get_gtf_juncs(gff_annotation):
+ die(fail_str+"Error: GTF junction extraction failed with err ="+str(retcode))
+
+ # cvg_islands not found
+- except OSError, o:
++ except OSError as o:
+ errmsg=fail_str+str(o)+"\n"
+ if o.errno == errno.ENOTDIR or o.errno == errno.ENOENT:
+ errmsg+="Error: gtf_juncs not found on this system"
+@@ -2528,13 +2528,13 @@ def build_juncs_bwt_index(is_bowtie2, ex
+ bowtie_build_cmd += [external_splice_prefix + ".fa",
+ external_splice_prefix]
+ try:
+- print >> run_log, " ".join(bowtie_build_cmd)
++ print(" ".join(bowtie_build_cmd), file=run_log)
+ retcode = subprocess.call(bowtie_build_cmd,
+ stdout=bowtie_build_log)
+
+ if retcode != 0:
+ die(fail_str+"Error: Splice sequence indexing failed with err ="+ str(retcode))
+- except OSError, o:
++ except OSError as o:
+ errmsg=fail_str+str(o)+"\n"
+ if o.errno == errno.ENOTDIR or o.errno == errno.ENOENT:
+ errmsg+="Error: bowtie-build not found on this system"
+@@ -2580,7 +2580,7 @@ def build_juncs_index(is_bowtie2,
+ fusions_file_list,
+ reference_fasta]
+ try:
+- print >> run_log, " ".join(juncs_db_cmd) + " > " + external_splices_out_name
++ print(" ".join(juncs_db_cmd) + " > " + external_splices_out_name, file=run_log)
+ retcode = subprocess.call(juncs_db_cmd,
+ stderr=juncs_db_log,
+ stdout=external_splices_out)
+@@ -2588,7 +2588,7 @@ def build_juncs_index(is_bowtie2,
+ if retcode != 0:
+ die(fail_str+"Error: Splice sequence retrieval failed with err ="+str(retcode))
+ # juncs_db not found
+- except OSError, o:
++ except OSError as o:
+ errmsg=fail_str+str(o)+"\n"
+ if o.errno == errno.ENOTDIR or o.errno == errno.ENOENT:
+ errmsg+="Error: juncs_db not found on this system"
+@@ -2621,14 +2621,14 @@ def build_idx_from_fa(is_bowtie2, fasta_
+ bwt_idx_path]
+ try:
+ th_log("Building Bowtie index from " + os.path.basename(fasta_fname))
+- print >> run_log, " ".join(bowtie_idx_cmd)
++ print(" ".join(bowtie_idx_cmd), file=run_log)
+ retcode = subprocess.call(bowtie_idx_cmd,
+ stdout=open(os.devnull, "w"),
+ stderr=open(os.devnull, "w"))
+ if retcode != 0:
+ die(fail_str + "Error: Couldn't build bowtie index with err = "
+ + str(retcode))
+- except OSError, o:
++ except OSError as o:
+ errmsg=fail_str+str(o)+"\n"
+ if o.errno == errno.ENOTDIR or o.errno == errno.ENOENT:
+ errmsg+="Error: bowtie-build not found on this system"
+@@ -2639,7 +2639,7 @@ def build_idx_from_fa(is_bowtie2, fasta_
+ # Print out the sam header, embedding the user's specified library properties.
+ # FIXME: also needs SQ dictionary lines
+ def write_sam_header(read_params, sam_file):
+- print >> sam_file, "@HD\tVN:1.0\tSO:coordinate"
++ print("@HD\tVN:1.0\tSO:coordinate", file=sam_file)
+ if read_params.read_group_id and read_params.sample_id:
+ rg_str = "@RG\tID:%s\tSM:%s" % (read_params.read_group_id,
+ read_params.sample_id)
+@@ -2658,8 +2658,8 @@ def write_sam_header(read_params, sam_fi
+ if read_params.seq_platform:
+ rg_str += "\tPL:%s" % read_params.seq_platform
+
+- print >> sam_file, rg_str
+- print >> sam_file, "@PG\tID:TopHat\tVN:%s\tCL:%s" % (get_version(), run_cmd)
++ print(rg_str, file=sam_file)
++ print("@PG\tID:TopHat\tVN:%s\tCL:%s" % (get_version(), run_cmd), file=sam_file)
+
+ # Write final TopHat output, via tophat_reports and wiggles
+ def compile_reports(params, sam_header_filename, ref_fasta, mappings, readfiles, gff_annotation):
+@@ -2727,7 +2727,7 @@ def compile_reports(params, sam_header_f
+ report_cmd.append(right_reads)
+
+ try:
+- print >> run_log, " ".join(report_cmd)
++ print(" ".join(report_cmd), file=run_log)
+ report_proc=subprocess.call(report_cmd,
+ preexec_fn=subprocess_setup,
+ stderr=report_log)
+@@ -2753,8 +2753,7 @@ def compile_reports(params, sam_header_f
+ bam_parts[i],
+ "-o",
+ sorted_bam_parts[i]]
+-
+- print >> run_log, " ".join(bamsort_cmd)
++ print(" ".join(bamsort_cmd), file=run_log)
+
+ if i + 1 < num_bam_parts:
+ pid = os.fork()
+@@ -2791,7 +2790,7 @@ def compile_reports(params, sam_header_f
+ if params.report_params.convert_bam:
+ bammerge_cmd += ["%s.bam" % accepted_hits]
+ bammerge_cmd += bam_parts
+- print >> run_log, " ".join(bammerge_cmd)
++ print(" ".join(bammerge_cmd), file=run_log)
+ subprocess.call(bammerge_cmd,
+ stderr=open(logging_dir + "reports.merge_bam.log", "w"))
+ else: #make .sam
+@@ -2807,7 +2806,7 @@ def compile_reports(params, sam_header_f
+ stderr=open(logging_dir + "accepted_hits_bam_to_sam.log", "w"))
+ merge_proc.stdout.close()
+ shellcmd = " ".join(bammerge_cmd) + " | " + " ".join(bam2sam_cmd)
+- print >> run_log, shellcmd
++ print(shellcmd, file=run_log)
+ sam_proc.communicate()
+ retcode = sam_proc.returncode
+ if retcode:
+@@ -2820,7 +2819,7 @@ def compile_reports(params, sam_header_f
+ #just convert to .sam
+ bam2sam_cmd = [samtools_path, "view", "-h", accepted_hits+".bam"]
+ shellcmd = " ".join(bam2sam_cmd) + " > " + accepted_hits + ".sam"
+- print >> run_log, shellcmd
++ print(shellcmd, file=run_log)
+ r = subprocess.call(bam2sam_cmd,
+ stdout=open(accepted_hits + ".sam", "w"),
+ stderr=open(logging_dir + "accepted_hits_bam_to_sam.log", "w"))
+@@ -2828,7 +2827,7 @@ def compile_reports(params, sam_header_f
+ die(fail_str+"Error running: "+shellcmd)
+ os.remove(accepted_hits+".bam")
+
+- except OSError, o:
++ except OSError as o:
+ die(fail_str+"Error: "+str(o)+"\n"+log_tail(log_fname))
+
+ try:
+@@ -2851,7 +2850,7 @@ def compile_reports(params, sam_header_f
+ merge_cmd=[prog_path("bam_merge"), "-Q",
+ "--sam-header", sam_header_filename, um_merged]
+ merge_cmd += um_parts
+- print >> run_log, " ".join(merge_cmd)
++ print(" ".join(merge_cmd), file=run_log)
+ ret = subprocess.call( merge_cmd,
+ stderr=open(logging_dir + "bam_merge_um.log", "w") )
+ if ret != 0:
+@@ -2859,7 +2858,7 @@ def compile_reports(params, sam_header_f
+ for um_part in um_parts:
+ os.remove(um_part)
+
+- except OSError, o:
++ except OSError as o:
+ die(fail_str+"Error: "+str(o)+"\n"+log_tail(log_fname))
+
+ return junctions
+@@ -2945,15 +2944,15 @@ def split_reads(reads_filename,
+ while seg_num + 1 < len(offsets):
+ f = out_segf[seg_num].file
+ seg_seq = read_seq[last_seq_offset+color_offset:offsets[seg_num + 1]+color_offset]
+- print >> f, "%s|%d:%d:%d" % (read_name,last_seq_offset,seg_num, len(offsets) - 1)
++ print("%s|%d:%d:%d" % (read_name,last_seq_offset,seg_num, len(offsets) - 1), file=f)
+ if color:
+- print >> f, "%s%s" % (read_seq_temp[last_seq_offset], seg_seq)
++ print("%s%s" % (read_seq_temp[last_seq_offset], seg_seq), file=f)
+ else:
+- print >> f, seg_seq
++ print(seg_seq, file=f)
+ if not fasta:
+ seg_qual = read_qual[last_seq_offset:offsets[seg_num + 1]]
+- print >> f, "+"
+- print >> f, seg_qual
++ print("+", file=f)
++ print(seg_qual, file=f)
+ seg_num += 1
+ last_seq_offset = offsets[seg_num]
+
+@@ -3049,7 +3048,7 @@ def junctions_from_closures(params,
+ left_maps,
+ right_maps])
+ try:
+- print >> run_log, ' '.join(juncs_cmd)
++ print(' '.join(juncs_cmd), file=run_log)
+ retcode = subprocess.call(juncs_cmd,
+ stderr=juncs_log)
+
+@@ -3057,7 +3056,7 @@ def junctions_from_closures(params,
+ if retcode != 0:
+ die(fail_str+"Error: closure-based junction search failed with err ="+str(retcode))
+ # cvg_islands not found
+- except OSError, o:
++ except OSError as o:
+ if o.errno == errno.ENOTDIR or o.errno == errno.ENOENT:
+ th_logp(fail_str + "Error: closure_juncs not found on this system")
+ die(str(o))
+@@ -3088,8 +3087,8 @@ def junctions_from_segments(params,
+ return [juncs_out, insertions_out, deletions_out, fusions_out]
+ th_log("Searching for junctions via segment mapping")
+ if params.coverage_search == True:
+- print >> sys.stderr, "\tCoverage-search algorithm is turned on, making this step very slow"
+- print >> sys.stderr, "\tPlease try running TopHat again with the option (--no-coverage-search) if this step takes too much time or memory."
++ print("\tCoverage-search algorithm is turned on, making this step very slow", file=sys.stderr)
++ print("\tPlease try running TopHat again with the option (--no-coverage-search) if this step takes too much time or memory.", file=sys.stderr)
+
+ left_maps = ','.join(left_seg_maps)
+ log_fname = logging_dir + "segment_juncs.log"
+@@ -3111,7 +3110,7 @@ def junctions_from_segments(params,
+ right_maps = ','.join(right_seg_maps)
+ segj_cmd.extend([right_reads, right_reads_map, right_maps])
+ try:
+- print >> run_log, " ".join(segj_cmd)
++ print(" ".join(segj_cmd), file=run_log)
+ retcode = subprocess.call(segj_cmd,
+ preexec_fn=subprocess_setup,
+ stderr=segj_log)
+@@ -3121,7 +3120,7 @@ def junctions_from_segments(params,
+ die(fail_str+"Error: segment-based junction search failed with err ="+str(retcode)+"\n"+log_tail(log_fname))
+
+ # cvg_islands not found
+- except OSError, o:
++ except OSError as o:
+ if o.errno == errno.ENOTDIR or o.errno == errno.ENOENT:
+ th_logp(fail_str + "Error: segment_juncs not found on this system")
+ die(str(o))
+@@ -3191,12 +3190,12 @@ def join_mapped_segments(params,
+ align_cmd.append(spliced_seg_maps)
+
+ try:
+- print >> run_log, " ".join(align_cmd)
++ print(" ".join(align_cmd), file=run_log)
+ ret = subprocess.call(align_cmd,
+ stderr=align_log)
+ if ret:
+ die(fail_str+"Error running 'long_spanning_reads':"+log_tail(log_fname))
+- except OSError, o:
++ except OSError as o:
+ die(fail_str+"Error: "+str(o))
+
+ # This class collects spliced and unspliced alignments for each of the
+@@ -3234,13 +3233,13 @@ def m2g_convert_coords(params, sam_heade
+
+ try:
+ th_log("Converting " + fbasename + " to genomic coordinates (map2gtf)")
+- print >> run_log, " ".join(m2g_cmd) + " > " + m2g_log
++ print(" ".join(m2g_cmd) + " > " + m2g_log, file=run_log)
+ ret = subprocess.call(m2g_cmd,
+ stdout=open(m2g_log, "w"),
+ stderr=open(m2g_err, "w"))
+ if ret != 0:
+ die(fail_str + " Error: map2gtf returned an error")
+- except OSError, o:
++ except OSError as o:
+ err_msg = fail_str + str(o)
+ die(err_msg + "\n")
+
+@@ -3269,17 +3268,17 @@ def gtf_to_fasta(params, trans_gtf, geno
+ g2f_err = logging_dir + "g2f.err"
+
+ try:
+- print >> run_log, " ".join(g2f_cmd)+" > " + g2f_log
++ print(" ".join(g2f_cmd)+" > " + g2f_log, file=run_log)
+ ret = subprocess.call(g2f_cmd,
+ stdout = open(g2f_log, "w"),
+ stderr = open(g2f_err, "w"))
+ if ret != 0:
+ die(fail_str + " Error: gtf_to_fasta returned an error.")
+- except OSError, o:
++ except OSError as o:
+ err_msg = fail_str + str(o)
+ die(err_msg + "\n")
+ fver = open(out_fver, "w", 0)
+- print >> fver, "%d %d %d" % (GFF_T_VER, os.path.getsize(trans_gtf), os.path.getsize(out_fname))
++ print("%d %d %d" % (GFF_T_VER, os.path.getsize(trans_gtf), os.path.getsize(out_fname)), file=fver)
+ fver.close()
+ return out_fname
+
+@@ -3391,7 +3390,7 @@ def get_preflt_data(params, ri, target_r
+ shell_cmd += ' >' + out_unmapped
+ retcode=0
+ try:
+- print >> run_log, shell_cmd
++ print(shell_cmd, file=run_log)
+ if do_use_zpacker:
+ prep_proc = subprocess.Popen(prep_cmd,
+ stdout=subprocess.PIPE,
+@@ -3414,7 +3413,7 @@ def get_preflt_data(params, ri, target_r
+ if retcode:
+ die(fail_str+"Error running 'prep_reads'\n"+log_tail(log_fname))
+
+- except OSError, o:
++ except OSError as o:
+ errmsg=fail_str+str(o)
+ die(errmsg+"\n"+log_tail(log_fname))
+ if not out_bam: um_reads.close()
+@@ -3809,7 +3808,7 @@ def get_version():
+ return "__VERSION__"
+
+ def mlog(msg):
+- print >> sys.stderr, "[DBGLOG]:"+msg
++ print("[DBGLOG]:"+msg, file=sys.stderr)
+
+ def test_input_file(filename):
+ try:
+@@ -3835,7 +3834,7 @@ def validate_transcriptome(params):
+ inf.close()
+ dlst = fline.split()
+ if len(dlst)>2:
+- tver, tgff_size, tfa_size = map(lambda f: int(f), dlst)
++ tver, tgff_size, tfa_size = [int(f) for f in dlst]
+ else:
+ return False
+ tlst=tfa+".tlst"
+@@ -3899,7 +3898,7 @@ def main(argv=None):
+ run_log = open(logging_dir + "run.log", "w", 0)
+ global run_cmd
+ run_cmd = " ".join(run_argv)
+- print >> run_log, run_cmd
++ print(run_cmd, file=run_log)
+
+ check_bowtie(params)
+ check_samtools()
+@@ -4097,7 +4096,7 @@ def main(argv=None):
+ th_log("A summary of the alignment counts can be found in %salign_summary.txt" % output_dir)
+ th_log("Run complete: %s elapsed" % formatTD(duration))
+
+- except Usage, err:
++ except Usage as err:
+ th_logp(sys.argv[0].split("/")[-1] + ": " + str(err.msg))
+ th_logp(" for detailed help see http://ccb.jhu.edu/software/tophat/manual.shtml")
+ return 2
+--- a/src/bed_to_juncs
++++ b/src/bed_to_juncs
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ # encoding: utf-8
+ """
+ bed_to_juncs.py
+@@ -32,7 +32,7 @@ def main(argv=None):
+ try:
+ try:
+ opts, args = getopt.getopt(argv[1:], "h", ["help"])
+- except getopt.error, msg:
++ except getopt.error as msg:
+ raise Usage(msg)
+
+ for option, value in opts:
+@@ -45,8 +45,8 @@ def main(argv=None):
+ cols = line.split()
+ line_num += 1
+ if len(cols) < 12:
+- print >> sys.stderr, "Warning: malformed line %d, missing columns" % line_num
+- print >> sys.stderr, "\t", line
++ print("Warning: malformed line %d, missing columns" % line_num, file=sys.stderr)
++ print("\t", line, file=sys.stderr)
+ continue
+ chromosome = cols[0]
+ orientation = cols[5]
+@@ -56,12 +56,12 @@ def main(argv=None):
+ right_pos = int(cols[1]) + block_starts[1]
+ #print "%s\t%d\t%d\t%s" % (chromosome, left_pos, right_pos, orientation)
+ counts = cols[4]
+- print "%s\t%d\t%d\t%s\t%s" % (chromosome, left_pos, right_pos, orientation, counts)
++ print("%s\t%d\t%d\t%s\t%s" % (chromosome, left_pos, right_pos, orientation, counts))
+
+
+- except Usage, err:
+- print >> sys.stderr, sys.argv[0].split("/")[-1] + ": " + str(err.msg)
+- print >> sys.stderr, "\t for help use --help"
++ except Usage as err:
++ print(sys.argv[0].split("/")[-1] + ": " + str(err.msg), file=sys.stderr)
++ print("\t for help use --help", file=sys.stderr)
+ return 2
+
+
+--- a/src/contig_to_chr_coords
++++ b/src/contig_to_chr_coords
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ # encoding: utf-8
+ """
+ contig_to_chr_coords.py
+@@ -30,7 +30,7 @@ def main(argv=None):
+ try:
+ try:
+ opts, args = getopt.getopt(argv[1:], "ho:vbg", ["help", "output=", "bed", "gff"])
+- except getopt.error, msg:
++ except getopt.error as msg:
+ raise Usage(msg)
+
+ arg_is_splice = False
+@@ -49,7 +49,7 @@ def main(argv=None):
+ arg_is_gff = True
+
+ if (arg_is_splice == False and arg_is_gff == False) or (arg_is_splice == True and arg_is_gff == True):
+- print >> sys.stderr, "Error: please specify either -b or -g, but not both"
++ print("Error: please specify either -b or -g, but not both", file=sys.stderr)
+ raise Usage(help_message)
+
+ if len(args) < 1:
+@@ -76,7 +76,7 @@ def main(argv=None):
+ gff_file = open(args[1])
+
+ lines = gff_file.readlines()
+- print lines[0],
++ print(lines[0], end=' ')
+ for line in lines[1:]:
+ line = line.strip()
+ cols = line.split('\t')
+@@ -97,7 +97,7 @@ def main(argv=None):
+ left_pos = contig[1] + int(cols[3])
+ right_pos = contig[1] + int(cols[4])
+
+- print "chr%s\tTopHat\tisland\t%d\t%d\t%s\t.\t.\t%s" % (chr_name, left_pos, right_pos, cols[5],cols[8])
++ print("chr%s\tTopHat\tisland\t%d\t%d\t%s\t.\t.\t%s" % (chr_name, left_pos, right_pos, cols[5],cols[8]))
+ #print >>sys.stderr, "%s\t%d\t%d\t%s\t%s\t%s\t%s" % (contig[0], left_pos, right_pos,cols[3],cols[6],cols[0],cols[1])
+
+
+@@ -105,7 +105,7 @@ def main(argv=None):
+ splice_file = open(args[1])
+
+ lines = splice_file.readlines()
+- print lines[0],
++ print(lines[0], end=' ')
+ for line in lines[1:]:
+ line = line.strip()
+ cols = line.split('\t')
+@@ -123,11 +123,11 @@ def main(argv=None):
+ left_pos = contig[1] + int(cols[1])
+ right_pos = contig[1] + int(cols[2])
+
+- print "chr%s\t%d\t%d\t%s\t0\t%s\t%s\t%s\t255,0,0\t2\t1,1\t%s" % (chr_name, left_pos, right_pos, cols[3],cols[5],left_pos, right_pos,cols[11])
++ print("chr%s\t%d\t%d\t%s\t0\t%s\t%s\t%s\t255,0,0\t2\t1,1\t%s" % (chr_name, left_pos, right_pos, cols[3],cols[5],left_pos, right_pos,cols[11]))
+ #print >>sys.stderr, "%s\t%d\t%d\t%s\t%s\t%s\t%s" % (contig[0], left_pos, right_pos,cols[3],cols[6],cols[0],cols[1])
+- except Usage, err:
+- print >> sys.stderr, sys.argv[0].split("/")[-1] + ": " + str(err.msg)
+- print >> sys.stderr, "\t for help use --help"
++ except Usage as err:
++ print(sys.argv[0].split("/")[-1] + ": " + str(err.msg), file=sys.stderr)
++ print("\t for help use --help", file=sys.stderr)
+ return 2
+
+
+--- a/src/sra_to_solid
++++ b/src/sra_to_solid
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+
+ """
+ sra_to_solid.py
+@@ -23,8 +23,8 @@ if __name__ == "__main__":
+ if expect_qual % 4 == 3:
+ line = line[1:]
+
+- print line
++ print(line)
+ expect_qual = (expect_qual + 1) % 4
+
+ else:
+- print use_message;
++ print(use_message);
+--- a/src/tophat-fusion-post
++++ b/src/tophat-fusion-post
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+
+
+ """
+@@ -116,12 +116,12 @@ class TopHatFusionParams:
+ "tex-table",
+ "gtf-file=",
+ "fusion-pair-dist="])
+- except getopt.error, msg:
++ except getopt.error as msg:
+ raise Usage(msg)
+
+ for option, value in opts:
+ if option in ("-v", "--version"):
+- print "TopHat v%s" % (get_version())
++ print("TopHat v%s" % (get_version()))
+ sys.exit(0)
+ if option in ("-h", "--help"):
+ raise Usage(use_message)
+@@ -242,7 +242,7 @@ def prepare_output_dir():
+ else:
+ try:
+ os.makedirs(tmp_dir)
+- except OSError, o:
++ except OSError as o:
+ die("\nError creating directory %s (%s)" % (tmp_dir, o))
+
+
+@@ -269,7 +269,7 @@ def check_samples():
+ if prev_list != curr_list:
+ sample_list = open(sample_list_filename, 'w')
+ for sample in curr_list:
+- print >> sample_list, sample
++ print(sample, file=sample_list)
+ sample_list.close()
+ return True
+ else:
+@@ -303,9 +303,9 @@ def map_fusion_kmer(bwt_idx_prefix, para
+ fusion_file.close()
+
+ fusion_seq_fa = open(output_dir + "fusion_seq.fa", 'w')
+- for seq in seq_dic.keys():
+- print >> fusion_seq_fa, ">%s" % seq
+- print >> fusion_seq_fa, seq
++ for seq in list(seq_dic.keys()):
++ print(">%s" % seq, file=fusion_seq_fa)
++ print(seq, file=fusion_seq_fa)
+
+ fusion_seq_fa.close()
+
+@@ -322,14 +322,14 @@ def map_fusion_kmer(bwt_idx_prefix, para
+ bwtout_file.close()
+
+ kmer_map = open(fusion_kmer_file_name, 'w')
+- for seq, chrs in bwt_dic.items():
+- print >> kmer_map, "%s\t%s" % (seq, ','.join(chrs))
++ for seq, chrs in list(bwt_dic.items()):
++ print("%s\t%s" % (seq, ','.join(chrs)), file=kmer_map)
+
+ kmer_map.close()
+
+- print >> sys.stderr, "[%s] Extracting 23-mer around fusions and mapping them using Bowtie" % right_now()
++ print("[%s] Extracting 23-mer around fusions and mapping them using Bowtie" % right_now(), file=sys.stderr)
+ if sample_update:
+- print >> sys.stderr, "\tsamples updated"
++ print("\tsamples updated", file=sys.stderr)
+
+ fusion_kmer_file_name = output_dir + "fusion_seq.map"
+
+@@ -547,7 +547,7 @@ def filter_fusion(bwt_idx_prefix, params
+ tmaps.add_map(1, startR, endR, '-', fusion.posR)
+ tmaps.add_map(1, startR, endR, '+', fusion.posR)
+
+- for m in tmaps.maps.values():
++ for m in list(tmaps.maps.values()):
+ if len(m)==0:
+ raise Exception()
+
+@@ -698,7 +698,7 @@ def filter_fusion(bwt_idx_prefix, params
+
+ return min_value
+
+- kmer_len = len(seq_chr_dic.keys()[0])
++ kmer_len = len(list(seq_chr_dic.keys())[0])
+ sample_name = fusion.split("/")[0][len("tophat_"):]
+
+ data = os.getcwd().split('/')[-1]
+@@ -894,7 +894,7 @@ def filter_fusion(bwt_idx_prefix, params
+
+ fusion_file.close()
+
+- print >> sys.stderr, "[%s] Filtering fusions" % right_now()
++ print("[%s] Filtering fusions" % right_now(), file=sys.stderr)
+
+ seq_chr_dic = {}
+ seq_chr_file = open(output_dir + "fusion_seq.map", 'r')
+@@ -976,7 +976,7 @@ def filter_fusion(bwt_idx_prefix, params
+ juncs_file = file + '/junctions.bed'
+ if not os.path.exists(juncs_file):
+ juncs_file = None
+- print >> sys.stderr, 'Warning: could not find juctions.bed (%s).' % (juncs_file)
++ print('Warning: could not find juctions.bed (%s).' % (juncs_file), file=sys.stderr)
+
+ ref_file = "refGene.txt"
+ if not os.path.exists(ref_file):
+@@ -987,20 +987,20 @@ def filter_fusion(bwt_idx_prefix, params
+ ens_file = None
+
+ if juncs_file is None and ref_file is None and ens_file is None:
+- print >> sys.stderr, 'Warning: neither junctions.bed nor ref/ensGene.txt found.'
++ print('Warning: neither junctions.bed nor ref/ensGene.txt found.', file=sys.stderr)
+
+ junctions = load_junctions(ref_file, ens_file, juncs_file)
+
+- print >> sys.stderr, "\tProcessing:", fusion_file
++ print("\tProcessing:", fusion_file, file=sys.stderr)
+ filter_fusion_impl(fusion_file, junctions, refGene_list, ensGene_list, seq_chr_dic, fusion_gene_list)
+
+ fusion_out_file = output_dir + "potential_fusion.txt"
+ output_file = open(fusion_out_file, 'w')
+ for fusion_gene in fusion_gene_list:
+ for line in fusion_gene:
+- print >> output_file, line
++ print(line, file=output_file)
+ output_file.close()
+- print >> sys.stderr, '\t%d fusions are output in %s' % (len(fusion_gene_list), fusion_out_file)
++ print('\t%d fusions are output in %s' % (len(fusion_gene_list), fusion_out_file), file=sys.stderr)
+
+
+
+@@ -1035,7 +1035,7 @@ def wait_pids(pids):
+
+
+ def do_blast(params):
+- print >> sys.stderr, "[%s] Blasting 50-mers around fusions" % right_now()
++ print("[%s] Blasting 50-mers around fusions" % right_now(), file=sys.stderr)
+
+ file_name = output_dir + "potential_fusion.txt"
+ blast_genomic_out = output_dir + "blast_genomic"
+@@ -1111,7 +1111,7 @@ def do_blast(params):
+ blast(blast_nt, seq, blast_nt_out)
+
+ if not os.path.exists(output_dir + "blast_nt/" + seq ):
+- print >> sys.stderr, "\t%d. %s" % (count, line[:-1])
++ print("\t%d. %s" % (count, line[:-1]), file=sys.stderr)
+ if params.num_threads <= 1:
+ work()
+ else:
+@@ -1437,7 +1437,7 @@ def read_dist(params):
+ for i in range(len(alignments_list)):
+ output_region(sample_name, alignments_list[i], reads_list[i])
+
+- print >> sys.stderr, "[%s] Generating read distributions around fusions" % right_now()
++ print("[%s] Generating read distributions around fusions" % right_now(), file=sys.stderr)
+
+ file_name = output_dir + "potential_fusion.txt"
+ file = open(file_name, 'r')
+@@ -1465,7 +1465,7 @@ def read_dist(params):
+
+ pids = [0 for i in range(params.num_threads)]
+
+- for sample_name, list in alignments_list.items():
++ for sample_name, list in list(alignments_list.items()):
+ bam_file_name = 'tophat_%s/accepted_hits.bam' % sample_name
+ if not os.path.exists(bam_file_name):
+ continue
+@@ -1473,11 +1473,11 @@ def read_dist(params):
+ increment = 50
+ for i in range((len(list) + increment - 1) / increment):
+ temp_list = list[i*increment:(i+1)*increment]
+- print >> sys.stderr, '\t%s (%d-%d)' % (sample_name, i*increment + 1, min((i+1)*increment, len(list)))
++ print('\t%s (%d-%d)' % (sample_name, i*increment + 1, min((i+1)*increment, len(list))), file=sys.stderr)
+
+ alignments_list = []
+ for fusion in temp_list:
+- print >> sys.stderr, '\t\t%s' % fusion
++ print('\t\t%s' % fusion, file=sys.stderr)
+ fusion = fusion.split()
+ alignments_list.append([fusion[0], int(fusion[1]), int(fusion[2]), fusion[3]])
+
+@@ -2010,7 +2010,7 @@ def generate_html(params):
+ num_fusions = 0
+ num_alternative_splicing_with_same_fusion = 0
+
+- for key, value in fusion_dic.items():
++ for key, value in list(fusion_dic.items()):
+ num_fusions += 1
+ num_alternative_splicing_with_same_fusion += (value - 1)
+
+@@ -2215,20 +2215,20 @@ def generate_html(params):
+ indices_list += c["index"]
+
+ indices_list = sorted(indices_list)
+- print >> sys.stderr, "\tnum of fusions: %d" % len(cluster_list)
++ print("\tnum of fusions: %d" % len(cluster_list), file=sys.stderr)
+
+ if params.tex_table:
+ tex_table_file_name = output_dir + 'table.tex'
+ tex_table_file = open(tex_table_file_name, 'w')
+- print >> tex_table_file, r'\documentclass{article}'
+- print >> tex_table_file, r'\usepackage{graphicx}'
+- print >> tex_table_file, r'\begin{document}'
+- print >> tex_table_file, r'\pagestyle{empty}'
+- print >> tex_table_file, r"\center{\scalebox{0.7}{"
+- print >> tex_table_file, r"\begin{tabular}{| c | c | c | c | c | c | c |}"
+- print >> tex_table_file, r"\hline"
+- print >> tex_table_file, r"SAMPLE ID & Fusion genes (left-right) & Chromosomes (left-right) & " + \
+- r"5$'$ position & 3$'$ position & Spanning reads & Spanning pairs \\"
++ print(r'\documentclass{article}', file=tex_table_file)
++ print(r'\usepackage{graphicx}', file=tex_table_file)
++ print(r'\begin{document}', file=tex_table_file)
++ print(r'\pagestyle{empty}', file=tex_table_file)
++ print(r"\center{\scalebox{0.7}{", file=tex_table_file)
++ print(r"\begin{tabular}{| c | c | c | c | c | c | c |}", file=tex_table_file)
++ print(r"\hline", file=tex_table_file)
++ print(r"SAMPLE ID & Fusion genes (left-right) & Chromosomes (left-right) & " + \
++ r"5$'$ position & 3$'$ position & Spanning reads & Spanning pairs \\", file=tex_table_file)
+
+ html_prev.append(r'<HTML>')
+ html_prev.append(r'<HEAD>')
+@@ -2261,7 +2261,7 @@ def generate_html(params):
+ chr1, chr2 = cluster["chr"].split('-')
+
+ if params.tex_table:
+- print >> tex_table_file, r'\hline'
++ print(r'\hline', file=tex_table_file)
+
+ html_post.append(r'<P><P><P><BR>')
+ html_post.append(r'%d. %s %s' % (c+1, cluster["chr"], cluster["dir"]))
+@@ -2287,16 +2287,16 @@ def generate_html(params):
+ else:
+ sample_name2 = "testes"
+
+- print >> tex_table_file, r'%s & %s & %s & %d & %d & %d & %d \\' % \
++ print(r'%s & %s & %s & %d & %d & %d & %d \\' % \
+ (sample_name2,
+ fusion["gene1"] + "-" + fusion["gene2"],
+ chr1[3:] + "-" + chr2[3:],
+ fusion["left_coord"],
+ fusion["right_coord"],
+ stats[0],
+- stats[1] + stats[2])
++ stats[1] + stats[2]), file=tex_table_file)
+
+- print >> txt_file, '%s\t%s\t%s\t%d\t%s\t%s\t%d\t%d\t%d\t%d\t%.2f' % \
++ print('%s\t%s\t%s\t%d\t%s\t%s\t%d\t%d\t%d\t%d\t%.2f' % \
+ (sample_name,
+ fusion["gene1"],
+ chr1,
+@@ -2307,7 +2307,7 @@ def generate_html(params):
+ stats[0],
+ stats[1],
+ stats[2],
+- fusion["score"])
++ fusion["score"]), file=txt_file)
+
+ html_post.append(r'<TR><TD ALIGN="LEFT"><a href="#fusion_%d">%s</a></TD>' % (i, sample_name))
+ html_post.append(r'<TD ALIGN="LEFT">%s</TD>' % fusion["gene1"])
+@@ -2534,7 +2534,7 @@ def generate_html(params):
+ chr2_exon_list = combine_exon_list(chr2_exon_list)
+
+ max_intron_length = 100
+- def calculate_chr_coord(exon_list, x = sys.maxint):
++ def calculate_chr_coord(exon_list, x = sys.maxsize):
+ exon = exon_list[0]
+ chr_length = exon[1] - exon[0] + 1
+ if x <= exon[1]:
+@@ -2784,19 +2784,19 @@ def generate_html(params):
+ html_body.append(r'<BODY text="#000000" bgcolor="#FFFFFF" onload="%s">' % fusion_gene_draw_str)
+
+ for line in html_prev + javascript + html_body + html_post:
+- print >> html_file, line
++ print(line, file=html_file)
+
+ html_file.close()
+ txt_file.close()
+
+ if params.tex_table:
+- print >> tex_table_file, r"\hline"
+- print >> tex_table_file, r"\end{tabular}}}"
+- print >> tex_table_file, r'\end{document}'
++ print(r"\hline", file=tex_table_file)
++ print(r"\end{tabular}}}", file=tex_table_file)
++ print(r'\end{document}', file=tex_table_file)
+ tex_table_file.close()
+ os.system("pdflatex --output-directory=%s %s" % (output_dir, tex_table_file_name))
+
+- print >> sys.stderr, "[%s] Reporting final fusion candidates in html format" % right_now()
++ print("[%s] Reporting final fusion candidates in html format" % right_now(), file=sys.stderr)
+
+ # Cufflinks-Fusion
+ fusion_gene_list = []
+@@ -2833,7 +2833,7 @@ def tmp_name():
+
+ def die(msg=None):
+ if msg is not None:
+- print >> sys.stderr, msg
++ print(msg, file=sys.stderr)
+ sys.exit(1)
+
+
+@@ -2881,8 +2881,8 @@ def main(argv=None):
+
+ bwt_idx_prefix = args[0]
+
+- print >> sys.stderr, "[%s] Beginning TopHat-Fusion post-processing run (v%s)" % (right_now(), get_version())
+- print >> sys.stderr, "-----------------------------------------------"
++ print("[%s] Beginning TopHat-Fusion post-processing run (v%s)" % (right_now(), get_version()), file=sys.stderr)
++ print("-----------------------------------------------", file=sys.stderr)
+
+ start_time = datetime.now()
+ prepare_output_dir()
+@@ -2907,16 +2907,16 @@ def main(argv=None):
+ run_log = open(logging_dir + "run.log", "w", 0)
+ global run_cmd
+ run_cmd = " ".join(argv)
+- print >> run_log, run_cmd
++ print(run_cmd, file=run_log)
+
+ finish_time = datetime.now()
+ duration = finish_time - start_time
+- print >> sys.stderr,"-----------------------------------------------"
+- print >> sys.stderr, "[%s] Run complete [%s elapsed]" % (right_now(), formatTD(duration))
++ print("-----------------------------------------------", file=sys.stderr)
++ print("[%s] Run complete [%s elapsed]" % (right_now(), formatTD(duration)), file=sys.stderr)
+
+- except Usage, err:
+- print >> sys.stderr, sys.argv[0].split("/")[-1] + ": " + str(err.msg)
+- print >> sys.stderr, "\tfor detailed help see http://tophat-fusion.sourceforge.net/manual.html"
++ except Usage as err:
++ print(sys.argv[0].split("/")[-1] + ": " + str(err.msg), file=sys.stderr)
++ print("\tfor detailed help see http://tophat-fusion.sourceforge.net/manual.html", file=sys.stderr)
+ return 2
+
+
+--- a/configure.ac
++++ b/configure.ac
+@@ -7,7 +7,7 @@ AC_CONFIG_HEADERS([config.h])
+ # AC_CONFIG_AUX_DIR([build-aux])
+ AM_INIT_AUTOMAKE
+
+-AC_ARG_VAR(PYTHON, [python program])
++AC_ARG_VAR(PYTHON, [python3 program])
+
+ # Make sure CXXFLAGS is defined so that AC_PROG_CXX doesn't set it.
+ CXXFLAGS="$CXXFLAGS"
+@@ -131,6 +131,6 @@ echo \
+ "
+
+ if test x"${PYTHON}" = x":" || ! test -x "${PYTHON}"; then
+- echo "WARNING! python was not found and is required to run tophat"
+- echo " Please install python and point configure to the installed location"
++ echo "WARNING! python3 was not found and is required to run tophat"
++ echo " Please install python3 and point configure to the installed location"
+ fi
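(Taken together, the hunks above are instances of a small set of mechanical 2to3 rewrites. A minimal self-contained sketch of the idioms involved — the values below are illustrative, not taken from the tophat sources:)

    import sys

    # print statement -> print() function, including stream redirection:
    # Python 2's  print >> sys.stderr, "msg"  becomes:
    print("msg", file=sys.stderr)

    # except E, name  ->  except E as name
    try:
        raise OSError("demo")
    except OSError as o:
        print(str(o), file=sys.stderr)

    # xrange() is gone; range() is already lazy, so 2to3 substitutes it,
    # wrapping in list() where the old code relied on getting a real list
    stages = list(range(1, 9))

    # dict.keys()/.items()/.values() return views, hence the list() wrappers
    seq_chr_dic = {"ACGT": ["chr1"]}
    kmer_len = len(list(seq_chr_dic.keys())[0])

    # sys.maxint no longer exists; sys.maxsize is the stock replacement
    cap = sys.maxsize

(Two things 2to3 does not catch, still visible in the patch: "/" keeps float semantics, so range(len(seq) / line_len + 1) in the fa_write hunk will raise TypeError on Python 3 and wants // instead, and the cmp-style sq_dict_lines.sort(lambda x,y: cmp(x[0],y[0])) needs a key= rewrite, since both cmp() and the positional comparator argument are gone.)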
=====================================
debian/patches/series
=====================================
@@ -4,3 +4,4 @@ fix-gcc6.patch
remove-convenience-copy-of-samtools.patch
fix-compatibility-with-recent-samtools.patch
fix_makefile.am.patch
+2to3.patch
=====================================
debian/rules
=====================================
@@ -8,7 +8,7 @@ mandir=$(CURDIR)/debian/$(DEB_SOURCE)/usr/share/man/man1
bindir=$(CURDIR)/debian/$(DEB_SOURCE)/usr/bin
# according to dpkg-architecture(1) this should be set to support manual invocation
-DEB_HOST_MULTIARCH ?= $(shell dpkg-architecture -qDEB_HOST_MULTIARCH)
+include /usr/share/dpkg/architecture.mk
%:
dh $@ --no-parallel
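(Background on the hunk above: /usr/share/dpkg/architecture.mk pre-sets the whole DEB_BUILD_*/DEB_HOST_*/DEB_TARGET_* family of variables, DEB_HOST_MULTIARCH included, reusing values already exported by dpkg-buildpackage instead of forking dpkg-architecture for each one — which is what the lintian tag cited in commit 69afa00f recommends, while still supporting manual invocation of the rules file.)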
=====================================
debian/upstream/metadata
=====================================
@@ -1,5 +1,3 @@
-Contact: Cole Trapnell <cole at cs.umd.edu>
-Name: TopHat
Reference:
author: Cole Trapnell and Lior Pachter and Steven L. Salzberg
title: >
@@ -12,13 +10,13 @@ Reference:
DOI: 10.1093/bioinformatics/btp120
PMID: 19289445
URL: http://bioinformatics.oxfordjournals.org/content/25/9/1105.short
- eprint: "http://bioinformatics.oxfordjournals.org/content/\
- 25/9/1105.full.pdf+html"
+ eprint: "http://bioinformatics.oxfordjournals.org/content/25/9/1105.full.pdf+html"
license: Open Access
Registry:
- - Name: OMICtools
- Entry: OMICS_01257
- - Name: bio.tools
- Entry: tophat
- - Name: SciCrunch
- Entry: SCR_013035
+- Name: OMICtools
+ Entry: OMICS_01257
+- Name: bio.tools
+ Entry: tophat
+- Name: SciCrunch
+ Entry: SCR_013035
+Bug-Submit: tophat.cufflinks at gmail.com
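(The Registry change above is pure YAML re-indentation — same entries, with the list items moved to column one — while Bug-Submit, added per commit 32d6ceb1, records where upstream accepts bug reports; for tophat that is an e-mail address rather than a tracker URL.)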
View it on GitLab: https://salsa.debian.org/med-team/tophat/compare/c68101842742c8608b5640e4fad9a0381fdbfde5...f14dd955c6a7246cad81772c5897cd6f73d81d66
--
View it on GitLab: https://salsa.debian.org/med-team/tophat/compare/c68101842742c8608b5640e4fad9a0381fdbfde5...f14dd955c6a7246cad81772c5897cd6f73d81d66
You're receiving this email because of your account on salsa.debian.org.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://alioth-lists.debian.net/pipermail/debian-med-commit/attachments/20191011/7b77ce42/attachment-0001.html>
More information about the debian-med-commit
mailing list