[med-svn] [Git][med-team/arden][master] 5 commits: Use 2to3 to port to Python3

Andreas Tille gitlab at salsa.debian.org
Fri Sep 13 09:09:46 BST 2019



Andreas Tille pushed to branch master at Debian Med / arden


Commits:
b1341df9 by Andreas Tille at 2019-09-13T07:53:11Z
Use 2to3 to port to Python3

- - - - -
69d081d5 by Andreas Tille at 2019-09-13T07:53:48Z
Packaging with Python3 modules

- - - - -
38f79849 by Andreas Tille at 2019-09-13T07:56:21Z
debhelper-compat 12

- - - - -
b3d7525a by Andreas Tille at 2019-09-13T07:56:34Z
Standards-Version: 4.4.0

- - - - -
a57e711a by Andreas Tille at 2019-09-13T08:09:17Z
Fix incomplete 2to3 patch

- - - - -
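The converted sources in the patch below carry two spellings of the old Python 2 trailing-comma print: some call sites became print(x,), which in Python 3 still ends the line, while others became print(x, end=' '), which preserves the Python 2 no-newline behaviour. A minimal sketch of the difference (the "value" variable is only illustrative and not taken from the arden sources):

    # Illustrative only; "value" is a made-up variable, not from arden.
    value = "bowtie"

    # Python 2 wrote:  print value,   (the trailing comma suppressed the newline)
    print(value,)           # Python 3: the comma is redundant, a newline is still printed
    print(value, end=' ')   # Python 3 equivalent of the old no-newline behaviour
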


6 changed files:

- debian/changelog
- − debian/compat
- debian/control
- + debian/patches/2to3_new.patch
- debian/patches/series
- debian/rules


Changes:

=====================================
debian/changelog
=====================================
@@ -1,8 +1,14 @@
 arden (1.0-5) UNRELEASED; urgency=medium
 
+  [ Jelmer Vernooij ]
   * Use secure copyright file specification URI.
 
- -- Jelmer Vernooij <jelmer at debian.org>  Tue, 16 Oct 2018 22:13:51 +0000
+  [ Andreas Tille ]
+  * Use 2to3 to port to Python3
+  * debhelper-compat 12
+  * Standards-Version: 4.4.0
+
+ -- Andreas Tille <tille at debian.org>  Fri, 13 Sep 2019 09:51:05 +0200
 
 arden (1.0-4) unstable; urgency=medium
 


=====================================
debian/compat deleted
=====================================
@@ -1 +0,0 @@
-11


=====================================
debian/control
=====================================
@@ -3,10 +3,10 @@ Maintainer: Debian Med Packaging Team <debian-med-packaging at lists.alioth.debian.
 Uploaders: Andreas Tille <tille at debian.org>
 Section: science
 Priority: optional
-Build-Depends: debhelper (>= 11~),
+Build-Depends: debhelper-compat (= 12),
                dh-python,
-               python-all
-Standards-Version: 4.1.5
+               python3-all
+Standards-Version: 4.4.0
 Vcs-Browser: https://salsa.debian.org/med-team/arden
 Vcs-Git: https://salsa.debian.org/med-team/arden.git
 Homepage: http://sourceforge.net/projects/arden/
@@ -14,11 +14,11 @@ Homepage: http://sourceforge.net/projects/arden/
 Package: arden
 Architecture: all
 Depends: ${misc:Depends},
-         ${python:Depends},
-         python-scipy,
-         python-numpy,
-         python-htseq,
-         python-matplotlib
+         ${python3:Depends},
+         python3-scipy,
+         python3-numpy,
+         python3-htseq,
+         python3-matplotlib
 Description: specificity control for read alignments using an artificial reference
  ARDEN (Artificial Reference Driven Estimation of false positives in NGS
  data) is a novel benchmark that estimates error rates based on real


=====================================
debian/patches/2to3_new.patch
=====================================
@@ -0,0 +1,659 @@
+Description: Use 2to3 to port to Python3
+Author: Andreas Tille <tille at debian.org>
+Last-Update: Fri, 13 Sep 2019 09:51:05 +0200
+
+--- a/README
++++ b/README
+@@ -42,7 +42,7 @@ Table of Contents
+  
+   4. Installation 
+  ---------------------------------------------------------------------------
+- ARDEN is a collection of python scripts and therefore needs no installation.
++ ARDEN is a collection of python3 scripts and therefore needs no installation.
+  It was built and tested on a Linux platform. However, for convenient access it
+  is recommended to do the following adjustments to your environment variables:
+ 	export PATH = $PATH:/path/to/ARDEN
+@@ -51,9 +51,10 @@ Table of Contents
+   5. Dependencies 
+  ---------------------------------------------------------------------------
+  ARDEN makes use of the following python packages, that are all mandatory
+- (ARDEN was developed under Python 2.7):
++ (ARDEN was developed under Python 2.7 but ported to Python3 by the Debian
++ Med team):
+  
+-	* Python 2.7, http://www.python.org/
++	* Python 3.7, http://www.python.org/
+ 	* NumPy 1.6.1, http://numpy.scipy.org/
+ 	* SciPy 0.10.0, http://www.scipy.org/
+ 	* HTSeq 0.5.3p9,	 https://pypi.python.org/pypi/HTSeq
+--- a/arden-analyze
++++ b/arden-analyze
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ """
+ Created on Thu Aug 18  2012
+ 
+@@ -64,7 +64,7 @@ def TestFiles(input):
+             fobj.close()
+         except IOError:
+             # nope --> error
+-            print "error opening file %s. Does it exist?" % controlDic["mapper"][mapper][0][0]
++            print("error opening file %s. Does it exist?" % controlDic["mapper"][mapper][0][0])
+             error =1
+             
+         # do the same for the artificial reference mappings
+@@ -74,7 +74,7 @@ def TestFiles(input):
+                 fobj = open(artificial,"r")
+                 fobj.close()
+             except IOError:
+-                print "error opening file %s. Does it exist?" % artificial
++                print("error opening file %s. Does it exist?" % artificial)
+                 error =1
+     return(error)
+             
+@@ -118,20 +118,20 @@ def main():
+     controlDic = AM.readInput(input)
+    
+     print("#######################################################################################")
+-    print ("Input Settings:")
+-    print ("Inputfile:\t%s" % input)
+-    print ("Outputpath:\t%s" % outpath)
+-    print ("Reads:\t%s" % controlDic["fastqfile"])
+-    print ("UseInternalRank:\t%s" % UseInternalRank)
+-    print ("Phred:\t%s" % phred)
++    print("Input Settings:")
++    print("Inputfile:\t%s" % input)
++    print("Outputpath:\t%s" % outpath)
++    print("Reads:\t%s" % controlDic["fastqfile"])
++    print("UseInternalRank:\t%s" % UseInternalRank)
++    print("Phred:\t%s" % phred)
+     print("#######################################################################################")
+-    print ("\r\nStart...")
++    print("\r\nStart...")
+     error = TestFiles(input)
+     if error == 1:
+-        print "Program aborted. Check your inputfiles!"
++        print("Program aborted. Check your inputfiles!")
+         sys.exit()
+     else:
+-        print "Inputfiles checked successfully."
++        print("Inputfiles checked successfully.")
+     
+     # check if output dir exists; else create output 
+     if not os.path.exists(os.path.dirname(outpath)):
+@@ -164,8 +164,8 @@ def main():
+ 
+     
+     #### looping #####
+-    print ("########Progress (analysis):########")
+-    print ("Mapper\t\tOutputfile\t\tOverallAlignments (TP/FP)\t\t Time")
++    print("########Progress (analysis):########")
++    print("Mapper\t\tOutputfile\t\tOverallAlignments (TP/FP)\t\t Time")
+     for mapper in controlDic["mapper"].keys():
+         
+         
+@@ -175,7 +175,7 @@ def main():
+         # set refname
+         refname = controlDic["mapper"][mapper][0][0].split("/")[-1]
+         
+-        print ("%s" %mapper.upper()),
++        print("%s" %mapper.upper()),
+         
+         if UseInternalRank == 1:
+             rankDic = AM.GetOrderDictionary(controlDic["mapper"][mapper][0][0])
+@@ -199,17 +199,17 @@ def main():
+             resultdic["mreads"][outfile] = MappedReads
+             resultdic["id"].append(outfile)
+             
+-            print (outfile),
+-            print  ("\t%0.2f (%0.2f/%0.2f)" %(tp+fp,tp,fp)),
+-            print ("\t%0.2f sec." %(end-start))
++            print(outfile),
++            print("\t%0.2f (%0.2f/%0.2f)" %(tp+fp,tp,fp)),
++            print("\t%0.2f sec." %(end-start))
+     ##################################################################################################################################
+ 
+   
+     # start evaluation process. SAM files are not getting touched anymore
+-    print ("\r\n########Progress (evaluation):########")    
+-    print ("Mapper\tTP\tFP\tTime")
++    print("\r\n########Progress (evaluation):########")
++    print("Mapper\tTP\tFP\tTime")
+     for resultfile in resultdic["id"]:
+-        print resultfile.split("_")[0],
++        print(resultfile.split("_")[0],)
+         
+         #do sme name generation for this iteration
+         prefix   = outpath+resultfile.split(".")[0]
+@@ -222,8 +222,8 @@ def main():
+         tp =[i for i in dataArray if i.mr[0] ==1]
+         fp =[i for i in dataArray if i.mr[0] ==0]
+         
+-        print ("\t %d"  % resultdic["alngts"][resultfile][0]), # TP
+-        print ("\t %d"  % resultdic["alngts"][resultfile][1]), # FP
++        print("\t %d"  % resultdic["alngts"][resultfile][0]), # TP
++        print("\t %d"  % resultdic["alngts"][resultfile][1]), # FP
+         
+         start = time.time()
+         #try:
+@@ -233,7 +233,7 @@ def main():
+             
+             
+         end = time.time()
+-        print ("\t%0.2f" %(end-start))
++        print("\t%0.2f" %(end-start))
+     
+     # prepare labels and names for plotting the data
+     ids =[i for i in resultdic['id']]
+@@ -273,5 +273,5 @@ if __name__ == "__main__":
+     a = time.time()
+     main()
+     b = time.time()
+-    print ("Finished!")
+-    print ("Done in %0.2f seconds!" % (b-a))
+\ No newline at end of file
++    print("Finished!")
++    print("Done in %0.2f seconds!" % (b-a))
+--- a/arden-create
++++ b/arden-create
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ """
+ Created 2012
+ Main script for generating an artificial reference genome. It  calls various modules from core.
+@@ -64,8 +64,8 @@ def printExample():
+     
+     """
+     print("\r\nEXAMPLES:\r\n")
+-    print ("Here are 3 simple examples with all kinds of different settings. The general input scheme is:")
+-    print ("usage: python createAR.py [options] [OUTPUTFOLDER] [INPUT FASTA])\r\n")
++    print("Here are 3 simple examples with all kinds of different settings. The general input scheme is:")
++    print("usage: python createAR.py [options] [OUTPUTFOLDER] [INPUT FASTA])\r\n")
+     print("Case 1:")
+     print("\t -f random.fasta (inputfile)")
+     print("\t -p /home/ART (outputpath)")
+@@ -150,7 +150,7 @@ def Create():
+ 
+     # adjust to complete path
+     #outputpath = os.path.abspath(outputpath+"/")
+-    print outputpath
++    print(outputpath)
+     
+     # create output directory if necessary
+     if not os.path.exists(os.path.dirname(outputpath)):
+@@ -170,7 +170,7 @@ def Create():
+     print("rev:\t\t" + str(rev))
+     print("random:\t\t" + str(random))
+     print("###################################################")
+-    print ("\r\nStart:")
++    print("\r\nStart:")
+     print("\tReading DNA...")
+     Reference = RAW.readdna(inputpath)
+     
+@@ -249,8 +249,8 @@ def Create():
+ a = time.time()
+ Create()
+ b = time.time()
+-print ("Finished!")
+-print ("Processed %d bases in %f seconds!" % (bases,b-a))
++print("Finished!")
++print("Processed %d bases in %f seconds!" % (bases,b-a))
+ 
+ 
+     
+--- a/arden-filter
++++ b/arden-filter
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ """
+ Script for filtering alignments with deficient properties (rqs,gaps, mismatches). The script generates a new .SAM file from the input SAM file, where only alignments are
+ contained that fulfill certain quality thresholds.
+@@ -82,9 +82,9 @@ def writeFilteredReads(iddic,fastqfile,f
+             
+             counter+=1
+     
+-    #print fastqoutput
+-    print (str(len(iddic)) + " ids" )
+-    print (str(counter) + " reads to file written..." )
++    #print(fastqoutput)
++    print(str(len(iddic)) + " ids" )
++    print(str(counter) + " reads to file written..." )
+     fastqoutput.close()
+ 
+ def getMisGapRQ(alngt):
+@@ -174,16 +174,16 @@ def FilterSAM(samfile,MISM,GAPS,RQ,fsam)
+             else:
+                 counter2 +=1
+         
+-        #print ("%s\t%s\t%s\r\n" % (rqs,gaps,mismatches))
++        #print("%s\t%s\t%s\r\n" % (rqs,gaps,mismatches))
+                 
+         #test.write("%s\t%s\t%s\r\n" % (rqs,gaps,mismatches))
+                 
+-    print ("PASSED\tREMOVED\tOVERALL")
+-    print counter1,
+-    print "\t",
+-    print counter2,
+-    print "\t",
+-    print counter1+counter2
++    print("PASSED\tREMOVED\tOVERALL")
++    print(counter1,)
++    print("\t",)
++    print(counter2,)
++    print("\t",)
++    print(counter1+counter2)
+     
+ 
+ def readHeader(samfile):
+@@ -256,7 +256,7 @@ def filter():
+     print("MM <=:\t\t" +  str(mismatches))
+     print("GAPS <=:\t" + str(gaps))
+     print("###################################################")
+-    print ("Start...")
++    print("Start...")
+     a = time.time()    
+     header = readHeader(input)
+     
+@@ -269,7 +269,7 @@ def filter():
+     # close current Readfile
+     fsam.close()
+     b = time.time()
+-    print ("DONE in %f seconds!" %(b-a))
++    print("DONE in %f seconds!" %(b-a))
+ 
+         
+ if __name__ == "__main__":
+--- a/core/AnalyseMapping.py
++++ b/core/AnalyseMapping.py
+@@ -1,8 +1,8 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ """ This script is used to evalulate the mapping results """
+ 
+ import HTSeq
+-import cPickle as pickle
++import pickle as pickle
+ import numpy as np
+ import time
+ import sys
+@@ -26,7 +26,7 @@ def savepickle(dictionaary, outputname):
+ 
+ def loadpickle(inputname):
+     dictionary = pickle.load(open(inputname + ".p"))
+-    print("Loaded " + inputname + ".pickle!")
++    print(("Loaded " + inputname + ".pickle!"))
+     return (dictionary)
+     
+ 
+@@ -201,7 +201,7 @@ def GetOrderDictionary(referenceSAM):
+         i += 1
+     
+     index = 0
+-    for k in xrange(i, len(readnames)):
++    for k in range(i, len(readnames)):
+         if readnames[k] in internalDic:
+             pass
+         else:
+@@ -331,10 +331,10 @@ def CompareAlignments(reflist, artlist,f
+          file.write(RefRead.toStrNoMem("ref"))
+     
+     # add 1 for every alignment from the artificial that is found on the reference
+-    for i in xrange(0, artlen):
++    for i in range(0, artlen):
+         # bool which decides if artificial in reference hit
+         ArtInRef = 0
+-        for j in xrange(0,reflen):
++        for j in range(0,reflen):
+             if artlist[i].start == reflist[j].start and artlist[i].end == reflist[j].end:
+                 ArtInRef = 1
+                 break
+@@ -455,7 +455,7 @@ def ReadSAMnoMem(ref, art, output, compa
+     while 1:
+         i += 1
+         if i == 100000:
+-            print "\t\t%f" % (time.time() - start),
++            print("\t\t%f" % (time.time() - start), end=' ')
+         
+         RefReadRank,ArtReadRank = getRanks(RefRead,ArtRead,rankdic)
+         
+@@ -465,7 +465,7 @@ def ReadSAMnoMem(ref, art, output, compa
+         #while (RefRead[0].id > ArtRead[0].id and hasNextLine != 0):
+         while (RefReadRank > ArtReadRank and hasNextLine != 0):
+             # in case of multiple hits, set all mr variables to  0 and get the best alignment
+-            for l in xrange(0, len(ArtRead)):
++            for l in range(0, len(ArtRead)):
+                 ArtRead[l].mr = [0]
+             
+             # write FP to file
+@@ -541,7 +541,7 @@ def ReadSAMnoMem(ref, art, output, compa
+         while (RefReadRank > ArtReadRank and hasNextLine != 0):
+             MappedReads+=1
+             # in case of multiple hits, set all mr variables to  0 and get the best alignment
+-            for l in xrange(0, len(ArtRead)):
++            for l in range(0, len(ArtRead)):
+                 ArtRead[l].mr = [0]
+     
+              # write current FP to file
+@@ -821,7 +821,7 @@ def extendReadDic(readdic):
+     """
+     internalnaming = 0
+     reverseReadDic = np.zeros(len(readdic), dtype='S50')
+-    for id in readdic.iterkeys():
++    for id in readdic.keys():
+         # i
+         readdic[id] = ReadID(internalnaming, readdic[id])
+         reverseReadDic[internalnaming] = id
+@@ -867,8 +867,8 @@ def CreateCompareList(Reference, ARG):
+     
+     
+     if (len(reference) != len(artificialreference)):
+-        print "first 10 letter:\t\t" + reference[0:10] + "..." + reference[-10:]
+-        print "last 10 letter:\t\t" + artificialreference[0:10] + "..." + artificialreference[-10:]
++        print("first 10 letter:\t\t" + reference[0:10] + "..." + reference[-10:])
++        print("last 10 letter:\t\t" + artificialreference[0:10] + "..." + artificialreference[-10:])
+         print ("Error! Two Sequences have different length! Try to add a line break after the last nucleotide.")
+         sys.exit(1)
+     else:
+@@ -921,7 +921,7 @@ def readSAMline(alignment, identifier, c
+         except:
+             gaps = 0
+             mism = 0
+-            print "Error Reading MDtag of %s.Setting gaps = 0,mism = 0" % readname
++            print("Error Reading MDtag of %s.Setting gaps = 0,mism = 0" % readname)
+         try:
+             nm = int(getNumberOf(tags, "NM"))
+         except:
+@@ -993,7 +993,7 @@ def ReadArtificialSAMfileHTSeq(art, comp
+     for alignment in fobj:
+         k += 1
+         if k % 1000000 == 0:
+-            print ("%d.." %(k/1000000)),
++            print(("%d.." %(k/1000000)), end=' ')
+         read, readname = isSaneAlignment(alignment, "art", compareList, readdic)
+         if read == 0:
+             pass
+@@ -1019,7 +1019,7 @@ def ReadArtificialSAMfileHTSeq(art, comp
+     fobj.close()
+     end = time.time()
+     #print("\r\n")
+-    print ("\t %f " % (end - start)),
++    print(("\t %f " % (end - start)), end=' ')
+     #print ("\tdone in %d seconds" % (end-start))
+     return(artdic)
+ 
+@@ -1039,7 +1039,7 @@ def writeToTabArt(ReadDic, outfile):
+     """
+ 
+     fobj = open(outfile, "a")
+-    for read in ReadDic.keys():
++    for read in list(ReadDic.keys()):
+        BestReadIndex = np.array(ReadDic[read].nm).argmin()
+        fobj.write("%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\r\n" % (read, ReadDic[read].mr[BestReadIndex], ReadDic[read].subs[BestReadIndex], ReadDic[read].nm[BestReadIndex], ReadDic[read].rq, ReadDic[read].mq[BestReadIndex], ReadDic[read].start[BestReadIndex], ReadDic[read].end[BestReadIndex], ReadDic[read].gaps[BestReadIndex], ReadDic[read].mism[BestReadIndex]))
+     fobj.close()
+@@ -1065,7 +1065,7 @@ def writeToTabRef(RefArray, outfile, rev
+     """
+ 
+     fobj = open(outfile, "a")
+-    for i in xrange(0, len(RefArray)):
++    for i in range(0, len(RefArray)):
+         read = RefArray[i]
+         # check if entry == 0
+         if read == 0:
+--- a/core/FindOrfs.py
++++ b/core/FindOrfs.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ 
+ """
+@@ -26,7 +26,7 @@ def build_ORF(sequence,file_ORF,pdic):
+     
+     START,STOP,STARTrev,STOPrev = find_orfs(sequence,pdic)
+     t=open(file_ORF,'a')
+-    print "\t search forward orfs..."
++    print("\t search forward orfs...")
+     orf_counter = 0
+     orf_name = "forw"
+     for i in START:
+@@ -49,7 +49,7 @@ def build_ORF(sequence,file_ORF,pdic):
+     
+     orf_counter = 0
+     orf_name = "rev"
+-    print "\t search backward orfs..."
++    print("\t search backward orfs...")
+     for i in STARTrev:
+         for j in STOPrev:
+             #3 conditions
+@@ -66,7 +66,7 @@ def build_ORF(sequence,file_ORF,pdic):
+                 t.write('\n')
+                 pdic[i]="E"
+                 pdic[j]="S"
+-    print(str(temp+orf_counter))
++    print((str(temp+orf_counter)))
+     t.close()
+     
+     return (pdic)
+@@ -207,10 +207,10 @@ def find_orfs(genomeSequence,pdic):
+             else:
+                 break
+     ###test##################
+-    print("START codons : "  + str(len(start)))
+-    print("STOP codons : "  +str(len(stop)))
+-    print("revSTART codons : "  +str(len(startRev)))
+-    print("revSTOP codons : "  +str(len(stopRev)))
++    print(("START codons : "  + str(len(start))))
++    print(("STOP codons : "  +str(len(stop))))
++    print(("revSTART codons : "  +str(len(startRev))))
++    print(("revSTOP codons : "  +str(len(stopRev))))
+     
+     
+     # FORWARD
+--- a/core/InsertMutations.py
++++ b/core/InsertMutations.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ """
+ Created 2012
+ core Script for the generation of the artificial reference genome
+@@ -16,7 +16,7 @@ unbalanced mutations, but this will caus
+ """
+ 
+ import random as r
+-import Prep as INI
++from . import Prep as INI
+ 
+ 
+ 
+@@ -34,7 +34,7 @@ def getMutation(AA,Codon):
+     """
+     temp_mutationlist = []
+     '''create a list of possible triplets within hamming distance 1 '''
+-    for item in INI.genetic_code.keys():
++    for item in list(INI.genetic_code.keys()):
+         isvalid = INI.isvalidtriplet(item,Codon)
+         ''' Hamming distance 1, AA is not equal to the given AA,forbid mutation to stopcodon '''
+         if (isvalid == True and AA !=INI.genetic_code[item] and INI.genetic_code[item]!="*"):
+@@ -280,9 +280,9 @@ def mutate_random(DNA,AminoAcid,distance
+             
+         # stats (INI.savepickle(pdic,header+"_pdic_e"))
+         print("\r\n########Some stats:########")
+-        print("DNA length:\t" + str(len(DNA)))
+-        print("max substitutions:\t" + str(len(DNA)/distance))
+-        print("#Balanced Mutations:\t" + str(succ_counter))
++        print(("DNA length:\t" + str(len(DNA))))
++        print(("max substitutions:\t" + str(len(DNA)/distance)))
++        print(("#Balanced Mutations:\t" + str(succ_counter)))
+         
+         
+         return ("".join(dna_list))
+\ No newline at end of file
+--- a/core/PlotData.py
++++ b/core/PlotData.py
+@@ -29,7 +29,7 @@ import pylab as p
+ 
+ def trapezoidal_rule(x, y):
+     """Approximates the integral through the points a,b"""
+-    index =  [i+1 for i in xrange(len(x)-1)]
++    index =  [i+1 for i in range(len(x)-1)]
+     xdiff = np.array([x[i]-x[i-1] for i in index])
+     ysum = np.array([y[i]+y[i-1] for i in index])
+     return(np.dot(xdiff,ysum)/2)
+@@ -105,7 +105,7 @@ def CalculateRoc2(dataArray,prefix,reads
+ 
+                             tempids = [m.id for m in temparray]
+                             uniquereads = {}
+-                            for i in xrange(0,len(tempids)):
++                            for i in range(0,len(tempids)):
+                                 uniquereads[tempids[i]] = ""
+ 
+                             mappedreads = len(uniquereads)
+@@ -191,7 +191,7 @@ def CalculateRoc2(dataArray,prefix,reads
+     
+     fobj =  open(prefix+"_roctable.txt","w")
+     fobj.write("RQ\tGAPS\tMM\tPTP\tFP\tP\tSn\t1-Sp\tF\r\n")
+-    for i in xrange(0,len(rocvector),1):
++    for i in range(0,len(rocvector),1):
+         temp = [str(k) for k in rocvector[i]]
+         tempstr = "\t".join(temp)
+         fobj.write(tempstr+"\r\n")
+--- a/core/Prep.py
++++ b/core/Prep.py
+@@ -7,7 +7,7 @@ Contains various help functions which in
+ 
+ @author: Sven Giese'''
+ 
+-import cPickle as pickle
++import pickle as pickle
+ import random
+ 
+ ''' INIT DICTIONARIES '''
+@@ -145,7 +145,7 @@ def savepickle(dictionary,outputname):
+     
+     """
+     pickle.dump( dictionary, open(outputname +".p", "wb" ) )
+-    print("Saved .pickle to: " + outputname +".p")
++    print(("Saved .pickle to: " + outputname +".p"))
+ 
+ def loadpickle(inputname):
+     """
+@@ -158,5 +158,5 @@ def loadpickle(inputname):
+     @return:  Dictionary containing start and end positions of ORFs.
+     """
+     dictionary= pickle.load( open(inputname ))#+".p" ) )
+-    print("Loaded "+inputname+" pickle!")
++    print(("Loaded "+inputname+" pickle!"))
+     return (dictionary)
+--- a/core/ReadAndWrite.py
++++ b/core/ReadAndWrite.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ '''
+ Created 2012
+ 
+@@ -61,12 +61,12 @@ def writeoverview(Ndic_G,aadic_G,Ndic_AR
+    
+     sum1 =0
+     sum2= 0
+-    for item in Ndic_G.keys():
++    for item in list(Ndic_G.keys()):
+         fobj.write(item +"\t"+str(Ndic_G[item])+"\t"+str(Ndic_AR[item])+"\t"+str(Ndic_G[item]-Ndic_AR[item])+"\n")
+         sum1 +=abs(Ndic_G[item]-Ndic_AR[item])
+     fobj.write(str(sum1)+"\n")
+     
+-    for item in aadic_G.keys():
++    for item in list(aadic_G.keys()):
+         fobj.write(item +"\t"+str(aadic_G[item])+"\t"+str(aadic_AR[item])+"\t"+str(aadic_G[item]-aadic_AR[item])+"\n")
+         sum2 +=abs(aadic_G[item]-aadic_AR[item])
+     fobj.write(str(sum2)+"\n")
+@@ -93,13 +93,13 @@ def nucleotide_dist_seq(seq,txt_file,sha
+    
+     if (shallwrite==1):
+         output_file=open(txt_file,'w')
+-        for item in Nndic.keys():
++        for item in list(Nndic.keys()):
+             Nndic[item]=Nndic[item]/float(s)
+             output_file.write(item + "\t" + str(Nndic[item])+"\n")
+             
+         output_file.close()
+     else:
+-         for item in Nndic.keys():
++         for item in list(Nndic.keys()):
+             Nndic[item]=Nndic[item]/float(s)
+     return (Nndic)    #N can be used for checking: should be the same number in real
+                                                                                     # and artificial chromosome
+@@ -129,13 +129,13 @@ def aa_dist_seq(seq,txt_file,shallwrite)
+     n=1
+     if (shallwrite==1):
+         output_file=open(txt_file,'w')
+-        for item in aadic.keys():
++        for item in list(aadic.keys()):
+             aadic[item]=aadic[item]/float(n)
+             output_file.write(item + "\t" + str(aadic[item])+"\n")
+             
+         output_file.close()
+     else:
+-        for item in aadic.keys():
++        for item in list(aadic.keys()):
+             aadic[item]=aadic[item]/float(n)
+             
+     return (aadic) 
+@@ -183,6 +183,6 @@ def gethammingdistance(original,artifici
+             
+         else:
+             not_hamming+=1
+-    print ("#hamming distance REF-ART\t"+ str(hamming))
+-    print ("avg. distance:\t" + str(len(original)/float(hamming)))
++    print(("#hamming distance REF-ART\t"+ str(hamming)))
++    print(("avg. distance:\t" + str(len(original)/float(hamming))))
+     print("###########################\r\n")
+\ No newline at end of file
+--- a/doc/core.AnalyseMapping-pysrc.html
++++ b/doc/core.AnalyseMapping-pysrc.html
+@@ -58,7 +58,7 @@
+ </table>
+ <h1 class="epydoc">Source Code for <a href="core.AnalyseMapping-module.html">Module core.AnalyseMapping</a></h1>
+ <pre class="py-src">
+-<a name="L1"></a><tt class="py-lineno">   1</tt>  <tt class="py-line"><tt class="py-comment">#!/usr/bin/env python</tt> </tt>
++<a name="L1"></a><tt class="py-lineno">   1</tt>  <tt class="py-line"><tt class="py-comment">#!/usr/bin/python3</tt> </tt>
+ <a name="L2"></a><tt class="py-lineno">   2</tt>  <tt class="py-line"><tt class="py-docstring">""" This script is used to evalulate the mapping results """</tt> </tt>
+ <a name="L3"></a><tt class="py-lineno">   3</tt>  <tt class="py-line"> </tt>
+ <a name="L4"></a><tt class="py-lineno">   4</tt>  <tt class="py-line"><tt class="py-keyword">import</tt> <tt class="py-name">HTSeq</tt> </tt>
+--- a/doc/core.FindOrfs-pysrc.html
++++ b/doc/core.FindOrfs-pysrc.html
+@@ -58,7 +58,7 @@
+ </table>
+ <h1 class="epydoc">Source Code for <a href="core.FindOrfs-module.html">Module core.FindOrfs</a></h1>
+ <pre class="py-src">
+-<a name="L1"></a><tt class="py-lineno">  1</tt>  <tt class="py-line"><tt class="py-comment">#!/usr/bin/env python</tt>
 </tt>
++<a name="L1"></a><tt class="py-lineno">  1</tt>  <tt class="py-line"><tt class="py-comment">#!/usr/bin/python3</tt>
 </tt>
+ <a name="L2"></a><tt class="py-lineno">  2</tt>  <tt class="py-line">
 </tt>
+ <a name="L3"></a><tt class="py-lineno">  3</tt>  <tt class="py-line">
 </tt>
+ <a name="L4"></a><tt class="py-lineno">  4</tt>  <tt class="py-line"><tt class="py-docstring">"""
</tt> </tt>
+--- a/doc/core.InsertMutations-pysrc.html
++++ b/doc/core.InsertMutations-pysrc.html
+@@ -58,7 +58,7 @@
+ </table>
+ <h1 class="epydoc">Source Code for <a href="core.InsertMutations-module.html">Module core.InsertMutations</a></h1>
+ <pre class="py-src">
+-<a name="L1"></a><tt class="py-lineno">  1</tt>  <tt class="py-line"><tt class="py-comment">#!/usr/bin/env python</tt>
 </tt>
++<a name="L1"></a><tt class="py-lineno">  1</tt>  <tt class="py-line"><tt class="py-comment">#!/usr/bin/python3</tt>
 </tt>
+ <a name="L2"></a><tt class="py-lineno">  2</tt>  <tt class="py-line"><tt class="py-docstring">"""
</tt> </tt>
+ <a name="L3"></a><tt class="py-lineno">  3</tt>  <tt class="py-line"><tt class="py-docstring">Created 2012
</tt> </tt>
+ <a name="L4"></a><tt class="py-lineno">  4</tt>  <tt class="py-line"><tt class="py-docstring">core Script for the generation of the artificial reference genome
</tt> </tt>
+--- a/doc/core.ReadAndWrite-pysrc.html
++++ b/doc/core.ReadAndWrite-pysrc.html
+@@ -58,7 +58,7 @@
+ </table>
+ <h1 class="epydoc">Source Code for <a href="core.ReadAndWrite-module.html">Module core.ReadAndWrite</a></h1>
+ <pre class="py-src">
+-<a name="L1"></a><tt class="py-lineno">  1</tt>  <tt class="py-line"><tt class="py-comment">#!/usr/bin/env python</tt>
 </tt>
++<a name="L1"></a><tt class="py-lineno">  1</tt>  <tt class="py-line"><tt class="py-comment">#!/usr/bin/python3</tt>
 </tt>
+ <a name="L2"></a><tt class="py-lineno">  2</tt>  <tt class="py-line"><tt class="py-docstring">'''
</tt> </tt>
+ <a name="L3"></a><tt class="py-lineno">  3</tt>  <tt class="py-line"><tt class="py-docstring">Created 2012
</tt> </tt>
+ <a name="L4"></a><tt class="py-lineno">  4</tt>  <tt class="py-line"><tt class="py-docstring">
</tt> </tt>


=====================================
debian/patches/series
=====================================
@@ -1 +1,2 @@
 spelling.patch
+2to3_new.patch


=====================================
debian/rules
=====================================
@@ -3,4 +3,4 @@
 # DH_VERBOSE := 1
 
 %:
-	dh $@ --with python2
+	dh $@ --with python3



View it on GitLab: https://salsa.debian.org/med-team/arden/compare/8d01d92569c89adbe2002dd069df55a944d28c09...a57e711a89edf2e9d77524c8551959a575a7e0bd


