[med-svn] [Git][med-team/hinge][master] 6 commits: Add myself to Uploaders

Andreas Tille gitlab at salsa.debian.org
Wed Sep 4 16:19:52 BST 2019



Andreas Tille pushed to branch master at Debian Med / hinge


Commits:
3a52b907 by Andreas Tille at 2019-09-04T14:50:32Z
Add myself to Uploaders

- - - - -
489d4d15 by Andreas Tille at 2019-09-04T15:05:05Z
Use 2to3 to port to Python3

- - - - -
8e4cd07d by Andreas Tille at 2019-09-04T15:05:25Z
debhelper-compat 12

- - - - -
c13f0c01 by Andreas Tille at 2019-09-04T15:05:29Z
Standards-Version: 4.4.0

- - - - -
713d7020 by Andreas Tille at 2019-09-04T15:17:14Z
There is no python3-configparser and it should not be needed

- - - - -
b21be0ee by Andreas Tille at 2019-09-04T15:18:56Z
TODO: Wait for python3-pbcore (see #938009)

- - - - -
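
A note on commit 713d7020: configparser has been part of the Python 3
standard library since 3.0, which is why the dependency is dropped below
rather than renamed — the old python-configparser package only backported
the Python 3 module to Python 2, so no python3-configparser package exists
or is needed. A quick illustrative check:

    python3 -c 'import configparser; print(configparser.__file__)'
    # resolves to the interpreter's own lib/python3.*/configparser.py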


5 changed files:

- debian/changelog
- − debian/compat
- debian/control
- + debian/patches/2to3.patch
- debian/patches/series
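
For reference, a patch like debian/patches/2to3.patch below is
conventionally produced by letting 2to3 rewrite the sources inside a quilt
session and recording the result. A minimal sketch — the exact invocation
is not recorded in the commit, and the README and notebook-metadata edits
in the patch were evidently made by hand:

    quilt new 2to3.patch
    quilt add scripts/*.py      # register files with quilt before modifying them
    2to3 -w -n scripts/*.py     # -w: rewrite in place, -n: skip .bak backups
    quilt refresh               # record the changes into debian/patches/2to3.patch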


Changes:

=====================================
debian/changelog
=====================================
@@ -1,3 +1,15 @@
+hinge (0.5.0-5) UNRELEASED; urgency=medium
+
+  * Afif removed himself from Uploaders
+  * Add myself to Uploaders
+  * Use 2to3 to port to Python3
+    Closes: #936704
+  * debhelper-compat 12
+  * Standards-Version: 4.4.0
+  TODO: Wait for python3-pbcore (see #938009)
+
+ -- Andreas Tille <tille at debian.org>  Wed, 04 Sep 2019 16:50:08 +0200
+
 hinge (0.5.0-4) unstable; urgency=medium
 
   * Team upload.


=====================================
debian/compat deleted
=====================================
@@ -1 +0,0 @@
-12


=====================================
debian/control
=====================================
@@ -1,8 +1,9 @@
 Source: hinge
 Maintainer: Debian Med Packaging Team <debian-med-packaging at lists.alioth.debian.org>
+Uploaders: Andreas Tille <tille at debian.org>
 Section: science
 Priority: optional
-Build-Depends: debhelper (>= 12~),
+Build-Depends: debhelper-compat (= 12),
                cmake,
                libspdlog-dev,
                libboost-dev,
@@ -12,18 +13,17 @@ Build-Depends: debhelper (>= 12~),
                pandoc,
 # Run-Time-Depends:
 # (to prevent building where not installable)
-               python,
+               python3,
                daligner,
                dazzdb,
                dascrubber,
-               python-numpy,
-               python-ujson,
-               python-configparser,
-               python-colormap,
-               python-pbcore,
-               python-networkx,
-               python-matplotlib
-Standards-Version: 4.3.0
+               python3-numpy,
+               python3-ujson,
+               python3-colormap,
+               python3-pbcore,
+               python3-networkx,
+               python3-matplotlib
+Standards-Version: 4.4.0
 Vcs-Browser: https://salsa.debian.org/med-team/hinge
 Vcs-Git: https://salsa.debian.org/med-team/hinge.git
 Homepage: https://github.com/HingeAssembler/HINGE
@@ -32,17 +32,16 @@ Package: hinge
 Architecture: any
 Depends: ${shlibs:Depends},
          ${misc:Depends},
-         python,
+         python3,
          daligner,
          dazzdb,
          dascrubber,
-         python-numpy,
-         python-ujson,
-         python-configparser,
-         python-colormap,
-         python-pbcore,
-         python-networkx,
-         python-matplotlib
+         python3-numpy,
+         python3-ujson,
+         python3-colormap,
+         python3-pbcore,
+         python3-networkx,
+         python3-matplotlib
 Description: long read genome assembler based on hinging
  HINGE is a genome assembler that seeks to achieve optimal repeat resolution
  by distinguishing repeats that can be resolved given the data from those that
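
Once python3-pbcore becomes available (the TODO in the changelog, #938009),
a routine pre-upload check for a dependency swap like the one above might
look like the following sketch — not taken from a build log, and the
.changes filename is a hypothetical glob:

    dpkg-checkbuilddeps              # are the new python3-* Build-Depends resolvable?
    dpkg-buildpackage -us -uc        # unsigned test build with the refreshed packaging
    lintian ../hinge_0.5.0-5_*.changes   # check against Standards-Version 4.4.0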


=====================================
debian/patches/2to3.patch
=====================================
@@ -0,0 +1,2955 @@
+Description: Use 2to3 to port to Python3
+Bug-Debian: https://bugs.debian.org/936704
+Author: Andreas Tille <tille at debian.org>
+Last-Update: Wed, 04 Sep 2019 16:50:08 +0200
+
+--- a/README.md
++++ b/README.md
+@@ -5,7 +5,7 @@ Software accompanying  "HINGE: Long-Read
+ 
+ - Paper: http://genome.cshlp.org/content/early/2017/03/20/gr.216465.116.abstract
+ 
+-- An ipython notebook to reproduce results in the paper can be found in this [repository](https://github.com/govinda-kamath/HINGE-analyses).
++- An ipython3 notebook to reproduce results in the paper can be found in this [repository](https://github.com/govinda-kamath/HINGE-analyses).
+ 
+ CI Status: ![image](https://travis-ci.org/HingeAssembler/HINGE.svg?branch=master)
+ 
+@@ -59,9 +59,9 @@ In the pipeline described above, several
+ - cmake 3.x
+ - libhdf5
+ - boost
+-- Python 2.7
++- Python 3
+ 
+-The following python packages are necessary:
++The following python3 packages are necessary:
+ - numpy
+ - ujson
+ - configparser
+--- a/scripts/Visualise_graph.py
++++ b/scripts/Visualise_graph.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ # In[1]:
+ 
+@@ -7,7 +7,7 @@ import sys
+ 
+ # In[2]:
+ if len(sys.argv) >2:
+-    print "wrong usage.\n python Visualise_graph.py graph_edge_file [list_of_hinges]"
++    print("wrong usage.\n python3 Visualise_graph.py graph_edge_file [list_of_hinges]")
+ 
+ vertices=set()
+ with open (sys.argv[1]) as f:
+@@ -40,7 +40,7 @@ for vertex in vertices:
+ with open (sys.argv[1]) as f:
+     for lines in f:
+         lines1=lines.split()
+-        print lines1
++        print(lines1)
+         if len(lines1) < 5:
+             continue
+         #print lines1
+--- a/scripts/add_groundtruth.py
++++ b/scripts/add_groundtruth.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import networkx as nx
+ import sys
+@@ -13,20 +13,20 @@ except:
+ 
+ g = nx.read_graphml(graphml_file)
+ 
+-print nx.info(g)
++print(nx.info(g))
+ 
+ mapping_dict = {}
+ 
+ with open(groundtruth_file,'r') as f:
+     for num, line in enumerate(f.readlines()):
+-        m = map(int, line.strip().split())
++        m = list(map(int, line.strip().split()))
+         # mapping_dict[num] = [min(m), max(m), int(m[0]>m[1])]
+         mapping_dict[num] = [m[2],m[3],m[1]]
+         
+ #print mapping_dict
+ 
+ max_len=0
+-for num in mapping_dict.keys():
++for num in list(mapping_dict.keys()):
+     max_len=max(max_len,len(str(m[3])))
+ 
+ 
+--- a/scripts/add_groundtruth_json.py
++++ b/scripts/add_groundtruth_json.py
+@@ -8,7 +8,7 @@ graphml_file_w_groundtruth = sys.argv[3]
+ 
+ g = nx.read_graphml(graphml_file)
+ 
+-print nx.info(g)
++print(nx.info(g))
+ 
+ with open(groundtruth_file) as f:
+     read_dict=json.load(f)
+@@ -20,7 +20,7 @@ for read  in read_dict:
+             max_len=max(max_len,len(str(aln_info[0])))
+             max_len=max(max_len,len(str(aln_info[1])))
+         except:
+-            print 
++            print() 
+             raise
+ 
+ pow_mov=10**(max_len+1)
+--- a/scripts/clip_ends.py
++++ b/scripts/clip_ends.py
+@@ -8,7 +8,7 @@ out_file=sys.argv[2]+'.clipped'
+ 
+ with open(ground_truth) as f:
+     for line in f:
+-        m = map(int, line.strip().split())
++        m = list(map(int, line.strip().split()))
+         chr_lengths.setdefault(m[1],0)
+         chr_lengths[m[1]]= max(chr_lengths[m[1]], max(m[2],m[3]))
+ 
+@@ -18,7 +18,7 @@ reads_to_kill=set()
+ 
+ with open(ground_truth) as f:
+     for line in f:
+-        m = map(int, line.strip().split())
++        m = list(map(int, line.strip().split()))
+         read_left=min(m[2],m[3])
+         read_right=max(m[2],m[3])
+         read_chr=m[1]
+--- a/scripts/compute_n50_from_draft.py
++++ b/scripts/compute_n50_from_draft.py
+@@ -119,9 +119,9 @@ for nctc_name in os.listdir(fullpath):
+ 
+ 	data_dict[nctc_name] = [hinging_n50,hinging_comp_n50,hgap_n50]
+ 
+-	print count
+-	print count1
+-	print count2
++	print(count)
++	print(count1)
++	print(count2)
+ 
+ 
+ 
+--- a/scripts/condense_graph.py
++++ b/scripts/condense_graph.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import networkx as nx
+ import sys
+@@ -90,7 +90,7 @@ def run(filename, n_iter):
+     
+     f=open(filename)
+     line1=f.readline()
+-    print line1
++    print(line1)
+     f.close()
+     if len(line1.split()) !=2:
+ 	g=input1(filename)
+@@ -99,7 +99,7 @@ def run(filename, n_iter):
+     
+     
+     
+-    print nx.info(g)
++    print(nx.info(g))
+     
+     
+     for node in g.nodes():
+@@ -107,37 +107,37 @@ def run(filename, n_iter):
+         g.node[node]['read'] = node
+         
+         
+-    degree_sequence=sorted(g.degree().values(),reverse=True)
+-    print Counter(degree_sequence)
++    degree_sequence=sorted(list(g.degree().values()),reverse=True)
++    print(Counter(degree_sequence))
+     for i in range(n_iter):
+         for node in g.nodes():
+             if g.in_degree(node) == 0:
+                 g.remove_node(node)
+     
+-        print nx.info(g)
+-        degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-        print Counter(degree_sequence)
++        print(nx.info(g))
++        degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++        print(Counter(degree_sequence))
+     
+-    degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-    print Counter(degree_sequence)
++    degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++    print(Counter(degree_sequence))
+     
+     
+     g.graph['aval'] = 1000000000
+     
+     for i in range(5):
+         merge_simple_path(g)
+-        degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-        print Counter(degree_sequence)
++        degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++        print(Counter(degree_sequence))
+     
+     try:
+         import ujson
+         mapping = ujson.load(open(filename.split('.')[0]+'.mapping.json'))
+         
+-        print 'get mapping'
++        print('get mapping')
+         
+         for node in g.nodes():
+             #print node
+-            if mapping.has_key(node):
++            if node in mapping:
+                 g.node[node]['aln_start'] = mapping[node][0]
+                 g.node[node]['aln_end'] = mapping[node][1]
+                 g.node[node]['aln_strand'] = mapping[node][2]
+@@ -151,8 +151,8 @@ def run(filename, n_iter):
+     
+     nx.write_graphml(g, filename.split('.')[0]+'_condensed.graphml')
+     
+-    print nx.number_weakly_connected_components(g)
+-    print nx.number_strongly_connected_components(g)
++    print(nx.number_weakly_connected_components(g))
++    print(nx.number_strongly_connected_components(g))
+     
+     
+ filename = sys.argv[1]
+--- a/scripts/condense_graph_and_annotate.py
++++ b/scripts/condense_graph_and_annotate.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import networkx as nx
+ import sys
+@@ -94,21 +94,21 @@ def run(filename, gt_file, n_iter):
+     
+     f=open(filename)
+     line1=f.readline()
+-    print line1
++    print(line1)
+     f.close()
+     if len(line1.split()) !=2:
+ 	   g=input1(filename)
+     else:
+ 	   g=input2(filename)
+     
+-    print str(len(g.nodes())) + " vertices in graph to begin with."
++    print(str(len(g.nodes())) + " vertices in graph to begin with.")
+ 
+     connected_components=[x for x in nx.weakly_connected_components(g)]
+     for component in connected_components:
+         if len(component) < 10:
+             g.remove_nodes_from(component)
+ 
+-    print str(len(g.nodes())) + " vertices in graph after removing components of at most "+str(LENGTH_THRESHOLD)+ " nodes."
++    print(str(len(g.nodes())) + " vertices in graph after removing components of at most "+str(LENGTH_THRESHOLD)+ " nodes.")
+ 
+     read_to_chr_map={}
+ 
+@@ -121,7 +121,7 @@ def run(filename, gt_file, n_iter):
+     else:
+         with open(gt_file,'r') as f:
+             for num, line in enumerate(f.readlines()):
+-                m = map(int, line.strip().split())
++                m = list(map(int, line.strip().split()))
+                 read_to_chr_map[m[0]]=m[1]   
+     
+     nodes_seen=set([x.split("_")[0] for x in g.nodes()])
+@@ -130,7 +130,7 @@ def run(filename, gt_file, n_iter):
+         read_to_chr_map.setdefault(int(node),-1)
+ 
+     #print nx.info(g)
+-    print "Num reads read : "+str(len(read_to_chr_map))
++    print("Num reads read : "+str(len(read_to_chr_map)))
+     
+     for node in g.nodes():
+         nodeid=int(node.split('_')[0])
+@@ -140,27 +140,27 @@ def run(filename, gt_file, n_iter):
+         #print str(nodeid), node,g.node[node]['chr']
+         
+         
+-    degree_sequence=sorted(g.degree().values(),reverse=True)
+-    print Counter(degree_sequence)
++    degree_sequence=sorted(list(g.degree().values()),reverse=True)
++    print(Counter(degree_sequence))
+     for i in range(n_iter):
+         for node in g.nodes():
+             if g.in_degree(node) == 0:
+                 g.remove_node(node)
+     
+-        print nx.info(g)
+-        degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-        print Counter(degree_sequence)
++        print(nx.info(g))
++        degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++        print(Counter(degree_sequence))
+     
+-    degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-    print Counter(degree_sequence)
++    degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++    print(Counter(degree_sequence))
+     
+     
+     g.graph['aval'] = 1000000000
+     
+     for i in range(5):
+         merge_simple_path(g)
+-        degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-        print Counter(degree_sequence)
++        degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++        print(Counter(degree_sequence))
+     
+     h=nx.DiGraph()
+     h.add_nodes_from(g)
+@@ -168,9 +168,9 @@ def run(filename, gt_file, n_iter):
+     for node in g.nodes():
+         reads_in_node=[int(x.split('_')[0]) for x in g.node[node]['read'].split(':')]
+         try:
+-            chr_in_node=map(lambda x: read_to_chr_map[x], reads_in_node)
++            chr_in_node=[read_to_chr_map[x] for x in reads_in_node]
+         except:
+-            print reads_in_node,g.node[node]['read']
++            print(reads_in_node,g.node[node]['read'])
+             return
+         chr_in_node_set=set(chr_in_node)
+         if len(chr_in_node_set) ==1:
+@@ -187,8 +187,8 @@ def run(filename, gt_file, n_iter):
+     
+     nx.write_graphml(h, filename.split('.')[0]+'_condensed_annotated.graphml')
+     
+-    print nx.number_weakly_connected_components(h)
+-    print nx.number_strongly_connected_components(h)
++    print(nx.number_weakly_connected_components(h))
++    print(nx.number_strongly_connected_components(h))
+     
+ #
+ 
+--- a/scripts/condense_graph_annotate_clip_ends.py
++++ b/scripts/condense_graph_annotate_clip_ends.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import networkx as nx
+ import sys
+@@ -93,7 +93,7 @@ def run(filename, gt_file, n_iter):
+     
+     f=open(filename)
+     line1=f.readline()
+-    print line1
++    print(line1)
+     f.close()
+     if len(line1.split()) !=2:
+ 	   g=input1(filename)
+@@ -110,7 +110,7 @@ def run(filename, gt_file, n_iter):
+ 
+     with open(gt_file,'r') as f:
+         for num, line in enumerate(f.readlines()):
+-            m = map(int, line.strip().split())
++            m = list(map(int, line.strip().split()))
+             # mapping_dict[num] = [min(m), max(m), int(m[0]>m[1])]
+             read_to_chr_map[m[0]]= str(m[1])
+             mapping_dict[num] = m[1]
+@@ -119,10 +119,10 @@ def run(filename, gt_file, n_iter):
+             chr_lengths[m[1]] = max(chr_lengths[m[1]],max(m[2],m[3]))
+ 
+ 
+-    print nx.info(g)
++    print(nx.info(g))
+     
+-    print "Chromosome lenghts:"
+-    print chr_lengths
++    print("Chromosome lenghts:")
++    print(chr_lengths)
+ 
+     margin = 10000
+ 
+@@ -130,7 +130,7 @@ def run(filename, gt_file, n_iter):
+ 
+ 
+     #print nx.info(g)
+-    print "Num reads read : "+str(len(read_to_chr_map))
++    print("Num reads read : "+str(len(read_to_chr_map)))
+ 
+     for cur_edge in g.edges():
+         node0=int(cur_edge[0].split('_')[0])
+@@ -162,30 +162,30 @@ def run(filename, gt_file, n_iter):
+         g.node[node]['read'] = node
+         #print str(nodeid), node,g.node[node]['chr']
+ 
+-    print "Deleted nodes: "+str(del_count)
++    print("Deleted nodes: "+str(del_count))
+         
+         
+-    degree_sequence=sorted(g.degree().values(),reverse=True)
+-    print Counter(degree_sequence)
++    degree_sequence=sorted(list(g.degree().values()),reverse=True)
++    print(Counter(degree_sequence))
+     for i in range(n_iter):
+         for node in g.nodes():
+             if g.in_degree(node) == 0:
+                 g.remove_node(node)
+     
+-        print nx.info(g)
+-        degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-        print Counter(degree_sequence)
++        print(nx.info(g))
++        degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++        print(Counter(degree_sequence))
+     
+-    degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-    print Counter(degree_sequence)
++    degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++    print(Counter(degree_sequence))
+     
+     
+     g.graph['aval'] = 1000000000
+     
+     for i in range(5):
+         merge_simple_path(g)
+-        degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-        print Counter(degree_sequence)
++        degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++        print(Counter(degree_sequence))
+     
+     h=nx.DiGraph()
+     h.add_nodes_from(g)
+@@ -200,9 +200,9 @@ def run(filename, gt_file, n_iter):
+     for node in g.nodes():
+         reads_in_node=[int(x.split('_')[0]) for x in g.node[node]['read'].split(':')]
+         try:
+-            chr_in_node=map(lambda x: read_to_chr_map[x], reads_in_node)
++            chr_in_node=[read_to_chr_map[x] for x in reads_in_node]
+         except:
+-            print reads_in_node,g.node[node]['read']
++            print(reads_in_node,g.node[node]['read'])
+             return
+         chr_in_node_set=set(chr_in_node)
+         if len(chr_in_node_set) ==1:
+@@ -221,11 +221,11 @@ def run(filename, gt_file, n_iter):
+         import ujson
+         mapping = ujson.load(open(filename.split('.')[0]+'.mapping.json'))
+         
+-        print 'get mapping'
++        print('get mapping')
+         
+         for node in h.nodes():
+             #print node
+-            if mapping.has_key(node):
++            if node in mapping:
+                 h.node[node]['aln_start'] = mapping[node][0]
+                 h.node[node]['aln_end'] = mapping[node][1]
+                 h.node[node]['aln_strand'] = mapping[node][2]
+@@ -242,8 +242,8 @@ def run(filename, gt_file, n_iter):
+     nx.write_graphml(h, filename.split('.')[0]+'_condensed_annotated.graphml')
+     nx.write_graphml(g, filename.split('.')[0]+'_G_condensed_annotated.graphml')
+     
+-    print nx.number_weakly_connected_components(h)
+-    print nx.number_strongly_connected_components(h)
++    print(nx.number_weakly_connected_components(h))
++    print(nx.number_strongly_connected_components(h))
+     
+ #
+ 
+--- a/scripts/condense_graph_create_gfa_compute_n50.py
++++ b/scripts/condense_graph_create_gfa_compute_n50.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import networkx as nx
+ import sys
+@@ -7,7 +7,7 @@ from collections import Counter
+ 
+ # This script condenses the graph down, creates a gfa with for the condensed graph, and computes the contig N50
+ 
+-# python condense_graph_create_gfa_compute_n50.py ecoli.edges
++# python3 condense_graph_create_gfa_compute_n50.py ecoli.edges
+ 
+ # The conditions in lines 23 and 24 are meant to prevent nodes corresponding to different strands to be merged 
+ # (and should be commented out if this is not desired, or if a json is not available)
+@@ -46,8 +46,8 @@ def merge_path(g,in_node,node,out_node):
+ 
+ 
+     if overlap1 > min(length0,length1):
+-        print "problem here:"
+-        print overlap1, length0, length1
++        print("problem here:")
++        print(overlap1, length0, length1)
+ 
+ 
+     g.add_node(str(node_id),length = length0+length1+length2 - overlap1 - overlap2, aln_strand = g.node[node]['aln_strand'])
+@@ -107,7 +107,7 @@ def de_clip(filename, n_iter):
+     # count = 0
+ 
+     with open(filename,'r') as f:
+-        for line in f.xreadlines():
++        for line in f:
+             l = line.strip().split()
+             #print l2
+             g.add_edge(l[0],l[1],overlap=int(l[2])/2)
+@@ -126,7 +126,7 @@ def de_clip(filename, n_iter):
+             g.node[l[1]]['length'] = node1end - node1start
+ 
+     
+-    print nx.info(g)
++    print(nx.info(g))
+ 
+     try:
+         import ujson
+@@ -134,11 +134,11 @@ def de_clip(filename, n_iter):
+         
+         # print mapping
+ 
+-        print 'get mapping'
++        print('get mapping')
+         
+         for node in g.nodes():
+             #print node
+-            if mapping.has_key(node):
++            if node in mapping:
+ 
+                 # alnstart = int(mapping[node][0])
+                 # alnend = int(mapping[node][1])
+@@ -164,34 +164,34 @@ def de_clip(filename, n_iter):
+ 
+ 
+ 
+-    degree_sequence=sorted(g.degree().values(),reverse=True)
+-    print Counter(degree_sequence)
++    degree_sequence=sorted(list(g.degree().values()),reverse=True)
++    print(Counter(degree_sequence))
+     for i in range(n_iter):
+         for node in g.nodes():
+             if g.degree(node) < 2:
+                 g.remove_node(node)
+ 
+-        print nx.info(g)
+-        degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-        print Counter(degree_sequence)
++        print(nx.info(g))
++        degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++        print(Counter(degree_sequence))
+ 
+-    degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-    print Counter(degree_sequence)
++    degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++    print(Counter(degree_sequence))
+     
+     
+     g.graph['aval'] = 1000000000
+     
+     for i in range(5):
+         merge_simple_path(g)
+-        degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-        print Counter(degree_sequence)
++        degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++        print(Counter(degree_sequence))
+     
+        
+     
+     nx.write_graphml(g, filename.split('.')[0]+'.graphml')
+     
+-    print nx.number_weakly_connected_components(g)
+-    print nx.number_strongly_connected_components(g)
++    print(nx.number_weakly_connected_components(g))
++    print(nx.number_strongly_connected_components(g))
+ 
+ 
+     # Next we create the gfa file
+@@ -223,7 +223,7 @@ def de_clip(filename, n_iter):
+     for cur_node in g.nodes():
+         contig_lengths.append(g.node[cur_node]['length'])
+ 
+-    print "N50 = "+str(comp_n50(contig_lengths))
++    print("N50 = "+str(comp_n50(contig_lengths)))
+ 
+ 
+ 
+--- a/scripts/condense_graph_with_gt.py
++++ b/scripts/condense_graph_with_gt.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import networkx as nx
+ import sys
+@@ -96,7 +96,7 @@ def run(filename, gt_file, n_iter):
+     
+     f=open(filename)
+     line1=f.readline()
+-    print line1
++    print(line1)
+     f.close()
+     if len(line1.split()) !=2:
+ 	   g=input1(filename)
+@@ -107,11 +107,11 @@ def run(filename, gt_file, n_iter):
+ 
+     with open(gt_file,'r') as f:
+         for num, line in enumerate(f.readlines()):
+-            m = map(int, line.strip().split())
++            m = list(map(int, line.strip().split()))
+             # mapping_dict[num] = [min(m), max(m), int(m[0]>m[1])]
+             mapping_dict[num] = m[1]    
+     
+-    print nx.info(g)
++    print(nx.info(g))
+     
+     
+     for node in g.nodes():
+@@ -123,27 +123,27 @@ def run(filename, gt_file, n_iter):
+         #print str(nodeid), node,g.node[node]['chr']
+         
+         
+-    degree_sequence=sorted(g.degree().values(),reverse=True)
+-    print Counter(degree_sequence)
++    degree_sequence=sorted(list(g.degree().values()),reverse=True)
++    print(Counter(degree_sequence))
+     for i in range(n_iter):
+         for node in g.nodes():
+             if g.in_degree(node) == 0:
+                 g.remove_node(node)
+     
+-        print nx.info(g)
+-        degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-        print Counter(degree_sequence)
++        print(nx.info(g))
++        degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++        print(Counter(degree_sequence))
+     
+-    degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-    print Counter(degree_sequence)
++    degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++    print(Counter(degree_sequence))
+     
+     
+     g.graph['aval'] = 1000000000
+     
+     for i in range(5):
+         merge_simple_path(g)
+-        degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-        print Counter(degree_sequence)
++        degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++        print(Counter(degree_sequence))
+     
+     h=nx.DiGraph()
+     h.add_nodes_from(g)
+@@ -161,11 +161,11 @@ def run(filename, gt_file, n_iter):
+         import ujson
+         mapping = ujson.load(open(filename.split('.')[0]+'.mapping.json'))
+         
+-        print 'get mapping'
++        print('get mapping')
+         
+         for node in h.nodes():
+             #print node
+-            if mapping.has_key(node):
++            if node in mapping:
+                 h.node[node]['aln_start'] = mapping[node][0]
+                 h.node[node]['aln_end'] = mapping[node][1]
+                 h.node[node]['aln_strand'] = mapping[node][2]
+@@ -181,8 +181,8 @@ def run(filename, gt_file, n_iter):
+     
+     nx.write_graphml(h, filename.split('.')[0]+'_condensed.graphml')
+     
+-    print nx.number_weakly_connected_components(h)
+-    print nx.number_strongly_connected_components(h)
++    print(nx.number_weakly_connected_components(h))
++    print(nx.number_strongly_connected_components(h))
+     
+ #
+ 
+--- a/scripts/connected.py
++++ b/scripts/connected.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import networkx as nx
+ import sys
+@@ -13,7 +13,7 @@ def longest_path(G):
+             dist[node] = max(pairs)
+         else:
+             dist[node] = (0, node)
+-    node,(length,_)  = max(dist.items(), key=lambda x:x[1])
++    node,(length,_)  = max(list(dist.items()), key=lambda x:x[1])
+     path = []
+     while length > 0:
+         path.append(node)
+@@ -27,25 +27,25 @@ filename = sys.argv[1]
+ g = nx.DiGraph()
+ 
+ with open(filename,'r') as f:
+-    for line in f.xreadlines():
++    for line in f:
+         g.add_edge(*(line.strip().split('->')))
+ 
+ 
+-print nx.info(g)
+-degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-print Counter(degree_sequence)
++print(nx.info(g))
++degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++print(Counter(degree_sequence))
+ 
+ for i in range(15):
+     for node in g.nodes():
+         if g.in_degree(node) == 0:
+             g.remove_node(node)
+ 
+-    print nx.info(g)
++    print(nx.info(g))
+ 
+ #print nx.is_directed_acyclic_graph(g)
+ #print list(nx.simple_cycles(g))
+-degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-print Counter(degree_sequence)
++degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++print(Counter(degree_sequence))
+ 
+ #print nx.diameter(g)
+ 
+@@ -60,8 +60,8 @@ def rev(string):
+     #print edge
+     #print rev(edge[1]), rev(edge[0])
+ 
+-print nx.info(g)
+-print [len(item) for item in nx.weakly_connected_components(g)]
++print(nx.info(g))
++print([len(item) for item in nx.weakly_connected_components(g)])
+ 
+ 
+ nx.write_graphml(g, filename.split('.')[0]+'.graphml')
+--- a/scripts/correct_head.py
++++ b/scripts/correct_head.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import sys, os
+ from pbcore.io import FastaIO
+--- a/scripts/create_bandage_file.py
++++ b/scripts/create_bandage_file.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import sys 
+ import os
+--- a/scripts/create_hgraph.py
++++ b/scripts/create_hgraph.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import networkx as nx
+ import random
+@@ -42,8 +42,8 @@ def read_graph(filename,gt_file):
+     
+     nx.write_graphml(g, filename.split('.')[0]+'_hgraph.graphml')
+     
+-    print nx.number_weakly_connected_components(g)
+-    print nx.number_strongly_connected_components(g)
++    print(nx.number_weakly_connected_components(g))
++    print(nx.number_strongly_connected_components(g))
+   
+ 
+ if __name__ == "__main__":   
+--- a/scripts/create_hgraph_nogt.py
++++ b/scripts/create_hgraph_nogt.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python
++#!/usr/bin/python3
+ 
+ import networkx as nx
+ import random
+@@ -28,8 +28,8 @@ def read_graph(filename):
+     
+     nx.write_graphml(g, filename.split('.')[0]+'_hgraph.graphml')
+     
+-    print nx.number_weakly_connected_components(g)
+-    print nx.number_strongly_connected_components(g)
++    print(nx.number_weakly_connected_components(g))
++    print(nx.number_strongly_connected_components(g))
+   
+ 
+ if __name__ == "__main__":   
+--- a/scripts/download_NCTC_pipeline.py
++++ b/scripts/download_NCTC_pipeline.py
+@@ -20,10 +20,10 @@ dest_dir = base_dir+bacterium_of_interes
+ 
+ os.system('mkdir -p '+dest_dir)
+ 
+-for run, file_list in bact_dict[bacterium_of_interest]['file_paths'].items():
++for run, file_list in list(bact_dict[bacterium_of_interest]['file_paths'].items()):
+     for file_path in  file_list:
+         cmd = cmd_base+file_path+' '+dest_dir
+-        print cmd
++        print(cmd)
+         os.system(cmd)
+ 
+ dest_fasta_name = dest_dir+bact_name
+@@ -35,16 +35,16 @@ bax_files = [x for x in os.listdir(dest_
+ for bax_file in bax_files:
+ 	dextract_cmd +=  " " + dest_dir+bax_file
+ 
+-print dextract_cmd
++print(dextract_cmd)
+ 
+ try:
+     subprocess.check_output(dextract_cmd.split())
+-    print 'dextract done. deleting .bax.h5 files'
++    print('dextract done. deleting .bax.h5 files')
+     os.system('rm '+dest_dir+'*.bax.h5')
+-    print 'removing .quiva files'
++    print('removing .quiva files')
+     os.system('rm '+dest_dir+'*.quiva')
+ except:
+-    print 'error'
++    print('error')
+ 
+ 
+ 
+--- a/scripts/draft_assembly.py
++++ b/scripts/draft_assembly.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import networkx as nx
+ import sys
+@@ -8,16 +8,16 @@ def linearize(filename):
+     graph_name = filename.split('.')[0]+'.graphml'
+     g = nx.read_graphml(graph_name)
+     
+-    print nx.info(g)
++    print(nx.info(g))
+     
+     # get first strong connected component
+     
+     con = list(nx.strongly_connected_component_subgraphs(g))
+     
+     con.sort(key = lambda x:len(x), reverse = True)
+-    print [len(item) for item in con]
++    print([len(item) for item in con])
+     
+-    print nx.info(con[0])
++    print(nx.info(con[0]))
+     
+     dfs_edges = list(nx.dfs_edges(con[0]))
+ 
+--- a/scripts/draft_assembly_not_perfect.py
++++ b/scripts/draft_assembly_not_perfect.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import networkx as nx
+ import sys
+@@ -8,16 +8,16 @@ def linearize(filename):
+     graph_name = filename.split('.')[0]+'.graphml'
+     g = nx.read_graphml(graph_name)
+     
+-    print nx.info(g)
++    print(nx.info(g))
+     
+     # get first strong connected component
+     
+     con = list(nx.strongly_connected_component_subgraphs(g))
+     
+     con.sort(key = lambda x:len(x), reverse = True)
+-    print [len(item) for item in con]
++    print([len(item) for item in con])
+     
+-    print nx.info(con[0])
++    print(nx.info(con[0]))
+     
+     dfs_edges = list(nx.dfs_edges(con[0]))
+ 
+--- a/scripts/draw2.py
++++ b/scripts/draw2.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ import numpy as np
+ import matplotlib
+ matplotlib.use('Agg')
+@@ -17,9 +17,9 @@ Qvd = ['a', 'b', 'c', 'd', 'e', 'f', 'g'
+     'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N',
+     'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X',
+     'Y']
+-Qvv = range(len(Qvd))[::-1]
++Qvv = list(range(len(Qvd)))[::-1]
+ 
+-QVdict = dict(zip(Qvd,Qvv))
++QVdict = dict(list(zip(Qvd,Qvv)))
+ 
+ 
+ dbname = sys.argv[1]
+@@ -51,7 +51,7 @@ if len(sys.argv) < 7:
+ else:
+     rev = int(sys.argv[6])
+ 
+-print 'rev', rev
++print('rev', rev)
+ 
+ for i in range(len(qv)):
+     qx.append(i*ts)
+@@ -80,7 +80,7 @@ for item in aln:
+         aln_group.append(item)
+     
+ num = len(alns)
+-print len(aln), len(alns)
++print(len(aln), len(alns))
+ 
+ #print [len(item) for item in alns]
+ #print [item[0:3] for item in aln]
+--- a/scripts/draw2_pileup.py
++++ b/scripts/draw2_pileup.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ import numpy as np
+ import matplotlib
+ matplotlib.use('Agg')
+@@ -30,13 +30,13 @@ path = '/data/pacbio_assembly/AwesomeAss
+ aln = []
+ for i,e in enumerate(rst):
+     n = e[0]
+-    print i,n
++    print(i,n)
+     li = list(util.get_alignments_mapping(path+'ecoli', path + 'ecoli.ref', path +'ecoli.ecoli.ref.las', [n]))
+     if (len(li) > 0):
+         item = sorted(li, key=lambda x:x[4] - x[3], reverse = True)[0]
+         aln.append(item)
+ 
+-print aln[0:20]
++print(aln[0:20])
+ 
+ #aln.sort(key = lambda x:x[2])
+ 
+@@ -54,7 +54,7 @@ for item in aln:
+         aln_group.append(item)
+ 
+ num = len(alns)
+-print len(aln), len(alns)
++print(len(aln), len(alns))
+ 
+ #print [len(item) for item in alns]
+ #print [item[0:3] for item in aln]
+--- a/scripts/draw2_pileup_region.py
++++ b/scripts/draw2_pileup_region.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ import numpy as np
+ import matplotlib
+ matplotlib.use('Agg')
+@@ -28,14 +28,14 @@ with open('ecoli.linear.edges') as f:
+ 
+         bb.append(int(e))
+ 
+-print bb
++print(bb)
+ 
+ bb = set(bb)
+ 
+ 
+ for i,item in enumerate(util.get_alignments_mapping2(path+'draft', path +'ecoli', path +'draft.ecoli.las')):
+     if i%2000 == 0:
+-        print i, item
++        print(i, item)
+ 
+     if item[3] >= left and item[4] <= right:
+         aln.append(item)
+@@ -47,7 +47,7 @@ for i,item in enumerate(util.get_alignme
+ 
+ 
+ 
+-print 'number:',len(aln)
++print('number:',len(aln))
+ aln.sort(key = lambda x:x[2])
+ 
+ alns = []
+@@ -65,7 +65,7 @@ for item in aln:
+ 
+ num = len(alns)
+ 
+-print len(aln), len(alns)
++print(len(aln), len(alns))
+ 
+ alns.sort(key = lambda x:min([item[3] for item in x]))
+ 
+--- a/scripts/draw2_pileup_w_repeat.py
++++ b/scripts/draw2_pileup_w_repeat.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ import numpy as np
+ import matplotlib
+ matplotlib.use('Agg')
+@@ -22,10 +22,10 @@ with open(n) as f:
+ rep = {}
+ with open(path + 'ecoli.repeat.txt') as f:
+     for line in f:
+-        l = map(int, line.strip().split())
++        l = list(map(int, line.strip().split()))
+         if len(l) > 1:
+             for i in range((len(l) - 1) / 2):
+-                if not rep.has_key(l[0]):
++                if l[0] not in rep:
+                     rep[l[0]] = []
+                 rep[l[0]].append((l[2*i+1], l[2*i+2]))
+                 
+@@ -34,14 +34,14 @@ with open(path + 'ecoli.repeat.txt') as
+ aln = []
+ for i,e in enumerate(rst):
+     n = e
+-    print i,n
++    print(i,n)
+     li = list(util.get_alignments_mapping(path+'ecoli', path + 'ecoli.ref', path +'ecoli.ecoli.ref.las', [n]))
+     if (len(li) > 0):
+         item = sorted(li, key=lambda x:x[4] - x[3], reverse = True)
+         for l in item:
+             aln.append(l)
+ 
+-print aln[0:20]
++print(aln[0:20])
+ 
+ 
+ #aln.sort(key = lambda x:x[2])
+@@ -61,7 +61,7 @@ for item in aln:
+         aln_group.append(item)
+ 
+ num = len(alns)
+-print len(aln), len(alns)
++print(len(aln), len(alns))
+ 
+ #print [len(item) for item in alns]
+ #print [item[0:3] for item in aln]
+@@ -134,7 +134,7 @@ for i,aln_group in enumerate(alns):
+         #    plt.gca().add_patch(polygon2)
+             
+             
+-        if rep.has_key(rid):
++        if rid in rep:
+             for item in rep[rid]:
+                 s = item[0]
+                 e = item[1]
+--- a/scripts/draw_pileup_region.py
++++ b/scripts/draw_pileup_region.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ import numpy as np
+ import matplotlib
+ matplotlib.use('Agg')
+@@ -39,7 +39,7 @@ aln = []
+ 
+ for i,item in enumerate(util.get_alignments_mapping3(ref, read, las, contig)):
+     if i%2000 == 0:
+-        print i, item
++        print(i, item)
+ 
+     if item[3] >= left and item[4] <= right and item[4] - item[3] > length_th:
+         aln.append(item)
+@@ -53,7 +53,7 @@ for item in aln:
+ covx = np.arange(left, right)
+ 
+ 
+-print 'number:',len(aln)
++print('number:',len(aln))
+ aln.sort(key = lambda x:x[2])
+ 
+ alns = []
+@@ -72,7 +72,7 @@ for item in aln:
+ #num = len(alns)
+ num = len(aln)
+ 
+-print len(aln), len(alns)
++print(len(aln), len(alns))
+ 
+ alns.sort(key = lambda x:min([item[3] for item in x]))
+ 
+--- a/scripts/draw_pileup_region_find_bridges.py
++++ b/scripts/draw_pileup_region_find_bridges.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ import numpy as np
+ import matplotlib
+ matplotlib.use('Agg')
+@@ -57,7 +57,7 @@ covx = np.arange(left, right)
+ #for i in range(0, len(covx), 10):
+ #    print covx[i], covy[i]
+ 
+-print 'number:',len(aln)
++print('number:',len(aln))
+ aln.sort(key = lambda x:x[2])
+ 
+ alns = []
+@@ -75,7 +75,7 @@ for item in aln:
+ 
+ num = len(alns)
+ 
+-print len(aln), len(alns)
++print(len(aln), len(alns))
+ 
+ alns.sort(key = lambda x:min([item[3] for item in x]))
+ 
+@@ -111,7 +111,7 @@ for i,aln_group in enumerate(alns):
+         abpos = item[3]
+         aepos = item[4]
+         if abpos < bridge_begin+200 and aepos > bridge_end-200:
+-            print item
++            print(item)
+         bbpos = item[6]
+         bepos = item[7]
+         blen = item[8]
+--- a/scripts/fasta_to_fastq.py
++++ b/scripts/fasta_to_fastq.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ """
+ Convert FASTA to FASTQ file with a static
+ 
+--- a/scripts/get_NCTC_json.py
++++ b/scripts/get_NCTC_json.py
+@@ -1,8 +1,8 @@
+-import urllib2
++import urllib.request, urllib.error, urllib.parse
+ from bs4 import BeautifulSoup
+ import json
+ 
+-response = urllib2.urlopen('http://www.sanger.ac.uk/resources/downloads/bacteria/nctc/')
++response = urllib.request.urlopen('http://www.sanger.ac.uk/resources/downloads/bacteria/nctc/')
+ html = response.read()
+ 
+ soup=BeautifulSoup(html)
+@@ -13,9 +13,9 @@ headings = [th.get_text() for th in tabl
+ dataset={}
+ for row in table.find_all("tr")[1:]:
+     #
+-    print row 
++    print(row) 
+     row1=  [td.get_text() for td in row.find_all("td")]
+-    print row1
++    print(row1)
+     metadata={}
+     cellname=''
+     for i, td in enumerate(row.find_all("td")):
+@@ -26,7 +26,7 @@ for row in table.find_all("tr")[1:]:
+         
+         if i==1:
+             cellname=td.get_text()
+-            print cellname
++            print(cellname)
+         
+         if i==3:
+             # print td
+@@ -46,7 +46,7 @@ for row in table.find_all("tr")[1:]:
+     list_of_files={}
+     for run in metadata[headings[3]]:
+         link_to_go=run[1]
+-        response1 = urllib2.urlopen(link_to_go+"&display=xml")
++        response1 = urllib.request.urlopen(link_to_go+"&display=xml")
+         xml = response1.read()
+         xmlsoup = BeautifulSoup(xml)
+         fllist=[]
+--- a/scripts/get_consensus_gfa.py
++++ b/scripts/get_consensus_gfa.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import sys
+ import os
+@@ -71,8 +71,8 @@ h = g.subgraph(nodes_to_keep)
+ #         print len(h.nodes()), len(consensus_contigs)
+ #         raise
+ 
+-print 'Number of contigs'
+-print len(consensus_contigs), len(h.nodes())
++print('Number of contigs')
++print(len(consensus_contigs), len(h.nodes()))
+ # print [len(x) for x in consensus_contigs]
+ 
+ 
+@@ -84,7 +84,7 @@ with open(gfaname,'w') as f:
+         # print j, i
+ 
+         seg = consensus_contigs[i]
+-        print(len(seg))
++        print((len(seg)))
+         seg_line = "S\t"+vert+"\t"+seg + '\n'
+         f.write(seg_line)
+     for edge in h.edges():
+--- a/scripts/get_draft_annotation.py
++++ b/scripts/get_draft_annotation.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ import sys
+ import os
+ import subprocess
+@@ -22,7 +22,7 @@ stream = subprocess.Popen(DBshow_cmd.spl
+                                   stdout=subprocess.PIPE,bufsize=1)
+ reads_queried = parse_read(stream.stdout)
+ read_dict = {}
+-for read_id,read in itertools.izip(reads,reads_queried):
++for read_id,read in zip(reads,reads_queried):
+     rdlen = len(read[1])
+ #     print read
+     read_dict[read_id] = read
+@@ -30,7 +30,7 @@ for read_id,read in itertools.izip(reads
+ complement = {'A':'T','C': 'G','T':'A', 'G':'C','a':'t','t':'a','c':'g','g':'c'}
+ 
+ def reverse_complement(string):
+-    return "".join(map(lambda x:complement[x],reversed(string)))
++    return "".join([complement[x] for x in reversed(string)])
+ 
+ def get_string(path):
+     #print path
+@@ -53,9 +53,9 @@ def get_string(path):
+ #         print read_id
+ #         print read_dict[int(read_id)][str_st:str_end]
+ #         print read_str
+-        print 'read len',len(read_str)
++        print('read len',len(read_str))
+         ret_str += read_str
+-    print len(path), len(ret_str)
++    print(len(path), len(ret_str))
+     return ret_str
+ 
+ 
+@@ -83,7 +83,7 @@ for vert in vertices_of_interest:
+     else:
+         read_end = vert_len
+     read_tuples[vert] = (read_start,read_end)
+-    print read_starts, read_ends, vert
++    print(read_starts, read_ends, vert)
+ 
+ 
+ for vert in vertices_of_interest:
+@@ -131,7 +131,7 @@ vertices_used = set([x for x in h.nodes(
+ contig_no = 1
+ for start_vertex in vertices_of_interest:
+     first_out_vertices = in_graph.successors(start_vertex)
+-    print start_vertex, first_out_vertices
++    print(start_vertex, first_out_vertices)
+     for vertex in first_out_vertices:
+         predecessor = start_vertex
+         start_vertex_id,start_vertex_or = start_vertex.split("_")
+@@ -263,7 +263,7 @@ while set(in_graph.nodes())-vertices_use
+             h.node[node_name]['start_read'] = path_var[0][1][0]
+             h.node[node_name]['end_read'] = path_var[-1][1][1]
+             h.node[node_name]['segment'] = get_string(cur_path)
+-            print len(cur_path)
++            print(len(cur_path))
+             h.add_edges_from([(vertRC,node_name),(node_name,vertRC)])
+ 
+ 
+@@ -294,7 +294,7 @@ while True:
+     h.remove_node(vert)
+ 
+ for  i, vert in enumerate(h.nodes()):
+-    print i,len(h.node[vert]['path'])
++    print(i,len(h.node[vert]['path']))
+ 
+ with open(outfile, 'w') as f:
+     for i,node in enumerate(h.nodes()):
+@@ -332,7 +332,7 @@ try:
+ except:
+     pass
+ for  i, vert in enumerate(h.nodes()):
+-    print i,len(h.node[vert]['path']), len(h.node[vert]['segment']), len(consensus_contigs[i])
++    print(i,len(h.node[vert]['path']), len(h.node[vert]['segment']), len(consensus_contigs[i]))
+ 
+ 
+ with open(gfaname,'w') as f:
+--- a/scripts/get_draft_path.py
++++ b/scripts/get_draft_path.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import sys
+ import os
+@@ -74,7 +74,7 @@ stream = subprocess.Popen(DBshow_cmd.spl
+                                   stdout=subprocess.PIPE,bufsize=1)
+ reads_queried = parse_read(stream.stdout)
+ read_dict = {}
+-for read_id,read in itertools.izip(reads,reads_queried):
++for read_id,read in zip(reads,reads_queried):
+     rdlen = len(read[1])
+ #     print read
+     read_dict[read_id] = read
+@@ -273,7 +273,7 @@ with open(outfile, 'w') as f:
+         # print len(node_list),len(weights_list)
+ 
+         if len(node_list) != len(weights_list)+1:
+-            print 'Something went wrong with contig '+str(contig_no)
++            print('Something went wrong with contig '+str(contig_no))
+             continue
+ 
+         # printed_nodes = printed_nodes | set(node_list)
+@@ -295,7 +295,7 @@ with open(outfile, 'w') as f:
+ 
+             prev_contig = out_graph.predecessors(vertex)[0]
+             cut_start = out_graph.node[prev_contig]['cut_end']
+-            if out_graph.node[prev_contig].has_key('path'):
++            if 'path' in out_graph.node[prev_contig]:
+                 nodeA = out_graph.node[prev_contig]['path'].split(';')[-1]
+             else:
+                 nodeA = prev_contig
+@@ -338,7 +338,7 @@ with open(outfile, 'w') as f:
+             cut_end = out_graph.node[next_contig]['cut_start']
+ 
+             nodeA = node_list[len(weights_list)]
+-            if out_graph.node[next_contig].has_key('path'):
++            if 'path' in out_graph.node[next_contig]:
+                 nodeB = out_graph.node[next_contig]['path'].split(';')[0]
+             else:
+                 nodeB = next_contig
+@@ -374,7 +374,7 @@ with open(outfile, 'w') as f:
+             next_contig = out_graph.successors(vertex)[0]
+ 
+             nodeB = rev_node(node_list[len(weights_list)])
+-            if out_graph.node[next_contig].has_key('path'):
++            if 'path' in out_graph.node[next_contig]:
+                 nodeA = rev_node(out_graph.node[next_contig]['path'].split(';')[0])
+             else:
+                 nodeA = rev_node(next_contig)
+@@ -419,7 +419,7 @@ with open(outfile, 'w') as f:
+ 
+             nodeA = rev_node(node_list[0])
+ 
+-            if out_graph.node[prev_contig].has_key('path'):
++            if 'path' in out_graph.node[prev_contig]:
+                 nodeB = rev_node(out_graph.node[prev_contig]['path'].split(';')[-1])
+             else:
+                 nodeB = rev_node(prev_contig)
+@@ -437,7 +437,7 @@ with open(outfile, 'w') as f:
+ 
+ 
+ 
+-print "Number of contigs: "+str(contig_no)
++print("Number of contigs: "+str(contig_no))
+ 
+ nx.write_graphml(out_graph,out_graphml_name)
+ 
+--- a/scripts/get_draft_path_norevcomp.py
++++ b/scripts/get_draft_path_norevcomp.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import sys
+ import os
+--- a/scripts/get_single_strand.py
++++ b/scripts/get_single_strand.py
+@@ -1,6 +1,6 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+-#usage python get_single_strand.py <in-fasta> <out-fasta>
++#usage python3 get_single_strand.py <in-fasta> <out-fasta>
+ 
+ from pbcore.io import FastaIO
+ import sys
+--- a/scripts/interface_utils.py
++++ b/scripts/interface_utils.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import sys
+ import os
+@@ -11,40 +11,40 @@ from parse_qv import *
+ #readarg = sys.argv[2]
+ 
+ def get_reads(filename, readlist):
+-    stream = subprocess.Popen(["DBshow", filename] + map(str,readlist),
++    stream = subprocess.Popen(["DBshow", filename] + list(map(str,readlist)),
+                                       stdout=subprocess.PIPE,bufsize=1)
+     reads = parse_read(stream.stdout) # generator
+     return reads
+ 
+ def get_QV(filename, readlist):
+-    stream = subprocess.Popen(["DBdump", filename, '-i'] + map(str,readlist),
++    stream = subprocess.Popen(["DBdump", filename, '-i'] + list(map(str,readlist)),
+                                       stdout=subprocess.PIPE,bufsize=1)
+     qv = parse_qv(stream.stdout) # generator
+     return qv
+ 
+ 
+ def get_alignments(filename, readlist):
+-    stream = subprocess.Popen(["LAshow", filename,filename]+ map(str,readlist),
++    stream = subprocess.Popen(["LAshow", filename,filename]+ list(map(str,readlist)),
+                                       stdout=subprocess.PIPE,bufsize=1)
+     alignments = parse_alignment(stream.stdout) # generator
+     return alignments
+ 
+ 
+ def get_alignments2(filename, alignmentname, readlist):
+-    stream = subprocess.Popen(["LA4Awesome", filename, filename, alignmentname]+ map(str,readlist),
++    stream = subprocess.Popen(["LA4Awesome", filename, filename, alignmentname]+ list(map(str,readlist)),
+                                       stdout=subprocess.PIPE,bufsize=1)
+     alignments = parse_alignment2(stream.stdout) # generator
+     return alignments
+     
+     
+ def get_alignments_mapping(filename, ref, alignmentname, readlist):
+-    stream = subprocess.Popen(["LA4Awesome", filename, ref, alignmentname]+ map(str,readlist)+ ['-F'],
++    stream = subprocess.Popen(["LA4Awesome", filename, ref, alignmentname]+ list(map(str,readlist))+ ['-F'],
+                                       stdout=subprocess.PIPE,bufsize=1)
+     alignments = parse_alignment2(stream.stdout) # generator
+     return alignments
+     
+ def get_alignments_mapping2(ref, filename, alignmentname):
+-    print ref,filename,alignmentname
++    print(ref,filename,alignmentname)
+     stream = subprocess.Popen(["LA4Awesome", ref, filename, alignmentname],
+                                       stdout=subprocess.PIPE,bufsize=1)
+     alignments = parse_alignment2(stream.stdout) # generator
+@@ -53,7 +53,7 @@ def get_alignments_mapping2(ref, filenam
+ 
+     
+ def get_alignments_mapping3(ref, filename, alignmentname, contig_no):
+-    print ref,filename,alignmentname
++    print(ref,filename,alignmentname)
+     stream = subprocess.Popen(["LA4Awesome", ref, filename, alignmentname, contig_no],
+                                       stdout=subprocess.PIPE,bufsize=1)
+     alignments = parse_alignment2(stream.stdout) # generator
+@@ -80,8 +80,8 @@ def get_all_alignments2(filename, alignm
+ def get_all_reads_in_alignment_with_one(filename,read):
+     this_read = get_reads(filename,[read])
+     alignments = list(get_alignments(filename,[read]))
+-    readlist = map(lambda x:x[2],alignments)
+-    print readlist
++    readlist = [x[2] for x in alignments]
++    print(readlist)
+     other_reads = get_reads(filename,readlist)
+     
+     return [list(this_read), list(other_reads), alignments] # note that this is not a generator
+--- a/scripts/longest_path.py
++++ b/scripts/longest_path.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import networkx as nx
+ import sys
+@@ -13,7 +13,7 @@ def longest_path(G):
+             dist[node] = max(pairs)
+         else:
+             dist[node] = (0, node)
+-    node,(length,_)  = max(dist.items(), key=lambda x:x[1])
++    node,(length,_)  = max(list(dist.items()), key=lambda x:x[1])
+     path = []
+     while length > 0:
+         path.append(node)
+@@ -27,23 +27,23 @@ filename = sys.argv[1]
+ g = nx.DiGraph()
+ 
+ with open(filename,'r') as f:
+-    for line in f.xreadlines():
++    for line in f:
+         g.add_edge(*(line.strip().split('->')))
+ 
+ 
+-print nx.info(g)
+-degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-print Counter(degree_sequence)
++print(nx.info(g))
++degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++print(Counter(degree_sequence))
+ 
+ for i in range(7):
+     for node in g.nodes():
+         if g.in_degree(node) == 0:
+             g.remove_node(node)
+             
+-    print nx.info(g)
++    print(nx.info(g))
+ 
+-degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-print Counter(degree_sequence)
++degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++print(Counter(degree_sequence))
+ 
+ 
+ def rev(string):
+@@ -57,7 +57,7 @@ for edge in g.edges():
+     #print edge
+     #print rev(edge[1]), rev(edge[0])
+ 
+-print nx.info(g)
++print(nx.info(g))
+ nx.write_graphml(g, filename.split('.')[0]+'.graphml')
+ #print(list(nx.dfs_edges(g,sys.argv[2])))
+ #p=nx.shortest_path(g)
+--- a/scripts/merge_hinges.py
++++ b/scripts/merge_hinges.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import networkx as nx
+ import random
+@@ -70,7 +70,7 @@ def z_clipping(G,threshold,in_hinges,out
+             
+             if len(cur_path) <= threshold and H.in_degree(cur_node) > 1 and H.out_degree(st_node) > 1 and cur_node not in in_hinges:
+                 if print_z:
+-                    print cur_path
++                    print(cur_path)
+                 
+                 for edge in cur_path:
+                     H.remove_edge(edge[0],edge[1])
+@@ -98,7 +98,7 @@ def z_clipping(G,threshold,in_hinges,out
+             
+             if len(cur_path) <= threshold and H.out_degree(cur_node) > 1 and H.in_degree(end_node) > 1 and cur_node not in out_hinges:
+                 if print_z:
+-                    print cur_path
++                    print(cur_path)
+                 for edge in cur_path:
+                     H.remove_edge(edge[0],edge[1])
+                 for j in range(len(cur_path)-1):
+@@ -167,7 +167,7 @@ def random_condensation(G,n_nodes):
+                         merge_path(g,in_node,node,out_node)
+     
+     if iter_cnt >= max_iter:
+-        print "couldn't finish sparsification"+str(len(g.nodes()))
++        print("couldn't finish sparsification"+str(len(g.nodes())))
+                         
+     return g
+ 
+@@ -177,10 +177,10 @@ def add_groundtruth(g,json_file,in_hinge
+     
+     mapping = ujson.load(json_file)
+ 
+-    print 'getting mapping'
++    print('getting mapping')
+     mapped_nodes=0
+-    print str(len(mapping)) 
+-    print str(len(g.nodes()))
++    print(str(len(mapping))) 
++    print(str(len(g.nodes())))
+     
+     slack = 500
+            
+@@ -191,7 +191,7 @@ def add_groundtruth(g,json_file,in_hinge
+         # print node_base
+ 
+         #print node
+-        if mapping.has_key(node_base):
++        if node_base in mapping:
+             g.node[node]['aln_start'] = min (mapping[node_base][0][0],mapping[node_base][0][1])
+             g.node[node]['aln_end'] = max(mapping[node_base][0][1],mapping[node_base][0][0])
+ #             g.node[node]['chr'] = mapping[node_base][0][2]
+@@ -440,7 +440,7 @@ def read_graph(edges_file,hg_file,gt_fil
+ 
+                         hinge_node = lines1[1]+"_"+lines1[4] + '_' + lines1[6]
+ 
+-                        print hinge_node
++                        print(hinge_node)
+ 
+                         eff_hinge = hinge_mapping[hinge_node]
+ 
+--- a/scripts/parallel_draw.sh
++++ b/scripts/parallel_draw.sh
+@@ -3,7 +3,7 @@ echo "Bash version ${BASH_VERSION}..."
+ for i in $(seq 4000 1 20000)
+   do
+      echo drawing read $i
+-     num1=$(ps -ef | grep 'python draw.py' | wc -l)
++     num1=$(ps -ef | grep 'python3 draw.py' | wc -l)
+      num2=$(ps -ef | grep 'LA4Awesome' | wc -l)
+      num=$(( $num1 + $num2 ))
+      echo $num running
+@@ -11,9 +11,9 @@ for i in $(seq 4000 1 20000)
+          do 
+              sleep 5
+              echo waiting, $num running
+-             num1=$(ps -ef | grep 'python draw.py' | wc -l)
++             num1=$(ps -ef | grep 'python3 draw.py' | wc -l)
+              num2=$(ps -ef | grep 'LA4Awesome' | wc -l)
+              num=$(( $num1 + $num2 ))             
+          done
+-     python draw2.py $i &
++     python3 draw2.py $i &
+  done
+--- a/scripts/parallel_draw_large.sh
++++ b/scripts/parallel_draw_large.sh
+@@ -3,7 +3,7 @@ echo "Bash version ${BASH_VERSION}..."
+ for i in $(seq 1 1 100)
+   do
+      echo drawing read $i
+-     num1=$(ps -ef | grep 'python draw.py' | wc -l)
++     num1=$(ps -ef | grep 'python3 draw.py' | wc -l)
+      num2=$(ps -ef | grep 'LA4Awesome' | wc -l)
+      num=$(( $num1 + $num2 ))
+      echo $num running
+@@ -12,5 +12,5 @@ for i in $(seq 1 1 100)
+              sleep 5
+              echo waiting
+          done
+-     python draw.py $i &
++     python3 draw.py $i &
+  done
+\ No newline at end of file
+--- a/scripts/parse.py
++++ b/scripts/parse.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import sys
+ min_len_aln = 1000
+@@ -13,7 +13,7 @@ with sys.stdin as f:
+         read_id = l[0]
+         seq = l[1]
+         
+-        print read_id,seq
++        print(read_id,seq)
+         
+         #if len(seq) > max_len:
+         #    seq = seq[:max_len-1]
+--- a/scripts/parse_alignment.py
++++ b/scripts/parse_alignment.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import sys
+ import re
+@@ -11,7 +11,7 @@ def parse_alignment(stream = sys.stdin):
+             sub = re.sub(',','',sub)
+             lst = sub.split()[:-1]
+             if len(lst) == 9:
+-                yield [lst[2]] + map(int, lst[0:2] + lst[3:])
++                yield [lst[2]] + list(map(int, lst[0:2] + lst[3:]))
+                 
+                 
+ def parse_alignment2(stream = sys.stdin):    
+@@ -22,5 +22,5 @@ def parse_alignment2(stream = sys.stdin)
+             sub = re.sub(',','',sub)
+             lst = sub.split()[:-1]
+             if len(lst) == 11:
+-                yield [lst[2]] + map(int, lst[0:2] + lst[3:])
++                yield [lst[2]] + list(map(int, lst[0:2] + lst[3:]))
+                 
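
The list(map(...)) wrappers here are needed because map() returns a lazy iterator in Python 3 and the result is concatenated onto a list. A standalone sketch of the failure mode (the values are made up):

    lst = ["12", "n", "34", "56"]

    # On Python 3, "[lst[1]] + map(int, ...)" raises
    # TypeError: can only concatenate list (not "map") to list
    row = [lst[1]] + list(map(int, lst[0:1] + lst[2:]))
    print(row)    # ['n', 12, 34, 56]
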
+--- a/scripts/parse_qv.py
++++ b/scripts/parse_qv.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import sys
+ 
+--- a/scripts/parse_read.py
++++ b/scripts/parse_read.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import sys
+ 
+--- a/scripts/pileup.ipynb
++++ b/scripts/pileup.ipynb
+@@ -205,21 +205,21 @@
+  ],
+  "metadata": {
+   "kernelspec": {
+-   "display_name": "Python 2",
+-   "language": "python",
+-   "name": "python2"
++   "display_name": "Python 3",
++   "language": "python3",
++   "name": "python3"
+   },
+   "language_info": {
+    "codemirror_mode": {
+-    "name": "ipython",
+-    "version": 2
++    "name": "ipython3",
++    "version": 3
+    },
+    "file_extension": ".py",
+-   "mimetype": "text/x-python",
+-   "name": "python",
+-   "nbconvert_exporter": "python",
+-   "pygments_lexer": "ipython2",
+-   "version": "2.7.6"
++   "mimetype": "text/x-python3",
++   "name": "python3",
++   "nbconvert_exporter": "python3",
++   "pygments_lexer": "ipython3",
++   "version": "3.7"
+   }
+  },
+  "nbformat": 4,
+--- a/scripts/pipeline_consensus.py
++++ b/scripts/pipeline_consensus.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import sys
+ import os
+@@ -38,63 +38,63 @@ base_path = './'
+ 
+ if st_point <= 1 and end_point >= 1:
+     draft_path_cmd = 'get_draft_path.py '+base_path+' '+ bact_id+' '+graphml_file
+-    print '1: '+draft_path_cmd
++    print('1: '+draft_path_cmd)
+     subprocess.check_output(draft_path_cmd,cwd=base_path, shell=True)
+ 
+ 
+ if st_point <= 2 and end_point >= 2:
+     draft_assembly_cmd = 'draft_assembly --db '+bact_id+' --las '+bact_id+'.las --prefix '+bact_id+' --config '+ini_path+' --out '+bact_id+'.draft'
+-    print '2: '+draft_assembly_cmd
++    print('2: '+draft_assembly_cmd)
+     subprocess.check_output(draft_assembly_cmd,cwd=base_path, shell=True)
+   
+ 
+ if st_point <= 3 and end_point >= 3:
+     corr_head_cmd = 'correct_head.py '+bact_id+'.draft.fasta '+bact_id+'.draft.pb.fasta draft_map.txt'
+-    print '3: '+corr_head_cmd
++    print('3: '+corr_head_cmd)
+     subprocess.check_output(corr_head_cmd,cwd=base_path, shell=True)
+ 
+ 
+ if st_point <= 4 and end_point >= 4:
+     subprocess.call("rm -f draft.db",shell=True,cwd=base_path)
+     fasta2DB_cmd = "fasta2DB draft "+base_path+bact_id+'.draft.pb.fasta'
+-    print '4: '+fasta2DB_cmd
++    print('4: '+fasta2DB_cmd)
+     subprocess.check_output(fasta2DB_cmd.split(),cwd=base_path)
+ 
+ if st_point <= 5 and end_point >= 5:
+     subprocess.call("rm -f draft.*.las",shell=True,cwd=base_path)
+     mapper_cmd = "HPCmapper draft "+bact_id
+-    print '5: '+mapper_cmd
++    print('5: '+mapper_cmd)
+     subprocess.call(mapper_cmd.split(),stdout=open(base_path+'draft_consensus.sh','w') , cwd=base_path)
+ 
+ 
+ if st_point <= 6 and end_point >= 6:
+     # modify_cmd = """awk '{gsub("daligner -A -k20 -h50 -e.85","daligner -A",$0); print $0}' draft_consensus.sh"""
+     modify_cmd = ['awk','{gsub("daligner -A -k20 -h50 -e.85","daligner -A",$0); print $0}','draft_consensus.sh']
+-    print '6: '+"""awk '{gsub("daligner -A -k20 -h50 -e.85","daligner -A",$0); print $0}' draft_consensus.sh"""
++    print('6: '+"""awk '{gsub("daligner -A -k20 -h50 -e.85","daligner -A",$0); print $0}' draft_consensus.sh""")
+     subprocess.call(modify_cmd,stdout=open(base_path+'draft_consensus2.sh','w') , cwd=base_path)
+ 
+ 
+ if st_point <= 7 and end_point >= 7:
+     mapper_shell_cmd = "csh -v draft_consensus.sh"
+-    print '7: '+mapper_shell_cmd
++    print('7: '+mapper_shell_cmd)
+     subprocess.check_output(mapper_shell_cmd.split(), cwd=base_path)
+ 
+ if st_point <= 8 and end_point >= 8:
+     # remove_cmd = 'rm -f nonrevcompdraft.'+bact_id+'.*.las'
+     # subprocess.call(remove_cmd,shell=True,cwd=base_path)
+     LAmerge_cmd = "LAmerge draft."+bact_id+".las "+'draft.'+bact_id+'.[0-9].las'
+-    print '8: '+LAmerge_cmd
++    print('8: '+LAmerge_cmd)
+     subprocess.check_output(LAmerge_cmd,cwd=base_path,shell=True)
+ 
+ if st_point <= 9 and end_point >= 9:
+     consensus_cmd = 'consensus draft '+bact_id+' draft.'+bact_id+'.las '+bact_id+'.consensus.fasta '+ini_path
+-    print '9: '+consensus_cmd
++    print('9: '+consensus_cmd)
+     subprocess.check_output(consensus_cmd,cwd=base_path,shell=True)
+     
+ 
+ if st_point <= 10 and end_point >= 10:
+     gfa_cmd =  'get_consensus_gfa.py '+base_path+ ' '+ bact_id+ ' '+bact_id+'.consensus.fasta' 
+-    print '10: '+gfa_cmd
++    print('10: '+gfa_cmd)
+     subprocess.check_output(gfa_cmd,cwd=base_path,shell=True)
+ 
+ 
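
Two side notes on this file. First, step 6 writes the awk-rewritten script to draft_consensus2.sh but step 7 still executes draft_consensus.sh, so the modified script appears unused here; the norevcomp variant below runs draft_consensus2.sh instead. Second, on Python 3 subprocess.check_output() returns bytes, which matters as soon as the output is printed or parsed (these steps discard it, so they keep working). A minimal sketch of the bytes point (the echoed command is illustrative):

    import subprocess

    out = subprocess.check_output("echo LAmerge draft.las", shell=True)
    print(out.decode().strip())    # bytes on Python 3 until decoded
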
+--- a/scripts/pipeline_consensus_norevcomp.py
++++ b/scripts/pipeline_consensus_norevcomp.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ 
+ import sys
+@@ -39,32 +39,32 @@ base_path = './'
+ 
+ if st_point <= 1 and end_point >= 1:
+     draft_path_cmd = 'get_draft_path_norevcomp.py '+base_path+' '+ bact_id+' '+graphml_file
+-    print '1: '+draft_path_cmd
++    print('1: '+draft_path_cmd)
+     subprocess.check_output(draft_path_cmd,cwd=base_path, shell=True)
+ 
+ 
+ if st_point <= 2 and end_point >= 2:
+     draft_assembly_cmd = 'draft_assembly --db '+bact_id+' --las '+bact_id+'.las --prefix '+bact_id+' --config '+ini_path+' --out '+bact_id+'.draft'
+-    print '2: '+draft_assembly_cmd
++    print('2: '+draft_assembly_cmd)
+     subprocess.check_output(draft_assembly_cmd,cwd=base_path, shell=True)
+   
+ 
+ if st_point <= 3 and end_point >= 3:
+     corr_head_cmd = 'correct_head.py '+bact_id+'.draft.fasta '+bact_id+'.draft.pb.fasta draft_map.txt'
+-    print '3: '+corr_head_cmd
++    print('3: '+corr_head_cmd)
+     subprocess.check_output(corr_head_cmd,cwd=base_path, shell=True)
+ 
+ 
+ if st_point <= 4 and end_point >= 4:
+     subprocess.call("rm -f draft.db",shell=True,cwd=base_path)
+     fasta2DB_cmd = "fasta2DB draft "+base_path+bact_id+'.draft.pb.fasta'
+-    print '4: '+fasta2DB_cmd
++    print('4: '+fasta2DB_cmd)
+     subprocess.check_output(fasta2DB_cmd.split(),cwd=base_path)
+ 
+ if st_point <= 5 and end_point >= 5:
+     subprocess.call("rm -f draft.*.las",shell=True,cwd=base_path)
+     mapper_cmd = "HPCmapper draft "+bact_id
+-    print '5: '+mapper_cmd
++    print('5: '+mapper_cmd)
+     subprocess.call(mapper_cmd.split(),stdout=open(base_path+'draft_consensus.sh','w') , cwd=base_path)
+ 
+ 
+@@ -72,14 +72,14 @@ if st_point <= 5 and end_point >= 5:
+ if st_point <= 6 and end_point >= 6:
+     # modify_cmd = """awk '{gsub("daligner -A -k20 -h50 -e.85","daligner -A",$0); print $0}' draft_consensus.sh"""
+     modify_cmd = ['awk','{gsub("daligner -A -k20 -h50 -e.85","daligner -A",$0); print $0}','draft_consensus.sh']
+-    print '6: '+"""awk '{gsub("daligner -A -k20 -h50 -e.85","daligner -A",$0); print $0}' draft_consensus.sh"""
++    print('6: '+"""awk '{gsub("daligner -A -k20 -h50 -e.85","daligner -A",$0); print $0}' draft_consensus.sh""")
+     subprocess.call(modify_cmd,stdout=open(base_path+'draft_consensus2.sh','w') , cwd=base_path)
+ 
+ 
+ 
+ if st_point <= 7 and end_point >= 7:
+     mapper_shell_cmd = "csh -v draft_consensus2.sh"
+-    print '7: '+mapper_shell_cmd
++    print('7: '+mapper_shell_cmd)
+     subprocess.check_output(mapper_shell_cmd.split(), cwd=base_path)
+ 
+ 
+@@ -87,13 +87,13 @@ if st_point <= 8 and end_point >= 8:
+     # remove_cmd = 'rm -f nonrevcompdraft.'+bact_id+'.*.las'
+     # subprocess.call(remove_cmd,shell=True,cwd=base_path)
+     LAmerge_cmd = "LAmerge draft."+bact_id+".las "+'draft.'+bact_id+'.[0-9].las'
+-    print '8: '+LAmerge_cmd
++    print('8: '+LAmerge_cmd)
+     subprocess.check_output(LAmerge_cmd,cwd=base_path,shell=True)
+ 
+ 
+ if st_point <= 9 and end_point >= 9:
+     consensus_cmd = 'consensus draft '+bact_id+' draft.'+bact_id+'.las '+bact_id+'.norevcomp_consensus.fasta '+ini_path
+-    print '9: '+consensus_cmd
++    print('9: '+consensus_cmd)
+     subprocess.check_output(consensus_cmd,cwd=base_path,shell=True)
+     
+ 
+--- a/scripts/pipeline_nctc.py
++++ b/scripts/pipeline_nctc.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import sys
+ import os
+@@ -27,35 +27,35 @@ assert len(fasta_names)==1
+ fasta_name = fasta_names[0]
+ bact_name = fasta_name.split('.fasta')[0]
+ 
+-print bact_name
++print(bact_name)
+ 
+ 
+ if st_point <= 1:
+ 	subprocess.call("rm -f *.db",shell=True,cwd=base_path)
+ 	fasta2DB_cmd = "fasta2DB "+bact_name+' '+base_path+fasta_name
+-	print fasta2DB_cmd
++	print(fasta2DB_cmd)
+ 	subprocess.check_output(fasta2DB_cmd.split(),cwd=base_path)
+ 
+ if st_point <= 2:
+ 	DBsplit_cmd = "DBsplit -x500 -s100 "+bact_name
+-	print DBsplit_cmd
++	print(DBsplit_cmd)
+ 	subprocess.check_output(DBsplit_cmd.split(),cwd=base_path)
+ 
+ if st_point <= 3:
+ 	subprocess.call("rm -f *.las",shell=True,cwd=base_path)
+ 	daligner_cmd = "HPCdaligner -t5 "+bact_name
+         daligner_shell_cmd = "csh -v daligner_cmd.sh"
+-	print daligner_cmd
++	print(daligner_cmd)
+ 	p = subprocess.call(daligner_cmd.split(),stdout=open(base_path+'daligner_cmd.sh','w') , cwd=base_path)
+         p2 = subprocess.check_output(daligner_shell_cmd.split(), cwd=base_path)
+ if st_point <= 4:
+ 	remove_cmd = "rm "+base_path+bact_name+".*."+bact_name+".*"
+-	print remove_cmd
++	print(remove_cmd)
+ 	os.system(remove_cmd)
+ 
+ if st_point <= 5:
+ 	LAmerge_cmd = "LAmerge "+bact_name+".las "+bact_name+".*.las"
+-	print LAmerge_cmd
++	print(LAmerge_cmd)
+ 	subprocess.check_output(LAmerge_cmd,cwd=base_path,shell=True)
+ 
+ if st_point <= 6:
+@@ -71,17 +71,17 @@ if st_point <= 8:
+ 
+ if st_point <= 9:
+ 	Reads_filter_cmd = "Reads_filter --db "+bact_name+" --las "+bact_name+".las -x "+bact_name+" --config ~/AwesomeAssembler/utils/nominal.ini"
+-	print Reads_filter_cmd
++	print(Reads_filter_cmd)
+ 	subprocess.check_output(Reads_filter_cmd,cwd=base_path, shell=True)
+ 
+ if st_point <= 10:
+ 	hinging_cmd = "hinging --db "+bact_name+" --las "+bact_name+".las -x "+bact_name+" --config ~/AwesomeAssembler/utils/nominal.ini -o "+bact_name
+-	print hinging_cmd
++	print(hinging_cmd)
+ 	subprocess.check_output(hinging_cmd, cwd=base_path, shell=True)
+ 
+ if st_point <= 11:
+-	pruning_cmd = "python ~/AwesomeAssembler/scripts/pruning_and_clipping.py "+bact_name+".edges.hinges "+bact_name+".hinge.list A"
+-	print pruning_cmd
++	pruning_cmd = "python3 /usr/lib/hinge/pruning_and_clipping.py "+bact_name+".edges.hinges "+bact_name+".hinge.list A"
++	print(pruning_cmd)
+ 	subprocess.check_output(pruning_cmd, cwd=base_path, shell=True)
+ 
+ 
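
A porting hazard 2to3 does not touch: judging by the listing, this file mixes tab-indented lines (the subprocess calls) with space-indented ones (daligner_shell_cmd and p2 in the step-3 hunk). Python 2 tolerated the mix; Python 3 rejects ambiguous indentation with a TabError at compile time. If those really are literal tabs, the stdlib's tabnanny will flag the offending lines:

    import tabnanny

    # Prints any line whose indentation mixes tabs and spaces ambiguously.
    tabnanny.check("scripts/pipeline_nctc.py")
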
+--- a/scripts/pruning_and_clipping.py
++++ b/scripts/pruning_and_clipping.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ # coding: utf-8
+ 
+@@ -39,7 +39,7 @@ def write_graph2(G,Ginfo,flname):
+ 
+             if (edge[0],edge[1]) not in Ginfo:
+                 count_no += 1
+-                print "not found"
++                print("not found")
+                 continue
+             else:
+                 count_yes += 1
+@@ -54,7 +54,7 @@ def write_graph2(G,Ginfo,flname):
+ 
+             f.write(Ginfo[(edge[0],edge[1])]+'\n')
+ 
+-    print count_no, count_yes
++    print(count_no, count_yes)
+ 
+ 
+ 
+@@ -90,7 +90,7 @@ def prune_graph(graph,in_hinges,out_hing
+                 H.add_node(successor)
+         for node in in_hinges:
+             H.add_node(node)
+-    map(H.add_node,start_nodes)
++    list(map(H.add_node,start_nodes))
+     all_vertices=set(G.nodes())
+     current_vertices=set(H.nodes())
+     undiscovered_vertices=all_vertices-current_vertices
+@@ -107,10 +107,10 @@ def prune_graph(graph,in_hinges,out_hing
+         current_vertices=current_vertices.union(discovered_vertices_set)
+ #         print len(undiscovered_vertices)
+         if len(discovered_vertices_set)==0:
+-            print last_discovered_vertices
+-            print 'did not reach all nodes'
+-            print 'size of G: '+str(len(G.nodes()))
+-            print 'size of H: '+str(len(H.nodes()))
++            print(last_discovered_vertices)
++            print('did not reach all nodes')
++            print('size of G: '+str(len(G.nodes())))
++            print('size of H: '+str(len(H.nodes())))
+ #             return H
+ 
+             rand_node = list(undiscovered_vertices)[0]
+@@ -206,20 +206,20 @@ def dead_end_clipping_sym(G,threshold,pr
+ 
+         cur_node = st_node
+         if print_debug:
+-            print '----0'
+-            print st_node
++            print('----0')
++            print(st_node)
+ 
+         if len(H.successors(st_node)) == 1:
+             cur_node = H.successors(st_node)[0]
+ 
+             if print_debug:
+-                print '----1'
++                print('----1')
+ 
+             while H.in_degree(cur_node) == 1 and H.out_degree(cur_node) == 1 and len(cur_path) < threshold + 2:
+                 cur_path.append(cur_node)
+ 
+                 if print_debug:
+-                    print cur_node
++                    print(cur_node)
+ 
+                 cur_node = H.successors(cur_node)[0]
+ 
+@@ -228,21 +228,21 @@ def dead_end_clipping_sym(G,threshold,pr
+ 
+ 
+         if print_debug:
+-            print '----2'
+-            print cur_path
++            print('----2')
++            print(cur_path)
+ 
+ 
+         if len(cur_path) <= threshold and (H.in_degree(cur_node) > 1 or H.out_degree(cur_node) == 0):
+             for vertex in cur_path:
+                 # try:
+                 if print_debug:
+-                    print 'about to delete ',vertex,rev_node(vertex)
++                    print('about to delete ',vertex,rev_node(vertex))
+                 H.remove_node(vertex)
+                 H.remove_node(rev_node(vertex))
+                 # except:
+                     # pass
+                 if print_debug:
+-                    print 'deleted ',vertex,rev_node(vertex)
++                    print('deleted ',vertex,rev_node(vertex))
+ 
+ 
+     return H
+@@ -276,7 +276,7 @@ def z_clipping(G,threshold,in_hinges,out
+ 
+             if len(cur_path) <= threshold and H.in_degree(cur_node) > 1 and H.out_degree(st_node) > 1 and cur_node not in in_hinges:
+                 if print_z:
+-                    print cur_path
++                    print(cur_path)
+ 
+                 for edge in cur_path:
+                     H.remove_edge(edge[0],edge[1])
+@@ -304,7 +304,7 @@ def z_clipping(G,threshold,in_hinges,out
+ 
+             if len(cur_path) <= threshold and H.out_degree(cur_node) > 1 and H.in_degree(end_node) > 1 and cur_node not in out_hinges:
+                 if print_z:
+-                    print cur_path
++                    print(cur_path)
+                 for edge in cur_path:
+                     H.remove_edge(edge[0],edge[1])
+                 for j in range(len(cur_path)-1):
+@@ -346,7 +346,7 @@ def z_clipping_sym(G,threshold,in_hinges
+ 
+             if len(cur_path) <= threshold and H.in_degree(cur_node) > 1 and H.out_degree(st_node) > 1 and cur_node not in in_hinges:
+                 if print_z:
+-                    print cur_path
++                    print(cur_path)
+ 
+                 for edge in cur_path:
+ 
+@@ -432,7 +432,7 @@ def random_condensation(G,n_nodes,check_
+                         merge_path(g,in_node,node,out_node)
+ 
+     if iter_cnt >= max_iter:
+-        print "couldn't finish sparsification"+str(len(g.nodes()))
++        print("couldn't finish sparsification"+str(len(g.nodes())))
+ 
+     return g
+ 
+@@ -478,7 +478,7 @@ def random_condensation_sym(G,n_nodes,ch
+                             pass
+ 
+     if iter_cnt >= max_iter:
+-        print "couldn't finish sparsification"+str(len(g.nodes()))
++        print("couldn't finish sparsification"+str(len(g.nodes())))
+ 
+     return g
+ 
+@@ -535,7 +535,7 @@ def random_condensation2(g,n_nodes):
+ 
+ 
+     if iter_cnt >= max_iter:
+-        print "couldn't finish sparsification: "+str(len(g.nodes()))
++        print("couldn't finish sparsification: "+str(len(g.nodes())))
+ 
+ 
+     return g
+@@ -583,7 +583,7 @@ def bubble_bursting_sym(H,threshold,prin
+         if len(cur_path) <= threshold and len(alt_path) <= threshold and end_node0 == cur_node:
+ 
+             if print_bubble:
+-                print 'found bubble'
++                print('found bubble')
+ 
+             for edge in cur_path:
+ 
+@@ -692,8 +692,8 @@ def loop_resolution(g,max_nodes,flank,pr
+     starting_nodes =  [x for x in g.nodes() if g.out_degree(x) == 2]
+ 
+     if print_debug:
+-        print '----'
+-        print starting_nodes
++        print('----')
++        print(starting_nodes)
+ 
+     tandem = []
+ 
+@@ -704,8 +704,8 @@ def loop_resolution(g,max_nodes,flank,pr
+             continue
+ 
+         if print_debug:
+-            print '----'
+-            print st_node
++            print('----')
++            print(st_node)
+ 
+         loop_len = 0
+ 
+@@ -716,14 +716,14 @@ def loop_resolution(g,max_nodes,flank,pr
+                 continue
+ 
+             if print_debug:
+-                print '----'
+-                print first_node
++                print('----')
++                print(first_node)
+ 
+             other_successor = [x for x in g.successors(st_node) if x != first_node][0]
+ 
+             next_node = first_node
+             if print_debug:
+-                print 'going on loop'
++                print('going on loop')
+ 
+             loop_len = 0
+             prev_edge = g[st_node][next_node]
+@@ -739,7 +739,7 @@ def loop_resolution(g,max_nodes,flank,pr
+                 continue
+ 
+             if print_debug:
+-                print "length in loop " + str(loop_len)
++                print("length in loop " + str(loop_len))
+             len_in_loop = loop_len
+             first_node_of_repeat = next_node
+ 
+@@ -780,8 +780,8 @@ def loop_resolution(g,max_nodes,flank,pr
+                 try:
+                     assert not (g.in_degree(next_double_node) == 1 and g.out_degree(next_double_node) == 1)
+                 except:
+-                    print str(g.in_degree(next_node))
+-                    print str(g.out_degree(next_node))
++                    print(str(g.in_degree(next_node)))
++                    print(str(g.out_degree(next_node)))
+                     raise
+ 
+             while g.in_degree(next_double_node) == 1 and g.out_degree(next_double_node) == 1 and node_cnt < max_nodes:
+@@ -791,16 +791,16 @@ def loop_resolution(g,max_nodes,flank,pr
+                 rep.append(next_double_node)
+ 
+             if print_debug:
+-                print "length in repeat " + str(loop_len-len_in_loop)
++                print("length in repeat " + str(loop_len-len_in_loop))
+ 
+             if next_double_node == st_node and loop_len > MAX_PLASMID_LENGTH:
+                 if print_debug:
+-                    print 'success!'
+-                    print "length in loop " + str(loop_len)
+-                    print 'rep is:'
+-                    print rep
+-                    print 'in_node and other_successor:'
+-                    print in_node, other_successor
++                    print('success!')
++                    print("length in loop " + str(loop_len))
++                    print('rep is:')
++                    print(rep)
++                    print('in_node and other_successor:')
++                    print(in_node, other_successor)
+                 resolve_rep(g,rep,in_node,other_successor)
+     #             print next_double_node
+ 
+@@ -880,10 +880,10 @@ def add_groundtruth(g,json_file,in_hinge
+ 
+     mapping = ujson.load(json_file)
+ 
+-    print 'getting mapping'
++    print('getting mapping')
+     mapped_nodes=0
+-    print str(len(mapping))
+-    print str(len(g.nodes()))
++    print(str(len(mapping)))
++    print(str(len(g.nodes())))
+ 
+     slack = 500
+     max_chr = 0
+@@ -897,7 +897,7 @@ def add_groundtruth(g,json_file,in_hinge
+ 
+         #print node
+         g.node[node]['normpos'] = 0
+-        if mapping.has_key(node_base):
++        if node_base in mapping:
+             g.node[node]['chr'] = mapping[node_base][0][2]+1
+             g.node[node]['aln_start'] = min (mapping[node_base][0][0],mapping[node_base][0][1])
+             g.node[node]['aln_end'] = max(mapping[node_base][0][1],mapping[node_base][0][0])
+@@ -922,22 +922,22 @@ def add_groundtruth(g,json_file,in_hinge
+         else:
+             chr_length_dict[g.node[node]['chr']] = max(g.node[node]['aln_end'], 1)
+ 
+-    chr_list = sorted(chr_length_dict.items(), key=operator.itemgetter(1), reverse=True)
++    chr_list = sorted(list(chr_length_dict.items()), key=operator.itemgetter(1), reverse=True)
+ 
+     max_chr_len1 = max([g.node[x]['aln_end'] for x in  g.nodes()])
+     max_chr_multiplier = 10**len(str(max_chr_len1))
+-    print [x for x in chr_list]
++    print([x for x in chr_list])
+     chr_set =[x [0] for x in chr_list]
+-    print chr_set
++    print(chr_set)
+     # red_bk = 102
+     # green_bk = 102
+     # blue_bk = 102
+     colour_list = ['red', 'lawngreen', 'deepskyblue', 'deeppink', 'darkorange', 'purple', 'gold', 'mediumblue',   'saddlebrown', 'darkgreen']
+     for colour in colour_list:
+-        print  matplotlib.colors.colorConverter.to_rgb(colour)
++        print(matplotlib.colors.colorConverter.to_rgb(colour))
+     for index, chrom in enumerate(chr_set):
+         node_set = set([x for x in  g.nodes() if g.node[x]['chr'] == chrom])
+-        print chrom
++        print(chrom)
+ 
+ 
+         max_chr_len = max([g.node[x]['aln_end'] for x in  g.nodes() if g.node[x]['chr'] == chrom])
+@@ -960,7 +960,7 @@ def add_groundtruth(g,json_file,in_hinge
+         blue_bk = max(blue-100,0)
+         green_bk = max(green-100,0)
+ 
+-        print red,blue,green
++        print(red,blue,green)
+         for node in node_set:
+             g.node[node]['normpos'] = g.node[node]['chr'] * max_chr_multiplier + (g.node[node]['aln_end']/float(max_chr_len))*max_chr_multiplier
+             lamda = (g.node[node]['aln_end']/max_chr_len)
+@@ -1058,13 +1058,13 @@ def add_chimera_flags(g,prefix):
+                     assert not ((node_name+'_0' in node_set and node_name+'_1' not in node_set)
+                         or (node_name+'_0' not in node_set and node_name+'_1'  in node_set))
+                 except:
+-                    print node_name + ' is not symmetrically present in the graph input.'
++                    print(node_name + ' is not symmetrically present in the graph input.')
+                     raise
+                 if node_name+'_0' in node_set:
+                     g.node[node_name+'_0']['CFLAG'] = True
+                     g.node[node_name+'_1']['CFLAG'] = True
+                     num_bad_cov_reads += 1
+-    print str(num_bad_cov_reads) + ' bad coverage reads.'
++    print(str(num_bad_cov_reads) + ' bad coverage reads.')
+ 
+     num_bad_slf_reads = 0
+     if slf_flags != None:
+@@ -1075,13 +1075,13 @@ def add_chimera_flags(g,prefix):
+                     assert not ((node_name+'_0' in node_set and node_name+'_1' not in node_set)
+                         or (node_name+'_0' not in node_set and node_name+'_1'  in node_set))
+                 except:
+-                    print node_name + ' is not symmetrically present in the graph input.'
++                    print(node_name + ' is not symmetrically present in the graph input.')
+                     raise
+                 if node_name+'_0' in node_set:
+                     g.node[node_name+'_0']['SFLAG'] = True
+                     g.node[node_name+'_1']['SFLAG'] = True
+                     num_bad_slf_reads += 1
+-    print str(num_bad_slf_reads) + ' bad self aligned reads.'            
++    print(str(num_bad_slf_reads) + ' bad self aligned reads.')            
+ 
+ 
+ 
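
The list(map(H.add_node, start_nodes)) rewrite above is 2to3's mechanical fix for map() turning lazy: without list(), the add_node calls would never execute. Using map() purely for side effects is unidiomatic in Python 3, though; a plain loop, or networkx's own add_nodes_from(), states the intent directly. A sketch (node names invented):

    import networkx as nx

    H = nx.DiGraph()
    start_nodes = ["a_0", "b_1"]

    list(map(H.add_node, start_nodes))   # 2to3 output: works, builds a throwaway list

    for node in start_nodes:             # idiomatic equivalent
        H.add_node(node)
    H.add_nodes_from(start_nodes)        # networkx's bulk insert
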
+--- a/scripts/pruning_and_clipping_nanopore.py
++++ b/scripts/pruning_and_clipping_nanopore.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ # coding: utf-8
+ 
+@@ -36,7 +36,7 @@ def write_graph2(G,Ginfo,flname):
+ 
+             if (edge[0],edge[1]) not in Ginfo:
+                 count_no += 1
+-                print "not found"
++                print("not found")
+                 continue
+             else:
+                 count_yes += 1
+@@ -51,7 +51,7 @@ def write_graph2(G,Ginfo,flname):
+ 
+             f.write(Ginfo[(edge[0],edge[1])]+'\n')
+ 
+-    print count_no, count_yes
++    print(count_no, count_yes)
+ 
+ 
+ 
+@@ -87,7 +87,7 @@ def prune_graph(graph,in_hinges,out_hing
+                 H.add_node(successor)
+         for node in in_hinges:
+             H.add_node(node)
+-    map(H.add_node,start_nodes)
++    list(map(H.add_node,start_nodes))
+     all_vertices=set(G.nodes())
+     current_vertices=set(H.nodes())
+     undiscovered_vertices=all_vertices-current_vertices
+@@ -104,10 +104,10 @@ def prune_graph(graph,in_hinges,out_hing
+         current_vertices=current_vertices.union(discovered_vertices_set)
+ #         print len(undiscovered_vertices)
+         if len(discovered_vertices_set)==0:
+-            print last_discovered_vertices
+-            print 'did not reach all nodes'
+-            print 'size of G: '+str(len(G.nodes()))
+-            print 'size of H: '+str(len(H.nodes()))
++            print(last_discovered_vertices)
++            print('did not reach all nodes')
++            print('size of G: '+str(len(G.nodes())))
++            print('size of H: '+str(len(H.nodes())))
+ #             return H
+ 
+             rand_node = list(undiscovered_vertices)[0]
+@@ -203,20 +203,20 @@ def dead_end_clipping_sym(G,threshold,pr
+ 
+         cur_node = st_node
+         if print_debug:
+-            print '----0'
+-            print st_node
++            print('----0')
++            print(st_node)
+ 
+         if len(H.successors(st_node)) == 1:
+             cur_node = H.successors(st_node)[0]
+ 
+             if print_debug:
+-                print '----1'
++                print('----1')
+ 
+             while H.in_degree(cur_node) == 1 and H.out_degree(cur_node) == 1 and len(cur_path) < threshold + 2:
+                 cur_path.append(cur_node)
+ 
+                 if print_debug:
+-                    print cur_node
++                    print(cur_node)
+ 
+                 cur_node = H.successors(cur_node)[0]
+ 
+@@ -225,21 +225,21 @@ def dead_end_clipping_sym(G,threshold,pr
+ 
+ 
+         if print_debug:
+-            print '----2'
+-            print cur_path
++            print('----2')
++            print(cur_path)
+ 
+ 
+         if len(cur_path) <= threshold and (H.in_degree(cur_node) > 1 or H.out_degree(cur_node) == 0):
+             for vertex in cur_path:
+                 # try:
+                 if print_debug:
+-                    print 'about to delete ',vertex,rev_node(vertex)
++                    print('about to delete ',vertex,rev_node(vertex))
+                 H.remove_node(vertex)
+                 H.remove_node(rev_node(vertex))
+                 # except:
+                     # pass
+                 if print_debug:
+-                    print 'deleted ',vertex,rev_node(vertex)
++                    print('deleted ',vertex,rev_node(vertex))
+ 
+ 
+     return H
+@@ -273,7 +273,7 @@ def z_clipping(G,threshold,in_hinges,out
+ 
+             if len(cur_path) <= threshold and H.in_degree(cur_node) > 1 and H.out_degree(st_node) > 1 and cur_node not in in_hinges:
+                 if print_z:
+-                    print cur_path
++                    print(cur_path)
+ 
+                 for edge in cur_path:
+                     H.remove_edge(edge[0],edge[1])
+@@ -301,7 +301,7 @@ def z_clipping(G,threshold,in_hinges,out
+ 
+             if len(cur_path) <= threshold and H.out_degree(cur_node) > 1 and H.in_degree(end_node) > 1 and cur_node not in out_hinges:
+                 if print_z:
+-                    print cur_path
++                    print(cur_path)
+                 for edge in cur_path:
+                     H.remove_edge(edge[0],edge[1])
+                 for j in range(len(cur_path)-1):
+@@ -343,7 +343,7 @@ def z_clipping_sym(G,threshold,in_hinges
+ 
+             if len(cur_path) <= threshold and H.in_degree(cur_node) > 1 and H.out_degree(st_node) > 1 and cur_node not in in_hinges:
+                 if print_z:
+-                    print cur_path
++                    print(cur_path)
+ 
+                 for edge in cur_path:
+ 
+@@ -429,7 +429,7 @@ def random_condensation(G,n_nodes,check_
+                         merge_path(g,in_node,node,out_node)
+ 
+     if iter_cnt >= max_iter:
+-        print "couldn't finish sparsification"+str(len(g.nodes()))
++        print("couldn't finish sparsification"+str(len(g.nodes())))
+ 
+     return g
+ 
+@@ -475,7 +475,7 @@ def random_condensation_sym(G,n_nodes,ch
+                             pass
+ 
+     if iter_cnt >= max_iter:
+-        print "couldn't finish sparsification"+str(len(g.nodes()))
++        print("couldn't finish sparsification"+str(len(g.nodes())))
+ 
+     return g
+ 
+@@ -532,7 +532,7 @@ def random_condensation2(g,n_nodes):
+ 
+ 
+     if iter_cnt >= max_iter:
+-        print "couldn't finish sparsification: "+str(len(g.nodes()))
++        print("couldn't finish sparsification: "+str(len(g.nodes())))
+ 
+ 
+     return g
+@@ -580,7 +580,7 @@ def bubble_bursting_sym(H,threshold,prin
+         if len(cur_path) <= threshold and len(alt_path) <= threshold and end_node0 == cur_node:
+ 
+             if print_bubble:
+-                print 'found bubble'
++                print('found bubble')
+ 
+             for edge in cur_path:
+ 
+@@ -689,8 +689,8 @@ def loop_resolution(g,max_nodes,flank,pr
+     starting_nodes =  [x for x in g.nodes() if g.out_degree(x) == 2]
+ 
+     if print_debug:
+-        print '----'
+-        print starting_nodes
++        print('----')
++        print(starting_nodes)
+ 
+     tandem = []
+ 
+@@ -701,8 +701,8 @@ def loop_resolution(g,max_nodes,flank,pr
+             continue
+ 
+         if print_debug:
+-            print '----'
+-            print st_node
++            print('----')
++            print(st_node)
+ 
+ 
+         for first_node in g.successors(st_node):
+@@ -712,14 +712,14 @@ def loop_resolution(g,max_nodes,flank,pr
+                 continue
+ 
+             if print_debug:
+-                print '----'
+-                print first_node
++                print('----')
++                print(first_node)
+ 
+             other_successor = [x for x in g.successors(st_node) if x != first_node][0]
+ 
+             next_node = first_node
+             if print_debug:
+-                print 'going on loop'
++                print('going on loop')
+ 
+             node_cnt = 0
+             while g.in_degree(next_node) == 1 and g.out_degree(next_node) == 1 and node_cnt < max_nodes:
+@@ -771,11 +771,11 @@ def loop_resolution(g,max_nodes,flank,pr
+ 
+             if next_double_node == st_node:
+                 if print_debug:
+-                    print 'success!'
+-                    print 'rep is:'
+-                    print rep
+-                    print 'in_node and other_successor:'
+-                    print in_node, other_successor
++                    print('success!')
++                    print('rep is:')
++                    print(rep)
++                    print('in_node and other_successor:')
++                    print(in_node, other_successor)
+                 resolve_rep(g,rep,in_node,other_successor)
+     #             print next_double_node
+ 
+@@ -805,10 +805,10 @@ def add_groundtruth(g,json_file,in_hinge
+ 
+     mapping = ujson.load(json_file)
+ 
+-    print 'getting mapping'
++    print('getting mapping')
+     mapped_nodes=0
+-    print str(len(mapping))
+-    print str(len(g.nodes()))
++    print(str(len(mapping)))
++    print(str(len(g.nodes())))
+ 
+     slack = 500
+     max_chr = 0
+@@ -822,7 +822,7 @@ def add_groundtruth(g,json_file,in_hinge
+ 
+         #print node
+         g.node[node]['normpos'] = 0
+-        if mapping.has_key(node_base):
++        if node_base in mapping:
+             g.node[node]['chr'] = mapping[node_base][0][2]+1
+             g.node[node]['aln_start'] = min (mapping[node_base][0][0],mapping[node_base][0][1])
+             g.node[node]['aln_end'] = max(mapping[node_base][0][1],mapping[node_base][0][0])
+@@ -847,22 +847,22 @@ def add_groundtruth(g,json_file,in_hinge
+         else:
+             chr_length_dict[g.node[node]['chr']] = max(g.node[node]['aln_end'], 1)
+ 
+-    chr_list = sorted(chr_length_dict.items(), key=operator.itemgetter(1), reverse=True)
++    chr_list = sorted(list(chr_length_dict.items()), key=operator.itemgetter(1), reverse=True)
+ 
+     max_chr_len1 = max([g.node[x]['aln_end'] for x in  g.nodes()])
+     max_chr_multiplier = 10**len(str(max_chr_len1))
+-    print [x for x in chr_list]
++    print([x for x in chr_list])
+     chr_set =[x [0] for x in chr_list]
+-    print chr_set
++    print(chr_set)
+     # red_bk = 102
+     # green_bk = 102
+     # blue_bk = 102
+     colour_list = ['red', 'lawngreen', 'deepskyblue', 'deeppink', 'darkorange', 'purple', 'gold', 'mediumblue',   'saddlebrown', 'darkgreen']
+     for colour in colour_list:
+-        print  matplotlib.colors.colorConverter.to_rgb(colour)
++        print(matplotlib.colors.colorConverter.to_rgb(colour))
+     for index, chrom in enumerate(chr_set):
+         node_set = set([x for x in  g.nodes() if g.node[x]['chr'] == chrom])
+-        print chrom
++        print(chrom)
+ 
+ 
+         max_chr_len = max([g.node[x]['aln_end'] for x in  g.nodes() if g.node[x]['chr'] == chrom])
+@@ -885,7 +885,7 @@ def add_groundtruth(g,json_file,in_hinge
+         blue_bk = max(blue-100,0)
+         green_bk = max(green-100,0)
+ 
+-        print red,blue,green
++        print(red,blue,green)
+         for node in node_set:
+             g.node[node]['normpos'] = g.node[node]['chr'] * max_chr_multiplier + (g.node[node]['aln_end']/float(max_chr_len))*max_chr_multiplier
+             lamda = (g.node[node]['aln_end']/max_chr_len)
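
One semantic shift 2to3 leaves alone: "/" on two ints is floor division on Python 2 but true division on Python 3, so the unwrapped aln_end/max_chr_len in the last hunk now yields a float fraction where Python 2 produced 0 whenever aln_end < max_chr_len. The neighbouring expression already wraps with float(), so the new behaviour is presumably the intended one, but it is a silent change. Sketch:

    aln_end, max_chr_len = 750, 3000

    print(aln_end / max_chr_len)    # Python 3: 0.25  (Python 2 printed 0)
    print(aln_end // max_chr_len)   # floor division on both versions: 0
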
+--- a/scripts/random_condensation.py
++++ b/scripts/random_condensation.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import networkx as nx
+ import random
+@@ -9,7 +9,7 @@ from collections import Counter
+ 
+ # This script does a random condensation of the graph down to 2000 nodes
+ 
+-# python random_condensation.py ecoli.edges 2000
++# python3 random_condensation.py ecoli.edges 2000
+ 
+ # It also keeps the ground truth on the graph through the condensation steps (if a json file is available)
+ 
+@@ -23,7 +23,7 @@ def merge_path(g,in_node,node,out_node):
+ 
+ def input1(flname):
+ 
+-    print "input1"
++    print("input1")
+ 
+     g = nx.DiGraph()
+     with open (flname) as f:
+@@ -39,7 +39,7 @@ def input1(flname):
+             
+ def input2(flname):
+ 
+-    print "input2"
++    print("input2")
+ 
+     g = nx.DiGraph()
+     with open (flname) as f:
+@@ -53,7 +53,7 @@ def input2(flname):
+ 
+ def input3(flname):
+ 
+-    print "input3"
++    print("input3")
+     # g = nx.DiGraph()
+     g = nx.read_graphml(flname)
+ 
+@@ -65,7 +65,7 @@ def de_clip(filename, n_nodes, hinge_lis
+     
+     f=open(filename)
+     line1=f.readline()
+-    print line1
++    print(line1)
+     f.close()
+ 
+     extension = filename.split('.')[-1]
+@@ -78,28 +78,28 @@ def de_clip(filename, n_nodes, hinge_lis
+         g=input2(filename)
+ 
+     
+-    print nx.info(g)
+-    degree_sequence=sorted(g.degree().values(),reverse=True)
+-    print Counter(degree_sequence)
++    print(nx.info(g))
++    degree_sequence=sorted(list(g.degree().values()),reverse=True)
++    print(Counter(degree_sequence))
+     
+-    degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-    print Counter(degree_sequence)
++    degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++    print(Counter(degree_sequence))
+     
+     try:
+         import ujson
+         mapping = ujson.load(open(gt_file))
+         
+-        print 'getting mapping'
++        print('getting mapping')
+         mapped_nodes=0
+-        print str(len(mapping)) 
+-        print str(len(g.nodes()))
++        print(str(len(mapping))) 
++        print(str(len(g.nodes())))
+         for node in g.nodes():
+             # print node
+             node_base=node.split("_")[0]
+             # print node_base
+ 
+             #print node
+-            if mapping.has_key(node_base):
++            if node_base in mapping:
+                 g.node[node]['aln_start'] = min (mapping[node_base][0][0],mapping[node_base][0][1])
+                 g.node[node]['aln_end'] = max(mapping[node_base][0][1],mapping[node_base][0][0])
+                 g.node[node]['chr'] = mapping[node_base][0][2]
+@@ -127,9 +127,9 @@ def de_clip(filename, n_nodes, hinge_lis
+         raise
+         # print "json "+filename.split('.')[0]+'.mapping.json'+" not found. exiting."
+            
+-    print hinge_list
++    print(hinge_list)
+ 
+-    print str(mapped_nodes)+" out of " +str(len(g.nodes()))+" nodes mapped."
++    print(str(mapped_nodes)+" out of " +str(len(g.nodes()))+" nodes mapped.")
+     
+     # for i in range(5):
+     #     merge_simple_path(g)
+@@ -141,7 +141,7 @@ def de_clip(filename, n_nodes, hinge_lis
+     num_iter=10000
+     iter_done=0
+     if hinge_list != None:
+-        print "Found hinge list."
++        print("Found hinge list.")
+         with open(hinge_list,'r') as f:
+             for lines in f:
+                 lines1=lines.split()
+@@ -153,7 +153,7 @@ def de_clip(filename, n_nodes, hinge_lis
+                   in_hinges.add(lines1[0]+'_1')
+                   out_hinges.add(lines1[0]+'_0')
+ 
+-        print str(len(in_hinges))+' hinges found.'
++        print(str(len(in_hinges))+' hinges found.')
+ 
+         for node in g.nodes():
+             if node in in_hinges and node in out_hinges:
+@@ -205,7 +205,7 @@ def de_clip(filename, n_nodes, hinge_lis
+ 
+                 for nd in g.nodes():
+                     if len(nd.split("_"))==1:
+-                        print nd + " in trouble"
++                        print(nd + " in trouble")
+                 # in_node = g.in_edges(node2)[0][0]
+                 # out_node = g.out_edges(node2)[0][1]
+                 # if g.node[node2]['hinge']==0 and g.node[in_node]['hinge']==0  and g.node[out_node]['hinge']==0:
+@@ -260,24 +260,24 @@ def de_clip(filename, n_nodes, hinge_lis
+ 
+ 
+     
+-    degree_sequence=sorted(nx.degree(g).values(),reverse=True)
+-    print Counter(degree_sequence)
++    degree_sequence=sorted(list(nx.degree(g).values()),reverse=True)
++    print(Counter(degree_sequence))
+ 
+     
+     nx.write_graphml(g, filename.split('.')[0]+'.sparse3.graphml')
+     
+-    print nx.number_weakly_connected_components(g)
+-    print nx.number_strongly_connected_components(g)
++    print(nx.number_weakly_connected_components(g))
++    print(nx.number_strongly_connected_components(g))
+   
+ 
+ if __name__ == "__main__":   
+     filename = sys.argv[1]
+     try :
+         hinge_list=sys.argv[3]
+-        print "Found hinge list."
++        print("Found hinge list.")
+     except:
+         hinge_list=None
+-        print "in except "+hinge_list
++        print("in except "+hinge_list)
+ 
+     de_clip(filename, int(sys.argv[2]),hinge_list, sys.argv[4])
+ 
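
On the degree_sequence hunks above: dict .values() returns a view on Python 3, and sorted() accepts any iterable, so 2to3's extra list(...) is harmless but redundant. (Note also that networkx 2.x changed g.degree() to return a (node, degree) view without a .values() method; dict(g.degree()).values() is the spelling that works on both 1.x and 2.x.) A sketch of the view point with a plain dict:

    degrees = {"a_0": 3, "b_0": 1, "c_0": 2}

    view = degrees.values()                   # lazy view on Python 3
    print(sorted(view, reverse=True))         # [3, 2, 1] -- no list() needed
    print(sorted(list(view), reverse=True))   # same result, one extra copy
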
+--- a/scripts/repeat_annotate_reads.py
++++ b/scripts/repeat_annotate_reads.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import sys
+ import os
+--- a/scripts/run_mapping.py
++++ b/scripts/run_mapping.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ import sys
+ import os
+ import subprocess
+@@ -17,7 +17,7 @@ alignments = parse_alignment2(stream.std
+ 
+ d = {}
+ for alignment in alignments:
+-    if not d.has_key(alignment[2]):
++    if alignment[2] not in d:
+         d[alignment[2]] = []
+     d[alignment[2]].append([alignment[0],alignment[3],alignment[4], alignment[6], alignment[7], alignment[1]])
+     
+@@ -25,7 +25,7 @@ for alignment in alignments:
+ 
+ mapping = {}
+ 
+-for key,value in d.items():
++for key,value in list(d.items()):
+     value.sort(key = lambda x:x[2]-x[1], reverse=True)
+     aln = value[0]
+     
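
Two small notes on this hunk: the has_key rewrite plus manual empty-list initialisation is exactly the pattern collections.defaultdict was made for, and list(d.items()) is only required when the loop mutates d while iterating, which this one does not. A sketch under those assumptions (the sample tuples are invented):

    from collections import defaultdict

    alignments = [("n", 10, 700, "r1"), ("c", 5, 90, "r1"), ("n", 3, 40, "r2")]

    d = defaultdict(list)              # no membership test needed
    for aln in alignments:
        d[aln[3]].append(aln[:3])

    for key, value in d.items():       # d is not mutated, so no list() wrap
        value.sort(key=lambda x: x[2] - x[1], reverse=True)
        print(key, value[0])
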
+--- a/scripts/run_mapping2.py
++++ b/scripts/run_mapping2.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ import sys
+ import os
+ import subprocess
+@@ -18,7 +18,7 @@ alignments = parse_alignment2(stream.std
+ 
+ d = {}
+ for alignment in alignments:
+-    if not d.has_key(alignment[2]):
++    if alignment[2] not in d:
+         d[alignment[2]] = []
+     d[alignment[2]].append([alignment[0],alignment[3],alignment[4], alignment[6], alignment[7], alignment[1]])
+     
+@@ -26,13 +26,13 @@ for alignment in alignments:
+ 
+ mapping = {}
+ 
+-for key,value in d.items():
++for key,value in list(d.items()):
+     value.sort(key = lambda x:x[2]-x[1], reverse=True)
+     alns = value[:k]
+     max_val=alns[0][2]-alns[0][1]
+     for aln in alns:
+         if aln[2]-aln[1] > max_val/2.:
+-            if not mapping.has_key(str(key)):
++            if str(key) not in mapping:
+                 mapping[str(key)] = [(aln[1], aln[2],aln[-1], 1-int(aln[0] == 'n'))]
+                 # mapping[str(key)+'\''] = [(aln[2], aln[1],aln[-1], int(aln[0] == 'n'))]
+             else:
+--- a/scripts/run_mapping3.py
++++ b/scripts/run_mapping3.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ import sys
+ import os
+ import subprocess
+@@ -18,7 +18,7 @@ alignments = parse_alignment2(stream.std
+ 
+ d = {}
+ for alignment in alignments:
+-    if not d.has_key(alignment[2]):
++    if alignment[2] not in d:
+         d[alignment[2]] = []
+     d[alignment[2]].append([alignment[0],alignment[3],alignment[4], alignment[6], alignment[7], alignment[1]])
+     
+@@ -26,13 +26,13 @@ for alignment in alignments:
+ 
+ mapping = {}
+ 
+-for key,value in d.items():
++for key,value in list(d.items()):
+     value.sort(key = lambda x:x[2]-x[1], reverse=True)
+     #alns = value[:k]
+     if len(alns) > 0:
+         alns = [item for item in alns if (item[2] - item[1]) > (alns[0][2] - alns[0][1])/2]
+     for aln in alns:
+-        if not mapping.has_key(str(key)):
++        if str(key) not in mapping:
+             mapping[str(key)] = [(aln[1], aln[2],aln[-1], 1-int(aln[0] == 'n'))]
+             mapping[str(key)+'\''] = [(aln[2], aln[1],aln[-1], int(aln[0] == 'n'))]
+         else:
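
Independent of the port: in this file the assignment "alns = value[:k]" is commented out, yet the next statement reads len(alns), so the loop raises NameError on Python 2 and 3 alike. A hedged sketch of the presumably intended flow (d, k and the tuples are invented; k mirrors run_mapping2.py above):

    d = {"read_3": [("n", 100, 900, 0), ("c", 50, 250, 1)]}
    k = 5

    for key, value in d.items():
        value.sort(key=lambda x: x[2] - x[1], reverse=True)
        alns = value[:k]               # the commented-out assignment, restored
        if len(alns) > 0:
            best_span = alns[0][2] - alns[0][1]
            alns = [a for a in alns if (a[2] - a[1]) > best_span / 2]
        print(key, alns)
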
+--- a/scripts/run_parse_alignment.py
++++ b/scripts/run_parse_alignment.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ import sys
+ import os
+ import subprocess
+@@ -15,4 +15,4 @@ stream = subprocess.Popen(["LAshow", fil
+ alignments = parse_alignment(stream.stdout) # generator
+ 
+ for alignment in alignments:
+-    print alignment
++    print(alignment)
+--- a/scripts/run_parse_read.py
++++ b/scripts/run_parse_read.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import sys
+ import os
+@@ -15,6 +15,6 @@ stream = subprocess.Popen(["DBshow", fil
+ reads = parse_read(stream.stdout) # generator
+ 
+ for read in reads:
+-    print read
++    print(read)
+ 
+ #print result
+--- a/scripts/split_las.py
++++ b/scripts/split_las.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ 
+ import os
+ import argparse
+--- a/scripts/unitig.py
++++ b/scripts/unitig.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/python3
+ import networkx as nx
+ import sys
+ import itertools
+@@ -7,7 +7,7 @@ filename = sys.argv[1]
+ outfile = filename.split('.')[0] + ".edges.list"
+ 
+ g = nx.read_graphml(filename)
+-print nx.info(g)
++print(nx.info(g))
+ 
+ 
+ def get_circle(g,node,vertices_of_interest):
+@@ -20,8 +20,8 @@ def get_circle(g,node,vertices_of_intere
+         try:
+             assert len(g.successors(cur_vertex)) == 1
+         except:
+-            print g.successors(cur_vertex), cur_vertex, node
+-            print cur_vertex in vertices_of_interest
++            print(g.successors(cur_vertex), cur_vertex, node)
++            print(cur_vertex in vertices_of_interest)
+             raise
+         successor = g.successors(cur_vertex)[0]
+         cur_vertex = successor
+@@ -43,7 +43,7 @@ def get_unitigs(g):
+     vertices_used = set(vertices_of_interest)
+     for start_vertex in vertices_of_interest:
+         first_out_vertices = g.successors(start_vertex)
+-        print first_out_vertices
++        print(first_out_vertices)
+         for vertex in first_out_vertices:
+             cur_path = [start_vertex]
+             cur_vertex = vertex
+@@ -57,8 +57,8 @@ def get_unitigs(g):
+             vertices_used = vertices_used.union(set(cur_path))
+             paths.append(cur_path)
+ 
+-    print len(node_set)
+-    print len(vertices_used)
++    print(len(node_set))
++    print(len(vertices_used))
+ 
+     while len(node_set-vertices_used) > 0:
+         node = list(node_set-vertices_used)[0]
+@@ -70,15 +70,15 @@ def get_unitigs(g):
+         vertices_used = vertices_used.union(set(path))
+         if len(path) > 1:
+             paths.append(path)
+-    print len(paths) 
++    print(len(paths)) 
+     # print paths
+-    print "paths"
++    print("paths")
+     return paths
+         
+         
+ paths = get_unitigs(g)
+ 
+-print len(paths)
++print(len(paths))
+ 
+ h = nx.DiGraph()
+ for i, path in enumerate(paths):
+@@ -89,7 +89,7 @@ vertices_of_interest = set([x for x in g
+ for vertex in vertices_of_interest:
+     successors = [x for x in h.nodes() if h.node[x]['path'][0] == vertex]
+     predecessors = [x for x in h.nodes() if h.node[x]['path'][-1] == vertex]
+-    print successors,predecessors
++    print(successors,predecessors)
+     assert len(successors)==1 or len(predecessors)==1
+     for succ, pred in itertools.product(successors,predecessors):
+         h.add_edge(pred,succ)
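
A forward-compatibility caveat for this file (and the pruning scripts above): they index and take len() of g.successors(...), which worked with the lists networkx 1.x returned, but networkx 2.0 changed successors()/predecessors() to return iterators, so the ported code will raise TypeError under the networkx 2.x shipped as python3-networkx. Wrapping in list() restores the old behaviour:

    import networkx as nx

    g = nx.DiGraph([("a", "b"), ("a", "c")])

    succ = list(g.successors("a"))   # iterator on networkx >= 2.0; list() makes it indexable
    assert len(succ) == 2
    print(succ[0])                   # "b"
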
+--- a/utils/run.sh
++++ b/utils/run.sh
+@@ -15,14 +15,14 @@ hinging --las $1.las --db $1 --config ~/
+ 
+ echo "Running Visualise"
+ 
+-python ~/AwesomeAssembler/scripts/Visualise_graph.py $1.edges.hinges hinge_list.txt
++python3 /usr/lib/hinge/Visualise_graph.py $1.edges.hinges hinge_list.txt
+ 
+ echo "Running Condense"
+ 
+-python ~/AwesomeAssembler/scripts/condense_graph.py $1.edges.hinges
++python3 /usr/lib/hinge/condense_graph.py $1.edges.hinges
+ 
+ echo "Putting ground truth and condensing"
+ if [ -e "$1.mapping.1.json" ]
+ 	then
+-	python ~/AwesomeAssembler/scripts/condense_graph_with_aln_json.py $1.edges.hinges $1.mapping.1.json
+-fi
+\ No newline at end of file
++	python3 /usr/lib/hinge/condense_graph_with_aln_json.py $1.edges.hinges $1.mapping.1.json
++fi


=====================================
debian/patches/series
=====================================
@@ -1,3 +1,4 @@
 external-spdlog.patch
 libspdlog-14.0.patch
 libspdlog-1:1.3.0
+2to3.patch



View it on GitLab: https://salsa.debian.org/med-team/hinge/compare/166303bb54b1e0e113d13a9286cd1635ec074d57...b21be0eeb81b0f0f78baca0f6f5030ad89314c8e
