[med-svn] [Git][med-team/conservation-code][master] 13 commits: Rename test dir in autopkgtest from ADTTMP to AUTOPKGTEST_TMP

Andreas Tille gitlab at salsa.debian.org
Sun Dec 15 15:28:16 GMT 2019



Andreas Tille pushed to branch master at Debian Med / conservation-code


Commits:
24188a3d by Andreas Tille at 2019-12-15T14:56:32Z
Rename test dir in autopkgtest from ADTTMP to AUTOPKGTEST_TMP
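
autopkgtest exports AUTOPKGTEST_TMP as the writable scratch directory for a
test; the older ADTTMP name is deprecated. As a sketch, the updated tests can
be exercised locally with autopkgtest's null runner -- the .changes filename
here is illustrative:

    # run the package's autopkgtests against the local system
    autopkgtest conservation-code_20110309.0-8_amd64.changes -- null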

- - - - -
30e20fe1 by Andreas Tille at 2019-12-15T14:59:52Z
Use 2to3 to port from Python2 to Python3
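
The conversion is shipped as a quilt patch (debian/patches/2to3.patch, shown
in full below) rather than being applied to the upstream tarball. A minimal
sketch of producing such a patch in a quilt-managed package:

    # record 2to3's rewrite of the script as a new quilt patch
    quilt new 2to3.patch
    quilt add score_conservation.py
    2to3 --write --nobackups score_conservation.py
    quilt refresh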

- - - - -
2af1513a by Andreas Tille at 2019-12-15T15:00:08Z
routine-update: debhelper-compat 12

- - - - -
80774d29 by Andreas Tille at 2019-12-15T15:00:12Z
routine-update: Standards-Version: 4.4.1

- - - - -
7b46d429 by Andreas Tille at 2019-12-15T15:00:13Z
routine-update: Trailing whitespace in debian/changelog

- - - - -
b739ec2b by Andreas Tille at 2019-12-15T15:14:00Z
routine-update: Do not parse d/changelog

- - - - -
305f4b2f by Andreas Tille at 2019-12-15T15:14:00Z
Trim trailing whitespace.

Fixes lintian: file-contains-trailing-whitespace
See https://lintian.debian.org/tags/file-contains-trailing-whitespace.html for more details.
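
The lintian cleanups in this push can be re-checked after a rebuild; a
sketch, again with an illustrative .changes filename:

    # list remaining lintian findings with explanations
    lintian --info conservation-code_20110309.0-8_amd64.changes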

- - - - -
c4bddda2 by Andreas Tille at 2019-12-15T15:14:08Z
Use secure URI in Homepage field.

Fixes lintian: homepage-field-uses-insecure-uri
See https://lintian.debian.org/tags/homepage-field-uses-insecure-uri.html for more details.

- - - - -
de30ec55 by Andreas Tille at 2019-12-15T15:14:30Z
Remove obsolete fields Contact, Name from debian/upstream/metadata.

- - - - -
a3866944 by Andreas Tille at 2019-12-15T15:14:30Z
Remove unnecessary get-orig-source-target.

Fixes lintian: debian-rules-contains-unnecessary-get-orig-source-target
See https://lintian.debian.org/tags/debian-rules-contains-unnecessary-get-orig-source-target.html for more details.

- - - - -
944a149e by Andreas Tille at 2019-12-15T15:15:30Z
Python3 in packaging

- - - - -
ecb87e5d by Andreas Tille at 2019-12-15T15:23:55Z
Replace tab by spaces since Python3 is picky about spacing errors
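
Python 3 raises TabError on indentation that mixes tabs and spaces
inconsistently, where Python 2 silently assumed 8-column tab stops. A sketch
for spotting and converting such files:

    # flag ambiguous indentation, then expand tabs to 8-column spaces
    python3 -m tabnanny score_conservation.py
    expand --tabs=8 score_conservation.py > tmp && mv tmp score_conservation.py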

- - - - -
f83e1fd4 by Andreas Tille at 2019-12-15T15:26:37Z
Upload to unstable

- - - - -


9 changed files:

- debian/changelog
- − debian/compat
- debian/control
- + debian/patches/2to3.patch
- debian/patches/series
- debian/rules
- debian/tests/installation-test
- debian/tests/non-default-params-test
- debian/upstream/metadata


Changes:

=====================================
debian/changelog
=====================================
@@ -1,3 +1,19 @@
+conservation-code (20110309.0-8) unstable; urgency=medium
+
+  * Rename test dir in autopkgtest from ADTTMP to AUTOPKGTEST_TMP
+  * Use 2to3 to port from Python2 to Python3
+    Closes: #942925
+  * debhelper-compat 12
+  * Standards-Version: 4.4.1
+  * Remove trailing whitespace in debian/changelog
+  * Do not parse d/changelog
+  * Trim trailing whitespace.
+  * Use secure URI in Homepage field.
+  * Remove obsolete fields Contact, Name from debian/upstream/metadata.
+  * Remove unnecessary get-orig-source-target.
+
+ -- Andreas Tille <tille at debian.org>  Sun, 15 Dec 2019 16:24:28 +0100
+
 conservation-code (20110309.0-7) unstable; urgency=medium
 
   * debhelper 11
@@ -25,10 +41,10 @@ conservation-code (20110309.0-5) unstable; urgency=medium
   * upstream fix - to load identity matrix without error when no alignment
     matrix is found
   * add hardening
-  * allow-stderr for testsuite to allow test pass instead of fail when matrix 
+  * allow-stderr for testsuite to allow test pass instead of fail when matrix
     file is not found
   * verbose output in test + check distributions usage
-  * simplified debian/tests/non-default-params-test, but made more verbose 
+  * simplified debian/tests/non-default-params-test, but made more verbose
     debian/README.test
   * cme fix dpkg-copyright
 


=====================================
debian/compat deleted
=====================================
@@ -1 +0,0 @@
-11


=====================================
debian/control
=====================================
@@ -4,19 +4,19 @@ Uploaders: Laszlo Kajan <lkajan at rostlab.org>,
            Andreas Tille <tille at debian.org>
 Section: science
 Priority: optional
-Build-Depends: debhelper (>= 11~),
+Build-Depends: debhelper-compat (= 12),
                dh-python,
-               python
-Standards-Version: 4.2.1
+               python3
+Standards-Version: 4.4.1
 Vcs-Browser: https://salsa.debian.org/med-team/conservation-code
 Vcs-Git: https://salsa.debian.org/med-team/conservation-code.git
-Homepage: http://compbio.cs.princeton.edu/conservation/
+Homepage: https://compbio.cs.princeton.edu/conservation/
 
 Package: conservation-code
 Architecture: all
 Depends: ${misc:Depends},
-         ${python:Depends},
-         python-numpy
+         ${python3:Depends},
+         python3-numpy
 Enhances: concavity
 Description: protein sequence conservation scoring tool
  This package provides score_conservation(1), a tool to score protein sequence


=====================================
debian/patches/2to3.patch
=====================================
@@ -0,0 +1,839 @@
+Description: Use 2to3 to port from Python2 to Python3
+Bug-Debian: https://bugs.debian.org/942925
+Author: Andreas Tille <tille at debian.org>
+Last-Update: Sun, 15 Dec 2019 15:56:03 +0100
+
+--- a/score_conservation.py
++++ b/score_conservation.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python
++#!/usr/bin/python3
+ 
+ ################################################################################
+ # score_conservation.py - Copyright Tony Capra 2007 - Last Update: 03/09/11
+@@ -83,7 +83,7 @@
+ #
+ ################################################################################
+ 
+-from __future__ import print_function
++
+ import math, sys, getopt
+ import re
+ # numarray imported below
+@@ -126,21 +126,21 @@ def weighted_freq_count_pseudocount(col,
+ 
+     # if the weights do not match, use equal weight
+     if len(seq_weights) != len(col):
+-	seq_weights = [1.] * len(col)
++        seq_weights = [1.] * len(col)
+ 
+     aa_num = 0
+     freq_counts = len(amino_acids)*[pc_amount] # in order defined by amino_acids
+ 
+     for aa in amino_acids:
+-	for j in range(len(col)):
+-	    if col[j] == aa:
+-		freq_counts[aa_num] += 1 * seq_weights[j]
++        for j in range(len(col)):
++            if col[j] == aa:
++                freq_counts[aa_num] += 1 * seq_weights[j]
+ 
+-	aa_num += 1
++        aa_num += 1
+ 
+     freqsum = (sum(seq_weights) + len(amino_acids) * pc_amount)
+     for j in range(len(freq_counts)):
+-	freq_counts[j] = freq_counts[j] / freqsum
++        freq_counts[j] = freq_counts[j] / freqsum
+ 
+     return freq_counts
+ 
+@@ -152,12 +152,12 @@ def weighted_gap_penalty(col, seq_weight
+ 
+     # if the weights do not match, use equal weight
+     if len(seq_weights) != len(col):
+-	seq_weights = [1.] * len(col)
++        seq_weights = [1.] * len(col)
+     
+     gap_sum = 0.
+     for i in range(len(col)):
+-	if col[i] == '-':
+-	    gap_sum += seq_weights[i]
++        if col[i] == '-':
++            gap_sum += seq_weights[i]
+ 
+     return 1 - (gap_sum / sum(seq_weights))
+ 
+@@ -167,7 +167,7 @@ def gap_percentage(col):
+     num_gaps = 0.
+ 
+     for aa in col:
+-	if aa == '-': num_gaps += 1
++        if aa == '-': num_gaps += 1
+ 
+     return num_gaps / len(col)
+ 
+@@ -187,8 +187,8 @@ def shannon_entropy(col, sim_matrix, bg_
+ 
+     h = 0. 
+     for i in range(len(fc)):
+-	if fc[i] != 0:
+-	    h += fc[i] * math.log(fc[i])
++        if fc[i] != 0:
++            h += fc[i] * math.log(fc[i])
+ 
+ #    h /= math.log(len(fc))
+     h /= math.log(min(len(fc), len(col)))
+@@ -196,9 +196,9 @@ def shannon_entropy(col, sim_matrix, bg_
+     inf_score = 1 - (-1 * h)
+ 
+     if gap_penalty == 1: 
+-	return inf_score * weighted_gap_penalty(col, seq_weights)
++        return inf_score * weighted_gap_penalty(col, seq_weights)
+     else: 
+-	return inf_score
++        return inf_score
+ 
+ 
+ ################################################################################
+@@ -221,22 +221,22 @@ def property_entropy(col, sim_matrix, bg
+     # sum the aa frequencies to get the property frequencies
+     prop_fc = [0.] * len(property_partition)
+     for p in range(len(property_partition)):
+-	for aa in property_partition[p]:
+-	    prop_fc[p] += fc[aa_to_index[aa]]
++        for aa in property_partition[p]:
++            prop_fc[p] += fc[aa_to_index[aa]]
+ 
+     h = 0. 
+     for i in range(len(prop_fc)):
+-	if prop_fc[i] != 0:
+-	    h += prop_fc[i] * math.log(prop_fc[i])
++        if prop_fc[i] != 0:
++            h += prop_fc[i] * math.log(prop_fc[i])
+ 
+     h /= math.log(min(len(property_partition), len(col)))
+ 
+     inf_score = 1 - (-1 * h)
+ 
+     if gap_penalty == 1: 
+-	return inf_score * weighted_gap_penalty(col, seq_weights)
++        return inf_score * weighted_gap_penalty(col, seq_weights)
+     else: 
+-	return inf_score
++        return inf_score
+ 
+ 
+ ################################################################################
+@@ -257,9 +257,9 @@ def property_relative_entropy(col, sim_m
+ 
+     prop_bg_freq = []
+     if len(bg_distr) == len(property_partition):
+-	prop_bg_freq = bg_distr
++        prop_bg_freq = bg_distr
+     else:
+-	prop_bg_freq = [0.248, 0.092, 0.114, 0.075, 0.132, 0.111, 0.161, 0.043, 0.024, 0.000] # from BL62
++        prop_bg_freq = [0.248, 0.092, 0.114, 0.075, 0.132, 0.111, 0.161, 0.043, 0.024, 0.000] # from BL62
+ 
+     #fc = weighted_freq_count_ignore_gaps(col, seq_weights)
+     fc = weighted_freq_count_pseudocount(col, seq_weights, PSEUDOCOUNT)
+@@ -267,19 +267,19 @@ def property_relative_entropy(col, sim_m
+     # sum the aa frequencies to get the property frequencies
+     prop_fc = [0.] * len(property_partition)
+     for p in range(len(property_partition)):
+-	for aa in property_partition[p]:
+-	    prop_fc[p] += fc[aa_to_index[aa]]
++        for aa in property_partition[p]:
++            prop_fc[p] += fc[aa_to_index[aa]]
+ 
+     d = 0. 
+     for i in range(len(prop_fc)):
+-	if prop_fc[i] != 0 and prop_bg_freq[i] != 0:
+-	    d += prop_fc[i] * math.log(prop_fc[i] / prop_bg_freq[i], 2)
++        if prop_fc[i] != 0 and prop_bg_freq[i] != 0:
++            d += prop_fc[i] * math.log(prop_fc[i] / prop_bg_freq[i], 2)
+ 
+ 
+     if gap_penalty == 1: 
+-	return d * weighted_gap_penalty(col, seq_weights)
++        return d * weighted_gap_penalty(col, seq_weights)
+     else: 
+-	return d
++        return d
+ 
+ 
+ ################################################################################
+@@ -293,14 +293,14 @@ def vn_entropy(col, sim_matrix, bg_distr
+ 
+     aa_counts = [0.] * 20
+     for aa in col:
+-	if aa != '-': aa_counts[aa_to_index[aa]] += 1
++        if aa != '-': aa_counts[aa_to_index[aa]] += 1
+ 
+     dm_size = 0
+     dm_aas = []
+     for i in range(len(aa_counts)):
+-	if aa_counts[i] != 0:
+-	    dm_aas.append(i)
+-	    dm_size += 1
++        if aa_counts[i] != 0:
++            dm_aas.append(i)
++            dm_size += 1
+ 
+     if dm_size == 0: return 0.0
+ 
+@@ -308,31 +308,31 @@ def vn_entropy(col, sim_matrix, bg_distr
+     col_i = 0
+     dm = zeros((dm_size, dm_size), Float32)
+     for i in range(dm_size):
+-	row_i = dm_aas[i]
+-	for j in range(dm_size):
+-	    col_i = dm_aas[j]
+-	    dm[i][j] = aa_counts[row_i] * sim_matrix[row_i][col_i]
++        row_i = dm_aas[i]
++        for j in range(dm_size):
++            col_i = dm_aas[j]
++            dm[i][j] = aa_counts[row_i] * sim_matrix[row_i][col_i]
+ 
+     ev = la.eigenvalues(dm).real
+ 
+     temp = 0.
+     for e in ev:
+-	temp += e
++        temp += e
+ 
+     if temp != 0:
+-	for i in range(len(ev)):
+-	    ev[i] = ev[i] / temp
++        for i in range(len(ev)):
++            ev[i] = ev[i] / temp
+ 
+     vne = 0.0
+     for e in ev:
+-	if e > (10**-10):
+-	    vne -= e * math.log(e) / math.log(20)
++        if e > (10**-10):
++            vne -= e * math.log(e) / math.log(20)
+ 
+     if gap_penalty == 1: 
+-	#return (1-vne) * weighted_gap_penalty(col, seq_weights)
+-	return (1-vne) * weighted_gap_penalty(col, [1.] * len(col))
++        #return (1-vne) * weighted_gap_penalty(col, seq_weights)
++        return (1-vne) * weighted_gap_penalty(col, [1.] * len(col))
+     else: 
+-	return 1 - vne
++        return 1 - vne
+ 
+ 
+ ################################################################################
+@@ -350,25 +350,25 @@ def relative_entropy(col, sim_matix, bg_
+ 
+     # remove gap count
+     if len(distr) == 20: 
+-	new_fc = fc[:-1]
+-	s = sum(new_fc)
+-	for i in range(len(new_fc)):
+-	    new_fc[i] = new_fc[i] / s
+-	fc = new_fc
++        new_fc = fc[:-1]
++        s = sum(new_fc)
++        for i in range(len(new_fc)):
++            new_fc[i] = new_fc[i] / s
++        fc = new_fc
+ 
+     if len(fc) != len(distr): return -1
+ 
+     d = 0.
+     for i in range(len(fc)):
+-	if distr[i] != 0.0:
+-	    d += fc[i] * math.log(fc[i]/distr[i])
++        if distr[i] != 0.0:
++            d += fc[i] * math.log(fc[i]/distr[i])
+ 
+     d /= math.log(len(fc))
+ 
+     if gap_penalty == 1: 
+-	return d * weighted_gap_penalty(col, seq_weights)
++        return d * weighted_gap_penalty(col, seq_weights)
+     else: 
+-	return d
++        return d
+ 
+ 
+ 
+@@ -386,36 +386,36 @@ def js_divergence(col, sim_matrix, bg_di
+ 
+     # if background distrubtion lacks a gap count, remove fc gap count
+     if len(distr) == 20: 
+-	new_fc = fc[:-1]
+-	s = sum(new_fc)
+-	for i in range(len(new_fc)):
+-	    new_fc[i] = new_fc[i] / s
+-	fc = new_fc
++        new_fc = fc[:-1]
++        s = sum(new_fc)
++        for i in range(len(new_fc)):
++            new_fc[i] = new_fc[i] / s
++        fc = new_fc
+ 
+     if len(fc) != len(distr): return -1
+ 
+     # make r distriubtion
+     r = []
+     for i in range(len(fc)):
+-	r.append(.5 * fc[i] + .5 * distr[i])
++        r.append(.5 * fc[i] + .5 * distr[i])
+ 
+     d = 0.
+     for i in range(len(fc)):
+-	if r[i] != 0.0:
+-	    if fc[i] == 0.0:
+-		d += distr[i] * math.log(distr[i]/r[i], 2)
+-	    elif distr[i] == 0.0:
+-		d += fc[i] * math.log(fc[i]/r[i], 2) 
+-	    else:
+-		d += fc[i] * math.log(fc[i]/r[i], 2) + distr[i] * math.log(distr[i]/r[i], 2)
++        if r[i] != 0.0:
++            if fc[i] == 0.0:
++                d += distr[i] * math.log(distr[i]/r[i], 2)
++            elif distr[i] == 0.0:
++                d += fc[i] * math.log(fc[i]/r[i], 2) 
++            else:
++                d += fc[i] * math.log(fc[i]/r[i], 2) + distr[i] * math.log(distr[i]/r[i], 2)
+ 
+     # d /= 2 * math.log(len(fc))
+     d /= 2
+ 
+     if gap_penalty == 1: 
+-	return d * weighted_gap_penalty(col, seq_weights)
++        return d * weighted_gap_penalty(col, seq_weights)
+     else: 
+-	return d
++        return d
+ 
+ 
+ ################################################################################
+@@ -430,20 +430,20 @@ def sum_of_pairs(col, sim_matrix, bg_dis
+     max_sum = 0.
+ 
+     for i in range(len(col)):
+-	for j in range(i):
+-	    if col[i] != '-' and col[j] != '-':
+-		max_sum += seq_weights[i] * seq_weights[j]
+-		sum += seq_weights[i] * seq_weights[j] * sim_matrix[aa_to_index[col[i]]][aa_to_index[col[j]]]
++        for j in range(i):
++            if col[i] != '-' and col[j] != '-':
++                max_sum += seq_weights[i] * seq_weights[j]
++                sum += seq_weights[i] * seq_weights[j] * sim_matrix[aa_to_index[col[i]]][aa_to_index[col[j]]]
+ 
+     if max_sum != 0: 
+-	sum /= max_sum
++        sum /= max_sum
+     else:
+-	sum = 0.
++        sum = 0.
+ 
+     if gap_penalty == 1: 
+-	return sum * weighted_gap_penalty(col, seq_weights)
++        return sum * weighted_gap_penalty(col, seq_weights)
+     else:
+-	return sum
++        return sum
+ 
+ 
+ 
+@@ -461,18 +461,18 @@ def window_score(scores, window_len, lam
+     w_scores = scores[:]
+ 
+     for i in range(window_len, len(scores) - window_len):
+-	if scores[i] < 0: 
+-	    continue
++        if scores[i] < 0: 
++            continue
+ 
+-	sum = 0.
+-	num_terms = 0.
+-	for j in range(i - window_len, i + window_len + 1):
+-	    if i != j and scores[j] >= 0:
+-		num_terms += 1
+-		sum += scores[j]
++        sum = 0.
++        num_terms = 0.
++        for j in range(i - window_len, i + window_len + 1):
++            if i != j and scores[j] >= 0:
++                num_terms += 1
++                sum += scores[j]
+ 
+-	if num_terms > 0:
+-	    w_scores[i] = (1 - lam) * (sum / num_terms) + lam * scores[i]
++        if num_terms > 0:
++            w_scores[i] = (1 - lam) * (sum / num_terms) + lam * scores[i]
+ 
+     return w_scores
+ 
+@@ -487,22 +487,22 @@ def calc_z_scores(scores, score_cutoff):
+     num_scores = 0
+ 
+     for s in scores:
+-	if s > score_cutoff:
+-	    average += s
+-	    num_scores += 1
++        if s > score_cutoff:
++            average += s
++            num_scores += 1
+     if num_scores != 0:
+-	average /= num_scores
++        average /= num_scores
+ 
+     for s in scores:
+-	if s > score_cutoff:
+-	    std_dev += ((s - average)**2) / num_scores
++        if s > score_cutoff:
++            std_dev += ((s - average)**2) / num_scores
+     std_dev = math.sqrt(std_dev)
+ 
+     for s in scores:
+-	if s > score_cutoff and std_dev != 0:
+-	    z_scores.append((s-average)/std_dev)
+-	else:
+-	    z_scores.append(-1000.0)
++        if s > score_cutoff and std_dev != 0:
++            z_scores.append((s-average)/std_dev)
++        else:
++            z_scores.append(-1000.0)
+ 
+     return z_scores
+ 
+@@ -525,37 +525,37 @@ def read_scoring_matrix(sm_file):
+     list_sm = [] # hold the matrix in list form
+ 
+     try:
+-	matrix_file = open(sm_file, 'r')
++        matrix_file = open(sm_file, 'r')
+ 
+-	for line in matrix_file:
++        for line in matrix_file:
+ 
+-	    if line[0] != '#' and first_line:
+-		first_line = 0
+-		if len(amino_acids) == 0:
+-		    for c in line.split():
+-			aa_to_index[string.lower(c)] = aa_index
+-			amino_acids.append(string.lower(c))
+-			aa_index += 1
+-
+-	    elif line[0] != '#' and first_line == 0:
+-		if len(line) > 1:
+-		    row = line.split()
+-		    list_sm.append(row)
+-
+-    except IOError, e:
+-	print( "Could not load similarity matrix: %s. Using identity matrix..." % sm_file, file=sys.stderr )
+-	from numpy import identity
+-	return identity(20)
+-	
++            if line[0] != '#' and first_line:
++                first_line = 0
++                if len(amino_acids) == 0:
++                    for c in line.split():
++                        aa_to_index[string.lower(c)] = aa_index
++                        amino_acids.append(string.lower(c))
++                        aa_index += 1
++
++            elif line[0] != '#' and first_line == 0:
++                if len(line) > 1:
++                    row = line.split()
++                    list_sm.append(row)
++
++    except IOError as e:
++        print( "Could not load similarity matrix: %s. Using identity matrix..." % sm_file, file=sys.stderr )
++        from numpy import identity
++        return identity(20)
++        
+     # if matrix is stored in lower tri form, copy to upper
+     if len(list_sm[0]) < 20:
+-	for i in range(0,19):
+-	    for j in range(i+1, 20):
+-		list_sm[i].append(list_sm[j][i])
++        for i in range(0,19):
++            for j in range(i+1, 20):
++                list_sm[i].append(list_sm[j][i])
+ 
+     for i in range(len(list_sm)):
+-	for j in range(len(list_sm[i])):
+-	    list_sm[i][j] = float(list_sm[i][j])
++        for j in range(len(list_sm[i])):
++            list_sm[i][j] = float(list_sm[i][j])
+ 
+     return list_sm
+     #sim_matrix = array(list_sm, type=Float32)
+@@ -595,16 +595,16 @@ def load_sequence_weights(fname):
+     seq_weights = []
+ 
+     try:
+-	f = open(fname)
++        f = open(fname)
+ 
+-	for line in f:
+-	    l = line.split()
+-	    if line[0] != '#' and len(l) == 2:
+-	       seq_weights.append(float(l[1]))
+-
+-    except IOError, e:
+-	pass
+-	#print "No sequence weights. Can't find: ", fname
++        for line in f:
++            l = line.split()
++            if line[0] != '#' and len(l) == 2:
++               seq_weights.append(float(l[1]))
++
++    except IOError as e:
++        pass
++        #print "No sequence weights. Can't find: ", fname
+ 
+     return seq_weights
+ 
+@@ -612,7 +612,7 @@ def get_column(col_num, alignment):
+     """Return the col_num column of alignment as a list."""
+     col = []
+     for seq in alignment:
+-	if col_num < len(seq): col.append(seq[col_num])
++        if col_num < len(seq): col.append(seq[col_num])
+ 
+     return col
+ 
+@@ -623,23 +623,23 @@ def get_distribution_from_file(fname):
+ 
+     distribution = []
+     try:
+-	f = open(fname)
+-	for line in f:
+-	    if line[0] == '#': continue
+-	    line = line[:-1]
+-	    distribution = line.split()
+-	    distribution = map(float, distribution)
+-
+-	    
+-    except IOError, e:
+-	print( e, "Using default (BLOSUM62) background.", file=sys.stderr )
+-	return []
++        f = open(fname)
++        for line in f:
++            if line[0] == '#': continue
++            line = line[:-1]
++            distribution = line.split()
++            distribution = list(map(float, distribution))
++
++            
++    except IOError as e:
++        print( e, "Using default (BLOSUM62) background.", file=sys.stderr )
++        return []
+ 
+     # use a range to be flexible about round off
+     if .997 > sum(distribution) or sum(distribution) > 1.003:
+-	print( "Distribution does not sum to 1. Using default (BLOSUM62) background.", file=sys.stderr )
+-	print( sum(distribution), file=sys.stderr )
+-	return []
++        print( "Distribution does not sum to 1. Using default (BLOSUM62) background.", file=sys.stderr )
++        print( sum(distribution), file=sys.stderr )
++        return []
+ 
+     return distribution
+ 
+@@ -655,22 +655,22 @@ def read_fasta_alignment(filename):
+     cur_seq = ''
+ 
+     for line in f:
+-	line = line[:-1]
+-	if len(line) == 0: continue
++        line = line[:-1]
++        if len(line) == 0: continue
+ 
+-	if line[0] == ';': continue
+-	if line[0] == '>':
+-	    names.append(line[1:].replace('\r', ''))
++        if line[0] == ';': continue
++        if line[0] == '>':
++            names.append(line[1:].replace('\r', ''))
+ 
+-	    if cur_seq != '':
++            if cur_seq != '':
+                 cur_seq = cur_seq.upper()
+                 for i, aa in enumerate(cur_seq):
+                     if aa not in iupac_alphabet:
+                         cur_seq = cur_seq.replace(aa, '-')
+-		alignment.append(cur_seq.replace('B', 'D').replace('Z', 'Q').replace('X', '-'))
+-		cur_seq = ''
+-	elif line[0] in iupac_alphabet:
+-	    cur_seq += line.replace('\r', '')
++                alignment.append(cur_seq.replace('B', 'D').replace('Z', 'Q').replace('X', '-'))
++                cur_seq = ''
++        elif line[0] in iupac_alphabet:
++            cur_seq += line.replace('\r', '')
+ 
+     # add the last sequence
+     cur_seq = cur_seq.upper()
+@@ -680,7 +680,7 @@ def read_fasta_alignment(filename):
+     alignment.append(cur_seq.replace('B', 'D').replace('Z', 'Q').replace('X', '-'))
+ 
+     return names, alignment
+-	
++        
+ def read_clustal_alignment(filename):
+     """ Read in the alignment stored in the CLUSTAL or Stockholm file, filename. Return
+     two lists: the names and sequences. """
+@@ -693,26 +693,26 @@ def read_clustal_alignment(filename):
+     f = open(filename)
+ 
+     for line in f:
+-	line = line[:-1]
+-	if len(line) == 0: continue
+-	if '*' in line: continue
+-
+-	if line[0:7] == 'CLUSTAL': continue
+-	if line[0:11] == '# STOCKHOLM': continue
+-	if line[0:2] == '//': continue
+-
+-	if re_stock_markup.match(line): continue
+-
+-	t = line.split()
+-
+-	if len(t) == 2 and t[1][0] in iupac_alphabet:
+-	    ali = t[1].upper().replace('B', 'D').replace('Z', 'Q').replace('X', '-').replace('\r', '').replace('.', '-')
+-	    if t[0] not in names:
+-		names.append(t[0])
+-		alignment.append(ali)
+-	    else:
+-		alignment[names.index(t[0])] += ali
+-		   
++        line = line[:-1]
++        if len(line) == 0: continue
++        if '*' in line: continue
++
++        if line[0:7] == 'CLUSTAL': continue
++        if line[0:11] == '# STOCKHOLM': continue
++        if line[0:2] == '//': continue
++
++        if re_stock_markup.match(line): continue
++
++        t = line.split()
++
++        if len(t) == 2 and t[1][0] in iupac_alphabet:
++            ali = t[1].upper().replace('B', 'D').replace('Z', 'Q').replace('X', '-').replace('\r', '').replace('.', '-')
++            if t[0] not in names:
++                names.append(t[0])
++                alignment.append(ali)
++            else:
++                alignment[names.index(t[0])] += ali
++                   
+     return names, alignment
+ 
+ 
+@@ -756,57 +756,57 @@ if len(args) < 1:
+ 
+ for opt, arg in opts:
+     if opt == "-h":
+-	usage()
+-	sys.exit()
++        usage()
++        sys.exit()
+     if opt == "-o":
+-	outfile_name = arg
++        outfile_name = arg
+     elif opt == "-l":
+-	if 'false' in arg.lower():
+-	    use_seq_weights = False
++        if 'false' in arg.lower():
++            use_seq_weights = False
+     elif opt == "-p":
+-	if 'false' in arg.lower():
+-	    use_gap_penalty = 0
++        if 'false' in arg.lower():
++            use_gap_penalty = 0
+     elif opt == "-m":
+-	s_matrix_file = arg
++        s_matrix_file = arg
+     elif opt == "-d":
+-	d = get_distribution_from_file(arg)
+-	if d != []: 
+-	    bg_distribution = d
+-	    background_name = arg
++        d = get_distribution_from_file(arg)
++        if d != []: 
++            bg_distribution = d
++            background_name = arg
+     elif opt == "-w":
+-	try:
+-	    window_size = int(arg)
+-	except ValueError:
+-	    print( "ERROR: Window size must be an integer. Using window_size 3...", file=sys.stderr )
+-	    window_size = 3
++        try:
++            window_size = int(arg)
++        except ValueError:
++            print( "ERROR: Window size must be an integer. Using window_size 3...", file=sys.stderr )
++            window_size = 3
+     elif opt == "-b":
+-	try:
+-	    win_lam = float(arg)
+-	    if not (0. <= win_lam <= 1.): raise ValueError
+-	except ValueError:
+-	    print( "ERROR: Window lambda must be a real in [0,1]. Using lambda = .5...", file=sys.stderr )
+-	    win_lam = .5
++        try:
++            win_lam = float(arg)
++            if not (0. <= win_lam <= 1.): raise ValueError
++        except ValueError:
++            print( "ERROR: Window lambda must be a real in [0,1]. Using lambda = .5...", file=sys.stderr )
++            win_lam = .5
+     elif opt == "-g":
+-	try:
+-	    gap_cutoff = float(arg)
+-	    if not (0. <= gap_cutoff < 1.): raise ValueError
+-	except ValueError:
+-	    print( "ERROR: Gap cutoff must be a real in [0,1). Using a gap cutoff of .3...", file=sys.stderr )
+-	    gap_cutoff = .3
++        try:
++            gap_cutoff = float(arg)
++            if not (0. <= gap_cutoff < 1.): raise ValueError
++        except ValueError:
++            print( "ERROR: Gap cutoff must be a real in [0,1). Using a gap cutoff of .3...", file=sys.stderr )
++            gap_cutoff = .3
+     elif opt == '-a':
+-	seq_specific_output = arg
++        seq_specific_output = arg
+     elif opt == '-n':
+-	normalize_scores = True
++        normalize_scores = True
+     elif opt == '-s':
+-	if arg == 'shannon_entropy': scoring_function = shannon_entropy
+-	elif arg == 'property_entropy': scoring_function = property_entropy
+-	elif arg == 'property_relative_entropy': scoring_function = property_relative_entropy
+-	elif arg == 'vn_entropy': scoring_function = vn_entropy; from numpy.numarray import *; import numpy.numarray.linear_algebra as la
+-
+-	elif arg == 'relative_entropy': scoring_function = relative_entropy
+-	elif arg == 'js_divergence': scoring_function = js_divergence
+-	elif arg == 'sum_of_pairs': scoring_function = sum_of_pairs
+-	else: print( "%s is not a valid scoring method. Using %s.\n" % (arg, scoring_function.__name__), file=sys.stderr )
++        if arg == 'shannon_entropy': scoring_function = shannon_entropy
++        elif arg == 'property_entropy': scoring_function = property_entropy
++        elif arg == 'property_relative_entropy': scoring_function = property_relative_entropy
++        elif arg == 'vn_entropy': scoring_function = vn_entropy; from numpy.numarray import *; import numpy.numarray.linear_algebra as la
++
++        elif arg == 'relative_entropy': scoring_function = relative_entropy
++        elif arg == 'js_divergence': scoring_function = js_divergence
++        elif arg == 'sum_of_pairs': scoring_function = sum_of_pairs
++        else: print( "%s is not a valid scoring method. Using %s.\n" % (arg, scoring_function.__name__), file=sys.stderr )
+ 
+ 
+ align_file = args[0]
+@@ -821,27 +821,27 @@ seq_weights = []
+ try:
+     names, alignment = read_clustal_alignment(align_file)
+     if names == []:
+-	names, alignment = read_fasta_alignment(align_file)
+-except IOError, e:
++        names, alignment = read_fasta_alignment(align_file)
++except IOError as e:
+     print( e, "Could not find %s. Exiting..." % align_file, file=sys.stderr )
+     sys.exit(1)
+ 
+ 
+ if len(alignment) != len(names) or alignment == []:
+         print( "Unable to parse alignment.\n", file=sys.stderr )
+-	sys.exit(1)
++        sys.exit(1)
+ 
+ seq_len = len(alignment[0])
+ for i, seq in enumerate(alignment):
+     if len(seq) != seq_len:
+-	print( "ERROR: Sequences of different lengths: %s (%d) != %s (%d).\n" % (names[0], seq_len, names[i], len(seq)), file=sys.stderr )
+-	sys.exit(1)
++        print( "ERROR: Sequences of different lengths: %s (%d) != %s (%d).\n" % (names[0], seq_len, names[i], len(seq)), file=sys.stderr )
++        sys.exit(1)
+ 
+ 
+ if use_seq_weights:
+     seq_weights = load_sequence_weights(align_file.replace('.%s' % align_suffix, '.weights'))
+     if seq_weights == []:
+-	seq_weights = calculate_sequence_weights(alignment)
++        seq_weights = calculate_sequence_weights(alignment)
+ 
+ if len(seq_weights) != len(alignment): seq_weights = [1.] * len(alignment)
+ 
+@@ -859,10 +859,10 @@ for i in range(len(alignment[0])):
+     col = get_column(i, alignment)
+ 
+     if len(col) == len(alignment):
+-	if gap_percentage(col) <= gap_cutoff:
+-	    scores.append(scoring_function(col, s_matrix, bg_distribution, seq_weights, use_gap_penalty))
+-	else:
+-	    scores.append(-1000.)
++        if gap_percentage(col) <= gap_cutoff:
++            scores.append(scoring_function(col, s_matrix, bg_distribution, seq_weights, use_gap_penalty))
++        else:
++            scores.append(-1000.)
+ 
+ if window_size > 0:
+     scores = window_score(scores, window_size, win_lam)
+@@ -874,36 +874,36 @@ if normalize_scores:
+ # print to file/stdout
+ try:
+     if outfile_name != "": 
+-	outfile = open(outfile_name, 'w')
+-	outfile.write("# %s -- %s - window_size: %d - window lambda: %.2f - background: %s - seq. weighting: %s - gap penalty: %d - normalized: %s\n" % (align_file, scoring_function.__name__, window_size, win_lam, background_name, use_seq_weights, use_gap_penalty, normalize_scores))
+-	if seq_specific_output: 
+-	    outfile.write("# reference sequence: %s\n" % seq_specific_output)
+-	    outfile.write("# align_column_number\tamino acid\tscore\n")
+-	else:
+-	    outfile.write("# align_column_number\tscore\tcolumn\n")
++        outfile = open(outfile_name, 'w')
++        outfile.write("# %s -- %s - window_size: %d - window lambda: %.2f - background: %s - seq. weighting: %s - gap penalty: %d - normalized: %s\n" % (align_file, scoring_function.__name__, window_size, win_lam, background_name, use_seq_weights, use_gap_penalty, normalize_scores))
++        if seq_specific_output: 
++            outfile.write("# reference sequence: %s\n" % seq_specific_output)
++            outfile.write("# align_column_number\tamino acid\tscore\n")
++        else:
++            outfile.write("# align_column_number\tscore\tcolumn\n")
+     else:
+-	print( "# %s -- %s - window_size: %d - background: %s - seq. weighting: %s - gap penalty: %d - normalized: %s" % (align_file, scoring_function.__name__, window_size, background_name, use_seq_weights, use_gap_penalty, normalize_scores) )
+-	if seq_specific_output: 
+-	    print( "# reference sequence: %s" % seq_specific_output )
+-	    print( "# align_column_number\tamino acid\tscore\n" )
+-	else:
+-	    print( "# align_column_number\tscore\tcolumn\n" )
++        print( "# %s -- %s - window_size: %d - background: %s - seq. weighting: %s - gap penalty: %d - normalized: %s" % (align_file, scoring_function.__name__, window_size, background_name, use_seq_weights, use_gap_penalty, normalize_scores) )
++        if seq_specific_output: 
++            print( "# reference sequence: %s" % seq_specific_output )
++            print( "# align_column_number\tamino acid\tscore\n" )
++        else:
++            print( "# align_column_number\tscore\tcolumn\n" )
+ 
+-except IOError, e:
++except IOError as e:
+     print( "Could not open %s for output. Printing results to standard out..." % outfile_name, file=sys.stderr )
+     outfile_name = ""
+ 
+ for i, score in enumerate(scores):
+     if seq_specific_output:
+-	cur_aa = get_column(i, alignment)[ref_seq_num]
+-	if cur_aa == '-': continue
+-	if outfile_name == "":
+-	    print( "%d\t%s\t%.5f" % (i, cur_aa, score) )
+-	else:
+-	    outfile.write("%d\t%s\t%5f\n" % (i, cur_aa, score))
++        cur_aa = get_column(i, alignment)[ref_seq_num]
++        if cur_aa == '-': continue
++        if outfile_name == "":
++            print( "%d\t%s\t%.5f" % (i, cur_aa, score) )
++        else:
++            outfile.write("%d\t%s\t%5f\n" % (i, cur_aa, score))
+     else:
+-	if outfile_name == "":
+-	    print( "%d\t%.5f\t%s" % (i, score, "".join(get_column(i, alignment))) )
+-	else:
+-	    outfile.write("%d\t%5f\t%s\n" % (i, score, "".join(get_column(i, alignment))))
++        if outfile_name == "":
++            print( "%d\t%.5f\t%s" % (i, score, "".join(get_column(i, alignment))) )
++        else:
++            outfile.write("%d\t%5f\t%s\n" % (i, score, "".join(get_column(i, alignment))))
+ 


=====================================
debian/patches/series
=====================================
@@ -7,3 +7,4 @@ stockholm_format
 Python3-prints
 usage
 fix_load_identity_matrix
+2to3.patch
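
The series file applies quilt patches in order, so 2to3.patch lands on top of
the existing fixes. A quick sketch to confirm the whole stack still applies
cleanly from the unpacked source tree:

    # push and then pop the complete patch stack
    quilt push -a && quilt pop -a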


=====================================
debian/rules
=====================================
@@ -15,7 +15,7 @@ pkgdatadir:=${datarootdir}/$(DEB_SOURCE)
 
 
 %:
-	dh $@ --with python2
+	dh $@ --with python3
 
 override_dh_auto_build: $(MANS)
 
@@ -37,19 +37,3 @@ override_dh_auto_clean:
 	rm -f $(MANS) ChangeLog
 
 # Policy §4.9 says that the get-orig-source target 'may be invoked in any directory'. So we do not use variables set from dpkg-parsechangelog.
-get-orig-source:
-	set -e; \
-	if ! ( which xz >/dev/null ); then \
-		echo "Could not find 'xz' tool for compression. Please install the package 'xz-utils'." >&2; \
-		exit 1; \
-	fi ; \
-	t=$$(mktemp -d) || exit 1; \
-	trap "rm -rf -- '$$t'" EXIT; \
-	( cd "$$t"; \
-		wget -O conservation-code_20110309.0.orig.tar.gz http://compbio.cs.princeton.edu/conservation/conservation_code.tar.gz; \
-		gunzip *.tar.gz; \
-		tar --owner=root --group=root --mode=a+rX --delete -f *.tar --wildcards '*/._*'; \
-		xz --best *.tar; \
-	); \
-	mv $$t/*.tar.?z ./
-
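
With the get-orig-source target removed, fetching the upstream tarball falls
back to the standard tooling. Assuming the package carries a debian/watch
file, the equivalent sketch is:

    # download the current upstream version via debian/watch
    uscan --verbose --download-current-version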


=====================================
debian/tests/installation-test
=====================================
@@ -6,12 +6,12 @@ set -e
 
 pkg=conservation-code
 
-if [ "$ADTTMP" = "" ] ; then
-  ADTTMP=$(mktemp -d /tmp/${pkg}-test.XXXXXX)
-  trap "rm -rf $ADTTMP" 0 INT QUIT ABRT PIPE TERM
+if [ "$AUTOPKGTEST_TMP" = "" ] ; then
+  AUTOPKGTEST_TMP=$(mktemp -d /tmp/${pkg}-test.XXXXXX)
+  trap "rm -rf $AUTOPKGTEST_TMP" 0 INT QUIT ABRT PIPE TERM
 fi
 
-cd $ADTTMP
+cd $AUTOPKGTEST_TMP
 
 cp -a /usr/share/doc/${pkg}/examples/* .
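
The fallback branch keeps the script runnable outside the autopkgtest
harness: with AUTOPKGTEST_TMP unset, the test creates and cleans up its own
temporary directory. A sketch for exercising that path by hand:

    # run the test with the harness variable explicitly unset
    env -u AUTOPKGTEST_TMP sh debian/tests/installation-test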
 


=====================================
debian/tests/non-default-params-test
=====================================
@@ -6,12 +6,12 @@ set -e
 
 pkg=conservation-code
 
-if [ "$ADTTMP" = "" ] ; then
-  ADTTMP=$(mktemp -d /tmp/${pkg}-test.XXXXXX)
-  trap "rm -rf $ADTTMP" 0 INT QUIT ABRT PIPE TERM
+if [ "$AUTOPKGTEST_TMP" = "" ] ; then
+  AUTOPKGTEST_TMP=$(mktemp -d /tmp/${pkg}-test.XXXXXX)
+  trap "rm -rf $AUTOPKGTEST_TMP" 0 INT QUIT ABRT PIPE TERM
 fi
 
-cd $ADTTMP
+cd $AUTOPKGTEST_TMP
 
 cp -a /usr/share/doc/${pkg}/examples/* .
 


=====================================
debian/upstream/metadata
=====================================
@@ -1,20 +1,18 @@
-Name: conservation-code
-Contact: Tony Capra <http://compbio.cs.princeton.edu/conservation/>
 Reference:
- - Author: John A. Capra and Mona Singh
-   Title: Predicting functionally important residues from sequence conservation
-   Journal: Bioinformatics
-   Volume: 23
-   Number: 15
-   Pages: 1875-82
-   Year: 2007
-   URL: http://bioinformatics.oxfordjournals.org/content/23/15/1875.full
-   DOI: 10.1093/bioinformatics/btm270
-   PMID: 17519246
+- Author: John A. Capra and Mona Singh
+  Title: Predicting functionally important residues from sequence conservation
+  Journal: Bioinformatics
+  Volume: 23
+  Number: 15
+  Pages: 1875-82
+  Year: 2007
+  URL: http://bioinformatics.oxfordjournals.org/content/23/15/1875.full
+  DOI: 10.1093/bioinformatics/btm270
+  PMID: 17519246
 Registry:
- - Name: OMICtools
-   Entry: OMICS_06943
- - Name: SciCrunch
-   Entry: NA
- - Name: bio.tools
-   Entry: NA
+- Name: OMICtools
+  Entry: OMICS_06943
+- Name: SciCrunch
+  Entry: NA
+- Name: bio.tools
+  Entry: NA



View it on GitLab: https://salsa.debian.org/med-team/conservation-code/compare/b7176bea5f3d9c048e9f041adb86b26f376afa56...f83e1fd4e55bf279d5ee76bf46a9451ca5522861


