[med-svn] [Git][med-team/dindel][master] 10 commits: routine-update: Standards-Version: 4.5.0
Andreas Tille
gitlab at salsa.debian.org
Mon Nov 16 19:07:07 GMT 2020
Andreas Tille pushed to branch master at Debian Med / dindel
Commits:
6dff3369 by Andreas Tille at 2020-11-16T19:27:15+01:00
routine-update: Standards-Version: 4.5.0
- - - - -
5d6d3666 by Andreas Tille at 2020-11-16T19:27:15+01:00
routine-update: debhelper-compat 13
- - - - -
601705ee by Andreas Tille at 2020-11-16T19:27:21+01:00
routine-update: Remove trailing whitespace in debian/rules
- - - - -
b72e3cc2 by Andreas Tille at 2020-11-16T19:27:21+01:00
routine-update: Add salsa-ci file
- - - - -
8d49ae70 by Andreas Tille at 2020-11-16T19:27:21+01:00
routine-update: Rules-Requires-Root: no
- - - - -
5ffa98d2 by Andreas Tille at 2020-11-16T19:27:24+01:00
Remove obsolete field Name from debian/upstream/metadata (already present in machine-readable debian/copyright).
Changes-By: lintian-brush
- - - - -
874d42cb by Andreas Tille at 2020-11-16T19:27:25+01:00
routine-update: watch file standard 4
- - - - -
dd9e9ef6 by Andreas Tille at 2020-11-16T19:57:00+01:00
d/rules: Remove boilerplate
- - - - -
eed7aab2 by Andreas Tille at 2020-11-16T20:03:33+01:00
Call 2to3 on Python scripts
- - - - -
d0aa2263 by Andreas Tille at 2020-11-16T20:06:25+01:00
Upload to unstable
- - - - -
9 changed files:
- debian/changelog
- − debian/compat
- debian/control
- + debian/patches/2to3.patch
- debian/patches/series
- debian/rules
- + debian/salsa-ci.yml
- debian/upstream/metadata
- debian/watch
Changes:
=====================================
debian/changelog
=====================================
@@ -1,3 +1,18 @@
+dindel (1.01-wu1-3+dfsg-2) unstable; urgency=medium
+
+ * Standards-Version: 4.5.0 (routine-update)
+ * debhelper-compat 13 (routine-update)
+ * Remove trailing whitespace in debian/rules (routine-update)
+ * Add salsa-ci file (routine-update)
+ * Rules-Requires-Root: no (routine-update)
+ * Remove obsolete field Name from debian/upstream/metadata (already present in
+ machine-readable debian/copyright).
+ * watch file standard 4 (routine-update)
+ * d/rules: Remove boilerplate
+ * Call 2to3 on Python scripts
+
+ -- Andreas Tille <tille at debian.org> Mon, 16 Nov 2020 20:03:42 +0100
+
dindel (1.01-wu1-3+dfsg-1) unstable; urgency=medium
[ Steffen Moeller ]
=====================================
debian/compat deleted
=====================================
@@ -1 +0,0 @@
-11
=====================================
debian/control
=====================================
@@ -5,16 +5,17 @@ Uploaders: Animesh Sharma <sharma.animesh at gmail.com>,
Andreas Tille <tille at debian.org>
Section: science
Priority: optional
-Build-Depends: debhelper (>= 11~),
- seqan-dev (>= 1.4.1),
+Build-Depends: debhelper-compat (= 13),
+ seqan-dev,
libbam-dev,
libboost-program-options-dev,
libboost-math-dev,
zlib1g-dev
-Standards-Version: 4.2.0
+Standards-Version: 4.5.0
Vcs-Browser: https://salsa.debian.org/med-team/dindel
Vcs-Git: https://salsa.debian.org/med-team/dindel.git
Homepage: https://github.com/genome/dindel-tgi
+Rules-Requires-Root: no
Package: dindel
Architecture: any
=====================================
debian/patches/2to3.patch
=====================================
@@ -0,0 +1,536 @@
+Author: Andreas Tille <tille at debian.org>
+Last-Update: Mon, 16 Nov 2020 19:27:15 +0100
+Description: Call 2to3 on Python scripts
+
+diff --git a/python/convertVCFToDindel.py b/python/convertVCFToDindel.py
+index e8efde8..546138b 100644
+--- a/python/convertVCFToDindel.py
++++ b/python/convertVCFToDindel.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ import os, sys, glob, gzip, math
+ from optparse import OptionParser
+ # Make sure to set the proper PYTHONPATH before running this script:
+diff --git a/python/makeGenotypeLikelihoodFilePooled.py b/python/makeGenotypeLikelihoodFilePooled.py
+index effcdd0..880b490 100644
+--- a/python/makeGenotypeLikelihoodFilePooled.py
++++ b/python/makeGenotypeLikelihoodFilePooled.py
+@@ -31,18 +31,18 @@ def getCalls(callFile = ''):
+ newpos = pos + var.offset - 1
+ newstr = var.str
+
+- if not calls.has_key(chrom):
++ if chrom not in calls:
+ calls[chrom] = {}
+- if not calls[chrom].has_key(newpos):
++ if newpos not in calls[chrom]:
+ calls[chrom][newpos] = {}
+- if calls[chrom][newpos].has_key(newstr):
++ if newstr in calls[chrom][newpos]:
+ raise NameError('Multiple same variants?')
+
+ calls[chrom][newpos][newstr] = dat.copy()
+ numcalls += 1
+
+ vcf.close()
+- print "Number of calls imported:",numcalls
++ print("Number of calls imported:",numcalls)
+ return calls
+
+
+@@ -126,7 +126,7 @@ def loadGLFFiles(inputGLFFiles = ''):
+ dat = fg.readline()
+ if dat['realigned_position'] != 'NA':
+ firstpos = int(dat['realigned_position'])
+- if fp_to_fname.has_key(firstpos):
++ if firstpos in fp_to_fname:
+ raise NameError('Huh?')
+
+ fp_to_fname[firstpos] = glffile
+@@ -135,7 +135,7 @@ def loadGLFFiles(inputGLFFiles = ''):
+
+ newglffiles = []
+ for pos in sorted(fp_to_fname.keys()):
+- print "pos:",pos,"glffile:", fp_to_fname[pos]
++ print("pos:",pos,"glffile:", fp_to_fname[pos])
+ newglffiles.append(fp_to_fname[pos])
+
+ return newglffiles
+@@ -185,7 +185,7 @@ def makeGLF(inputGLFFiles = '', outputFile = '', callFile = '', bamfilesFile = '
+ break
+
+ newindex = "%s.%s.%s" % (dat['index'], dat['realigned_position'], dat['nref_all'])
+- if not buffer.has_key(newindex):
++ if newindex not in buffer:
+ buffer[newindex] = []
+ buffer[newindex].append(dat)
+
+@@ -205,7 +205,7 @@ def makeGLF(inputGLFFiles = '', outputFile = '', callFile = '', bamfilesFile = '
+
+ fg.close()
+
+- print "Number written:", numwritten
++ print("Number written:", numwritten)
+ sys.stdout.flush()
+
+ # finish up
+diff --git a/python/makeWindows.py b/python/makeWindows.py
+index 47c1bcf..be327bb 100644
+--- a/python/makeWindows.py
++++ b/python/makeWindows.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ from optparse import OptionParser
+ import sys, os, getopt
+
+@@ -80,7 +80,7 @@ def write_output_candidates(newVariants, outputPrefix = '', variantsPerFile = 10
+ # results should be joined at the mergeOutput stage
+
+ windows.append(wsize)
+- if not histWindows.has_key(wsize):
++ if wsize not in histWindows:
+ histWindows[wsize] = 1
+ else:
+ histWindows[wsize] += 1
+@@ -117,8 +117,8 @@ def write_output_candidates(newVariants, outputPrefix = '', variantsPerFile = 10
+ sys.stderr.write("Mean window size: %d\n" % (sum(windows)/len(windows)))
+ if False:
+ for ws in sorted(histWindows.keys()):
+- print " %d:%d" % (ws, histWindows[ws]),
+- print "\n"
++ print(" %d:%d" % (ws, histWindows[ws]), end=' ')
++ print("\n")
+
+
+
+@@ -140,7 +140,7 @@ def split_and_merge(inputVarFile = '', windowFilePrefix = '', minDist = 10, vari
+ pos = int(dat[1])
+ numread += 1
+ if numread % 10000 == 1:
+- print numread,"lines read"
++ print(numread,"lines read")
+ try:
+ chrno = int(chr)
+ if chrno>24 or chrno == 0:
+@@ -148,10 +148,10 @@ def split_and_merge(inputVarFile = '', windowFilePrefix = '', minDist = 10, vari
+ except ValueError:
+ pass
+
+- if not variants.has_key(chr):
++ if chr not in variants:
+ variants[chr] = {}
+
+- if not variants[chr].has_key(pos):
++ if pos not in variants[chr]:
+ variants[chr][pos] = []
+
+ i = 2
+@@ -165,14 +165,14 @@ def split_and_merge(inputVarFile = '', windowFilePrefix = '', minDist = 10, vari
+ # output statistics
+
+ sys.stderr.write("Number of chromosomes: %d\n" % (len(variants)))
+- for chr in variants.keys():
++ for chr in list(variants.keys()):
+ sys.stderr.write("\tchr %s: %d\n" % ( chr, len(variants[chr]) ))
+
+ # per chromosome: group variants and split
+
+ newVariants = {}
+ startIdx = 0
+- for chr in variants.keys():
++ for chr in list(variants.keys()):
+ totVar = 0
+ positions = sorted(variants[chr].keys())
+ newPosition = positions[:]
+@@ -191,7 +191,7 @@ def split_and_merge(inputVarFile = '', windowFilePrefix = '', minDist = 10, vari
+ newPos = newPosition[p]
+ pos = positions[p]
+
+- if not newVariants[chr].has_key(newPos):
++ if newPos not in newVariants[chr]:
+ newVariants[chr][newPos] = []
+
+ for var in variants[chr][pos]:
+diff --git a/python/mergeOutputDiploid.py b/python/mergeOutputDiploid.py
+index aa46713..8d6c41f 100644
+--- a/python/mergeOutputDiploid.py
++++ b/python/mergeOutputDiploid.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ import os, sys, glob, gzip, math
+ from optparse import OptionParser
+ # Make sure to set the proper PYTHONPATH before running this script:
+@@ -17,7 +17,7 @@ def getPercentiles(hist = {}, pctiles = [ 1,5,10,25, 50, 75, 90, 95, 99]):
+ cum[k] += cum[prevk]
+ prevk = k
+
+- if not cum.has_key(0):
++ if 0 not in cum:
+ cum[0] = 0
+ tot = cum[prevk]
+ iles = pctiles
+@@ -227,10 +227,10 @@ def processDiploidGLFFile(glfFile = '', variants = {}, refFile = '', maxHPLen =
+ glf['genotype'] = dat['glf']
+
+ (vcf_str,report_pos) = getVCFString(glf = glf, fa = fa, filterQual = filterQual)
+- if not variants.has_key(chrom):
++ if chrom not in variants:
+ variants[chrom] = {}
+
+- if not variants[chrom].has_key(report_pos):
++ if report_pos not in variants[chrom]:
+ variants[chrom][report_pos] = []
+
+ variants[chrom][report_pos].append(vcf_str)
+@@ -305,12 +305,12 @@ def mergeOutput(glfFilesFile = '', sampleID = 'SAMPLE', refFile = '', maxHPLen =
+ processDiploidGLFFile(glfFile = gf, variants = variants, refFile = refFile, maxHPLen = maxHPLen, isHomozygous = isHomozygous, newVarCov = newVarCov, doNotFilterOnFR = doNotFilterOnFR, filterQual = filterQual)
+
+ this_chr = chromosomes[:]
+- for chr in variants.keys():
++ for chr in list(variants.keys()):
+ if chr not in this_chr:
+ this_chr.append(chr)
+
+ for chr in this_chr:
+- if variants.has_key(chr):
++ if chr in variants:
+ for pos in sorted(variants[chr].keys()):
+ for vcfLine in variants[chr][pos]:
+ fv.write("%s\n" % (vcfLine))
+diff --git a/python/mergeOutputPooled.py b/python/mergeOutputPooled.py
+index a503ed3..e26b27b 100644
+--- a/python/mergeOutputPooled.py
++++ b/python/mergeOutputPooled.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ import os, sys, glob, gzip, math
+ from optparse import OptionParser
+ # Make sure to set the proper PYTHONPATH before running this script:
+@@ -17,7 +17,7 @@ def getPercentiles(hist = {}, pctiles = [ 1,5,10,25, 50, 75, 90, 95, 99]):
+ cum[k] += cum[prevk]
+ prevk = k
+
+- if not cum.has_key(0):
++ if 0 not in cum:
+ cum[0] = 0
+ tot = cum[prevk]
+ iles = pctiles
+@@ -281,7 +281,7 @@ def processPooledGLFFiles(bamFilesFile = '', glfFilesFile = '', refFile = '', ou
+ rdhist = {}
+ for glffile in allFiles:
+ fglf = FileUtils.FileWithHeader(fname = glffile, mode = 'r', joinChar = ' ')
+- print "Reading", glffile
++ print("Reading", glffile)
+ done = False
+ while True:
+ pos = -1
+@@ -289,7 +289,7 @@ def processPooledGLFFiles(bamFilesFile = '', glfFilesFile = '', refFile = '', ou
+ nr += 1
+
+ if nr % 10000 == 9999:
+- print "Number of lines read:",nr+1
++ print("Number of lines read:",nr+1)
+
+ num_ind_with_data = 0
+ tot_coverage = 0
+@@ -355,16 +355,16 @@ def processPooledGLFFiles(bamFilesFile = '', glfFilesFile = '', refFile = '', ou
+
+ prob = float(dat[col_post_prob])
+ freq = float(dat[headerLabels.index('est_freq')])
+- if rdhist.has_key(tot_coverage):
++ if tot_coverage in rdhist:
+ rdhist[tot_coverage] += 1
+ else:
+ rdhist[tot_coverage] = 1
+
+
+ if prob>0.20:
+- if not varStat.has_key(chr):
++ if chr not in varStat:
+ varStat[chr] = {}
+- if not varStat[chr].has_key(pos):
++ if pos not in varStat[chr]:
+ varStat[chr][pos] = {}
+ # hplen
+ seq = fa.get(chr, pos+1-25,50)
+@@ -386,9 +386,9 @@ def processPooledGLFFiles(bamFilesFile = '', glfFilesFile = '', refFile = '', ou
+ fqp = 1.0 - math.pow(10.0, -float(filterQual)/10.0)
+ fqp_str = "q%d" % filterQual
+
+- for chr in varStat.keys():
+- for pos in varStat[chr].keys():
+- for varseq, var in varStat[chr][pos].iteritems():
++ for chr in list(varStat.keys()):
++ for pos in list(varStat[chr].keys()):
++ for varseq, var in varStat[chr][pos].items():
+
+ filters = []
+ prob = var['QUAL']
+@@ -412,9 +412,9 @@ def processPooledGLFFiles(bamFilesFile = '', glfFilesFile = '', refFile = '', ou
+ filters.append("mf")
+
+ if filters == []:
+- if not pass_filters.has_key(chr):
++ if chr not in pass_filters:
+ pass_filters[chr]={}
+- if not pass_filters[chr].has_key(pos):
++ if pos not in pass_filters[chr]:
+ pass_filters[chr][pos]=[]
+ pass_filters[chr][pos].append(varseq)
+ num_pass += 1
+@@ -433,7 +433,7 @@ def processPooledGLFFiles(bamFilesFile = '', glfFilesFile = '', refFile = '', ou
+ chromosomes.extend(other_chr)
+
+ # create VCF file
+- print "Writing VCF"
++ print("Writing VCF")
+
+ fv = open(outputVCFFile, 'w')
+ fv.write("##fileformat=VCFv4.0\n")
+@@ -461,7 +461,7 @@ def processPooledGLFFiles(bamFilesFile = '', glfFilesFile = '', refFile = '', ou
+
+
+ for chr in chromosomes:
+- if not pass_filters.has_key(chr):
++ if chr not in pass_filters:
+ continue
+ # filter out variants that are too close
+ totSites = 0
+@@ -481,23 +481,23 @@ def processPooledGLFFiles(bamFilesFile = '', glfFilesFile = '', refFile = '', ou
+ newPos = newPosition[p]
+ pos = positions[p]
+
+- if not newSites.has_key(newPos):
++ if newPos not in newSites:
+ newSites[newPos] = {}
+
+- if not newSites[newPos].has_key(pos):
++ if pos not in newSites[newPos]:
+ newSites[newPos][pos]=[]
+
+- for var in varStat[chr][pos].keys():
++ for var in list(varStat[chr][pos].keys()):
+ newSites[newPos][pos].append(var)
+
+- print "New number of sites:", len(newSites.keys())
+- print "Number of sites filtered:",len(pass_filters[chr].keys())-len(newSites.keys())
++ print("New number of sites:", len(list(newSites.keys())))
++ print("Number of sites filtered:",len(list(pass_filters[chr].keys()))-len(list(newSites.keys())))
+
+ # select best call for double sites
+
+ filtered = []
+- for newPos in newSites.keys():
+- old = newSites[newPos].keys()
++ for newPos in list(newSites.keys()):
++ old = list(newSites[newPos].keys())
+
+ pos_probs = []
+ pos_vars = []
+@@ -523,7 +523,7 @@ def processPooledGLFFiles(bamFilesFile = '', glfFilesFile = '', refFile = '', ou
+ filtered.append(pos_pos[idx])
+
+ for duppos in set(old)-set([okpos]):
+- for var in varStat[chr][duppos].keys():
++ for var in list(varStat[chr][duppos].keys()):
+
+ if varStat[chr][duppos][var]['filter'] == '':
+ varStat[chr][duppos][var]['filter'] == tcFilter
+@@ -531,11 +531,11 @@ def processPooledGLFFiles(bamFilesFile = '', glfFilesFile = '', refFile = '', ou
+ varStat[chr][duppos][var]['filter']+=';'+tcFilter
+
+
+- print "Number of indel sites:",len(filtered)
++ print("Number of indel sites:",len(filtered))
+
+
+ for pos in sorted(varStat[chr].keys()):
+- for var in varStat[chr][pos].keys():
++ for var in list(varStat[chr][pos].keys()):
+
+
+ indel_report_pos = pos
+diff --git a/python/selectCandidates.py b/python/selectCandidates.py
+index a11ca0d..53e297d 100644
+--- a/python/selectCandidates.py
++++ b/python/selectCandidates.py
+@@ -1,11 +1,11 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ import os, sys, glob, gzip, math
+ from optparse import OptionParser
+
+
+ def emptyBuffer(fo = 0, variants = {}, parameters = {}, chrom = ''):
+ for p in sorted(variants.keys()):
+- for v in variants[p].keys():
++ for v in list(variants[p].keys()):
+ if variants[p][v]>=parameters['minCount'] and variants[p][v]<=parameters['maxCount']:
+ fo.write("%s %d %s # %d\n" % (chrom, p, v, variants[p][v]))
+
+@@ -33,7 +33,7 @@ def selectCandidates(inputFile = '', outputFile = '', parameters = {}):
+
+
+ if chr != currChr:
+- if visitedChromosomes.has_key(chr):
++ if chr in visitedChromosomes:
+ raise NameError("Chromosome already processed. Please sort with respect to chromosome first and then position!")
+ currChr = chr
+ visitedChromosomes[chr]=1
+@@ -56,7 +56,7 @@ def selectCandidates(inputFile = '', outputFile = '', parameters = {}):
+
+ if pos == currPos:
+ for idx, v in enumerate(vars):
+- if not variants[pos].has_key(v):
++ if v not in variants[pos]:
+ variants[pos][v]=0
+ variants[pos][v] += int(freqs[idx])
+
+diff --git a/python/utils/Fasta.py b/python/utils/Fasta.py
+index 6ed1f38..50ed185 100644
+--- a/python/utils/Fasta.py
++++ b/python/utils/Fasta.py
+@@ -36,7 +36,7 @@ class Fasta:
+ try:
+ idx = self.fai.ft[tid]
+ except KeyError:
+- print 'KeyError: ', tid
++ print('KeyError: ', tid)
+ raise NameError('KeyError')
+ fpos = idx.offset+ ( int(pos)/idx.blen)*idx.llen + (int(pos)%idx.blen)
+ self.fa.seek(fpos,0)
+@@ -59,7 +59,7 @@ def getChromosomes(faFile = ''):
+
+ faidx = FastaIndex(faiFile)
+
+- fachr = faidx.ft.keys()
++ fachr = list(faidx.ft.keys())
+
+ chromosomes = []
+ autosomal = ["%d" % c for c in range(1,23)]
+diff --git a/python/utils/FileUtils.py b/python/utils/FileUtils.py
+index ce15726..4e265cd 100644
+--- a/python/utils/FileUtils.py
++++ b/python/utils/FileUtils.py
+@@ -93,7 +93,7 @@ class FileWithHeader:
+ raise NameError('File not opened for writing')
+
+ out = self.emptyline[:]
+- for key,value in data.iteritems():
++ for key,value in data.items():
+ if not key in self.lab_to_col:
+ self.nlabnotfound += 1
+ else:
+@@ -195,15 +195,15 @@ def sortFile(fname = '', foutname='', col = 1, splitChar = ''):
+
+ try:
+ val = float(coldata[sortcol])
+- if data.has_key(val):
++ if val in data:
+ data[val].append(line)
+ else:
+ data[val]=[line]
+ except ValueError:
+- print 'Columns contains non-numeric data'
++ print('Columns contains non-numeric data')
+ sys.exit(2)
+
+- keys = data.keys()
++ keys = list(data.keys())
+ keys.sort()
+
+ for key in keys:
+diff --git a/python/utils/VCFFile.py b/python/utils/VCFFile.py
+index d95f622..4e32bb9 100644
+--- a/python/utils/VCFFile.py
++++ b/python/utils/VCFFile.py
+@@ -21,18 +21,18 @@ class VCFFile:
+ self.f.write("##fileformat=VCF4\n")
+
+
+- for inf_id in info.keys():
++ for inf_id in list(info.keys()):
+ line = "##INFO=<ID=%s,Number=%s,Type=%s,Description=\"%s\">" % (info[inf_id]['ID'], info[inf_id]['Number'], info[inf_id]['Type'], info[inf_id]['Description'])
+ self.f.write(line+'\n')
+ self.headerLines.append(line)
+ self.info[ info[inf_id]['ID'] ] = info[inf_id].copy()
+- for id in filter.keys():
++ for id in list(filter.keys()):
+ line = "##FILTER=<ID=%s,Description=\"%s\">" % (filter[id]['ID'],filter[id]['Description'])
+ self.f.write(line+'\n')
+ self.headerLines.append(line)
+ self.filter[ filter[id]['ID'] ] = filter[id].copy()
+
+- for id in format.keys():
++ for id in list(format.keys()):
+ line = "##FORMAT=<ID=%s,Number=%s,Type=%s,Description=\"%s\">" % (format[id]['ID'], format[id]['Number'], format[id]['Type'], format[id]['Description'])
+ self.f.write(line+'\n')
+ self.headerLines.append(line)
+@@ -130,7 +130,7 @@ class VCFFile:
+ type = fields2[2]
+ description = fields1[-2]
+
+- if self.info.has_key(id):
++ if id in self.info:
+ raise NameError("VCF file already has info-id %s" % id)
+
+ self.info[id]= {'Number':number, 'ID':id, 'Type':type, 'Description':description}
+@@ -183,7 +183,7 @@ class VCFFile:
+
+ description = fields1[-2]
+
+- if self.filter.has_key(id):
++ if id in self.filter:
+ raise NameError("VCF file already has info-id %s" % id)
+
+ self.filter[id]= {'Number':number, 'ID':id, 'Type':type, 'Description':description.replace('"','')}
+@@ -226,7 +226,7 @@ class VCFFile:
+ type = fields2[2]
+ description = fields1[-2]
+
+- if self.format.has_key(id):
++ if id in self.format:
+ raise NameError("VCF file already has info-id %s" % id)
+
+ self.format[id]= {'Number':number, 'ID':id, 'Type':type, 'Description':description.replace('"','')}
+@@ -323,7 +323,7 @@ class VCFFile:
+ # output filters
+ out['FILTER'] = col[ self.lab_to_col['FILTER'] ].split(';')
+
+- for filter in self.filter.keys():
++ for filter in list(self.filter.keys()):
+ hs = "FILTER_%s" % filter
+ if filter in out['FILTER']:
+ out[hs] = '1'
+@@ -357,7 +357,7 @@ class VCFFile:
+ raise NameError("Can only write in write-mode")
+ linedata = []
+ for lab in self.headerLabels:
+- if not dat.has_key(lab):
++ if lab not in dat:
+ raise NameError("Cannot find label %s in input" % lab)
+
+ for lab in ['CHROM','POS','ID', 'REF','ALT','QUAL']:
+@@ -375,15 +375,15 @@ class VCFFile:
+ else:
+ if checkTags:
+ for fi in dat['FILTER']:
+- if not self.filter.has_key(fi):
++ if fi not in self.filter:
+ raise NameError('Undefined filter!')
+ linedata.append(';'.join(dat['FILTER']))
+
+ # info
+ infos = []
+- for inf in dat['INFO'].keys():
++ for inf in list(dat['INFO'].keys()):
+ if checkTags:
+- if not self.info.has_key(inf):
++ if inf not in self.info:
+ raise NameError('Undefined info tag!')
+ if self.info[inf]['Type'] == 'Flag':
+ infos.append(inf)
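For reference, the rewrites applied by 2to3 in the patch above all follow the same few Python 2 → 3 idiom changes. A minimal illustrative sketch (hypothetical data, not code from the dindel sources):

```python
# Hypothetical call dictionary, shaped like the ones in the dindel scripts.
calls = {"chr1": {100: 3}, "chr2": {200: 1}}

# Python 2: calls.has_key("chr1")      ->  Python 3: "chr1" in calls
assert "chr1" in calls

# Python 2: print "imported:", n       ->  Python 3: print("imported:", n)
print("Number of calls imported:", len(calls))

# Python 2: d.keys() returned a list; in Python 3 it is a view,
# so 2to3 wraps it in list() where the dict may change during iteration.
for chrom in list(calls.keys()):
    if not calls[chrom]:
        del calls[chrom]

# Python 2: d.iteritems()              ->  Python 3: d.items()
sizes = {chrom: len(pos_map) for chrom, pos_map in calls.items()}
```

The `list(...)` wrapping is the conservative translation 2to3 emits; it is only required when the dictionary is mutated mid-loop, but it is always safe.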
=====================================
debian/patches/series
=====================================
@@ -1,3 +1,4 @@
modernize-Makefile.patch
compiler_errors.patch
fix-ftbfs-with-gcc6.patch
+2to3.patch
=====================================
debian/rules
=====================================
@@ -4,4 +4,4 @@
export DEB_BUILD_MAINT_OPTIONS = hardening=+all
%:
- dh $@
+ dh $@
=====================================
debian/salsa-ci.yml
=====================================
@@ -0,0 +1,4 @@
+---
+include:
+ - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
+ - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml
=====================================
debian/upstream/metadata
=====================================
@@ -1,5 +1,4 @@
Contact: Cornelis A. Albers
-Name: dindel
Reference:
Author: >
Cornelis A. Albers and Gerton Lunter and Daniel G. MacArthur and
=====================================
debian/watch
=====================================
@@ -1,4 +1,4 @@
-version=3
+version=4
opts="repacksuffix=+dfsg,dversionmangle=s/\+dfsg//g,repack,compression=xz" \
https://github.com/genome/dindel-tgi/releases .*/archive/@ANY_VERSION@@ARCHIVE_EXT@
View it on GitLab: https://salsa.debian.org/med-team/dindel/-/compare/13f18fb8027692eaceaae6f3b07afbcd53ad3f61...d0aa2263b255d495f3598b1e0366cccbeb7b7d93
You're receiving this email because of your account on salsa.debian.org.