[Debian-med-packaging] Bug#747337: More details about the test suite

Andreas Tille andreas at an3as.eu
Tue Jul 14 11:19:29 UTC 2015


Hi Charles,

since this bedtools test suite problem seems to affect the python-pysam
tests as well, I dug into it a bit.  With my commits I was able to sort
out the five simple "fail"s out of the 12.  Since I do not yet know how
to offer upstream a simple pull request, I stuck to quilt patches.
Please feel free to turn this into proper shape to make sure upstream
will take over the patch to the test script (fix_test_script.patch) and
also make sure that bug223_d.vcf will make it into the next release
properly.  No idea why this was missing - I worked around it by adding
the file to debian/test_missing_files/intersect and installing it into
the Debian package.
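
For reference, roughly the workflow I used (the script path inside the
patch is just a placeholder here - please check the actual patch):

# quilt workflow for fix_test_script.patch
quilt new fix_test_script.patch
quilt add test/test.sh        # placeholder path, see the patch for the real one
# ... edit the test script ...
quilt refresh

# the missing VCF is kept in the packaging and copied back before the tests
# run; the destination directory below is a placeholder as well
cp debian/test_missing_files/intersect/bug223_d.vcf test/intersect/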

This now leaves us with three types of failures:

1. probably simply fixable:

    fisher.t3...\c
14c14
< 1     1       1       -nan
---
> 1     1       1       nan
fail


I admit I would not mind patching the file
   test/fisher/test-fisher.sh
to something like

$ diff -u test-fisher.sh.orig test-fisher.sh 
--- test-fisher.sh.orig 2015-07-14 12:42:52.000000000 +0200
+++ test-fisher.sh      2015-07-14 13:09:10.718678365 +0200
@@ -69,7 +69,7 @@
 #_________________________________________
 # p-values for fisher's exact test
 left   right   two-tail        ratio
-1      1       1       nan" > exp
+1      1       1       -nan" > exp
 $BT fisher -a a_merge.bed -b b.bed -g t.60.genome > obs
 check obs exp
 rm obs exp

This should make the test pass.  I would double-check on debian-mentors
that this makes sense if you have any doubt, but I think this is OK.
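
An alternative, if we prefer not to hard-code the platform-specific sign
("-nan" is what printf emits when a NaN has its sign bit set), would be
to normalise the observed output before comparing - untested sketch only:

# normalise the sign of nan in the observed output and keep exp as plain nan
$BT fisher -a a_merge.bed -b b.bed -g t.60.genome | sed 's/-nan/nan/' > obs
check obs exp
rm obs exp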

2. a bit harder are reldist.t02 and reldist.t03.

Here the results, while different, look structurally similar to me.
Comparing the output, basically the constant column 3 shows a different
constant value.  I have no idea about the meaning, but I would assume
some systematic deviation for which there might be a sensible explanation.
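
To narrow that down, something like the following might help (just a
sketch; I am assuming the reldist tests write obs and exp files the same
way the fisher test above does):

# show only the distinct values of column 3 in expected vs. observed output
cut -f3 exp | sort -u
cut -f3 obs | sort -u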

3. To me the shuffle.t1, shuffle.t2, shuffle.t3 and shuffle.t5 tests are
the hardest ones.  These results look really different, and I find it
hard to believe that this is simply caused by diverging random number
generators.  If that were the only cause, I admit I'd question the
principle of the algorithm as a whole (without knowing the basics
behind it, admittedly).
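
One thing we could try before bothering upstream (untested sketch; the
input file names are placeholders): pin the seed so the output becomes
deterministic on both sides.  If upstream regenerates the expected files
with the same seed and they still differ from ours, then diverging random
number generators are not the whole story:

# bedtools shuffle accepts a fixed integer seed, making its output reproducible
$BT shuffle -i a.bed -g t.genome -seed 42 > obs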

Charles, could you please take over my fixes properly (or tell me
exactly what I should do, and we can put it into README.source as well)?
Then we should do another upload and bring the discussion of types
1. and 2. to debian-mentors to possibly get a clue about this.

For type 3 I would like to discuss this with upstream first.

Kind regards

      Andreas.

-- 
http://fam-tille.de


