[med-svn] [Git][med-team/qiime][master] 5 commits: No doc package

Andreas Tille gitlab at salsa.debian.org
Sun Oct 28 17:07:14 GMT 2018


Andreas Tille pushed to branch master at Debian Med / qiime


Commits:
b9552ff4 by Andreas Tille at 2018-10-28T16:27:41Z
No doc package

- - - - -
24733a16 by Andreas Tille at 2018-10-28T16:28:34Z
Remove remainings to get source package of qiime1

- - - - -
f40b198f by Andreas Tille at 2018-10-28T16:30:19Z
Remove qiime1 patches

- - - - -
0bfafca4 by Andreas Tille at 2018-10-28T16:40:31Z
Remove old qiime config stuff

- - - - -
7d6b5382 by Andreas Tille at 2018-10-28T17:06:36Z
Drop unused overrides

- - - - -


22 changed files:

- − debian/README.source
- debian/control
- − debian/get-orig-source
- − debian/patches/allow_empty_default_taxonomy.patch
- − debian/patches/call_renamed_seqprep_sortmerna.patch
- − debian/patches/check_config_file_in_new_location.patch
- − debian/patches/detect_matplotlib_version.patch
- − debian/patches/exclude_tests_that_need_to_fail.patch
- − debian/patches/fix_binary_helper_location.patch
- − debian/patches/fix_path_for_support_files.patch
- − debian/patches/fix_script_usage_tests.patch
- − debian/patches/make_qiime_accept_new_rdp_classifier.patch
- − debian/patches/prevent_download_on_builds.patch
- − debian/patches/prevent_google_addsense.patch
- − debian/patches/relax_mothur_blast_raxml_versions.patch
- debian/patches/series
- − debian/qiime-doc.doc-base
- − debian/qiime-doc.links
- − debian/qiime_config
- − debian/scripts/print_qiime_config_all
- − debian/source/lintian-overrides
- debian/watch


Changes:

=====================================
debian/README.source deleted
=====================================
@@ -1,63 +0,0 @@
-QIIME for Debian - source
-=========================
-
-.py endings and the qiime wrapper of Bio-Linux
-----------------------------------------------
-
-Previous comment by Steffen:
-
-Lintian complains a lot about script-with-language-extension,
-i.e. the .py endings for files ending up in /usr/bin.
-What to do about this is not clear at the moment.  For the
-time being it seems that Lintian is wrong here: Python is too
-deeply embedded in the project, and Debian does not want to
-become incompatible with upstream.
-
-New comments by Tim:
-
-I borrowed the Bio-Linux approach for QIIME which may or may not be the best idea.
-Essentially, the Python scripts are not in the path and instead of running:
-
-% my_qiime_app.py
-
-You run:
-
-% qiime my_qiime_app.py
-
-or equivalently just:
-
-% qiime my_qiime_app
-
-The 'qiime' wrapper script adds the extension if needed and sets the path.  If run with
-no arguments it sets the path and drops to an interactive shell.
-
-The dependencies of QIIME essentially make it non-free despite the DFSG licence on the QIIME
-code itself.  Chief among these is UClust.  This package is supposed to handle the lack of UClust
-gracefully but it is still up to the user to fetch and install it.  
-
-Other dependencies/TODO:
-------------------------
-
-Everything else needed should be packaged, if not in Debian proper then in the SVN.  QIIME
-keeps adding new deps so these need to be checked with each release.
-
-To package:
- * Emperor 1.8.0+
- * python-burrito-fillings
-
-The
-   Build-Depends: bwa, infernal, raxml, sortmerna, swarm, vsearch
-are only added to make sure that the qiime package builds successfully
-only on those architectures where all these dependencies exist.
-
-QIIME data
-----------
-
-The QIIME installation instructions say that you need to download some core files from Greengenes.
-Since these barely change and are included in the upstream tarball I am now putting them into the
-qiime-data package for easy access.  I have noted in copyright that these are CC licensed.
-
-See here for a discussion of what these are and why they don't change:
-
-https://groups.google.com/forum/?hl=en-US#!searchin/qiime-forum/greengenes/qiime-forum/SvXFetaLNCM/HE6bsBY0yZIJ
-

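The wrapper behaviour described in the deleted README (run `qiime my_qiime_app` and the `.py` ending is added as needed) can be sketched as follows. This is an illustrative reimplementation in Python, not the actual Bio-Linux shell script, and the script directory shown is an assumption:

```python
import os

def resolve_script(name, script_dir="/usr/lib/qiime/bin"):
    """Mimic the 'qiime' wrapper lookup: accept a bare name or a name
    with the .py ending and return the path to execute.
    script_dir is a hypothetical location, not the packaged one."""
    for candidate in (name, name + ".py"):
        path = os.path.join(script_dir, candidate)
        if os.path.isfile(path):
            return path
    raise FileNotFoundError("no such QIIME script: %s" % name)
```

With no arguments the real wrapper instead drops to an interactive shell with the script directory on PATH.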

=====================================
debian/control
=====================================
@@ -102,47 +102,6 @@ Description: Quantitative Insights Into Microbial Ecology
  PyCogent toolkit. It makes extensive use of unit tests, and is highly
  modular to facilitate custom analyses.
 
-Package: qiime-doc
-Architecture: all
-Section: doc
-Depends: ${misc:Depends},
-         libjs-jquery,
-         libjs-underscore
-Description: Quantitative Insights Into Microbial Ecology (tutorial)
- QIIME (canonically pronounced ‘Chime’) is a pipeline for performing
- microbial community analysis that integrates many third party tools which
- have become standard in the field. A standard QIIME analysis begins with
- sequence data from one or more sequencing platforms, including
-  * Sanger,
-  * Roche/454, and
-  * Illumina GAIIx.
- QIIME can perform:
-  * library de-multiplexing and quality filtering;
-  * denoising with PyroNoise;
-  * OTU and representative set picking with uclust, cdhit, mothur, BLAST,
-    or other tools;
-  * taxonomy assignment with BLAST or the RDP classifier;
-  * sequence alignment with PyNAST, muscle, infernal, or other tools;
-  * phylogeny reconstruction with FastTree, raxml, clearcut, or other tools;
-  * alpha diversity and rarefaction, including visualization of results,
-    using over 20 metrics including Phylogenetic Diversity, chao1, and
-    observed species;
-  * beta diversity and rarefaction, including visualization of results,
-    using over 25 metrics including weighted and unweighted UniFrac,
-    Euclidean distance, and Bray-Curtis;
-  * summarization and visualization of taxonomic composition of samples
-    using pie charts and histograms
- and many other features.
- .
- QIIME includes parallelization capabilities for many of the
- computationally intensive steps. By default, these are configured to
- utilize a multi-core environment, and are easily configured to run in
- a cluster environment. QIIME is built in Python using the open-source
- PyCogent toolkit. It makes extensive use of unit tests, and is highly
- modular to facilitate custom analyses.
- .
- This package contains the documentation and a tutorial.
-
 Package: qiime-data
 Architecture: all
 Depends: ${misc:Depends}


=====================================
debian/get-orig-source deleted
=====================================
@@ -1,31 +0,0 @@
-#!/bin/sh
-# strip binary JARs
-
-COMPRESSION=xz
-
-set -e
-NAME=`dpkg-parsechangelog | awk '/^Source/ { print $2 }'`
-
-if ! echo $@ | grep -q upstream-version ; then
-    VERSION=`dpkg-parsechangelog | awk '/^Version:/ { print $2 }' | sed 's/\([0-9\.]\+\)-[0-9]\+$/\1/'`
-else
-    VERSION=`echo $@ | sed "s?^.*--upstream-version \([0-9.]\+\) .*${NAME}.*?\1?"`
-    if echo "$VERSION" | grep -q "upstream-version" ; then
-        echo "Unable to parse version number" >&2
-        exit 1
-    fi
-fi
-
-# Upstream tarball has upper case 'Q'
-UPSTREAMNAME=`echo ${NAME} | tr 'q' 'Q'`
-TARDIR=${UPSTREAMNAME}-${VERSION}
-
-mkdir -p ../tarballs
-cd ../tarballs
-tar xaf ../${UPSTREAMNAME}-${VERSION}.tar.gz
-
-# Remove useless JAR files
-find . -name "*.jar" -delete
-
-GZIP="--best --no-name" tar --owner=root --group=root --mode=a+rX -caf "$NAME"_"$VERSION".orig.tar.${COMPRESSION} "${TARDIR}"
-rm -rf "$TARDIR"


=====================================
debian/patches/allow_empty_default_taxonomy.patch deleted
=====================================
@@ -1,22 +0,0 @@
-The behaviour in qiime/assign_taxonomy.py is that if the taxonomy file path is
-not set then the RDP classifier will not be retrained.
-But the script bin/assign_taxonomy.py detects if the parameter is set to None
-and replaces it with the default value.
-So you always end up retraining the classifier, which is stupid.
-This is a nasty but effective workaround.
---- a/scripts/assign_taxonomy.py
-+++ b/scripts/assign_taxonomy.py
-@@ -106,6 +106,13 @@
- default_reference_seqs_fp = qiime_config['assign_taxonomy_reference_seqs_fp']
- default_id_to_taxonomy_fp = qiime_config['assign_taxonomy_id_to_taxonomy_fp']
- 
-+# Setting a path that begins with '#' disables the option
-+if default_reference_seqs_fp.lstrip().startswith('#'):
-+    default_reference_seqs_fp = None
-+if default_id_to_taxonomy_fp.lstrip().startswith('#'):
-+    default_id_to_taxonomy_fp = None
-+
-+
- script_info['optional_options'] = [
-     make_option('-t', '--id_to_taxonomy_fp', type="existing_filepath",
-                 help='Path to tab-delimited file mapping sequences to assigned '


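The '#'-prefix convention introduced by the patch above can be shown in isolation (a standalone sketch, not actual QIIME code):

```python
def effective_config_value(value):
    """Treat a config path whose first non-blank character is '#'
    as unset, mirroring the patch's workaround: a disabled default
    means the RDP classifier is not needlessly retrained."""
    if value is None or value.lstrip().startswith("#"):
        return None
    return value
```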
=====================================
debian/patches/call_renamed_seqprep_sortmerna.patch deleted
=====================================


=====================================
debian/patches/check_config_file_in_new_location.patch deleted
=====================================
@@ -1,15 +0,0 @@
-We've moved the default config file, so tell this script to look in the new
-location.  See the next patch for other path fixes related to support_files.
-
---- a/scripts/print_qiime_config.py
-+++ b/scripts/print_qiime_config.py
-@@ -283,8 +283,7 @@
-                               "config file as they will be ignored by QIIME.")
- 
-         qiime_project_dir = get_qiime_project_dir()
--        orig_config = parse_qiime_config_file(open(qiime_project_dir +
--                                                   '/qiime/support_files/qiime_config'))
-+        orig_config = parse_qiime_config_file(open('/etc/qiime/qiime_config'))
- 
-         # check the env qiime_config
-         qiime_config_env_filepath = getenv('QIIME_CONFIG_FP')


=====================================
debian/patches/detect_matplotlib_version.patch deleted
=====================================
@@ -1,15 +0,0 @@
-Author: Tim Booth <tbooth at ceh.ac.uk>
-Last-Update: Mon, 10 Mar 2014 14:20:08 +0000
-Description: Enable proper detection of matplotlib
-
---- a/scripts/print_qiime_config.py
-+++ b/scripts/print_qiime_config.py
-@@ -306,7 +306,7 @@ class QIIMEDependencyBase(QIIMEConfig):
-         max_acceptable_version = (1,3,1)
-         try:
-             from matplotlib import __version__ as matplotlib_lib_version
--            version = tuple(map(int,matplotlib_lib_version.split('.')))
-+	    version = tuple(map(lambda x: int(x.replace("rc","")),matplotlib_lib_version.split('.')))
-             pass_test = (version >= min_acceptable_version and
-                          version <= max_acceptable_version)
-             version_string = str(matplotlib_lib_version)

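The reason for the patched line above: release-candidate strings such as '1.3.1rc2' make the original `int()` conversion raise ValueError. A minimal reproduction of both behaviours (a standalone sketch, outside QIIME):

```python
def parse_version_original(s):
    # the stock behaviour: fails on e.g. '1.3.1rc2'
    return tuple(map(int, s.split(".")))

def parse_version_patched(s):
    # the patched behaviour: drop an 'rc' marker before converting,
    # so every component becomes an int and the tuple is comparable
    return tuple(int(part.replace("rc", "")) for part in s.split("."))
```

Note the workaround is crude: '1rc2' collapses to 12, so the comparison against the acceptable range is only approximate for release candidates.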

=====================================
debian/patches/exclude_tests_that_need_to_fail.patch deleted
=====================================
@@ -1,923 +0,0 @@
-Author: Andreas Tille <tille at debian.org>
-Last-Update: Thu, 26 Dec 2013 07:39:57 +0100
-Description: Exclude tests that need to fail
- uclust is non-free and cannot be packaged.  The QIIME package just contains
- a wrapper telling the user this fact, and thus the tests will fail.  To avoid
- useless failures these tests are excluded.
-
-
---- a/tests/test_align_seqs.py
-+++ b/tests/test_align_seqs.py
-@@ -134,20 +134,20 @@ class InfernalAlignerTests(SharedSetupTe
-          LoadSeqs(data=infernal_test1_expected_alignment,aligned=Alignment,\
-             moltype=DNA)
- 
--    def test_call_infernal_test1_file_output(self):
--        """InfernalAligner writes correct output files for infernal_test1 seqs
--        """
--        # do not collect results; check output files instead
--        actual = self.infernal_test1_aligner(\
--         self.infernal_test1_input_fp, result_path=self.result_fp,
--         log_path=self.log_fp)
--         
--        self.assertTrue(actual == None,\
--         "Result should be None when result path provided.")
--         
--        expected_aln = self.infernal_test1_expected_aln
--        actual_aln = LoadSeqs(self.result_fp,aligned=Alignment)
--        self.assertEqual(actual_aln,expected_aln)
-+#    def test_call_infernal_test1_file_output(self):
-+#        """InfernalAligner writes correct output files for infernal_test1 seqs
-+#        """
-+#        # do not collect results; check output files instead
-+#        actual = self.infernal_test1_aligner(\
-+#         self.infernal_test1_input_fp, result_path=self.result_fp,
-+#         log_path=self.log_fp)
-+#         
-+#        self.assertTrue(actual == None,\
-+#         "Result should be None when result path provided.")
-+#         
-+#        expected_aln = self.infernal_test1_expected_aln
-+#        actual_aln = LoadSeqs(self.result_fp,aligned=Alignment)
-+#        self.assertEqual(actual_aln,expected_aln)
- 
-     def test_call_infernal_test1(self):
-         """InfernalAligner: functions as expected when returing objects
-@@ -221,84 +221,84 @@ class PyNastAlignerTests(SharedSetupTest
-         self.pynast_test1_expected_fail = \
-          LoadSeqs(data=pynast_test1_expected_failure,aligned=False)
- 
--    def test_call_pynast_test1_file_output(self):
--        """PyNastAligner writes correct output files for pynast_test1 seqs
--        """
--        # do not collect results; check output files instead
--        actual = self.pynast_test1_aligner(\
--         self.pynast_test1_input_fp, result_path=self.result_fp,
--         log_path=self.log_fp, failure_path=self.failure_fp)
--         
--        self.assertTrue(actual == None,\
--         "Result should be None when result path provided.")
--         
--        expected_aln = self.pynast_test1_expected_aln
--        actual_aln = LoadSeqs(self.result_fp,aligned=DenseAlignment)
--        self.assertEqual(actual_aln,expected_aln)
--
--        actual_fail = LoadSeqs(self.failure_fp,aligned=False)
--        self.assertEqual(actual_fail.toFasta(),\
--                         self.pynast_test1_expected_fail.toFasta())
--
--
--    def test_call_pynast_test1_file_output_alt_params(self):
--        """PyNastAligner writes correct output files when no seqs align
--        """
--        aligner = PyNastAligner({
--                'template_filepath': self.pynast_test1_template_fp,
--                'min_len':1000})
--                
--        actual = aligner(\
--         self.pynast_test1_input_fp, result_path=self.result_fp,
--         log_path=self.log_fp, failure_path=self.failure_fp)
--         
--        self.assertTrue(actual == None,\
--         "Result should be None when result path provided.")
--        
--        self.assertEqual(getsize(self.result_fp),0,\
--         "No alignable seqs should result in an empty file.")
--
--        # all seqs reported to fail
--        actual_fail = LoadSeqs(self.failure_fp,aligned=False)
--        self.assertEqual(actual_fail.getNumSeqs(),3)
--
--    def test_call_pynast_test1(self):
--        """PyNastAligner: functions as expected when returing objects
--        """
--        actual_aln = self.pynast_test1_aligner(self.pynast_test1_input_fp)
--        expected_aln = self.pynast_test1_expected_aln
--
--        expected_names = ['1 description field 1..23', '2 1..23']
--        self.assertEqual(actual_aln.Names, expected_names)
--        self.assertEqual(actual_aln, expected_aln)
--        
--    def test_call_pynast_template_aln_with_dots(self):
--        """PyNastAligner: functions when template alignment contains dots
--        """
--        pynast_aligner = PyNastAligner({
--                'template_filepath': self.pynast_test_template_w_dots_fp,
--                'min_len': 15,
--                })
--        actual_aln = pynast_aligner(self.pynast_test1_input_fp)
--        expected_aln = self.pynast_test1_expected_aln
--
--        expected_names = ['1 description field 1..23', '2 1..23']
--        self.assertEqual(actual_aln.Names, expected_names)
--        self.assertEqual(actual_aln, expected_aln)
--        
--    def test_call_pynast_template_aln_with_lower(self):
--        """PyNastAligner: functions when template alignment contains lower case
--        """
--        pynast_aligner = PyNastAligner({
--                'template_filepath': self.pynast_test_template_w_lower_fp,
--                'min_len': 15,
--                })
--        actual_aln = pynast_aligner(self.pynast_test1_input_fp)
--        expected_aln = self.pynast_test1_expected_aln
--
--        expected_names = ['1 description field 1..23', '2 1..23']
--        self.assertEqual(actual_aln.Names, expected_names)
--        self.assertEqual(actual_aln, expected_aln)
-+#    def test_call_pynast_test1_file_output(self):
-+#        """PyNastAligner writes correct output files for pynast_test1 seqs
-+#        """
-+#        # do not collect results; check output files instead
-+#        actual = self.pynast_test1_aligner(\
-+#         self.pynast_test1_input_fp, result_path=self.result_fp,
-+#         log_path=self.log_fp, failure_path=self.failure_fp)
-+#         
-+#        self.assertTrue(actual == None,\
-+#         "Result should be None when result path provided.")
-+#         
-+#        expected_aln = self.pynast_test1_expected_aln
-+#        actual_aln = LoadSeqs(self.result_fp,aligned=DenseAlignment)
-+#        self.assertEqual(actual_aln,expected_aln)
-+#
-+#        actual_fail = LoadSeqs(self.failure_fp,aligned=False)
-+#        self.assertEqual(actual_fail.toFasta(),\
-+#                         self.pynast_test1_expected_fail.toFasta())
-+
-+
-+#    def test_call_pynast_test1_file_output_alt_params(self):
-+#        """PyNastAligner writes correct output files when no seqs align
-+#        """
-+#        aligner = PyNastAligner({
-+#                'template_filepath': self.pynast_test1_template_fp,
-+#                'min_len':1000})
-+#                
-+#        actual = aligner(\
-+#         self.pynast_test1_input_fp, result_path=self.result_fp,
-+#         log_path=self.log_fp, failure_path=self.failure_fp)
-+#         
-+#        self.assertTrue(actual == None,\
-+#         "Result should be None when result path provided.")
-+#        
-+#        self.assertEqual(getsize(self.result_fp),0,\
-+#         "No alignable seqs should result in an empty file.")
-+#
-+#        # all seqs reported to fail
-+#        actual_fail = LoadSeqs(self.failure_fp,aligned=False)
-+#        self.assertEqual(actual_fail.getNumSeqs(),3)
-+
-+#    def test_call_pynast_test1(self):
-+#        """PyNastAligner: functions as expected when returing objects
-+#        """
-+#        actual_aln = self.pynast_test1_aligner(self.pynast_test1_input_fp)
-+#        expected_aln = self.pynast_test1_expected_aln
-+#
-+#        expected_names = ['1 description field 1..23', '2 1..23']
-+#        self.assertEqual(actual_aln.Names, expected_names)
-+#        self.assertEqual(actual_aln, expected_aln)
-+        
-+#    def test_call_pynast_template_aln_with_dots(self):
-+#        """PyNastAligner: functions when template alignment contains dots
-+#        """
-+#        pynast_aligner = PyNastAligner({
-+#                'template_filepath': self.pynast_test_template_w_dots_fp,
-+#                'min_len': 15,
-+#                })
-+#        actual_aln = pynast_aligner(self.pynast_test1_input_fp)
-+#        expected_aln = self.pynast_test1_expected_aln
-+#
-+#        expected_names = ['1 description field 1..23', '2 1..23']
-+#        self.assertEqual(actual_aln.Names, expected_names)
-+#        self.assertEqual(actual_aln, expected_aln)
-+        
-+#    def test_call_pynast_template_aln_with_lower(self):
-+#        """PyNastAligner: functions when template alignment contains lower case
-+#        """
-+#        pynast_aligner = PyNastAligner({
-+#                'template_filepath': self.pynast_test_template_w_lower_fp,
-+#                'min_len': 15,
-+#                })
-+#        actual_aln = pynast_aligner(self.pynast_test1_input_fp)
-+#        expected_aln = self.pynast_test1_expected_aln
-+#
-+#        expected_names = ['1 description field 1..23', '2 1..23']
-+#        self.assertEqual(actual_aln.Names, expected_names)
-+#        self.assertEqual(actual_aln, expected_aln)
- 
-     def test_call_pynast_template_aln_with_U(self):
-         """PyNastAligner: error message when template contains bad char
-@@ -309,43 +309,43 @@ class PyNastAlignerTests(SharedSetupTest
-                 })
-         self.assertRaises(KeyError,pynast_aligner,self.pynast_test1_input_fp)
-         
--    def test_call_pynast_alt_pairwise_method(self):
--        """PyNastAligner: alternate pairwise alignment method produces correct alignment
--        """
--        aligner = PyNastAligner({
--                'pairwise_alignment_method': 'muscle',
--                'template_filepath': self.pynast_test1_template_fp,
--                'min_len': 15,
--                })
--        actual_aln = aligner(self.pynast_test1_input_fp)
--        expected_aln = self.pynast_test1_expected_aln
--        self.assertEqual(actual_aln, expected_aln)
--        
--    def test_call_pynast_test1_alt_min_len(self):
--        """PyNastAligner: returns no result when min_len too high
--        """
--        aligner = PyNastAligner({
--                'template_filepath': self.pynast_test1_template_fp,
--                'min_len':1000})
--        
--        actual_aln = aligner(\
--         self.pynast_test1_input_fp)
--        expected_aln = {}
--
--        self.assertEqual(actual_aln, expected_aln)
--        
--    def test_call_pynast_test1_alt_min_pct(self):
--        """PyNastAligner: returns no result when min_pct too high
--        """
--        aligner = PyNastAligner({
--                'template_filepath': self.pynast_test1_template_fp,
--                'min_len':15,
--                'min_pct':100.0})
--        
--        actual_aln = aligner(self.pynast_test1_input_fp)
--        expected_aln = {}
--
--        self.assertEqual(actual_aln, expected_aln) 
-+#    def test_call_pynast_alt_pairwise_method(self):
-+#        """PyNastAligner: alternate pairwise alignment method produces correct alignment
-+#        """
-+#        aligner = PyNastAligner({
-+#                'pairwise_alignment_method': 'muscle',
-+#                'template_filepath': self.pynast_test1_template_fp,
-+#                'min_len': 15,
-+#                })
-+#        actual_aln = aligner(self.pynast_test1_input_fp)
-+#        expected_aln = self.pynast_test1_expected_aln
-+#        self.assertEqual(actual_aln, expected_aln)
-+        
-+#    def test_call_pynast_test1_alt_min_len(self):
-+#        """PyNastAligner: returns no result when min_len too high
-+#        """
-+#        aligner = PyNastAligner({
-+#                'template_filepath': self.pynast_test1_template_fp,
-+#                'min_len':1000})
-+#        
-+#        actual_aln = aligner(\
-+#         self.pynast_test1_input_fp)
-+#        expected_aln = {}
-+#
-+#        self.assertEqual(actual_aln, expected_aln)
-+        
-+#    def test_call_pynast_test1_alt_min_pct(self):
-+#        """PyNastAligner: returns no result when min_pct too high
-+#        """
-+#        aligner = PyNastAligner({
-+#                'template_filepath': self.pynast_test1_template_fp,
-+#                'min_len':15,
-+#                'min_pct':100.0})
-+#        
-+#        actual_aln = aligner(self.pynast_test1_input_fp)
-+#        expected_aln = {}
-+#
-+#        self.assertEqual(actual_aln, expected_aln) 
-         
-     def tearDown(self):
-         """
---- a/tests/test_assign_taxonomy.py
-+++ b/tests/test_assign_taxonomy.py
-@@ -134,53 +134,53 @@ class UclustConsensusTaxonAssignerTests(
-             if exists(d):
-                 rmtree(d)
-     
--    def test_uclust_assigner_write_to_file(self):
--        """UclustConsensusTaxonAssigner returns without error, writing results
--        """
--        params = {'id_to_taxonomy_fp':self.id_to_tax1_fp,
--                  'reference_sequences_fp':self.refseqs1_fp}
--        
--        t = UclustConsensusTaxonAssigner(params)
--        result = t(seq_path=self.inseqs1_fp,
--                   result_path=self.output_txt_fp,
--                   uc_path=self.output_uc_fp,
--                   log_path=self.output_log_fp)
--        del t
--        # result files exist after the UclustConsensusTaxonAssigner
--        # no longer exists
--        self.assertTrue(exists(self.output_txt_fp))
--        self.assertTrue(exists(self.output_uc_fp))
--        self.assertTrue(exists(self.output_log_fp))
--        
--        # check that result has the expected lines
--        output_lines = list(open(self.output_txt_fp,'U'))
--        self.assertTrue('q1\tA;F;G\t1.00\t1\n' in output_lines)
--        self.assertTrue('q2\tA;H;I;J\t1.00\t1\n' in output_lines)
-+#    def test_uclust_assigner_write_to_file(self):
-+#        """UclustConsensusTaxonAssigner returns without error, writing results
-+#        """
-+#        params = {'id_to_taxonomy_fp':self.id_to_tax1_fp,
-+#                  'reference_sequences_fp':self.refseqs1_fp}
-+#        
-+#        t = UclustConsensusTaxonAssigner(params)
-+#        result = t(seq_path=self.inseqs1_fp,
-+#                   result_path=self.output_txt_fp,
-+#                   uc_path=self.output_uc_fp,
-+#                   log_path=self.output_log_fp)
-+#        del t
-+#        # result files exist after the UclustConsensusTaxonAssigner
-+#        # no longer exists
-+#        self.assertTrue(exists(self.output_txt_fp))
-+#        self.assertTrue(exists(self.output_uc_fp))
-+#        self.assertTrue(exists(self.output_log_fp))
-+#        
-+#        # check that result has the expected lines
-+#        output_lines = list(open(self.output_txt_fp,'U'))
-+#        self.assertTrue('q1\tA;F;G\t1.00\t1\n' in output_lines)
-+#        self.assertTrue('q2\tA;H;I;J\t1.00\t1\n' in output_lines)
- 
--    def test_uclust_assigner(self):
--        """UclustConsensusTaxonAssigner returns without error, returning dict
--        """
--        params = {'id_to_taxonomy_fp':self.id_to_tax1_fp,
--                  'reference_sequences_fp':self.refseqs1_fp}
--        
--        t = UclustConsensusTaxonAssigner(params)
--        result = t(seq_path=self.inseqs1_fp,
--                   result_path=None,
--                   uc_path=self.output_uc_fp,
--                   log_path=self.output_log_fp)
--                   
--        self.assertEqual(result['q1'],(['A','F','G'],1.0,1))
--        self.assertEqual(result['q2'],(['A','H','I','J'],1.0,1))
--
--        # no result paths provided
--        t = UclustConsensusTaxonAssigner(params)
--        result = t(seq_path=self.inseqs1_fp,
--                   result_path=None,
--                   uc_path=None,
--                   log_path=None)
--                   
--        self.assertEqual(result['q1'],(['A','F','G'],1.0,1))
--        self.assertEqual(result['q2'],(['A','H','I','J'],1.0,1))
-+#    def test_uclust_assigner(self):
-+#        """UclustConsensusTaxonAssigner returns without error, returning dict
-+#        """
-+#        params = {'id_to_taxonomy_fp':self.id_to_tax1_fp,
-+#                  'reference_sequences_fp':self.refseqs1_fp}
-+#        
-+#        t = UclustConsensusTaxonAssigner(params)
-+#        result = t(seq_path=self.inseqs1_fp,
-+#                   result_path=None,
-+#                   uc_path=self.output_uc_fp,
-+#                   log_path=self.output_log_fp)
-+#                   
-+#        self.assertEqual(result['q1'],(['A','F','G'],1.0,1))
-+#        self.assertEqual(result['q2'],(['A','H','I','J'],1.0,1))
-+#
-+#        # no result paths provided
-+#        t = UclustConsensusTaxonAssigner(params)
-+#        result = t(seq_path=self.inseqs1_fp,
-+#                   result_path=None,
-+#                   uc_path=None,
-+#                   log_path=None)
-+#                   
-+#        self.assertEqual(result['q1'],(['A','F','G'],1.0,1))
-+#        self.assertEqual(result['q2'],(['A','H','I','J'],1.0,1))
-     
-     def test_get_consensus_assignment(self):
-         """_get_consensus_assignment fuctions as expected """
---- a/tests/test_pick_otus.py
-+++ b/tests/test_pick_otus.py
-@@ -2478,40 +2478,40 @@ class UclustOtuPickerTests(TestCase):
-         exp = {0:['s1','s4','s6','s2','s3','s5']}
-         self.assertEqual(obs,exp)
-         
--    def test_abundance_sort(self):
--        """UclustOtuPicker: abundance sort functions as expected
--        """
--        #enable abundance sorting with suppress sort = False (it gets
--        # set to True internally, otherwise uclust's length sort would
--        # override the abundance sorting)
--        seqs = [('s1 comment1','ACCTTGTTACTTT'),  # three copies
--                ('s2 comment2','ACCTTGTTACTTTC'), # one copy
--                ('s3 comment3','ACCTTGTTACTTTCC'),# two copies
--                ('s4 comment4','ACCTTGTTACTTT'),
--                ('s5 comment5','ACCTTGTTACTTTCC'),
--                ('s6 comment6','ACCTTGTTACTTT')]
--        seqs_fp = self.seqs_to_temp_fasta(seqs)
--        
--        # abundance sorting changes order 
--        app = UclustOtuPicker(params={'Similarity':0.80,
--                                      'enable_rev_strand_matching':False,
--                                      'suppress_sort':False,
--                                      'presort_by_abundance':True,
--                                      'save_uc_files':False})
--        obs = app(seqs_fp)
--        exp = {0:['s1','s4','s6','s3','s5','s2']}
--        self.assertEqual(obs,exp)
--        
--        # abundance sorting changes order -- same results with suppress_sort =
--        # True b/c (it gets set to True to when presorting by abundance)
--        app = UclustOtuPicker(params={'Similarity':0.80,
--                                      'enable_rev_strand_matching':False,
--                                      'suppress_sort':True,
--                                      'presort_by_abundance':True,
--                                      'save_uc_files':False})
--        obs = app(seqs_fp)
--        exp = {0:['s1','s4','s6','s3','s5','s2']}
--        self.assertEqual(obs,exp)
-+#    def test_abundance_sort(self):
-+#        """UclustOtuPicker: abundance sort functions as expected
-+#        """
-+#        #enable abundance sorting with suppress sort = False (it gets
-+#        # set to True internally, otherwise uclust's length sort would
-+#        # override the abundance sorting)
-+#        seqs = [('s1 comment1','ACCTTGTTACTTT'),  # three copies
-+#                ('s2 comment2','ACCTTGTTACTTTC'), # one copy
-+#                ('s3 comment3','ACCTTGTTACTTTCC'),# two copies
-+#                ('s4 comment4','ACCTTGTTACTTT'),
-+#                ('s5 comment5','ACCTTGTTACTTTCC'),
-+#                ('s6 comment6','ACCTTGTTACTTT')]
-+#        seqs_fp = self.seqs_to_temp_fasta(seqs)
-+#        
-+#        # abundance sorting changes order 
-+#        app = UclustOtuPicker(params={'Similarity':0.80,
-+#                                      'enable_rev_strand_matching':False,
-+#                                      'suppress_sort':False,
-+#                                      'presort_by_abundance':True,
-+#                                      'save_uc_files':False})
-+#        obs = app(seqs_fp)
-+#        exp = {0:['s1','s4','s6','s3','s5','s2']}
-+#        self.assertEqual(obs,exp)
-+#        
-+#        # abundance sorting changes order -- same results with suppress_sort =
-+#        # True b/c (it gets set to True to when presorting by abundance)
-+#        app = UclustOtuPicker(params={'Similarity':0.80,
-+#                                      'enable_rev_strand_matching':False,
-+#                                      'suppress_sort':True,
-+#                                      'presort_by_abundance':True,
-+#                                      'save_uc_files':False})
-+#        obs = app(seqs_fp)
-+#        exp = {0:['s1','s4','s6','s3','s5','s2']}
-+#        self.assertEqual(obs,exp)
- 
-     def test_call_default_params(self):
-         """UclustOtuPicker.__call__ returns expected clusters default params"""
-@@ -2542,35 +2542,35 @@ class UclustOtuPickerTests(TestCase):
-         self.assertEqual(obs_otu_ids, exp_otu_ids)
-         self.assertEqual(obs_clusters, exp_clusters)
-         
--    def test_call_default_params_suppress_sort(self):
--        """UclustOtuPicker.__call__ returns expected clusters default params"""
--
--        # adapted from test_app.test_cd_hit.test_cdhit_clusters_from_seqs
--        
--        exp_otu_ids = range(10)
--        exp_clusters = [['uclust_test_seqs_0'],
--                        ['uclust_test_seqs_1'],
--                        ['uclust_test_seqs_2'],
--                        ['uclust_test_seqs_3'],
--                        ['uclust_test_seqs_4'],
--                        ['uclust_test_seqs_5'],
--                        ['uclust_test_seqs_6'],
--                        ['uclust_test_seqs_7'],
--                        ['uclust_test_seqs_8'],
--                        ['uclust_test_seqs_9']]
--        
--        app = UclustOtuPicker(params={'save_uc_files':False,
--                                      'suppress_sort':True})
--        obs = app(self.tmp_seq_filepath1)
--        obs_otu_ids = obs.keys()
--        obs_otu_ids.sort()
--        obs_clusters = obs.values()
--        obs_clusters.sort()
--        # The relation between otu ids and clusters is abitrary, and 
--        # is not stable due to use of dicts when parsing clusters -- therefore
--        # just checks that we have the expected group of each
--        self.assertEqual(obs_otu_ids, exp_otu_ids)
--        self.assertEqual(obs_clusters, exp_clusters)
-+#    def test_call_default_params_suppress_sort(self):
-+#        """UclustOtuPicker.__call__ returns expected clusters default params"""
-+#
-+#        # adapted from test_app.test_cd_hit.test_cdhit_clusters_from_seqs
-+#        
-+#        exp_otu_ids = range(10)
-+#        exp_clusters = [['uclust_test_seqs_0'],
-+#                        ['uclust_test_seqs_1'],
-+#                        ['uclust_test_seqs_2'],
-+#                        ['uclust_test_seqs_3'],
-+#                        ['uclust_test_seqs_4'],
-+#                        ['uclust_test_seqs_5'],
-+#                        ['uclust_test_seqs_6'],
-+#                        ['uclust_test_seqs_7'],
-+#                        ['uclust_test_seqs_8'],
-+#                        ['uclust_test_seqs_9']]
-+#        
-+#        app = UclustOtuPicker(params={'save_uc_files':False,
-+#                                      'suppress_sort':True})
-+#        obs = app(self.tmp_seq_filepath1)
-+#        obs_otu_ids = obs.keys()
-+#        obs_otu_ids.sort()
-+#        obs_clusters = obs.values()
-+#        obs_clusters.sort()
-+#        # The relation between otu ids and clusters is abitrary, and 
-+#        # is not stable due to use of dicts when parsing clusters -- therefore
-+#        # just checks that we have the expected group of each
-+#        self.assertEqual(obs_otu_ids, exp_otu_ids)
-+#        self.assertEqual(obs_clusters, exp_clusters)
- 
- 
-     def test_call_default_params_save_uc_file(self):
-@@ -2627,69 +2627,69 @@ class UclustOtuPickerTests(TestCase):
-         self.assertEqual(obs_otu_ids, exp_otu_ids)
-         self.assertEqual(obs_clusters, exp_clusters)
-         
--    def test_call_alt_threshold(self):
--        """UclustOtuPicker.__call__ returns expected clusters with alt threshold
--        """
--        # adapted from test_app.test_cd_hit.test_cdhit_clusters_from_seqs
--        
--        exp_otu_ids = range(9)
--        exp_clusters = [['uclust_test_seqs_0'],
--                        ['uclust_test_seqs_1'],
--                        ['uclust_test_seqs_2'],
--                        ['uclust_test_seqs_3'],
--                        ['uclust_test_seqs_4'],
--                        ['uclust_test_seqs_5'],
--                        ['uclust_test_seqs_6','uclust_test_seqs_8'],
--                        ['uclust_test_seqs_7'],
--                        ['uclust_test_seqs_9']]
--
--        app = UclustOtuPicker(params={'Similarity':0.90,
--                                      'suppress_sort':False,
--                                      'presort_by_abundance':False,
--                                      'save_uc_files':False})
--        obs = app(self.tmp_seq_filepath1)
--        obs_otu_ids = obs.keys()
--        obs_otu_ids.sort()
--        obs_clusters = obs.values()
--        obs_clusters.sort()
--        # The relation between otu ids and clusters is abitrary, and 
--        # is not stable due to use of dicts when parsing clusters -- therefore
--        # just checks that we have the expected group of each
--        self.assertEqual(obs_otu_ids, exp_otu_ids)
--        self.assertEqual(obs_clusters, exp_clusters)
--        
--        
--    def test_call_otu_id_prefix(self):
--        """UclustOtuPicker.__call__ returns expected clusters with alt threshold
--        """
--        # adapted from test_app.test_cd_hit.test_cdhit_clusters_from_seqs
--        
--        exp_otu_ids = ['my_otu_%d' % i for i in range(9)]
--        exp_clusters = [['uclust_test_seqs_0'],
--                        ['uclust_test_seqs_1'],
--                        ['uclust_test_seqs_2'],
--                        ['uclust_test_seqs_3'],
--                        ['uclust_test_seqs_4'],
--                        ['uclust_test_seqs_5'],
--                        ['uclust_test_seqs_6','uclust_test_seqs_8'],
--                        ['uclust_test_seqs_7'],
--                        ['uclust_test_seqs_9']]
--
--        app = UclustOtuPicker(params={'Similarity':0.90,
--                                      'suppress_sort':False,
--                                      'presort_by_abundance':False,
--                                      'new_cluster_identifier':'my_otu_',
--                                      'save_uc_files':False})
--        obs = app(self.tmp_seq_filepath1)
--        obs_otu_ids = obs.keys()
--        obs_otu_ids.sort()
--        obs_clusters = obs.values()
--        obs_clusters.sort()
--        # The relation between otu ids and clusters is abitrary, and 
--        # is not stable due to use of dicts when parsing clusters -- therefore
--        # just checks that we have the expected group of each
--        self.assertEqual(obs_otu_ids, exp_otu_ids)
--        self.assertEqual(obs_clusters, exp_clusters)
-+#    def test_call_alt_threshold(self):
-+#        """UclustOtuPicker.__call__ returns expected clusters with alt threshold
-+#        """
-+#        # adapted from test_app.test_cd_hit.test_cdhit_clusters_from_seqs
-+#        
-+#        exp_otu_ids = range(9)
-+#        exp_clusters = [['uclust_test_seqs_0'],
-+#                        ['uclust_test_seqs_1'],
-+#                        ['uclust_test_seqs_2'],
-+#                        ['uclust_test_seqs_3'],
-+#                        ['uclust_test_seqs_4'],
-+#                        ['uclust_test_seqs_5'],
-+#                        ['uclust_test_seqs_6','uclust_test_seqs_8'],
-+#                        ['uclust_test_seqs_7'],
-+#                        ['uclust_test_seqs_9']]
-+#
-+#        app = UclustOtuPicker(params={'Similarity':0.90,
-+#                                      'suppress_sort':False,
-+#                                      'presort_by_abundance':False,
-+#                                      'save_uc_files':False})
-+#        obs = app(self.tmp_seq_filepath1)
-+#        obs_otu_ids = obs.keys()
-+#        obs_otu_ids.sort()
-+#        obs_clusters = obs.values()
-+#        obs_clusters.sort()
-+#        # The relation between otu ids and clusters is abitrary, and 
-+#        # is not stable due to use of dicts when parsing clusters -- therefore
-+#        # just checks that we have the expected group of each
-+#        self.assertEqual(obs_otu_ids, exp_otu_ids)
-+#        self.assertEqual(obs_clusters, exp_clusters)
-+        
-+        
-+#    def test_call_otu_id_prefix(self):
-+#        """UclustOtuPicker.__call__ returns expected clusters with alt threshold
-+#        """
-+#        # adapted from test_app.test_cd_hit.test_cdhit_clusters_from_seqs
-+#        
-+#        exp_otu_ids = ['my_otu_%d' % i for i in range(9)]
-+#        exp_clusters = [['uclust_test_seqs_0'],
-+#                        ['uclust_test_seqs_1'],
-+#                        ['uclust_test_seqs_2'],
-+#                        ['uclust_test_seqs_3'],
-+#                        ['uclust_test_seqs_4'],
-+#                        ['uclust_test_seqs_5'],
-+#                        ['uclust_test_seqs_6','uclust_test_seqs_8'],
-+#                        ['uclust_test_seqs_7'],
-+#                        ['uclust_test_seqs_9']]
-+#
-+#        app = UclustOtuPicker(params={'Similarity':0.90,
-+#                                      'suppress_sort':False,
-+#                                      'presort_by_abundance':False,
-+#                                      'new_cluster_identifier':'my_otu_',
-+#                                      'save_uc_files':False})
-+#        obs = app(self.tmp_seq_filepath1)
-+#        obs_otu_ids = obs.keys()
-+#        obs_otu_ids.sort()
-+#        obs_clusters = obs.values()
-+#        obs_clusters.sort()
-+#        # The relation between otu ids and clusters is abitrary, and 
-+#        # is not stable due to use of dicts when parsing clusters -- therefore
-+#        # just checks that we have the expected group of each
-+#        self.assertEqual(obs_otu_ids, exp_otu_ids)
-+#        self.assertEqual(obs_clusters, exp_clusters)
-         
-     def test_call_suppress_sort(self):
-         """UclustOtuPicker.__call__ handles suppress sort
-@@ -2716,131 +2716,131 @@ class UclustOtuPickerTests(TestCase):
-         self.assertEqual(obs_otu_ids, exp_otu_ids)
-         self.assertEqual(obs_clusters, exp_clusters)
-         
--    def test_call_rev_matching(self):
--        """UclustOtuPicker.__call__ handles reverse strand matching
--        """
--        exp_otu_ids = range(2)
--        exp_clusters = [['uclust_test_seqs_0'],['uclust_test_seqs_0_rc']]
--        app = UclustOtuPicker(params={'Similarity':0.90,
--                                      'enable_rev_strand_matching':False,
--                                      'suppress_sort':False,
--                                      'presort_by_abundance':False,
--                                      'save_uc_files':False})
--        obs = app(self.tmp_seq_filepath3)
--        obs_otu_ids = obs.keys()
--        obs_otu_ids.sort()
--        obs_clusters = obs.values()
--        obs_clusters.sort()
--        # The relation between otu ids and clusters is abitrary, and 
--        # is not stable due to use of dicts when parsing clusters -- therefore
--        # just checks that we have the expected group of each
--        self.assertEqual(obs_otu_ids, exp_otu_ids)
--        self.assertEqual(obs_clusters, exp_clusters)
--        
--        exp = {0: ['uclust_test_seqs_0','uclust_test_seqs_0_rc']}
--        app = UclustOtuPicker(params={'Similarity':0.90,
--                                      'enable_rev_strand_matching':True,
--                                      'suppress_sort':False,
--                                      'presort_by_abundance':False,
--                                      'save_uc_files':False})
--        obs = app(self.tmp_seq_filepath3)
--        self.assertEqual(obs, exp)
--        
--    def test_call_output_to_file(self):
--        """UclustHitOtuPicker.__call__ output to file functions as expected
--        """
--        
--        tmp_result_filepath = get_tmp_filename(\
--         prefix='UclustOtuPickerTest.test_call_output_to_file_',\
--         suffix='.txt')
--        
--        app = UclustOtuPicker(params={'Similarity':0.90,
--                                      'suppress_sort':False,
--                                      'presort_by_abundance':False,
--                                      'save_uc_files':False})
--        obs = app(self.tmp_seq_filepath1,result_path=tmp_result_filepath)
--        
--        result_file = open(tmp_result_filepath)
--        result_file_str = result_file.read()
--        result_file.close()
--        # remove the result file before running the test, so in 
--        # case it fails the temp file is still cleaned up
--        remove(tmp_result_filepath)
--
--        exp_otu_ids = map(str,range(9))
--        exp_clusters = [['uclust_test_seqs_0'],
--                        ['uclust_test_seqs_1'],
--                        ['uclust_test_seqs_2'],
--                        ['uclust_test_seqs_3'],
--                        ['uclust_test_seqs_4'],
--                        ['uclust_test_seqs_5'],
--                        ['uclust_test_seqs_6','uclust_test_seqs_8'],
--                        ['uclust_test_seqs_7'],
--                        ['uclust_test_seqs_9']]
--        obs_otu_ids = []
--        obs_clusters = []
--        for line in result_file_str.split('\n'):
--            if line:
--                fields = line.split('\t')
--                obs_otu_ids.append(fields[0])
--                obs_clusters.append(fields[1:])
--        obs_otu_ids.sort()
--        obs_clusters.sort()
--        # The relation between otu ids and clusters is abitrary, and 
--        # is not stable due to use of dicts when parsing clusters -- therefore
--        # just checks that we have the expected group of each
--        self.assertEqual(obs_otu_ids, exp_otu_ids)
--        self.assertEqual(obs_clusters, exp_clusters)
--        # confirm that nothing is returned when result_path is specified
--        self.assertEqual(obs,None)
--        
--    def test_call_log_file(self):
--        """UclustOtuPicker.__call__ writes log when expected
--        """
--        
--        tmp_log_filepath = get_tmp_filename(\
--         prefix='UclustOtuPickerTest.test_call_output_to_file_l_',\
--         suffix='.txt')
--        tmp_result_filepath = get_tmp_filename(\
--         prefix='UclustOtuPickerTest.test_call_output_to_file_r_',\
--         suffix='.txt')
--        
--        app = UclustOtuPicker(params={'Similarity':0.99,
--                                      'save_uc_files':False})
--        obs = app(self.tmp_seq_filepath1,\
--         result_path=tmp_result_filepath,log_path=tmp_log_filepath)
--        
--        log_file = open(tmp_log_filepath)
--        log_file_str = log_file.read()
--        log_file.close()
--        # remove the temp files before running the test, so in 
--        # case it fails the temp file is still cleaned up
--        remove(tmp_log_filepath)
--        remove(tmp_result_filepath)
--        
--        log_file_99_exp = ["UclustOtuPicker parameters:",
--         "Similarity:0.99","Application:uclust",
--         "enable_rev_strand_matching:False",
--         "suppress_sort:True",
--         "optimal:False",
--         'max_accepts:20',
--         'max_rejects:500',
--         'stepwords:20',
--         'word_length:12',
--         "exact:False",
--         "Num OTUs:10",
--         "new_cluster_identifier:None",
--         "presort_by_abundance:True",
--         "stable_sort:True",
--         "output_dir:.",
--         "save_uc_files:False",
--         "prefilter_identical_sequences:True",
--         "Result path: %s" % tmp_result_filepath]
--        # compare data in log file to fake expected log file
--        # NOTE: Since app.params is a dict, the order of lines is not
--        # guaranteed, so testing is performed to make sure that 
--        # the equal unordered lists of lines is present in actual and expected
--        self.assertEqualItems(log_file_str.split('\n'), log_file_99_exp)
-+#    def test_call_rev_matching(self):
-+#        """UclustOtuPicker.__call__ handles reverse strand matching
-+#        """
-+#        exp_otu_ids = range(2)
-+#        exp_clusters = [['uclust_test_seqs_0'],['uclust_test_seqs_0_rc']]
-+#        app = UclustOtuPicker(params={'Similarity':0.90,
-+#                                      'enable_rev_strand_matching':False,
-+#                                      'suppress_sort':False,
-+#                                      'presort_by_abundance':False,
-+#                                      'save_uc_files':False})
-+#        obs = app(self.tmp_seq_filepath3)
-+#        obs_otu_ids = obs.keys()
-+#        obs_otu_ids.sort()
-+#        obs_clusters = obs.values()
-+#        obs_clusters.sort()
-+#        # The relation between otu ids and clusters is abitrary, and 
-+#        # is not stable due to use of dicts when parsing clusters -- therefore
-+#        # just checks that we have the expected group of each
-+#        self.assertEqual(obs_otu_ids, exp_otu_ids)
-+#        self.assertEqual(obs_clusters, exp_clusters)
-+#        
-+#        exp = {0: ['uclust_test_seqs_0','uclust_test_seqs_0_rc']}
-+#        app = UclustOtuPicker(params={'Similarity':0.90,
-+#                                      'enable_rev_strand_matching':True,
-+#                                      'suppress_sort':False,
-+#                                      'presort_by_abundance':False,
-+#                                      'save_uc_files':False})
-+#        obs = app(self.tmp_seq_filepath3)
-+#        self.assertEqual(obs, exp)
-+        
-+#    def test_call_output_to_file(self):
-+#        """UclustHitOtuPicker.__call__ output to file functions as expected
-+#        """
-+#        
-+#        tmp_result_filepath = get_tmp_filename(\
-+#         prefix='UclustOtuPickerTest.test_call_output_to_file_',\
-+#         suffix='.txt')
-+#        
-+#        app = UclustOtuPicker(params={'Similarity':0.90,
-+#                                      'suppress_sort':False,
-+#                                      'presort_by_abundance':False,
-+#                                      'save_uc_files':False})
-+#        obs = app(self.tmp_seq_filepath1,result_path=tmp_result_filepath)
-+#        
-+#        result_file = open(tmp_result_filepath)
-+#        result_file_str = result_file.read()
-+#        result_file.close()
-+#        # remove the result file before running the test, so in 
-+#        # case it fails the temp file is still cleaned up
-+#        remove(tmp_result_filepath)
-+#
-+#        exp_otu_ids = map(str,range(9))
-+#        exp_clusters = [['uclust_test_seqs_0'],
-+#                        ['uclust_test_seqs_1'],
-+#                        ['uclust_test_seqs_2'],
-+#                        ['uclust_test_seqs_3'],
-+#                        ['uclust_test_seqs_4'],
-+#                        ['uclust_test_seqs_5'],
-+#                        ['uclust_test_seqs_6','uclust_test_seqs_8'],
-+#                        ['uclust_test_seqs_7'],
-+#                        ['uclust_test_seqs_9']]
-+#        obs_otu_ids = []
-+#        obs_clusters = []
-+#        for line in result_file_str.split('\n'):
-+#            if line:
-+#                fields = line.split('\t')
-+#                obs_otu_ids.append(fields[0])
-+#                obs_clusters.append(fields[1:])
-+#        obs_otu_ids.sort()
-+#        obs_clusters.sort()
-+#        # The relation between otu ids and clusters is abitrary, and 
-+#        # is not stable due to use of dicts when parsing clusters -- therefore
-+#        # just checks that we have the expected group of each
-+#        self.assertEqual(obs_otu_ids, exp_otu_ids)
-+#        self.assertEqual(obs_clusters, exp_clusters)
-+#        # confirm that nothing is returned when result_path is specified
-+#        self.assertEqual(obs,None)
-+        
-+#    def test_call_log_file(self):
-+#        """UclustOtuPicker.__call__ writes log when expected
-+#        """
-+#        
-+#        tmp_log_filepath = get_tmp_filename(\
-+#         prefix='UclustOtuPickerTest.test_call_output_to_file_l_',\
-+#         suffix='.txt')
-+#        tmp_result_filepath = get_tmp_filename(\
-+#         prefix='UclustOtuPickerTest.test_call_output_to_file_r_',\
-+#         suffix='.txt')
-+#        
-+#        app = UclustOtuPicker(params={'Similarity':0.99,
-+#                                      'save_uc_files':False})
-+#        obs = app(self.tmp_seq_filepath1,\
-+#         result_path=tmp_result_filepath,log_path=tmp_log_filepath)
-+#        
-+#        log_file = open(tmp_log_filepath)
-+#        log_file_str = log_file.read()
-+#        log_file.close()
-+#        # remove the temp files before running the test, so in 
-+#        # case it fails the temp file is still cleaned up
-+#        remove(tmp_log_filepath)
-+#        remove(tmp_result_filepath)
-+#        
-+#        log_file_99_exp = ["UclustOtuPicker parameters:",
-+#         "Similarity:0.99","Application:uclust",
-+#         "enable_rev_strand_matching:False",
-+#         "suppress_sort:True",
-+#         "optimal:False",
-+#         'max_accepts:20',
-+#         'max_rejects:500',
-+#         'stepwords:20',
-+#         'word_length:12',
-+#         "exact:False",
-+#         "Num OTUs:10",
-+#         "new_cluster_identifier:None",
-+#         "presort_by_abundance:True",
-+#         "stable_sort:True",
-+#         "output_dir:.",
-+#         "save_uc_files:False",
-+#         "prefilter_identical_sequences:True",
-+#         "Result path: %s" % tmp_result_filepath]
-+#        # compare data in log file to fake expected log file
-+#        # NOTE: Since app.params is a dict, the order of lines is not
-+#        # guaranteed, so testing is performed to make sure that 
-+#        # the equal unordered lists of lines is present in actual and expected
-+#        self.assertEqualItems(log_file_str.split('\n'), log_file_99_exp)
-         
-         
-     def test_map_filtered_clusters_to_full_clusters(self):
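The tests being commented out above all share one comparison idiom: OTU ids come from dict iteration, so the id-to-cluster assignment is arbitrary, and ids and clusters must each be sorted before comparing. A minimal Python 3 sketch of that idiom (the helper name `clusters_match` is hypothetical, not part of QIIME):

```python
def clusters_match(observed, expected):
    """Compare {otu_id: [seq_ids]} mappings ignoring arbitrary id assignment.

    Ids and clusters are sorted independently, mirroring the
    obs_otu_ids.sort() / obs_clusters.sort() pattern in the tests above.
    """
    obs_ids = sorted(observed.keys())
    exp_ids = sorted(expected.keys())
    obs_clusters = sorted(sorted(c) for c in observed.values())
    exp_clusters = sorted(sorted(c) for c in expected.values())
    return obs_ids == exp_ids and obs_clusters == exp_clusters

# Same grouping, different id order and member order: still a match.
obs = {1: ['s3', 's1'], 0: ['s2']}
exp = {0: ['s1', 's3'], 1: ['s2']}
assert clusters_match(obs, exp)
```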


=====================================
debian/patches/fix_binary_helper_location.patch deleted
=====================================
@@ -1,17 +0,0 @@
-Last-Update: Mon, 06 Aug 2012 09:22:42 +0200
-Description: Fix path to binary helper for denoiser
-
---- a/qiime/denoiser/utils.py
-+++ b/qiime/denoiser/utils.py
-@@ -60,8 +60,9 @@ def get_denoiser_data_dir():
- def get_flowgram_ali_exe():
-     """Return the path to the flowgram alignment prog
-     """
--    fp = get_qiime_scripts_dir() + "/FlowgramAli_4frame"
--    return fp
-+    #fp = get_qiime_scripts_dir() + "/FlowgramAli_4frame"
-+    #return fp
-+    return "/usr/lib/qiime/support_files/denoiser/bin/FlowgramAli_4frame"
- 
- def check_flowgram_ali_exe():
-    """Check if we have a working FlowgramAligner"""


=====================================
debian/patches/fix_path_for_support_files.patch deleted
=====================================
@@ -1,26 +0,0 @@
-Author: Tim Booth <tbooth at ceh.ac.uk>
-Last-Update: Mon, 10 Mar 2014 14:20:08 +0000
-Description: This may be a much simpler fix than patching every single mention
- of support_files, but I'm not sure what else this function is used
- to find?
-
---- a/qiime/util.py
-+++ b/qiime/util.py
-@@ -268,14 +268,10 @@
- 
- def get_qiime_project_dir():
-     """ Returns the top-level QIIME directory
--    """
--    # Get the full path of util.py
--    current_file_path = abspath(__file__)
--    # Get the directory containing util.py
--    current_dir_path = dirname(current_file_path)
--    # Return the directory containing the directory containing util.py
--    return dirname(current_dir_path)
- 
-+    In Debian we know this is always /usr/lib/[qiime]
-+    """
-+    return "/usr/lib"
- 
- def get_qiime_scripts_dir():
-     """Return the directory containing QIIME scripts.


=====================================
debian/patches/fix_script_usage_tests.patch deleted
=====================================
@@ -1,38 +0,0 @@
-Author: Tim Booth <tbooth at ceh.ac.uk>
-Last-Update: Mon, 10 Mar 2014 14:20:08 +0000
-Description: Enhancing unit tests
-
---- a/qiime/test.py
-+++ b/qiime/test.py
-@@ -15,7 +15,7 @@
- from os import chdir, getcwd
- from shutil import copytree, rmtree
- from glob import glob
--from site import addsitedir
-+import sys
- from tempfile import NamedTemporaryFile
- from traceback import format_exc
- from skbio.util import remove_files
-@@ -806,7 +806,9 @@
-         self._log('Scripts to test:\n %s' % ' '.join(scripts))
-         self._log('')
- 
--        addsitedir(scripts_dir)
-+        #addsitedir(scripts_dir)
-+        # This is not strong enough.  The scripts-dir must be the first thing in the PATH
-+        sys.path = [ scripts_dir ] + sys.path
- 
-         for script_name in scripts:
-             self.total_scripts += 1
---- a/tests/all_tests.py
-+++ b/tests/all_tests.py
-@@ -96,6 +96,9 @@
-                     bad_tests.append(unittest_name)
- 
-     qiime_test_data_dir = join(get_qiime_project_dir(), 'qiime_test_data')
-+    #Allow tests to be run without installing test data to system dir
-+    if exists("../qiime_test_data"):
-+	qiime_test_data_dir = "../qiime_test_data"
-     qiime_test_data_dir_exists = exists(qiime_test_data_dir)
-     if not opts.suppress_script_usage_tests and qiime_test_data_dir_exists:
-         if opts.script_usage_tests is not None:
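The comment in the hunk above notes that `addsitedir` is "not strong enough": it appends to `sys.path`, so a same-named module elsewhere can still shadow the scripts under test. A small sketch of the prepend fix (the helper is hypothetical; the patch inlines the equivalent one-liner):

```python
import sys

def prepend_to_path(scripts_dir):
    """Put scripts_dir first on sys.path so it wins any name clash."""
    if scripts_dir in sys.path:
        sys.path.remove(scripts_dir)  # avoid duplicates
    sys.path.insert(0, scripts_dir)

prepend_to_path('/opt/qiime/scripts')
assert sys.path[0] == '/opt/qiime/scripts'
```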


=====================================
debian/patches/make_qiime_accept_new_rdp_classifier.patch deleted
=====================================
@@ -1,50 +0,0 @@
-Author: Tim Booth <tbooth at ceh.ac.uk>
-Last-Update: Tue, 11 Jun 2013 16:49:19 +0100
-Description: This patch twists QIIME's arm to accept running a newer version
- of the RDP classifier by setting RDP_JAR_VERSION_OK, which is done
- by the QIIME wrapper.
- This is a nasty hack and hopefully the patch can be dropped for QIIME 1.6
-
---- a/qiime/assign_taxonomy.py
-+++ b/qiime/assign_taxonomy.py
-@@ -54,14 +54,19 @@
-             "http://qiime.org/install/install.html#rdp-install"
-         )
- 
--    rdp_jarname = os.path.basename(rdp_jarpath)
--    version_match = re.search("\d\.\d", rdp_jarname)
--    if version_match is None:
--        raise RuntimeError(
--            "Unable to detect RDP Classifier version in file %s" % rdp_jarname
--        )
-+    #Patch for Bio-Linux/Debian.  Allow us to reassure QIIME about the version
-+    #of RDP Classifier using an environment variable.
-+    if os.getenv('RDP_JAR_VERSION_OK') is not None :
-+	version = os.getenv('RDP_JAR_VERSION_OK')
-+    else :
-+	rdp_jarname = os.path.basename(rdp_jarpath)
-+	version_match = re.search("\d\.\d", rdp_jarname)
-+	if version_match is None:
-+	    raise RuntimeError(
-+		"Unable to detect RDP Classifier version in file %s" % rdp_jarname
-+		)
-+        version = float(version_match.group())
- 
--    version = float(version_match.group())
-     if version < 2.1:
-         raise RuntimeError(
-             "RDP Classifier does not look like version 2.2 or greater."
---- a/scripts/assign_taxonomy.py
-+++ b/scripts/assign_taxonomy.py
-@@ -366,6 +366,11 @@
-             'training_data_properties_fp'] = opts.training_data_properties_fp
-         params['max_memory'] = "%sM" % opts.rdp_max_memory
- 
-+	#Record actual RDP version.  This shouldn't fail as it was called once
-+	#already.
-+	params['real_rdp_version'] = str(validate_rdp_version())
-+
-+
-     elif assignment_method == 'rtax':
-         params['id_to_taxonomy_fp'] = opts.id_to_taxonomy_fp
-         params['reference_sequences_fp'] = opts.reference_seqs_fp
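The "arm-twisting" in the patch above boils down to: let an environment variable (RDP_JAR_VERSION_OK, exported by the Debian wrapper) stand in for the version normally parsed out of the RDP jar filename. A self-contained sketch of that logic (simplified from the hunk; not QIIME's exact code):

```python
import os
import re

def detect_rdp_version(jar_path):
    """Return the RDP Classifier version, honoring an env-var override."""
    override = os.getenv('RDP_JAR_VERSION_OK')
    if override is not None:
        return float(override)
    match = re.search(r"\d\.\d", os.path.basename(jar_path))
    if match is None:
        raise RuntimeError(
            "Unable to detect RDP Classifier version in file %s"
            % os.path.basename(jar_path))
    return float(match.group())
```

The override lets the wrapper vouch for a newer classifier without the filename matching QIIME's expectations, which is exactly why the patch calls itself a nasty hack.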


=====================================
debian/patches/prevent_download_on_builds.patch deleted
=====================================
@@ -1,18 +0,0 @@
-Author: Tim Booth <tbooth at ceh.ac.uk>, Andreas Tille <tille at debian.org>
-Last-Update: Wed, 22 Jan 2014 08:51:45 +0100
-Description: Do not try to download uclust at build time
-
---- a/setup.py
-+++ b/setup.py
-@@ -358,9 +358,8 @@
-         chdir(cwd)
- 
- 
--# don't build any of the non-Python dependencies if the following modes are
--# invoked
--if all([e not in sys.argv for e in 'egg_info', 'sdist', 'register']):
-+# don't build any of the non-Python dependencies - let DPKG handle it
-+if False:
-     catch_install_errors(build_denoiser, 'denoiser')
-     catch_install_errors(download_UCLUST, 'UCLUST')
-     catch_install_errors(build_FastTree, 'FastTree')
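The upstream gate this patch replaces with `if False:` skips building vendored binaries when setup.py runs in metadata-only modes. A sketch of that condition as a testable function (the function name is hypothetical):

```python
import sys

# Modes taken from the hunk above: no native builds during these.
SKIP_MODES = ('egg_info', 'sdist', 'register')

def should_build_natives(argv=None):
    """True when none of the metadata-only modes appear on the command line."""
    argv = sys.argv if argv is None else argv
    return all(mode not in argv for mode in SKIP_MODES)

assert should_build_natives(['setup.py', 'install']) is True
assert should_build_natives(['setup.py', 'sdist']) is False
```

The Debian change short-circuits this entirely because dpkg supplies uclust, FastTree, and the denoiser as separate packages.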


=====================================
debian/patches/prevent_google_addsense.patch deleted
=====================================
@@ -1,28 +0,0 @@
-Author: Andreas Tille <tille at debian.org>
-Last-Update: Sat, 21 Dec 2013 08:50:20 +0100
-Description: Remove Google Addsense from user documentation
- to save user privacy
-
-
---- a/doc/_templates/layout.html
-+++ b/doc/_templates/layout.html
-@@ -2,8 +2,6 @@
- 
- {% block extrahead %}
- <meta http-equiv="Content-Style-Type" content="text/css" />
--<script type="text/javascript" src="http://www.google.com/jsapi?key=ABQIAAAAbW_pA971hrPgosv-Msv7hRRE2viNBUPuU405tK6p2cguOFmlFBQSwZMG6_q_v6Z42nkdo9ejT1aHmA"></script>
--<script type="text/javascript" src="{{ pathto("_static/google_feed.js",1)}}"></script>
- {% endblock %}
- 
- {% block relbar1 %}
-@@ -47,10 +45,6 @@
- <br /></div>
- {{ super() }}
- <script type="text/javascript">
--var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
--document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
--</script>
--<script type="text/javascript">
- try {
- var pageTracker = _gat._getTracker("UA-6636235-4");
- pageTracker._trackPageview();


=====================================
debian/patches/relax_mothur_blast_raxml_versions.patch deleted
=====================================
@@ -1,60 +0,0 @@
-Author: Tim Booth <tbooth at ceh.ac.uk>
-Last-Update: Mon, 10 Mar 2014 14:20:08 +0000
-Description: Enable running more tests successfully by relaxing version number
-
---- a/scripts/print_qiime_config.py
-+++ b/scripts/print_qiime_config.py
-@@ -326,7 +326,7 @@
-         version_string = stdout.strip().split('v')[-1].strip('q')
-         try:
-             version = tuple(map(int, version_string.split('.')))
--            pass_test = version == acceptable_version
-+            pass_test = version >= acceptable_version
-         except ValueError:
-             pass_test = False
-             version_string = stdout
-@@ -360,7 +360,7 @@
- 
-         try:
-             version = tuple(map(int, version_str.split('.')))
--            pass_test = version == acceptable_version
-+            pass_test = version >= acceptable_version
-         except ValueError:
-             pass_test = False
- 
-@@ -460,7 +460,7 @@
-         version_string = stdout.strip().split(' ')[1].strip()
-         try:
-             version = tuple(map(int, version_string.split('.')))
--            pass_test = version == acceptable_version
-+            pass_test = version >= acceptable_version
-         except ValueError:
-             pass_test = False
-             version_string = stdout
-@@ -481,7 +481,7 @@
-         version_string = stdout.strip().split(' ')[2].strip()
-         try:
-             version = tuple(map(int, version_string.split('.')))
--            pass_test = version == acceptable_version
-+            pass_test = version >= acceptable_version
-         except ValueError:
-             pass_test = False
-             version_string = stdout
-@@ -548,7 +548,7 @@
-         version_string = stdout.strip().split(' ')[1].strip('v.')
-         try:
-             version = tuple(map(int, version_string.split('.')))
--            pass_test = version == acceptable_version
-+            pass_test = version >= acceptable_version
-         except ValueError:
-             pass_test = False
-             version_string = stdout
-@@ -573,7 +573,7 @@
- 
-     def test_raxmlHPC_supported_version(self):
-         """raxmlHPC is in path and version is supported """
--        acceptable_version = [(7, 3, 0), (7, 3, 0)]
-+        acceptable_version = [(7, 3, 0), (7, 3, 5)]
-         self.assertTrue(which('raxmlHPC'),
-                         "raxmlHPC not found. This may or may not be a problem depending on " +
-                         "which components of QIIME you plan to use.")


=====================================
debian/patches/series
=====================================
@@ -1,10 +1 @@
-#allow_empty_default_taxonomy.patch
-#check_config_file_in_new_location.patch
-#make_qiime_accept_new_rdp_classifier.patch
-#fix_path_for_support_files.patch
-#relax_mothur_blast_raxml_versions.patch
-#prevent_google_addsense.patch
-#fix_script_usage_tests.patch
-#prevent_download_on_builds.patch
-##exclude_tests_that_need_to_fail.patch
 0000_fixme_hack_around_UnicodeDecodeError_in_bibtex.patch


=====================================
debian/qiime-doc.doc-base deleted
=====================================
@@ -1,38 +0,0 @@
-Document: qiime
-Title: QIIME: Quantitative Insights Into Microbial Ecology
-Author: Greg Caporaso <gregcaporaso at gmail.com>
-Abstract: Quantitative Insights Into Microbial Ecology
- QIIME (canonically pronounced ‘Chime’) is a pipeline for performing
- microbial community analysis that integrates many third party tools
- which have become standard in the field. 
- .
- Rather than reimplementing commonly used algorithms, QIIME wraps popular
- implementations of those algorithms. This allows us to make use of the
- many excellent tools available in this area, and allows faster
- integration of new tools. If you use tools that you think would be
- useful additions to QIIME, consider submitting a feature request.
- .
- A standard QIIME analysis begins with sequence data from one or more
- sequencing platforms, including Sanger, Roche/454, and Illumina GAIIx.
- QIIME can perform library de-multiplexing and quality filtering;
- denoising with AmpliconNoise or the QIIME Denoiser; OTU and
- representative set picking with uclust, cdhit, mothur, BLAST, or other
- tools; taxonomy assignment with BLAST or the RDP classifier; sequence
- alignment with PyNAST, muscle, infernal, or other tools; phylogeny
- reconstruction with FastTree, raxml, clearcut, or other tools; alpha
- diversity and rarefaction, including visualization of results, using
- over 20 metrics including Phylogenetic Diversity, chao1, and observed
- species; beta diversity and rarefaction, including visualization of
- results, using over 25 metrics including weighted and unweighted
- UniFrac, Euclidean distance, and Bray-Curtis; summarization and
- visualization of taxonomic composition of samples using area, bar and
- pie charts along with distance histograms; and many other features.
- While QIIME is primarily used for analysis of amplicon data, many of the
- downstream analysis pipeline (such as alpha rarefaction and jackknifed
- beta diversity) can be performed on any type of sample x observation
- tables if they are formatted correctly.
-Section: Science/Biology
-
-Format: html
-Files: /usr/share/doc/qiime/html/*
-Index: /usr/share/doc/qiime/html/index.html


=====================================
debian/qiime-doc.links deleted
=====================================
@@ -1,2 +0,0 @@
-usr/share/javascript/jquery/jquery.js		usr/share/doc/qiime/html/_static/jquery.js
-usr/share/javascript/underscore/underscore.js	usr/share/doc/qiime/html/_static/underscore.js


=====================================
debian/qiime_config deleted
=====================================
@@ -1,28 +0,0 @@
-# qiime_config
-# WARNING: DO NOT EDIT OR DELETE /etc/qiime/qiime_config
-# To overwrite defaults, copy this file to $HOME/.qiime_config or a path
-# specified by $QIIME_CONFIG_FP and edit that copy of the file.
-
-# This file refers to default GreenGenes data files installed by the qiime-data
-# package.
-
-cluster_jobs_fp
-python_exe_fp	    python
-working_dir	    .
-blastmat_dir	    /usr/share/ncbi/data
-blastall_fp	    blastall
-pynast_template_alignment_fp	/usr/share/qiime/data/core_set_aligned.fasta.imputed
-#template_alignment_lanemask_fp	/usr/share/qiime/data/lanemask_in_1s_and_0s
-pynast_template_alignment_blastdb
-jobs_to_start	    1
-seconds_to_sleep    60
-qiime_scripts_dir   /usr/lib/qiime/bin/
-temp_dir	    /tmp
-
-# Uncomment these to enable always re-training the RDP classifier, which is the
-# default behaviour for QIIME 1.9.  This takes a lot of time and RAM, so
-# for Bio-Linux the default behaviour is to quickly use the built-in index.
-# If you are using another assignment method you may need to specify these settings
-# in all cases.
-assign_taxonomy_id_to_taxonomy_fp  # /usr/share/qiime/data/gg_13_8_otus/taxonomy/97_otu_taxonomy.txt
-assign_taxonomy_reference_seqs_fp  # /usr/share/qiime/data/gg_13_8_otus/rep_set/97_otus.fasta
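The deleted `qiime_config` above uses a simple line format: one `key<whitespace>value` pair per line, `#` comments, and keys that may legitimately have no value (e.g. `cluster_jobs_fp`). A rough sketch of parsing such a file — hypothetical function, the real QIIME parser may differ in detail:

```python
# Sketch of reading a qiime_config-style file: one "key<whitespace>value"
# pair per line, "#" comments (full-line or trailing), and keys with no
# value allowed (stored as None).

def load_qiime_config(text):
    config = {}
    for line in text.splitlines():
        line = line.split('#')[0].strip()  # drop comments and padding
        if not line:
            continue
        parts = line.split(None, 1)  # split on any whitespace, at most once
        key = parts[0]
        config[key] = parts[1].strip() if len(parts) > 1 else None
    return config

sample = "python_exe_fp\tpython\nworking_dir\t.\ncluster_jobs_fp\n"
print(load_qiime_config(sample))
# {'python_exe_fp': 'python', 'working_dir': '.', 'cluster_jobs_fp': None}
```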


=====================================
debian/scripts/print_qiime_config_all deleted
=====================================
@@ -1,18 +0,0 @@
-#!/bin/sh
-
-print_qiime_config.py -t
-
-echo
-
-echo "Here are the versions of the packages that QIIME depends on as reported by"
-echo "the system package manager:"
-echo
-
-dpkg -s qiime | perl -ne '/^Depends: (.*)/ &&
-  map {s/[ :].*//;
-       printf "%-26s: %s", $_, `dpkg -s "$_" | sed -n "/Version:/s/.* //p"`
-   }
-   grep {! /^lib/}
-   sort
-   split(/, /,"$1, mothur")' | uniq \
-&& { echo ; echo OK ; }
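The deleted helper above pipes `dpkg -s qiime` through a Perl one-liner that extracts bare package names from the `Depends:` field (stripping version constraints and skipping `lib*` packages) before querying each one's version. A rough Python equivalent of just that parsing step — hypothetical function, for illustration only:

```python
# Sketch: extract bare package names from a dpkg-style "Depends:" value,
# mirroring the deleted Perl one-liner's s/[ :].*// and grep {! /^lib/}.

def depends_packages(depends_field):
    """Split a Depends value into sorted bare package names.

    Strips version constraints ("pkg (>= 1.0)") and architecture
    qualifiers ("pkg:any"), and skips lib* packages as the script did.
    """
    names = []
    for entry in depends_field.split(', '):
        name = entry.split(' ')[0].split(':')[0]
        if not name.startswith('lib'):
            names.append(name)
    return sorted(names)

print(depends_packages('python (>= 2.7), libjs-jquery, mothur, blast2:any'))
# ['blast2', 'mothur', 'python']
```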


=====================================
debian/source/lintian-overrides deleted
=====================================
@@ -1,4 +0,0 @@
-# These files are actually editable JS source files and the lintian error is a false positive
-qiime source: source-is-missing qiime/support_files/js/overlib.js*
-qiime source: source-is-missing qiime/support_files/js/otu_count_display.js*
-qiime source: source-is-missing qiime_test_data/make_2d_plots/js/overlib.js*


=====================================
debian/watch
=====================================
@@ -1,4 +1,3 @@
 version=4
 
-##opts="repacksuffix=+dfsg,dversionmangle=auto,repack,compression=xz" \
 https://github.com/qiime2/qiime2/releases .*/archive/(\d[\d.]+)@ARCHIVE_EXT@



View it on GitLab: https://salsa.debian.org/med-team/qiime/compare/c4c1063f9ef9a29ea1c74344ab0d9e8a9e1cf98d...7d6b53826988db773b76d73d5606098497efba53

-- 
You're receiving this email because of your account on salsa.debian.org.


More information about the debian-med-commit mailing list