[med-svn] [Git][med-team/python-cutadapt][master] 3 commits: New upstream version 1.16

Andreas Tille gitlab at salsa.debian.org
Thu Mar 1 22:03:59 UTC 2018


Andreas Tille pushed to branch master at Debian Med / python-cutadapt


Commits:
de9edc7e by Andreas Tille at 2018-03-01T22:49:43+01:00
New upstream version 1.16
- - - - -
cb56526a by Andreas Tille at 2018-03-01T22:49:45+01:00
Update upstream source from tag 'upstream/1.16'

Update to upstream version '1.16'
with Debian dir 3500cf0e5c958dfb03aef20070fb0ce7da703997
- - - - -
3d2dd1cf by Andreas Tille at 2018-03-01T22:50:32+01:00
New upstream version

- - - - -


16 changed files:

- CHANGES.rst
- LICENSE
- PKG-INFO
- debian/changelog
- doc/conf.py
- doc/develop.rst
- doc/guide.rst
- setup.cfg
- src/cutadapt.egg-info/PKG-INFO
- src/cutadapt/__main__.py
- src/cutadapt/_align.c
- src/cutadapt/_qualtrim.c
- src/cutadapt/_seqio.c
- src/cutadapt/_version.py
- src/cutadapt/pipeline.py
- src/cutadapt/seqio.py


Changes:

=====================================
CHANGES.rst
=====================================
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -2,6 +2,14 @@
 Changes
 =======
 
+v1.16 (2018-02-21)
+------------------
+
+* Fix :issue:`291`: When processing paired-end reads with multiple cores, there
+  could be errors about incomplete FASTQs although the files are intact.
+* Fix :issue:`280`: Quality trimming statistics incorrectly show the same
+  values for R1 and R2.
+
 v1.15 (2017-11-23)
 ------------------
 
@@ -15,10 +23,9 @@ v1.15 (2017-11-23)
 * The plan is to make multi-core the default (automatically using as many cores as
   are available) in future releases, so please test it and `report an
   issue <https://github.com/marcelm/cutadapt/issues/>`_ if you find problems!
-* `Issue #256 <https://github.com/marcelm/cutadapt/issues/256>`_: ``--discard-untrimmed`` did not
+* Issue :issue:`256`: ``--discard-untrimmed`` did not
   have an effect on non-anchored linked adapters.
-* `Issue #118 <https://github.com/marcelm/cutadapt/issues/118>`_:
-  Added support for demultiplexing of paired-end data.
+* Issue :issue:`118`: Added support for demultiplexing of paired-end data.
 
 
 v1.14 (2017-06-16)


=====================================
LICENSE
=====================================
--- a/LICENSE
+++ b/LICENSE
@@ -1,4 +1,4 @@
-Copyright (c) 2010-2017 Marcel Martin <marcel.martin at scilifelab.se>
+Copyright (c) 2010-2018 Marcel Martin <marcel.martin at scilifelab.se>
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal


=====================================
PKG-INFO
=====================================
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: cutadapt
-Version: 1.15
+Version: 1.16
 Summary: trim adapters from high-throughput sequencing reads
 Home-page: https://cutadapt.readthedocs.io/
 Author: Marcel Martin


=====================================
debian/changelog
=====================================
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+python-cutadapt (1.16-1) unstable; urgency=medium
+
+  * New upstream release
+
+ -- Andreas Tille <tille at debian.org>  Thu, 01 Mar 2018 22:50:26 +0100
+
 python-cutadapt (1.15-1) unstable; urgency=medium
 
   * New upstream release


=====================================
doc/conf.py
=====================================
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -30,6 +30,7 @@ sys.path.insert(0, os.path.abspath(os.path.join(os.pardir, 'src')))
 # ones.
 extensions = [
 	'sphinx.ext.autodoc',
+	'sphinx_issues',
 ]
 
 # Add any paths that contain templates here, relative to this directory.
@@ -46,7 +47,7 @@ master_doc = 'index'
 
 # General information about the project.
 project = u'cutadapt'
-copyright = u'2010-2017, Marcel Martin'
+copyright = u'2010-2018, Marcel Martin'
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the
@@ -66,6 +67,8 @@ if version.endswith('.dirty') and os.environ.get('READTHEDOCS') == 'True':
 # The full version, including alpha/beta/rc tags.
 release = version
 
+issues_uri = 'https://github.com/marcelm/cutadapt/issues/{issue}'
+
 suppress_warnings = ['image.nonlocal_uri']
 
 # The language for content autogenerated by Sphinx. Refer to documentation
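
The new issues_uri setting is the URI template used by the sphinx_issues
extension enabled above: wherever the changelog or documentation uses the
:issue: role, the extension fills the issue number into the {issue}
placeholder to build the GitHub link. A minimal illustration of that
substitution, outside of Sphinx::

	issues_uri = 'https://github.com/marcelm/cutadapt/issues/{issue}'

	# The ":issue:`291`" entry in CHANGES.rst, for example, links to:
	print(issues_uri.format(issue=291))
	# https://github.com/marcelm/cutadapt/issues/291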


=====================================
doc/develop.rst
=====================================
--- a/doc/develop.rst
+++ b/doc/develop.rst
@@ -17,7 +17,7 @@ using a virtualenv. This sequence of commands should work::
 	git clone https://github.com/marcelm/cutadapt.git  # or clone your own fork
 	cd cutadapt
 	virtualenv -p python3 venv  # or omit the "-p python3" for Python 2
-	venv/bin/pip3 install Cython nose tox  # pip3 becomes just pip for Python 2
+	venv/bin/pip3 install Cython pytest nose tox  # pip3 becomes just pip for Python 2
 	venv/bin/pip3 install -e .
 
 Then you can run Cutadapt like this (or activate the virtualenv and omit the
@@ -27,7 +27,7 @@ Then you can run Cutadapt like this (or activate the virtualenv and omit the
 
 The tests can then be run like this::
 
-	venv/bin/nosetests
+	venv/bin/pytest
 
 Or with tox (but then you will need to have binaries for all tested Python
 versions installed)::
@@ -38,13 +38,14 @@ versions installed)::
 Development installation (without virtualenv)
 ---------------------------------------------
 
-Alternatively, if you do not want to use virtualenv, you can do the following
-from within the cloned repository::
+Alternatively, if you do not want to use virtualenv, running the following may
+work from within the cloned repository::
 
 	python3 setup.py build_ext -i  # omit the "3" for Python 2
-	nosetests
+	pytest
 
-This requires Cython and nose to be installed.
+This requires Cython and pytest to be installed. Avoid this method and use a
+virtualenv instead if you can.
 
 
 Code style


=====================================
doc/guide.rst
=====================================
--- a/doc/guide.rst
+++ b/doc/guide.rst
@@ -143,6 +143,10 @@ There are some limitations:
       - ``--untrimmed-output``, ``--untrimmed-paired-output``
       - ``--too-short-output``, ``--too-short-paired-output``
       - ``--too-long-output``, ``--too-long-paired-output``
+
+* Additionally, the following command-line arguments are not compatible with
+  multi-core:
+
       - ``--format``
       - ``--colorspace``
 


=====================================
setup.cfg
=====================================
--- a/setup.cfg
+++ b/setup.cfg
@@ -8,6 +8,6 @@ parentdir_prefix = cutadapt-
 
 [egg_info]
 tag_build = 
-tag_svn_revision = 0
 tag_date = 0
+tag_svn_revision = 0
 


=====================================
src/cutadapt.egg-info/PKG-INFO
=====================================
--- a/src/cutadapt.egg-info/PKG-INFO
+++ b/src/cutadapt.egg-info/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: cutadapt
-Version: 1.15
+Version: 1.16
 Summary: trim adapters from high-throughput sequencing reads
 Home-page: https://cutadapt.readthedocs.io/
 Author: Marcel Martin


=====================================
src/cutadapt/__main__.py
=====================================
--- a/src/cutadapt/__main__.py
+++ b/src/cutadapt/__main__.py
@@ -707,7 +707,12 @@ def main(cmdlineargs=None, default_outfile=sys.stdout):
 				logger.error('Running in parallel is not supported on Python 2')
 			else:
 				logger.error('Running in parallel is currently not supported for '
-					'the given combination of command-line parameters.')
+					'the given combination of command-line parameters.\nThese '
+					'options are not supported: --info-file, --rest-file, '
+					'--wildcard-file, --untrimmed-output, '
+					'--untrimmed-paired-output, --too-short-output, '
+					'--too-short-paired-output, --too-long-output, '
+					'--too-long-paired-output, --format, --colorspace')
 			sys.exit(1)
 	else:
 		runner = pipeline


=====================================
src/cutadapt/_align.c
=====================================
The diff for this file was not included because it is too large.

=====================================
src/cutadapt/_qualtrim.c
=====================================
--- a/src/cutadapt/_qualtrim.c
+++ b/src/cutadapt/_qualtrim.c
@@ -705,7 +705,7 @@ static const char __pyx_k_Quality_trimming[] = "\nQuality trimming.\n";
 static const char __pyx_k_cutadapt__qualtrim[] = "cutadapt._qualtrim";
 static const char __pyx_k_nextseq_trim_index[] = "nextseq_trim_index";
 static const char __pyx_k_quality_trim_index[] = "quality_trim_index";
-static const char __pyx_k_home_marcel_scm_cutadapt_cutada[] = "/home/marcel/scm/cutadapt/cutadapt/src/cutadapt/_qualtrim.pyx";
+static const char __pyx_k_home_marcel_scm_cutadapt_src_cu[] = "/home/marcel/scm/cutadapt/src/cutadapt/_qualtrim.pyx";
 static PyObject *__pyx_n_s_G;
 static PyObject *__pyx_n_s_base;
 static PyObject *__pyx_n_s_bases;
@@ -713,7 +713,7 @@ static PyObject *__pyx_n_s_cutadapt__qualtrim;
 static PyObject *__pyx_n_s_cutoff;
 static PyObject *__pyx_n_s_cutoff_back;
 static PyObject *__pyx_n_s_cutoff_front;
-static PyObject *__pyx_kp_s_home_marcel_scm_cutadapt_cutada;
+static PyObject *__pyx_kp_s_home_marcel_scm_cutadapt_src_cu;
 static PyObject *__pyx_n_s_i;
 static PyObject *__pyx_n_s_main;
 static PyObject *__pyx_n_s_max_i;
@@ -1162,7 +1162,7 @@ static __Pyx_StringTabEntry __pyx_string_tab[] = {
   {&__pyx_n_s_cutoff, __pyx_k_cutoff, sizeof(__pyx_k_cutoff), 0, 0, 1, 1},
   {&__pyx_n_s_cutoff_back, __pyx_k_cutoff_back, sizeof(__pyx_k_cutoff_back), 0, 0, 1, 1},
   {&__pyx_n_s_cutoff_front, __pyx_k_cutoff_front, sizeof(__pyx_k_cutoff_front), 0, 0, 1, 1},
-  {&__pyx_kp_s_home_marcel_scm_cutadapt_cutada, __pyx_k_home_marcel_scm_cutadapt_cutada, sizeof(__pyx_k_home_marcel_scm_cutadapt_cutada), 0, 0, 1, 0},
+  {&__pyx_kp_s_home_marcel_scm_cutadapt_src_cu, __pyx_k_home_marcel_scm_cutadapt_src_cu, sizeof(__pyx_k_home_marcel_scm_cutadapt_src_cu), 0, 0, 1, 0},
   {&__pyx_n_s_i, __pyx_k_i, sizeof(__pyx_k_i), 0, 0, 1, 1},
   {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1},
   {&__pyx_n_s_max_i, __pyx_k_max_i, sizeof(__pyx_k_max_i), 0, 0, 1, 1},
@@ -1201,12 +1201,12 @@ static int __Pyx_InitCachedConstants(void) {
   __pyx_tuple_ = PyTuple_Pack(9, __pyx_n_s_qualities, __pyx_n_s_cutoff_front, __pyx_n_s_cutoff_back, __pyx_n_s_base, __pyx_n_s_s, __pyx_n_s_max_qual, __pyx_n_s_stop, __pyx_n_s_start, __pyx_n_s_i); if (unlikely(!__pyx_tuple_)) __PYX_ERR(0, 7, __pyx_L1_error)
   __Pyx_GOTREF(__pyx_tuple_);
   __Pyx_GIVEREF(__pyx_tuple_);
-  __pyx_codeobj__2 = (PyObject*)__Pyx_PyCode_New(4, 0, 9, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple_, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_marcel_scm_cutadapt_cutada, __pyx_n_s_quality_trim_index, 7, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__2)) __PYX_ERR(0, 7, __pyx_L1_error)
+  __pyx_codeobj__2 = (PyObject*)__Pyx_PyCode_New(4, 0, 9, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple_, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_marcel_scm_cutadapt_src_cu, __pyx_n_s_quality_trim_index, 7, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__2)) __PYX_ERR(0, 7, __pyx_L1_error)
 
   __pyx_tuple__3 = PyTuple_Pack(10, __pyx_n_s_sequence, __pyx_n_s_cutoff, __pyx_n_s_base, __pyx_n_s_bases, __pyx_n_s_qualities, __pyx_n_s_s, __pyx_n_s_max_qual, __pyx_n_s_max_i, __pyx_n_s_i, __pyx_n_s_q); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(0, 52, __pyx_L1_error)
   __Pyx_GOTREF(__pyx_tuple__3);
   __Pyx_GIVEREF(__pyx_tuple__3);
-  __pyx_codeobj__4 = (PyObject*)__Pyx_PyCode_New(3, 0, 10, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__3, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_marcel_scm_cutadapt_cutada, __pyx_n_s_nextseq_trim_index, 52, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__4)) __PYX_ERR(0, 52, __pyx_L1_error)
+  __pyx_codeobj__4 = (PyObject*)__Pyx_PyCode_New(3, 0, 10, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__3, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_marcel_scm_cutadapt_src_cu, __pyx_n_s_nextseq_trim_index, 52, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__4)) __PYX_ERR(0, 52, __pyx_L1_error)
   __Pyx_RefNannyFinishContext();
   return 0;
   __pyx_L1_error:;


=====================================
src/cutadapt/_seqio.c
=====================================
--- a/src/cutadapt/_seqio.c
+++ b/src/cutadapt/_seqio.c
@@ -1137,7 +1137,7 @@ static const char __pyx_k_Expected_at_least_d_arguments[] = "Expected at least %
 static const char __pyx_k_Sequence_name_0_r_sequence_1_r[] = "<Sequence(name={0!r}, sequence={1!r}{2})>";
 static const char __pyx_k_At_line_0_Sequence_descriptions[] = "At line {0}: Sequence descriptions in the FASTQ file don't match ({1!r} != {2!r}).\nThe second sequence description must be either empty or equal to the first description.";
 static const char __pyx_k_Reader_for_FASTQ_files_Does_not[] = "\n\tReader for FASTQ files. Does not support multi-line FASTQ files.\n\t";
-static const char __pyx_k_home_marcel_scm_cutadapt_cutada[] = "/home/marcel/scm/cutadapt/cutadapt/src/cutadapt/_seqio.pyx";
+static const char __pyx_k_home_marcel_scm_cutadapt_src_cu[] = "/home/marcel/scm/cutadapt/src/cutadapt/_seqio.pyx";
 static const char __pyx_k_Function_call_with_ambiguous_arg[] = "Function call with ambiguous argument types";
 static const char __pyx_k_In_read_named_0_r_length_of_qual[] = "In read named {0!r}: length of quality sequence ({1}) and length of read ({2}) do not match";
 static const char __pyx_k_Line_0_in_FASTQ_file_is_expected[] = "Line {0} in FASTQ file is expected to start with '@', but found {1!r}";
@@ -1189,7 +1189,7 @@ static PyObject *__pyx_n_s_file;
 static PyObject *__pyx_n_s_file_2;
 static PyObject *__pyx_n_s_format;
 static PyObject *__pyx_n_s_head;
-static PyObject *__pyx_kp_s_home_marcel_scm_cutadapt_cutada;
+static PyObject *__pyx_kp_s_home_marcel_scm_cutadapt_src_cu;
 static PyObject *__pyx_n_s_i;
 static PyObject *__pyx_n_s_import;
 static PyObject *__pyx_n_s_init;
@@ -6254,7 +6254,7 @@ static __Pyx_StringTabEntry __pyx_string_tab[] = {
   {&__pyx_n_s_file_2, __pyx_k_file_2, sizeof(__pyx_k_file_2), 0, 0, 1, 1},
   {&__pyx_n_s_format, __pyx_k_format, sizeof(__pyx_k_format), 0, 0, 1, 1},
   {&__pyx_n_s_head, __pyx_k_head, sizeof(__pyx_k_head), 0, 0, 1, 1},
-  {&__pyx_kp_s_home_marcel_scm_cutadapt_cutada, __pyx_k_home_marcel_scm_cutadapt_cutada, sizeof(__pyx_k_home_marcel_scm_cutadapt_cutada), 0, 0, 1, 0},
+  {&__pyx_kp_s_home_marcel_scm_cutadapt_src_cu, __pyx_k_home_marcel_scm_cutadapt_src_cu, sizeof(__pyx_k_home_marcel_scm_cutadapt_src_cu), 0, 0, 1, 0},
   {&__pyx_n_s_i, __pyx_k_i, sizeof(__pyx_k_i), 0, 0, 1, 1},
   {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1},
   {&__pyx_n_s_init, __pyx_k_init, sizeof(__pyx_k_init), 0, 0, 1, 1},
@@ -6366,27 +6366,27 @@ static int __Pyx_InitCachedConstants(void) {
   __pyx_tuple__23 = PyTuple_Pack(6, __pyx_n_s_buf, __pyx_n_s_lines, __pyx_n_s_pos, __pyx_n_s_linebreaks_seen, __pyx_n_s_length, __pyx_n_s_data); if (unlikely(!__pyx_tuple__23)) __PYX_ERR(0, 20, __pyx_L1_error)
   __Pyx_GOTREF(__pyx_tuple__23);
   __Pyx_GIVEREF(__pyx_tuple__23);
-  __pyx_codeobj__24 = (PyObject*)__Pyx_PyCode_New(2, 0, 6, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__23, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_marcel_scm_cutadapt_cutada, __pyx_n_s_head, 20, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__24)) __PYX_ERR(0, 20, __pyx_L1_error)
+  __pyx_codeobj__24 = (PyObject*)__Pyx_PyCode_New(2, 0, 6, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__23, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_marcel_scm_cutadapt_src_cu, __pyx_n_s_head, 20, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__24)) __PYX_ERR(0, 20, __pyx_L1_error)
 
   __pyx_tuple__25 = PyTuple_Pack(7, __pyx_n_s_buf, __pyx_n_s_end, __pyx_n_s_pos, __pyx_n_s_linebreaks, __pyx_n_s_length, __pyx_n_s_data, __pyx_n_s_record_start); if (unlikely(!__pyx_tuple__25)) __PYX_ERR(0, 38, __pyx_L1_error)
   __Pyx_GOTREF(__pyx_tuple__25);
   __Pyx_GIVEREF(__pyx_tuple__25);
-  __pyx_codeobj__26 = (PyObject*)__Pyx_PyCode_New(2, 0, 7, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__25, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_marcel_scm_cutadapt_cutada, __pyx_n_s_fastq_head, 38, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__26)) __PYX_ERR(0, 38, __pyx_L1_error)
+  __pyx_codeobj__26 = (PyObject*)__Pyx_PyCode_New(2, 0, 7, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__25, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_marcel_scm_cutadapt_src_cu, __pyx_n_s_fastq_head, 38, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__26)) __PYX_ERR(0, 38, __pyx_L1_error)
 
   __pyx_tuple__27 = PyTuple_Pack(11, __pyx_n_s_buf1, __pyx_n_s_buf2, __pyx_n_s_end1, __pyx_n_s_end2, __pyx_n_s_pos1, __pyx_n_s_pos2, __pyx_n_s_linebreaks, __pyx_n_s_data1, __pyx_n_s_data2, __pyx_n_s_record_start1, __pyx_n_s_record_start2); if (unlikely(!__pyx_tuple__27)) __PYX_ERR(0, 69, __pyx_L1_error)
   __Pyx_GOTREF(__pyx_tuple__27);
   __Pyx_GIVEREF(__pyx_tuple__27);
-  __pyx_codeobj__28 = (PyObject*)__Pyx_PyCode_New(4, 0, 11, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__27, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_marcel_scm_cutadapt_cutada, __pyx_n_s_two_fastq_heads, 69, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__28)) __PYX_ERR(0, 69, __pyx_L1_error)
+  __pyx_codeobj__28 = (PyObject*)__Pyx_PyCode_New(4, 0, 11, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__27, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_marcel_scm_cutadapt_src_cu, __pyx_n_s_two_fastq_heads, 69, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__28)) __PYX_ERR(0, 69, __pyx_L1_error)
 
   __pyx_tuple__29 = PyTuple_Pack(3, __pyx_n_s_self, __pyx_n_s_file, __pyx_n_s_sequence_class); if (unlikely(!__pyx_tuple__29)) __PYX_ERR(0, 175, __pyx_L1_error)
   __Pyx_GOTREF(__pyx_tuple__29);
   __Pyx_GIVEREF(__pyx_tuple__29);
-  __pyx_codeobj__30 = (PyObject*)__Pyx_PyCode_New(3, 0, 3, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__29, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_marcel_scm_cutadapt_cutada, __pyx_n_s_init, 175, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__30)) __PYX_ERR(0, 175, __pyx_L1_error)
+  __pyx_codeobj__30 = (PyObject*)__Pyx_PyCode_New(3, 0, 3, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__29, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_marcel_scm_cutadapt_src_cu, __pyx_n_s_init, 175, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__30)) __PYX_ERR(0, 175, __pyx_L1_error)
 
   __pyx_tuple__31 = PyTuple_Pack(11, __pyx_n_s_self, __pyx_n_s_i, __pyx_n_s_strip, __pyx_n_s_line, __pyx_n_s_name, __pyx_n_s_qualities, __pyx_n_s_sequence, __pyx_n_s_name2, __pyx_n_s_sequence_class, __pyx_n_s_it, __pyx_n_s_second_header); if (unlikely(!__pyx_tuple__31)) __PYX_ERR(0, 184, __pyx_L1_error)
   __Pyx_GOTREF(__pyx_tuple__31);
   __Pyx_GIVEREF(__pyx_tuple__31);
-  __pyx_codeobj__32 = (PyObject*)__Pyx_PyCode_New(1, 0, 11, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__31, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_marcel_scm_cutadapt_cutada, __pyx_n_s_iter, 184, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__32)) __PYX_ERR(0, 184, __pyx_L1_error)
+  __pyx_codeobj__32 = (PyObject*)__Pyx_PyCode_New(1, 0, 11, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__31, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_home_marcel_scm_cutadapt_src_cu, __pyx_n_s_iter, 184, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__32)) __PYX_ERR(0, 184, __pyx_L1_error)
   __Pyx_RefNannyFinishContext();
   return 0;
   __pyx_L1_error:;


=====================================
src/cutadapt/_version.py
=====================================
--- a/src/cutadapt/_version.py
+++ b/src/cutadapt/_version.py
@@ -11,8 +11,8 @@ version_json = '''
 {
  "dirty": false,
  "error": null,
- "full-revisionid": "8d0b61222e77973592bd2fb9e2ce57445fccf8dd",
- "version": "1.15"
+ "full-revisionid": "77ade52bc2a7fe2d278fdb4256c5b46936011c2c",
+ "version": "1.16"
 }
 '''  # END VERSION_JSON
 


=====================================
src/cutadapt/pipeline.py
=====================================
--- a/src/cutadapt/pipeline.py
+++ b/src/cutadapt/pipeline.py
@@ -4,6 +4,7 @@ import io
 import os
 import re
 import sys
+import copy
 import logging
 import functools
 from multiprocessing import Process, Pipe, Queue
@@ -262,7 +263,8 @@ class PairedEndPipeline(Pipeline):
 		"""
 		self._modifiers.append(modifier)
 		if not self._modify_first_read_only:
-			self._modifiers2.append(modifier)
+			modifier2 = copy.copy(modifier)
+			self._modifiers2.append(modifier2)
 		else:
 			self._should_warn_legacy = True
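
This copy.copy() change appears to be the fix for issue 280 noted in
CHANGES.rst above: previously the very same modifier object was appended to
both self._modifiers and self._modifiers2, so any counters the modifier
keeps were shared between both reads and the report showed identical values
for R1 and R2. A minimal, self-contained sketch of the effect, using a
hypothetical Trimmer class rather than cutadapt's actual modifier API::

	import copy

	class Trimmer:
		"""Hypothetical stand-in for a modifier that keeps statistics."""
		def __init__(self):
			self.trimmed_bases = 0  # per-instance statistic

		def __call__(self, read):
			n = len(read) // 4  # placeholder "trimming": drop a quarter
			self.trimmed_bases += n
			return read[:len(read) - n]

	# Old behaviour: both lists hold the *same* object, so counts from R1
	# and R2 are merged and reported identically.
	shared = Trimmer()
	modifiers, modifiers2 = [shared], [shared]

	# Fixed behaviour (as in the diff above): R2 gets a shallow copy and
	# therefore its own, independent counters.
	fixed = Trimmer()
	modifiers, modifiers2 = [fixed], [copy.copy(fixed)]

	modifiers[0]("ACGTACGTACGT")   # 12 bases, counts 3
	modifiers2[0]("ACGT")          # 4 bases, counts 1
	print(modifiers[0].trimmed_bases, modifiers2[0].trimmed_bases)  # 3 1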
 


=====================================
src/cutadapt/seqio.py
=====================================
--- a/src/cutadapt/seqio.py
+++ b/src/cutadapt/seqio.py
@@ -882,7 +882,7 @@ def read_paired_chunks(f, f2, buffer_size=4*1024**2):
 	buf1 = bytearray(buffer_size)
 	buf2 = bytearray(buffer_size)
 
-	# Read one byte to make sure are processing FASTQ
+	# Read one byte to make sure we are processing FASTQ
 	start1 = f.readinto(memoryview(buf1)[0:1])
 	start2 = f2.readinto(memoryview(buf2)[0:1])
 	if (start1 == 1 and buf1[0:1] != b'@') or (start2 == 1 and buf2[0:1] != b'@'):
@@ -890,10 +890,8 @@ def read_paired_chunks(f, f2, buffer_size=4*1024**2):
 
 	while True:
 		bufend1 = f.readinto(memoryview(buf1)[start1:]) + start1
-		if start1 == bufend1:
-			break
 		bufend2 = f2.readinto(memoryview(buf2)[start2:]) + start2
-		if start2 == bufend2:
+		if start1 == bufend1 and start2 == bufend2:
 			break
 
 		end1, end2 = two_fastq_heads(buf1, buf2, bufend1, bufend2)
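
The corrected loop above stops only once *both* input files are exhausted;
the old version broke out as soon as either file returned no further bytes,
even if data for the other file was still pending, which appears to be what
caused the spurious "incomplete FASTQ" errors noted for issue 291 in
CHANGES.rst. A simplified sketch of the corrected exit condition on plain
binary file objects (cutadapt's real read_paired_chunks() additionally
carries partial records over between iterations and aligns the chunk ends
with two_fastq_heads())::

	def paired_chunks(f1, f2, buffer_size=4 * 1024**2):
		# Simplified sketch: yields raw chunk pairs without aligning them
		# to FASTQ record boundaries.
		buf1, buf2 = bytearray(buffer_size), bytearray(buffer_size)
		while True:
			end1 = f1.readinto(buf1)
			end2 = f2.readinto(buf2)
			# Stop only when *both* inputs are exhausted; breaking as soon
			# as one readinto() returns nothing could drop data still
			# pending in the other file and make it look truncated.
			if end1 == 0 and end2 == 0:
				break
			yield memoryview(buf1)[:end1], memoryview(buf2)[:end2]

	# Usage sketch:
	# with open('R1.fastq', 'rb') as f1, open('R2.fastq', 'rb') as f2:
	#     for chunk1, chunk2 in paired_chunks(f1, f2):
	#         ...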



View it on GitLab: https://salsa.debian.org/med-team/python-cutadapt/compare/416427a03251f997cb9de7a623071f691e4a839e...3d2dd1cfa45795b25fcc9d546db36e82c392058a
