[med-svn] [Git][med-team/pycoqc][upstream] New upstream version 2.5.0.23+dfsg

Nilesh Patra gitlab at salsa.debian.org
Wed Aug 19 15:50:31 BST 2020



Nilesh Patra pushed to branch upstream at Debian Med / pycoqc


Commits:
ac4408fc by Nilesh Patra at 2020-08-19T20:14:09+05:30
New upstream version 2.5.0.23+dfsg
- - - - -


7 changed files:

- README.md
- docs/index.md
- meta.yaml
- pycoQC/Fast5_to_seq_summary.py
- pycoQC/__init__.py
- pycoQC/templates/spectre.html.j2
- setup.py


Changes:

=====================================
README.md
=====================================
@@ -8,9 +8,13 @@
 
 [![PyPI version](https://badge.fury.io/py/pycoQC.svg)](https://badge.fury.io/py/pycoQC)
 [![Downloads](https://pepy.tech/badge/pycoqc)](https://pepy.tech/project/pycoqc)
+
 [![Anaconda Version](https://anaconda.org/aleg/pycoqc/badges/version.svg)](https://anaconda.org/aleg/pycoqc)
 [![Anaconda Downloads](https://anaconda.org/aleg/pycoqc/badges/downloads.svg)](https://anaconda.org/aleg/pycoqc)
 
+[![install with bioconda](https://img.shields.io/badge/install%20with-bioconda-brightgreen.svg?style=flat)](http://bioconda.github.io/recipes/pycoqc/README.html)
+[![Bioconda Downloads](https://anaconda.org/bioconda/pycoqc/badges/downloads.svg)](https://anaconda.org/bioconda/pycoqc)
+
 [![Build Status](https://travis-ci.com/a-slide/pycoQC.svg?branch=master)](https://travis-ci.com/a-slide/pycoQC)
 [![Codacy Badge](https://api.codacy.com/project/badge/Grade/07db58961a3c4fc1b6dc34c54079b477)](https://www.codacy.com/app/a-slide/pycoQC?utm_source=github.com&utm_medium=referral&utm_content=a-slide/pycoQC&utm_campaign=Badge_Grade)
 ---
@@ -23,6 +27,8 @@
 
 PycoQC relies on the *sequencing_summary.txt* file generated by Albacore and Guppy, but if needed it can also generate a summary file from basecalled fast5 files. The package supports 1D and 1D2 runs generated with Minion, Gridion and Promethion devices and basecalled with Albacore 1.2.1+ or Guppy 2.1.3+. PycoQC is written in pure Python3. **Python 2 is not supported**.
 
+Great tutorial with detailed explanations by [Tim Kahlke](https://github.com/timkahlke) available at https://timkahlke.github.io/LongRead_tutorials/QC_P.html
+
 ## Gallery
 
 ![summary](./docs/pictures/summary.gif)

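[Editor's note] For context on the two input modes described in the README hunk above, a minimal usage sketch follows. The constructor parameter names are assumptions for illustration and are not taken from this commit; check the upstream documentation for the exact API.

    # Minimal sketch (assumed parameter names, for illustration only)
    from pycoQC.Fast5_to_seq_summary import Fast5_to_seq_summary

    # Mode 1: point pycoQC directly at the sequencing_summary.txt written by
    # Albacore or Guppy (via the pycoQC command line or Python API).

    # Mode 2: if only basecalled fast5 files are available, first generate a
    # summary file from them with the class touched by this commit:
    Fast5_to_seq_summary(
        fast5_dir="fast5/",                       # directory of basecalled fast5 files (assumed name)
        seq_summary_fn="sequencing_summary.txt",  # summary file to write (assumed name)
    )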

=====================================
docs/index.md
=====================================
@@ -6,6 +6,7 @@
 
 PycoQC relies on the *sequencing_summary.txt* file generated by Albacore and Guppy, but if needed it can also generate a summary file from basecalled fast5 files. The package supports 1D and 1D2 runs generated with Minion, Gridion and Promethion devices and basecalled with Albacore 1.2.1+ or Guppy 2.1.3+. PycoQC is written in pure Python3. **Python 2 is not supported**.
 
+Great tutorial with detailed explanations by [Tim Kahlke](https://github.com/timkahlke) available at https://timkahlke.github.io/LongRead_tutorials/QC_P.html
 
 ## Gallery
 


=====================================
meta.yaml
=====================================
@@ -1,4 +1,4 @@
-{% set version = "2.5.0.21" %}
+{% set version = "2.5.0.23" %}
 {% set name = "pycoQC" %}
 
 package:
@@ -23,7 +23,7 @@ requirements:
     - pip>=19.2.1
     - ripgrep>=11.0.1
   run:
-    - python=3.6
+    - python>=3.6
     - numpy=1.17.1
     - scipy=1.3.1
     - pandas=0.25.1
@@ -45,5 +45,9 @@ test:
 
 about:
   home: "https://github.com/a-slide/pycoQC"
-  license: "GPLv3"
+  license: "GNU General Public v3 (GPLv3)"
+  license_file: LICENSE
+  license_family: GPL
   summary: "PycoQC computes metrics and generates interactive QC plots for Oxford Nanopore technologies sequencing data"
+  doc_url: "https://a-slide.github.io/pycoQC/"
+  dev_url: ""


=====================================
pycoQC/Fast5_to_seq_summary.py
=====================================
@@ -197,54 +197,73 @@ class Fast5_to_seq_summary ():
 
             for fast5_fn in iter(in_q.get, None):
 
-                # Try to extract data from the fast5 file
-                d = OrderedDict()
                 with h5py.File(fast5_fn, "r") as h5_fp:
 
-                    # Define group names for current read
-                    grp_dict = {
-                        "raw_read" : "/Raw/Reads/{}/".format(list(h5_fp["/Raw/Reads"].keys())[0]),
-                        "summary_basecall" : "/Analyses/Basecall_1D_{:03}/Summary/basecall_1d_template/".format(self.basecall_id),
-                        "summary_calibration" : "/Analyses/Calibration_Strand_Detection_{:03}/Summary/calibration_strand_template/".format(self.basecall_id),
-                        "summary_barcoding" : "/Analyses/Barcoding_{:03}/Summary/barcoding/".format(self.basecall_id),
-                        "tracking_id" : "UniqueGlobalKey/tracking_id",
-                        "channel_id" : "UniqueGlobalKey/channel_id"}
-
-                    # Fetch required fields is available
-                    for field in self.fields:
-
-                        # Special case for start time
-                        if field == "start_time":
-                            start_time = self._get_h5_attrs (fp=h5_fp,
-                                grp=grp_dict[self.attrs_grp_dict["start_time"]["grp"]],
-                                attrs=self.attrs_grp_dict["start_time"]["attrs"])
-                            sampling_rate = self._get_h5_attrs (fp=h5_fp,
-                                grp=grp_dict[self.attrs_grp_dict["channel_sampling_rate"]["grp"]],
-                                attrs=self.attrs_grp_dict["channel_sampling_rate"]["attrs"])
-                            if start_time and sampling_rate:
-                                d[field] = int(start_time/sampling_rate)
-                                c["fields_found"][field] +=1
-                            else:
-                                c["fields_not_found"][field] +=1
-                        # Everything else
-                        else:
-                            v = self._get_h5_attrs (
-                                fp=h5_fp, grp=grp_dict[self.attrs_grp_dict[field]["grp"]], attrs=self.attrs_grp_dict[field]["attrs"])
-                            if v:
-                                d[field] = v
-                                c["fields_found"][field] +=1
-                            else:
-                                c["fields_not_found"][field] +=1
+                    multi_read = 'file_type' in h5_fp.attrs.keys() and h5_fp.attrs['file_type'] == b'multi-read'
+
+                    if multi_read:
+                        read_ids =  list(h5_fp["/"].keys())
+                    else: 
+                        read_ids = list(h5_fp["/Raw/Reads"].keys())
 
-                if self.include_path:
-                    d["path"] = os.path.abspath(fast5_fn)
+                    for read_id in read_ids:
+                        # Try to extract data from the fast5 file
+                        d = OrderedDict()
 
-                # Put read data in queue
-                if d:
-                    out_q.put(d)
-                    c["overall"]["valid files"] += 1
-                else:
-                    c["overall"]["invalid files"] += 1
+                        if multi_read:
+                            grp_dict = {
+                                "raw_read" : "/{}/Raw/".format(read_id),
+                                "summary_basecall" : "/{read_id}/Analyses/Basecall_1D_{bc_id:03}/Summary/basecall_1d_template/".format(read_id=read_id, bc_id=self.basecall_id),
+                                "summary_calibration" : "/{read_id}/Analyses/Calibration_Strand_Detection_{bc_id:03}/Summary/calibration_strand_template/".format(read_id=read_id, bc_id=self.basecall_id),
+                                "summary_barcoding" : "/{read_id}/Analyses/Barcoding_{bc_id:03}/Summary/barcoding/".format(read_id=read_id, bc_id=self.basecall_id),
+                                "tracking_id" : "/{}/tracking_id".format(read_id),
+                                "channel_id" : "/{}/channel_id".format(read_id)}
+
+                        else:
+                            # Define group names for current read
+                            grp_dict = {
+                                "raw_read" : "/Raw/Reads/{}/".format(read_id),
+                                "summary_basecall" : "/Analyses/Basecall_1D_{:03}/Summary/basecall_1d_template/".format(self.basecall_id),
+                                "summary_calibration" : "/Analyses/Calibration_Strand_Detection_{:03}/Summary/calibration_strand_template/".format(self.basecall_id),
+                                "summary_barcoding" : "/Analyses/Barcoding_{:03}/Summary/barcoding/".format(self.basecall_id),
+                                "tracking_id" : "UniqueGlobalKey/tracking_id",
+                                "channel_id" : "UniqueGlobalKey/channel_id"}
+
+                        # Fetch required fields if available
+                        for field in self.fields:
+
+                            # Special case for start time
+                            if field == "start_time":
+                                start_time = self._get_h5_attrs (fp=h5_fp,
+                                    grp=grp_dict[self.attrs_grp_dict["start_time"]["grp"]],
+                                    attrs=self.attrs_grp_dict["start_time"]["attrs"])
+                                sampling_rate = self._get_h5_attrs (fp=h5_fp,
+                                    grp=grp_dict[self.attrs_grp_dict["channel_sampling_rate"]["grp"]],
+                                    attrs=self.attrs_grp_dict["channel_sampling_rate"]["attrs"])
+                                if start_time and sampling_rate:
+                                    d[field] = int(start_time/sampling_rate)
+                                    c["fields_found"][field] +=1
+                                else:
+                                    c["fields_not_found"][field] +=1
+                            # Everything else
+                            else:
+                                v = self._get_h5_attrs (
+                                    fp=h5_fp, grp=grp_dict[self.attrs_grp_dict[field]["grp"]], attrs=self.attrs_grp_dict[field]["attrs"])
+                                if v:
+                                    d[field] = v
+                                    c["fields_found"][field] +=1
+                                else:
+                                    c["fields_not_found"][field] +=1
+
+                        if self.include_path:
+                            d["path"] = os.path.abspath(fast5_fn)
+
+                        # Put read data in queue
+                        if d:
+                            out_q.put(d)
+                            c["overall"]["valid files"] += 1
+                        else:
+                            c["overall"]["invalid files"] += 1
 
             # Put counter in counter queue
             counter_q.put(c)

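[Editor's note] The substantive change in this hunk is multi-read fast5 support: the worker now checks the file-level file_type attribute and iterates over one group per read instead of assuming a single /Raw/Reads entry. A standalone sketch of that detection logic, mirroring the code added above (the example file path is hypothetical):

    import h5py

    def list_read_groups(fast5_fn):
        """Return the HDF5 group names holding reads, for both fast5 layouts."""
        with h5py.File(fast5_fn, "r") as h5_fp:
            # Multi-read files carry a file-level attribute file_type == b'multi-read'
            # and store each read as a top-level group (e.g. 'read_<uuid>').
            if h5_fp.attrs.get("file_type") == b"multi-read":
                return list(h5_fp["/"].keys())
            # Single-read files keep the read under /Raw/Reads/<read_name>.
            return list(h5_fp["/Raw/Reads"].keys())

    # e.g. list_read_groups("example.fast5")  # hypothetical path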

=====================================
pycoQC/__init__.py
=====================================
@@ -1,4 +1,4 @@
 # -*- coding: utf-8 -*-
 
-__version__ = '2.5.0.21'
+__version__ = '2.5.0.23'
 __all__ = ["pycoQC", "Fast5_to_seq_summary", "Barcode_split", "common"]


=====================================
pycoQC/templates/spectre.html.j2
=====================================
@@ -68,10 +68,6 @@
                 {{ src_files }}
             </div>
 		</div>
-
-	<!-- <div class="column col-12 col-mx-auto text-center">
-		<h6>SHA256: {{ summary_file_hash }}</h6>
-	</div> -->
     </div>
 </div>
 </body>


=====================================
setup.py
=====================================
@@ -5,7 +5,7 @@ from setuptools import setup
 
 # Define package info
 name = "pycoQC"
-version = "2.5.0.21"
+version = "2.5.0.23"
 description = "PycoQC computes metrics and generates interactive QC plots for Oxford Nanopore technologies sequencing data"
 with open("README.md", "r") as fh:
     long_description = fh.read()



View it on GitLab: https://salsa.debian.org/med-team/pycoqc/-/commit/ac4408fcf50d6bea2a6caa9267f7f0e4788c7bb3


