[med-svn] [Git][med-team/snakemake][upstream] New upstream version 5.22.1

Rebecca N. Palmer gitlab at salsa.debian.org
Mon Aug 17 08:38:52 BST 2020



Rebecca N. Palmer pushed to branch upstream at Debian Med / snakemake


Commits:
ede461d5 by Rebecca N. Palmer at 2020-08-16T13:19:27+01:00
New upstream version 5.22.1
- - - - -


27 changed files:

- .github/workflows/main.yml
- CHANGELOG.rst
- + CODE_OF_CONDUCT.md
- Dockerfile
- README.md
- docs/_static/theme.css
- docs/conf.py
- docs/executing/cloud.rst
- + docs/executor_tutorial/azure_aks.rst
- docs/executor_tutorial/google_lifesciences.rst
- docs/executor_tutorial/tutorial.rst
- docs/index.rst
- docs/project_info/authors.rst
- docs/snakefiles/deployment.rst
- docs/snakefiles/remote_files.rst
- docs/snakefiles/reporting.rst
- setup.py
- snakemake/__init__.py
- snakemake/_version.py
- snakemake/executors/__init__.py
- snakemake/executors/google_lifesciences.py
- snakemake/remote/AzureStorage.py → snakemake/remote/AzBlob.py
- snakemake/remote/GS.py
- snakemake/resources.py
- test-environment.yml
- tests/common.py
- tests/test_remote_azure/Snakefile


Changes:

=====================================
.github/workflows/main.yml
=====================================
@@ -94,7 +94,7 @@ jobs:
           pytest -s -v -x tests/test_kubernetes.py
 
       - name: Test AWS execution
-        if: env.AWS_AVAILABLE && success()
+        if: env.AWS_AVAILABLE && (success() || failure())
         env: 
           CI: true
         run: |
@@ -105,7 +105,7 @@ jobs:
           pytest -v -x tests/test_tibanna.py
 
       - name: Test Google Life Sciences Executor
-        if: env.GCP_AVAILABLE
+        if: env.GCP_AVAILABLE && (success() || failure())
         run: |
           # activate conda env
           export PATH="/usr/share/miniconda/bin:$PATH"


=====================================
CHANGELOG.rst
=====================================
@@ -1,3 +1,29 @@
+[5.22.1] - 2020-08-14
+=====================
+Changed
+-------
+- Fixed a missing dependency for google storage in cloud execution.
+
+[5.22.0] - 2020-08-13
+=====================
+Added
+-----
+- Added short option ``-T`` for CLI parameter ``--restart-times`` (@mbhall88).
+
+Changed
+-------
+- Various small fixes for google storage and life sciences backends (@vsoch).
+
+
+[5.21.0] - 2020-08-11
+=====================
+
+Changed
+-------
+- Added default-remote-provider support for Azure storage (@andreas-wilm).
+- Various small bug fixes and documentation improvements.
+
+
 [5.20.1] - 2020-07-08
 =====================
 Changed


=====================================
CODE_OF_CONDUCT.md
=====================================
@@ -0,0 +1,129 @@
+
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+We as members, contributors, and leaders pledge to make participation in our
+community a harassment-free experience for everyone, regardless of age, body
+size, visible or invisible disability, ethnicity, sex characteristics, gender
+identity and expression, level of experience, education, socio-economic status,
+nationality, personal appearance, race, religion, or sexual identity
+and orientation.
+
+We pledge to act and interact in ways that contribute to an open, welcoming,
+diverse, inclusive, and healthy community.
+
+## Our Standards
+
+Examples of behavior that contributes to a positive environment for our
+community include:
+
+* Demonstrating empathy and kindness toward other people
+* Being respectful of differing opinions, viewpoints, and experiences
+* Giving and gracefully accepting constructive feedback
+* Accepting responsibility and apologizing to those affected by our mistakes,
+  and learning from the experience
+* Focusing on what is best not just for us as individuals, but for the
+  overall community
+
+Examples of unacceptable behavior include:
+
+* The use of sexualized language or imagery, and sexual attention or
+  advances of any kind
+* Trolling, insulting or derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or email
+  address, without their explicit permission
+* Other conduct which could reasonably be considered inappropriate in a
+  professional setting
+
+## Enforcement Responsibilities
+
+Community leaders are responsible for clarifying and enforcing our standards of
+acceptable behavior and will take appropriate and fair corrective action in
+response to any behavior that they deem inappropriate, threatening, offensive,
+or harmful.
+
+Community leaders have the right and responsibility to remove, edit, or reject
+comments, commits, code, wiki edits, issues, and other contributions that are
+not aligned to this Code of Conduct, and will communicate reasons for moderation
+decisions when appropriate.
+
+## Scope
+
+This Code of Conduct applies within all community spaces, and also applies when
+an individual is officially representing the community in public spaces.
+Examples of representing our community include using an official e-mail address,
+posting via an official social media account, or acting as an appointed
+representative at an online or offline event.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported via email to johannes.koester at uni-due.de.
+All complaints will be reviewed and investigated promptly and fairly.
+
+All community leaders are obligated to respect the privacy and security of the
+reporter of any incident.
+
+## Enforcement Guidelines
+
+Community leaders will follow these Community Impact Guidelines in determining
+the consequences for any action they deem in violation of this Code of Conduct:
+
+### 1. Correction
+
+**Community Impact**: Use of inappropriate language or other behavior deemed
+unprofessional or unwelcome in the community.
+
+**Consequence**: A private, written warning from community leaders, providing
+clarity around the nature of the violation and an explanation of why the
+behavior was inappropriate. A public apology may be requested.
+
+### 2. Warning
+
+**Community Impact**: A violation through a single incident or series
+of actions.
+
+**Consequence**: A warning with consequences for continued behavior. No
+interaction with the people involved, including unsolicited interaction with
+those enforcing the Code of Conduct, for a specified period of time. This
+includes avoiding interactions in community spaces as well as external channels
+like social media. Violating these terms may lead to a temporary or
+permanent ban.
+
+### 3. Temporary Ban
+
+**Community Impact**: A serious violation of community standards, including
+sustained inappropriate behavior.
+
+**Consequence**: A temporary ban from any sort of interaction or public
+communication with the community for a specified period of time. No public or
+private interaction with the people involved, including unsolicited interaction
+with those enforcing the Code of Conduct, is allowed during this period.
+Violating these terms may lead to a permanent ban.
+
+### 4. Permanent Ban
+
+**Community Impact**: Demonstrating a pattern of violation of community
+standards, including sustained inappropriate behavior, harassment of an
+individual, or aggression toward or disparagement of classes of individuals.
+
+**Consequence**: A permanent ban from any sort of public interaction within
+the community.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+version 2.0, available at
+https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
+
+Community Impact Guidelines were inspired by [Mozilla's code of conduct
+enforcement ladder](https://github.com/mozilla/diversity).
+
+[homepage]: https://www.contributor-covenant.org
+
+For answers to common questions about this code of conduct, see the FAQ at
+https://www.contributor-covenant.org/faq. Translations are available at
+https://www.contributor-covenant.org/translations.
+


=====================================
Dockerfile
=====================================
@@ -8,15 +8,15 @@ ENV SHELL /bin/bash
 RUN /bin/bash -c "install_packages wget bzip2 ca-certificates gnupg2 squashfs-tools git && \
     wget -O- https://neuro.debian.net/lists/xenial.us-ca.full > /etc/apt/sources.list.d/neurodebian.sources.list && \
     wget -O- https://neuro.debian.net/_static/neuro.debian.net.asc | apt-key add - && \
-    install_packages singularity-container && \
-    wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
+    install_packages singularity-container"
+RUN /bin/bash -c "wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
     bash Miniconda3-latest-Linux-x86_64.sh -b -p /opt/conda && \
-    rm Miniconda3-latest-Linux-x86_64.sh && \
-    conda install -y -c conda-forge mamba && \
+    rm Miniconda3-latest-Linux-x86_64.sh"
+RUN /bin/bash -c "conda install -y -c conda-forge mamba && \
     mamba create -q -y -c conda-forge -c bioconda -n snakemake snakemake snakemake-minimal --only-deps && \
     conda clean --all -y && \
     source activate snakemake && \
     which python && \
-    pip install ."
+    pip install .[reports,messaging,google-cloud]"
 RUN echo "source activate snakemake" > ~/.bashrc
 ENV PATH /opt/conda/envs/snakemake/bin:${PATH}


=====================================
README.md
=====================================
@@ -6,6 +6,7 @@
 [![Stack Overflow](https://img.shields.io/badge/stack-overflow-orange.svg)](https://stackoverflow.com/questions/tagged/snakemake)
 [![Twitter](https://img.shields.io/twitter/follow/johanneskoester.svg?style=social&label=Follow)](https://twitter.com/search?l=&q=%23snakemake%20from%3Ajohanneskoester)
 [![Github stars](https://img.shields.io/github/stars/snakemake/snakemake?style=social)](https://github.com/snakemake/snakemake/stargazers)
+[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg)](CODE_OF_CONDUCT.md)
 
 # Snakemake
 


=====================================
docs/_static/theme.css
=====================================
The diff for this file was not included because it is too large.

=====================================
docs/conf.py
=====================================
@@ -41,8 +41,8 @@ extensions = [
     'sphinxarg.ext'
 ]
 
-# TODO enable once new theme is final
-# html_style = "theme.css"
+# Snakemake theme (made by SciAni).
+html_css_files = ["theme.css"]
 
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ['_templates']
@@ -153,7 +153,7 @@ html_static_path = ['_static']
 # Add any extra paths that contain custom files (such as robots.txt or
 # .htaccess) here, relative to this directory. These files are copied
 # directly to the root of the documentation.
-#html_extra_path = []
+#html_extra_path = ["_static/css"]
 
 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
 # using the given strftime format.
@@ -168,7 +168,7 @@ html_static_path = ['_static']
 
 # Additional templates that should be rendered to pages, maps page names to
 # template names.
-#html_additional_pages = {}
+#html_additional_pages = {"index": "index.html"}
 
 # If false, no module index is generated.
 #html_domain_indices = True


=====================================
docs/executing/cloud.rst
=====================================
@@ -207,7 +207,7 @@ a full machine type:
 
 .. code-block:: console
 
-    --default-resources machine_type="n1-standard"
+    --default-resources "machine_type=n1-standard"
 
 
 If you want to specify the machine type as a resource, you can do that too:
@@ -330,4 +330,4 @@ When executing, Snakemake will make use of the defined resources and threads
 to schedule jobs to the correct nodes. In particular, it will forward memory requirements
 defined as `mem_mb` to Tibanna. Further, it will propagate the number of threads
 a job intends to use, such that Tibanna can allocate it to the most cost-effective
-cloud compute instance available.
\ No newline at end of file
+cloud compute instance available.


=====================================
docs/executor_tutorial/azure_aks.rst
=====================================
@@ -0,0 +1,258 @@
+.. _tutorial-azure-aks:
+
+Auto-scaling Azure Kubernetes cluster without shared filesystem
+---------------------------------------------------------------
+
+In this tutorial we will show how to execute a Snakemake workflow
+on an auto-scaling Azure Kubernetes cluster without a shared file system.
+While Kubernetes is mainly known as a microservice orchestration system with
+self-healing properties, we will use it here simply as an auto-scaling
+compute orchestrator. One could use `persistent volumes in
+Kubernetes <https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv>`__
+as shared file system, but this adds an unnecessary level of complexity
+and most importantly costs. Instead we use cheap Azure Blob storage,
+which is used by Snakemake to automatically stage data in and out for
+every job.
+
+Following the steps below you will
+
+#. set up Azure Blob storage, download the Snakemake tutorial data and upload to Azure
+#. then create an Azure Kubernetes (AKS) cluster
+#. and finally run the analysis with Snakemake on the cluster 
+
+
+Setup
+:::::
+
+To go through this tutorial, you need the following software installed:
+
+* Python_ ≥3.5
+* Snakemake_ ≥5.17
+
+You should install conda as outlined in the :ref:`tutorial <tutorial-setup>`,
+and then install full snakemake with:
+
+.. code:: console
+
+    conda create -c bioconda -c conda-forge -n snakemake snakemake
+
+Make sure that the ``kubernetes`` and ``azure-storage-blob`` modules are installed
+in this environment. Should they be missing, install them with:
+
+.. code:: console
+
+   pip install kubernetes
+   pip install azure-storage-blob
+
+In addition you will need the
+`Azure CLI command <https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest>`__ 
+installed.
+
+Create an Azure storage account and upload data
+:::::::::::::::::::::::::::::::::::::::::::::::
+
+We will be starting from scratch, i.e. we will 
+create a new resource group and storage account. You can obviously reuse 
+existing resources instead.
+
+.. code:: console
+
+   # change the following names as required
+   # azure region where to run:
+   region=southeastasia
+   # name of the resource group to create:
+   resgroup=snakemaks-rg
+   # name of storage account to create (all lowercase, no hyphens etc.):
+   stgacct=snakemaksstg
+
+   # create a resource group with name and in region as defined above
+   az group create --name $resgroup --location $region
+   # create a general purpose storage account with cheapest SKU
+   az storage account create -n $stgacct -g $resgroup --sku Standard_LRS -l $region
+
+Get a key for that account and save it as ``stgkey`` for later use:
+
+.. code:: console
+
+   stgkey=$(az storage account keys list -g $resgroup -n $stgacct | head -n1 | cut -f 3)
+
+Next, you will create a storage container (think: bucket) to upload the Snakemake tutorial data to:
+
+.. code:: console
+
+   az storage container create --resource-group $resgroup --account-name $stgacct \
+       --account-key $stgkey --name snakemake-tutorial
+   cd /tmp
+   git clone https://github.com/snakemake/snakemake-tutorial-data.git
+   cd snakemake-tutorial-data
+   az storage blob upload-batch -d snakemake-tutorial --account-name $stgacct \
+       --account-key $stgkey -s data/ --destination-path data
+
+We are using ``az storage blob`` for uploading, because ``az`` is already installed.
+A more efficient way of uploading would be to use
+`azcopy <https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10>`__.
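
Editor's note (not part of the upstream tutorial): the upload can also be verified from Python with the azure-storage-blob v12 package installed during setup; a minimal sketch, assuming the account name and key defined above ($stgacct / $stgkey):

   # Sketch only: list the uploaded tutorial data with azure-storage-blob.
   from azure.storage.blob import BlobServiceClient

   account = "snakemaksstg"        # value of $stgacct
   key = "<storage account key>"   # value of $stgkey

   service = BlobServiceClient(
       account_url=f"https://{account}.blob.core.windows.net", credential=key
   )
   container = service.get_container_client("snakemake-tutorial")
   for blob in container.list_blobs(name_starts_with="data/"):
       print(blob.name, blob.size)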
+
+Create an auto-scaling Kubernetes cluster
+:::::::::::::::::::::::::::::::::::::::::
+
+.. code:: console
+
+   # change the cluster name as you like
+   clustername=snakemaks-aks
+   az aks create --resource-group $resgroup --name $clustername \
+       --vm-set-type VirtualMachineScaleSets --load-balancer-sku standard --enable-cluster-autoscaler \
+       --node-count 1 --min-count 1 --max-count 3 --node-vm-size Standard_D3_v2
+
+There is a lot going on here, so let’s unpack it: this creates an
+`auto-scaling Kubernetes
+cluster <https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler>`__
+(``--enable-cluster-autoscaler``) called ``$clustername`` (i.e. ``snakemaks-aks``), which starts
+out with one node (``--node-count 1``) and has a maximum of three nodes
+(``--min-count 1 --max-count 3``). For real world applications you will
+want to increase the maximum count and also increase the VM size. You
+could for example choose a large instance from the DSv2 series and add a
+larger disk (``--node-osdisk-size``) if needed. See `here for more
+info on Linux VM
+sizes <https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes>`__.
+
+Note, if you are creating the cluster in the Azure portal, click on the
+ellipsis under node-pools to find the auto-scaling option.
+
+Next, let’s fetch the credentials for this cluster, so that we can
+actually interact with it.
+
+.. code:: console
+
+   az aks get-credentials --resource-group $resgroup --name $clustername
+   # print basic cluster info
+   kubectl cluster-info
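
Editor's note: with the ``kubernetes`` Python module installed in the setup step, the same check can also be done from Python; a minimal sketch, equivalent to ``kubectl get nodes``:

   # Sketch only: query the AKS cluster using the credentials fetched by
   # "az aks get-credentials" (stored in ~/.kube/config).
   from kubernetes import client, config

   config.load_kube_config()
   for node in client.CoreV1Api().list_node().items:
       print(node.metadata.name, node.status.node_info.kubelet_version)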
+
+
+
+Running the workflow
+::::::::::::::::::::
+
+Below we will ask Snakemake to install software on the fly with conda.
+For this we need a Snakefile with corresponding conda environment
+yaml files. You can download the package containing all those files `here <workflow/snakedir.zip>`__.
+After downloading, unzip it and cd into the newly created directory.
+
+.. code:: console
+
+   $ cd /tmp
+   $ unzip ~/Downloads/snakedir.zip
+   $ cd snakedir
+   $ find .
+   .
+   ./Snakefile
+   ./envs
+   ./envs/calling.yaml
+   ./envs/mapping.yaml
+
+
+Now, we will need to set up the credentials that allow the Kubernetes nodes to
+read and write from blob storage. For the AzBlob storage provider in
+Snakemake this is done through the environment variables
+``AZ_BLOB_ACCOUNT_URL`` and optionally ``AZ_BLOB_CREDENTIAL``. See the
+`documentation <snakefiles/remote_files.html#microsoft-azure-storage>`__ for more info.
+``AZ_BLOB_ACCOUNT_URL`` takes the form
+``https://<accountname>.blob.core.windows.net`` and may also contain a
+shared access signature (SAS), which is a powerful way to define fine grained
+and even time controlled access to storage on Azure. The SAS can be part of the
+URL, but if it’s missing, then you can set it with
+``AZ_BLOB_CREDENTIAL`` or alternatively use the storage account key. To
+keep things simple we’ll use the storage key here, since we already have it available,
+but a SAS is generally more powerful. We’ll pass those variables on to Kubernetes
+with ``--envvars`` (see below).
+
+Now you are ready to run the analysis:
+
+.. code:: console
+
+   export AZ_BLOB_ACCOUNT_URL="https://${stgacct}.blob.core.windows.net"
+   export AZ_BLOB_CREDENTIAL="$stgkey"
+   snakemake --kubernetes \
+       --default-remote-prefix snakemake-tutorial --default-remote-provider AzBlob \
+       --envvars AZ_BLOB_ACCOUNT_URL AZ_BLOB_CREDENTIAL --use-conda --jobs 3
+
+This will use the default Snakemake image from Dockerhub. If you would like to use your
+own, make sure that the image contains the same Snakemake version as installed locally
+and also supports Azure Blob storage. If you plan to use your own image hosted on
+Azure Container Registries (ACR), make sure to attach the ACR to your Kubernetes
+cluster. See `here <https://docs.microsoft.com/en-us/azure/aks/cluster-container-registry-integration>`__ for more info.
+
+
+While Snakemake is running the workflow, it prints handy debug
+statements per job, e.g.:
+
+.. code:: console
+
+   kubectl describe pod snakejob-c4d9bf9e-9076-576b-a1f9-736ec82afc64
+   kubectl logs snakejob-c4d9bf9e-9076-576b-a1f9-736ec82afc64
+
+With these you can also follow the scale-up of the cluster:
+
+.. code:: console
+
+   Events:
+   Type     Reason             Age                From                Message
+   ----     ------             ----               ----                -------
+   Warning  FailedScheduling   60s (x3 over 62s)  default-scheduler   0/1 nodes are available: 1 Insufficient cpu.
+   Normal   TriggeredScaleUp   50s                cluster-autoscaler  pod triggered scale-up: [{aks-nodepool1-17839284-vmss 1->3 (max: 3)}]
+
+After a while you will see three nodes (each running one BWA job), which
+is the maximum defined above when creating your Kubernetes cluster:
+
+.. code:: console
+
+   $ kubectl get nodes
+   NAME                                STATUS   ROLES   AGE   VERSION
+   aks-nodepool1-17839284-vmss000000   Ready    agent   74m   v1.15.11
+   aks-nodepool1-17839284-vmss000001   Ready    agent   11s   v1.15.11
+   aks-nodepool1-17839284-vmss000002   Ready    agent   62s   v1.15.11
+
+To get detailed information including historical data about used
+resources, check Insights in the Azure portal under your AKS cluster
+Monitoring/Insights. The alternative is an instant snapshot on the
+command line:
+
+::
+
+   $ kubectl top node
+   NAME                                CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
+   aks-nodepool1-17839284-vmss000000   217m         5%     1796Mi          16%
+   aks-nodepool1-17839284-vmss000001   1973m        51%    529Mi           4%
+   aks-nodepool1-17839284-vmss000002   698m         18%    1485Mi          13%
+
+After completion, all results including
+logs can be found in the blob container. You will also find the results
+listed in the first Snakefile target downloaded to the working directory.
+
+::
+
+   $ find snakemake-tutorial/
+   snakemake-tutorial/
+   snakemake-tutorial/calls
+   snakemake-tutorial/calls/all.vcf
+
+
+   $ az storage blob list  --container-name snakemake-tutorial --account-name $stgacct --account-key $stgkey -o table
+   Name                     Blob Type    Blob Tier    Length    Content Type                       Last Modified              Snapshot
+   -----------------------  -----------  -----------  --------  ---------------------------------  -------------------------  ----------
+   calls/all.vcf            BlockBlob    Hot          90986     application/octet-stream           2020-06-08T05:11:31+00:00
+   data/genome.fa           BlockBlob    Hot          234112    application/octet-stream           2020-06-08T03:26:54+00:00
+   # etc.
+   logs/mapped_reads/A.log  BlockBlob    Hot          346       application/octet-stream           2020-06-08T04:59:50+00:00
+   mapped_reads/A.bam       BlockBlob    Hot          2258058   application/octet-stream           2020-06-08T04:59:50+00:00
+   sorted_reads/A.bam       BlockBlob    Hot          2244660   application/octet-stream           2020-06-08T05:03:41+00:00
+   sorted_reads/A.bam.bai   BlockBlob    Hot          344       application/octet-stream           2020-06-08T05:06:25+00:00
+   # same for samples B and C
+
+Now that the execution is complete, the AKS cluster will scale down
+automatically. If you are not planning to run anything else, it makes
+sense to shut it down entirely:
+
+::
+
+   az aks delete --name $clustername --resource-group $resgroup
+


=====================================
docs/executor_tutorial/google_lifesciences.rst
=====================================
@@ -17,12 +17,33 @@ To go through this tutorial, you need the following software installed:
 * Snakemake_ ≥5.16
 * git
 
-You should install conda as outlined in the :ref:`tutorial <tutorial-setup>`,
-and then install full snakemake with:
+First, you have to install the Miniconda Python3 distribution.
+See `here <https://conda.io/en/latest/miniconda.html>`_ for installation instructions.
+Make sure to ...
 
-.. code:: console
+* Install the **Python 3** version of Miniconda.
+* Answer yes to the question whether conda shall be put into your PATH.
+
+The default conda solver is a bit slow and sometimes has issues with `selecting the latest package releases <https://github.com/conda/conda/issues/9905>`_. Therefore, we recommend installing `Mamba <https://github.com/QuantStack/mamba>`_ as a drop-in replacement via
+
+.. code-block:: console
+
+    $ conda install -c conda-forge mamba
+
+Then, you can install Snakemake with
+
+.. code-block:: console
+
+    $ mamba create -c conda-forge -c bioconda -n snakemake snakemake
+
+from the `Bioconda <https://bioconda.github.io>`_ channel.
+This will install snakemake into an isolated software environment that has to be activated with
+
+.. code-block:: console
+
+    $ conda activate snakemake
+    $ snakemake --help
 
-    conda create -c bioconda -c conda-forge -n snakemake snakemake
 
 
 Credentials
@@ -73,7 +94,7 @@ If you wanted to upload to a "subfolder" path in a bucket, you would do that as
 .. code:: console
 
     export GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json"
-    python upload_google_storage.py snakemake-testing-data/subfolder
+    python upload_google_storage.py snakemake-testing-data/subfolder data/
 
 Your bucket (and the folder prefix) will be referred to as the
 `--default-remote-prefix` when you run snakemake. You can visually
@@ -89,8 +110,9 @@ Step 2: Write your Snakefile, Environment File, and Scripts
 Now that we've exported our credentials and have all dependencies installed, let's
 get our workflow! This is the exact same workflow from the :ref:`basic tutorial<tutorial-basics>`,
 so if you need a refresher on the design or basics, please see those pages.
-We won't need to clone data from `snakemake-tutorial-data <https://github.com/snakemake/snakemake-tutorial-data>`_
-because we will be using publicly accessible data on Google Storage.
+You can find the Snakefile, the supporting plotting script, and the environment file in the
+`snakemake-tutorial-data <https://github.com/snakemake/snakemake-tutorial-data>`_
+repository.
 
 First, how does a working directory work for this executor? The present
 working directory, as identified by Snakemake that has the Snakefile, and where
@@ -102,9 +124,9 @@ package includes the .snakemake folder that would have been generated locally.
 The build package is then downloaded and extracted by each cloud executor, which
 is a Google Compute instance.
 
-We next need an `environment.yml` file that will define the dependencies
+We next need an `environment.yaml` file that will define the dependencies
 that we want installed with conda for our job. If you cloned the "snakemake-tutorial-data"
-repository you will already have this, and you are good to go. If not, save this to `environment.yml`
+repository you will already have this, and you are good to go. If not, save this to `environment.yaml`
 in your working directory:
 
 .. code:: yaml
@@ -124,13 +146,13 @@ in your working directory:
       - pysam =0.15.0
     
 
-Notice that we reference this `environment.yml` file in the Snakefile below.
+Notice that we reference this `environment.yaml` file in the Snakefile below.
 Importantly, if you were optimizing a pipeline, you would likely have a folder
 "envs" with more than one environment specification, one for each step.
 This workflow uses the same environment (with many dependencies) instead of
 this strategy to minimize the number of files for you.
 
-The Snakefile then has the following content. It's important to note
+The Snakefile (also included in the repository) then has the following content. It's important to note
 that we have not customized this file from the basic tutorial to hard code 
 any storage. We will be telling snakemake to use the remote bucket as 
 storage instead of the local filesystem.
@@ -145,16 +167,16 @@ storage instead of the local filesystem.
 
     rule bwa_map:
         input:
-        fastq="samples/{sample}.fastq",
-        idx=multiext("genome.fa", ".amb", ".ann", ".bwt", ".pac", ".sa")
-    conda:
-        "environment.yml"
-    output:
-        "mapped_reads/{sample}.bam"
-    params:
-        idx=lambda w, input: os.path.splitext(input.idx[0])[0]
-    shell:
-        "bwa mem {params.idx} {input.fastq} | samtools view -Sb - > {output}"
+            fastq="samples/{sample}.fastq",
+            idx=multiext("genome.fa", ".amb", ".ann", ".bwt", ".pac", ".sa")
+        conda:
+            "environment.yaml"
+        output:
+            "mapped_reads/{sample}.bam"
+        params:
+            idx=lambda w, input: os.path.splitext(input.idx[0])[0]
+        shell:
+            "bwa mem {params.idx} {input.fastq} | samtools view -Sb - > {output}"
 
     rule samtools_sort:
         input:
@@ -162,7 +184,7 @@ storage instead of the local filesystem.
         output:
             "sorted_reads/{sample}.bam"
         conda:
-            "environment.yml"
+            "environment.yaml"
         shell:
             "samtools sort -T sorted_reads/{wildcards.sample} "
             "-O bam {input} > {output}"
@@ -173,7 +195,7 @@ storage instead of the local filesystem.
         output:
             "sorted_reads/{sample}.bam.bai"
         conda:
-            "environment.yml"
+            "environment.yaml"
         shell:
             "samtools index {input}"
 
@@ -185,7 +207,7 @@ storage instead of the local filesystem.
         output:
             "calls/all.vcf"
         conda:
-            "environment.yml"
+            "environment.yaml"
         shell:
             "samtools mpileup -g -f {input.fa} {input.bam} | "
             "bcftools call -mv - > {output}"
@@ -196,18 +218,22 @@ storage instead of the local filesystem.
         output:
             "plots/quals.svg"
         conda:
-            "environment.yml"
+            "environment.yaml"
         script:
             "plot-quals.py"
 
 
 
-And let's also write the script in our present working directory for the last step
-to do the plotting - call this `plot-quals.py`:
+And make sure you also have the script `plot-quals.py` in your present working directory for the last step.
+This script will help us do the plotting, and is also included in the
+`snakemake-tutorial-data <https://github.com/snakemake/snakemake-tutorial-data>`_
+repository.
+
 
 .. code:: python
 
     import matplotlib
+
     matplotlib.use("Agg")
     import matplotlib.pyplot as plt
     from pysam import VariantFile
@@ -491,7 +517,7 @@ Step 5: Debugging
 :::::::::::::::::
 
 Let's introduce an error (purposefully) into our Snakefile to practice debugging.
-Let's remove the conda environment.yml file for the first rule, so we would
+Let's remove the conda environment.yaml file for the first rule, so we would
 expect that Snakemake won't be able to find the executables for bwa and samtools.
 In your Snakefile, change this:
 
@@ -499,16 +525,16 @@ In your Snakefile, change this:
 
     rule bwa_map:
         input:
-        fastq="samples/{sample}.fastq",
-        idx=multiext("genome.fa", ".amb", ".ann", ".bwt", ".pac", ".sa")
-    conda:
-        "environment.yml"
-    output:
-        "mapped_reads/{sample}.bam"
-    params:
-        idx=lambda w, input: os.path.splitext(input.idx[0])[0]
-    shell:
-        "bwa mem {params.idx} {input.fastq} | samtools view -Sb - > {output}"
+            fastq="samples/{sample}.fastq",
+            idx=multiext("genome.fa", ".amb", ".ann", ".bwt", ".pac", ".sa")
+        conda:
+            "environment.yaml"
+        output:
+            "mapped_reads/{sample}.bam"
+        params:
+            idx=lambda w, input: os.path.splitext(input.idx[0])[0]
+        shell:
+            "bwa mem {params.idx} {input.fastq} | samtools view -Sb - > {output}"
 
 
 to this:
@@ -517,14 +543,14 @@ to this:
 
     rule bwa_map:
         input:
-        fastq="samples/{sample}.fastq",
-        idx=multiext("genome.fa", ".amb", ".ann", ".bwt", ".pac", ".sa")
-    output:
-        "mapped_reads/{sample}.bam"
-    params:
-        idx=lambda w, input: os.path.splitext(input.idx[0])[0]
-    shell:
-        "bwa mem {params.idx} {input.fastq} | samtools view -Sb - > {output}"
+            fastq="samples/{sample}.fastq",
+            idx=multiext("genome.fa", ".amb", ".ann", ".bwt", ".pac", ".sa")
+        output:
+            "mapped_reads/{sample}.bam"
+        params:
+            idx=lambda w, input: os.path.splitext(input.idx[0])[0]
+        shell:
+            "bwa mem {params.idx} {input.fastq} | samtools view -Sb - > {output}"
 
 
 And then for the same command to run everything again, you would need to remove the 
@@ -870,16 +896,16 @@ to look like this:
 
     rule bwa_map:
         input:
-        fastq="samples/{sample}.fastq",
-        idx=multiext("genome.fa", ".amb", ".ann", ".bwt", ".pac", ".sa")
-    output:
-        "mapped_reads/{sample}.bam"
-    params:
-        idx=lambda w, input: os.path.splitext(input.idx[0])[0]
-    shell:
-        "bwa mem {params.idx} {input.fastq} | samtools view -Sb - > {output}"
-    log:
-        "logs/bwa_map/{sample}.log" 
+            fastq="samples/{sample}.fastq",
+            idx=multiext("genome.fa", ".amb", ".ann", ".bwt", ".pac", ".sa")
+        output:
+            "mapped_reads/{sample}.bam"
+        params:
+            idx=lambda w, input: os.path.splitext(input.idx[0])[0]
+        shell:
+            "bwa mem {params.idx} {input.fastq} | samtools view -Sb - > {output}"
+        log:
+            "logs/bwa_map/{sample}.log" 
 
 
 In the above, we would write a log file to storage in a "subfolder" of the
@@ -892,6 +918,7 @@ good to remember when debugging that:
 
  - You should not make assumptions about anything's existence. Use print statements to verify.
  - The biggest errors tend to be syntax and/or path errors
+ - If you want to test a different snakemake container, you can use the `--container` flag.
  - If the error is especially challenging, set up a small toy example that implements the most basic functionality that you want to achieve.
  - If you need help, reach out to ask for it! If there is an issue with the Google Life Sciences workflow executor, please `open an issue <https://github.com/snakemake/snakemake/issues>`_.
- - It also sometimes helps to take a break from working on somethig, and coming back with fresh eyes.
+ - It also sometimes helps to take a break from working on something, and coming back with fresh eyes.
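
Editor's note: the hunk above only shows the first lines of `plot-quals.py`. For reference, the script in the snakemake-tutorial-data repository continues roughly as sketched below (treat this as an approximation, not the verbatim upstream file):

   import matplotlib

   matplotlib.use("Agg")
   import matplotlib.pyplot as plt
   from pysam import VariantFile

   # "snakemake" is injected by Snakemake when the file runs via "script:"
   quals = [record.qual for record in VariantFile(snakemake.input[0])]
   plt.hist(quals)
   plt.savefig(snakemake.output[0])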


=====================================
docs/executor_tutorial/tutorial.rst
=====================================
@@ -24,3 +24,7 @@ We ensured that no bioinformatics knowledge is needed to understand the tutorial
    :maxdepth: 2
 
    google_lifesciences
+
+   azure_aks
+
+   


=====================================
docs/index.rst
=====================================
@@ -37,53 +37,8 @@ Workflows are described via a human readable, Python based language.
 They can be seamlessly scaled to server, cluster, grid and cloud environments, without the need to modify the workflow definition.
 Finally, Snakemake workflows can entail a description of required software, which will be automatically deployed to any execution environment.
 
-Snakemake is **highly popular** with, `~3 new citations per week <https://badge.dimensions.ai/details/id/pub.1018944052>`_.
-
-
-.. _manual-quick_example:
-
--------------
-Quick Example
--------------
-
-Snakemake workflows are essentially Python scripts extended by declarative code to define **rules**.
-Rules describe how to create **output files** from **input files**.
-
-.. code-block:: python
-
-    rule targets:
-        input:
-            "plots/myplot.pdf"
-
-    rule transform:
-        input:
-            "raw/{dataset}.csv"
-        output:
-            "transformed/{dataset}.csv"
-        singularity:
-            "docker://somecontainer:v1.0"
-        shell:
-            "somecommand {input} {output}"
-
-    rule aggregate_and_plot:
-        input:
-            expand("transformed/{dataset}.csv", dataset=[1, 2])
-        output:
-            "plots/myplot.pdf"
-        conda:
-            "envs/matplotlib.yaml"
-        script:
-            "scripts/plot.py"
-
-
-* Similar to GNU Make, you specify targets in terms of a pseudo-rule at the top.
-* For each target and intermediate file, you create rules that define how they are created from input files.
-* Snakemake determines the rule dependencies by matching file names.
-* Input and output files can contain multiple named wildcards.
-* Rules can either use shell commands, plain Python code or external Python or R scripts to create output files from input files.
-* Snakemake workflows can be easily executed on **workstations**, **clusters**, **the grid**, and **in the cloud** without modification. The job scheduling can be constrained by arbitrary resources like e.g. available CPU cores, memory or GPUs.
-* Snakemake can automatically deploy required software dependencies of a workflow using `Conda <https://conda.io>`_ or `Singularity <https://sylabs.io/docs/>`_.
-* Snakemake can use Amazon S3, Google Storage, Dropbox, FTP, WebDAV, SFTP and iRODS to access input or output files and further access input files via HTTP and HTTPS.
+Snakemake is **highly popular**, with `>5 new citations per week <https://badge.dimensions.ai/details/id/pub.1018944052>`_.
+For an introduction, please visit https://snakemake.github.io.
 
 
 .. _main-getting-started:
@@ -92,10 +47,10 @@ Rules describe how to create **output files** from **input files**.
 Getting started
 ---------------
 
-To get a first impression, see our `introductory slides <https://slides.com/johanneskoester/snakemake-short>`_ or watch the `live demo video <https://youtu.be/hPrXcUUp70Y>`_.
+To get a first impression, please visit https://snakemake.github.io.
 News about Snakemake are published via `Twitter <https://twitter.com/search?l=&q=%23snakemake%20from%3Ajohanneskoester>`_.
 To learn Snakemake, please do the :ref:`tutorial`, and see the :ref:`FAQ <project_info-faq>`.
-For more advanced usage of different executors, see the :ref:`executor_tutorial`.
+For more advanced usage on various platforms, see the :ref:`executor_tutorial`.
 
 .. _main-support:
 
@@ -105,10 +60,10 @@ Support
 
 * For releases, see :ref:`Changelog <changelog>`.
 * Check :ref:`frequently asked questions (FAQ) <project_info-faq>`.
-* In case of questions, please post on `stack overflow <https://stackoverflow.com/questions/tagged/snakemake>`_.
-* To discuss with other Snakemake users, you can use the `mailing list <https://groups.google.com/forum/#!forum/snakemake>`_. **Please do not post questions there. Use stack overflow for questions.**
-* For bugs and feature requests, please use the `issue tracker <https://github.com/snakemake/snakemake/issues>`_.
-* For contributions, visit Snakemake on `Github <https://github.com/snakemake/snakemake>`_ and read the :ref:`guidelines <project_info-contributing>`.
+* In case of **questions**, please post on `stack overflow <https://stackoverflow.com/questions/tagged/snakemake>`_.
+* To **discuss** with other Snakemake users, you can use the `mailing list <https://groups.google.com/forum/#!forum/snakemake>`_. **Please do not post questions there. Use stack overflow for questions.**
+* For **bugs and feature requests**, please use the `issue tracker <https://github.com/snakemake/snakemake/issues>`_.
+* For **contributions**, visit Snakemake on `Github <https://github.com/snakemake/snakemake>`_ and read the :ref:`guidelines <project_info-contributing>`.
 
 --------
 Citation
@@ -137,62 +92,6 @@ Resources
 `Bioconda <https://bioconda.github.io/>`_
     Bioconda can be used from Snakemake for creating completely reproducible workflows by defining the used software versions and providing binaries.
 
-
-.. project_info-publications_using:
-
-----------------------------
-Publications using Snakemake
-----------------------------
-
-In the following you find an **incomplete list** of publications making use of Snakemake for their analyses.
-Please consider to add your own.
-
-* Rubert et al. 2020. _`Analysis of local genome rearrangement improves resolution of ancestral genomic maps in plants <https://doi.org/10.1186/s12864-020-6609-x>`_. BMC Genomics.
-* Kuzniar et al. 2020. `sv-callers: a highly portable parallel workflow for structural variant detection in whole-genome sequence data <https://doi.org/10.7717/peerj.8214>`_. PeerJ.
-* Doris et al. 2018. `Spt6 is required for the fidelity of promoter selection <https://doi.org/10.1016/j.molcel.2018.09.005>`_. Molecular Cell.
-* Karlsson et al. 2018. `Four evolutionary trajectories underlie genetic intratumoral variation in childhood cancer <https://www.nature.com/articles/s41588-018-0131-y>`_. Nature Genetics.
-* Planchard et al. 2018. `The translational landscape of Arabidopsis mitochondria <https://academic.oup.com/nar/advance-article/doi/10.1093/nar/gky489/5033161>`_. Nucleic acids research.
-* Schult et al. 2018. `Effect of UV irradiation on Sulfolobus acidocaldarius and involvement of the general transcription factor TFB3 in the early UV response <https://academic.oup.com/nar/article/46/14/7179/5047281>`_. Nucleic acids research.
-* Goormaghtigh et al. 2018. `Reassessing the Role of Type II Toxin-Antitoxin Systems in Formation of Escherichia coli Type II Persister Cells <https://mbio.asm.org/content/mbio/9/3/e00640-18.full.pdf>`_. mBio.
-* Ramirez et al. 2018. `Detecting macroecological patterns in bacterial communities across independent studies of global soils <https://www.nature.com/articles/s41564-017-0062-x>`_. Nature microbiology.
-* Amato et al. 2018. `Evolutionary trends in host physiology outweigh dietary niche in structuring primate gut microbiomes <https://www.nature.com/articles/s41396-018-0175-0>`_. The ISME journal.
-* Uhlitz et al. 2017. `An immediate–late gene expression module decodes ERK signal duration <https://msb.embopress.org/content/13/5/928>`_. Molecular Systems Biology.
-* Akkouche et al. 2017. `Piwi Is Required during Drosophila Embryogenesis to License Dual-Strand piRNA Clusters for Transposon Repression in Adult Ovaries <https://www.sciencedirect.com/science/article/pii/S1097276517302071>`_. Molecular Cell.
-* Beatty et al. 2017. `Giardia duodenalis induces pathogenic dysbiosis of human intestinal microbiota biofilms <https://www.ncbi.nlm.nih.gov/pubmed/28237889>`_. International Journal for Parasitology.
-* Meyer et al. 2017. `Differential Gene Expression in the Human Brain Is Associated with Conserved, but Not Accelerated, Noncoding Sequences <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5400397/>`_. Molecular Biology and Evolution.
-* Lonardo et al. 2017. `Priming of soil organic matter: Chemical structure of added compounds is more important than the energy content <https://www.sciencedirect.com/science/article/pii/S0038071716304539>`_. Soil Biology and Biochemistry.
-* Beisser et al. 2017. `Comprehensive transcriptome analysis provides new insights into nutritional strategies and phylogenetic relationships of chrysophytes <https://peerj.com/articles/2832/>`_. PeerJ.
-* Piro et al 2017. `MetaMeta: integrating metagenome analysis tools to improve taxonomic profiling <https://microbiomejournal.biomedcentral.com/articles/10.1186/s40168-017-0318-y>`_. Microbiome.
-* Dimitrov et al 2017. `Successive DNA extractions improve characterization of soil microbial communities <https://peerj.com/articles/2915/>`_. PeerJ.
-* de Bourcy et al. 2016. `Phylogenetic analysis of the human antibody repertoire reveals quantitative signatures of immune senescence and aging <https://www.pnas.org/content/114/5/1105.short>`_. PNAS.
-* Bray et al. 2016. `Near-optimal probabilistic RNA-seq quantification <https://www.nature.com/nbt/journal/v34/n5/abs/nbt.3519.html>`_. Nature Biotechnology.
-* Etournay et al. 2016. `TissueMiner: a multiscale analysis toolkit to quantify how cellular processes create tissue dynamics <https://elifesciences.org/content/5/e14334>`_. eLife Sciences.
-* Townsend et al. 2016. `The Public Repository of Xenografts Enables Discovery and Randomized Phase II-like Trials in Mice <https://www.cell.com/cancer-cell/abstract/S1535-6108%2816%2930090-3>`_. Cancer Cell.
-* Burrows et al. 2016. `Genetic Variation, Not Cell Type of Origin, Underlies the Majority of Identifiable Regulatory Differences in iPSCs <https://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1005793>`_. PLOS Genetics.
-* Ziller et al. 2015. `Coverage recommendations for methylation analysis by whole-genome bisulfite sequencing <https://www.nature.com/nmeth/journal/v12/n3/full/nmeth.3152.html>`_. Nature Methods.
-* Li et al. 2015. `Quality control, modeling, and visualization of CRISPR screens with MAGeCK-VISPR <https://genomebiology.biomedcentral.com/articles/10.1186/s13059-015-0843-6>`_. Genome Biology.
-* Schmied et al. 2015. `An automated workflow for parallel processing of large multiview SPIM recordings <https://bioinformatics.oxfordjournals.org/content/32/7/1112>`_. Bioinformatics.
-* Chung et al. 2015. `Whole-Genome Sequencing and Integrative Genomic Analysis Approach on Two 22q11.2 Deletion Syndrome Family Trios for Genotype to Phenotype Correlations <https://onlinelibrary.wiley.com/doi/10.1002/humu.22814/full>`_. Human Mutation.
-* Kim et al. 2015. `TUT7 controls the fate of precursor microRNAs by using three different uridylation mechanisms <https://emboj.embopress.org/content/34/13/1801.long>`_. The EMBO Journal.
-* Park et al. 2015. `Ebola Virus Epidemiology, Transmission, and Evolution during Seven Months in Sierra Leone <https://doi.org/10.1016/j.cell.2015.06.007>`_. Cell.
-* Břinda et al. 2015. `RNF: a general framework to evaluate NGS read mappers <https://bioinformatics.oxfordjournals.org/content/early/2015/09/30/bioinformatics.btv524>`_. Bioinformatics.
-* Břinda et al. 2015. `Spaced seeds improve k-mer-based metagenomic classification <https://bioinformatics.oxfordjournals.org/content/early/2015/08/10/bioinformatics.btv419>`_. Bioinformatics.
-* Spjuth et al. 2015. `Experiences with workflows for automating data-intensive bioinformatics <https://biologydirect.biomedcentral.com/articles/10.1186/s13062-015-0071-8>`_. Biology Direct.
-* Schramm et al. 2015. `Mutational dynamics between primary and relapse neuroblastomas <https://www.nature.com/ng/journal/v47/n8/full/ng.3349.html>`_. Nature Genetics.
-* Berulava et al. 2015. `N6-Adenosine Methylation in MiRNAs <https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0118438>`_. PLOS ONE.
-* The Genome of the Netherlands Consortium 2014. `Whole-genome sequence variation, population structure and demographic history of the Dutch population <https://www.nature.com/ng/journal/v46/n8/full/ng.3021.html>`_. Nature Genetics.
-*  Patterson et al. 2014. `WhatsHap: Haplotype Assembly for Future-Generation Sequencing Reads <https://online.liebertpub.com/doi/10.1089/cmb.2014.0157>`_. Journal of Computational Biology.
-* Fernández et al. 2014. `H3K4me1 marks DNA regions hypomethylated during aging in human stem and differentiated cells <https://genome.cshlp.org/content/25/1/27.long>`_. Genome Research.
-* Köster et al. 2014. `Massively parallel read mapping on GPUs with the q-group index and PEANUT <https://peerj.com/articles/606/>`_. PeerJ.
-* Chang et al. 2014. `TAIL-seq: Genome-wide Determination of Poly(A) Tail Length and 3′ End Modifications <https://www.cell.com/molecular-cell/abstract/S1097-2765(14)00121-X>`_. Molecular Cell.
-* Althoff et al. 2013. `MiR-137 functions as a tumor suppressor in neuroblastoma by downregulating KDM1A <https://onlinelibrary.wiley.com/doi/10.1002/ijc.28091/abstract;jsessionid=33613A834E2A2FDCCA49246C23DF777E.f04t02>`_. International Journal of Cancer.
-* Marschall et al. 2013. `MATE-CLEVER: Mendelian-Inheritance-Aware Discovery and Genotyping of Midsize and Long Indels <https://bioinformatics.oxfordjournals.org/content/29/24/3143.long>`_. Bioinformatics.
-* Rahmann et al. 2013. `Identifying transcriptional miRNA biomarkers by integrating high-throughput sequencing and real-time PCR data <https://www.sciencedirect.com/science/article/pii/S1046202312002605>`_. Methods.
-* Martin et al. 2013. `Exome sequencing identifies recurrent somatic mutations in EIF1AX and SF3B1 in uveal melanoma with disomy 3 <https://www.nature.com/ng/journal/v45/n8/full/ng.2674.html>`_. Nature Genetics.
-* Czeschik et al. 2013. `Clinical and mutation data in 12 patients with the clinical diagnosis of Nager syndrome <https://link.springer.com/article/10.1007%2Fs00439-013-1295-2>`_. Human Genetics.
-* Marschall et al. 2012. `CLEVER: Clique-Enumerating Variant Finder <https://bioinformatics.oxfordjournals.org/content/28/22/2875.long>`_. Bioinformatics.
-
-
 .. toctree::
    :caption: Getting started
    :name: getting_started


=====================================
docs/project_info/authors.rst
=====================================
@@ -21,6 +21,7 @@ Development Team
 - Wibowo Arindrarto
 - Rasmus Ågren
 - Soohyun Lee
+- Vanessa Sochat
 
 Contributors
 ------------
@@ -57,4 +58,5 @@ In alphabetical order
 - Sean Davis
 - Simon Ye
 - Tobias Marschall
+- Vanessa Sochat
 - Willem Ligtenberg


=====================================
docs/snakefiles/deployment.rst
=====================================
@@ -222,10 +222,10 @@ Snakemake allows to define environment modules per rule:
         shell:
             "bwa mem {input} | samtools view -Sbh - > {output}"
 
-Here, when Snakemake is executed with `snakemake --use-envmodules`, it will load the defined modules in the given order, instead of using the also defined conda environment.
+Here, when Snakemake is executed with ``snakemake --use-envmodules``, it will load the defined modules in the given order, instead of using the also defined conda environment.
 Note that although not mandatory, one should always provide either a conda environment or a container (see above), along with environment module definitions.
 The reason is that environment modules are often highly platform specific, and cannot be assumed to be available somewhere else, thereby limiting reproducibility.
-By definition an equivalent conda environment or container as a fallback, people outside of the HPC system where the workflow has been designed can still execute it, e.g. by running `snakemake --use-conda` instead of `snakemake --use-envmodules`.
+By defining an equivalent conda environment or container as a fallback, people outside of the HPC system where the workflow has been designed can still execute it, e.g. by running ``snakemake --use-conda`` instead of ``snakemake --use-envmodules``.
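
Editor's illustration (not from the upstream docs): a rule that declares both directives, so it runs with either ``--use-envmodules`` on the original HPC system or ``--use-conda`` elsewhere; the module names are hypothetical and site specific:

   rule map_reads:
       input:
           "genome.fa",
           "reads.fq"
       output:
           "mapped.bam"
       conda:
           "envs/bwa.yaml"            # portable fallback
       envmodules:
           "bio/bwa/0.7.17",          # hypothetical site-specific modules
           "bio/samtools/1.9"
       shell:
           "bwa mem {input} | samtools view -Sbh - > {output}"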
 
 --------------------------------------
 Sustainable and reproducible archiving


=====================================
docs/snakefiles/remote_files.rst
=====================================
@@ -14,7 +14,7 @@ Snakemake includes the following remote providers, supported by the correspondin
 
 * Amazon Simple Storage Service (AWS S3): ``snakemake.remote.S3``
 * Google Cloud Storage (GS): ``snakemake.remote.GS``
-* Microsoft Azure Storage: ``snakemake.remote.AzureStorage``
+* Microsoft Azure Blob Storage: ``snakemake.remote.AzBlob``
 * File transfer over SSH (SFTP): ``snakemake.remote.SFTP``
 * Read-only web (HTTP[S]): ``snakemake.remote.HTTP``
 * File transfer protocol (FTP): ``snakemake.remote.FTP``
@@ -139,28 +139,37 @@ In the Snakefile, no additional authentication information has to be provided:
             GS.remote("bucket-name/file.txt")
 
 
-Microsoft Azure Storage
-=======================
+Microsoft Azure Blob Storage
+=============================
+
+Usage of the Azure Blob Storage provider is similar to the S3 provider. For
+authentication, an account name and shared access signature (SAS) or key can be used. If these
+variables are not passed directly to AzureRemoteProvider (see the
+`BlobServiceClient class <https://docs.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient?view=azure-python>`__
+for naming), they will be read from the environment variables
+``AZ_BLOB_ACCOUNT_URL`` and ``AZ_BLOB_CREDENTIAL``. ``AZ_BLOB_ACCOUNT_URL`` takes the form
+``https://<accountname>.blob.core.windows.net`` and may also contain a SAS. If
+a SAS is not part of the URL, ``AZ_BLOB_CREDENTIAL`` has to be set to the SAS or alternatively to
+the storage account key.
 
-Usage of the Azure Storage provider is similar to the S3 provider.
-For authentication, one needs to provide an account name and a key
-or SAS token (without leading question mark), which can for example
-be read from environment variables.
+When using AzBlob as default remote provider you will almost always want to
+pass these environment variables on to the remote execution environment (e.g.
+Kubernetes) with ``--envvars``, e.g.
+``--envvars AZ_BLOB_ACCOUNT_URL AZ_BLOB_CREDENTIAL``.
 
 .. code-block:: python
 
-    from snakemake.remote.AzureStorage import RemoteProvider as AzureRemoteProvider
-    account_name=os.environ['AZURE_ACCOUNT']
-    account_key=os.environ.get('AZURE_KEY')
-    sas_token=os.environ.get('SAS_TOKEN')
-    assert account_key or sas_token
-    AS = AzureRemoteProvider(account_name=account_name, account_key=account_key, sas_token=sas_token)
+    from snakemake.remote.AzBlob import RemoteProvider as AzureRemoteProvider
+    AS = AzureRemoteProvider()  # assumes env vars AZ_BLOB_ACCOUNT_URL and possibly AZ_BLOB_CREDENTIAL are set
 
     rule a:
         input:
             AS.remote("path/to/file.txt")
 
 
+
+
 File transfer over SSH (SFTP)
 =============================
 


=====================================
docs/snakefiles/reporting.rst
=====================================
@@ -100,8 +100,8 @@ You can define an institute specific stylesheet with:
 
 In particular, this allows you to e.g. set a logo at the top (by using CSS to inject a background for the placeholder ``<div id="brand">``, or overwrite colors.
 For an example custom stylesheet defining the logo, see :download:`here <../../tests/test_report/custom-stylesheet.css>`.
-The report for above example can be found :download:`here <../../tests/test_report/report.html>` (with a custom branding for the University of Duisburg-Essen).
-The full example source code can be found `here <https://github.com/snakemake/snakemake/src/master/tests/test_report/>`_.
+The report for above example can be found :download:`here <../../tests/test_report/expected-results/report.html>` (with a custom branding for the University of Duisburg-Essen).
+The full example source code can be found `here <https://github.com/snakemake/snakemake/tree/master/tests/test_report/>`_.
 
 Note that the report can be restricted to particular jobs and results by specifying targets at the command line, analog to normal Snakemake execution.
 For example, with


=====================================
setup.py
=====================================
@@ -72,8 +72,8 @@ setup(
         "reports": ["jinja2", "networkx", "pygments", "pygraphviz"],
         "messaging": ["slacker"],
         "google-cloud": [
-            "crc32c",
             "oauth2client",
+            "google-crc32c",
             "google-api-python-client",
             "google-cloud-storage",
         ],
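
Editor's note: these lists are setuptools "extras". A minimal, hypothetical sketch of how such a group is declared and then pulled in via ``pip install pkg[google-cloud]`` (which is what the Dockerfile change above relies on):

   # Hypothetical minimal setup.py; not Snakemake's actual setup.py.
   from setuptools import setup

   setup(
       name="mypkg",
       version="0.1",
       packages=[],
       extras_require={
           "google-cloud": [
               "oauth2client",
               "google-crc32c",
               "google-api-python-client",
               "google-cloud-storage",
           ],
       },
   )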


=====================================
snakemake/__init__.py
=====================================
@@ -444,7 +444,8 @@ def snakemake(
     if cluster_mode > 1:
         logger.error("Error: cluster and drmaa args are mutually exclusive")
         return False
-    if debug and (cluster_mode or cores > 1):
+
+    if debug and (cluster_mode or cores is not None and cores > 1):
         logger.error(
             "Error: debug mode cannot be used with more than one core or cluster execution."
         )
@@ -1533,6 +1534,7 @@ def get_argument_parser(profile=None):
         "fractions allowed.",
     )
     group_behavior.add_argument(
+        "-T",
         "--restart-times",
         default=0,
         type=int,
@@ -1555,7 +1557,17 @@ def get_argument_parser(profile=None):
     )
     group_behavior.add_argument(
         "--default-remote-provider",
-        choices=["S3", "GS", "FTP", "SFTP", "S3Mocked", "gfal", "gridftp", "iRODS"],
+        choices=[
+            "S3",
+            "GS",
+            "FTP",
+            "SFTP",
+            "S3Mocked",
+            "gfal",
+            "gridftp",
+            "iRODS",
+            "AzBlob",
+        ],
         help="Specify default remote provider to be used for "
         "all input and output files that don't yet specify "
         "one.",


=====================================
snakemake/_version.py
=====================================
@@ -22,9 +22,9 @@ def get_keywords():
     # setup.py/versioneer.py will grep for the variable names, so they must
     # each be defined on a line of their own. _version.py will just call
     # get_keywords().
-    git_refnames = " (tag: v5.20.1)"
-    git_full = "7dd7d8b1cad48f9369deb42b54be3d52fcd65dd9"
-    git_date = "2020-07-09 11:13:42 +0200"
+    git_refnames = " (HEAD -> master, tag: v5.22.1)"
+    git_full = "982f1d1f5bb55eda1caa8f15853041922ef4bea0"
+    git_date = "2020-08-14 08:41:41 +0200"
     keywords = {"refnames": git_refnames, "full": git_full, "date": git_date}
     return keywords
 


=====================================
snakemake/executors/__init__.py
=====================================
@@ -1363,7 +1363,7 @@ class KubernetesExecutor(ClusterExecutor):
             "cp -rf /source/. . && "
             "snakemake {target} --snakefile {snakefile} "
             "--force -j{cores} --keep-target-files  --keep-remote "
-            "--latency-wait 0 "
+            "--latency-wait {latency_wait} "
             " --attempt {attempt} {use_threads} "
             "--wrapper-prefix {workflow.wrapper_prefix} "
             "{overwrite_config} {printshellcmds} {rules} --nocolor "


=====================================
snakemake/executors/google_lifesciences.py
=====================================
@@ -3,6 +3,7 @@ __copyright__ = "Copyright 2015-2020, Johannes Köster"
 __email__ = "koester at jimmy.harvard.edu"
 __license__ = "MIT"
 
+import logging
 import os
 import sys
 import time
@@ -21,6 +22,8 @@ from snakemake.exceptions import WorkflowError
 from snakemake.executors import ClusterExecutor, sleep
 from snakemake.common import get_container_image, get_file_hash
 
+# https://github.com/googleapis/google-api-python-client/issues/299#issuecomment-343255309
+logging.getLogger("googleapiclient.discovery_cache").setLevel(logging.ERROR)
 
 GoogleLifeSciencesJob = namedtuple(
     "GoogleLifeSciencesJob", "job jobname jobid callback error_callback"
@@ -144,9 +147,15 @@ class GoogleLifeSciencesExecutor(ClusterExecutor):
             raise ex
 
         # Discovery clients for Google Cloud Storage and Life Sciences API
-        self._storage_cli = discovery_build("storage", "v1", credentials=creds)
-        self._compute_cli = discovery_build("compute", "v1", credentials=creds)
-        self._api = discovery_build("lifesciences", "v2beta", credentials=creds)
+        self._storage_cli = discovery_build(
+            "storage", "v1", credentials=creds, cache_discovery=False
+        )
+        self._compute_cli = discovery_build(
+            "compute", "v1", credentials=creds, cache_discovery=False
+        )
+        self._api = discovery_build(
+            "lifesciences", "v2beta", credentials=creds, cache_discovery=False
+        )
         self._bucket_service = storage.Client()
 
     def _get_bucket(self):
@@ -641,7 +650,7 @@ class GoogleLifeSciencesExecutor(ClusterExecutor):
         commands = [
             "/bin/bash",
             "-c",
-            "mkdir -p /workdir && cd /workdir && wget -O /download.py https://gist.githubusercontent.com/vsoch/84886ef6469bedeeb9a79a4eb7aec0d1/raw/181499f8f17163dcb2f89822079938cbfbd258cc/download.py && chmod +x /download.py && source activate snakemake || true && pip install crc32c && python /download.py download %s %s /tmp/workdir.tar.gz && tar -xzvf /tmp/workdir.tar.gz && %s"
+            "mkdir -p /workdir && cd /workdir && wget -O /download.py https://gist.githubusercontent.com/vsoch/84886ef6469bedeeb9a79a4eb7aec0d1/raw/181499f8f17163dcb2f89822079938cbfbd258cc/download.py && chmod +x /download.py && source activate snakemake || true && python /download.py download %s %s /tmp/workdir.tar.gz && tar -xzvf /tmp/workdir.tar.gz && %s"
             % (self.bucket.name, self.pipeline_package, exec_job),
         ]
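
For context, the discovery clients are now built with cache_discovery=False, which sidesteps the file_cache warnings that googleapiclient otherwise emits with newer oauth2client versions (hence the logger being silenced above). A minimal sketch, assuming application-default credentials are available:

    # minimal sketch; the executor above passes explicit credentials instead
    from googleapiclient.discovery import build

    storage_cli = build("storage", "v1", cache_discovery=False)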
 


=====================================
snakemake/remote/AzureStorage.py → snakemake/remote/AzBlob.py
=====================================
@@ -1,3 +1,6 @@
+"""Azure Blob Storage handling
+"""
+
 __author__ = "Sebastian Kurscheid"
 __copyright__ = "Copyright 2019, Sebastian Kurscheid"
 __email__ = "sebastian.kurscheid at anu.edu.au"
@@ -6,12 +9,9 @@ __license__ = "MIT"
 # built-ins
 import os
 import re
-import math
-import functools
-import concurrent.futures
 
 # snakemake specific
-from snakemake.common import lazy_property
+# /
 
 # module specific
 from snakemake.exceptions import WorkflowError, AzureFileException
@@ -19,12 +19,11 @@ from snakemake.remote import AbstractRemoteObject, AbstractRemoteProvider
 
 # service provider support
 try:
-    from azure.storage.common.cloudstorageaccount import (
-        CloudStorageAccount as AzureStorageAccount,
-    )
+    from azure.storage.blob import BlobServiceClient
+    import azure.core.exceptions
 except ImportError as e:
     raise WorkflowError(
-        "The Python 3 packages 'azure-storage' and 'azure-storage-common' "
+        "The Python 3 package 'azure-storage-blob' "
         "need to be installed to use Azure Storage remote() file functionality. %s"
         % e.msg
     )
@@ -76,26 +75,25 @@ class RemoteObject(AbstractRemoteObject):
     def exists(self):
         if self._matched_as_path:
             return self._as.exists_in_container(self.container_name, self.blob_name)
-        else:
-            raise AzureFileException(
-                "The file cannot be parsed as an Azure Blob path in form 'container/blob': %s"
-                % self.local_file()
-            )
+        raise AzureFileException(
+            "The file cannot be parsed as an Azure Blob path in form 'container/blob': %s"
+            % self.local_file()
+        )
 
     def mtime(self):
         if self.exists():
+            # b = self.blob_service_client.get_blob_client(self.container_name, self.blob_name)
+            # return b.get_blob_properties().last_modified
             t = self._as.blob_last_modified(self.container_name, self.blob_name)
             return t
-        else:
-            raise AzureFileException(
-                "The file does not seem to exist remotely: %s" % self.local_file()
-            )
+        raise AzureFileException(
+            "The file does not seem to exist remotely: %s" % self.local_file()
+        )
 
     def size(self):
         if self.exists():
             return self._as.blob_size(self.container_name, self.blob_name)
-        else:
-            return self._iofile.size_local
+        return self._iofile.size_local
 
     def download(self):
         if self.exists():
@@ -142,6 +140,7 @@ class RemoteObject(AbstractRemoteObject):
 
     @property
     def container_name(self):
+        "return container name component of the path"
         if len(self._matched_as_path.groups()) == 2:
             return self._matched_as_path.group("container_name")
         return None
@@ -152,8 +151,10 @@ class RemoteObject(AbstractRemoteObject):
 
     @property
     def blob_name(self):
+        "return the blob name component of the path"
         if len(self._matched_as_path.groups()) == 2:
             return self._matched_as_path.group("blob_name")
+        return None
 
 
 # Actual Azure specific functions, adapted from S3.py
@@ -164,14 +165,29 @@ class AzureStorageHelper(object):
         if "stay_on_remote" in kwargs:
             del kwargs["stay_on_remote"]
 
-        self.azure = AzureStorageAccount(**kwargs).create_block_blob_service()
+        # if not handed down explicitly, try to read credentials from
+        # environment variables.
+        for (csavar, envvar) in [
+            ("account_url", "AZ_BLOB_ACCOUNT_URL"),
+            ("credential", "AZ_BLOB_CREDENTIAL"),
+        ]:
+            if csavar not in kwargs and envvar in os.environ:
+                kwargs[csavar] = os.environ.get(envvar)
+        assert (
+            "account_url" in kwargs
+        ), "Missing AZ_BLOB_ACCOUNT_URL env var (and possibly AZ_BLOB_CREDENTIAL)"
+        # remove leading '?' from SAS if needed
+        # if kwargs.get("sas_token", "").startswith("?"):
+        #    kwargs["sas_token"] = kwargs["sas_token"][1:]
+
+        # by right only account_key or sas_token should be set, but we let
+        # BlobServiceClient deal with the ambiguity
+        self.blob_service_client = BlobServiceClient(**kwargs)
 
     def container_exists(self, container_name):
-        try:
-            self.azure.exists(container_name=container_name)
-            return True
-        except:
-            return False
+        return any(
+            True for _ in self.blob_service_client.list_containers(container_name)
+        )
 
     def upload_to_azure_storage(
         self,
@@ -201,8 +217,12 @@ class AzureStorageHelper(object):
             "The file path specified does not appear to be a file: %s" % file_path
         )
 
-        if not self.azure.exists(container_name):
-            self.azure.create_container(container_name=container_name)
+        container_client = self.blob_service_client.get_container_client(container_name)
+        try:
+            container_client.create_container()
+        except azure.core.exceptions.ResourceExistsError:
+            pass
+
         if not blob_name:
             if use_relative_path_for_blob_name:
                 if relative_start_dir:
@@ -212,14 +232,17 @@ class AzureStorageHelper(object):
             else:
                 path_blob_name = os.path.basename(file_path)
             blob_name = path_blob_name
-        b = self.azure
+        blob_client = container_client.get_blob_client(blob_name)
+
+        # upload_blob fails if the blob already exists
+        if self.exists_in_container(container_name, blob_name):
+            blob_client.delete_blob()
         try:
-            b.create_blob_from_path(
-                container_name, file_path=file_path, blob_name=blob_name
-            )
-            return b.get_blob_properties(container_name, blob_name=blob_name).name
-        except:
-            raise WorkflowError("Error in creating blob. %s" % e.msg)
+            with open(file_path, "rb") as data:
+                blob_client.upload_blob(data, blob_type="BlockBlob")
+            return blob_client.get_blob_properties().name
+        except Exception as e:
+            raise WorkflowError("Error in creating blob. %s" % str(e))
             # return None
 
     def download_from_azure_storage(
@@ -260,24 +283,19 @@ class AzureStorageHelper(object):
         # if the destination path does not exist
         if make_dest_dirs:
             os.makedirs(os.path.dirname(destination_path), exist_ok=True)
-        b = self.azure
-        try:
-            if not create_stub_only:
-                b.get_blob_to_path(
-                    container_name=container_name,
-                    blob_name=blob_name,
-                    file_path=destination_path,
+        b = self.blob_service_client.get_blob_client(container_name, blob_name)
+        if not create_stub_only:
+            with open(destination_path, "wb") as my_blob:
+                blob_data = b.download_blob()
+                blob_data.readinto(my_blob)
+        else:
+            # just create an empty file with the right timestamps
+            ts = b.get_blob_properties().last_modified.timestamp()
+            with open(destination_path, "wb") as fp:
+                os.utime(
+                    fp.name, (ts, ts),
                 )
-            else:
-                # just create an empty file with the right timestamps
-                with open(destination_path, "wb") as fp:
-                    os.utime(
-                        fp.name,
-                        (b.last_modified.timestamp(), b.last_modified.timestamp()),
-                    )
-            return destination_path
-        except:
-            return None
+        return destination_path
 
     def delete_from_container(self, container_name, blob_name):
         """ Delete a file from Azure Storage container
@@ -293,8 +311,8 @@ class AzureStorageHelper(object):
         """
         assert container_name, "container_name must be specified"
         assert blob_name, "blob_name must be specified"
-        b = self.azure
-        b.delete_blob(container_name, blob_name)
+        b = self.blob_service_client.get_blob_client(container_name, blob_name)
+        b.delete_blob()
 
     def exists_in_container(self, container_name, blob_name):
         """ Returns whether the blob exists in the container
@@ -306,12 +324,13 @@ class AzureStorageHelper(object):
             Returns:
                 True | False
         """
-        assert container_name, "container_name must be specified"
+
+        assert (
+            container_name
+        ), 'container_name must be specified (did you try to write to "root" or forgot to set --default-remote-prefix?)'
         assert blob_name, "blob_name must be specified"
-        try:
-            return self.azure.exists(container_name, blob_name)
-        except:
-            return None
+        cc = self.blob_service_client.get_container_client(container_name)
+        return any(True for _ in cc.list_blobs(name_starts_with=blob_name))
 
     def blob_size(self, container_name, blob_name):
         """ Returns the size of a blob
@@ -326,12 +345,8 @@ class AzureStorageHelper(object):
         assert container_name, "container_name must be specified"
         assert blob_name, "blob_name must be specified"
 
-        try:
-            b = self.azure.get_blob_properties(container_name, blob_name)
-            return b.properties.content_length // 1024
-        except:
-            print("blob or container do not exist")
-            return None
+        b = self.blob_service_client.get_blob_client(container_name, blob_name)
+        return b.get_blob_properties().size // 1024
 
     def blob_last_modified(self, container_name, blob_name):
         """ Returns a timestamp of a blob
@@ -345,12 +360,8 @@ class AzureStorageHelper(object):
         """
         assert container_name, "container_name must be specified"
         assert blob_name, "blob_name must be specified"
-        try:
-            b = self.azure.get_blob_properties(container_name, blob_name)
-            return b.properties.last_modified.timestamp()
-        except:
-            print("blob or container do not exist")
-            return None
+        b = self.blob_service_client.get_blob_client(container_name, blob_name)
+        return b.get_blob_properties().last_modified.timestamp()
 
     def list_blobs(self, container_name):
         """ Returns a list of blobs from the container
@@ -362,9 +373,5 @@ class AzureStorageHelper(object):
                 list of blobs
         """
         assert container_name, "container_name must be specified"
-        try:
-            b = self.azure.list_blobs(container_name)
-            return [o.name for o in b]
-        except:
-            print("Did you provide a valid container_name?")
-            return None
+        c = self.blob_service_client.get_container_client(container_name)
+        return [b.name for b in c.list_blobs()]
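
The helper now talks to azure-storage-blob (the v12 API) directly. A minimal sketch of the new client construction, assuming AZ_BLOB_ACCOUNT_URL points at the storage account and AZ_BLOB_CREDENTIAL holds an account key or SAS token (both placeholders here):

    # sketch under the assumptions above; azure-storage-blob must be installed
    import os
    from azure.storage.blob import BlobServiceClient

    client = BlobServiceClient(
        account_url=os.environ["AZ_BLOB_ACCOUNT_URL"],
        credential=os.environ.get("AZ_BLOB_CREDENTIAL"),  # may be None for public containers
    )
    for container in client.list_containers():
        print(container.name)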


=====================================
snakemake/remote/GS.py
=====================================
@@ -18,10 +18,10 @@ try:
     import google.cloud
     from google.cloud import storage
     from google.api_core import retry
-    from crc32c import crc32
+    from google_crc32c import Checksum
 except ImportError as e:
     raise WorkflowError(
-        "The Python 3 packages 'google-cloud-sdk' and `crc32c` "
+        "The Python 3 packages 'google-cloud-sdk' and `google-crc32c` "
         "need to be installed to use GS remote() file functionality. %s" % e.msg
     )
 
@@ -56,7 +56,7 @@ class Crc32cCalculator:
 
     def __init__(self, fileobj):
         self._fileobj = fileobj
-        self.digest = 0
+        self.checksum = Checksum()
 
     def write(self, chunk):
         self._fileobj.write(chunk)
@@ -65,14 +65,14 @@ class Crc32cCalculator:
     def _update(self, chunk):
         """Given a chunk from the read in file, update the hexdigest
         """
-        self.digest = crc32(chunk, self.digest)
+        self.checksum.update(chunk)
 
     def hexdigest(self):
         """Return the hexdigest of the hasher.
            The Base64 encoded CRC32c is in big-endian byte order.
            See https://cloud.google.com/storage/docs/hashes-etags
         """
-        return base64.b64encode(struct.pack(">I", self.digest)).decode("utf-8")
+        return base64.b64encode(self.checksum.digest()).decode("utf-8")
 
 
 class RemoteProvider(AbstractRemoteProvider):
@@ -138,17 +138,17 @@ class RemoteObject(AbstractRemoteObject):
             - cache_mtime
             - cache.size
         """
-        for blob in self.client.list_blobs(
-            self.bucket_name, prefix=os.path.dirname(self.blob.name)
-        ):
+        subfolder = os.path.dirname(self.blob.name)
+        for blob in self.client.list_blobs(self.bucket_name, prefix=subfolder):
             # By way of being listed, it exists. mtime is a datetime object
             name = "{}/{}".format(blob.bucket.name, blob.name)
             cache.exists_remote[name] = True
             cache.mtime[name] = blob.updated
             cache.size[name] = blob.size
-        # Mark bucket as having an inventory, such that this method is
-        # only called once for this bucket.
-        cache.has_inventory.add(self.bucket_name)
+
+        # Mark bucket and prefix as having an inventory, such that this method is
+        # only called once for the subfolder in the bucket.
+        cache.has_inventory.add("%s/%s" % (self.bucket_name, subfolder))
 
     # === Implementations of abstract class members ===
 
@@ -173,8 +173,10 @@ class RemoteObject(AbstractRemoteObject):
         else:
             return self._iofile.size_local
 
-    @retry.Retry(predicate=google_cloud_retry_predicate)
+    @retry.Retry(predicate=google_cloud_retry_predicate, deadline=600)
     def download(self):
+        """Download with maximum retry duration of 600 seconds (10 minutes)
+        """
         if not self.exists():
             return None
 
@@ -188,6 +190,9 @@ class RemoteObject(AbstractRemoteObject):
             self.blob.download_to_file(parser)
         os.sync()
 
+        # **Important** hash can be incorrect or missing if not refreshed
+        self.blob.reload()
+
         # Compute local hash and verify correct
         if parser.hexdigest() != self.blob.crc32c:
             os.remove(self.local_file())
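
For reference, the google-crc32c API used above is incremental: Checksum().update() consumes chunks and digest() returns the checksum in big-endian byte order, so the base64 encoding matches the crc32c hash reported by Google Cloud Storage. A standalone sketch:

    import base64
    import google_crc32c

    checksum = google_crc32c.Checksum()
    checksum.update(b"hello ")
    checksum.update(b"world")
    print(base64.b64encode(checksum.digest()).decode("utf-8"))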


=====================================
snakemake/resources.py
=====================================
@@ -7,9 +7,13 @@ class DefaultResources:
 
         def fallback(val):
             def callable(wildcards, input, attempt, threads, rulename):
-                value = eval(
-                    val, {"input": input, "attempt": attempt, "threads": threads}
-                )
+                try:
+                    value = eval(
+                        val, {"input": input, "attempt": attempt, "threads": threads}
+                    )
+                # Triggers for string arguments like n1-standard-4
+                except NameError:
+                    return val
                 return value
 
             return callable
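
To see what the new except branch catches, with hypothetical values: a numeric expression still evaluates, while a bare machine-type string parses as a subtraction of undefined names and raises NameError, so the raw string is returned unchanged:

    value_ok = eval("2 * attempt", {"attempt": 3})   # -> 6
    try:
        eval("n1-standard-4", {"attempt": 3})        # parsed as n1 - standard - 4
    except NameError:
        value = "n1-standard-4"                      # the fallback returns the string as-is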


=====================================
test-environment.yml
=====================================
@@ -30,8 +30,7 @@ dependencies:
   - google-cloud-storage
   - google-api-python-client
   - oauth2client
-  - azure-storage
-  - azure-storage-common
+  - azure-storage-blob
   - ratelimiter
   - configargparse
   - appdirs


=====================================
tests/common.py
=====================================
@@ -147,6 +147,7 @@ def run(
         config=config,
         verbose=True,
         conda_frontend=conda_frontend,
+        container_image=container_image,
         **params
     )
 


=====================================
tests/test_remote_azure/Snakefile
=====================================
@@ -2,16 +2,11 @@ import os
 import fnmatch
 import snakemake
 from snakemake.exceptions import MissingInputException
-from snakemake.remote.AzureStorage import RemoteProvider as AzureRemoteProvider
+from snakemake.remote.AzBlob import RemoteProvider as AzureRemoteProvider
 
 # setup Azure Storage for remote access
-# for testing these variable can be added to CircleCI
-account_name=os.environ['AZURE_ACCOUNT']
-account_key=os.environ.get('AZURE_KEY')
-sas_token=os.environ.get('SAS_TOKEN')
-assert sas_token or account_key, ("Either SAS_TOKEN or AZURE_KEY have to be set")
-AS = AzureRemoteProvider(account_name=account_name,
-    account_key=account_key, sas_token=sas_token)
+# for testing, set AZ_AZURE_ACCOUNT and AZ_ACCOUNT_KEY or AZ_SAS_TOKEN as env vars to CircleCI
+AS = AzureRemoteProvider()
 
 
 rule upload_to_azure_storage:



View it on GitLab: https://salsa.debian.org/med-team/snakemake/-/commit/ede461d574af316927b0b43b0637e75795ba0d17
