Commits on Source (10)
[run]
source = snakemake
parallel = True
[report]
omit = tests/*
......@@ -55,16 +55,19 @@ jobs:
source activate snakemake
# run tests
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=${{ secrets.AWS_ACCESS_KEY_ID }}
export AWS_SECRET_ACCESS_KEY=${{ secrets.AWS_SECRET_ACCESS_KEY }}
coverage run -m pytest tests/test*.py -v -x
# collect coverage report
coverage combine
coverage xml
#coverage combine
#coverage xml
- name: Upload coverage report
uses: codecov/codecov-action@v1.0.3
with:
token: ${{secrets.CODECOV_TOKEN}}
#- name: Upload coverage report
#uses: codecov/codecov-action@v1.0.3
#with:
#token: ${{secrets.CODECOV_TOKEN}}
- name: Build container image
run: docker build .
[5.8.1] - 2019-11-15
====================
Changed
-------
- Fixed a bug by adding a missing module.
[5.8.0] - 2019-11-15
====================
Added
-----
- Blockchain-based caching between workflows (in collaboration with Sven Nahnsen from QBiC), see `the docs <https://snakemake.readthedocs.io/en/v5.8.0/executing/caching.html>`_.
- New flag --skip-cleanup-scripts, which causes temporary scripts (coming from the script or wrapper directive) to be kept instead of deleted (by Vanessa Sochat).
Changed
-------
- Various bug fixes.
[5.7.4] - 2019-10-23
====================
Changed
......
Copyright (c) 2016 Johannes Köster <johannes.koester@tu-dortmund.de>
Copyright (c) 2012-2019 Johannes Köster <johannes.koester@tu-dortmund.de>
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
......
[![CircleCI](https://circleci.com/gh/snakemake/snakemake/tree/master.svg?style=shield)](https://circleci.com/gh/snakemake/snakemake/tree/master)
[![GitHub actions status](https://github.com/snakemake/snakemake/workflows/CI/badge.svg?branch=master)](https://github.com/snakemake/snakemake/actions?query=branch%3Amaster+workflow%3ACI)
[![Sonarcloud Status](https://sonarcloud.io/api/project_badges/measure?project=snakemake_snakemake&metric=alert_status)](https://sonarcloud.io/dashboard?id=snakemake_snakemake)
[![Bioconda](https://img.shields.io/conda/dn/bioconda/snakemake.svg?label=Bioconda)](https://bioconda.github.io/recipes/snakemake/README.html)
[![Pypi](https://img.shields.io/pypi/pyversions/snakemake.svg)](https://pypi.org/project/snakemake)
......@@ -11,7 +11,7 @@
The Snakemake workflow management system is a tool to create **reproducible and scalable** data analyses.
Workflows are described via a human readable, Python based language.
They can be seamlessly scaled to server, cluster, grid and cloud environments, without the need to modify the workflow definition.
They can be seamlessly scaled to server, cluster, grid and cloud environments without the need to modify the workflow definition.
Finally, Snakemake workflows can entail a description of required software, which will be automatically deployed to any execution environment.
**Homepage: https://snakemake.readthedocs.io**
......
snakemake (5.7.4-2) UNRELEASED; urgency=medium
snakemake (5.8.1-1) unstable; urgency=medium
* Team upload.
* Quilt isn't needed for the autopkgtests, so drop it
-- Michael R. Crusoe <michael.crusoe@gmail.com> Wed, 30 Oct 2019 17:37:40 +0100
-- Michael R. Crusoe <michael.crusoe@gmail.com> Fri, 15 Nov 2019 15:55:01 +0100
snakemake (5.7.4-1) unstable; urgency=medium
......
......@@ -36,6 +36,9 @@ Build-Depends: debhelper-compat (= 12),
python3-sphinx-rtd-theme,
python3-wrapt,
python3-yaml,
python3-jinja2,
python3-pygments,
python3-pygraphviz,
r-cran-rmarkdown,
stress
# python3-irodsclient, # when that enters testing
......
......@@ -22,7 +22,7 @@ Description: Avoid privacy breach
<script type="text/javascript" src="http://cdnjs.cloudflare.com/ajax/libs/bootstrap-select/1.5.4/bootstrap-select.min.js"></script>
--- snakemake.orig/docs/index.rst
+++ snakemake/docs/index.rst
@@ -4,25 +4,25 @@
@@ -4,13 +4,13 @@
Snakemake
=========
......@@ -39,21 +39,15 @@ Description: Avoid privacy breach
:target: https://pypi.python.org/pypi/snakemake
.. image:: https://img.shields.io/docker/cloud/build/snakemake/snakemake
:target: https://hub.docker.com/r/snakemake/snakemake
-.. image:: https://circleci.com/gh/snakemake/snakemake/tree/master.svg?style=shield
+.. image:: file:///usr/share/doc/snakemake/html/_svg/master.svg
:target: https://circleci.com/gh/snakemake/snakemake/tree/master
@@ -19,7 +19,7 @@
.. image:: https://github.com/snakemake/snakemake/workflows/CI/badge.svg?branch=master
:target: https://github.com/snakemake/snakemake/actions?query=branch%3Amaster+workflow%3ACI
-.. image:: https://img.shields.io/badge/stack-overflow-orange.svg
+.. image:: file:///usr/share/doc/snakemake/html/_svg/stack-overflow-orange.svg
:target: https://stackoverflow.com/questions/tagged/snakemake
-.. image:: https://img.shields.io/twitter/follow/johanneskoester.svg?style=social&label=Follow
+.. image:: file:///usr/share/doc/snakemake/html/_svg/johanneskoester_follow.svg
:target: https://twitter.com/search?l=&q=%23snakemake%20from%3Ajohanneskoester
.. image:: https://img.shields.io/github/stars/snakemake/snakemake?style=social
.. image:: https://img.shields.io/twitter/follow/johanneskoester.svg?style=social&label=Follow
--- snakemake.orig/docs/project_info/citations.rst
+++ snakemake/docs/project_info/citations.rst
@@ -38,7 +38,7 @@
......
......@@ -2,32 +2,14 @@ Description: Make the build reproducible
Author: Chris Lamb <lamby@debian.org>
Last-Update: 2019-07-15
--- snakemake.orig/snakemake/report/__init__.py
+++ snakemake/snakemake/report/__init__.py
@@ -102,12 +102,14 @@
def report(
text,
path,
- stylesheet=os.path.join(os.path.dirname(__file__), "report.css"),
+ stylesheet=None,
defaultenc="utf8",
template=None,
metadata=None,
**files
):
+ if stylesheet is None:
+ stylesheet = os.path.join(os.path.dirname(__file__), "report.css")
outmime, _ = mimetypes.guess_type(path)
if outmime != "text/html":
raise ValueError("Path to report output has to be an HTML file.")
--- snakemake.orig/snakemake/utils.py
+++ snakemake/snakemake/utils.py
@@ -248,6 +248,8 @@
metadata (str): E.g. an optional author name or email address.
@@ -276,6 +276,8 @@
Args:
code (str): R code to be executed
"""
+ if stylesheet is None:
+ stylesheet = os.path.join(os.path.dirname(__file__), "report.css")
try:
import snakemake.report
import rpy2.robjects as robjects
except ImportError:
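The pattern used in the patch above (a ``None`` sentinel resolved at call time) is the standard way to keep an absolute build path out of a function's signature, which is what makes a byte-for-byte reproducible build possible. A minimal standalone illustration (hypothetical function, not the real ``report``):

```python
import os

def resolve_stylesheet(stylesheet=None):
    # Evaluating the default inside the body means the path is computed at
    # call time on the user's machine, instead of being frozen into the
    # function signature at build time (where it would embed the build path
    # and differ between build environments).
    if stylesheet is None:
        stylesheet = os.path.join(os.path.dirname(os.path.abspath(__file__)), "report.css")
    return stylesheet
```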
......@@ -34,7 +34,7 @@ Subject: Use python3 instead of python
+ shell: "python3 -m snakemake -s Snakefile_inner --list-untracked 2> {output}"
--- snakemake.orig/tests/tests.py
+++ snakemake/tests/tests.py
@@ -929,7 +929,7 @@
@@ -949,7 +949,7 @@
workdir = dpath("test_convert_to_cwl")
# run(workdir, export_cwl=os.path.join(workdir, "workflow.cwl"))
shell(
......@@ -43,8 +43,8 @@ Subject: Use python3 instead of python
src=os.getcwd(),
)
shell("cd {workdir}; cwltool --singularity workflow.cwl")
@@ -985,7 +985,7 @@
)
@@ -1002,7 +1002,7 @@
def test_tibanna():
workdir = dpath("test_tibanna")
- subprocess.check_call(["python", "cleanup.py"], cwd=workdir)
......@@ -52,7 +52,7 @@ Subject: Use python3 instead of python
run(
workdir,
use_conda=True,
@@ -1025,7 +1025,7 @@
@@ -1041,7 +1041,7 @@
pdf_path = "fg.pdf"
# make sure the calls work
......
......@@ -7,7 +7,7 @@ export HOME=$(CURDIR)/fakehome
export PYBUILD_NAME=snakemake
export PYBUILD_DESTDIR_python3=debian/snakemake
export PYBUILD_BEFORE_TEST_python3=chmod +x {dir}/bin/snakemake; cp -r {dir}/bin {dir}/tests {build_dir}
export PYBUILD_TEST_ARGS=python{version} -m pytest tests/test*.py -n auto -k 'not report and not ancient and not test_script and not default_remote and not issue635 and not convert_to_cwl and not issue1083 and not issue1092 and not issue1093 and not test_remote and not test_default_resources'
export PYBUILD_TEST_ARGS=python{version} -m pytest tests/test*.py -n auto -k 'not report and not ancient and not test_script and not default_remote and not issue635 and not convert_to_cwl and not issue1083 and not issue1092 and not issue1093 and not test_remote and not test_default_resources and not test_tibanna and not test_github_issue78 and not test_output_file_cache_remote'
# test_report
# test_ancient
......
......@@ -17,5 +17,5 @@ cd "${AUTOPKGTEST_TMP}"
export HOME="${AUTOPKGTEST_TMP}"
python3 -m pytest ${ROOT}/tests/test*.py -n auto -k 'not report and not ancient and not test_script and not default_remote and not issue635 and not convert_to_cwl and not issue1083 and not issue1092 and not issue1093 and not test_remote and not test_default_resources and not test_singularity and not test_singularity_conda and not test_cwl_singularity and not test_cwl and not test_url_include'
python3 -m pytest ${ROOT}/tests/test*.py -n auto -k 'not report and not ancient and not test_script and not default_remote and not issue635 and not convert_to_cwl and not issue1083 and not issue1092 and not issue1093 and not test_remote and not test_default_resources and not test_singularity and not test_singularity_conda and not test_cwl_singularity and not test_cwl and not test_url_include and not test_tibanna and not test_github_issue78 and not test_output_file_cache_remote'
==========================================================
Caching and reusing intermediate results between workflows
==========================================================
Within certain data analysis fields, there are intermediate results that recur in exactly the same way across many analyses.
For example, in bioinformatics, reference genomes or annotations are downloaded, and read mapping indexes are built.
Since such steps are independent of the actual data or measurements that are analyzed, yet computationally expensive or time-consuming to conduct, it has been common practice to externalize their computation and assume the presence of the resulting files before execution of a workflow.
From version 5.8.0 on, Snakemake offers a way to keep those steps inside the actual analysis without requiring redundant computations.
By hashing all steps, parameters, software stacks (in terms of conda environments or containers), and raw input required up to a certain step in a `blockchain <https://en.wikipedia.org/wiki/Blockchain>`_, Snakemake is able to recognize **before** the computation whether a certain result is already available in a central cache on the same system.
**Note that this is explicitly intended for caching results between workflows! There is no need to use this feature to avoid redundant computations within a workflow. Snakemake does this already out of the box.**
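The hashing idea above can be pictured with a minimal sketch. This is an illustration of the principle only, not Snakemake's actual implementation; the function name and the exact hash composition are hypothetical:

```python
import hashlib

def cache_key(input_files, params, software_env):
    # Hypothetical sketch: fold the content hashes of all raw inputs, the
    # parameter values, and a description of the software stack (e.g. a
    # conda environment spec) into a single provenance hash.
    h = hashlib.sha256()
    for path in sorted(input_files):
        with open(path, "rb") as f:
            h.update(hashlib.sha256(f.read()).digest())
    h.update(repr(sorted(params.items())).encode())
    h.update(software_env.encode())
    return h.hexdigest()
```

If two workflows arrive at the same key, the cached result can be copied or linked into place instead of being recomputed.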
Such caching has to be explicitly activated per rule, which can be done via the command line interface.
For example,
.. code-block:: console
$ export SNAKEMAKE_OUTPUT_CACHE=/mnt/snakemake-cache/
$ snakemake --cache download_reference create_index
would instruct Snakemake to cache and reuse the results of the rules ``download_reference`` and ``create_index``.
The environment variable definition in the first line (defining the location of the cache) should, in practice, be done only once and system-wide.
When Snakemake is executed without a shared filesystem (e.g., in the cloud, see :ref:`cloud`), the environment variable has to point to a location compatible with the given remote provider (e.g. an S3 or Google Storage bucket).
In any case, the provided location should be shared between all workflows of your group, institute or computing environment, in order to benefit from the reuse of previously obtained intermediate results.
Note that only rules with just a single output file are eligible for caching.
Also note that rules must retrieve all their parameters via the ``params`` directive (input files are the exception, as they are hashed directly).
Directly using ``wildcards``, ``config`` or any global variable in the shell command or script is not allowed, because these are not captured in the hash (otherwise, reuse would be unnecessarily limited).
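To make these constraints concrete, a rule eligible for caching might look like the following sketch (rule, file and tool names are hypothetical):

```
rule create_index:
    input:
        "refs/genome.fa"
    output:
        "refs/genome.idx"  # exactly one output file
    params:
        k=config["kmer_size"]  # config values are passed via params, not used directly
    shell:
        "indexer --kmer {params.k} {input} > {output}"
```

Because the config value flows through ``params``, it is part of the hash, and two workflows indexing the same reference with the same ``k`` can share the cached result.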
Also note that Snakemake will store everything in the cache as readable and writeable for **all users** on the system (except in the remote case, where permissions are not enforced and depend on your storage configuration).
Hence, caching is not intended for private data, just for steps that deal with publicly available resources.
Finally, be aware that the implementation has to be considered **experimental** until this note is removed.
\ No newline at end of file
.. _executable:
======================
Command line interface
======================
This part of the documentation describes the ``snakemake`` executable. Snakemake
is primarily a command-line tool, so the ``snakemake`` executable is the primary way
to execute, debug, and visualize workflows.
.. user_manual-snakemake_options:
-----------------------------
Useful Command Line Arguments
-----------------------------
If called without parameters, i.e.
.. code-block:: console
$ snakemake
Snakemake tries to execute the workflow specified in a file called ``Snakefile`` in the same directory (alternatively, a Snakefile can be given via the parameter ``-s``).
By issuing
.. code-block:: console
$ snakemake -n
a dry-run can be performed.
This is useful to test if the workflow is defined properly and to estimate the amount of needed computation.
Further, the reason for each rule execution can be printed via
.. code-block:: console
$ snakemake -n -r
Importantly, Snakemake can automatically determine which parts of the workflow can be run in parallel.
By specifying the number of available cores, i.e.
.. code-block:: console
$ snakemake --cores 4
one can tell Snakemake to use up to 4 cores and solve a binary knapsack problem to optimize the scheduling of jobs.
If the number is omitted (i.e., only ``--cores`` is given), the number of used cores is determined as the number of available CPU cores in the machine.
Dealing with very large workflows
---------------------------------
If your workflow has a lot of jobs, Snakemake might need some time to infer the dependencies (the job DAG) and which jobs are actually required to run.
The major bottleneck involved is the filesystem, which has to be queried for existence and modification dates of files.
To overcome this issue, Snakemake allows running large workflows in batches.
This way, fewer files have to be evaluated at once, and therefore the job DAG can be inferred faster.
By running
.. code-block:: console
$ snakemake --cores 4 --batch myrule=1/3
you instruct Snakemake to compute only the first of three batches of the inputs of the rule ``myrule``.
To generate the second batch, run
.. code-block:: console
$ snakemake --cores 4 --batch myrule=2/3
Finally, when running
.. code-block:: console
$ snakemake --cores 4 --batch myrule=3/3
Snakemake will continue past the rule ``myrule``, because all of its input files have now been generated, and complete the workflow.
A good choice for the batching rule is one with many input files and upstream jobs, for example a central aggregation step within your workflow.
We advise all workflow developers to inform potential users of the best-suited batching rule.
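The semantics of ``--batch`` can be pictured as a deterministic partition of a rule's input list into contiguous batches. A rough sketch of such a partition (hypothetical helper, not Snakemake's actual code):

```python
def batch_slice(items, batch, total):
    """Return the `batch`-th of `total` contiguous batches of `items` (1-based)."""
    size = len(items) // total
    start = (batch - 1) * size
    # The last batch absorbs any remainder, so every item is covered exactly once.
    end = start + size if batch < total else len(items)
    return items[start:end]
```

Running the batches 1/3, 2/3, and 3/3 in sequence therefore covers all inputs of the rule without overlap.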
.. _profiles:
--------
Profiles
--------
Adapting Snakemake to a particular environment can entail many flags and options.
Therefore, since Snakemake 4.1, it is possible to specify a configuration profile
to be used to obtain default options:
.. code-block:: console
$ snakemake --profile myprofile
Here, a folder ``myprofile`` is searched for in per-user and global configuration directories (on Linux, this will be ``$HOME/.config/snakemake`` and ``/etc/xdg/snakemake``; the exact locations for your system are listed in ``snakemake --help``).
Alternatively, an absolute or relative path to the folder can be given.
The profile folder is expected to contain a file ``config.yaml`` that defines default values for the Snakemake command line arguments.
For example, the file
.. code-block:: yaml
cluster: qsub
jobs: 100
would set up Snakemake to always submit to the cluster via the ``qsub`` command, and to never use more than 100 parallel jobs in total.
Under https://github.com/snakemake-profiles/doc, you can find publicly available profiles.
Feel free to contribute your own.
The profile folder can additionally contain auxiliary files, e.g., jobscripts, or any kind of wrappers.
See https://github.com/snakemake-profiles/doc for examples.
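As a rule of thumb, any long-form command line option can become a key in ``config.yaml`` by dropping the leading dashes. A slightly richer (hypothetical) profile could look like:

```yaml
cluster: "qsub -l nodes=1:ppn={threads}"  # submission command template
jobs: 100                                 # maps to --jobs 100
keep-going: true                          # maps to --keep-going
rerun-incomplete: true                    # maps to --rerun-incomplete
```

The flag names and ``qsub`` resource string above are illustrative; consult ``snakemake --help`` for the options valid in your version.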
.. _all_options:
-----------
All Options
-----------
.. argparse::
:module: snakemake
:func: get_argument_parser
:prog: snakemake
All command line options can be printed by calling ``snakemake -h``.
.. _getting_started-bash_completion:
---------------
Bash Completion
---------------
Snakemake supports bash completion for filenames, rulenames and arguments.
To enable it globally, just append
.. code-block:: bash
`snakemake --bash-completion`
including the backticks, to your ``.bashrc``.
This only works if the ``snakemake`` command is in your path.
\ No newline at end of file
.. _executable:
===================
Executing Snakemake
===================
This part of the documentation describes the ``snakemake`` executable. Snakemake
is primarily a command-line tool, so the ``snakemake`` executable is the primary way
to execute, debug, and visualize workflows.
.. user_manual-snakemake_options:
-----------------------------
Useful Command Line Arguments
-----------------------------
If called without parameters, i.e.
.. code-block:: console
$ snakemake
Snakemake tries to execute the workflow specified in a file called ``Snakefile`` in the same directory (alternatively, a Snakefile can be given via the parameter ``-s``).
By issuing
.. code-block:: console
$ snakemake -n
a dry-run can be performed.
This is useful to test if the workflow is defined properly and to estimate the amount of needed computation.
Further, the reason for each rule execution can be printed via
.. code-block:: console
$ snakemake -n -r
Importantly, Snakemake can automatically determine which parts of the workflow can be run in parallel.
By specifying the number of available cores, i.e.
.. code-block:: console
$ snakemake -j 4
one can tell Snakemake to use up to 4 cores and solve a binary knapsack problem to optimize the scheduling of jobs.
If the number is omitted (i.e., only ``-j`` is given), the number of used cores is determined as the number of available CPU cores in the machine.
===========================
Cluster and cloud execution
===========================
.. _cloud:
-------------
Cloud Support
......@@ -236,6 +192,8 @@ a job intends to use, such that Tibanna can allocate it to the most cost-effecti
cloud compute instance available.
.. _cluster:
-----------------
Cluster Execution
-----------------
......@@ -316,38 +274,6 @@ When executing a workflow on a cluster using the ``--cluster`` parameter (see be
os.system("qsub -t {threads} {script}".format(threads=threads, script=jobscript))
.. _profiles:
--------
Profiles
--------
Adapting Snakemake to a particular environment can entail many flags and options.
Therefore, since Snakemake 4.1, it is possible to specify a configuration profile
to be used to obtain default options:
.. code-block:: console
$ snakemake --profile myprofile
Here, a folder ``myprofile`` is searched for in per-user and global configuration directories (on Linux, this will be ``$HOME/.config/snakemake`` and ``/etc/xdg/snakemake``; the exact locations for your system are listed in ``snakemake --help``).
Alternatively, an absolute or relative path to the folder can be given.
The profile folder is expected to contain a file ``config.yaml`` that defines default values for the Snakemake command line arguments.
For example, the file
.. code-block:: yaml
cluster: qsub
jobs: 100
would set up Snakemake to always submit to the cluster via the ``qsub`` command, and to never use more than 100 parallel jobs in total.
Under https://github.com/snakemake-profiles/doc, you can find publicly available profiles.
Feel free to contribute your own.
The profile folder can additionally contain auxiliary files, e.g., jobscripts, or any kind of wrappers.
See https://github.com/snakemake-profiles/doc for examples.
.. _getting_started-visualization:
-------------
......@@ -375,60 +301,3 @@ To visualize the whole DAG regardless of the eventual presence of files, the ``f
$ snakemake --forceall --dag | dot -Tpdf > dag.pdf
Of course the visual appearance can be modified by providing further command line arguments to ``dot``.
.. _cwl_export:
----------
CWL export
----------
Snakemake workflows can be exported to `CWL <http://www.commonwl.org/>`_, such that they can be executed in any `CWL-enabled workflow engine <https://www.commonwl.org/#Implementations>`_.
Since CWL is less powerful for expressing workflows than Snakemake (most importantly, Snakemake offers more flexible scatter-gather patterns, because full Python can be used), export works such that every Snakemake job is encoded into a single step in the CWL workflow.
Moreover, every step of that workflow calls Snakemake again to execute the job. The latter enables advanced Snakemake features like scripts, benchmarks and remote files to work inside CWL.
So, when exporting, keep in mind that the resulting CWL file can become huge, depending on the number of jobs in your workflow.
To export a Snakemake workflow to CWL, simply run
.. code-block:: console
$ snakemake --export-cwl workflow.cwl
The resulting workflow will by default use the `Snakemake docker image <https://hub.docker.com/r/snakemake/snakemake>`_ for every step, but this behavior can be overwritten via the CWL execution environment.
Then, the workflow can be executed in the same working directory with, e.g.,
.. code-block:: console
$ cwltool workflow.cwl
Note that, due to limitations in CWL, it is currently not possible to prevent target files (output files of target jobs) from being written directly to the working directory, regardless of their relative paths in the Snakefile.
Note also that export is not possible if the workflow contains :ref:`dynamic output files <snakefiles-dynamic_files>` or output files with absolute paths.
.. _all_options:
-----------
All Options
-----------
.. argparse::
:module: snakemake
:func: get_argument_parser
:prog: snakemake
All command line options can be printed by calling ``snakemake -h``.
.. _getting_started-bash_completion:
---------------
Bash Completion
---------------
Snakemake supports bash completion for filenames, rulenames and arguments.
To enable it globally, just append
.. code-block:: bash
`snakemake --bash-completion`
including the backticks, to your ``.bashrc``.
This only works if the ``snakemake`` command is in your path.
================
Interoperability
================
.. _cwl_export:
----------
CWL export
----------
Snakemake workflows can be exported to `CWL <http://www.commonwl.org/>`_, such that they can be executed in any `CWL-enabled workflow engine <https://www.commonwl.org/#Implementations>`_.
Since CWL is less powerful for expressing workflows than Snakemake (most importantly, Snakemake offers more flexible scatter-gather patterns, because full Python can be used), export works such that every Snakemake job is encoded into a single step in the CWL workflow.
Moreover, every step of that workflow calls Snakemake again to execute the job. The latter enables advanced Snakemake features like scripts, benchmarks and remote files to work inside CWL.
So, when exporting, keep in mind that the resulting CWL file can become huge, depending on the number of jobs in your workflow.
To export a Snakemake workflow to CWL, simply run
.. code-block:: console
$ snakemake --export-cwl workflow.cwl
The resulting workflow will by default use the `Snakemake docker image <https://hub.docker.com/r/snakemake/snakemake>`_ for every step, but this behavior can be overwritten via the CWL execution environment.
Then, the workflow can be executed in the same working directory with, e.g.,
.. code-block:: console
$ cwltool workflow.cwl
Note that, due to limitations in CWL, it is currently not possible to prevent target files (output files of target jobs) from being written directly to the working directory, regardless of their relative paths in the Snakefile.
Note also that export is not possible if the workflow contains :ref:`dynamic output files <snakefiles-dynamic_files>` or output files with absolute paths.
\ No newline at end of file
......@@ -253,7 +253,7 @@ Assuming that the above file is saved as ``tex.rules``, the actual documents are
FIGURES = ['fig1.pdf']
include:
'tex.smrules'
'tex.rules'
rule all:
input:
......
......@@ -16,8 +16,8 @@ Snakemake
.. image:: https://img.shields.io/docker/cloud/build/snakemake/snakemake
:target: https://hub.docker.com/r/snakemake/snakemake
.. image:: https://circleci.com/gh/snakemake/snakemake/tree/master.svg?style=shield
:target: https://circleci.com/gh/snakemake/snakemake/tree/master
.. image:: https://github.com/snakemake/snakemake/workflows/CI/badge.svg?branch=master
:target: https://github.com/snakemake/snakemake/actions?query=branch%3Amaster+workflow%3ACI
.. image:: https://img.shields.io/badge/stack-overflow-orange.svg
:target: https://stackoverflow.com/questions/tagged/snakemake
......@@ -37,7 +37,7 @@ Workflows are described via a human readable, Python based language.
They can be seamlessly scaled to server, cluster, grid and cloud environments, without the need to modify the workflow definition.
Finally, Snakemake workflows can entail a description of required software, which will be automatically deployed to any execution environment.
Snakemake is **highly popular** with, on average, `a new citation every few days <https://badge.dimensions.ai/details/id/pub.1018944052>`_.
Snakemake is **highly popular**, with `~3 new citations per week <https://badge.dimensions.ai/details/id/pub.1018944052>`_.
.. _manual-quick_example:
......@@ -53,17 +53,28 @@ Rules describe how to create **output files** from **input files**.
rule targets:
input:
"plots/dataset1.pdf",
"plots/dataset2.pdf"
"plots/myplot.pdf"
rule plot:
rule transform:
input:
"raw/{dataset}.csv"
output:
"plots/{dataset}.pdf"
"transformed/{dataset}.csv"
singularity:
"docker://somecontainer:v1.0"
shell:
"somecommand {input} {output}"
rule aggregate_and_plot:
input:
expand("transformed/{dataset}.csv", dataset=[1, 2])
output:
"plots/myplot.pdf"
conda:
"envs/matplotlib.yaml"
script:
"scripts/plot.py"
* Similar to GNU Make, you specify targets in terms of a pseudo-rule at the top.
* For each target and intermediate file, you create rules that define how they are created from input files.
......@@ -197,7 +208,10 @@ Please consider to add your own.
:hidden:
:maxdepth: 1
executable
executing/cli
executing/cluster-cloud
executing/caching
executing/interoperability
.. toctree::
:caption: Defining workflows
......
......@@ -44,7 +44,7 @@ Contributing a new cluster or cloud execution backend
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Execution backends are added by implementing a so-called ``Executor``.
All executors are located in `snakemake/executors.py <https://github.com/snakemake/snakemake/src/master/snakemake/executors.py>`_.
All executors are located in `snakemake/executors.py <https://github.com/snakemake/snakemake/tree/master/snakemake/executors.py>`_.
In order to implement a new executor, you have to inherit from the class ``ClusterExecutor``.
Below you find a skeleton
......
......@@ -5,7 +5,7 @@ Reports
-------
From Snakemake 5.1 on, it is possible to automatically generate detailed self-contained HTML reports that encompass runtime statistics, provenance information, workflow topology and results.
**A realistic example report from a real workflow can be found `here <https://koesterlab.github.io/resources/report.html>`_.**
**A realistic example report from a real workflow can be found** `here <https://koesterlab.github.io/resources/report.html>`_.
For including results into the report, the Snakefile has to be annotated with additional information.
Each output file that shall be part of the report has to be marked with the ``report`` flag, which optionally points to a caption in `restructured text format <http://docutils.sourceforge.net/rst.html>`_ and allows to define a ``category`` for grouping purposes.
......