Commits on Source (7)
......@@ -175,4 +175,18 @@ mypy3: ${PYSOURCES}
--warn-redundant-casts \
cwltool
release: FORCE
./release-test.sh
. testenv2/bin/activate && \
testenv2/src/${MODULE}/setup.py sdist bdist_wheel && \
pip install twine && \
twine upload testenv2/src/${MODULE}/dist/* && \
git tag ${VERSION} && git push --tags
FORCE:
# Use this to print the value of a Makefile variable
# Example `make print-VERSION`
# From https://www.cmcrossroads.com/article/printing-value-makefile-variable
print-% : ; @echo $* = $($*)
Metadata-Version: 1.1
Name: cwltool
Version: 1.0.20171221100033
Version: 1.0.20180211183944
Summary: Common workflow language reference implementation
Home-page: https://github.com/common-workflow-language/cwltool
Author: Common workflow language working group
......@@ -123,20 +123,31 @@ Description: ==================================================================
.. |Build Status| image:: https://ci.commonwl.org/buildStatus/icon?job=cwltool-conformance
:target: https://ci.commonwl.org/job/cwltool-conformance/
Running user-space implementations of Docker
--------------------------------------------
Using user-space replacements for Docker
----------------------------------------
Some compute environments disallow user-space installation of Docker due to incompatibilities in libraries or to meet security requirements. The CWL reference runner supports using a user-space implementation with the ``--user-space-docker-cmd`` option.
Some shared computing environments don't support Docker software containers for technical or policy reasons.
As a workaround, the CWL reference runner supports using alternative ``docker`` implementations on Linux
with the ``--user-space-docker-cmd`` option.
Example using `dx-docker` (https://wiki.dnanexus.com/Developer-Tutorials/Using-Docker-Images):
One such "user space" friendly Docker replacement is ``udocker`` (https://github.com/indigo-dc/udocker); another
is ``dx-docker`` (https://wiki.dnanexus.com/Developer-Tutorials/Using-Docker-Images).
For use on Linux, install the DNAnexus toolkit (see https://wiki.dnanexus.com/Downloads for instructions).
udocker installation: https://github.com/indigo-dc/udocker/blob/master/doc/installation_manual.md#22-install-from-indigo-datacloud-repositories
dx-docker installation: start with the DNAnexus toolkit (see https://wiki.dnanexus.com/Downloads for instructions).
Run ``cwltool`` just as you normally would, but with the new option, e.g. from the conformance tests:
.. code:: bash
cwltool --user-space-docker-cmd=dx-docker --outdir=/tmp/tmpidytmp v1.0/test-cwl-out2.cwl v1.0/empty.json
cwltool --user-space-docker-cmd=udocker https://raw.githubusercontent.com/common-workflow-language/common-workflow-language/master/v1.0/v1.0/test-cwl-out2.cwl https://github.com/common-workflow-language/common-workflow-language/blob/master/v1.0/v1.0/empty.json
or
.. code:: bash
cwltool --user-space-docker-cmd=dx-docker https://raw.githubusercontent.com/common-workflow-language/common-workflow-language/master/v1.0/v1.0/test-cwl-out2.cwl https://github.com/common-workflow-language/common-workflow-language/blob/master/v1.0/v1.0/empty.json
Tool or workflow loading from remote or local locations
-------------------------------------------------------
......@@ -234,7 +245,7 @@ Description: ==================================================================
.. code:: yaml
- type: module
- type: modules
The outer list indicates that one plugin is being enabled; the plugin parameters are
defined as a dictionary for this one list item. There is only one required parameter
......@@ -409,18 +420,17 @@ Description: ==================================================================
underlying workflow, cwltool supports requirement "overrides".
The format of the "overrides" object is a mapping of item identifier (workflow,
workflow step, or command line tool) followed by a list of ProcessRequirements
that should be applied.
workflow step, or command line tool) to the process requirements that should be applied.
.. code:: yaml
cwltool:overrides:
echo.cwl:
- class: EnvVarRequirement
requirements:
EnvVarRequirement:
envDef:
MESSAGE: override_value
Overrides can be specified either on the command line or as part of the job
input document. Workflow steps are identified using the name of the workflow
file followed by the step name as a document fragment identifier "#id".
......@@ -436,7 +446,8 @@ Description: ==================================================================
input_parameter2: value2
cwltool:overrides:
workflow.cwl#step1:
- class: EnvVarRequirement
requirements:
EnvVarRequirement:
envDef:
MESSAGE: override_value
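The same overrides can also be applied programmatically. A minimal sketch
using this release's ``cwltool.main`` entry point (the file names
``overrides.yml``, ``workflow.cwl``, and ``job.yml`` are hypothetical):

.. code:: python

    from cwltool import main

    # Equivalent to: cwltool --overrides overrides.yml workflow.cwl job.yml
    exit_code = main.main(["--overrides", "overrides.yml",
                           "workflow.cwl", "job.yml"])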
......
......@@ -7,6 +7,7 @@ setup.cfg
setup.py
cwltool/__init__.py
cwltool/__main__.py
cwltool/argparser.py
cwltool/builder.py
cwltool/cwlNodeEngine.js
cwltool/cwlNodeEngineJSConsole.js
......
from __future__ import absolute_import
from __future__ import print_function
import argparse
import logging
import os
from typing import (Any, AnyStr, Dict, List, Sequence, Text, Union, cast)
from schema_salad.ref_resolver import file_uri
from .process import (Process, shortname)
from .resolver import ga4gh_tool_registries
from .software_requirements import (SOFTWARE_REQUIREMENTS_ENABLED)
_logger = logging.getLogger("cwltool")
defaultStreamHandler = logging.StreamHandler()
_logger.addHandler(defaultStreamHandler)
_logger.setLevel(logging.INFO)
def arg_parser(): # type: () -> argparse.ArgumentParser
parser = argparse.ArgumentParser(description='Reference executor for Common Workflow Language')
parser.add_argument("--basedir", type=Text)
parser.add_argument("--outdir", type=Text, default=os.path.abspath('.'),
help="Output directory, default current directory")
parser.add_argument("--no-container", action="store_false", default=True,
help="Do not execute jobs in a Docker container, even when specified by the CommandLineTool",
dest="use_container")
parser.add_argument("--preserve-environment", type=Text, action="append",
help="Preserve specific environment variable when running CommandLineTools. May be provided multiple times.",
metavar="ENVVAR",
default=["PATH"],
dest="preserve_environment")
parser.add_argument("--preserve-entire-environment", action="store_true",
help="Preserve entire parent environment when running CommandLineTools.",
default=False,
dest="preserve_entire_environment")
exgroup = parser.add_mutually_exclusive_group()
exgroup.add_argument("--rm-container", action="store_true", default=True,
help="Delete Docker container used by jobs after they exit (default)",
dest="rm_container")
exgroup.add_argument("--leave-container", action="store_false",
default=True, help="Do not delete Docker container used by jobs after they exit",
dest="rm_container")
cidgroup = parser.add_argument_group("Options for recording the Docker "
"container identifier into a file")
cidgroup.add_argument("--record-container-id", action="store_true",
default=False,
help="If enabled, store the Docker container ID into a file. "
"See --cidfile-dir to specify the directory.",
dest="record_container_id")
cidgroup.add_argument("--cidfile-dir", type=Text,
help="Directory for storing the Docker container ID file. "
"The default is the current directory",
default="",
dest="cidfile_dir")
cidgroup.add_argument("--cidfile-prefix", type=Text,
help="Specify a prefix to the container ID filename. "
"Final file name will be followed by a timestamp. "
"The default is no prefix.",
default="",
dest="cidfile_prefix")
parser.add_argument("--tmpdir-prefix", type=Text,
help="Path prefix for temporary directories",
default="tmp")
exgroup = parser.add_mutually_exclusive_group()
exgroup.add_argument("--tmp-outdir-prefix", type=Text,
help="Path prefix for intermediate output directories",
default="tmp")
exgroup.add_argument("--cachedir", type=Text, default="",
help="Directory to cache intermediate workflow outputs to avoid recomputing steps.")
exgroup = parser.add_mutually_exclusive_group()
exgroup.add_argument("--rm-tmpdir", action="store_true", default=True,
help="Delete intermediate temporary directories (default)",
dest="rm_tmpdir")
exgroup.add_argument("--leave-tmpdir", action="store_false",
default=True, help="Do not delete intermediate temporary directories",
dest="rm_tmpdir")
exgroup = parser.add_mutually_exclusive_group()
exgroup.add_argument("--move-outputs", action="store_const", const="move", default="move",
help="Move output files to the workflow output directory and delete intermediate output directories (default).",
dest="move_outputs")
exgroup.add_argument("--leave-outputs", action="store_const", const="leave", default="move",
help="Leave output files in intermediate output directories.",
dest="move_outputs")
exgroup.add_argument("--copy-outputs", action="store_const", const="copy", default="move",
help="Copy output files to the workflow output directory, don't delete intermediate output directories.",
dest="move_outputs")
exgroup = parser.add_mutually_exclusive_group()
exgroup.add_argument("--enable-pull", default=True, action="store_true",
help="Try to pull Docker images", dest="enable_pull")
exgroup.add_argument("--disable-pull", default=True, action="store_false",
help="Do not try to pull Docker images", dest="enable_pull")
parser.add_argument("--rdf-serializer",
help="Output RDF serialization format used by --print-rdf (one of turtle (default), n3, nt, xml)",
default="turtle")
parser.add_argument("--eval-timeout",
help="Time to wait for a Javascript expression to evaluate before giving an error, default 20s.",
type=float,
default=20)
exgroup = parser.add_mutually_exclusive_group()
exgroup.add_argument("--print-rdf", action="store_true",
help="Print corresponding RDF graph for workflow and exit")
exgroup.add_argument("--print-dot", action="store_true",
help="Print workflow visualization in graphviz format and exit")
exgroup.add_argument("--print-pre", action="store_true", help="Print CWL document after preprocessing.")
exgroup.add_argument("--print-deps", action="store_true", help="Print CWL document dependencies.")
exgroup.add_argument("--print-input-deps", action="store_true", help="Print input object document dependencies.")
exgroup.add_argument("--pack", action="store_true", help="Combine components into single document and print.")
exgroup.add_argument("--version", action="store_true", help="Print version and exit")
exgroup.add_argument("--validate", action="store_true", help="Validate CWL document only.")
exgroup.add_argument("--print-supported-versions", action="store_true", help="Print supported CWL specs.")
exgroup = parser.add_mutually_exclusive_group()
exgroup.add_argument("--strict", action="store_true",
help="Strict validation (unrecognized or out of place fields are error)",
default=True, dest="strict")
exgroup.add_argument("--non-strict", action="store_false", help="Lenient validation (ignore unrecognized fields)",
default=True, dest="strict")
parser.add_argument("--skip-schemas", action="store_true",
help="Skip loading of schemas", default=True, dest="skip_schemas")
exgroup = parser.add_mutually_exclusive_group()
exgroup.add_argument("--verbose", action="store_true", help="Default logging")
exgroup.add_argument("--quiet", action="store_true", help="Only print warnings and errors.")
exgroup.add_argument("--debug", action="store_true", help="Print even more logging")
parser.add_argument("--timestamps", action="store_true", help="Add "
"timestamps to the errors, warnings, and "
"notifications.")
parser.add_argument("--js-console", action="store_true", help="Enable javascript console output")
parser.add_argument("--user-space-docker-cmd",
help="(Linux/OS X only) Specify a user space docker "
"command (like udocker or dx-docker) that will be "
"used to call 'pull' and 'run'")
dependency_resolvers_configuration_help = argparse.SUPPRESS
dependencies_directory_help = argparse.SUPPRESS
use_biocontainers_help = argparse.SUPPRESS
conda_dependencies = argparse.SUPPRESS
if SOFTWARE_REQUIREMENTS_ENABLED:
dependency_resolvers_configuration_help = "Dependency resolver configuration file describing how to adapt 'SoftwareRequirement' packages to current system."
dependencies_directory_help = "Defaut root directory used by dependency resolvers configuration."
use_biocontainers_help = "Use biocontainers for tools without an explicitly annotated Docker container."
conda_dependencies = "Short cut to use Conda to resolve 'SoftwareRequirement' packages."
parser.add_argument("--beta-dependency-resolvers-configuration", default=None, help=dependency_resolvers_configuration_help)
parser.add_argument("--beta-dependencies-directory", default=None, help=dependencies_directory_help)
parser.add_argument("--beta-use-biocontainers", default=None, help=use_biocontainers_help, action="store_true")
parser.add_argument("--beta-conda-dependencies", default=None, help=conda_dependencies, action="store_true")
parser.add_argument("--tool-help", action="store_true", help="Print command line help for tool")
parser.add_argument("--relative-deps", choices=['primary', 'cwd'],
default="primary", help="When using --print-deps, print paths "
"relative to primary file or current working directory.")
parser.add_argument("--enable-dev", action="store_true",
help="Enable loading and running development versions "
"of CWL spec.", default=False)
parser.add_argument("--enable-ext", action="store_true",
help="Enable loading and running cwltool extensions "
"to CWL spec.", default=False)
parser.add_argument("--default-container",
help="Specify a default docker container that will be used if the workflow fails to specify one.")
parser.add_argument("--no-match-user", action="store_true",
help="Disable passing the current uid to 'docker run --user`")
parser.add_argument("--disable-net", action="store_true",
help="Use docker's default networking for containers;"
" the default is to enable networking.")
parser.add_argument("--custom-net", type=Text,
help="Will be passed to `docker run` as the '--net' "
"parameter. Implies '--enable-net'.")
exgroup = parser.add_mutually_exclusive_group()
exgroup.add_argument("--enable-ga4gh-tool-registry", action="store_true", help="Enable resolution using GA4GH tool registry API",
dest="enable_ga4gh_tool_registry", default=True)
exgroup.add_argument("--disable-ga4gh-tool-registry", action="store_false", help="Disable resolution using GA4GH tool registry API",
dest="enable_ga4gh_tool_registry", default=True)
parser.add_argument("--add-ga4gh-tool-registry", action="append", help="Add a GA4GH tool registry endpoint to use for resolution, default %s" % ga4gh_tool_registries,
dest="ga4gh_tool_registries", default=[])
parser.add_argument("--on-error",
help="Desired workflow behavior when a step fails. One of 'stop' or 'continue'. "
"Default is 'stop'.", default="stop", choices=("stop", "continue"))
exgroup = parser.add_mutually_exclusive_group()
exgroup.add_argument("--compute-checksum", action="store_true", default=True,
help="Compute checksum of contents while collecting outputs",
dest="compute_checksum")
exgroup.add_argument("--no-compute-checksum", action="store_false",
help="Do not compute checksum of contents while collecting outputs",
dest="compute_checksum")
parser.add_argument("--relax-path-checks", action="store_true",
default=False, help="Relax requirements on path names to permit "
"spaces and hash characters.", dest="relax_path_checks")
exgroup.add_argument("--make-template", action="store_true",
help="Generate a template input object")
parser.add_argument("--force-docker-pull", action="store_true",
default=False, help="Pull latest docker image even if"
" it is locally present", dest="force_docker_pull")
parser.add_argument("--no-read-only", action="store_true",
default=False, help="Do not set root directory in the"
" container as read-only", dest="no_read_only")
parser.add_argument("--overrides", type=str,
default=None, help="Read process requirement overrides from file.")
parser.add_argument("workflow", type=Text, nargs="?", default=None)
parser.add_argument("job_order", nargs=argparse.REMAINDER)
return parser
def get_default_args():
# type: () -> Dict[str, Any]
"""
Get default values of cwltool's command line options
"""
ap = arg_parser()
args = ap.parse_args([])
return vars(args)
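# Illustrative sketch (not part of this module): an embedding application can
# start from cwltool's defaults and adjust options; the keys mirror the
# dest= names declared in arg_parser() above.
def _example_default_args():
    defaults = get_default_args()
    defaults["debug"] = True  # e.g. ask for verbose logging
    return defaults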
class FSAction(argparse.Action):
objclass = None # type: Text
def __init__(self, option_strings, dest, nargs=None, **kwargs):
# type: (List[Text], Text, Any, **Any) -> None
if nargs is not None:
raise ValueError("nargs not allowed")
super(FSAction, self).__init__(option_strings, dest, **kwargs)
def __call__(self, parser, namespace, values, option_string=None):
# type: (argparse.ArgumentParser, argparse.Namespace, Union[AnyStr, Sequence[Any], None], AnyStr) -> None
setattr(namespace,
self.dest, # type: ignore
{"class": self.objclass,
"location": file_uri(str(os.path.abspath(cast(AnyStr, values))))})
class FSAppendAction(argparse.Action):
objclass = None # type: Text
def __init__(self, option_strings, dest, nargs=None, **kwargs):
# type: (List[Text], Text, Any, **Any) -> None
if nargs is not None:
raise ValueError("nargs not allowed")
super(FSAppendAction, self).__init__(option_strings, dest, **kwargs)
def __call__(self, parser, namespace, values, option_string=None):
# type: (argparse.ArgumentParser, argparse.Namespace, Union[AnyStr, Sequence[Any], None], AnyStr) -> None
g = getattr(namespace,
self.dest # type: ignore
)
if not g:
g = []
setattr(namespace,
self.dest, # type: ignore
g)
g.append(
{"class": self.objclass,
"location": file_uri(str(os.path.abspath(cast(AnyStr, values))))})
class FileAction(FSAction):
objclass = "File"
class DirectoryAction(FSAction):
objclass = "Directory"
class FileAppendAction(FSAppendAction):
objclass = "File"
class DirectoryAppendAction(FSAppendAction):
objclass = "Directory"
def add_argument(toolparser, name, inptype, records, description="",
default=None):
# type: (argparse.ArgumentParser, Text, Any, List[Text], Text, Any) -> None
if len(name) == 1:
flag = "-"
else:
flag = "--"
required = True
if isinstance(inptype, list):
if inptype[0] == "null":
required = False
if len(inptype) == 2:
inptype = inptype[1]
else:
_logger.debug(u"Can't make command line argument from %s", inptype)
return None
ahelp = description.replace("%", "%%")
action = None # type: Union[argparse.Action, Text]
atype = None # type: Any
if inptype == "File":
action = cast(argparse.Action, FileAction)
elif inptype == "Directory":
action = cast(argparse.Action, DirectoryAction)
elif isinstance(inptype, dict) and inptype["type"] == "array":
if inptype["items"] == "File":
action = cast(argparse.Action, FileAppendAction)
elif inptype["items"] == "Directory":
action = cast(argparse.Action, DirectoryAppendAction)
else:
action = "append"
elif isinstance(inptype, dict) and inptype["type"] == "enum":
atype = Text
elif isinstance(inptype, dict) and inptype["type"] == "record":
records.append(name)
for field in inptype['fields']:
fieldname = name + "." + shortname(field['name'])
fieldtype = field['type']
fielddescription = field.get("doc", "")
add_argument(
toolparser, fieldname, fieldtype, records,
fielddescription)
return
if inptype == "string":
atype = Text
elif inptype == "int":
atype = int
elif inptype == "double":
atype = float
elif inptype == "float":
atype = float
elif inptype == "boolean":
action = "store_true"
if default:
required = False
if not atype and not action:
_logger.debug(u"Can't make command line argument from %s", inptype)
return None
if inptype != "boolean":
typekw = {'type': atype}
else:
typekw = {}
toolparser.add_argument( # type: ignore
flag + name, required=required, help=ahelp, action=action,
default=default, **typekw)
def generate_parser(toolparser, tool, namemap, records):
# type: (argparse.ArgumentParser, Process, Dict[Text, Text], List[Text]) -> argparse.ArgumentParser
toolparser.add_argument("job_order", nargs="?", help="Job input json file")
namemap["job_order"] = "job_order"
for inp in tool.tool["inputs"]:
name = shortname(inp["id"])
namemap[name.replace("-", "_")] = name
inptype = inp["type"]
description = inp.get("doc", "")
default = inp.get("default", None)
add_argument(toolparser, name, inptype, records, description, default)
return toolparser
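# Illustrative sketch (assumes this release's API; "echo.cwl" is a
# hypothetical CommandLineTool): build a tool-specific argument parser.
def _example_generate_parser():
    from cwltool.load_tool import load_tool
    from cwltool.workflow import defaultMakeTool
    tool = load_tool("echo.cwl", defaultMakeTool)
    namemap = {}  # type: Dict[Text, Text]
    records = []  # type: List[Text]
    return generate_parser(
        argparse.ArgumentParser(prog="echo.cwl"), tool, namemap, records)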
......@@ -230,7 +230,7 @@ class Builder(object):
return [prefix] if prefix else []
elif value is True and prefix:
return [prefix]
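# New case below: a bare True with no prefix contributes nothing to the
# command line, so it is now grouped with False/None.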
elif value is False or value is None:
elif value is False or value is None or (value is True and not prefix):
return []
else:
l = [value]
......@@ -251,7 +251,8 @@ class Builder(object):
return {k: self.do_eval(v, context, pull_image, recursive) for k, v in iteritems(ex)}
if isinstance(ex, list):
return [self.do_eval(v, context, pull_image, recursive) for v in ex]
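# Skip evaluating string expressions that reference "self" when no
# context object has been provided yet; yield None instead of failing.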
if context is None and type(ex) is str and "self" in ex:
return None
return expression.do_eval(ex, self.job, self.requirements,
self.outdir, self.tmpdir,
self.resources,
......
......@@ -4,6 +4,7 @@ from typing import Callable as tCallable
from typing import Any, Dict, Text, Tuple, Union
from . import load_tool, main, workflow
from .argparser import get_default_args
from .process import Process
......@@ -41,7 +42,13 @@ class Factory(object):
# type: (...) -> None
self.makeTool = makeTool
self.executor = executor
self.execkwargs = execkwargs
kwargs = get_default_args()
kwargs.pop("job_order")
kwargs.pop("workflow")
kwargs.pop("outdir")
kwargs.update(execkwargs)
self.execkwargs = kwargs
def make(self, cwl):
"""Instantiate a CWL object from a CWl document."""
......
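# Illustrative usage of the factory above (a sketch; "echo.cwl" and its
# "inp" input are hypothetical). Any parser default collected in __init__
# can be overridden through execkwargs:
def _example_factory():
    import cwltool.factory
    fac = cwltool.factory.Factory(on_error="continue")
    echo = fac.make("echo.cwl")
    return echo(inp="foo")  # returns the tool's CWL output object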
from __future__ import absolute_import
import codecs
import functools
import io
import json
......@@ -10,6 +11,7 @@ import stat
import subprocess
import sys
import tempfile
import datetime
from io import open
from typing import (IO, Any, Callable, Dict, Iterable, List, MutableMapping, Text,
Tuple, Union, cast)
......@@ -340,7 +342,7 @@ class DockerCommandLineJob(JobBase):
ensure_writable(host_outdir_tgt)
elif vol.type == "WritableDirectory":
if vol.resolved.startswith("_:"):
os.makedirs(vol.target, 0o0755)
os.makedirs(host_outdir_tgt, 0o0755)
else:
if self.inplace_update:
runtime.append(u"--volume=%s:%s:rw" % (
......@@ -362,8 +364,10 @@ class DockerCommandLineJob(JobBase):
docker_windows_path_adjust(vol.target)))
def run(self, pull_image=True, rm_container=True,
record_container_id=False, cidfile_dir="",
cidfile_prefix="",
rm_tmpdir=True, move_outputs="move", **kwargs):
# type: (bool, bool, bool, Text, **Any) -> None
# type: (bool, bool, bool, Text, Text, bool, Text, **Any) -> None
(docker_req, docker_is_req) = get_feature(self, "DockerRequirement")
......@@ -463,6 +467,26 @@ class DockerCommandLineJob(JobBase):
# runtime.append("--env=HOME=/tmp")
runtime.append(u"--env=HOME=%s" % self.builder.outdir)
# add parameters to docker to write a container ID file
if record_container_id:
if cidfile_dir != "":
if not os.path.isdir(cidfile_dir):
_logger.error("--cidfile-dir %s error:\n%s", cidfile_dir,
"directory doesn't exist or is not a directory, "
"please check it first")
exit(2)
else:
cidfile_dir = os.getcwd()
cidfile_name = datetime.datetime.now().strftime("%Y%m%d%H%M%S-%f")+".cid"
if cidfile_prefix != "":
cidfile_name = str(cidfile_prefix + "-" + cidfile_name)
cidfile_path = os.path.join(cidfile_dir, cidfile_name)
runtime.append(u"--cidfile=%s" % cidfile_path)
for t, v in self.environment.items():
runtime.append(u"--env=%s=%s" % (t, v))
......@@ -549,14 +573,14 @@ def _job_popen(
stdin_path=stdin_path,
)
with open(os.path.join(job_dir, "job.json"), "wb") as f:
json.dump(job_description, f)
json.dump(job_description, codecs.getwriter('utf-8')(f), ensure_ascii=False) # type: ignore
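# codecs.getwriter wraps the binary handle so json.dump emits UTF-8 text
# on both Python 2 and Python 3.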
try:
job_script = os.path.join(job_dir, "run_job.bash")
with open(job_script, "wb") as f:
f.write(job_script_contents.encode('utf-8'))
job_run = os.path.join(job_dir, "run_job.py")
with open(job_run, "wb") as f:
f.write(PYTHON_RUN_SCRIPT)
f.write(PYTHON_RUN_SCRIPT.encode('utf-8'))
sp = subprocess.Popen(
["bash", job_script.encode("utf-8")],
shell=False,
......
"""Loads a CWL document."""
from __future__ import absolute_import
# pylint: disable=unused-import
"""Loads a CWL document."""
import logging
import os
......@@ -9,18 +9,19 @@ import uuid
import hashlib
import json
import copy
from typing import Any, Callable, Dict, List, Text, Tuple, Union, cast, Iterable
from typing import (Any, Callable, Dict, Iterable, List, Mapping, Optional,
Text, Tuple, Union, cast)
import requests.sessions
from six import itervalues, string_types
from six.moves import urllib
import schema_salad.schema as schema
from avro.schema import Names
from ruamel.yaml.comments import CommentedMap, CommentedSeq
from schema_salad.ref_resolver import Fetcher, Loader, file_uri
from schema_salad.ref_resolver import ContextType, Fetcher, Loader, file_uri
from schema_salad.sourceline import cmap
from schema_salad.validate import ValidationException
from six.moves import urllib
from . import process, update
from .errors import WorkflowException
......@@ -28,7 +29,6 @@ from .process import Process, shortname, get_schema
from .update import ALLUPDATES
_logger = logging.getLogger("cwltool")
jobloaderctx = {
u"cwl": "https://w3id.org/cwl/cwl#",
u"cwltool": "http://commonwl.org/cwltool#",
......@@ -36,7 +36,7 @@ jobloaderctx = {
u"location": {u"@type": u"@id"},
u"format": {u"@type": u"@id"},
u"id": u"@id"
}
} # type: ContextType
overrides_ctx = {
......@@ -45,21 +45,33 @@ overrides_ctx = {
u"overrides": {
"@id": "cwltool:overrides",
"mapSubject": "overrideTarget",
"mapPredicate": "override"
},
u"override": {
"@id": "cwltool:override",
"requirements": {
"@id": "https://w3id.org/cwl/cwl#requirements",
"mapSubject": "class"
}
} # type: Dict[Text, Union[Dict[Any, Any], Text, Iterable[Text]]]
} # type: ContextType
FetcherConstructorType = Callable[[Dict[Text, Union[Text, bool]],
requests.sessions.Session], Fetcher]
loaders = {} # type: Dict[FetcherConstructorType, Loader]
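# Cache one Loader per fetcher_constructor so repeated document loads share
# a single resolver index instead of rebuilding it each time.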
def default_loader(fetcher_constructor):
# type: (Optional[FetcherConstructorType]) -> Loader
if fetcher_constructor in loaders:
return loaders[fetcher_constructor]
else:
loader = Loader(jobloaderctx, fetcher_constructor=fetcher_constructor)
loaders[fetcher_constructor] = loader
return loader
def resolve_tool_uri(argsworkflow, # type: Text
resolver=None, # type: Callable[[Loader, Union[Text, Dict[Text, Any]]], Text]
fetcher_constructor=None,
# type: Callable[[Dict[Text, Text], requests.sessions.Session], Fetcher]
fetcher_constructor=None, # type: FetcherConstructorType
document_loader=None # type: Loader
):
# type: (...) -> Tuple[Text, Text]
): # type: (...) -> Tuple[Text, Text]
uri = None # type: Text
split = urllib.parse.urlsplit(argsworkflow)
......@@ -70,7 +82,7 @@ def resolve_tool_uri(argsworkflow, # type: Text
uri = file_uri(str(os.path.abspath(argsworkflow)))
elif resolver:
if document_loader is None:
document_loader = Loader(jobloaderctx, fetcher_constructor=fetcher_constructor) # type: ignore
document_loader = default_loader(fetcher_constructor) # type: ignore
uri = resolver(document_loader, argsworkflow)
if uri is None:
......@@ -85,18 +97,17 @@ def resolve_tool_uri(argsworkflow, # type: Text
def fetch_document(argsworkflow, # type: Union[Text, Dict[Text, Any]]
resolver=None, # type: Callable[[Loader, Union[Text, Dict[Text, Any]]], Text]
fetcher_constructor=None
# type: Callable[[Dict[Text, Text], requests.sessions.Session], Fetcher]
):
# type: (...) -> Tuple[Loader, CommentedMap, Text]
fetcher_constructor=None # type: FetcherConstructorType
): # type: (...) -> Tuple[Loader, CommentedMap, Text]
"""Retrieve a CWL document."""
document_loader = Loader(jobloaderctx, fetcher_constructor=fetcher_constructor) # type: ignore
document_loader = default_loader(fetcher_constructor) # type: ignore
uri = None # type: Text
workflowobj = None # type: CommentedMap
if isinstance(argsworkflow, string_types):
uri, fileuri = resolve_tool_uri(argsworkflow, resolver=resolver, document_loader=document_loader)
uri, fileuri = resolve_tool_uri(argsworkflow, resolver=resolver,
document_loader=document_loader)
workflowobj = document_loader.fetch(fileuri)
elif isinstance(argsworkflow, dict):
uri = "#" + Text(id(argsworkflow))
......@@ -126,7 +137,7 @@ def _convert_stdstreams_to_files(workflowobj):
sort_keys=True).encode('utf-8')).hexdigest())
workflowobj[streamtype] = filename
out['type'] = 'File'
out['outputBinding'] = {'glob': filename}
out['outputBinding'] = cmap({'glob': filename})
for inp in workflowobj.get('inputs', []):
if inp.get('type') == 'stdin':
if 'inputBinding' in inp:
......@@ -170,25 +181,25 @@ def validate_document(document_loader, # type: Loader
enable_dev=False, # type: bool
strict=True, # type: bool
preprocess_only=False, # type: bool
fetcher_constructor=None,
skip_schemas=None,
# type: Callable[[Dict[Text, Text], requests.sessions.Session], Fetcher]
overrides=None # type: List[Dict]
fetcher_constructor=None, # type: FetcherConstructorType
skip_schemas=None, # type: bool
overrides=None, # type: List[Dict]
metadata=None, # type: Optional[Dict]
):
# type: (...) -> Tuple[Loader, Names, Union[Dict[Text, Any], List[Dict[Text, Any]]], Dict[Text, Any], Text]
"""Validate a CWL document."""
if isinstance(workflowobj, list):
workflowobj = {
workflowobj = cmap({
"$graph": workflowobj
}
}, fn=uri)
if not isinstance(workflowobj, dict):
raise ValueError("workflowjobj must be a dict, got '%s': %s" % (type(workflowobj), workflowobj))
jobobj = None
if "cwl:tool" in workflowobj:
job_loader = Loader(jobloaderctx, fetcher_constructor=fetcher_constructor) # type: ignore
job_loader = default_loader(fetcher_constructor) # type: ignore
jobobj, _ = job_loader.resolve_all(workflowobj, uri)
uri = urllib.parse.urljoin(uri, workflowobj["https://w3id.org/cwl/cwl#tool"])
del cast(dict, jobobj)["https://w3id.org/cwl/cwl#tool"]
......@@ -200,8 +211,14 @@ def validate_document(document_loader, # type: Loader
workflowobj = fetch_document(uri, fetcher_constructor=fetcher_constructor)[1]
fileuri = urllib.parse.urldefrag(uri)[0]
if "cwlVersion" not in workflowobj:
if metadata and 'cwlVersion' in metadata:
workflowobj['cwlVersion'] = metadata['cwlVersion']
else:
raise ValidationException("No cwlVersion found."
"Use the following syntax in your CWL document to declare "
"the version: cwlVersion: <version>")
if "cwlVersion" in workflowobj:
if not isinstance(workflowobj["cwlVersion"], (str, Text)):
raise Exception("'cwlVersion' must be a string, got %s" % type(workflowobj["cwlVersion"]))
# strip out version
......@@ -213,9 +230,6 @@ def validate_document(document_loader, # type: Loader
versions = list(ALLUPDATES) # ALLUPDATES is a dict
versions.sort()
raise ValidationException("'cwlVersion' not valid. Supported CWL versions are: \n{}".format("\n".join(versions)))
else:
raise ValidationException("No cwlVersion found."
"Use the following syntax in your CWL workflow to declare version: cwlVersion: <version>")
if workflowobj["cwlVersion"] == "draft-2":
workflowobj = cast(CommentedMap, cmap(update._draft2toDraft3dev1(
......@@ -238,36 +252,36 @@ def validate_document(document_loader, # type: Loader
_add_blank_ids(workflowobj)
workflowobj["id"] = fileuri
processobj, metadata = document_loader.resolve_all(workflowobj, fileuri)
processobj, new_metadata = document_loader.resolve_all(workflowobj, fileuri)
if not isinstance(processobj, (CommentedMap, CommentedSeq)):
raise ValidationException("Workflow must be a dict or list.")
if not metadata:
if not new_metadata:
if not isinstance(processobj, dict):
raise ValidationException("Draft-2 workflows must be a dict.")
metadata = cast(CommentedMap, cmap({"$namespaces": processobj.get("$namespaces", {}),
new_metadata = cast(CommentedMap, cmap(
{"$namespaces": processobj.get("$namespaces", {}),
"$schemas": processobj.get("$schemas", []),
"cwlVersion": processobj["cwlVersion"]},
fn=fileuri))
"cwlVersion": processobj["cwlVersion"]}, fn=fileuri))
_convert_stdstreams_to_files(workflowobj)
if preprocess_only:
return document_loader, avsc_names, processobj, metadata, uri
return document_loader, avsc_names, processobj, new_metadata, uri
schema.validate_doc(avsc_names, processobj, document_loader, strict)
if metadata.get("cwlVersion") != update.LATEST:
if new_metadata.get("cwlVersion") != update.LATEST:
processobj = cast(CommentedMap, cmap(update.update(
processobj, document_loader, fileuri, enable_dev, metadata)))
processobj, document_loader, fileuri, enable_dev, new_metadata)))
if jobobj:
metadata[u"cwl:defaults"] = jobobj
new_metadata[u"cwl:defaults"] = jobobj
if overrides:
metadata[u"cwltool:overrides"] = overrides
new_metadata[u"cwltool:overrides"] = overrides
return document_loader, avsc_names, processobj, metadata, uri
return document_loader, avsc_names, processobj, new_metadata, uri
def make_tool(document_loader, # type: Loader
......@@ -322,7 +336,7 @@ def load_tool(argsworkflow, # type: Union[Text, Dict[Text, Any]]
enable_dev=False, # type: bool
strict=True, # type: bool
resolver=None, # type: Callable[[Loader, Union[Text, Dict[Text, Any]]], Text]
fetcher_constructor=None, # type: Callable[[Dict[Text, Text], requests.sessions.Session], Fetcher]
fetcher_constructor=None, # type: FetcherConstructorType
overrides=None
):
# type: (...) -> Process
......@@ -332,7 +346,8 @@ def load_tool(argsworkflow, # type: Union[Text, Dict[Text, Any]]
document_loader, avsc_names, processobj, metadata, uri = validate_document(
document_loader, workflowobj, uri, enable_dev=enable_dev,
strict=strict, fetcher_constructor=fetcher_constructor,
overrides=overrides)
overrides=overrides, metadata=kwargs.get('metadata', None)
if kwargs else None)
return make_tool(document_loader, avsc_names, metadata, uri,
makeTool, kwargs if kwargs else {})
......
......
......@@ -95,8 +95,8 @@ def import_embed(d, seen):
import_embed(d[k], seen)
def pack(document_loader, processobj, uri, metadata):
# type: (Loader, Union[Dict[Text, Any], List[Dict[Text, Any]]], Text, Dict[Text, Text]) -> Dict[Text, Any]
def pack(document_loader, processobj, uri, metadata, rewrite_out=None):
# type: (Loader, Union[Dict[Text, Any], List[Dict[Text, Any]]], Text, Dict[Text, Text], Dict[Text, Text]) -> Dict[Text, Any]
document_loader = SubLoader(document_loader)
document_loader.idx = {}
......@@ -114,15 +114,20 @@ def pack(document_loader, processobj, uri, metadata):
# type: (Text, Text) -> Union[Dict, List, Text]
return document_loader.resolve_ref(u, base_url=b)[0]
ids = set() # type: Set[Text]
find_ids(processobj, ids)
runs = {uri}
find_run(processobj, loadref, runs)
ids = set() # type: Set[Text]
for f in runs:
find_ids(document_loader.resolve_ref(f)[0], ids)
names = set() # type: Set[Text]
if rewrite_out is None:
rewrite = {} # type: Dict[Text, Text]
else:
rewrite = rewrite_out
mainpath, _ = urllib.parse.urldefrag(uri)
......@@ -131,8 +136,10 @@ def pack(document_loader, processobj, uri, metadata):
if r == mainuri:
rewrite[r] = "#main"
elif r.startswith(mainuri) and r[len(mainuri)] in ("#", "/"):
path, frag = urllib.parse.urldefrag(r)
rewrite[r] = "#"+frag
if r[len(mainuri):].startswith("#main/"):
rewrite[r] = "#" + uniquename(r[len(mainuri)+1:], names)
else:
rewrite[r] = "#" + uniquename("main/"+r[len(mainuri)+1:], names)
else:
path, frag = urllib.parse.urldefrag(r)
if path == mainpath:
......@@ -144,7 +151,6 @@ def pack(document_loader, processobj, uri, metadata):
sortedids = sorted(ids)
for r in sortedids:
if r in document_loader.idx:
rewrite_id(r, uri)
packed = {"$graph": [], "cwlVersion": metadata["cwlVersion"]
......
......@@ -422,13 +422,13 @@ def avroize_type(field_type, name_prefix=""):
avroize_type(field_type["items"], name_prefix)
return field_type
def get_overrides(overrides, toolid): # type: (List[Dict[Text, Any]], Text) -> List[Dict[Text, Any]]
req = [] # type: List[Dict[Text, Any]]
def get_overrides(overrides, toolid): # type: (List[Dict[Text, Any]], Text) -> Dict[Text, Any]
req = {} # type: Dict[Text, Any]
if not isinstance(overrides, list):
raise validate.ValidationException("Expected overrides to be a list, but was %s" % type(overrides))
for ov in overrides:
if ov["overrideTarget"] == toolid:
req.extend(ov["override"])
req.update(ov)
return req
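# Illustrative sketch of the new dict-based merge (identifiers hypothetical):
def _example_get_overrides():
    overrides = [{"overrideTarget": "echo.cwl",
                  "requirements": [{"class": "EnvVarRequirement",
                                    "envDef": {"MESSAGE": "override_value"}}]}]
    # Callers below extract the "requirements" list from the merged dict.
    return get_overrides(overrides, "echo.cwl").get("requirements", [])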
class Process(six.with_metaclass(abc.ABCMeta, object)):
......@@ -467,7 +467,7 @@ class Process(six.with_metaclass(abc.ABCMeta, object)):
self.tool = toolpath_object
self.requirements = (kwargs.get("requirements", []) +
self.tool.get("requirements", []) +
get_overrides(kwargs.get("overrides", []), self.tool["id"]))
get_overrides(kwargs.get("overrides", []), self.tool["id"]).get("requirements", []))
self.hints = kwargs.get("hints", []) + self.tool.get("hints", [])
self.formatgraph = None # type: Graph
if "loader" in kwargs:
......@@ -608,6 +608,7 @@ class Process(six.with_metaclass(abc.ABCMeta, object)):
builder.stagedir = builder.fs_access.docker_compatible_realpath(kwargs.get("docker_stagedir") or "/var/lib/cwl")
else:
builder.outdir = builder.fs_access.realpath(kwargs.get("outdir") or tempfile.mkdtemp())
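# Workflows are not executed directly, so skip creating a tmpdir and
# stagedir that would otherwise be left behind unused.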
if self.tool[u"class"] != 'Workflow':
builder.tmpdir = builder.fs_access.realpath(kwargs.get("tmpdir") or tempfile.mkdtemp())
builder.stagedir = builder.fs_access.realpath(kwargs.get("stagedir") or tempfile.mkdtemp())
......
......@@ -70,11 +70,15 @@ def match_types(sinktype, src, iid, inputobj, linkMerge, valueFrom):
elif isinstance(src.parameter["type"], list):
# Source is union type
# Check that at least one source type is compatible with the sink.
for st in src.parameter["type"]:
srccopy = copy.deepcopy(src)
srccopy.parameter["type"] = st
if match_types(sinktype, srccopy, iid, inputobj, linkMerge, valueFrom):
original_types = src.parameter["type"]
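# Mutate src.parameter["type"] in place and restore it afterwards,
# avoiding the per-member deepcopy the previous code performed.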
for source_type in original_types:
src.parameter["type"] = source_type
match = match_types(
sinktype, src, iid, inputobj, linkMerge, valueFrom)
if match:
src.parameter["type"] = original_types
return True
src.parameter["type"] = original_types
return False
elif linkMerge:
if iid not in inputobj:
......@@ -206,13 +210,13 @@ def object_from_state(state, parms, frag_only, supportsMultipleInput, sourceFiel
if frag_only:
iid = shortname(iid)
if sourceField in inp:
if (isinstance(inp[sourceField], list) and not
supportsMultipleInput):
connections = aslist(inp[sourceField])
if (len(connections) > 1 and
not supportsMultipleInput):
raise WorkflowException(
"Workflow contains multiple inbound links to a single "
"parameter but MultipleInputFeatureRequirement is not "
"declared.")
connections = aslist(inp[sourceField])
for src in connections:
if src in state and state[src] is not None and (state[src].success == "success" or incomplete):
if not match_types(
......@@ -669,7 +673,7 @@ class WorkflowStep(Process):
kwargs["requirements"] = (kwargs.get("requirements", []) +
toolpath_object.get("requirements", []) +
get_overrides(kwargs.get("overrides", []), self.id))
get_overrides(kwargs.get("overrides", []), self.id).get("requirements", []))
kwargs["hints"] = kwargs.get("hints", []) + toolpath_object.get("hints", [])
try:
......@@ -706,7 +710,14 @@ class WorkflowStep(Process):
for tool_entry in self.embedded_tool.tool[toolfield]:
frag = shortname(tool_entry["id"])
if frag == shortinputid:
# If the step itself provides a default for a parameter, we do not
# want the tool's default to override it.
step_default = None
if "default" in param and "default" in tool_entry:
step_default = param["default"]
param.update(tool_entry)
if step_default is not None:
param["default"] = step_default
found = True
bound.add(frag)
break
......
cwltool (1.0.20180211183944-1) unstable; urgency=medium
* New upstream version.
* Expose the unit tests to autopkgtest
* Mark all *.cwl files as executable
-- Michael R. Crusoe <michael.crusoe@gmail.com> Sun, 11 Feb 2018 11:00:57 -0800
cwltool (1.0.20171221100033-1) unstable; urgency=medium
* Team upload.
......
debian/cwltool.1
......@@ -20,7 +20,7 @@ Build-Depends: debhelper (>= 10),
python-ruamel.yaml,
python-ruamel.ordereddict,
help2man
Standards-Version: 4.1.2
Standards-Version: 4.1.3
Vcs-Browser: https://anonscm.debian.org/cgit/debian-med/cwltool.git
Vcs-Git: https://anonscm.debian.org/git/debian-med/cwltool.git
Homepage: http://www.commonwl.org
......
Author: Andreas Tille <tille@debian.org>
Last-Update: Sun, 18 Dec 2016 08:31:42 +0100
Description: Ignore cwltest for the moment
See the hint in
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=845982#62
--- cwltool.orig/setup.py
+++ cwltool/setup.py
@@ -49,7 +49,6 @@
'shellescape >= 3.4.1, < 3.5',
'schema-salad >= 2.2.20170111180227, < 3',
'typing >= 3.5.2, < 3.6',
- 'cwltest >= 1.0.20161227194859'
],
test_suite='tests',
tests_require=[],
......@@ -9,10 +9,6 @@ export HOME=$(shell echo $$PWD"/fakehome")
%:
dh $@ --with python2 --buildsystem=pybuild
override_dh_auto_build:
#find cwltool -name "*.py" | xargs sed -i 's/.*typing.*/#&/g'
dh_auto_build
debian/cwltool.1: debian/rules debian/cwltool.help2man
python2 setup.py develop --user
#help2man cwltool/main.py -I debian/cwltool.help2man -N -n "Reference executor for Common Workflow Language" -o debian/cwltool.1
......@@ -20,8 +16,3 @@ debian/cwltool.1: debian/rules debian/cwltool.help2man
override_dh_installman: debian/cwltool.1
dh_installman
override_dh_clean:
dh_clean
rm -f debian/cwltool.1
#find cwltool -name "*.py" | xargs sed -i 's/#\(.*typing.*\)/\1/g'
Tests: run-tests
Depends: @, python-pytest
Restrictions: allow-stderr