[med-svn] [Git][med-team/toil][master] 3 commits: Another adjustment of disk/memory defaults for the autopkgtests.
Michael R. Crusoe (@crusoe)
gitlab at salsa.debian.org
Fri Oct 8 09:58:00 BST 2021
Michael R. Crusoe pushed to branch master at Debian Med / toil
Commits:
d91dd9b3 by Michael R. Crusoe at 2021-10-08T10:29:14+02:00
Another adjustment of disk/memory defaults for the autopkgtests.
- - - - -
154ad342 by Michael R. Crusoe at 2021-10-08T10:29:49+02:00
debian/patches/exit_code_exception: Fix exception checking for exit_code
- - - - -
6a75fb74 by Michael R. Crusoe at 2021-10-08T10:57:25+02:00
routine-update: Ready to upload to unstable
- - - - -
9 changed files:
- debian/README.source
- debian/changelog
- + debian/patches/exit_code_exception
- − debian/patches/proxy
- debian/patches/series
- debian/patches/setting_version.patch
- debian/patches/ship_tests
- debian/patches/test_with_less_memory
- debian/tests/run-unit-tests
Changes:
=====================================
debian/README.source
=====================================
@@ -7,4 +7,7 @@ compatability of toil is recent.
The bulk of the manual testing is currently focused on the toil-cwl-runner
using the docker.io package.
+Maintainers: please update debian/patches/setting_version.patch following the
+instructions within.
+
-- Michael R. Crusoe <crusoe at debian.org> Sun, 13 Jan 2019 23:25:16 -0800
=====================================
debian/changelog
=====================================
@@ -1,3 +1,11 @@
+toil (5.5.0-5) unstable; urgency=medium
+
+ * Another adjustment of disk/memory defaults for the autopkgtests.
+ * debian/patches/exit_code_exception: Fix exception checking for
+ exit_code
+
+ -- Michael R. Crusoe <crusoe at debian.org> Fri, 08 Oct 2021 10:57:25 +0200
+
toil (5.5.0-4) unstable; urgency=medium
* Adjust disk/memory defaults further downward for 32-bit systems.
=====================================
debian/patches/exit_code_exception
=====================================
@@ -0,0 +1,17 @@
+From: Michael R. Crusoe <crusoe at debian.org>
+Subject: Fix exception checking for exit_code
+Forwarded: https://github.com/DataBiosphere/toil/pull/3830
+
+Otherwise an AttributeError is raised instead of the correct exception
+
+--- toil.orig/src/toil/cwl/cwltoil.py
++++ toil/src/toil/cwl/cwltoil.py
+@@ -3401,7 +3401,7 @@
+ except Exception as err:
+ # TODO: We can't import FailedJobsException due to a circular
+ # import but that's what we'd expect here.
+- if getattr(err, "exit_code") == CWL_UNSUPPORTED_REQUIREMENT_EXIT_CODE:
++ if getattr(err, "exit_code", None) == CWL_UNSUPPORTED_REQUIREMENT_EXIT_CODE:
+ # We figured out that we can't support this workflow.
+ logging.error(err)
+ logging.error(
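For context, a minimal standalone sketch (not part of the patch) of why the
default argument to getattr() matters here; the exit-code value and the
exception class below are invented purely for illustration:

    # Without a default, getattr() raises AttributeError for exceptions that
    # do not carry an exit_code attribute; with None as the default the
    # comparison simply evaluates to False instead.
    CWL_UNSUPPORTED_REQUIREMENT_EXIT_CODE = 33  # value assumed for illustration

    def is_unsupported_requirement(err: Exception) -> bool:
        return getattr(err, "exit_code", None) == CWL_UNSUPPORTED_REQUIREMENT_EXIT_CODE

    class FakeFailedJobsException(Exception):  # hypothetical stand-in
        exit_code = CWL_UNSUPPORTED_REQUIREMENT_EXIT_CODE

    print(is_unsupported_requirement(FakeFailedJobsException()))   # True
    print(is_unsupported_requirement(ValueError("no exit_code")))  # False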
=====================================
debian/patches/proxy deleted
=====================================
@@ -1,20 +0,0 @@
-From: Michael R. Crusoe <crusoe at debian.org>
-Subject: cope with an invalid HTTP_PROXY
-Forwarded: https://github.com/DataBiosphere/toil/pull/3447
-
-Debian purposely sets an invalid HTTP_PROXY during some build situations
-Index: toil/src/toil/jobStores/abstractJobStore.py
-===================================================================
---- toil.orig/src/toil/jobStores/abstractJobStore.py
-+++ toil/src/toil/jobStores/abstractJobStore.py
-@@ -54,6 +54,10 @@ class InvalidImportExportUrlException(Ex
- """
- super().__init__("The URL '%s' is invalid." % url.geturl())
-
-+try:
-+ from botocore.exceptions import ProxyConnectionError
-+except ImportError:
-+ ProxyConnectionError = None
-
- class NoSuchJobException(Exception):
- """Indicates that the specified job does not exist."""
=====================================
debian/patches/series
=====================================
@@ -1,3 +1,4 @@
+exit_code_exception
cwl_test_path
spelling
setting_version.patch
@@ -5,6 +6,5 @@ no_galaxy_lib
debianize_docs
soften-mesos-deps
soften-pydocker-dep
-proxy
ship_tests
test_with_less_memory
=====================================
debian/patches/setting_version.patch
=====================================
@@ -6,7 +6,10 @@ Last-Update: 2020-05-07
Generated by running `python version_template.py` from a git checkout of the
source to the tag corresponding to the release
----
+
+As of Toil 5.5.0 the `cwltool_version` field is set in setup.py; please
+copy/update that field manually if it doesn't match the value in setup.py or
+is missing.
--- /dev/null
+++ toil/src/toil/version.py
=====================================
debian/patches/ship_tests
=====================================
@@ -1,6 +1,6 @@
From: Michael R. Crusoe <crusoe at debian.org>
Subject: ship the tests
-Forwarded: not-needed
+Forwarded: https://github.com/DataBiosphere/toil/issues/3823
--- toil.orig/setup.py
+++ toil/setup.py
@@ -140,12 +140,9 @@
=====================================
debian/patches/test_with_less_memory
=====================================
@@ -1,9 +1,20 @@
-Author: Michael R. Crusoe <crusoe at debian.org>
-Description: reduce default resource usage on 32bit systems
-Forwarded: not-needed
+From: Michael R. Crusoe <crusoe at debian.org>
+Subject: Remove use of sys.maxsize
+Forwarded: https://github.com/DataBiosphere/toil/pull/3824
+
+Python integers can be larger than sys.maxsize
+
+And on 32-bit machines the use of sys.maxsize made it so that options.maxMemory
+set a limit of 1.99 GiB of available memory unless overridden;
+likewise, options.maxDisk was limited to 1.99 GiB of available disk space
+unless overridden.
+
+The above caused CWLSmallTests.test_workflow_echo_string to fail in Debian on
+32-bit systems.
+
--- toil.orig/src/toil/common.py
+++ toil/src/toil/common.py
-@@ -20,6 +20,7 @@
+@@ -20,10 +20,11 @@
import tempfile
import time
import uuid
@@ -11,20 +22,140 @@ Forwarded: not-needed
from urllib.parse import urlparse
from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser, _ArgumentGroup
-@@ -99,9 +100,14 @@
- self.statePollingWait: Optional[Union[float, int]] = None # Number of seconds to wait before querying job state
-
- # Resource requirements
-- self.defaultMemory: int = 2147483648
-+ is_64bits = sys.maxsize > 2**32
-+ if is_64bits:
-+ self.defaultMemory = 2147483648
-+ self.defaultDisk = 2147483648
-+ else:
-+ self.defaultMemory = 1073741824
-+ self.defaultDisk = 1073741824
- self.defaultCores: Union[float, int] = 1
-- self.defaultDisk: int = 2147483648
+-from typing import Any, Callable, List, Optional, Tuple, Union
++from typing import Any, Callable, List, Literal, Optional, Tuple, Union
+
+ import requests
+
+@@ -92,7 +93,7 @@
+ self.metrics: bool = False
+
+ # Parameters to limit service jobs, so preventing deadlock scheduling scenarios
+- self.maxPreemptableServiceJobs: int = sys.maxsize
++ self.maxPreemptableServiceJobs: int = 9223372036854775807
+ self.maxServiceJobs: int = sys.maxsize
+ self.deadlockWait: Union[float, int] = 60 # Number of seconds we must be stuck with all services before declaring a deadlock
+ self.deadlockCheckInterval: Union[float, int] = 30 # Minimum polling delay for deadlocks
+@@ -104,9 +105,9 @@
+ self.defaultDisk: int = 2147483648
self.readGlobalFileMutableByDefault: bool = False
self.defaultPreemptable: bool = False
- self.maxCores: int = sys.maxsize
+- self.maxCores: int = sys.maxsize
+- self.maxMemory: int = sys.maxsize
+- self.maxDisk: int = sys.maxsize
++ self.maxCores: int = 9223372036854775807
++ self.maxMemory: int = 9223372036854775807
++ self.maxDisk: int = 9223372036854775807
+
+ # Retrying/rescuing jobs
+ self.retryCount: int = 1
+@@ -1393,7 +1394,7 @@
+ return d
+
+
+-def iC(minValue, maxValue=sys.maxsize):
++def iC(minValue, maxValue=9223372036854775807):
+ # Returns function that checks if a given int is in the given half-open interval
+ assert isinstance(minValue, int) and isinstance(maxValue, int)
+ return lambda x: minValue <= x < maxValue
+--- toil.orig/src/toil/cwl/cwltoil.py
++++ toil/src/toil/cwl/cwltoil.py
+@@ -1618,7 +1618,7 @@
+
+ def __init__(self, cwljob: dict):
+ """Store the dictionary of promises for later resolution."""
+- super(ResolveIndirect, self).__init__(cores=1, memory=1024 ^ 2, disk=0)
++ super(ResolveIndirect, self).__init__(cores=1, memory="1GiB", disk="1MiB")
+ self.cwljob = cwljob
+
+ def run(self, file_store: AbstractFileStore) -> dict:
+@@ -1770,7 +1770,7 @@
+ conditional: Union[Conditional, None] = None,
+ ):
+ """Store our context for later evaluation."""
+- super(CWLJobWrapper, self).__init__(cores=1, memory=1024 * 1024, disk=8 * 1024)
++ super(CWLJobWrapper, self).__init__(cores=1, memory="1GiB", disk="1MiB")
+ self.cwltool = remove_pickle_problems(tool)
+ self.cwljob = cwljob
+ self.runtime_context = runtime_context
+@@ -2125,7 +2125,7 @@
+ conditional: Union[Conditional, None],
+ ):
+ """Store our context for later execution."""
+- super(CWLScatter, self).__init__(cores=1, memory=100 * 1024 ^ 2, disk=0)
++ super(CWLScatter, self).__init__(cores=1, memory="1GiB", disk="1MiB")
+ self.step = step
+ self.cwljob = cwljob
+ self.runtime_context = runtime_context
+@@ -2268,7 +2268,7 @@
+ outputs: Union[Mapping, MutableSequence],
+ ):
+ """Collect our context for later gathering."""
+- super(CWLGather, self).__init__(cores=1, memory=10 * 1024 ^ 2, disk=0)
++ super(CWLGather, self).__init__(cores=1, memory="1GiB", disk="1MiB")
+ self.step = step
+ self.outputs = outputs
+
+@@ -2310,7 +2310,7 @@
+
+ def __init__(self, j: "CWLWorkflow", v: dict):
+ """Record the workflow and dictionary."""
+- super(SelfJob, self).__init__(cores=1, memory=1024 ^ 2, disk=0)
++ super(SelfJob, self).__init__(cores=1, memory="1GiB", disk="1MiB")
+ self.j = j
+ self.v = v
+
+@@ -2364,7 +2364,7 @@
+ conditional: Union[Conditional, None] = None,
+ ):
+ """Gather our context for later execution."""
+- super(CWLWorkflow, self).__init__(cores=1, memory=100 * 1024 ^ 2, disk=0)
++ super(CWLWorkflow, self).__init__(cores=1, memory="1GiB", disk="1MiB")
+ self.cwlwf = cwlwf
+ self.cwljob = cwljob
+ self.runtime_context = runtime_context
+--- toil.orig/src/toil/batchSystems/parasol.py
++++ toil/src/toil/batchSystems/parasol.py
+@@ -48,7 +48,7 @@
+
+ def __init__(self, config, maxCores, maxMemory, maxDisk):
+ super(ParasolBatchSystem, self).__init__(config, maxCores, maxMemory, maxDisk)
+- if maxMemory != sys.maxsize:
++ if maxMemory != 9223372036854775807:
+ logger.warning('The Parasol batch system does not support maxMemory.')
+ # Keep the name of the results file for the pstat2 command..
+ command = config.parasolCommand
+--- toil.orig/src/toil/batchSystems/singleMachine.py
++++ toil/src/toil/batchSystems/singleMachine.py
+@@ -16,7 +16,6 @@
+ import os
+ import signal
+ import subprocess
+-import sys
+ import time
+ import traceback
+ from contextlib import contextmanager
+@@ -80,12 +79,12 @@
+ # If we don't have up to the limit of the resource (and the resource
+ # isn't the inlimited sentinel), warn.
+ if maxCores > self.numCores:
+- if maxCores != sys.maxsize:
++ if maxCores != 9223372036854775807:
+ # We have an actually specified limit and not the default
+ log.warning('Not enough cores! User limited to %i but we only have %i.', maxCores, self.numCores)
+ maxCores = self.numCores
+ if maxMemory > self.physicalMemory:
+- if maxMemory != sys.maxsize:
++ if maxMemory != 9223372036854775807:
+ # We have an actually specified limit and not the default
+ log.warning('Not enough memory! User limited to %i bytes but we only have %i bytes.', maxMemory, self.physicalMemory)
+ maxMemory = self.physicalMemory
+@@ -93,7 +92,7 @@
+ workdir = Toil.getLocalWorkflowDir(config.workflowID, config.workDir) # config.workDir may be None; this sets a real directory
+ self.physicalDisk = toil.physicalDisk(workdir)
+ if maxDisk > self.physicalDisk:
+- if maxDisk != sys.maxsize:
++ if maxDisk != 9223372036854775807:
+ # We have an actually specified limit and not the default
+ log.warning('Not enough disk space! User limited to %i bytes but we only have %i bytes.', maxDisk, self.physicalDisk)
+ maxDisk = self.physicalDisk
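As background on the replacement of sys.maxsize and on the changed memory/disk
literals above, here is a minimal standalone sketch; the constants are standard
CPython values and nothing below comes from the Toil source itself:

    import sys

    # 1) On a 32-bit interpreter sys.maxsize is 2**31 - 1 (~1.99 GiB when read
    #    as a byte count), so using it as the "unlimited" sentinel silently
    #    capped maxMemory/maxDisk. Python ints are arbitrary precision, so the
    #    64-bit constant is safe on every platform.
    UNLIMITED = 9223372036854775807
    assert UNLIMITED == 2**63 - 1
    print("this interpreter's sys.maxsize:", sys.maxsize)
    print("a 32-bit interpreter's maxsize:", 2**31 - 1)

    # 2) In Python `^` is bitwise XOR, not exponentiation, so the old
    #    `1024 ^ 2` was 1026 bytes rather than 1 MiB; the patch switches to
    #    human-readable strings such as "1GiB" to avoid that class of mistake.
    print(1024 ^ 2, "bytes (XOR) vs", 1024 ** 2, "bytes (1 MiB)")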
=====================================
debian/tests/run-unit-tests
=====================================
@@ -1,13 +1,17 @@
#!/bin/sh -ex
-pkg=toil
-
export LC_ALL=C.UTF-8
-# if [ "${AUTOPKGTEST_TMP}" = "" ] ; then
-# AUTOPKGTEST_TMP=$(mktemp -d /tmp/${pkg}-test.XXXXXX)
-# trap "rm -rf ${AUTOPKGTEST_TMP}" 0 INT QUIT ABRT PIPE TERM
-# fi
-#
-# cd "${AUTOPKGTEST_TMP}"
-
-TMP=AUTOPKGTEST_TMP TOIL_TEST_QUICK=True TOIL_SKIP_DOCKER=True python3 -m pytest -vv -rs -W ignore -k "not (test_bioconda or AWSJobStoreTest or awsjobstoretest or testCwlexample or CWLv10Test or CWLv11Test or CWLv12Test or AWSAutoscaleTest or AWSStaticAutoscaleTest or AWSManagedAutoscaleTest or AWSAutoscaleTestMultipleNodeTypes or AWSRestartTest or PreemptableDeficitCompensationTest)" --ignore-glob '*cwlTest*' --ignore /usr/lib/python3/dist-packages/toil/test/lib/aws/test_s3.py --pyargs toil.test
+
+mkdir ${HOME}/tmp
+
+TMPDIR=${HOME}/tmp TOIL_TEST_QUICK=True TOIL_SKIP_DOCKER=True python3 -m pytest \
+ --junit-xml=$AUTOPKGTEST_ARTIFACTS/toil-tests-junit.xml \
+ -vv -rs -W ignore -k "not (test_bioconda or AWSJobStoreTest or \
+ awsjobstoretest or testCwlexample or CWLv10Test or CWLv11Test or \
+ CWLv12Test or AWSAutoscaleTest or AWSStaticAutoscaleTest or \
+ AWSManagedAutoscaleTest or AWSAutoscaleTestMultipleNodeTypes or \
+ AWSRestartTest or PreemptableDeficitCompensationTest)" \
+ --ignore-glob '*cwlTest*' --ignore /usr/lib/python3/dist-packages/toil/test/lib/aws/test_s3.py \
+ --pyargs toil.test
+
+
View it on GitLab: https://salsa.debian.org/med-team/toil/-/compare/a45778cc1e3a12ddee2003324b94a1238aa18b4d...6a75fb74aa13d5826f8dfb7b1f2869d9784f498b
You're receiving this email because of your account on salsa.debian.org.