[med-svn] [Git][med-team/toil][master] 5 commits: d/patches/allow_newer_requests: added

Michael R. Crusoe (@crusoe) gitlab at salsa.debian.org
Tue Jun 18 19:45:17 BST 2024



Michael R. Crusoe pushed to branch master at Debian Med / toil


Commits:
93ceb45a by Michael R. Crusoe at 2024-06-18T18:23:13+02:00
d/patches/allow_newer_requests: added

- - - - -
69bb06e1 by Michael R. Crusoe at 2024-06-18T18:28:20+02:00
docs: link to cwltool-doc; link to python3-doc locally

- - - - -
10d8ceb7 by Michael R. Crusoe at 2024-06-18T19:46:39+02:00
d/patches/skip-mypy-boto3: these type hints are not packaged for Debian yet

- - - - -
33355745 by Michael R. Crusoe at 2024-06-18T19:46:40+02:00
d/patches/needs_aws-proxyfix: Fix build failure caused by the purposely invalid proxy configuration.

- - - - -
950a6aef by Michael R. Crusoe at 2024-06-18T20:14:03+02:00
d/rules: skip online tests

- - - - -


8 changed files:

- debian/changelog
- debian/control
- + debian/patches/allow_newer_requests
- debian/patches/intersphinx
- + debian/patches/needs_aws-proxyfix
- debian/patches/series
- + debian/patches/skip-mypy-boto3
- debian/rules


Changes:

=====================================
debian/changelog
=====================================
@@ -2,6 +2,13 @@ toil (7.0.0-1) UNRELEASED; urgency=medium
 
   * New upstream version
   * Refresh patches
+  * d/patches/allow_newer_requests: added
+  * docs: link to cwltool-doc; link to python3-doc locally
+  * d/patches/skip-mypy-boto3: these type hints are not packaged for
+    Debian yet
+  * d/patches/needs_aws-proxyfix: Fix build failure caused by the
+    purposely invalid proxy configuration.
+  * d/rules: skip online tests
 
  -- Michael R. Crusoe <crusoe at debian.org>  Mon, 27 May 2024 08:45:55 +0200
 


=====================================
debian/control
=====================================
@@ -23,6 +23,7 @@ Build-Depends: debhelper-compat (= 13),
                python3-yaml <!nocheck>,
                docker.io <!nocheck>,
                python3-urllib3,
+               cwltool-doc <!nodoc>,
                python3-sphinx <!nodoc>,
                python3-sphinx-autoapi <!nodoc>,
                python3-sphinx-autodoc-typehints <!nodoc>,


=====================================
debian/patches/allow_newer_requests
=====================================
@@ -0,0 +1,13 @@
+Author: Michael R. Crusoe <crusoe at debian.org>
+Description: remove max version pin for 'requests'
+Forwarded: not-needed
+
+--- toil.orig/requirements.txt
++++ toil/requirements.txt
+@@ -1,5 +1,5 @@
+ dill>=0.3.2, <0.4
+-requests<=2.31.0
++requests
+ docker
+ urllib3>=1.26.0,<3
+ python-dateutil
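
Upstream pins requests at <=2.31.0, which rejects the newer requests that
Debian ships and so breaks dependency resolution at build time. A small
sketch with the 'packaging' library (the version string is illustrative,
not necessarily what Debian currently ships):

    from packaging.specifiers import SpecifierSet

    candidate = "2.32.3"  # illustrative Debian-packaged requests version
    print(candidate in SpecifierSet("<=2.31.0"))  # False: the upstream pin rejects it
    print(candidate in SpecifierSet(""))          # True: with the pin dropped, any version is accepted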


=====================================
debian/patches/intersphinx
=====================================
@@ -3,12 +3,14 @@ Description: Link to the offline Python docs.
 Forwarded: not-needed
 --- toil.orig/docs/conf.py
 +++ toil/docs/conf.py
-@@ -71,7 +71,7 @@
+@@ -71,8 +71,8 @@
  ]
  
  intersphinx_mapping = {
 -    "python": ("https://docs.python.org/3", None),
-+    "python": ("https://docs.python.org/3", f"/usr/share/doc/python{sys.version_info.major}.{sys.version_info.minor}/html/objects.inv"),
-     "cwltool": ("https://cwltool.readthedocs.io/en/latest/", None),
+-    "cwltool": ("https://cwltool.readthedocs.io/en/latest/", None),
++    "python": ("/usr/share/doc/python3/html", None),
++    "cwltool": ("/usr/share/doc/cwltool/html", None),
  }
  
+ # Link definitions available everywhere so we don't need to keep repeating ourselves.
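
With the (path, None) form, intersphinx reads each inventory from
objects.inv under the given local directory, so the docs build only works
if the python3-doc and cwltool-doc packages actually install those files.
A quick pre-build sanity check (a sketch, using the paths the patch
assumes):

    from pathlib import Path

    for html_dir in ("/usr/share/doc/python3/html", "/usr/share/doc/cwltool/html"):
        inv = Path(html_dir) / "objects.inv"
        print(inv, "found" if inv.exists() else "MISSING")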


=====================================
debian/patches/needs_aws-proxyfix
=====================================
@@ -0,0 +1,29 @@
+From: Michael R. Crusoe <crusoe at debian.org>
+Subject: Skip AWS-requiring tests when a broken proxy is set.
+
+When building Debian packages we purposely set HTTP{,S}_PROXY to http://127.0.0.1:9/
+so that any attempted internet access fails quickly instead of hanging.
+
+--- toil.orig/src/toil/test/__init__.py
++++ toil/src/toil/test/__init__.py
+@@ -62,6 +62,8 @@
+ from toil.lib.threading import ExceptionalThread, cpu_count
+ from toil.version import distVersion
+ 
++import botocore.exceptions
++
+ logger = logging.getLogger(__name__)
+ 
+ 
+@@ -379,6 +381,11 @@
+         return unittest.skip("Install Toil with the 'aws' extra to include this test.")(
+             test_item
+         )
++    except botocore.exceptions.ProxyConnectionError as e:
++        return unittest.skip(f"Proxy error: {e}, skipping this test.")(
++            test_item
++        )
++
+     from toil.lib.aws import running_on_ec2
+     if not (boto3_credentials or os.path.exists(os.path.expanduser('~/.aws/credentials')) or running_on_ec2()):
+         return unittest.skip("Configure AWS credentials to include this test.")(test_item)
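
TCP port 9 is the "discard" service, on which nothing normally listens, so
any client honouring the proxy variables fails immediately with a
connection error rather than waiting out a timeout. A standalone sketch of
the convention, independent of Toil's test harness:

    import os
    import requests

    # The same deliberately unreachable proxy that the Debian build exports.
    os.environ["HTTP_PROXY"] = os.environ["HTTPS_PROXY"] = "http://127.0.0.1:9/"

    try:
        requests.get("https://example.org/", timeout=5)
    except requests.exceptions.ProxyError as exc:
        print(f"blocked as intended: {exc}")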


=====================================
debian/patches/series
=====================================
@@ -1,3 +1,5 @@
+needs_aws-proxyfix
+skip-mypy-boto3
 soften-configargparser-deps
 intersphinx
 setting_version.patch
@@ -7,3 +9,4 @@ soften-mesos-deps
 atomic_copy_as_alternative.patch
 soften-cwltool-dep.patch
 accept_debian_packaged_docker_version.patch
+allow_newer_requests


=====================================
debian/patches/skip-mypy-boto3
=====================================
@@ -0,0 +1,1317 @@
+From: Michael R. Crusoe <crusoe at debian.org>
+Subject: Allow running without the mypy_boto3_* packages installed
+Forwarded: https://github.com/DataBiosphere/toil/pull/4975
+
+They aren't packaged for Debian yet.
+
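
The pattern applied throughout the diff below: imports of the stub-only
mypy_boto3_* packages move under an 'if TYPE_CHECKING:' guard (which is
false at runtime), and every annotation naming one of their types is
quoted so it is never evaluated. A minimal sketch of the idiom, with
hypothetical names:

    from typing import TYPE_CHECKING, List

    if TYPE_CHECKING:
        # Only mypy ever imports this; at runtime the package may be absent.
        from mypy_boto3_s3 import S3Client

    def bucket_names(s3: "S3Client") -> List[str]:
        # Quoted annotation: no runtime dependency on the stub package.
        return [b["Name"] for b in s3.list_buckets()["Buckets"]]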
+--- toil.orig/src/toil/jobStores/aws/jobStore.py
++++ toil/src/toil/jobStores/aws/jobStore.py
+@@ -27,8 +27,6 @@
+ from urllib.parse import ParseResult, parse_qs, urlencode, urlsplit, urlunsplit
+ 
+ from botocore.exceptions import ClientError
+-from mypy_boto3_sdb import SimpleDBClient
+-from mypy_boto3_sdb.type_defs import ReplaceableItemTypeDef, ReplaceableAttributeTypeDef, SelectResultTypeDef, ItemTypeDef, AttributeTypeDef, DeletableItemTypeDef, UpdateConditionTypeDef
+ 
+ import toil.lib.encryption as encryption
+ from toil.fileStores import FileID
+@@ -71,6 +69,7 @@
+ 
+ if TYPE_CHECKING:
+     from toil import Config
++    from mypy_boto3_sdb.type_defs import ReplaceableItemTypeDef, ReplaceableAttributeTypeDef, ItemTypeDef, AttributeTypeDef, DeletableItemTypeDef, UpdateConditionTypeDef
+ 
+ boto3_session = establish_boto3_session()
+ s3_boto3_resource = boto3_session.resource('s3')
+@@ -229,7 +228,7 @@
+                                                     ItemName=self.name_prefix,
+                                                     AttributeNames=['exists'],
+                                                     ConsistentRead=True)
+-                attributes: List[AttributeTypeDef] = get_result.get("Attributes", [])  # the documentation says 'Attributes' should always exist, but this is not true
++                attributes: List["AttributeTypeDef"] = get_result.get("Attributes", [])  # the documentation says 'Attributes' should always exist, but this is not true
+                 exists: Optional[str] = get_item_from_attributes(attributes=attributes, name="exists")
+                 if exists is None:
+                     return False
+@@ -259,7 +258,7 @@
+                                                   ItemName=self.name_prefix)
+                     else:
+                         if value is True:
+-                            attributes: List[ReplaceableAttributeTypeDef] = [{"Name": "exists", "Value": "True", "Replace": True}]
++                            attributes: List["ReplaceableAttributeTypeDef"] = [{"Name": "exists", "Value": "True", "Replace": True}]
+                         elif value is None:
+                             attributes = [{"Name": "exists", "Value": "False", "Replace": True}]
+                         else:
+@@ -268,7 +267,7 @@
+                                                ItemName=self.name_prefix,
+                                                Attributes=attributes)
+ 
+-    def _checkItem(self, item: ItemTypeDef, enforce: bool = True) -> None:
++    def _checkItem(self, item: "ItemTypeDef", enforce: bool = True) -> None:
+         """
+         Make sure that the given SimpleDB item actually has the attributes we think it should.
+ 
+@@ -278,7 +277,7 @@
+         """
+         self._checkAttributes(item["Attributes"], enforce)
+ 
+-    def _checkAttributes(self, attributes: List[AttributeTypeDef], enforce: bool = True) -> None:
++    def _checkAttributes(self, attributes: List["AttributeTypeDef"], enforce: bool = True) -> None:
+         if get_item_from_attributes(attributes=attributes, name="overlargeID") is None:
+             logger.error("overlargeID attribute isn't present: either SimpleDB entry is "
+                          "corrupt or jobstore is from an extremely old Toil: %s", attributes)
+@@ -286,7 +285,7 @@
+                 raise RuntimeError("encountered SimpleDB entry missing required attribute "
+                                    "'overlargeID'; is your job store ancient?")
+ 
+-    def _awsJobFromAttributes(self, attributes: List[AttributeTypeDef]) -> Job:
++    def _awsJobFromAttributes(self, attributes: List["AttributeTypeDef"]) -> Job:
+         """
+         Get a Toil Job object from attributes that are defined in an item from the DB
+         :param attributes: List of attributes
+@@ -309,15 +308,14 @@
+             job.assignConfig(self.config)
+         return job
+ 
+-    def _awsJobFromItem(self, item: ItemTypeDef) -> Job:
++    def _awsJobFromItem(self, item: "ItemTypeDef") -> Job:
+         """
+         Get a Toil Job object from an item from the DB
+-        :param item: ItemTypeDef
+         :return: Toil Job
+         """
+         return self._awsJobFromAttributes(item["Attributes"])
+ 
+-    def _awsJobToAttributes(self, job: JobDescription) -> List[AttributeTypeDef]:
++    def _awsJobToAttributes(self, job: JobDescription) -> List["AttributeTypeDef"]:
+         binary = pickle.dumps(job, protocol=pickle.HIGHEST_PROTOCOL)
+         if len(binary) > SDBHelper.maxBinarySize(extraReservedChunks=1):
+             # Store as an overlarge job in S3
+@@ -330,7 +328,7 @@
+             item["overlargeID"] = ""
+         return SDBHelper.attributeDictToList(item)
+ 
+-    def _awsJobToItem(self, job: JobDescription, name: str) -> ItemTypeDef:
++    def _awsJobToItem(self, job: JobDescription, name: str) -> "ItemTypeDef":
+         return {"Name": name, "Attributes": self._awsJobToAttributes(job)}
+ 
+     jobsPerBatchInsert = 25
+@@ -343,14 +341,14 @@
+                    range(0, len(self._batchedUpdates), self.jobsPerBatchInsert)]
+ 
+         for batch in batches:
+-            items: List[ReplaceableItemTypeDef] = []
++            items: List["ReplaceableItemTypeDef"] = []
+             for jobDescription in batch:
+-                item_attributes: List[ReplaceableAttributeTypeDef] = []
++                item_attributes: List["ReplaceableAttributeTypeDef"] = []
+                 jobDescription.pre_update_hook()
+                 item_name = compat_bytes(jobDescription.jobStoreID)
+-                got_job_attributes: List[AttributeTypeDef] = self._awsJobToAttributes(jobDescription)
++                got_job_attributes: List["AttributeTypeDef"] = self._awsJobToAttributes(jobDescription)
+                 for each_attribute in got_job_attributes:
+-                    new_attribute: ReplaceableAttributeTypeDef = {"Name": each_attribute["Name"],
++                    new_attribute: "ReplaceableAttributeTypeDef" = {"Name": each_attribute["Name"],
+                                                                   "Value": each_attribute["Value"],
+                                                                   "Replace": True}
+                     item_attributes.append(new_attribute)
+@@ -383,7 +381,7 @@
+                                                   ConsistentRead=True).get("Attributes", [])) > 0
+ 
+     def jobs(self) -> Generator[Job, None, None]:
+-        job_items: Optional[List[ItemTypeDef]] = None
++        job_items: Optional[List["ItemTypeDef"]] = None
+         for attempt in retry_sdb():
+             with attempt:
+                 job_items = boto3_pager(self.db.select,
+@@ -413,7 +411,7 @@
+         logger.debug("Updating job %s", job_description.jobStoreID)
+         job_description.pre_update_hook()
+         job_attributes = self._awsJobToAttributes(job_description)
+-        update_attributes: List[ReplaceableAttributeTypeDef] = [{"Name": attribute["Name"], "Value": attribute["Value"], "Replace": True}
++        update_attributes: List["ReplaceableAttributeTypeDef"] = [{"Name": attribute["Name"], "Value": attribute["Value"], "Replace": True}
+                                                                 for attribute in job_attributes]
+         for attempt in retry_sdb():
+             with attempt:
+@@ -442,7 +440,7 @@
+         for attempt in retry_sdb():
+             with attempt:
+                 self.db.delete_attributes(DomainName=self.jobs_domain_name, ItemName=compat_bytes(job_id))
+-        items: Optional[List[ItemTypeDef]] = None
++        items: Optional[List["ItemTypeDef"]] = None
+         for attempt in retry_sdb():
+             with attempt:
+                 items = list(boto3_pager(self.db.select,
+@@ -455,12 +453,12 @@
+             n = self.itemsPerBatchDelete
+             batches = [items[i:i + n] for i in range(0, len(items), n)]
+             for batch in batches:
+-                delete_items: List[DeletableItemTypeDef] = [{"Name": item["Name"]} for item in batch]
++                delete_items: List["DeletableItemTypeDef"] = [{"Name": item["Name"]} for item in batch]
+                 for attempt in retry_sdb():
+                     with attempt:
+                         self.db.batch_delete_attributes(DomainName=self.files_domain_name, Items=delete_items)
+             for item in items:
+-                item: ItemTypeDef
++                item: "ItemTypeDef"
+                 version = get_item_from_attributes(attributes=item["Attributes"], name="version")
+                 for attempt in retry_s3():
+                     with attempt:
+@@ -1062,7 +1060,7 @@
+                 return self
+ 
+         @classmethod
+-        def fromItem(cls, item: ItemTypeDef):
++        def fromItem(cls, item: "ItemTypeDef"):
+             """
+             Convert an SDB item to an instance of this class.
+ 
+@@ -1128,7 +1126,7 @@
+             attributes_boto3 = SDBHelper.attributeDictToList(attributes)
+             # False stands for absence
+             if self.previousVersion is None:
+-                expected: UpdateConditionTypeDef = {"Name": 'version', "Exists": False}
++                expected: "UpdateConditionTypeDef" = {"Name": 'version', "Exists": False}
+             else:
+                 expected = {"Name": 'version', "Value": cast(str, self.previousVersion)}
+             try:
+@@ -1602,7 +1600,7 @@
+         def delete(self):
+             store = self.outer
+             if self.previousVersion is not None:
+-                expected: UpdateConditionTypeDef = {"Name": 'version', "Value": cast(str, self.previousVersion)}
++                expected: "UpdateConditionTypeDef" = {"Name": 'version', "Value": cast(str, self.previousVersion)}
+                 for attempt in retry_sdb():
+                     with attempt:
+                         store.db.delete_attributes(DomainName=store.files_domain_name,
+--- toil.orig/src/toil/jobStores/aws/utils.py
++++ toil/src/toil/jobStores/aws/utils.py
+@@ -22,7 +22,6 @@
+ from boto3.s3.transfer import TransferConfig
+ from botocore.client import Config
+ from botocore.exceptions import ClientError
+-from mypy_boto3_sdb.type_defs import ItemTypeDef, AttributeTypeDef
+ 
+ from toil.lib.aws import session, AWSServerErrors
+ from toil.lib.aws.utils import connection_error, get_bucket_region
+@@ -36,6 +35,7 @@
+                             retry)
+ if TYPE_CHECKING:
+     from mypy_boto3_s3 import S3ServiceResource
++    from mypy_boto3_sdb.type_defs import ItemTypeDef, AttributeTypeDef
+ 
+ logger = logging.getLogger(__name__)
+ 
+@@ -148,27 +148,27 @@
+         return attributes
+ 
+     @classmethod
+-    def attributeDictToList(cls, attributes: Dict[str, str]) -> List[AttributeTypeDef]:
++    def attributeDictToList(cls, attributes: Dict[str, str]) -> List["AttributeTypeDef"]:
+         """
+         Convert the attribute dict (ex: from binaryToAttributes) into a list of attribute typed dicts
+         to be compatible with boto3 argument syntax
+         :param attributes: Dict[str, str], attribute in object form
+-        :return: List[AttributeTypeDef], list of attributes in typed dict form
++        :return: list of attributes in typed dict form
+         """
+         return [{"Name": name, "Value": value} for name, value in attributes.items()]
+ 
+     @classmethod
+-    def attributeListToDict(cls, attributes: List[AttributeTypeDef]) -> Dict[str, str]:
++    def attributeListToDict(cls, attributes: List["AttributeTypeDef"]) -> Dict[str, str]:
+         """
+         Convert the attribute boto3 representation of list of attribute typed dicts
+         back to a dictionary with name, value pairs
+-        :param attribute: List[AttributeTypeDef, attribute in typed dict form
++        :param attribute: attribute in typed dict form
+         :return: Dict[str, str], attribute in dict form
+         """
+         return {attribute["Name"]: attribute["Value"] for attribute in attributes}
+ 
+     @classmethod
+-    def get_attributes_from_item(cls, item: ItemTypeDef, keys: List[str]) -> List[Optional[str]]:
++    def get_attributes_from_item(cls, item: "ItemTypeDef", keys: List[str]) -> List[Optional[str]]:
+         return_values: List[Optional[str]] = [None for _ in keys]
+         mapped_indices: Dict[str, int] = {name: index for index, name in enumerate(keys)}
+         for attribute in item["Attributes"]:
+@@ -196,7 +196,7 @@
+         return 'numChunks'
+ 
+     @classmethod
+-    def attributesToBinary(cls, attributes: List[AttributeTypeDef]) -> Tuple[bytes, int]:
++    def attributesToBinary(cls, attributes: List["AttributeTypeDef"]) -> Tuple[bytes, int]:
+         """
+         :rtype: (str|None,int)
+         :return: the binary data and the number of chunks it was composed from
+--- toil.orig/src/toil/lib/aws/__init__.py
++++ toil/src/toil/lib/aws/__init__.py
+@@ -18,15 +18,16 @@
+ import socket
+ import toil.lib.retry
+ from http.client import HTTPException
+-from typing import Dict, MutableMapping, Optional, Union, Literal
++from typing import Dict, MutableMapping, Optional, Union, Literal, TYPE_CHECKING
+ from urllib.error import URLError
+ from urllib.request import urlopen
+ 
+ from botocore.exceptions import ClientError
+ 
+-from mypy_boto3_s3.literals import BucketLocationConstraintType
++if TYPE_CHECKING:
++    from mypy_boto3_s3.literals import BucketLocationConstraintType
+ 
+-AWSRegionName = Union[BucketLocationConstraintType, Literal["us-east-1"]]
++AWSRegionName = Union["BucketLocationConstraintType", Literal["us-east-1"]]
+ 
+ # These are errors where we think something randomly
+ # went wrong on the AWS side and we ought to retry.
+--- toil.orig/src/toil/lib/aws/iam.py
++++ toil/src/toil/lib/aws/iam.py
+@@ -3,16 +3,18 @@
+ import logging
+ from collections import defaultdict
+ from functools import lru_cache
+-from typing import Dict, List, Optional, Union, cast
++from typing import Dict, List, Optional, Union, cast, TYPE_CHECKING
+ 
+ import boto3
+-from mypy_boto3_iam import IAMClient
+-from mypy_boto3_iam.type_defs import (AttachedPolicyTypeDef,
+-                                      PolicyDocumentDictTypeDef)
+-from mypy_boto3_sts import STSClient
+ 
+ from toil.lib.aws.session import client as get_client
+ 
++if TYPE_CHECKING:
++    from mypy_boto3_iam import IAMClient
++    from mypy_boto3_iam.type_defs import (AttachedPolicyTypeDef,
++                                          PolicyDocumentDictTypeDef)
++    from mypy_boto3_sts import STSClient
++
+ logger = logging.getLogger(__name__)
+ 
+ #TODO Make this comprehensive
+@@ -120,7 +122,7 @@
+             return True
+     return False
+ 
+-def get_actions_from_policy_document(policy_doc: PolicyDocumentDictTypeDef) -> AllowedActionCollection:
++def get_actions_from_policy_document(policy_doc: "PolicyDocumentDictTypeDef") -> AllowedActionCollection:
+     '''
+     Given a policy document, go through each statement and create an AllowedActionCollection representing the
+     permissions granted in the policy document.
+@@ -149,7 +151,7 @@
+                             allowed_actions[resource][key].append(statement[key])  # type: ignore[literal-required]
+ 
+     return allowed_actions
+-def allowed_actions_attached(iam: IAMClient, attached_policies: List[AttachedPolicyTypeDef]) -> AllowedActionCollection:
++def allowed_actions_attached(iam: "IAMClient", attached_policies: List["AttachedPolicyTypeDef"]) -> AllowedActionCollection:
+     """
+     Go through all attached policy documents and create an AllowedActionCollection representing granted permissions.
+ 
+@@ -168,7 +170,7 @@
+     return allowed_actions
+ 
+ 
+-def allowed_actions_roles(iam: IAMClient, policy_names: List[str], role_name: str) -> AllowedActionCollection:
++def allowed_actions_roles(iam: "IAMClient", policy_names: List[str], role_name: str) -> AllowedActionCollection:
+     """
+     Returns a dictionary containing a list of all aws actions allowed for a given role.
+     This dictionary is keyed by resource and gives a list of policies allowed on that resource.
+@@ -196,7 +198,7 @@
+     return allowed_actions
+ 
+ 
+-def collect_policy_actions(policy_documents: List[Union[str, PolicyDocumentDictTypeDef]]) -> AllowedActionCollection:
++def collect_policy_actions(policy_documents: List[Union[str, "PolicyDocumentDictTypeDef"]]) -> AllowedActionCollection:
+     """
+     Collect all of the actions allowed by the given policy documents into one AllowedActionCollection.
+     """
+@@ -211,7 +213,7 @@
+     return allowed_actions
+ 
+ 
+-def allowed_actions_user(iam: IAMClient, policy_names: List[str], user_name: str) -> AllowedActionCollection:
++def allowed_actions_user(iam: "IAMClient", policy_names: List[str], user_name: str) -> AllowedActionCollection:
+     """
+     Gets all allowed actions for a user given by user_name, returns a dictionary, keyed by resource,
+     with a list of permissions allowed for each given resource.
+@@ -230,7 +232,7 @@
+     return collect_policy_actions(user_policies)
+ 
+ 
+-def allowed_actions_group(iam: IAMClient, policy_names: List[str], group_name: str) -> AllowedActionCollection:
++def allowed_actions_group(iam: "IAMClient", policy_names: List[str], group_name: str) -> AllowedActionCollection:
+     """
+     Gets all allowed actions for a group given by group_name, returns a dictionary, keyed by resource,
+     with a list of permissions allowed for each given resource.
+@@ -257,8 +259,8 @@
+     :param zone: AWS zone to connect to
+     """
+ 
+-    iam: IAMClient = get_client('iam', region)
+-    sts: STSClient = get_client('sts', region)
++    iam: "IAMClient" = get_client('iam', region)
++    sts: "STSClient" = get_client('sts', region)
+     #TODO Condider effect: deny at some point
+     allowed_actions: AllowedActionCollection = defaultdict(lambda: {'Action': [], 'NotAction': []})
+     try:
+--- toil.orig/src/toil/lib/aws/session.py
++++ toil/src/toil/lib/aws/session.py
+@@ -15,7 +15,7 @@
+ import logging
+ import os
+ import threading
+-from typing import Dict, Optional, Tuple, cast, Union, Literal, overload, TypeVar
++from typing import Dict, Optional, Tuple, cast, Union, Literal, overload, TypeVar, TYPE_CHECKING
+ 
+ import boto3
+ import boto3.resources.base
+@@ -24,12 +24,14 @@
+ from botocore.client import Config
+ from botocore.session import get_session
+ from botocore.utils import JSONFileCache
+-from mypy_boto3_autoscaling import AutoScalingClient
+-from mypy_boto3_ec2 import EC2Client, EC2ServiceResource
+-from mypy_boto3_iam import IAMClient, IAMServiceResource
+-from mypy_boto3_s3 import S3Client, S3ServiceResource
+-from mypy_boto3_sdb import SimpleDBClient
+-from mypy_boto3_sts import STSClient
++
++if TYPE_CHECKING:
++    from mypy_boto3_autoscaling import AutoScalingClient
++    from mypy_boto3_ec2 import EC2Client, EC2ServiceResource
++    from mypy_boto3_iam import IAMClient, IAMServiceResource
++    from mypy_boto3_s3 import S3Client, S3ServiceResource
++    from mypy_boto3_sdb import SimpleDBClient
++    from mypy_boto3_sts import STSClient
+ 
+ logger = logging.getLogger(__name__)
+ 
+@@ -126,11 +128,11 @@
+         return cast(boto3.session.Session, storage.item)
+ 
+     @overload
+-    def resource(self, region: Optional[str], service_name: Literal["s3"], endpoint_url: Optional[str] = None) -> S3ServiceResource: ...
++    def resource(self, region: Optional[str], service_name: Literal["s3"], endpoint_url: Optional[str] = None) -> "S3ServiceResource": ...
+     @overload
+-    def resource(self, region: Optional[str], service_name: Literal["iam"], endpoint_url: Optional[str] = None) -> IAMServiceResource: ...
++    def resource(self, region: Optional[str], service_name: Literal["iam"], endpoint_url: Optional[str] = None) -> "IAMServiceResource": ...
+     @overload
+-    def resource(self, region: Optional[str], service_name: Literal["ec2"], endpoint_url: Optional[str] = None) -> EC2ServiceResource: ...
++    def resource(self, region: Optional[str], service_name: Literal["ec2"], endpoint_url: Optional[str] = None) -> "EC2ServiceResource": ...
+ 
+     def resource(self, region: Optional[str], service_name: str, endpoint_url: Optional[str] = None) -> boto3.resources.base.ServiceResource:
+         """
+@@ -160,22 +162,22 @@
+ 
+     @overload
+     def client(self, region: Optional[str], service_name: Literal["ec2"], endpoint_url: Optional[str] = None,
+-               config: Optional[Config] = None) -> EC2Client: ...
++               config: Optional[Config] = None) -> "EC2Client": ...
+     @overload
+     def client(self, region: Optional[str], service_name: Literal["iam"], endpoint_url: Optional[str] = None,
+-               config: Optional[Config] = None) -> IAMClient: ...
++               config: Optional[Config] = None) -> "IAMClient": ...
+     @overload
+     def client(self, region: Optional[str], service_name: Literal["s3"], endpoint_url: Optional[str] = None,
+-               config: Optional[Config] = None) -> S3Client: ...
++               config: Optional[Config] = None) -> "S3Client": ...
+     @overload
+     def client(self, region: Optional[str], service_name: Literal["sts"], endpoint_url: Optional[str] = None,
+-               config: Optional[Config] = None) -> STSClient: ...
++               config: Optional[Config] = None) -> "STSClient": ...
+     @overload
+     def client(self, region: Optional[str], service_name: Literal["sdb"], endpoint_url: Optional[str] = None,
+-               config: Optional[Config] = None) -> SimpleDBClient: ...
++               config: Optional[Config] = None) -> "SimpleDBClient": ...
+     @overload
+     def client(self, region: Optional[str], service_name: Literal["autoscaling"], endpoint_url: Optional[str] = None,
+-               config: Optional[Config] = None) -> AutoScalingClient: ...
++               config: Optional[Config] = None) -> "AutoScalingClient": ...
+ 
+ 
+     def client(self, region: Optional[str], service_name: Literal["ec2", "iam", "s3", "sts", "sdb", "autoscaling"], endpoint_url: Optional[str] = None,
+@@ -226,17 +228,17 @@
+     return _global_manager.session(region_name)
+ 
+ @overload
+-def client(service_name: Literal["ec2"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None, config: Optional[Config] = None) -> EC2Client: ...
++def client(service_name: Literal["ec2"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None, config: Optional[Config] = None) -> "EC2Client": ...
+ @overload
+-def client(service_name: Literal["iam"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None, config: Optional[Config] = None) -> IAMClient: ...
++def client(service_name: Literal["iam"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None, config: Optional[Config] = None) -> "IAMClient": ...
+ @overload
+-def client(service_name: Literal["s3"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None, config: Optional[Config] = None) -> S3Client: ...
++def client(service_name: Literal["s3"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None, config: Optional[Config] = None) -> "S3Client": ...
+ @overload
+-def client(service_name: Literal["sts"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None, config: Optional[Config] = None) -> STSClient: ...
++def client(service_name: Literal["sts"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None, config: Optional[Config] = None) -> "STSClient": ...
+ @overload
+-def client(service_name: Literal["sdb"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None, config: Optional[Config] = None) -> SimpleDBClient: ...
++def client(service_name: Literal["sdb"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None, config: Optional[Config] = None) -> "SimpleDBClient": ...
+ @overload
+-def client(service_name: Literal["autoscaling"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None, config: Optional[Config] = None) -> AutoScalingClient: ...
++def client(service_name: Literal["autoscaling"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None, config: Optional[Config] = None) -> "AutoScalingClient": ...
+ 
+ def client(service_name: Literal["ec2", "iam", "s3", "sts", "sdb", "autoscaling"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None, config: Optional[Config] = None) -> botocore.client.BaseClient:
+     """
+@@ -249,11 +251,11 @@
+     return _global_manager.client(region_name, service_name, endpoint_url=endpoint_url, config=config)
+ 
+ @overload
+-def resource(service_name: Literal["s3"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None) -> S3ServiceResource: ...
++def resource(service_name: Literal["s3"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None) -> "S3ServiceResource": ...
+ @overload
+-def resource(service_name: Literal["iam"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None) -> IAMServiceResource: ...
++def resource(service_name: Literal["iam"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None) -> "IAMServiceResource": ...
+ @overload
+-def resource(service_name: Literal["ec2"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None) -> EC2ServiceResource: ...
++def resource(service_name: Literal["ec2"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None) -> "EC2ServiceResource": ...
+ 
+ def resource(service_name: Literal["s3", "iam", "ec2"], region_name: Optional[str] = None, endpoint_url: Optional[str] = None) -> boto3.resources.base.ServiceResource:
+     """
+--- toil.orig/src/toil/lib/aws/utils.py
++++ toil/src/toil/lib/aws/utils.py
+@@ -24,10 +24,10 @@
+                     List,
+                     Optional,
+                     Set,
+-                    cast)
++                    cast,
++                    TYPE_CHECKING)
+ from urllib.parse import ParseResult
+ 
+-from mypy_boto3_sdb.type_defs import AttributeTypeDef
+ from toil.lib.aws import session, AWSRegionName, AWSServerErrors
+ from toil.lib.misc import printq
+ from toil.lib.retry import (DEFAULT_DELAYS,
+@@ -37,13 +37,13 @@
+                             old_retry,
+                             retry, ErrorCondition)
+ 
++if TYPE_CHECKING:
++    from mypy_boto3_sdb.type_defs import AttributeTypeDef
++    from mypy_boto3_s3 import S3ServiceResource
++    from mypy_boto3_s3.service_resource import Bucket, Object as S3Object
++
+ try:
+     from botocore.exceptions import ClientError, EndpointConnectionError
+-    from mypy_boto3_iam import IAMClient, IAMServiceResource
+-    from mypy_boto3_s3 import S3Client, S3ServiceResource
+-    from mypy_boto3_s3.literals import BucketLocationConstraintType
+-    from mypy_boto3_s3.service_resource import Bucket, Object
+-    from mypy_boto3_sdb import SimpleDBClient
+ except ImportError:
+     ClientError = None  # type: ignore
+     EndpointConnectionError = None  # type: ignore
+@@ -335,7 +335,7 @@
+ def bucket_location_to_region(location: Optional[str]) -> str:
+     return "us-east-1" if location == "" or location is None else location
+ 
+-def get_object_for_url(url: ParseResult, existing: Optional[bool] = None) -> "Object":
++def get_object_for_url(url: ParseResult, existing: Optional[bool] = None) -> "S3Object":
+         """
+         Extracts a key (object) from a given parsed s3:// URL.
+ 
+@@ -465,7 +465,7 @@
+         yield from page.get(result_attribute_name, [])
+ 
+ 
+-def get_item_from_attributes(attributes: List[AttributeTypeDef], name: str) -> Any:
++def get_item_from_attributes(attributes: List["AttributeTypeDef"], name: str) -> Any:
+     """
+     Given a list of attributes, find the attribute associated with the name and return its corresponding value.
+ 
+@@ -475,7 +475,7 @@
+ 
+     If the attribute with the name does not exist, the function will return None.
+ 
+-    :param attributes: list of attributes as List[AttributeTypeDef]
++    :param attributes: list of attributes
+     :param name: name of the attribute
+     :return: value of the attribute
+     """
+--- toil.orig/src/toil/lib/ec2.py
++++ toil/src/toil/lib/ec2.py
+@@ -16,10 +16,11 @@
+                             old_retry,
+                             retry)
+ 
+-from mypy_boto3_ec2.client import EC2Client
+-from mypy_boto3_autoscaling.client import AutoScalingClient
+-from mypy_boto3_ec2.type_defs import SpotInstanceRequestTypeDef, DescribeInstancesResultTypeDef, InstanceTypeDef
+-from mypy_boto3_ec2.service_resource import EC2ServiceResource, Instance
++if TYPE_CHECKING:
++    from mypy_boto3_ec2.client import EC2Client
++    from mypy_boto3_autoscaling.client import AutoScalingClient
++    from mypy_boto3_ec2.type_defs import SpotInstanceRequestTypeDef, DescribeInstancesResultTypeDef, InstanceTypeDef
++    from mypy_boto3_ec2.service_resource import EC2ServiceResource, Instance
+ 
+ a_short_time = 5
+ a_long_time = 60 * 60
+@@ -69,8 +70,8 @@
+             (resource, to_state, state))
+ 
+ 
+-def wait_transition(boto3_ec2: EC2Client, resource: InstanceTypeDef, from_states: Iterable[str], to_state: str,
+-                    state_getter: Callable[[InstanceTypeDef], str]=lambda x: x.get('State').get('Name')):
++def wait_transition(boto3_ec2: "EC2Client", resource: "InstanceTypeDef", from_states: Iterable[str], to_state: str,
++                    state_getter: Callable[["InstanceTypeDef"], str]=lambda x: x.get('State').get('Name')):
+     """
+     Wait until the specified EC2 resource (instance, image, volume, ...) transitions from any
+     of the given 'from' states to the specified 'to' state. If the instance is found in a state
+@@ -94,21 +95,20 @@
+         raise UnexpectedResourceState(resource, to_state, state)
+ 
+ 
+-def wait_instances_running(boto3_ec2: EC2Client, instances: Iterable[InstanceTypeDef]) -> Generator[InstanceTypeDef, None, None]:
++def wait_instances_running(boto3_ec2: "EC2Client", instances: Iterable["InstanceTypeDef"]) -> Generator["InstanceTypeDef", None, None]:
+     """
+     Wait until no instance in the given iterable is 'pending'. Yield every instance that
+     entered the running state as soon as it does.
+ 
+-    :param EC2Client boto3_ec2: the EC2 connection to use for making requests
+-    :param Iterable[InstanceTypeDef] instances: the instances to wait on
+-    :rtype: Iterable[InstanceTypeDef]
++    :param boto3_ec2: the EC2 connection to use for making requests
++    :param instances: the instances to wait on
+     """
+     running_ids = set()
+     other_ids = set()
+     while True:
+         pending_ids = set()
+         for i in instances:
+-            i: InstanceTypeDef
++            i: "InstanceTypeDef"
+             if i['State']['Name'] == 'pending':
+                 pending_ids.add(i['InstanceId'])
+             elif i['State']['Name'] == 'running':
+@@ -134,7 +134,7 @@
+                 instances = [instance for reservation in described_instances["Reservations"] for instance in reservation["Instances"]]
+ 
+ 
+-def wait_spot_requests_active(boto3_ec2: EC2Client, requests: Iterable[SpotInstanceRequestTypeDef], timeout: float = None, tentative: bool = False) -> Iterable[List[SpotInstanceRequestTypeDef]]:
++def wait_spot_requests_active(boto3_ec2: "EC2Client", requests: Iterable["SpotInstanceRequestTypeDef"], timeout: float = None, tentative: bool = False) -> Iterable[List["SpotInstanceRequestTypeDef"]]:
+     """
+     Wait until no spot request in the given iterator is in the 'open' state or, optionally,
+     a timeout occurs. Yield spot requests as soon as they leave the 'open' state.
+@@ -168,7 +168,7 @@
+             open_ids, eval_ids, fulfill_ids = set(), set(), set()
+             batch = []
+             for r in requests:
+-                r: SpotInstanceRequestTypeDef  # pycharm thinks it is a string
++                r: "SpotInstanceRequestTypeDef"  # pycharm thinks it is a string
+                 if r['State'] == 'open':
+                     open_ids.add(r['InstanceId'])
+                     if r['Status'] == 'pending-evaluation':
+@@ -215,7 +215,7 @@
+             cancel()
+ 
+ 
+-def create_spot_instances(boto3_ec2: EC2Client, price, image_id, spec, num_instances=1, timeout=None, tentative=False, tags=None) -> Generator[DescribeInstancesResultTypeDef, None, None]:
++def create_spot_instances(boto3_ec2: "EC2Client", price, image_id, spec, num_instances=1, timeout=None, tentative=False, tags=None) -> Generator["DescribeInstancesResultTypeDef", None, None]:
+     """
+     Create instances on the spot market.
+     """
+@@ -246,7 +246,7 @@
+                                            tentative=tentative):
+         instance_ids = []
+         for request in batch:
+-            request: SpotInstanceRequestTypeDef
++            request: "SpotInstanceRequestTypeDef"
+             if request["State"] == 'active':
+                 instance_ids.append(request["InstanceId"])
+                 num_active += 1
+@@ -275,12 +275,10 @@
+         logger.warning('%i request(s) entered a state other than active.', num_other)
+ 
+ 
+-def create_ondemand_instances(boto3_ec2: EC2Client, image_id: str, spec: Mapping[str, Any], num_instances: int=1) -> List[InstanceTypeDef]:
++def create_ondemand_instances(boto3_ec2: "EC2Client", image_id: str, spec: Mapping[str, Any], num_instances: int=1) -> List["InstanceTypeDef"]:
+     """
+     Requests the RunInstances EC2 API call but accounts for the race between recently created
+     instance profiles, IAM roles and an instance creation that refers to them.
+-
+-    :rtype: List[InstanceTypeDef]
+     """
+     instance_type = spec['InstanceType']
+     logger.info('Creating %s instance(s) ... ', instance_type)
+@@ -288,7 +286,7 @@
+     for attempt in retry_ec2(retry_for=a_long_time,
+                              retry_while=inconsistencies_detected):
+         with attempt:
+-            boto_instance_list: List[InstanceTypeDef] = boto3_ec2.run_instances(ImageId=image_id,
++            boto_instance_list: List["InstanceTypeDef"] = boto3_ec2.run_instances(ImageId=image_id,
+                                                                                 MinCount=num_instances,
+                                                                                 MaxCount=num_instances,
+                                                                                 **spec)['Instances']
+@@ -296,7 +294,7 @@
+     return boto_instance_list
+ 
+ 
+-def increase_instance_hop_limit(boto3_ec2: EC2Client, boto_instance_list: List[InstanceTypeDef]) -> None:
++def increase_instance_hop_limit(boto3_ec2: "EC2Client", boto_instance_list: List["InstanceTypeDef"]) -> None:
+     """
+     Increase the default HTTP hop limit, as we are running Toil and Kubernetes inside a Docker container, so the default
+     hop limit of 1 will not be enough when grabbing metadata information with ec2_metadata
+@@ -343,7 +341,7 @@
+ 
+ 
+ @retry(intervals=[5, 5, 10, 20, 20, 20, 20], errors=INCONSISTENCY_ERRORS)
+-def create_instances(ec2_resource: EC2ServiceResource,
++def create_instances(ec2_resource: "EC2ServiceResource",
+                      image_id: str,
+                      key_name: str,
+                      instance_type: str,
+@@ -354,7 +352,7 @@
+                      instance_profile_arn: Optional[str] = None,
+                      placement_az: Optional[str] = None,
+                      subnet_id: str = None,
+-                     tags: Optional[Dict[str, str]] = None) -> List[Instance]:
++                     tags: Optional[Dict[str, str]] = None) -> List["Instance"]:
+     """
+     Replaces create_ondemand_instances.  Uses boto3 and returns a list of Boto3 instance dicts.
+ 
+@@ -404,7 +402,7 @@
+ 
+ 
+ @retry(intervals=[5, 5, 10, 20, 20, 20, 20], errors=INCONSISTENCY_ERRORS)
+-def create_launch_template(ec2_client: EC2Client,
++def create_launch_template(ec2_client: "EC2Client",
+                            template_name: str,
+                            image_id: str,
+                            key_name: str,
+@@ -479,7 +477,7 @@
+ 
+ 
+ @retry(intervals=[5, 5, 10, 20, 20, 20, 20], errors=INCONSISTENCY_ERRORS)
+-def create_auto_scaling_group(autoscaling_client: AutoScalingClient,
++def create_auto_scaling_group(autoscaling_client: "AutoScalingClient",
+                               asg_name: str,
+                               launch_template_ids: Dict[str, str],
+                               vpc_subnets: List[str],
+--- toil.orig/src/toil/provisioners/aws/awsProvisioner.py
++++ toil/src/toil/provisioners/aws/awsProvisioner.py
+@@ -35,14 +35,12 @@
+                     Union,
+                     Literal,
+                     cast,
+-                    TypeVar)
++                    TypeVar,
++                    TYPE_CHECKING)
+ from urllib.parse import unquote
+ 
+ # We need these to exist as attributes we can get off of the boto object
+ from botocore.exceptions import ClientError
+-from mypy_boto3_autoscaling.client import AutoScalingClient
+-from mypy_boto3_ec2.service_resource import Instance
+-from mypy_boto3_iam.type_defs import InstanceProfileTypeDef, RoleTypeDef, ListRolePoliciesResponseTypeDef
+ from mypy_extensions import VarArg, KwArg
+ 
+ from toil.lib.aws import zone_to_region, AWSRegionName, AWSServerErrors
+@@ -84,11 +82,14 @@
+ from toil.provisioners.node import Node
+ from toil.lib.aws.session import client as get_client
+ 
+-from mypy_boto3_ec2.client import EC2Client
+-from mypy_boto3_iam.client import IAMClient
+-from mypy_boto3_ec2.type_defs import DescribeInstancesResultTypeDef, InstanceTypeDef, TagTypeDef, BlockDeviceMappingTypeDef, EbsBlockDeviceTypeDef, FilterTypeDef, SpotInstanceRequestTypeDef, TagDescriptionTypeDef, SecurityGroupTypeDef, \
+-    CreateSecurityGroupResultTypeDef, IpPermissionTypeDef, ReservationTypeDef
+-from mypy_boto3_s3.literals import BucketLocationConstraintType
++if TYPE_CHECKING:
++    from mypy_boto3_autoscaling.client import AutoScalingClient
++    from mypy_boto3_ec2.service_resource import Instance
++    from mypy_boto3_iam.type_defs import InstanceProfileTypeDef, RoleTypeDef
++    from mypy_boto3_ec2.client import EC2Client
++    from mypy_boto3_iam.client import IAMClient
++    from mypy_boto3_ec2.type_defs import DescribeInstancesResultTypeDef, InstanceTypeDef, TagTypeDef, BlockDeviceMappingTypeDef, EbsBlockDeviceTypeDef, FilterTypeDef, SpotInstanceRequestTypeDef, TagDescriptionTypeDef, SecurityGroupTypeDef, \
++        CreateSecurityGroupResultTypeDef, IpPermissionTypeDef, ReservationTypeDef
+ 
+ logger = logging.getLogger(__name__)
+ logging.getLogger("boto").setLevel(logging.CRITICAL)
+@@ -160,7 +161,7 @@
+     return wrapper
+ 
+ 
+-def awsFilterImpairedNodes(nodes: List[InstanceTypeDef], boto3_ec2: EC2Client) -> List[InstanceTypeDef]:
++def awsFilterImpairedNodes(nodes: List["InstanceTypeDef"], boto3_ec2: "EC2Client") -> List["InstanceTypeDef"]:
+     # if TOIL_AWS_NODE_DEBUG is set don't terminate nodes with
+     # failing status checks so they can be debugged
+     nodeDebug = os.environ.get('TOIL_AWS_NODE_DEBUG') in ('True', 'TRUE', 'true', True)
+@@ -180,10 +181,10 @@
+     pass
+ 
+ 
+-def collapse_tags(instance_tags: List[TagTypeDef]) -> Dict[str, str]:
++def collapse_tags(instance_tags: List["TagTypeDef"]) -> Dict[str, str]:
+     """
+     Collapse tags from boto3 format to node format
+-    :param instance_tags: tags as list of TagTypeDef
++    :param instance_tags: tags as a list 
+     :return: Dict of tags
+     """
+     collapsed_tags: Dict[str, str] = dict()
+@@ -254,7 +255,7 @@
+         """
+         from ec2_metadata import ec2_metadata
+         boto3_ec2 = self.aws.client(self._region, 'ec2')
+-        instance: InstanceTypeDef = boto3_ec2.describe_instances(InstanceIds=[ec2_metadata.instance_id])["Reservations"][0]["Instances"][0]
++        instance: "InstanceTypeDef" = boto3_ec2.describe_instances(InstanceIds=[ec2_metadata.instance_id])["Reservations"][0]["Instances"][0]
+         # The cluster name is the same as the name of the leader.
+         self.clusterName: str = "default-toil-cluster-name"
+         for tag in instance["Tags"]:
+@@ -421,7 +422,7 @@
+         leader_tags[_TAG_KEY_TOIL_NODE_TYPE] = 'leader'
+         logger.debug('Launching leader with tags: %s', leader_tags)
+ 
+-        instances: List[Instance] = create_instances(self.aws.resource(self._region, 'ec2'),
++        instances: List["Instance"] = create_instances(self.aws.resource(self._region, 'ec2'),
+                                                      image_id=self._discoverAMI(),
+                                                      num_instances=1,
+                                                      key_name=self._keyName,
+@@ -521,7 +522,7 @@
+         acls = set(self._get_subnet_acls(base_subnet_id))
+ 
+         # Compose a filter that selects the subnets we might want
+-        filters: List[FilterTypeDef] = [{
++        filters: List["FilterTypeDef"] = [{
+             'Name': 'vpc-id',
+             'Values': [vpc_id]
+         }, {
+@@ -584,7 +585,7 @@
+         """
+ 
+         # Compose a filter that selects the default subnet in the AZ
+-        filters: List[FilterTypeDef] = [{
++        filters: List["FilterTypeDef"] = [{
+             'Name': 'default-for-az',
+             'Values': ['true']
+         }, {
+@@ -691,7 +692,7 @@
+         removed = False
+         instances = self._get_nodes_in_cluster_boto3(include_stopped_nodes=True)
+         spotIDs = self._getSpotRequestIDs()
+-        boto3_ec2: EC2Client = self.aws.client(region=self._region, service_name="ec2")
++        boto3_ec2: "EC2Client" = self.aws.client(region=self._region, service_name="ec2")
+         if spotIDs:
+             boto3_ec2.cancel_spot_instance_requests(SpotInstanceRequestIds=spotIDs)
+             # self.aws.boto2(self._region, 'ec2').cancel_spot_instance_requests(request_ids=spotIDs)
+@@ -734,7 +735,7 @@
+             removed = False
+             for attempt in old_retry(timeout=300, predicate=expectedShutdownErrors):
+                 with attempt:
+-                    security_groups: List[SecurityGroupTypeDef] = boto3_ec2.describe_security_groups()["SecurityGroups"]
++                    security_groups: List["SecurityGroupTypeDef"] = boto3_ec2.describe_security_groups()["SecurityGroups"]
+                     for security_group in security_groups:
+                         # TODO: If we terminate the leader and the workers but
+                         # miss the security group, we won't find it now because
+@@ -898,7 +899,7 @@
+                             },
+                             'SubnetId': subnet_id}
+ 
+-        instancesLaunched: List[InstanceTypeDef] = []
++        instancesLaunched: List["InstanceTypeDef"] = []
+ 
+         for attempt in old_retry(predicate=awsRetryPredicate):
+             with attempt:
+@@ -913,7 +914,7 @@
+                 else:
+                     logger.debug('Launching %s preemptible nodes', numNodes)
+                     # force generator to evaluate
+-                    generatedInstancesLaunched: List[DescribeInstancesResultTypeDef] = list(create_spot_instances(boto3_ec2=boto3_ec2,
++                    generatedInstancesLaunched: List["DescribeInstancesResultTypeDef"] = list(create_spot_instances(boto3_ec2=boto3_ec2,
+                                                                                                                   price=spotBid,
+                                                                                                                   image_id=self._discoverAMI(),
+                                                                                                                   tags={_TAG_KEY_TOIL_CLUSTER_NAME: self.clusterName},
+@@ -922,7 +923,7 @@
+                                                                                                                   tentative=True)
+                                                                                             )
+                     # flatten the list
+-                    flatten_reservations: List[ReservationTypeDef] = [reservation for subdict in generatedInstancesLaunched for reservation in subdict["Reservations"] for key, value in subdict.items()]
++                    flatten_reservations: List["ReservationTypeDef"] = [reservation for subdict in generatedInstancesLaunched for reservation in subdict["Reservations"] for key, value in subdict.items()]
+                     # get a flattened list of all requested instances, as before instancesLaunched is a dict of reservations which is a dict of instance requests
+                     instancesLaunched = [instance for instances in flatten_reservations for instance in instances['Instances']]
+ 
+@@ -969,7 +970,7 @@
+         assert self._leaderPrivateIP
+         entireCluster = self._get_nodes_in_cluster_boto3(instance_type=instance_type)
+         logger.debug('All nodes in cluster: %s', entireCluster)
+-        workerInstances: List[InstanceTypeDef] = [i for i in entireCluster if i["PrivateIpAddress"] != self._leaderPrivateIP]
++        workerInstances: List["InstanceTypeDef"] = [i for i in entireCluster if i["PrivateIpAddress"] != self._leaderPrivateIP]
+         logger.debug('All workers found in cluster: %s', workerInstances)
+         if preemptible is not None:
+             workerInstances = [i for i in workerInstances if preemptible == (i["SpotInstanceRequestId"] is not None)]
+@@ -1018,13 +1019,12 @@
+         denamespaced = '/' + '_'.join(s.replace('_', '/') for s in namespaced_name.split('__'))
+         return denamespaced.startswith(self._toNameSpace())
+ 
+-    def _getLeaderInstanceBoto3(self) -> InstanceTypeDef:
++    def _getLeaderInstanceBoto3(self) -> "InstanceTypeDef":
+         """
+         Get the Boto 3 instance for the cluster's leader.
+-        :return: InstanceTypeDef
+         """
+         # Tags are stored differently in Boto 3
+-        instances: List[InstanceTypeDef] = self._get_nodes_in_cluster_boto3(include_stopped_nodes=True)
++        instances: List["InstanceTypeDef"] = self._get_nodes_in_cluster_boto3(include_stopped_nodes=True)
+         instances.sort(key=lambda x: x["LaunchTime"])
+         try:
+             leader = instances[0]  # assume leader was launched first
+@@ -1042,14 +1042,14 @@
+             )
+         return leader
+ 
+-    def _getLeaderInstance(self) -> InstanceTypeDef:
++    def _getLeaderInstance(self) -> "InstanceTypeDef":
+         """
+         Get the Boto 2 instance for the cluster's leader.
+         """
+         instances = self._get_nodes_in_cluster_boto3(include_stopped_nodes=True)
+         instances.sort(key=lambda x: x["LaunchTime"])
+         try:
+-            leader: InstanceTypeDef = instances[0]  # assume leader was launched first
++            leader: "InstanceTypeDef" = instances[0]  # assume leader was launched first
+         except IndexError:
+             raise NoSuchClusterException(self.clusterName)
+         tagged_node_type: str = 'leader'
+@@ -1069,7 +1069,7 @@
+         """
+         Get the leader for the cluster as a Toil Node object.
+         """
+-        leader: InstanceTypeDef = self._getLeaderInstanceBoto3()
++        leader: "InstanceTypeDef" = self._getLeaderInstanceBoto3()
+ 
+         leaderNode = Node(publicIP=leader["PublicIpAddress"], privateIP=leader["PrivateIpAddress"],
+                           name=leader["InstanceId"], launchTime=leader["LaunchTime"], nodeType=None,
+@@ -1085,20 +1085,20 @@
+ 
+     @classmethod
+     @awsRetry
+-    def _addTag(cls, boto3_ec2: EC2Client, instance: InstanceTypeDef, key: str, value: str) -> None:
++    def _addTag(cls, boto3_ec2: "EC2Client", instance: "InstanceTypeDef", key: str, value: str) -> None:
+         if instance.get('Tags') is None:
+             instance['Tags'] = []
+-        new_tag: TagTypeDef = {"Key": key, "Value": value}
++        new_tag: "TagTypeDef" = {"Key": key, "Value": value}
+         boto3_ec2.create_tags(Resources=[instance["InstanceId"]], Tags=[new_tag])
+ 
+     @classmethod
+-    def _addTags(cls, boto3_ec2: EC2Client, instances: List[InstanceTypeDef], tags: Dict[str, str]) -> None:
++    def _addTags(cls, boto3_ec2: "EC2Client", instances: List["InstanceTypeDef"], tags: Dict[str, str]) -> None:
+         for instance in instances:
+             for key, value in tags.items():
+                 cls._addTag(boto3_ec2, instance, key, value)
+ 
+     @classmethod
+-    def _waitForIP(cls, instance: InstanceTypeDef) -> None:
++    def _waitForIP(cls, instance: "InstanceTypeDef") -> None:
+         """
+         Wait until the instances has a public IP address assigned to it.
+ 
+@@ -1111,7 +1111,7 @@
+                 logger.debug('...got ip')
+                 break
+ 
+-    def _terminateInstances(self, instances: List[InstanceTypeDef]) -> None:
++    def _terminateInstances(self, instances: List["InstanceTypeDef"]) -> None:
+         instanceIDs = [x["InstanceId"] for x in instances]
+         self._terminateIDs(instanceIDs)
+         logger.info('... Waiting for instance(s) to shut down...')
+@@ -1168,14 +1168,14 @@
+                     logger.debug('... Succesfully deleted instance profile %s', profile_name)
+ 
+     @classmethod
+-    def _getBoto3BlockDeviceMapping(cls, type_info: InstanceType, rootVolSize: int = 50) -> List[BlockDeviceMappingTypeDef]:
++    def _getBoto3BlockDeviceMapping(cls, type_info: InstanceType, rootVolSize: int = 50) -> List["BlockDeviceMappingTypeDef"]:
+         # determine number of ephemeral drives via cgcloud-lib (actually this is moved into toil's lib
+         bdtKeys = [''] + [f'/dev/xvd{c}' for c in string.ascii_lowercase[1:]]
+-        bdm_list: List[BlockDeviceMappingTypeDef] = []
++        bdm_list: List["BlockDeviceMappingTypeDef"] = []
+         # Change root volume size to allow for bigger Docker instances
+-        root_vol: EbsBlockDeviceTypeDef = {"DeleteOnTermination": True,
++        root_vol: "EbsBlockDeviceTypeDef" = {"DeleteOnTermination": True,
+                                            "VolumeSize": rootVolSize}
+-        bdm: BlockDeviceMappingTypeDef = {"DeviceName": "/dev/xvda", "Ebs": root_vol}
++        bdm: "BlockDeviceMappingTypeDef" = {"DeviceName": "/dev/xvda", "Ebs": root_vol}
+         bdm_list.append(bdm)
+         # The first disk is already attached for us so start with 2nd.
+         # Disk count is weirdly a float in our instance database, so make it an int here.
+@@ -1191,13 +1191,13 @@
+         return bdm_list
+ 
+     @classmethod
+-    def _getBoto3BlockDeviceMappings(cls, type_info: InstanceType, rootVolSize: int = 50) -> List[BlockDeviceMappingTypeDef]:
++    def _getBoto3BlockDeviceMappings(cls, type_info: InstanceType, rootVolSize: int = 50) -> List["BlockDeviceMappingTypeDef"]:
+         """
+         Get block device mappings for the root volume for a worker.
+         """
+ 
+         # Start with the root
+-        bdms: List[BlockDeviceMappingTypeDef] = [{
++        bdms: List["BlockDeviceMappingTypeDef"] = [{
+             'DeviceName': '/dev/xvda',
+             'Ebs': {
+                 'DeleteOnTermination': True,
+@@ -1222,21 +1222,21 @@
+         return bdms
+ 
+     @awsRetry
+-    def _get_nodes_in_cluster_boto3(self, instance_type: Optional[str] = None, include_stopped_nodes: bool = False) -> List[InstanceTypeDef]:
++    def _get_nodes_in_cluster_boto3(self, instance_type: Optional[str] = None, include_stopped_nodes: bool = False) -> List["InstanceTypeDef"]:
+         """
+         Get Boto3 instance objects for all nodes in the cluster.
+         """
+-        boto3_ec2: EC2Client = self.aws.client(region=self._region, service_name='ec2')
+-        instance_filter: FilterTypeDef = {'Name': 'instance.group-name', 'Values': [self.clusterName]}
+-        describe_response: DescribeInstancesResultTypeDef = boto3_ec2.describe_instances(Filters=[instance_filter])
+-        all_instances: List[InstanceTypeDef] = []
++        boto3_ec2: "EC2Client" = self.aws.client(region=self._region, service_name='ec2')
++        instance_filter: "FilterTypeDef" = {'Name': 'instance.group-name', 'Values': [self.clusterName]}
++        describe_response: "DescribeInstancesResultTypeDef" = boto3_ec2.describe_instances(Filters=[instance_filter])
++        all_instances: List["InstanceTypeDef"] = []
+         for reservation in describe_response['Reservations']:
+             instances = reservation['Instances']
+             all_instances.extend(instances)
+ 
+         # all_instances = self.aws.boto2(self._region, 'ec2').get_only_instances(filters={'instance.group-name': self.clusterName})
+ 
+-        def instanceFilter(i: InstanceTypeDef) -> bool:
++        def instanceFilter(i: "InstanceTypeDef") -> bool:
+             # filter by type only if nodeType is true
+             rightType = not instance_type or i['InstanceType'] == instance_type
+             rightState = i['State']['Name'] == 'running' or i['State']['Name'] == 'pending'
+@@ -1252,11 +1252,11 @@
+         """
+ 
+         # Grab the connection we need to use for this operation.
+-        ec2: EC2Client = self.aws.client(self._region, 'ec2')
++        ec2: "EC2Client" = self.aws.client(self._region, 'ec2')
+ 
+-        requests: List[SpotInstanceRequestTypeDef] = ec2.describe_spot_instance_requests()["SpotInstanceRequests"]
+-        tag_filter: FilterTypeDef = {"Name": "tag:" + _TAG_KEY_TOIL_CLUSTER_NAME, "Values": [self.clusterName]}
+-        tags: List[TagDescriptionTypeDef] = ec2.describe_tags(Filters=[tag_filter])["Tags"]
++        requests: List["SpotInstanceRequestTypeDef"] = ec2.describe_spot_instance_requests()["SpotInstanceRequests"]
++        tag_filter: "FilterTypeDef" = {"Name": "tag:" + _TAG_KEY_TOIL_CLUSTER_NAME, "Values": [self.clusterName]}
++        tags: List["TagDescriptionTypeDef"] = ec2.describe_tags(Filters=[tag_filter])["Tags"]
+         idsToCancel = [tag["ResourceId"] for tag in tags]
+         return [request["SpotInstanceRequestId"] for request in requests if request["InstanceId"] in idsToCancel]
+ 
+@@ -1270,7 +1270,7 @@
+ 
+         # Grab the connection we need to use for this operation.
+         # The VPC connection can do anything the EC2 one can do, but also look at subnets.
+-        boto3_ec2: EC2Client = self.aws.client(region=self._region, service_name="ec2")
++        boto3_ec2: "EC2Client" = self.aws.client(region=self._region, service_name="ec2")
+ 
+         vpc_id = None
+         if self._leader_subnet:
+@@ -1285,7 +1285,7 @@
+             if vpc_id is not None:
+                 other["VpcId"] = vpc_id
+             # mypy stubs don't explicitly state kwargs even though documentation allows it, and mypy gets confused
+-            web_response: CreateSecurityGroupResultTypeDef = boto3_ec2.create_security_group(**other)  # type: ignore[arg-type]
++            web_response: "CreateSecurityGroupResultTypeDef" = boto3_ec2.create_security_group(**other)  # type: ignore[arg-type]
+         except ClientError as e:
+             if get_error_status(e) == 400 and 'already exists' in get_error_body(e):
+                 pass
+@@ -1294,7 +1294,7 @@
+         else:
+             for attempt in old_retry(predicate=group_not_found, timeout=300):
+                 with attempt:
+-                    ip_permissions: List[IpPermissionTypeDef] = [{"IpProtocol": "tcp",
++                    ip_permissions: List["IpPermissionTypeDef"] = [{"IpProtocol": "tcp",
+                                                                   "FromPort": 22,
+                                                                   "ToPort": 22,
+                                                                   "IpRanges": [
+@@ -1331,7 +1331,7 @@
+                     sg["GroupId"] in self._leaderSecurityGroupIDs)]
+ 
+     @awsRetry
+-    def _get_launch_template_ids(self, filters: Optional[List[FilterTypeDef]] = None) -> List[str]:
++    def _get_launch_template_ids(self, filters: Optional[List["FilterTypeDef"]] = None) -> List[str]:
+         """
+         Find all launch templates associated with the cluster.
+ 
+@@ -1339,10 +1339,10 @@
+         """
+ 
+         # Grab the connection we need to use for this operation.
+-        ec2: EC2Client = self.aws.client(self._region, 'ec2')
++        ec2: "EC2Client" = self.aws.client(self._region, 'ec2')
+ 
+         # How do we match the right templates?
+-        combined_filters: List[FilterTypeDef] = [{'Name': 'tag:' + _TAG_KEY_TOIL_CLUSTER_NAME, 'Values': [self.clusterName]}]
++        combined_filters: List["FilterTypeDef"] = [{'Name': 'tag:' + _TAG_KEY_TOIL_CLUSTER_NAME, 'Values': [self.clusterName]}]
+ 
+         if filters:
+             # Add any user-specified filters
+@@ -1389,7 +1389,7 @@
+         lt_name = self._name_worker_launch_template(instance_type, preemptible=preemptible)
+ 
+         # How do we match the right templates?
+-        filters: List[FilterTypeDef] = [{'Name': 'launch-template-name', 'Values': [lt_name]}]
++        filters: List["FilterTypeDef"] = [{'Name': 'launch-template-name', 'Values': [lt_name]}]
+ 
+         # Get the templates
+         templates: List[str] = self._get_launch_template_ids(filters=filters)
+@@ -1480,13 +1480,13 @@
+         """
+ 
+         # Grab the connection we need to use for this operation.
+-        autoscaling: AutoScalingClient = self.aws.client(self._region, 'autoscaling')
++        autoscaling: "AutoScalingClient" = self.aws.client(self._region, 'autoscaling')
+ 
+         # AWS won't filter ASGs server-side for us in describe_auto_scaling_groups.
+         # So we search instances of applied tags for the ASGs they are on.
+         # The ASGs tagged with our cluster are our ASGs.
+         # The filtering is on different fields of the tag object itself.
+-        filters: List[FilterTypeDef] = [{'Name': 'key',
++        filters: List["FilterTypeDef"] = [{'Name': 'key',
+                                          'Values': [_TAG_KEY_TOIL_CLUSTER_NAME]},
+                                         {'Name': 'value',
+                                          'Values': [self.clusterName]}]
+@@ -1610,8 +1610,8 @@
+         for result in boto3_pager(boto3_iam.list_roles, 'Roles'):
+             # For each Boto2 role object
+             # Grab out the name
+-            cast(RoleTypeDef, result)
+-            name = result['RoleName']
++            result2 = cast("RoleTypeDef", result)
++            name = result2['RoleName']
+             if self._is_our_namespaced_name(name):
+                 # If it looks like ours, it is ours.
+                 results.append(name)
+@@ -1628,8 +1628,8 @@
+         for result in boto3_pager(boto3_iam.list_instance_profiles,
+                                   'InstanceProfiles'):
+             # Grab out the name
+-            cast(InstanceProfileTypeDef, result)
+-            name = result['InstanceProfileName']
++            result2 = cast("InstanceProfileTypeDef", result)
++            name = result2['InstanceProfileName']
+             if self._is_our_namespaced_name(name):
+                 # If it looks like ours, it is ours.
+                 results.append(name)
+@@ -1644,7 +1644,7 @@
+         """
+ 
+         # Grab the connection we need to use for this operation.
+-        boto3_iam: IAMClient = self.aws.client(self._region, 'iam')
++        boto3_iam: "IAMClient" = self.aws.client(self._region, 'iam')
+ 
+         return [item['InstanceProfileName'] for item in boto3_pager(boto3_iam.list_instance_profiles_for_role,
+                                                                     'InstanceProfiles',
+@@ -1661,7 +1661,7 @@
+         """
+ 
+         # Grab the connection we need to use for this operation.
+-        boto3_iam: IAMClient = self.aws.client(self._region, 'iam')
++        boto3_iam: "IAMClient" = self.aws.client(self._region, 'iam')
+ 
+         # TODO: we don't currently use attached policies.
+ 
+@@ -1677,7 +1677,7 @@
+         """
+ 
+         # Grab the connection we need to use for this operation.
+-        boto3_iam: IAMClient = self.aws.client(self._region, 'iam')
++        boto3_iam: "IAMClient" = self.aws.client(self._region, 'iam')
+ 
+         return list(boto3_pager(boto3_iam.list_role_policies, 'PolicyNames', RoleName=role_name))
+ 
+@@ -1755,7 +1755,7 @@
+         """
+ 
+         # Grab the connection we need to use for this operation.
+-        boto3_iam: IAMClient = self.aws.client(self._region, 'iam')
++        boto3_iam: "IAMClient" = self.aws.client(self._region, 'iam')
+ 
+         # Make sure we can tell our roles apart from roles for other clusters
+         aws_role_name = self._namespace_name(local_role_name)
+@@ -1805,7 +1805,7 @@
+         """
+ 
+         # Grab the connection we need to use for this operation.
+-        boto3_iam: IAMClient = self.aws.client(self._region, 'iam')
++        boto3_iam: "IAMClient" = self.aws.client(self._region, 'iam')
+ 
+         policy = dict(iam_full=self.full_policy('iam'), ec2_full=self.full_policy('ec2'),
+                       s3_full=self.full_policy('s3'), sbd_full=self.full_policy('sdb'))
+@@ -1817,7 +1817,7 @@
+ 
+         try:
+             profile_result = boto3_iam.get_instance_profile(InstanceProfileName=iamRoleName)
+-            profile: InstanceProfileTypeDef = profile_result["InstanceProfile"]
++            profile: "InstanceProfileTypeDef" = profile_result["InstanceProfile"]
+             logger.debug("Have preexisting instance profile: %s", profile)
+         except boto3_iam.exceptions.NoSuchEntityException:
+             profile_result = boto3_iam.create_instance_profile(InstanceProfileName=iamRoleName)
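+
+ The pattern above is the whole point of the skip-mypy-boto3 patch:
+ quoted annotations are forward references that type checkers resolve
+ but the interpreter never evaluates, so the mypy-boto3 stub packages
+ (not yet packaged for Debian) are only imported under TYPE_CHECKING.
+ A minimal sketch of the idiom, reusing names from the hunks above
+ (the helper function itself is illustrative, not toil code):
+
+     from typing import TYPE_CHECKING, List
+
+     if TYPE_CHECKING:
+         # Seen only by mypy; never executed, so the stubs may be absent.
+         from mypy_boto3_ec2 import EC2Client
+         from mypy_boto3_ec2.type_defs import InstanceTypeDef
+
+     def terminate_all(ec2: "EC2Client", instances: List["InstanceTypeDef"]) -> None:
+         # Quoted types need not exist at runtime; boto3 itself is
+         # untyped and accepts the same call regardless.
+         ec2.terminate_instances(
+             InstanceIds=[i["InstanceId"] for i in instances])
+
+ The list_roles/list_instance_profiles hunks also fix a latent no-op:
+ typing.cast() merely returns its argument, so a bare
+ cast(RoleTypeDef, result) narrowed nothing. Binding the return value
+ (result2 = cast("RoleTypeDef", result)) makes the narrowing usable.
+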
+--- toil.orig/src/toil/test/lib/aws/test_s3.py
++++ toil/src/toil/test/lib/aws/test_s3.py
+@@ -14,7 +14,7 @@
+ import logging
+ import os
+ import uuid
+-from typing import Optional
++from typing import Optional, TYPE_CHECKING
+ 
+ from toil.jobStores.aws.jobStore import AWSJobStore
+ from toil.lib.aws.session import establish_boto3_session
+@@ -29,11 +29,12 @@
+ class S3Test(ToilTest):
+     """Confirm the workarounds for us-east-1."""
+ 
+-    from mypy_boto3_s3 import S3ServiceResource
+-    from mypy_boto3_s3.service_resource import Bucket
++    if TYPE_CHECKING:
++        from mypy_boto3_s3 import S3ServiceResource
++        from mypy_boto3_s3.service_resource import Bucket
+ 
+-    s3_resource: Optional[S3ServiceResource]
+-    bucket: Optional[Bucket]
++    s3_resource: Optional["S3ServiceResource"]
++    bucket: Optional["Bucket"]
+ 
+     @classmethod
+     def setUpClass(cls) -> None:
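+
+ Note why the unguarded class-body imports had to go: a "from ... import"
+ inside a class body runs when the class object is created, so merely
+ importing the test module would raise ImportError on systems without
+ mypy-boto3-s3. The quoted annotations, by contrast, are stored
+ unresolved. A quick sketch of the runtime behaviour (the exact repr
+ in the final comment varies by Python version):
+
+     from typing import TYPE_CHECKING, Optional
+
+     if TYPE_CHECKING:
+         from mypy_boto3_s3 import S3ServiceResource  # stubs-only import
+
+     class BucketHolder:
+         # Recorded in __annotations__ as an unresolved ForwardRef;
+         # nothing is imported when the class is created.
+         s3_resource: Optional["S3ServiceResource"] = None
+
+     print(BucketHolder.__annotations__)
+     # e.g. {'s3_resource': typing.Optional[ForwardRef('S3ServiceResource')]}
+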
+--- toil.orig/src/toil/test/provisioners/aws/awsProvisionerTest.py
++++ toil/src/toil/test/provisioners/aws/awsProvisionerTest.py
+@@ -19,13 +19,11 @@
+ from abc import abstractmethod
+ from inspect import getsource
+ from textwrap import dedent
+-from typing import Optional, List
++from typing import Optional, List, TYPE_CHECKING
+ from uuid import uuid4
+ 
+ import botocore.exceptions
+ import pytest
+-from mypy_boto3_ec2 import EC2Client
+-from mypy_boto3_ec2.type_defs import EbsInstanceBlockDeviceTypeDef, InstanceTypeDef, InstanceBlockDeviceMappingTypeDef, FilterTypeDef, DescribeVolumesResultTypeDef, VolumeTypeDef
+ 
+ from toil.provisioners import cluster_factory
+ from toil.provisioners.aws.awsProvisioner import AWSProvisioner
+@@ -39,6 +37,12 @@
+ from toil.test.provisioners.clusterTest import AbstractClusterTest
+ from toil.version import exactPython
+ 
++if TYPE_CHECKING:
++    from mypy_boto3_ec2 import EC2Client
++    from mypy_boto3_ec2.type_defs import EbsInstanceBlockDeviceTypeDef, InstanceTypeDef, InstanceBlockDeviceMappingTypeDef, FilterTypeDef, DescribeVolumesResultTypeDef, VolumeTypeDef
++
++
++
+ log = logging.getLogger(__name__)
+ 
+ 
+@@ -118,13 +122,13 @@
+         subprocess.check_call(['toil', 'rsync-cluster', '--insecure', '-p=aws', '-z', self.zone, self.clusterName] + [src, dest])
+ 
+     def getRootVolID(self) -> str:
+-        instances: List[InstanceTypeDef] = self.cluster._get_nodes_in_cluster_boto3()
++        instances: List["InstanceTypeDef"] = self.cluster._get_nodes_in_cluster_boto3()
+         instances.sort(key=lambda x: x.get("LaunchTime"))
+-        leader: InstanceTypeDef = instances[0]  # assume leader was launched first
++        leader: "InstanceTypeDef" = instances[0]  # assume leader was launched first
+ 
+-        bdm: Optional[List[InstanceBlockDeviceMappingTypeDef]] = leader.get("BlockDeviceMappings")
++        bdm: Optional[List["InstanceBlockDeviceMappingTypeDef"]] = leader.get("BlockDeviceMappings")
+         assert bdm is not None
+-        root_block_device: Optional[EbsInstanceBlockDeviceTypeDef] = None
++        root_block_device: Optional["EbsInstanceBlockDeviceTypeDef"] = None
+         for device in bdm:
+             if device["DeviceName"] == "/dev/xvda":
+                 root_block_device = device["Ebs"]
+@@ -202,9 +206,9 @@
+ 
+         volumeID = self.getRootVolID()
+         self.cluster.destroyCluster()
+-        boto3_ec2: EC2Client = self.aws.client(region=self.region, service_name="ec2")
+-        volume_filter: FilterTypeDef = {"Name": "volume-id", "Values": [volumeID]}
+-        volumes: Optional[List[VolumeTypeDef]] = None
++        boto3_ec2: "EC2Client" = self.aws.client(region=self.region, service_name="ec2")
++        volume_filter: "FilterTypeDef" = {"Name": "volume-id", "Values": [volumeID]}
++        volumes: Optional[List["VolumeTypeDef"]] = None
+         for attempt in range(6):
+             # https://github.com/BD2KGenomics/toil/issues/1567
+             # retry this for up to 1 minute until the volume disappears
+@@ -261,10 +265,10 @@
+         :return: volumeID
+         """
+         volumeID = super().getRootVolID()
+-        boto3_ec2: EC2Client = self.aws.client(region=self.region, service_name="ec2")
+-        volume_filter: FilterTypeDef = {"Name": "volume-id", "Values": [volumeID]}
+-        volumes: DescribeVolumesResultTypeDef = boto3_ec2.describe_volumes(Filters=[volume_filter])
+-        root_volume: VolumeTypeDef = volumes["Volumes"][0]  # should be first
++        boto3_ec2: "EC2Client" = self.aws.client(region=self.region, service_name="ec2")
++        volume_filter: "FilterTypeDef" = {"Name": "volume-id", "Values": [volumeID]}
++        volumes: "DescribeVolumesResultTypeDef" = boto3_ec2.describe_volumes(Filters=[volume_filter])
++        root_volume: "VolumeTypeDef" = volumes["Volumes"][0]  # should be first
+         # test that the leader is given adequate storage
+         self.assertGreaterEqual(root_volume["Size"], self.requestedLeaderStorage)
+         return volumeID
+@@ -312,7 +316,7 @@
+         # visible to EC2 read requests immediately after the create returns,
+         # which is the last thing that starting the cluster does.
+         time.sleep(10)
+-        nodes: List[InstanceTypeDef] = self.cluster._get_nodes_in_cluster_boto3()
++        nodes: List["InstanceTypeDef"] = self.cluster._get_nodes_in_cluster_boto3()
+         nodes.sort(key=lambda x: x.get("LaunchTime"))
+         # assuming that leader is first
+         workers = nodes[1:]
+@@ -321,21 +325,21 @@
+         # test that workers have expected storage size
+         # just use the first worker
+         worker = workers[0]
+-        boto3_ec2: EC2Client = self.aws.client(region=self.region, service_name="ec2")
++        boto3_ec2: "EC2Client" = self.aws.client(region=self.region, service_name="ec2")
+ 
+-        worker: InstanceTypeDef = next(wait_instances_running(boto3_ec2, [worker]))
++        worker: "InstanceTypeDef" = next(wait_instances_running(boto3_ec2, [worker]))
+ 
+-        bdm: Optional[List[InstanceBlockDeviceMappingTypeDef]] = worker.get("BlockDeviceMappings")
++        bdm: Optional[List["InstanceBlockDeviceMappingTypeDef"]] = worker.get("BlockDeviceMappings")
+         assert bdm is not None
+-        root_block_device: Optional[EbsInstanceBlockDeviceTypeDef] = None
++        root_block_device: Optional["EbsInstanceBlockDeviceTypeDef"] = None
+         for device in bdm:
+             if device["DeviceName"] == "/dev/xvda":
+                 root_block_device = device["Ebs"]
+         assert root_block_device is not None
+         assert root_block_device.get("VolumeId") is not None  # TypedDicts cannot have runtime type checks
+ 
+-        volume_filter: FilterTypeDef = {"Name": "volume-id", "Values": [root_block_device["VolumeId"]]}
+-        root_volume: VolumeTypeDef = boto3_ec2.describe_volumes(Filters=[volume_filter])["Volumes"][0]  # should be first
++        volume_filter: "FilterTypeDef" = {"Name": "volume-id", "Values": [root_block_device["VolumeId"]]}
++        root_volume: "VolumeTypeDef" = boto3_ec2.describe_volumes(Filters=[volume_filter])["Volumes"][0]  # should be first
+         self.assertGreaterEqual(root_volume.get("Size"), self.requestedNodeStorage)
+ 
+     def _runScript(self, toilOptions):
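+
+ The root-volume assertions above all walk the same structure from
+ boto3's describe_instances output. A standalone sketch of that lookup,
+ using plain dicts since the TypedDicts are erased at runtime (the
+ helper name is illustrative, not toil code):
+
+     from typing import Any, Dict, Optional
+
+     def find_root_volume_id(instance: Dict[str, Any]) -> Optional[str]:
+         # BlockDeviceMappings can be missing on freshly launched nodes.
+         for device in instance.get("BlockDeviceMappings", []):
+             if device["DeviceName"] == "/dev/xvda":
+                 # The Ebs sub-dict carries the attachment's VolumeId.
+                 return device["Ebs"].get("VolumeId")
+         return None
+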
+--- toil.orig/src/toil/test/server/serverTest.py
++++ toil/src/toil/test/server/serverTest.py
+@@ -21,7 +21,7 @@
+ import zipfile
+ from abc import abstractmethod
+ from io import BytesIO
+-from typing import Optional
++from typing import Optional, TYPE_CHECKING
+ from urllib.parse import urlparse
+ 
+ try:
+@@ -184,14 +184,9 @@
+     Base class for tests that need a bucket.
+     """
+ 
+-    try:
+-        # We need the class to be evaluateable without the AWS modules, if not
+-        # runnable
++    if TYPE_CHECKING:
+         from mypy_boto3_s3 import S3ServiceResource
+         from mypy_boto3_s3.service_resource import Bucket
+-    except ImportError:
+-        pass
+-
+ 
+     region: Optional[str]
+     s3_resource: Optional['S3ServiceResource']
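+
+ The try/except block this hunk replaces was the weakest variant of the
+ guard: it really attempted the import at runtime and merely swallowed
+ the failure. The contrast in brief (TYPE_CHECKING is False at runtime
+ but assumed True by type checkers, so the import is never attempted
+ when the tests actually run):
+
+     # before: import is executed; failure silently ignored
+     try:
+         from mypy_boto3_s3 import S3ServiceResource
+     except ImportError:
+         pass
+
+     # after: never executed outside type checking
+     from typing import TYPE_CHECKING
+     if TYPE_CHECKING:
+         from mypy_boto3_s3 import S3ServiceResource
+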


=====================================
debian/rules
=====================================
@@ -7,6 +7,7 @@ export PYBUILD_DESTDIR_python3=debian/toil/
 
 #export PYBUILD_DISABLE=test
 export PYBUILD_DISABLE_python2=1
+
 %:
 	dh $@ --buildsystem=pybuild
 
@@ -23,7 +24,7 @@ override_dh_auto_install:
 override_dh_auto_test:
 ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
 	PYBUILD_SYSTEM=custom \
-		PYBUILD_TEST_ARGS='HOME={home_dir} {interpreter} setup.py develop --user && PYTHONPATH={dir}/src:$$PYTHONPATH PATH={home_dir}/.local/bin/:$$PATH TOIL_TEST_QUICK=True TOIL_SKIP_DOCKER=True {interpreter} -m pytest -n auto --dist loadscope -vv -W ignore --ignore src/toil/test/lib/aws/test_s3.py --ignore src/toil/test/provisioners/aws/awsProvisionerTest.py --ignore src/toil/test/wdl/wdltoil_test.py --ignore src/toil/test/cwl/cwlTest.py --ignore src/toil/test/src/promisedRequirementTest.py --ignore src/toil/test/lib/test_ec2.py --ignore src/toil/test/batchSystems/batchSystemTest.py --ignore src/toil/test/lib/aws/test_iam.py -k "not (test_bioconda or test_run_conformance or testImportFtpFile or ToilWdlIntegrationTest or SortTest or testCwlexample or testVirtualEnv or ToilDocumentationTest or test_cwl_toil_kill or testImportReadFileCompatibility or CleanWorkDirTest or DeferredFunctionTest or WdlToilTest or test_cactus_integration)" {dir}/src/toil/test' \
+		PYBUILD_TEST_ARGS='TOIL_SKIP_ONLINE=true HOME={home_dir} {interpreter} setup.py develop --user && PYTHONPATH={dir}/src:$$PYTHONPATH PATH={home_dir}/.local/bin/:$$PATH TOIL_TEST_QUICK=True TOIL_SKIP_DOCKER=True {interpreter} -m pytest -n auto --dist loadscope -vv -W ignore --ignore src/toil/test/lib/aws/test_s3.py --ignore src/toil/test/provisioners/aws/awsProvisionerTest.py --ignore src/toil/test/wdl/wdltoil_test.py --ignore src/toil/test/cwl/cwlTest.py --ignore src/toil/test/src/promisedRequirementTest.py --ignore src/toil/test/lib/test_ec2.py --ignore src/toil/test/batchSystems/batchSystemTest.py --ignore src/toil/test/lib/aws/test_iam.py -k "not (test_bioconda or test_run_conformance or testImportFtpFile or ToilWdlIntegrationTest or SortTest or testCwlexample or testVirtualEnv or ToilDocumentationTest or test_cwl_toil_kill or testImportReadFileCompatibility or CleanWorkDirTest or DeferredFunctionTest or WdlToilTest or test_cactus_integration)" {dir}/src/toil/test' \
 		dh_auto_test
 endif
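
The added TOIL_SKIP_ONLINE=true in PYBUILD_TEST_ARGS asks toil's test
harness to skip tests that need network access, which Debian buildds
do not have. A sketch of how such an environment gate is typically
wired up in a pytest suite (an illustration of the mechanism, not
toil's exact implementation):

    import os
    import pytest

    def needs_online(test_item):
        # Decorator: mark a test as skipped when the environment
        # declares itself offline, e.g. on a Debian buildd.
        if os.environ.get("TOIL_SKIP_ONLINE", "").lower() == "true":
            return pytest.mark.skip(
                reason="Test requires network access")(test_item)
        return test_item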
 



View it on GitLab: https://salsa.debian.org/med-team/toil/-/compare/fd036ec3fcd12ceaff74c76645bd8ae977dfe212...950a6aef9acdc11ca05c7dfb2a430460a968e7a9
