[Debian-med-packaging] Bug#973178: python-streamz: FTBFS: dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p "3.9 3.8" returned exit code 13

Lucas Nussbaum lucas at debian.org
Tue Oct 27 17:20:58 GMT 2020

Source: python-streamz
Version: 0.6.0-1
Severity: serious
Justification: FTBFS on amd64
Tags: bullseye sid ftbfs
Usertags: ftbfs-20201027 ftbfs-bullseye

Hi,

During a rebuild of all packages in sid, your package failed to build
on amd64.

Relevant part (hopefully):
> dpkg-buildpackage
> -----------------
> 
> Command: dpkg-buildpackage -us -uc -sa -rfakeroot
> dpkg-buildpackage: info: source package python-streamz
> dpkg-buildpackage: info: source version 0.6.0-1
> dpkg-buildpackage: info: source distribution unstable
> dpkg-buildpackage: info: source changed by Nilesh Patra <npatra974 at gmail.com>
>  dpkg-source --before-build .
> dpkg-buildpackage: info: host architecture amd64
>  debian/rules clean
> dh clean --with python3 --buildsystem=pybuild
>    dh_auto_clean -O--buildsystem=pybuild
> I: pybuild base:217: python3.9 setup.py clean 
> running clean
> removing '/<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build' (and everything under it)
> 'build/bdist.linux-x86_64' does not exist -- can't clean it
> 'build/scripts-3.9' does not exist -- can't clean it
> I: pybuild base:217: python3.8 setup.py clean 
> running clean
> removing '/<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build' (and everything under it)
> 'build/bdist.linux-x86_64' does not exist -- can't clean it
> 'build/scripts-3.8' does not exist -- can't clean it
>    dh_autoreconf_clean -O--buildsystem=pybuild
>    dh_clean -O--buildsystem=pybuild
>  dpkg-source -b .
> dpkg-source: info: using source format '3.0 (quilt)'
> dpkg-source: info: building python-streamz using existing ./python-streamz_0.6.0.orig.tar.gz
> dpkg-source: info: using patch list from debian/patches/series
> dpkg-source: info: building python-streamz in python-streamz_0.6.0-1.debian.tar.xz
> dpkg-source: info: building python-streamz in python-streamz_0.6.0-1.dsc
>  debian/rules binary
> dh binary --with python3 --buildsystem=pybuild
>    dh_update_autotools_config -O--buildsystem=pybuild
>    dh_autoreconf -O--buildsystem=pybuild
>    dh_auto_configure -O--buildsystem=pybuild
> I: pybuild base:217: python3.9 setup.py config 
> running config
> I: pybuild base:217: python3.8 setup.py config 
> running config
>    dh_auto_build -O--buildsystem=pybuild
> I: pybuild base:217: /usr/bin/python3.9 setup.py build 
> running build
> running build_py
> creating /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz
> copying streamz/core.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz
> copying streamz/dask.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz
> copying streamz/__init__.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz
> copying streamz/compatibility.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz
> copying streamz/sources.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz
> copying streamz/orderedweakset.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz
> copying streamz/graph.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz
> copying streamz/utils_test.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz
> copying streamz/batch.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz
> copying streamz/collection.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz
> copying streamz/utils.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz
> creating /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/dataframe
> copying streamz/dataframe/core.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/dataframe
> copying streamz/dataframe/__init__.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/dataframe
> copying streamz/dataframe/aggregations.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/dataframe
> copying streamz/dataframe/utils.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/dataframe
> creating /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/tests
> copying streamz/tests/test_kafka.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/tests
> copying streamz/tests/test_graph.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/tests
> copying streamz/tests/__init__.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/tests
> copying streamz/tests/test_core.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/tests
> copying streamz/tests/py3_test_core.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/tests
> copying streamz/tests/test_sources.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/tests
> copying streamz/tests/test_batch.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/tests
> copying streamz/tests/test_dask.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/tests
> package init file 'streamz/dataframe/tests/__init__.py' not found (or not a regular file)
> creating /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/dataframe/tests
> copying streamz/dataframe/tests/test_dataframe_utils.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/dataframe/tests
> copying streamz/dataframe/tests/test_dataframes.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/dataframe/tests
> I: pybuild base:217: /usr/bin/python3 setup.py build 
> running build
> running build_py
> creating /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz
> copying streamz/core.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz
> copying streamz/dask.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz
> copying streamz/__init__.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz
> copying streamz/compatibility.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz
> copying streamz/sources.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz
> copying streamz/orderedweakset.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz
> copying streamz/graph.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz
> copying streamz/utils_test.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz
> copying streamz/batch.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz
> copying streamz/collection.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz
> copying streamz/utils.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz
> creating /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/dataframe
> copying streamz/dataframe/core.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/dataframe
> copying streamz/dataframe/__init__.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/dataframe
> copying streamz/dataframe/aggregations.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/dataframe
> copying streamz/dataframe/utils.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/dataframe
> creating /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/tests
> copying streamz/tests/test_kafka.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/tests
> copying streamz/tests/test_graph.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/tests
> copying streamz/tests/__init__.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/tests
> copying streamz/tests/test_core.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/tests
> copying streamz/tests/py3_test_core.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/tests
> copying streamz/tests/test_sources.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/tests
> copying streamz/tests/test_batch.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/tests
> copying streamz/tests/test_dask.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/tests
> package init file 'streamz/dataframe/tests/__init__.py' not found (or not a regular file)
> creating /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/dataframe/tests
> copying streamz/dataframe/tests/test_dataframe_utils.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/dataframe/tests
> copying streamz/dataframe/tests/test_dataframes.py -> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.8_streamz/build/streamz/dataframe/tests
>    dh_auto_test -O--buildsystem=pybuild
> I: pybuild base:217: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build; python3.9 -m pytest 
> ============================= test session starts ==============================
> platform linux -- Python 3.9.0+, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
> rootdir: /<<PKGBUILDDIR>>
> plugins: flaky-3.7.0
> collected 1518 items / 2 skipped / 1516 selected
> 
> streamz/dataframe/tests/test_dataframe_utils.py .s.s                     [  0%]
> streamz/dataframe/tests/test_dataframes.py ............................. [  2%]
> ........................................................................ [  6%]
> ........................................................................ [ 11%]
> .......................................................................s [ 16%]
> ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss [ 21%]
> ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss [ 25%]
> ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss [ 30%]
> ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss [ 35%]
> ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss [ 40%]
> sssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss. [ 44%]
> ...xxxxxx....................ss......................................... [ 49%]
> ....X.....X.....X.....X.....X.....X.....X.....X.....X.....X.....X.....X. [ 54%]
> ....X.....X.....X.....X.....X.....X.....X.....X.....X.....X.....X.....X. [ 59%]
> ....X.....X.....X.....X.....X.....X.....X.....X.....X.....X.....X.....X. [ 63%]
> ....X.....X.....X.....X.....X.....X.....X.....X.....X.....X.....X.....X. [ 68%]
> .....X......X......X......X......X......X......X......X......X......X... [ 73%]
> ...X......X......X......X......X......X......X......X......X......X..... [ 78%]
> .X......X......X......X......X......X......X......X......X......X......X [ 82%]
> ......X......X......X......X......X......X......X......X......X......X.. [ 87%]
> ....X......X......X......X......X......X......X...............x..        [ 91%]
> streamz/tests/test_batch.py ....                                         [ 92%]
> streamz/tests/test_core.py ............................................. [ 95%]
> ...........................................................              [ 98%]
> streamz/tests/test_dask.py ssssFFFFFFsF                                  [ 99%]
> streamz/tests/test_sources.py XXXx                                       [100%]x [100%]x [100%]
> 
> =================================== FAILURES ===================================
> ___________________________________ test_map ___________________________________
> 
>     def test_func():
>         result = None
>         workers = []
>         with clean(timeout=active_rpc_timeout, **clean_kwargs) as loop:
>     
>             async def coro():
>                 with dask.config.set(config):
>                     s = False
>                     for i in range(5):
>                         try:
>                             s, ws = await start_cluster(
>                                 nthreads,
>                                 scheduler,
>                                 loop,
>                                 security=security,
>                                 Worker=Worker,
>                                 scheduler_kwargs=scheduler_kwargs,
>                                 worker_kwargs=worker_kwargs,
>                             )
>                         except Exception as e:
>                             logger.error(
>                                 "Failed to start gen_cluster, retrying",
>                                 exc_info=True,
>                             )
>                         else:
>                             workers[:] = ws
>                             args = [s] + workers
>                             break
>                     if s is False:
>                         raise Exception("Could not start cluster")
>                     if client:
>                         c = await Client(
>                             s.address,
>                             loop=loop,
>                             security=security,
>                             asynchronous=True,
>                             **client_kwargs
>                         )
>                         args = [c] + args
>                     try:
>                         future = func(*args)
>                         if timeout:
>                             future = asyncio.wait_for(future, timeout)
>                         result = await future
>                         if s.validate:
>                             s.validate_state()
>                     finally:
>                         if client and c.status not in ("closing", "closed"):
>                             await c._close(fast=s.status == "closed")
>                         await end_cluster(s, workers)
>                         await asyncio.wait_for(cleanup_global_workers(), 1)
>     
>                     try:
>                         c = await default_client()
>                     except ValueError:
>                         pass
>                     else:
>                         await c._close(fast=True)
>     
>                     for i in range(5):
>                         if all(c.closed() for c in Comm._instances):
>                             break
>                         else:
>                             await asyncio.sleep(0.05)
>                     else:
>                         L = [c for c in Comm._instances if not c.closed()]
>                         Comm._instances.clear()
>                         # raise ValueError("Unclosed Comms", L)
>                         print("Unclosed Comms", L)
>     
>                     return result
>     
> >           result = loop.run_sync(
>                 coro, timeout=timeout * 2 if timeout else timeout
>             )
> 
> /usr/lib/python3/dist-packages/distributed/utils_test.py:956: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> /usr/lib/python3/dist-packages/tornado/ioloop.py:532: in run_sync
>     return future_cell[0].result()
> /usr/lib/python3/dist-packages/distributed/utils_test.py:927: in coro
>     result = await future
> /usr/lib/python3.9/asyncio/tasks.py:476: in wait_for
>     return fut.result()
> /usr/lib/python3/dist-packages/tornado/gen.py:742: in run
>     yielded = self.gen.throw(*exc_info)  # type: ignore
> streamz/tests/test_dask.py:25: in test_map
>     yield source.emit(i)
> /usr/lib/python3/dist-packages/tornado/gen.py:735: in run
>     value = future.result()
> /usr/lib/python3/dist-packages/tornado/gen.py:501: in callback
>     result_list.append(f.result())
> /usr/lib/python3/dist-packages/tornado/gen.py:742: in run
>     yielded = self.gen.throw(*exc_info)  # type: ignore
> streamz/dask.py:113: in update
>     future_as_dict = yield client.scatter({tokenized_x: x}, asynchronous=True)
> /usr/lib/python3/dist-packages/tornado/gen.py:735: in run
>     value = future.result()
> /usr/lib/python3/dist-packages/distributed/client.py:1965: in _scatter
>     _, who_has, nbytes = await scatter_to_workers(
> /usr/lib/python3/dist-packages/distributed/utils_comm.py:144: in scatter_to_workers
>     out = await All(
> /usr/lib/python3/dist-packages/distributed/utils.py:235: in All
>     result = await tasks.next()
> /usr/lib/python3/dist-packages/distributed/core.py:757: in send_recv_from_rpc
>     result = await send_recv(comm=comm, op=key, **kwargs)
> /usr/lib/python3/dist-packages/distributed/core.py:540: in send_recv
>     response = await comm.read(deserializers=deserializers)
> /usr/lib/python3/dist-packages/distributed/comm/tcp.py:211: in read
>     msg = await from_frames(
> /usr/lib/python3/dist-packages/distributed/comm/utils.py:69: in from_frames
>     res = _from_frames()
> /usr/lib/python3/dist-packages/distributed/comm/utils.py:54: in _from_frames
>     return protocol.loads(
> /usr/lib/python3/dist-packages/distributed/protocol/core.py:106: in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> 
> >   ???
> E   ValueError: tuple is not allowed for map key
> 
> msgpack/_unpacker.pyx:195: ValueError
> ----------------------------- Captured stderr call -----------------------------
> distributed.protocol.core - CRITICAL - Failed to deserialize
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> distributed.core - ERROR - tuple is not allowed for map key
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/core.py", line 346, in handle_comm
>     msg = await comm.read()
>   File "/usr/lib/python3/dist-packages/distributed/comm/tcp.py", line 211, in read
>     msg = await from_frames(
>   File "/usr/lib/python3/dist-packages/distributed/comm/utils.py", line 69, in from_frames
>     res = _from_frames()
>   File "/usr/lib/python3/dist-packages/distributed/comm/utils.py", line 54, in _from_frames
>     return protocol.loads(
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> distributed.protocol.core - CRITICAL - Failed to deserialize
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
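The failure above, and every one that follows, is the same: streamz/dask.py:113
scatters the emitted element to the workers, and distributed's comm layer then
fails while msgpack-decoding a message header because one of the header's map
keys is a tuple.  That matches the msgpack 1.0 behaviour change, where
unpackb()/loads() defaults to strict_map_key=True and accepts only str and
bytes map keys unless the caller passes strict_map_key=False.  A minimal sketch
of just that msgpack behaviour, assuming msgpack >= 1.0 (an illustration, not
code taken from streamz or distributed):

    import msgpack

    # A dict with a tuple key packs fine (the tuple is encoded as a msgpack array)...
    payload = msgpack.packb({("x", 0): 1})

    # ...but msgpack >= 1.0 refuses to unpack it with the default settings:
    try:
        msgpack.unpackb(payload, use_list=False)
    except ValueError as exc:
        print(exc)  # tuple is not allowed for map key

    # Opting out of the new strictness restores the pre-1.0 behaviour:
    print(msgpack.unpackb(payload, use_list=False, strict_map_key=False))
    # {('x', 0): 1}

The remaining tracebacks below are the same ValueError raised from the same
client.scatter() path in each failing test.
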
> _______________________________ test_map_on_dict _______________________________
> 
>     def test_func():
>         result = None
>         workers = []
>         with clean(timeout=active_rpc_timeout, **clean_kwargs) as loop:
>     
>             async def coro():
>                 with dask.config.set(config):
>                     s = False
>                     for i in range(5):
>                         try:
>                             s, ws = await start_cluster(
>                                 nthreads,
>                                 scheduler,
>                                 loop,
>                                 security=security,
>                                 Worker=Worker,
>                                 scheduler_kwargs=scheduler_kwargs,
>                                 worker_kwargs=worker_kwargs,
>                             )
>                         except Exception as e:
>                             logger.error(
>                                 "Failed to start gen_cluster, retrying",
>                                 exc_info=True,
>                             )
>                         else:
>                             workers[:] = ws
>                             args = [s] + workers
>                             break
>                     if s is False:
>                         raise Exception("Could not start cluster")
>                     if client:
>                         c = await Client(
>                             s.address,
>                             loop=loop,
>                             security=security,
>                             asynchronous=True,
>                             **client_kwargs
>                         )
>                         args = [c] + args
>                     try:
>                         future = func(*args)
>                         if timeout:
>                             future = asyncio.wait_for(future, timeout)
>                         result = await future
>                         if s.validate:
>                             s.validate_state()
>                     finally:
>                         if client and c.status not in ("closing", "closed"):
>                             await c._close(fast=s.status == "closed")
>                         await end_cluster(s, workers)
>                         await asyncio.wait_for(cleanup_global_workers(), 1)
>     
>                     try:
>                         c = await default_client()
>                     except ValueError:
>                         pass
>                     else:
>                         await c._close(fast=True)
>     
>                     for i in range(5):
>                         if all(c.closed() for c in Comm._instances):
>                             break
>                         else:
>                             await asyncio.sleep(0.05)
>                     else:
>                         L = [c for c in Comm._instances if not c.closed()]
>                         Comm._instances.clear()
>                         # raise ValueError("Unclosed Comms", L)
>                         print("Unclosed Comms", L)
>     
>                     return result
>     
> >           result = loop.run_sync(
>                 coro, timeout=timeout * 2 if timeout else timeout
>             )
> 
> /usr/lib/python3/dist-packages/distributed/utils_test.py:956: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> /usr/lib/python3/dist-packages/tornado/ioloop.py:532: in run_sync
>     return future_cell[0].result()
> /usr/lib/python3/dist-packages/distributed/utils_test.py:927: in coro
>     result = await future
> /usr/lib/python3.9/asyncio/tasks.py:476: in wait_for
>     return fut.result()
> /usr/lib/python3/dist-packages/tornado/gen.py:742: in run
>     yielded = self.gen.throw(*exc_info)  # type: ignore
> streamz/tests/test_dask.py:45: in test_map_on_dict
>     yield source.emit({"i": i})
> /usr/lib/python3/dist-packages/tornado/gen.py:735: in run
>     value = future.result()
> /usr/lib/python3/dist-packages/tornado/gen.py:501: in callback
>     result_list.append(f.result())
> /usr/lib/python3/dist-packages/tornado/gen.py:742: in run
>     yielded = self.gen.throw(*exc_info)  # type: ignore
> streamz/dask.py:113: in update
>     future_as_dict = yield client.scatter({tokenized_x: x}, asynchronous=True)
> /usr/lib/python3/dist-packages/tornado/gen.py:735: in run
>     value = future.result()
> /usr/lib/python3/dist-packages/distributed/client.py:1965: in _scatter
>     _, who_has, nbytes = await scatter_to_workers(
> /usr/lib/python3/dist-packages/distributed/utils_comm.py:144: in scatter_to_workers
>     out = await All(
> /usr/lib/python3/dist-packages/distributed/utils.py:235: in All
>     result = await tasks.next()
> /usr/lib/python3/dist-packages/distributed/core.py:757: in send_recv_from_rpc
>     result = await send_recv(comm=comm, op=key, **kwargs)
> /usr/lib/python3/dist-packages/distributed/core.py:540: in send_recv
>     response = await comm.read(deserializers=deserializers)
> /usr/lib/python3/dist-packages/distributed/comm/tcp.py:211: in read
>     msg = await from_frames(
> /usr/lib/python3/dist-packages/distributed/comm/utils.py:69: in from_frames
>     res = _from_frames()
> /usr/lib/python3/dist-packages/distributed/comm/utils.py:54: in _from_frames
>     return protocol.loads(
> /usr/lib/python3/dist-packages/distributed/protocol/core.py:106: in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> 
> >   ???
> E   ValueError: tuple is not allowed for map key
> 
> msgpack/_unpacker.pyx:195: ValueError
> ----------------------------- Captured stderr call -----------------------------
> distributed.protocol.core - CRITICAL - Failed to deserialize
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> distributed.core - ERROR - tuple is not allowed for map key
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/core.py", line 346, in handle_comm
>     msg = await comm.read()
>   File "/usr/lib/python3/dist-packages/distributed/comm/tcp.py", line 211, in read
>     msg = await from_frames(
>   File "/usr/lib/python3/dist-packages/distributed/comm/utils.py", line 69, in from_frames
>     res = _from_frames()
>   File "/usr/lib/python3/dist-packages/distributed/comm/utils.py", line 54, in _from_frames
>     return protocol.loads(
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> distributed.protocol.core - CRITICAL - Failed to deserialize
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> __________________________________ test_scan ___________________________________
> 
>     def test_func():
>         result = None
>         workers = []
>         with clean(timeout=active_rpc_timeout, **clean_kwargs) as loop:
>     
>             async def coro():
>                 with dask.config.set(config):
>                     s = False
>                     for i in range(5):
>                         try:
>                             s, ws = await start_cluster(
>                                 nthreads,
>                                 scheduler,
>                                 loop,
>                                 security=security,
>                                 Worker=Worker,
>                                 scheduler_kwargs=scheduler_kwargs,
>                                 worker_kwargs=worker_kwargs,
>                             )
>                         except Exception as e:
>                             logger.error(
>                                 "Failed to start gen_cluster, retrying",
>                                 exc_info=True,
>                             )
>                         else:
>                             workers[:] = ws
>                             args = [s] + workers
>                             break
>                     if s is False:
>                         raise Exception("Could not start cluster")
>                     if client:
>                         c = await Client(
>                             s.address,
>                             loop=loop,
>                             security=security,
>                             asynchronous=True,
>                             **client_kwargs
>                         )
>                         args = [c] + args
>                     try:
>                         future = func(*args)
>                         if timeout:
>                             future = asyncio.wait_for(future, timeout)
>                         result = await future
>                         if s.validate:
>                             s.validate_state()
>                     finally:
>                         if client and c.status not in ("closing", "closed"):
>                             await c._close(fast=s.status == "closed")
>                         await end_cluster(s, workers)
>                         await asyncio.wait_for(cleanup_global_workers(), 1)
>     
>                     try:
>                         c = await default_client()
>                     except ValueError:
>                         pass
>                     else:
>                         await c._close(fast=True)
>     
>                     for i in range(5):
>                         if all(c.closed() for c in Comm._instances):
>                             break
>                         else:
>                             await asyncio.sleep(0.05)
>                     else:
>                         L = [c for c in Comm._instances if not c.closed()]
>                         Comm._instances.clear()
>                         # raise ValueError("Unclosed Comms", L)
>                         print("Unclosed Comms", L)
>     
>                     return result
>     
> >           result = loop.run_sync(
>                 coro, timeout=timeout * 2 if timeout else timeout
>             )
> 
> /usr/lib/python3/dist-packages/distributed/utils_test.py:956: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> /usr/lib/python3/dist-packages/tornado/ioloop.py:532: in run_sync
>     return future_cell[0].result()
> /usr/lib/python3/dist-packages/distributed/utils_test.py:927: in coro
>     result = await future
> /usr/lib/python3.9/asyncio/tasks.py:476: in wait_for
>     return fut.result()
> /usr/lib/python3/dist-packages/tornado/gen.py:742: in run
>     yielded = self.gen.throw(*exc_info)  # type: ignore
> streamz/tests/test_dask.py:61: in test_scan
>     yield source.emit(i)
> /usr/lib/python3/dist-packages/tornado/gen.py:735: in run
>     value = future.result()
> /usr/lib/python3/dist-packages/tornado/gen.py:501: in callback
>     result_list.append(f.result())
> /usr/lib/python3/dist-packages/tornado/gen.py:742: in run
>     yielded = self.gen.throw(*exc_info)  # type: ignore
> streamz/dask.py:113: in update
>     future_as_dict = yield client.scatter({tokenized_x: x}, asynchronous=True)
> /usr/lib/python3/dist-packages/tornado/gen.py:735: in run
>     value = future.result()
> /usr/lib/python3/dist-packages/distributed/client.py:1965: in _scatter
>     _, who_has, nbytes = await scatter_to_workers(
> /usr/lib/python3/dist-packages/distributed/utils_comm.py:144: in scatter_to_workers
>     out = await All(
> /usr/lib/python3/dist-packages/distributed/utils.py:235: in All
>     result = await tasks.next()
> /usr/lib/python3/dist-packages/distributed/core.py:757: in send_recv_from_rpc
>     result = await send_recv(comm=comm, op=key, **kwargs)
> /usr/lib/python3/dist-packages/distributed/core.py:540: in send_recv
>     response = await comm.read(deserializers=deserializers)
> /usr/lib/python3/dist-packages/distributed/comm/tcp.py:211: in read
>     msg = await from_frames(
> /usr/lib/python3/dist-packages/distributed/comm/utils.py:69: in from_frames
>     res = _from_frames()
> /usr/lib/python3/dist-packages/distributed/comm/utils.py:54: in _from_frames
>     return protocol.loads(
> /usr/lib/python3/dist-packages/distributed/protocol/core.py:106: in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> 
> >   ???
> E   ValueError: tuple is not allowed for map key
> 
> msgpack/_unpacker.pyx:195: ValueError
> ----------------------------- Captured stderr call -----------------------------
> distributed.protocol.core - CRITICAL - Failed to deserialize
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> distributed.core - ERROR - tuple is not allowed for map key
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/core.py", line 346, in handle_comm
>     msg = await comm.read()
>   File "/usr/lib/python3/dist-packages/distributed/comm/tcp.py", line 211, in read
>     msg = await from_frames(
>   File "/usr/lib/python3/dist-packages/distributed/comm/utils.py", line 69, in from_frames
>     res = _from_frames()
>   File "/usr/lib/python3/dist-packages/distributed/comm/utils.py", line 54, in _from_frames
>     return protocol.loads(
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> distributed.protocol.core - CRITICAL - Failed to deserialize
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> _______________________________ test_scan_state ________________________________
> 
>     def test_func():
>         result = None
>         workers = []
>         with clean(timeout=active_rpc_timeout, **clean_kwargs) as loop:
>     
>             async def coro():
>                 with dask.config.set(config):
>                     s = False
>                     for i in range(5):
>                         try:
>                             s, ws = await start_cluster(
>                                 nthreads,
>                                 scheduler,
>                                 loop,
>                                 security=security,
>                                 Worker=Worker,
>                                 scheduler_kwargs=scheduler_kwargs,
>                                 worker_kwargs=worker_kwargs,
>                             )
>                         except Exception as e:
>                             logger.error(
>                                 "Failed to start gen_cluster, retrying",
>                                 exc_info=True,
>                             )
>                         else:
>                             workers[:] = ws
>                             args = [s] + workers
>                             break
>                     if s is False:
>                         raise Exception("Could not start cluster")
>                     if client:
>                         c = await Client(
>                             s.address,
>                             loop=loop,
>                             security=security,
>                             asynchronous=True,
>                             **client_kwargs
>                         )
>                         args = [c] + args
>                     try:
>                         future = func(*args)
>                         if timeout:
>                             future = asyncio.wait_for(future, timeout)
>                         result = await future
>                         if s.validate:
>                             s.validate_state()
>                     finally:
>                         if client and c.status not in ("closing", "closed"):
>                             await c._close(fast=s.status == "closed")
>                         await end_cluster(s, workers)
>                         await asyncio.wait_for(cleanup_global_workers(), 1)
>     
>                     try:
>                         c = await default_client()
>                     except ValueError:
>                         pass
>                     else:
>                         await c._close(fast=True)
>     
>                     for i in range(5):
>                         if all(c.closed() for c in Comm._instances):
>                             break
>                         else:
>                             await asyncio.sleep(0.05)
>                     else:
>                         L = [c for c in Comm._instances if not c.closed()]
>                         Comm._instances.clear()
>                         # raise ValueError("Unclosed Comms", L)
>                         print("Unclosed Comms", L)
>     
>                     return result
>     
> >           result = loop.run_sync(
>                 coro, timeout=timeout * 2 if timeout else timeout
>             )
> 
> /usr/lib/python3/dist-packages/distributed/utils_test.py:956: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> /usr/lib/python3/dist-packages/tornado/ioloop.py:532: in run_sync
>     return future_cell[0].result()
> /usr/lib/python3/dist-packages/distributed/utils_test.py:927: in coro
>     result = await future
> /usr/lib/python3.9/asyncio/tasks.py:476: in wait_for
>     return fut.result()
> /usr/lib/python3/dist-packages/tornado/gen.py:742: in run
>     yielded = self.gen.throw(*exc_info)  # type: ignore
> streamz/tests/test_dask.py:77: in test_scan_state
>     yield source.emit(i)
> /usr/lib/python3/dist-packages/tornado/gen.py:735: in run
>     value = future.result()
> /usr/lib/python3/dist-packages/tornado/gen.py:501: in callback
>     result_list.append(f.result())
> /usr/lib/python3/dist-packages/tornado/gen.py:742: in run
>     yielded = self.gen.throw(*exc_info)  # type: ignore
> streamz/dask.py:113: in update
>     future_as_dict = yield client.scatter({tokenized_x: x}, asynchronous=True)
> /usr/lib/python3/dist-packages/tornado/gen.py:735: in run
>     value = future.result()
> /usr/lib/python3/dist-packages/distributed/client.py:1965: in _scatter
>     _, who_has, nbytes = await scatter_to_workers(
> /usr/lib/python3/dist-packages/distributed/utils_comm.py:144: in scatter_to_workers
>     out = await All(
> /usr/lib/python3/dist-packages/distributed/utils.py:235: in All
>     result = await tasks.next()
> /usr/lib/python3/dist-packages/distributed/core.py:757: in send_recv_from_rpc
>     result = await send_recv(comm=comm, op=key, **kwargs)
> /usr/lib/python3/dist-packages/distributed/core.py:540: in send_recv
>     response = await comm.read(deserializers=deserializers)
> /usr/lib/python3/dist-packages/distributed/comm/tcp.py:211: in read
>     msg = await from_frames(
> /usr/lib/python3/dist-packages/distributed/comm/utils.py:69: in from_frames
>     res = _from_frames()
> /usr/lib/python3/dist-packages/distributed/comm/utils.py:54: in _from_frames
>     return protocol.loads(
> /usr/lib/python3/dist-packages/distributed/protocol/core.py:106: in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> 
> >   ???
> E   ValueError: tuple is not allowed for map key
> 
> msgpack/_unpacker.pyx:195: ValueError
> ----------------------------- Captured stderr call -----------------------------
> distributed.protocol.core - CRITICAL - Failed to deserialize
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> distributed.core - ERROR - tuple is not allowed for map key
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/core.py", line 346, in handle_comm
>     msg = await comm.read()
>   File "/usr/lib/python3/dist-packages/distributed/comm/tcp.py", line 211, in read
>     msg = await from_frames(
>   File "/usr/lib/python3/dist-packages/distributed/comm/utils.py", line 69, in from_frames
>     res = _from_frames()
>   File "/usr/lib/python3/dist-packages/distributed/comm/utils.py", line 54, in _from_frames
>     return protocol.loads(
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> distributed.protocol.core - CRITICAL - Failed to deserialize
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> ___________________________________ test_zip ___________________________________
> 
>     def test_func():
>         result = None
>         workers = []
>         with clean(timeout=active_rpc_timeout, **clean_kwargs) as loop:
>     
>             async def coro():
>                 with dask.config.set(config):
>                     s = False
>                     for i in range(5):
>                         try:
>                             s, ws = await start_cluster(
>                                 nthreads,
>                                 scheduler,
>                                 loop,
>                                 security=security,
>                                 Worker=Worker,
>                                 scheduler_kwargs=scheduler_kwargs,
>                                 worker_kwargs=worker_kwargs,
>                             )
>                         except Exception as e:
>                             logger.error(
>                                 "Failed to start gen_cluster, retrying",
>                                 exc_info=True,
>                             )
>                         else:
>                             workers[:] = ws
>                             args = [s] + workers
>                             break
>                     if s is False:
>                         raise Exception("Could not start cluster")
>                     if client:
>                         c = await Client(
>                             s.address,
>                             loop=loop,
>                             security=security,
>                             asynchronous=True,
>                             **client_kwargs
>                         )
>                         args = [c] + args
>                     try:
>                         future = func(*args)
>                         if timeout:
>                             future = asyncio.wait_for(future, timeout)
>                         result = await future
>                         if s.validate:
>                             s.validate_state()
>                     finally:
>                         if client and c.status not in ("closing", "closed"):
>                             await c._close(fast=s.status == "closed")
>                         await end_cluster(s, workers)
>                         await asyncio.wait_for(cleanup_global_workers(), 1)
>     
>                     try:
>                         c = await default_client()
>                     except ValueError:
>                         pass
>                     else:
>                         await c._close(fast=True)
>     
>                     for i in range(5):
>                         if all(c.closed() for c in Comm._instances):
>                             break
>                         else:
>                             await asyncio.sleep(0.05)
>                     else:
>                         L = [c for c in Comm._instances if not c.closed()]
>                         Comm._instances.clear()
>                         # raise ValueError("Unclosed Comms", L)
>                         print("Unclosed Comms", L)
>     
>                     return result
>     
> >           result = loop.run_sync(
>                 coro, timeout=timeout * 2 if timeout else timeout
>             )
> 
> /usr/lib/python3/dist-packages/distributed/utils_test.py:956: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> /usr/lib/python3/dist-packages/tornado/ioloop.py:532: in run_sync
>     return future_cell[0].result()
> /usr/lib/python3/dist-packages/distributed/utils_test.py:927: in coro
>     result = await future
> /usr/lib/python3.9/asyncio/tasks.py:476: in wait_for
>     return fut.result()
> /usr/lib/python3/dist-packages/tornado/gen.py:742: in run
>     yielded = self.gen.throw(*exc_info)  # type: ignore
> streamz/tests/test_dask.py:90: in test_zip
>     yield a.emit(1)
> /usr/lib/python3/dist-packages/tornado/gen.py:735: in run
>     value = future.result()
> /usr/lib/python3/dist-packages/tornado/gen.py:501: in callback
>     result_list.append(f.result())
> /usr/lib/python3/dist-packages/tornado/gen.py:742: in run
>     yielded = self.gen.throw(*exc_info)  # type: ignore
> streamz/dask.py:113: in update
>     future_as_dict = yield client.scatter({tokenized_x: x}, asynchronous=True)
> /usr/lib/python3/dist-packages/tornado/gen.py:735: in run
>     value = future.result()
> /usr/lib/python3/dist-packages/distributed/client.py:1965: in _scatter
>     _, who_has, nbytes = await scatter_to_workers(
> /usr/lib/python3/dist-packages/distributed/utils_comm.py:144: in scatter_to_workers
>     out = await All(
> /usr/lib/python3/dist-packages/distributed/utils.py:235: in All
>     result = await tasks.next()
> /usr/lib/python3/dist-packages/distributed/core.py:757: in send_recv_from_rpc
>     result = await send_recv(comm=comm, op=key, **kwargs)
> /usr/lib/python3/dist-packages/distributed/core.py:540: in send_recv
>     response = await comm.read(deserializers=deserializers)
> /usr/lib/python3/dist-packages/distributed/comm/tcp.py:211: in read
>     msg = await from_frames(
> /usr/lib/python3/dist-packages/distributed/comm/utils.py:69: in from_frames
>     res = _from_frames()
> /usr/lib/python3/dist-packages/distributed/comm/utils.py:54: in _from_frames
>     return protocol.loads(
> /usr/lib/python3/dist-packages/distributed/protocol/core.py:106: in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> 
> >   ???
> E   ValueError: tuple is not allowed for map key
> 
> msgpack/_unpacker.pyx:195: ValueError
> ----------------------------- Captured stderr call -----------------------------
> distributed.protocol.core - CRITICAL - Failed to deserialize
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> distributed.core - ERROR - tuple is not allowed for map key
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/core.py", line 346, in handle_comm
>     msg = await comm.read()
>   File "/usr/lib/python3/dist-packages/distributed/comm/tcp.py", line 211, in read
>     msg = await from_frames(
>   File "/usr/lib/python3/dist-packages/distributed/comm/utils.py", line 69, in from_frames
>     res = _from_frames()
>   File "/usr/lib/python3/dist-packages/distributed/comm/utils.py", line 54, in _from_frames
>     return protocol.loads(
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> distributed.protocol.core - CRITICAL - Failed to deserialize
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> _______________________________ test_accumulate ________________________________
> 
>     def test_func():
>         result = None
>         workers = []
>         with clean(timeout=active_rpc_timeout, **clean_kwargs) as loop:
>     
>             async def coro():
>                 with dask.config.set(config):
>                     s = False
>                     for i in range(5):
>                         try:
>                             s, ws = await start_cluster(
>                                 nthreads,
>                                 scheduler,
>                                 loop,
>                                 security=security,
>                                 Worker=Worker,
>                                 scheduler_kwargs=scheduler_kwargs,
>                                 worker_kwargs=worker_kwargs,
>                             )
>                         except Exception as e:
>                             logger.error(
>                                 "Failed to start gen_cluster, retrying",
>                                 exc_info=True,
>                             )
>                         else:
>                             workers[:] = ws
>                             args = [s] + workers
>                             break
>                     if s is False:
>                         raise Exception("Could not start cluster")
>                     if client:
>                         c = await Client(
>                             s.address,
>                             loop=loop,
>                             security=security,
>                             asynchronous=True,
>                             **client_kwargs
>                         )
>                         args = [c] + args
>                     try:
>                         future = func(*args)
>                         if timeout:
>                             future = asyncio.wait_for(future, timeout)
>                         result = await future
>                         if s.validate:
>                             s.validate_state()
>                     finally:
>                         if client and c.status not in ("closing", "closed"):
>                             await c._close(fast=s.status == "closed")
>                         await end_cluster(s, workers)
>                         await asyncio.wait_for(cleanup_global_workers(), 1)
>     
>                     try:
>                         c = await default_client()
>                     except ValueError:
>                         pass
>                     else:
>                         await c._close(fast=True)
>     
>                     for i in range(5):
>                         if all(c.closed() for c in Comm._instances):
>                             break
>                         else:
>                             await asyncio.sleep(0.05)
>                     else:
>                         L = [c for c in Comm._instances if not c.closed()]
>                         Comm._instances.clear()
>                         # raise ValueError("Unclosed Comms", L)
>                         print("Unclosed Comms", L)
>     
>                     return result
>     
> >           result = loop.run_sync(
>                 coro, timeout=timeout * 2 if timeout else timeout
>             )
> 
> /usr/lib/python3/dist-packages/distributed/utils_test.py:956: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> /usr/lib/python3/dist-packages/tornado/ioloop.py:532: in run_sync
>     return future_cell[0].result()
> /usr/lib/python3/dist-packages/distributed/utils_test.py:927: in coro
>     result = await future
> /usr/lib/python3.9/asyncio/tasks.py:476: in wait_for
>     return fut.result()
> /usr/lib/python3/dist-packages/tornado/gen.py:742: in run
>     yielded = self.gen.throw(*exc_info)  # type: ignore
> streamz/tests/test_dask.py:103: in test_accumulate
>     yield source.emit(i)
> /usr/lib/python3/dist-packages/tornado/gen.py:735: in run
>     value = future.result()
> /usr/lib/python3/dist-packages/tornado/gen.py:501: in callback
>     result_list.append(f.result())
> /usr/lib/python3/dist-packages/tornado/gen.py:742: in run
>     yielded = self.gen.throw(*exc_info)  # type: ignore
> streamz/dask.py:113: in update
>     future_as_dict = yield client.scatter({tokenized_x: x}, asynchronous=True)
> /usr/lib/python3/dist-packages/tornado/gen.py:735: in run
>     value = future.result()
> /usr/lib/python3/dist-packages/distributed/client.py:1965: in _scatter
>     _, who_has, nbytes = await scatter_to_workers(
> /usr/lib/python3/dist-packages/distributed/utils_comm.py:144: in scatter_to_workers
>     out = await All(
> /usr/lib/python3/dist-packages/distributed/utils.py:235: in All
>     result = await tasks.next()
> /usr/lib/python3/dist-packages/distributed/core.py:757: in send_recv_from_rpc
>     result = await send_recv(comm=comm, op=key, **kwargs)
> /usr/lib/python3/dist-packages/distributed/core.py:540: in send_recv
>     response = await comm.read(deserializers=deserializers)
> /usr/lib/python3/dist-packages/distributed/comm/tcp.py:211: in read
>     msg = await from_frames(
> /usr/lib/python3/dist-packages/distributed/comm/utils.py:69: in from_frames
>     res = _from_frames()
> /usr/lib/python3/dist-packages/distributed/comm/utils.py:54: in _from_frames
>     return protocol.loads(
> /usr/lib/python3/dist-packages/distributed/protocol/core.py:106: in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> 
> >   ???
> E   ValueError: tuple is not allowed for map key
> 
> msgpack/_unpacker.pyx:195: ValueError
> ----------------------------- Captured stderr call -----------------------------
> distributed.protocol.core - CRITICAL - Failed to deserialize
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> distributed.core - ERROR - tuple is not allowed for map key
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/core.py", line 346, in handle_comm
>     msg = await comm.read()
>   File "/usr/lib/python3/dist-packages/distributed/comm/tcp.py", line 211, in read
>     msg = await from_frames(
>   File "/usr/lib/python3/dist-packages/distributed/comm/utils.py", line 69, in from_frames
>     res = _from_frames()
>   File "/usr/lib/python3/dist-packages/distributed/comm/utils.py", line 54, in _from_frames
>     return protocol.loads(
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> distributed.protocol.core - CRITICAL - Failed to deserialize
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> _________________________________ test_starmap _________________________________
> 
>     def test_func():
>         result = None
>         workers = []
>         with clean(timeout=active_rpc_timeout, **clean_kwargs) as loop:
>     
>             async def coro():
>                 with dask.config.set(config):
>                     s = False
>                     for i in range(5):
>                         try:
>                             s, ws = await start_cluster(
>                                 nthreads,
>                                 scheduler,
>                                 loop,
>                                 security=security,
>                                 Worker=Worker,
>                                 scheduler_kwargs=scheduler_kwargs,
>                                 worker_kwargs=worker_kwargs,
>                             )
>                         except Exception as e:
>                             logger.error(
>                                 "Failed to start gen_cluster, retrying",
>                                 exc_info=True,
>                             )
>                         else:
>                             workers[:] = ws
>                             args = [s] + workers
>                             break
>                     if s is False:
>                         raise Exception("Could not start cluster")
>                     if client:
>                         c = await Client(
>                             s.address,
>                             loop=loop,
>                             security=security,
>                             asynchronous=True,
>                             **client_kwargs
>                         )
>                         args = [c] + args
>                     try:
>                         future = func(*args)
>                         if timeout:
>                             future = asyncio.wait_for(future, timeout)
>                         result = await future
>                         if s.validate:
>                             s.validate_state()
>                     finally:
>                         if client and c.status not in ("closing", "closed"):
>                             await c._close(fast=s.status == "closed")
>                         await end_cluster(s, workers)
>                         await asyncio.wait_for(cleanup_global_workers(), 1)
>     
>                     try:
>                         c = await default_client()
>                     except ValueError:
>                         pass
>                     else:
>                         await c._close(fast=True)
>     
>                     for i in range(5):
>                         if all(c.closed() for c in Comm._instances):
>                             break
>                         else:
>                             await asyncio.sleep(0.05)
>                     else:
>                         L = [c for c in Comm._instances if not c.closed()]
>                         Comm._instances.clear()
>                         # raise ValueError("Unclosed Comms", L)
>                         print("Unclosed Comms", L)
>     
>                     return result
>     
> >           result = loop.run_sync(
>                 coro, timeout=timeout * 2 if timeout else timeout
>             )
> 
> /usr/lib/python3/dist-packages/distributed/utils_test.py:956: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> /usr/lib/python3/dist-packages/tornado/ioloop.py:532: in run_sync
>     return future_cell[0].result()
> /usr/lib/python3/dist-packages/distributed/utils_test.py:927: in coro
>     result = await future
> /usr/lib/python3.9/asyncio/tasks.py:476: in wait_for
>     return fut.result()
> /usr/lib/python3/dist-packages/tornado/gen.py:742: in run
>     yielded = self.gen.throw(*exc_info)  # type: ignore
> streamz/tests/test_dask.py:208: in test_starmap
>     yield source.emit((i, i))
> /usr/lib/python3/dist-packages/tornado/gen.py:735: in run
>     value = future.result()
> /usr/lib/python3/dist-packages/tornado/gen.py:501: in callback
>     result_list.append(f.result())
> /usr/lib/python3/dist-packages/tornado/gen.py:742: in run
>     yielded = self.gen.throw(*exc_info)  # type: ignore
> streamz/dask.py:113: in update
>     future_as_dict = yield client.scatter({tokenized_x: x}, asynchronous=True)
> /usr/lib/python3/dist-packages/tornado/gen.py:735: in run
>     value = future.result()
> /usr/lib/python3/dist-packages/distributed/client.py:1965: in _scatter
>     _, who_has, nbytes = await scatter_to_workers(
> /usr/lib/python3/dist-packages/distributed/utils_comm.py:144: in scatter_to_workers
>     out = await All(
> /usr/lib/python3/dist-packages/distributed/utils.py:235: in All
>     result = await tasks.next()
> /usr/lib/python3/dist-packages/distributed/core.py:757: in send_recv_from_rpc
>     result = await send_recv(comm=comm, op=key, **kwargs)
> /usr/lib/python3/dist-packages/distributed/core.py:540: in send_recv
>     response = await comm.read(deserializers=deserializers)
> /usr/lib/python3/dist-packages/distributed/comm/tcp.py:211: in read
>     msg = await from_frames(
> /usr/lib/python3/dist-packages/distributed/comm/utils.py:69: in from_frames
>     res = _from_frames()
> /usr/lib/python3/dist-packages/distributed/comm/utils.py:54: in _from_frames
>     return protocol.loads(
> /usr/lib/python3/dist-packages/distributed/protocol/core.py:106: in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> 
> >   ???
> E   ValueError: tuple is not allowed for map key
> 
> msgpack/_unpacker.pyx:195: ValueError
> ----------------------------- Captured stderr call -----------------------------
> distributed.protocol.core - CRITICAL - Failed to deserialize
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> distributed.core - ERROR - tuple is not allowed for map key
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/core.py", line 346, in handle_comm
>     msg = await comm.read()
>   File "/usr/lib/python3/dist-packages/distributed/comm/tcp.py", line 211, in read
>     msg = await from_frames(
>   File "/usr/lib/python3/dist-packages/distributed/comm/utils.py", line 69, in from_frames
>     res = _from_frames()
>   File "/usr/lib/python3/dist-packages/distributed/comm/utils.py", line 54, in _from_frames
>     return protocol.loads(
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> distributed.protocol.core - CRITICAL - Failed to deserialize
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/distributed/protocol/core.py", line 106, in loads
>     header = msgpack.loads(header, use_list=False, **msgpack_opts)
>   File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
> ValueError: tuple is not allowed for map key
> =============================== warnings summary ===============================
> /usr/lib/python3/dist-packages/_pytest/mark/structures.py:331
> /usr/lib/python3/dist-packages/_pytest/mark/structures.py:331
>   /usr/lib/python3/dist-packages/_pytest/mark/structures.py:331: PytestUnknownMarkWarning: Unknown pytest.mark.slow - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
>     warnings.warn(
> 
> streamz/tests/test_dask.py:139
>   /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/tests/test_dask.py:139: UserWarning: ncores= has moved to nthreads=
>     @gen_cluster(client=True, ncores=[('127.0.0.1', 1)] * 2)
> 
> .pybuild/cpython3_3.9_streamz/build/streamz/dataframe/tests/test_dataframes.py::test_windowing_n[<lambda>1-1-<lambda>4]
> .pybuild/cpython3_3.9_streamz/build/streamz/dataframe/tests/test_dataframes.py::test_windowing_n[<lambda>1-1-<lambda>5]
>   /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/dataframe/aggregations.py:99: RuntimeWarning: invalid value encountered in double_scalars
>     result = result * n / (n - self.ddof)
> 
> .pybuild/cpython3_3.9_streamz/build/streamz/dataframe/tests/test_dataframes.py::test_gc_random
>   /usr/lib/python3/dist-packages/distributed/deploy/spec.py:311: DeprecationWarning: The explicit passing of coroutine objects to asyncio.wait() is deprecated since Python 3.8, and scheduled for removal in Python 3.11.
>     await asyncio.wait(tasks)
> 
> -- Docs: https://docs.pytest.org/en/latest/warnings.html
> ===Flaky Test Report===
> 
> test_tcp passed 1 out of the required 1 times. Success!
> test_tcp_async passed 1 out of the required 1 times. Success!
> test_process failed (2 runs remaining out of 3).
> 	<class 'tornado.util.TimeoutError'>
> 	Operation timed out after 60 seconds
> 	[<TracebackEntry /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/utils_test.py:69>, <TracebackEntry /usr/lib/python3/dist-packages/tornado/ioloop.py:531>]
> test_process failed (1 runs remaining out of 3).
> 	<class 'tornado.util.TimeoutError'>
> 	Operation timed out after 60 seconds
> 	[<TracebackEntry /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/utils_test.py:69>, <TracebackEntry /usr/lib/python3/dist-packages/tornado/ioloop.py:531>]
> test_process failed; it passed 0 out of the required 1 times.
> 	<class 'tornado.util.TimeoutError'>
> 	Operation timed out after 60 seconds
> 	[<TracebackEntry /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build/streamz/utils_test.py:69>, <TracebackEntry /usr/lib/python3/dist-packages/tornado/ioloop.py:531>]
> 
> ===End Flaky Test Report===
> = 7 failed, 963 passed, 443 skipped, 10 xfailed, 99 xpassed, 6 warnings in 238.02 seconds =
> E: pybuild pybuild:352: test: plugin distutils failed with: exit code=1: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9_streamz/build; python3.9 -m pytest 
> dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p "3.9 3.8" returned exit code 13
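
For context: the repeated test failures above all bottom out in distributed's
protocol.loads() calling msgpack.loads() on a message header that contains a
tuple as a map key, which msgpack >= 1.0 rejects by default
(strict_map_key=True). Below is a minimal sketch reproducing that behaviour
outside the test suite; it assumes msgpack >= 1.0, and the strict_map_key
workaround mentioned in the comments (adopted by later dask.distributed
releases) is not taken from this log:

    import msgpack

    # A dict keyed by a tuple packs fine (tuples are encoded as msgpack arrays)...
    packed = msgpack.packb({("x", 0): 42})

    try:
        # ...but strict unpacking, as protocol.loads() does with use_list=False,
        # raises the same ValueError that aborts the tests above.
        msgpack.unpackb(packed, use_list=False)
    except ValueError as exc:
        print("unpack failed:", exc)  # "tuple is not allowed for map key"

    # Passing strict_map_key=False (the relaxation later dask.distributed
    # releases add to their msgpack options) accepts the tuple key again.
    print(msgpack.unpackb(packed, use_list=False, strict_map_key=False))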

The full build log is available from:
   http://qa-logs.debian.net/2020/10/27/python-streamz_0.6.0-1_unstable.log

A list of current common problems and possible solutions is available at
http://wiki.debian.org/qa.debian.org/FTBFS . You're welcome to contribute!

About the archive rebuild: The rebuild was done on EC2 VM instances from
Amazon Web Services, using a clean, minimal and up-to-date chroot. Every
failed build was retried once to eliminate random failures.


