[Debian-med-packaging] Bug#1002400: umap-learn: FTBFS: dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p 3.9 --system=custom "--test-args=PYTHONPATH={build_dir} {interpreter} -m pytest" returned exit code 13

Lucas Nussbaum lucas at debian.org
Wed Dec 22 07:58:09 GMT 2021


Source: umap-learn
Version: 0.4.5+dfsg-2
Severity: serious
Justification: FTBFS
Tags: bookworm sid ftbfs
User: lucas at debian.org
Usertags: ftbfs-20211220 ftbfs-bookworm

Hi,

During a rebuild of all packages in sid, your package failed to build
on amd64.


Relevant part (hopefully):
> make[1]: Entering directory '/<<PKGBUILDDIR>>'
> dh_auto_test -- --system=custom --test-args="PYTHONPATH={build_dir} {interpreter} -m pytest"
> I: pybuild base:237: PYTHONPATH=/<<PKGBUILDDIR>>/.pybuild/cpython3_3.9/build python3.9 -m pytest
> ============================= test session starts ==============================
> platform linux -- Python 3.9.9, pytest-6.2.5, py-1.10.0, pluggy-0.13.0
> rootdir: /<<PKGBUILDDIR>>
> collected 128 items
> 
> umap/tests/test_plot.py .xx                                              [  2%]
> umap/tests/test_umap_df_validation_params.py ...................         [ 17%]
> umap/tests/test_umap_metrics.py ........................................ [ 48%]
> .                                                                        [ 49%]
> umap/tests/test_umap_nn.py ............                                  [ 58%]
> umap/tests/test_umap_on_iris.py FFFFFF..                                 [ 64%]
> umap/tests/test_umap_ops.py ....                                         [ 67%]
> umap/tests/test_umap_repeated_data.py .........                          [ 75%]
> umap/tests/test_umap_trustworthiness.py FFFFFFFFF                        [ 82%]
> umap/tests/test_umap_validation_params.py .......................        [100%]
> 
> =================================== FAILURES ===================================
> ______________________ test_umap_trustworthiness_on_iris _______________________
> 
> iris = {'data': array([[5.1, 3.5, 1.4, 0.2],
>        [4.9, 3. , 1.4, 0.2],
>        [4.7, 3.2, 1.3, 0.2],
>        [4.6, 3.1, 1.5,... width (cm)', 'petal length (cm)', 'petal width (cm)'], 'filename': 'iris.csv', 'data_module': 'sklearn.datasets.data'}
> iris_model = UMAP(min_dist=0.01, n_neighbors=10, random_state=42)
> 
>     def test_umap_trustworthiness_on_iris(iris, iris_model):
>         embedding = iris_model.embedding_
> >       trust = trustworthiness(iris.data, embedding, 10)
> E       TypeError: trustworthiness() takes 2 positional arguments but 3 were given
> 
> umap/tests/test_umap_on_iris.py:27: TypeError
> ________________ test_initialized_umap_trustworthiness_on_iris _________________
> 
> iris = {'data': array([[5.1, 3.5, 1.4, 0.2],
>        [4.9, 3. , 1.4, 0.2],
>        [4.7, 3.2, 1.3, 0.2],
>        [4.6, 3.1, 1.5,... width (cm)', 'petal length (cm)', 'petal width (cm)'], 'filename': 'iris.csv', 'data_module': 'sklearn.datasets.data'}
> 
>     def test_initialized_umap_trustworthiness_on_iris(iris):
>         data = iris.data
>         embedding = UMAP(
>             n_neighbors=10, min_dist=0.01, init=data[:, 2:], n_epochs=200, random_state=42
>         ).fit_transform(data)
> >       trust = trustworthiness(iris.data, embedding, 10)
> E       TypeError: trustworthiness() takes 2 positional arguments but 3 were given
> 
> umap/tests/test_umap_on_iris.py:40: TypeError
> ___________________ test_umap_trustworthiness_on_sphere_iris ___________________
> 
> iris = {'data': array([[5.1, 3.5, 1.4, 0.2],
>        [4.9, 3. , 1.4, 0.2],
>        [4.7, 3.2, 1.3, 0.2],
>        [4.6, 3.1, 1.5,... width (cm)', 'petal length (cm)', 'petal width (cm)'], 'filename': 'iris.csv', 'data_module': 'sklearn.datasets.data'}
> 
>     def test_umap_trustworthiness_on_sphere_iris(iris,):
>         data = iris.data
>         embedding = UMAP(
>             n_neighbors=10,
>             min_dist=0.01,
>             n_epochs=200,
>             random_state=42,
>             output_metric="haversine",
>         ).fit_transform(data)
>         # Since trustworthiness doesn't support haversine, project onto
>         # a 3D embedding of the sphere and use cosine distance
>         r = 3
>         projected_embedding = np.vstack(
>             [
>                 r * np.sin(embedding[:, 0]) * np.cos(embedding[:, 1]),
>                 r * np.sin(embedding[:, 0]) * np.sin(embedding[:, 1]),
>                 r * np.cos(embedding[:, 0]),
>             ]
>         ).T
> >       trust = trustworthiness(iris.data, projected_embedding, 10, metric="cosine")
> E       TypeError: trustworthiness() takes 2 positional arguments but 3 positional arguments (and 1 keyword-only argument) were given
> 
> umap/tests/test_umap_on_iris.py:67: TypeError
> _________________________ test_umap_transform_on_iris __________________________
> 
> iris = {'data': array([[5.1, 3.5, 1.4, 0.2],
>        [4.9, 3. , 1.4, 0.2],
>        [4.7, 3.2, 1.3, 0.2],
>        [4.6, 3.1, 1.5,... width (cm)', 'petal length (cm)', 'petal width (cm)'], 'filename': 'iris.csv', 'data_module': 'sklearn.datasets.data'}
> iris_selection = array([ True,  True,  True,  True, False,  True,  True, False,  True,
>         True,  True,  True,  True,  True,  True,...        True,  True,  True,  True,  True,  True,  True, False, False,
>         True,  True, False,  True, False, False])
> 
>     def test_umap_transform_on_iris(iris, iris_selection):
>         data = iris.data[iris_selection]
>         fitter = UMAP(n_neighbors=10, min_dist=0.01, n_epochs=200, random_state=42).fit(
>             data
>         )
>     
>         new_data = iris.data[~iris_selection]
>         embedding = fitter.transform(new_data)
>     
> >       trust = trustworthiness(new_data, embedding, 10)
> E       TypeError: trustworthiness() takes 2 positional arguments but 3 were given
> 
> umap/tests/test_umap_on_iris.py:88: TypeError
> __________________ test_umap_transform_on_iris_modified_dtype __________________
> 
> iris = {'data': array([[5.1, 3.5, 1.4, 0.2],
>        [4.9, 3. , 1.4, 0.2],
>        [4.7, 3.2, 1.3, 0.2],
>        [4.6, 3.1, 1.5,... width (cm)', 'petal length (cm)', 'petal width (cm)'], 'filename': 'iris.csv', 'data_module': 'sklearn.datasets.data'}
> iris_selection = array([ True,  True,  True,  True, False,  True,  True, False,  True,
>         True,  True,  True,  True,  True,  True,...        True,  True,  True,  True,  True,  True,  True, False, False,
>         True,  True, False,  True, False, False])
> 
>     def test_umap_transform_on_iris_modified_dtype(iris, iris_selection):
>         data = iris.data[iris_selection]
>         fitter = UMAP(n_neighbors=10, min_dist=0.01, random_state=42).fit(data)
>         fitter.embedding_ = fitter.embedding_.astype(np.float64)
>     
>         new_data = iris.data[~iris_selection]
>         embedding = fitter.transform(new_data)
>     
> >       trust = trustworthiness(new_data, embedding, 10)
> E       TypeError: trustworthiness() takes 2 positional arguments but 3 were given
> 
> umap/tests/test_umap_on_iris.py:104: TypeError
> ______________________ test_umap_sparse_transform_on_iris ______________________
> 
> iris = {'data': array([[5.1, 3.5, 1.4, 0.2],
>        [4.9, 3. , 1.4, 0.2],
>        [4.7, 3.2, 1.3, 0.2],
>        [4.6, 3.1, 1.5,... width (cm)', 'petal length (cm)', 'petal width (cm)'], 'filename': 'iris.csv', 'data_module': 'sklearn.datasets.data'}
> iris_selection = array([ True,  True,  True,  True, False,  True,  True, False,  True,
>         True,  True,  True,  True,  True,  True,...        True,  True,  True,  True,  True,  True,  True, False, False,
>         True,  True, False,  True, False, False])
> 
>     def test_umap_sparse_transform_on_iris(iris, iris_selection):
>         data = sparse.csr_matrix(iris.data[iris_selection])
>         assert sparse.issparse(data)
>         fitter = UMAP(
>             n_neighbors=10,
>             min_dist=0.01,
>             random_state=42,
>             n_epochs=100,
>             force_approximation_algorithm=True,
>         ).fit(data)
>     
>         new_data = sparse.csr_matrix(iris.data[~iris_selection])
>         assert sparse.issparse(new_data)
>         embedding = fitter.transform(new_data)
>     
> >       trust = trustworthiness(new_data, embedding, 10)
> E       TypeError: trustworthiness() takes 2 positional arguments but 3 were given
> 
> umap/tests/test_umap_on_iris.py:127: TypeError
> _______________________ test_umap_sparse_trustworthiness _______________________
> 
> sparse_test_data = <1002x5 sparse matrix of type '<class 'numpy.float64'>'
> 	with 1672 stored elements in Compressed Sparse Row format>
> 
>     def test_umap_sparse_trustworthiness(sparse_test_data):
>         embedding = UMAP(n_neighbors=10).fit_transform(sparse_test_data[:100])
> >       trust = trustworthiness(sparse_test_data[:100].toarray(), embedding, 10)
> E       TypeError: trustworthiness() takes 2 positional arguments but 3 were given
> 
> umap/tests/test_umap_trustworthiness.py:23: TypeError
> ____________________ test_umap_trustworthiness_fast_approx _____________________
> 
> nn_data = array([[0.37454012, 0.95071431, 0.73199394, 0.59865848, 0.15601864],
>        [0.15599452, 0.05808361, 0.86617615, 0.601... 0.        , 0.        , 0.        , 0.        ],
>        [0.        , 0.        , 0.        , 0.        , 0.        ]])
> 
>     def test_umap_trustworthiness_fast_approx(nn_data):
>         data = nn_data[:50]
>         embedding = UMAP(
>             n_neighbors=10,
>             min_dist=0.01,
>             random_state=42,
>             n_epochs=100,
>             force_approximation_algorithm=True,
>         ).fit_transform(data)
> >       trust = trustworthiness(data, embedding, 10)
> E       TypeError: trustworthiness() takes 2 positional arguments but 3 were given
> 
> umap/tests/test_umap_trustworthiness.py:41: TypeError
> ____________________ test_umap_trustworthiness_random_init _____________________
> 
> nn_data = array([[0.37454012, 0.95071431, 0.73199394, 0.59865848, 0.15601864],
>        [0.15599452, 0.05808361, 0.86617615, 0.601... 0.        , 0.        , 0.        , 0.        ],
>        [0.        , 0.        , 0.        , 0.        , 0.        ]])
> 
>     def test_umap_trustworthiness_random_init(nn_data):
>         data = nn_data[:50]
>         embedding = UMAP(
>             n_neighbors=10, min_dist=0.01, random_state=42, init="random"
>         ).fit_transform(data)
> >       trust = trustworthiness(data, embedding, 10)
> E       TypeError: trustworthiness() takes 2 positional arguments but 3 were given
> 
> umap/tests/test_umap_trustworthiness.py:54: TypeError
> _____________________ test_supervised_umap_trustworthiness _____________________
> 
>     def test_supervised_umap_trustworthiness():
>         data, labels = make_blobs(50, cluster_std=0.5, random_state=42)
>         embedding = UMAP(n_neighbors=10, min_dist=0.01, random_state=42).fit_transform(
>             data, labels
>         )
> >       trust = trustworthiness(data, embedding, 10)
> E       TypeError: trustworthiness() takes 2 positional arguments but 3 were given
> 
> umap/tests/test_umap_trustworthiness.py:67: TypeError
> ___________________ test_semisupervised_umap_trustworthiness ___________________
> 
>     def test_semisupervised_umap_trustworthiness():
>         data, labels = make_blobs(50, cluster_std=0.5, random_state=42)
>         labels[10:30] = -1
>         embedding = UMAP(n_neighbors=10, min_dist=0.01, random_state=42).fit_transform(
>             data, labels
>         )
> >       trust = trustworthiness(data, embedding, 10)
> E       TypeError: trustworthiness() takes 2 positional arguments but 3 were given
> 
> umap/tests/test_umap_trustworthiness.py:81: TypeError
> _________________ test_metric_supervised_umap_trustworthiness __________________
> 
>     def test_metric_supervised_umap_trustworthiness():
>         data, labels = make_blobs(50, cluster_std=0.5, random_state=42)
>         embedding = UMAP(
>             n_neighbors=10,
>             min_dist=0.01,
>             target_metric="l1",
>             target_weight=0.8,
>             n_epochs=100,
>             random_state=42,
>         ).fit_transform(data, labels)
> >       trust = trustworthiness(data, embedding, 10)
> E       TypeError: trustworthiness() takes 2 positional arguments but 3 were given
> 
> umap/tests/test_umap_trustworthiness.py:99: TypeError
> ______________ test_string_metric_supervised_umap_trustworthiness ______________
> 
> array = array(['this', 'this', 'that', 'other', 'this', 'this', 'this', 'this',
>        'that', 'other', 'that', 'that', 'that'...other',
>        'other', 'this', 'other', 'this', 'this', 'other', 'this', 'other',
>        'that', 'that'], dtype='<U5')
> accept_sparse = False
> 
>     def check_array(
>         array,
>         accept_sparse=False,
>         *,
>         accept_large_sparse=True,
>         dtype="numeric",
>         order=None,
>         copy=False,
>         force_all_finite=True,
>         ensure_2d=True,
>         allow_nd=False,
>         ensure_min_samples=1,
>         ensure_min_features=1,
>         estimator=None,
>     ):
>     
>         """Input validation on an array, list, sparse matrix or similar.
>     
>         By default, the input is checked to be a non-empty 2D array containing
>         only finite values. If the dtype of the array is object, attempt
>         converting to float, raising on failure.
>     
>         Parameters
>         ----------
>         array : object
>             Input object to check / convert.
>     
>         accept_sparse : str, bool or list/tuple of str, default=False
>             String[s] representing allowed sparse matrix formats, such as 'csc',
>             'csr', etc. If the input is sparse but not in the allowed format,
>             it will be converted to the first listed format. True allows the input
>             to be any format. False means that a sparse matrix input will
>             raise an error.
>     
>         accept_large_sparse : bool, default=True
>             If a CSR, CSC, COO or BSR sparse matrix is supplied and accepted by
>             accept_sparse, accept_large_sparse=False will cause it to be accepted
>             only if its indices are stored with a 32-bit dtype.
>     
>             .. versionadded:: 0.20
>     
>         dtype : 'numeric', type, list of type or None, default='numeric'
>             Data type of result. If None, the dtype of the input is preserved.
>             If "numeric", dtype is preserved unless array.dtype is object.
>             If dtype is a list of types, conversion on the first type is only
>             performed if the dtype of the input is not in the list.
>     
>         order : {'F', 'C'} or None, default=None
>             Whether an array will be forced to be fortran or c-style.
>             When order is None (default), then if copy=False, nothing is ensured
>             about the memory layout of the output array; otherwise (copy=True)
>             the memory layout of the returned array is kept as close as possible
>             to the original array.
>     
>         copy : bool, default=False
>             Whether a forced copy will be triggered. If copy=False, a copy might
>             be triggered by a conversion.
>     
>         force_all_finite : bool or 'allow-nan', default=True
>             Whether to raise an error on np.inf, np.nan, pd.NA in array. The
>             possibilities are:
>     
>             - True: Force all values of array to be finite.
>             - False: accepts np.inf, np.nan, pd.NA in array.
>             - 'allow-nan': accepts only np.nan and pd.NA values in array. Values
>               cannot be infinite.
>     
>             .. versionadded:: 0.20
>                ``force_all_finite`` accepts the string ``'allow-nan'``.
>     
>             .. versionchanged:: 0.23
>                Accepts `pd.NA` and converts it into `np.nan`
>     
>         ensure_2d : bool, default=True
>             Whether to raise a value error if array is not 2D.
>     
>         allow_nd : bool, default=False
>             Whether to allow array.ndim > 2.
>     
>         ensure_min_samples : int, default=1
>             Make sure that the array has a minimum number of samples in its first
>             axis (rows for a 2D array). Setting to 0 disables this check.
>     
>         ensure_min_features : int, default=1
>             Make sure that the 2D array has some minimum number of features
>             (columns). The default value of 1 rejects empty datasets.
>             This check is only enforced when the input data has effectively 2
>             dimensions or is originally 1D and ``ensure_2d`` is True. Setting to 0
>             disables this check.
>     
>         estimator : str or estimator instance, default=None
>             If passed, include the name of the estimator in warning messages.
>     
>         Returns
>         -------
>         array_converted : object
>             The converted and validated array.
>         """
>         if isinstance(array, np.matrix):
>             warnings.warn(
>                 "np.matrix usage is deprecated in 1.0 and will raise a TypeError "
>                 "in 1.2. Please convert to a numpy array with np.asarray. For "
>                 "more information see: "
>                 "https://numpy.org/doc/stable/reference/generated/numpy.matrix.html",  # noqa
>                 FutureWarning,
>             )
>     
>         # store reference to original array to check if copy is needed when
>         # function returns
>         array_orig = array
>     
>         # store whether originally we wanted numeric dtype
>         dtype_numeric = isinstance(dtype, str) and dtype == "numeric"
>     
>         dtype_orig = getattr(array, "dtype", None)
>         if not hasattr(dtype_orig, "kind"):
>             # not a data type (e.g. a column named dtype in a pandas DataFrame)
>             dtype_orig = None
>     
>         # check if the object contains several dtypes (typically a pandas
>         # DataFrame), and store them. If not, store None.
>         dtypes_orig = None
>         has_pd_integer_array = False
>         if hasattr(array, "dtypes") and hasattr(array.dtypes, "__array__"):
>             # throw warning if columns are sparse. If all columns are sparse, then
>             # array.sparse exists and sparsity will be preserved (later).
>             with suppress(ImportError):
>                 from pandas.api.types import is_sparse
>     
>                 if not hasattr(array, "sparse") and array.dtypes.apply(is_sparse).any():
>                     warnings.warn(
>                         "pandas.DataFrame with sparse columns found."
>                         "It will be converted to a dense numpy array."
>                     )
>     
>             dtypes_orig = list(array.dtypes)
>             # pandas boolean dtype __array__ interface coerces bools to objects
>             for i, dtype_iter in enumerate(dtypes_orig):
>                 if dtype_iter.kind == "b":
>                     dtypes_orig[i] = np.dtype(object)
>                 elif dtype_iter.name.startswith(("Int", "UInt")):
>                     # name looks like an Integer Extension Array, now check for
>                     # the dtype
>                     with suppress(ImportError):
>                         from pandas import (
>                             Int8Dtype,
>                             Int16Dtype,
>                             Int32Dtype,
>                             Int64Dtype,
>                             UInt8Dtype,
>                             UInt16Dtype,
>                             UInt32Dtype,
>                             UInt64Dtype,
>                         )
>     
>                         if isinstance(
>                             dtype_iter,
>                             (
>                                 Int8Dtype,
>                                 Int16Dtype,
>                                 Int32Dtype,
>                                 Int64Dtype,
>                                 UInt8Dtype,
>                                 UInt16Dtype,
>                                 UInt32Dtype,
>                                 UInt64Dtype,
>                             ),
>                         ):
>                             has_pd_integer_array = True
>     
>             if all(isinstance(dtype, np.dtype) for dtype in dtypes_orig):
>                 dtype_orig = np.result_type(*dtypes_orig)
>     
>         if dtype_numeric:
>             if dtype_orig is not None and dtype_orig.kind == "O":
>                 # if input is object, convert to float.
>                 dtype = np.float64
>             else:
>                 dtype = None
>     
>         if isinstance(dtype, (list, tuple)):
>             if dtype_orig is not None and dtype_orig in dtype:
>                 # no dtype conversion required
>                 dtype = None
>             else:
>                 # dtype conversion required. Let's select the first element of the
>                 # list of accepted types.
>                 dtype = dtype[0]
>     
>         if has_pd_integer_array:
>             # If there are any pandas integer extension arrays,
>             array = array.astype(dtype)
>     
>         if force_all_finite not in (True, False, "allow-nan"):
>             raise ValueError(
>                 'force_all_finite should be a bool or "allow-nan". Got {!r} instead'.format(
>                     force_all_finite
>                 )
>             )
>     
>         if estimator is not None:
>             if isinstance(estimator, str):
>                 estimator_name = estimator
>             else:
>                 estimator_name = estimator.__class__.__name__
>         else:
>             estimator_name = "Estimator"
>         context = " by %s" % estimator_name if estimator is not None else ""
>     
>         # When all dataframe columns are sparse, convert to a sparse array
>         if hasattr(array, "sparse") and array.ndim > 1:
>             # DataFrame.sparse only supports `to_coo`
>             array = array.sparse.to_coo()
>             if array.dtype == np.dtype("object"):
>                 unique_dtypes = set([dt.subtype.name for dt in array_orig.dtypes])
>                 if len(unique_dtypes) > 1:
>                     raise ValueError(
>                         "Pandas DataFrame with mixed sparse extension arrays "
>                         "generated a sparse matrix with object dtype which "
>                         "can not be converted to a scipy sparse matrix."
>                         "Sparse extension arrays should all have the same "
>                         "numeric type."
>                     )
>     
>         if sp.issparse(array):
>             _ensure_no_complex_data(array)
>             array = _ensure_sparse_format(
>                 array,
>                 accept_sparse=accept_sparse,
>                 dtype=dtype,
>                 copy=copy,
>                 force_all_finite=force_all_finite,
>                 accept_large_sparse=accept_large_sparse,
>             )
>         else:
>             # If np.array(..) gives ComplexWarning, then we convert the warning
>             # to an error. This is needed because specifying a non complex
>             # dtype to the function converts complex to real dtype,
>             # thereby passing the test made in the lines following the scope
>             # of warnings context manager.
>             with warnings.catch_warnings():
>                 try:
>                     warnings.simplefilter("error", ComplexWarning)
>                     if dtype is not None and np.dtype(dtype).kind in "iu":
>                         # Conversion float -> int should not contain NaN or
>                         # inf (numpy#14412). We cannot use casting='safe' because
>                         # then conversion float -> int would be disallowed.
>                         array = np.asarray(array, order=order)
>                         if array.dtype.kind == "f":
>                             _assert_all_finite(array, allow_nan=False, msg_dtype=dtype)
>                         array = array.astype(dtype, casting="unsafe", copy=False)
>                     else:
>                         array = np.asarray(array, order=order, dtype=dtype)
>                 except ComplexWarning as complex_warning:
>                     raise ValueError(
>                         "Complex data not supported\n{}\n".format(array)
>                     ) from complex_warning
>     
>             # It is possible that the np.array(..) gave no warning. This happens
>             # when no dtype conversion happened, for example dtype = None. The
>             # result is that np.array(..) produces an array of complex dtype
>             # and we need to catch and raise exception for such cases.
>             _ensure_no_complex_data(array)
>     
>             if ensure_2d:
>                 # If input is scalar raise error
>                 if array.ndim == 0:
>                     raise ValueError(
>                         "Expected 2D array, got scalar array instead:\narray={}.\n"
>                         "Reshape your data either using array.reshape(-1, 1) if "
>                         "your data has a single feature or array.reshape(1, -1) "
>                         "if it contains a single sample.".format(array)
>                     )
>                 # If input is 1D raise error
>                 if array.ndim == 1:
>                     raise ValueError(
>                         "Expected 2D array, got 1D array instead:\narray={}.\n"
>                         "Reshape your data either using array.reshape(-1, 1) if "
>                         "your data has a single feature or array.reshape(1, -1) "
>                         "if it contains a single sample.".format(array)
>                     )
>     
>             # make sure we actually converted to numeric:
>             if dtype_numeric and array.dtype.kind in "OUSV":
>                 warnings.warn(
>                     "Arrays of bytes/strings is being converted to decimal "
>                     "numbers if dtype='numeric'. This behavior is deprecated in "
>                     "0.24 and will be removed in 1.1 (renaming of 0.26). Please "
>                     "convert your data to numeric values explicitly instead.",
>                     FutureWarning,
>                     stacklevel=2,
>                 )
>                 try:
> >                   array = array.astype(np.float64)
> E                   ValueError: could not convert string to float: 'this'
> 
> /usr/lib/python3/dist-packages/sklearn/utils/validation.py:779: ValueError
> 
> The above exception was the direct cause of the following exception:
> 
>     def test_string_metric_supervised_umap_trustworthiness():
>         data, labels = make_blobs(50, cluster_std=0.5, random_state=42)
>         labels = np.array(["this", "that", "other"])[labels]
> >       embedding = UMAP(
>             n_neighbors=10,
>             min_dist=0.01,
>             target_metric="string",
>             target_weight=0.8,
>             n_epochs=100,
>             random_state=42,
>         ).fit_transform(data, labels)
> 
> umap/tests/test_umap_trustworthiness.py:110: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> umap/umap_.py:2014: in fit_transform
>     self.fit(X, y)
> umap/umap_.py:1867: in fit
>     y_ = check_array(y, ensure_2d=False)[index]
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> 
> array = array(['this', 'this', 'that', 'other', 'this', 'this', 'this', 'this',
>        'that', 'other', 'that', 'that', 'that'...other',
>        'other', 'this', 'other', 'this', 'this', 'other', 'this', 'other',
>        'that', 'that'], dtype='<U5')
> accept_sparse = False
> 
>     def check_array(array, accept_sparse=False, *, accept_large_sparse=True,
>                     dtype="numeric", order=None, copy=False, force_all_finite=True,
>                     ensure_2d=True, allow_nd=False, ensure_min_samples=1,
>                     ensure_min_features=1, estimator=None):
>         [... body identical to the check_array listing in the traceback above,
>         through the FutureWarning about converting arrays of bytes/strings ...]
>                 try:
>                     array = array.astype(np.float64)
>                 except ValueError as e:
> >                   raise ValueError(
>                         "Unable to convert array of bytes/strings "
>                         "into decimal numbers with dtype='numeric'"
>                     ) from e
> E                   ValueError: Unable to convert array of bytes/strings into decimal numbers with dtype='numeric'
> 
> /usr/lib/python3/dist-packages/sklearn/utils/validation.py:781: ValueError
> _____________ test_discrete_metric_supervised_umap_trustworthiness _____________
> 
>     def test_discrete_metric_supervised_umap_trustworthiness():
>         data, labels = make_blobs(50, cluster_std=0.5, random_state=42)
>         embedding = UMAP(
>             n_neighbors=10,
>             min_dist=0.01,
>             target_metric="ordinal",
>             target_weight=0.8,
>             n_epochs=100,
>             random_state=42,
>         ).fit_transform(data, labels)
> >       trust = trustworthiness(data, embedding, 10)
> E       TypeError: trustworthiness() takes 2 positional arguments but 3 were given
> 
> umap/tests/test_umap_trustworthiness.py:136: TypeError
> ______________ test_count_metric_supervised_umap_trustworthiness _______________
> 
>     def test_count_metric_supervised_umap_trustworthiness():
>         data, labels = make_blobs(50, cluster_std=0.5, random_state=42)
>         labels = (labels ** 2) + 2 * labels
>         embedding = UMAP(
>             n_neighbors=10,
>             min_dist=0.01,
>             target_metric="count",
>             target_weight=0.8,
>             n_epochs=100,
>             random_state=42,
>         ).fit_transform(data, labels)
> >       trust = trustworthiness(data, embedding, 10)
> E       TypeError: trustworthiness() takes 2 positional arguments but 3 were given
> 
> umap/tests/test_umap_trustworthiness.py:155: TypeError
> =============================== warnings summary ===============================
> ../../../usr/lib/python3/dist-packages/numba/core/types/__init__.py:108
>   /usr/lib/python3/dist-packages/numba/core/types/__init__.py:108: DeprecationWarning: `np.long` is a deprecated alias for `np.compat.long`. To silence this warning, use `np.compat.long` by itself. In the likely event your code does not need to work on Python 2 you can use the builtin `int` for which `np.compat.long` is itself an alias. Doing this will not modify any behaviour and is safe. When replacing `np.long`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
>   Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
>     long_ = _make_signed(np.long)
> 
> ../../../usr/lib/python3/dist-packages/numba/core/types/__init__.py:109
>   /usr/lib/python3/dist-packages/numba/core/types/__init__.py:109: DeprecationWarning: `np.long` is a deprecated alias for `np.compat.long`. To silence this warning, use `np.compat.long` by itself. In the likely event your code does not need to work on Python 2 you can use the builtin `int` for which `np.compat.long` is itself an alias. Doing this will not modify any behaviour and is safe. When replacing `np.long`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
>   Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
>     ulong = _make_unsigned(np.long)
> 
> umap/distances.py:24
>   /<<PKGBUILDDIR>>/umap/distances.py:24: DeprecationWarning: invalid escape sequence \s
>     """Standard euclidean distance.
> 
> umap/distances.py:37
>   /<<PKGBUILDDIR>>/umap/distances.py:37: DeprecationWarning: invalid escape sequence \s
>     """Standard euclidean distance and its gradient.
> 
> umap/distances.py:53
>   /<<PKGBUILDDIR>>/umap/distances.py:53: DeprecationWarning: invalid escape sequence \s
>     """Euclidean distance standardised against a vector of standard
> 
> umap/distances.py:68
>   /<<PKGBUILDDIR>>/umap/distances.py:68: DeprecationWarning: invalid escape sequence \s
>     """Euclidean distance standardised against a vector of standard
> 
> umap/distances.py:84
>   /<<PKGBUILDDIR>>/umap/distances.py:84: DeprecationWarning: invalid escape sequence \s
>     """Manhattan, taxicab, or l1 distance.
> 
> umap/distances.py:98
>   /<<PKGBUILDDIR>>/umap/distances.py:98: DeprecationWarning: invalid escape sequence \s
>     """Manhattan, taxicab, or l1 distance with gradient.
> 
> umap/distances.py:113
>   /<<PKGBUILDDIR>>/umap/distances.py:113: DeprecationWarning: invalid escape sequence \m
>     """Chebyshev or l-infinity distance.
> 
> umap/distances.py:127
>   /<<PKGBUILDDIR>>/umap/distances.py:127: DeprecationWarning: invalid escape sequence \m
>     """Chebyshev or l-infinity distance with gradient.
> 
> umap/distances.py:147
>   /<<PKGBUILDDIR>>/umap/distances.py:147: DeprecationWarning: invalid escape sequence \l
>     """Minkowski distance.
> 
> umap/distances.py:166
>   /<<PKGBUILDDIR>>/umap/distances.py:166: DeprecationWarning: invalid escape sequence \l
>     """Minkowski distance with gradient.
> 
> umap/distances.py:193
>   /<<PKGBUILDDIR>>/umap/distances.py:193: DeprecationWarning: invalid escape sequence \d
>     """Poincare distance.
> 
> umap/distances.py:230
>   /<<PKGBUILDDIR>>/umap/distances.py:230: DeprecationWarning: invalid escape sequence \l
>     """A weighted version of Minkowski distance.
> 
> umap/distances.py:248
>   /<<PKGBUILDDIR>>/umap/distances.py:248: DeprecationWarning: invalid escape sequence \l
>     """A weighted version of Minkowski distance with gradient.
> 
> umap/distances.py:752
>   /<<PKGBUILDDIR>>/umap/distances.py:752: DeprecationWarning: invalid escape sequence \l
>     """
> 
> ../../../usr/lib/python3/dist-packages/nose/importer.py:12
>   /usr/lib/python3/dist-packages/nose/importer.py:12: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
>     from imp import find_module, load_module, acquire_lock, release_lock
> 
> umap/tests/test_plot.py::test_umap_plot_dependency
>   /<<PKGBUILDDIR>>/umap/plot.py:20: UserWarning: The umap.plot package requires extra plotting libraries to be installed.
>       You can install these via pip using
>   
>       pip install umap-learn[plot]
>   
>       or via conda using
>   
>        conda install pandas matplotlib datashader bokeh holoviews colorcet
>       
>     warn(
> 
> umap/tests/test_umap_metrics.py: 16 warnings
> umap/tests/test_umap_nn.py: 4 warnings
>   /usr/lib/python3/dist-packages/sklearn/utils/validation.py:585: FutureWarning: np.matrix usage is deprecated in 1.0 and will raise a TypeError in 1.2. Please convert to a numpy array with np.asarray. For more information see: https://numpy.org/doc/stable/reference/generated/numpy.matrix.html
>     warnings.warn(
> 
> umap/tests/test_umap_metrics.py::test_weighted_minkowski
> umap/tests/test_umap_metrics.py::test_grad_metrics_match_metrics
>   /usr/lib/python3/dist-packages/scipy/spatial/distance.py:275: DeprecationWarning: 'wminkowski' metric is deprecated and will be removed in SciPy 1.8.0, use 'minkowski' instead.
>     kwargs = _validate_kwargs(X, m, n, **kwargs)
> 
> umap/tests/test_umap_metrics.py::test_hellinger
>   /<<PKGBUILDDIR>>/umap/tests/test_umap_metrics.py:396: RuntimeWarning: invalid value encountered in sqrt
>     dist_matrix = np.sqrt(dist_matrix)
> 
> umap/tests/test_umap_nn.py::test_smooth_knn_dist_l1norms
> umap/tests/test_umap_on_iris.py::test_umap_trustworthiness_on_iris
>   /usr/lib/python3/dist-packages/numba/core/consts.py:114: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
>   Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
>     return getattr(value, expr.attr)
> 
> umap/tests/test_umap_nn.py::test_smooth_knn_dist_l1norms
> umap/tests/test_umap_nn.py::test_smooth_knn_dist_l1norms
> umap/tests/test_umap_on_iris.py::test_umap_trustworthiness_on_iris
> umap/tests/test_umap_on_iris.py::test_umap_trustworthiness_on_iris
>   /usr/lib/python3/dist-packages/numba/core/ir_utils.py:2097: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
>   Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
>     defn = getattr(defn, x, False)
> 
> umap/tests/test_umap_nn.py::test_smooth_knn_dist_l1norms
> umap/tests/test_umap_nn.py::test_smooth_knn_dist_l1norms
> umap/tests/test_umap_nn.py::test_smooth_knn_dist_l1norms
> umap/tests/test_umap_nn.py::test_smooth_knn_dist_l1norms
> umap/tests/test_umap_on_iris.py::test_umap_trustworthiness_on_iris
> umap/tests/test_umap_on_iris.py::test_umap_trustworthiness_on_iris
> umap/tests/test_umap_on_iris.py::test_umap_trustworthiness_on_iris
> umap/tests/test_umap_on_iris.py::test_umap_trustworthiness_on_iris
>   /usr/lib/python3/dist-packages/numba/core/typing/context.py:338: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
>   Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
>     attrval = getattr(typ.pymod, attr)
> 
> umap/tests/test_umap_on_iris.py::test_umap_trustworthiness_on_iris
>   /usr/lib/python3/dist-packages/numba/np/arrayobj.py:3843: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
>   Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
>     NPY_TY = getattr(types, "int%s" % (8 * np.dtype(np.int).itemsize))
> 
> umap/tests/test_umap_ops.py::test_multi_component_layout
>   /usr/lib/python3/dist-packages/sklearn/manifold/_spectral_embedding.py:260: UserWarning: Graph is not fully connected, spectral embedding may not work as expected.
>     warnings.warn(
> 
> umap/tests/test_umap_repeated_data.py::test_repeated_points_large_sparse_spatial
> umap/tests/test_umap_repeated_data.py::test_repeated_points_small_sparse_spatial
> umap/tests/test_umap_repeated_data.py::test_repeated_points_large_sparse_binary
> umap/tests/test_umap_repeated_data.py::test_repeated_points_small_sparse_binary
>   /usr/lib/python3/dist-packages/numpy/lib/arraysetops.py:270: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
>     ar = np.asanyarray(ar)
> 
> umap/tests/test_umap_repeated_data.py::test_repeated_points_large_n
> umap/tests/test_umap_validation_params.py::test_umap_too_many_neighbors_warns
>   /<<PKGBUILDDIR>>/umap/umap_.py:1678: UserWarning: n_neighbors is larger than the dataset size; truncating to X.shape[0] - 1
>     warn(
> 
> umap/tests/test_umap_trustworthiness.py::test_string_metric_supervised_umap_trustworthiness
>   /<<PKGBUILDDIR>>/umap/umap_.py:1867: FutureWarning: Arrays of bytes/strings is being converted to decimal numbers if dtype='numeric'. This behavior is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). Please convert your data to numeric values explicitly instead.
>     y_ = check_array(y, ensure_2d=False)[index]
> 
> -- Docs: https://docs.pytest.org/en/stable/warnings.html
> =========================== short test summary info ============================
> FAILED umap/tests/test_umap_on_iris.py::test_umap_trustworthiness_on_iris - T...
> FAILED umap/tests/test_umap_on_iris.py::test_initialized_umap_trustworthiness_on_iris
> FAILED umap/tests/test_umap_on_iris.py::test_umap_trustworthiness_on_sphere_iris
> FAILED umap/tests/test_umap_on_iris.py::test_umap_transform_on_iris - TypeErr...
> FAILED umap/tests/test_umap_on_iris.py::test_umap_transform_on_iris_modified_dtype
> FAILED umap/tests/test_umap_on_iris.py::test_umap_sparse_transform_on_iris - ...
> FAILED umap/tests/test_umap_trustworthiness.py::test_umap_sparse_trustworthiness
> FAILED umap/tests/test_umap_trustworthiness.py::test_umap_trustworthiness_fast_approx
> FAILED umap/tests/test_umap_trustworthiness.py::test_umap_trustworthiness_random_init
> FAILED umap/tests/test_umap_trustworthiness.py::test_supervised_umap_trustworthiness
> FAILED umap/tests/test_umap_trustworthiness.py::test_semisupervised_umap_trustworthiness
> FAILED umap/tests/test_umap_trustworthiness.py::test_metric_supervised_umap_trustworthiness
> FAILED umap/tests/test_umap_trustworthiness.py::test_string_metric_supervised_umap_trustworthiness
> FAILED umap/tests/test_umap_trustworthiness.py::test_discrete_metric_supervised_umap_trustworthiness
> FAILED umap/tests/test_umap_trustworthiness.py::test_count_metric_supervised_umap_trustworthiness
> ====== 15 failed, 111 passed, 2 xfailed, 64 warnings in 94.86s (0:01:34) =======
> E: pybuild pybuild:355: test: plugin custom failed with: exit code=1: PYTHONPATH=/<<PKGBUILDDIR>>/.pybuild/cpython3_3.9/build python3.9 -m pytest
> dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p 3.9 --system=custom "--test-args=PYTHONPATH={build_dir} {interpreter} -m pytest" returned exit code 13
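
For whoever picks this up: fourteen of the fifteen failures share the same
TypeError, which looks like fallout from scikit-learn 1.0 making everything
after the first two parameters of sklearn.manifold.trustworthiness
keyword-only, while the tests still pass n_neighbors positionally. The
remaining failure (test_string_metric_supervised_umap_trustworthiness) trips
over check_array()'s default dtype="numeric", which in this scikit-learn
version refuses to coerce string labels. Assuming those are the causes, here
is a minimal sketch of the kind of change the call sites would need; the
stand-in embedding below is illustrative, not taken from the test suite:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.manifold import trustworthiness
    from sklearn.utils import check_array

    iris = load_iris()
    rng = np.random.default_rng(42)
    # Stand-in 2D embedding; the real tests use UMAP(...).fit_transform().
    embedding = iris.data[:, :2] + rng.normal(scale=0.01, size=(150, 2))

    # Old call, as in the tests above -- raises TypeError on scikit-learn 1.0:
    #   trust = trustworthiness(iris.data, embedding, 10)
    # Fixed call: pass n_neighbors (and metric) by keyword.
    trust = trustworthiness(iris.data, embedding, n_neighbors=10)

    # For the string-label failure: validating y with dtype=None preserves
    # string targets instead of forcing a numeric conversion. Newer upstream
    # umap-learn releases do something along these lines, but treat the exact
    # patch as an assumption rather than a confirmed diff.
    labels = np.array(["this", "that", "other", "this"])
    y_ = check_array(labels, ensure_2d=False, dtype=None)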


The full build log is available from:
http://qa-logs.debian.net/2021/12/20/umap-learn_0.4.5+dfsg-2_unstable.log

A list of current common problems and possible solutions is available at
http://wiki.debian.org/qa.debian.org/FTBFS . You're welcome to contribute!

If you reassign this bug to another package, please mark it as 'affects'-ing
this package. See https://www.debian.org/Bugs/server-control#affects

If you fail to reproduce this, please provide a build log and diff it with
mine so that we can identify whether something relevant changed in the meantime.


