Bug#963822: numpy breaks scikit-learn autopkgtest: test_set_estimator_none[drop] fails

Paul Gevers elbrus at debian.org
Sat Jun 27 21:12:16 BST 2020


Source: numpy, scikit-learn
Control: found -1 numpy/1:1.19.0-1
Control: found -1 scikit-learn/0.22.2.post1+dfsg-7
Severity: serious
Tags: sid bullseye
X-Debbugs-CC: debian-ci at lists.debian.org
User: debian-ci at lists.debian.org
Usertags: breaks needs-update

Dear maintainer(s),

With a recent upload of numpy, the autopkgtest of scikit-learn fails in
testing when that autopkgtest is run with the binary packages of numpy
from unstable. It passes when run with only packages from testing. In
tabular form:

                       pass            fail
numpy                  from testing    1:1.19.0-1
scikit-learn           from testing    0.22.2.post1+dfsg-7
all others             from testing    from testing

I copied some of the output at the bottom of this report.

Currently this regression is blocking the migration of numpy to testing
[1]. Due to the nature of this issue, I filed this bug report against
both packages. Can you please investigate the situation and reassign the
bug to the right package?

More information about this bug and the reason for filing it can be found on
https://wiki.debian.org/ContinuousIntegration/RegressionEmailInformation

Paul

[1] https://qa.debian.org/excuses.php?package=numpy

https://ci.debian.net/data/autopkgtest/testing/amd64/s/scikit-learn/6058472/log.gz

=================================== FAILURES ===================================
________________________ test_set_estimator_none[drop] _________________________

drop = 'drop'

    @pytest.mark.parametrize("drop", [None, 'drop'])
    def test_set_estimator_none(drop):
        """VotingClassifier set_params should be able to set estimators
as None or
        drop"""
        # Test predict
        clf1 = LogisticRegression(random_state=123)
        clf2 = RandomForestClassifier(n_estimators=10, random_state=123)
        clf3 = GaussianNB()
        eclf1 = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2),
                                             ('nb', clf3)],
                                 voting='hard', weights=[1, 0, 0.5]).fit(X, y)

        eclf2 = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2),
                                             ('nb', clf3)],
                                 voting='hard', weights=[1, 1, 0.5])
        with pytest.warns(None) as record:
            eclf2.set_params(rf=drop).fit(X, y)
>       assert record if drop is None else not record
E       assert False

/usr/lib/python3/dist-packages/sklearn/ensemble/tests/test_voting.py:378: AssertionError
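
For context (this illustrates the failure mode, not a confirmed root
cause): pytest.warns(None) records every warning raised inside the block,
so with drop='drop' the assertion only holds if fit() emits no warning at
all. A minimal sketch of the pattern, using a hypothetical stand-in for
the fit call:

    import warnings
    import pytest

    def fit_like():
        # Hypothetical stand-in for eclf2.set_params(rf='drop').fit(X, y);
        # assume a dependency emits an unrelated warning during fit.
        warnings.warn("assumed deprecation from a dependency",
                      DeprecationWarning)

    @pytest.mark.parametrize("drop", [None, 'drop'])
    def test_sketch(drop):
        with pytest.warns(None) as record:
            fit_like()
        # With drop='drop' the test expects an empty record, so any extra
        # warning makes `not record` evaluate to False.
        assert record if drop is None else not record
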
________________ test_logistic_regression_path_convergence_fail ________________

    def test_logistic_regression_path_convergence_fail():
        rng = np.random.RandomState(0)
        X = np.concatenate((rng.randn(100, 2) + [1, 1], rng.randn(100, 2)))
        y = [1] * 100 + [-1] * 100
        Cs = [1e3]

        # Check that the convergence message points to both a model agnostic
        # advice (scaling the data) and to the logistic regression specific
        # documentation that includes hints on the solver configuration.
        with pytest.warns(ConvergenceWarning) as record:
            _logistic_regression_path(
                X, y, Cs=Cs, tol=0., max_iter=1, random_state=0, verbose=0)

>       assert len(record) == 1
E       assert 6 == 1
E         -6
E         +1

/usr/lib/python3/dist-packages/sklearn/linear_model/tests/test_logistic.py:401: AssertionError
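
The same mechanism appears to be at play here: pytest.warns(ConvergenceWarning)
records every warning raised in the block, not only ConvergenceWarning, so
unrelated warnings inflate len(record). A sketch of the difference between
counting all records and counting only the expected category (the extra
warning below is hypothetical; the real ones are whatever numpy 1.19 emits
during the call):

    import warnings
    import pytest
    from sklearn.exceptions import ConvergenceWarning

    def noisy_call():
        # Hypothetical stand-in for _logistic_regression_path(): one
        # expected ConvergenceWarning plus an unrelated warning from a
        # dependency.
        warnings.warn("lbfgs failed to converge", ConvergenceWarning)
        warnings.warn("unrelated warning from a dependency",
                      DeprecationWarning)

    def test_sketch():
        with pytest.warns(ConvergenceWarning) as record:
            noisy_call()
        assert len(record) == 2           # all warnings are recorded
        convergence = [w for w in record
                       if issubclass(w.category, ConvergenceWarning)]
        assert len(convergence) == 1      # only the expected category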
