[Debian-med-packaging] Bug#1081690: FTBFS with Python 3.13
Stefano Rivera
stefanor at debian.org
Fri Sep 13 20:20:07 BST 2024
Source: python-hmmlearn
Version: 0.3.0-5
Severity: normal
User: debian-python at lists.debian.org
Usertags: python3.13
This package failed to build from source when test-built against a version of
python3-defaults that includes 3.13 as a supported version.
To reproduce this issue, build against python3-defaults (python3-all-dev etc.)
from Debian experimental.
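A sketch of one way to set that up (package names and exact commands are
assumptions; any test build with 3.13 in the supported set should reproduce it):

```shell
# Pull python3-defaults (python3-all-dev etc.) from experimental, then
# rebuild; dh/pybuild will run the test suite under python3.13 as well.
sudo apt install -t experimental python3-all-dev
apt source python-hmmlearn && cd python-hmmlearn-0.3.0
dpkg-buildpackage -us -uc
```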
What's new in Python 3.13:
https://docs.python.org/3.13/whatsnew/3.13.html
Log snippet:
I: pybuild plugin_pyproject:144: Unpacking wheel built for python3.12 with "installer" module
dh_auto_test -a -O--buildsystem=pybuild
I: pybuild base:311: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_hmmlearn/build; python3.13 -m pytest --pyargs hmmlearn
set RNG seed to 2056170209
============================= test session starts ==============================
platform linux -- Python 3.13.0rc2, pytest-8.3.2, pluggy-1.5.0
rootdir: /<<PKGBUILDDIR>>
configfile: setup.cfg
plugins: typeguard-4.3.0
collected 320 items
hmmlearn/tests/test_base.py .....FFF.FF.FF.. [ 5%]
hmmlearn/tests/test_categorical_hmm.py FFFFFFFF..FF..FFFFFF.. [ 11%]
hmmlearn/tests/test_gaussian_hmm.py ..FF..FFFFFF..FFFFFFFF..FF.F..FF..FF [ 23%]
FFFF..FFFFFFFF..FF..FF..FFFFFF..FFFFFFFF..FF..FFFFFF..FFFFFFFF [ 42%]
hmmlearn/tests/test_gmm_hmm.py xxxxxxxxxxxxxxxxxx [ 48%]
hmmlearn/tests/test_gmm_hmm_multisequence.py FFFFFFFF [ 50%]
hmmlearn/tests/test_gmm_hmm_new.py ........FFFFFFxxFF........FFFFFFxxFF. [ 62%]
.......FFFFFFxxFF........FFFFFFxxFFFFFFFF [ 75%]
hmmlearn/tests/test_kl_divergence.py ..... [ 76%]
hmmlearn/tests/test_multinomial_hmm.py ..FF..FFFFFF..FF [ 81%]
hmmlearn/tests/test_poisson_hmm.py ..FFFFFFFFFF [ 85%]
hmmlearn/tests/test_utils.py ... [ 86%]
hmmlearn/tests/test_variational_categorical.py FFFFFFFFFFFF [ 90%]
hmmlearn/tests/test_variational_gaussian.py FFFFFFFFFFFFFFFFFFFFFFFFFFFF [ 98%]
FFFF [100%]
=================================== FAILURES ===================================
____________ TestBaseAgainstWikipedia.test_do_forward_scaling_pass _____________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffffac26c910>
def test_do_forward_scaling_pass(self):
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.hmm.startprob_, self.hmm.transmat_, self.frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:79: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
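The define the message refers to is a compile-time opt-out, not a fix; a
hypothetical fragment showing its placement (it must precede every pybind11
include, identically in all translation units of the extension, or ODR
violations follow):

```cpp
// Sketch only: silences pybind11's GIL assertion rather than fixing the
// underlying GIL-state problem. Must appear before any pybind11 header in
// *every* translation unit linked into this extension module.
#define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF
#include <pybind11/pybind11.h>
```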
________________ TestBaseAgainstWikipedia.test_do_forward_pass _________________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffffac26ca50>
def test_do_forward_pass(self):
> log_prob, fwdlattice = _hmmc.forward_log(
self.hmm.startprob_, self.hmm.transmat_, self.log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:91: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestBaseAgainstWikipedia.test_do_backward_scaling_pass ____________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffffac299e00>
def test_do_backward_scaling_pass(self):
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.hmm.startprob_, self.hmm.transmat_, self.frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:104: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestBaseAgainstWikipedia.test_do_viterbi_pass _________________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffffac33d5b0>
def test_do_viterbi_pass(self):
> log_prob, state_sequence = _hmmc.viterbi(
self.hmm.startprob_, self.hmm.transmat_, self.log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:129: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestBaseAgainstWikipedia.test_score_samples __________________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffffac51b9b0>
def test_score_samples(self):
# ``StubHMM`` ignores the values in ``X``, so we just pass in an
# array of the appropriate shape.
> log_prob, posteriors = self.hmm.score_samples(self.log_frameprob)
hmmlearn/tests/test_base.py:139:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = StubHMM(n_components=2)
X = array([[-0.10536052, -1.60943791],
[-0.10536052, -1.60943791],
[-2.30258509, -0.22314355],
[-0.10536052, -1.60943791],
[-0.10536052, -1.60943791]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestBaseConsistentWithGMM.test_score_samples _________________
self = <hmmlearn.tests.test_base.TestBaseConsistentWithGMM object at 0xffffac26cb90>
def test_score_samples(self):
> log_prob, hmmposteriors = self.hmm.score_samples(self.log_frameprob)
hmmlearn/tests/test_base.py:177:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = StubHMM(n_components=8)
X = array([[-1.28478267, -1.22982449, -1.09821851, -0.66596849, -0.68644591,
-1.45579042, -0.99263006, -5.65478644... [-0.26985466, -1.844466 , -0.78019206, -0.25162891, -0.30942052,
-0.12770762, -0.79887005, -0.36080897]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestBaseConsistentWithGMM.test_decode _____________________
self = <hmmlearn.tests.test_base.TestBaseConsistentWithGMM object at 0xffffac26ccd0>
def test_decode(self):
> _log_prob, state_sequence = self.hmm.decode(self.log_frameprob)
hmmlearn/tests/test_base.py:188:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = StubHMM(n_components=8)
X = array([[-1.03665708e+00, -3.16860856e-01, -1.63383793e-01,
-2.53975573e+00, -2.15596416e-01, -4.18642332e+00,
...-1.33130001e-01,
-5.46214155e-01, -7.08914810e-02, -3.76537185e-01,
-3.40748403e-01, -3.70029913e+00]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestCategoricalAgainstWikipedia.test_decode_viterbi[scaling] _________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffffab1c9e50>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_viterbi(self, implementation):
# From http://en.wikipedia.org/wiki/Viterbi_algorithm:
# "This reveals that the observations ['walk', 'shop', 'clean']
# were most likely generated by states ['Sunny', 'Rainy', 'Rainy'],
# with probability 0.01344."
h = self.new_hmm(implementation)
X = [[0], [1], [2]]
> log_prob, state_sequence = h.decode(X, algorithm="viterbi")
hmmlearn/tests/test_categorical_hmm.py:37:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
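The Wikipedia example this test cites can be checked independently of the
``_hmmc`` extension. A small NumPy Viterbi sketch (probability space for
clarity; hmmlearn's ``_hmmc.viterbi`` works in log space):

```python
import numpy as np

def viterbi(startprob, transmat, frameprob):
    """Most likely state path and its probability (fine for short sequences;
    longer ones would underflow and need the log-space form hmmlearn uses)."""
    n_samples, n_components = frameprob.shape
    best = startprob * frameprob[0]
    back = np.zeros((n_samples, n_components), dtype=int)
    for t in range(1, n_samples):
        # cand[i, j] = P(best path ending in i at t-1) * P(i -> j) * P(obs | j)
        cand = best[:, None] * transmat * frameprob[t]
        back[t] = cand.argmax(axis=0)
        best = cand.max(axis=0)
    path = [int(best.argmax())]
    for t in range(n_samples - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return float(best.max()), path[::-1]
```

With states [Rainy, Sunny] and the transition/emission tables from the
Wikipedia article, observations [walk, shop, clean] decode to
[Sunny, Rainy, Rainy] with probability 0.01344, matching the test's comment.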
___________ TestCategoricalAgainstWikipedia.test_decode_viterbi[log] ___________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffffab1c9f90>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_viterbi(self, implementation):
# From http://en.wikipedia.org/wiki/Viterbi_algorithm:
# "This reveals that the observations ['walk', 'shop', 'clean']
# were most likely generated by states ['Sunny', 'Rainy', 'Rainy'],
# with probability 0.01344."
h = self.new_hmm(implementation)
X = [[0], [1], [2]]
> log_prob, state_sequence = h.decode(X, algorithm="viterbi")
hmmlearn/tests/test_categorical_hmm.py:37:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________ TestCategoricalAgainstWikipedia.test_decode_map[scaling] ___________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffffac29a650>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_map(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> _log_prob, state_sequence = h.decode(X, algorithm="map")
hmmlearn/tests/test_categorical_hmm.py:45:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
hmmlearn/base.py:289: in _decode_map
_, posteriors = self.score_samples(X)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[0],
[1],
[2]]), lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____________ TestCategoricalAgainstWikipedia.test_decode_map[log] _____________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffffaac80180>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_map(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> _log_prob, state_sequence = h.decode(X, algorithm="map")
hmmlearn/tests/test_categorical_hmm.py:45:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
hmmlearn/base.py:289: in _decode_map
_, posteriors = self.score_samples(X)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[0],
[1],
[2]]), lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestCategoricalAgainstWikipedia.test_predict[scaling] _____________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffffab222c30>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_predict(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> state_sequence = h.predict(X)
hmmlearn/tests/test_categorical_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:363: in predict
_, state_sequence = self.decode(X, lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestCategoricalAgainstWikipedia.test_predict[log] _______________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffffac51bbd0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_predict(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> state_sequence = h.predict(X)
hmmlearn/tests/test_categorical_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:363: in predict
_, state_sequence = self.decode(X, lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestCategoricalHMM.test_n_features[scaling] __________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffffab1cac10>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, _ = self.new_hmm(implementation).sample(500)
# set n_features
model = hmm.CategoricalHMM(
n_components=2, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, [500], 10)
hmmlearn/tests/test_categorical_hmm.py:80:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', init_params='', n_components=2,
n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[0],
[2],
[1],
[2],
[2],
[2],
[2],
[2],
[2],
[0]... [2],
[1],
[2],
[1],
[2],
[1],
[2],
[1],
[2],
[2]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________________ TestCategoricalHMM.test_n_features[log] ____________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffffab1cb390>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, _ = self.new_hmm(implementation).sample(500)
# set n_features
model = hmm.CategoricalHMM(
n_components=2, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, [500], 10)
hmmlearn/tests/test_categorical_hmm.py:80:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[1],
[0],
[1],
[2],
[1],
[1],
[0],
[1],
[1],
[2]... [2],
[2],
[1],
[0],
[0],
[0],
[2],
[2],
[0],
[2]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestCategoricalHMM.test_score_samples[scaling] ________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffffab223770>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
idx = np.repeat(np.arange(self.n_components), 10)
n_samples = len(idx)
X = np.random.randint(self.n_features, size=(n_samples, 1))
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_categorical_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[0],
[0],
[0],
[1],
[1],
[0],
[2],
[1],
[0],
[2],
[1],
[2],
[0],
[0],
[2],
[1],
[2],
[0],
[0],
[0]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
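For reference, the compiled `_hmmc.forward_scaling` routine failing above implements the scaled forward pass of the forward-backward algorithm. A pure-NumPy sketch of what it computes (a hypothetical reimplementation for illustration, not hmmlearn's actual C++ code; the scaling convention is assumed):

```python
import numpy as np

def forward_scaling(startprob, transmat, frameprob):
    """Scaled forward pass: returns the sequence log-likelihood, the
    row-normalized forward lattice, and the per-step scaling factors."""
    n_samples, n_components = frameprob.shape
    fwd = np.empty((n_samples, n_components))
    scaling = np.empty(n_samples)
    # Initialization: P(state, first observation), then normalize.
    fwd[0] = startprob * frameprob[0]
    scaling[0] = 1.0 / fwd[0].sum()
    fwd[0] *= scaling[0]
    # Recursion: propagate through the transition matrix, rescale each step
    # so the lattice never underflows on long sequences.
    for t in range(1, n_samples):
        fwd[t] = (fwd[t - 1] @ transmat) * frameprob[t]
        scaling[t] = 1.0 / fwd[t].sum()
        fwd[t] *= scaling[t]
    # The scaling factors telescope into the total likelihood.
    log_prob = -np.log(scaling).sum()
    return log_prob, fwd, scaling
```

The rescaling at every step is why this variant stays in ordinary probability space yet remains numerically stable, unlike the naive unscaled recursion.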
__________________ TestCategoricalHMM.test_score_samples[log] __________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffffaac8c6b0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
idx = np.repeat(np.arange(self.n_components), 10)
n_samples = len(idx)
X = np.random.randint(self.n_features, size=(n_samples, 1))
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_categorical_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[1],
[2],
[1],
[2],
[2],
[0],
[1],
[0],
[2],
[0],
[0],
[2],
[2],
[0],
[1],
[1],
[2],
[1],
[2],
[1]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
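The `'log'` implementation fails at the analogous `_hmmc.forward_log` call, which runs the same recursion entirely in log space. A pure-NumPy sketch (again a hypothetical reimplementation, matching the call signature visible in the traceback: probabilities for `startprob_`/`transmat_`, log-probabilities for the frame likelihoods):

```python
import numpy as np

def forward_log(startprob, transmat, log_frameprob):
    """Log-space forward pass: returns the sequence log-likelihood and the
    log-domain forward lattice."""
    n_samples, n_components = log_frameprob.shape
    log_start = np.log(startprob)
    log_trans = np.log(transmat)
    fwd = np.empty((n_samples, n_components))
    fwd[0] = log_start + log_frameprob[0]
    for t in range(1, n_samples):
        # work[i, j] = fwd[t-1, i] + log T[i, j]; reduce over i with a
        # max-shifted logsumexp for numerical stability.
        work = fwd[t - 1][:, None] + log_trans
        m = work.max(axis=0)
        fwd[t] = m + np.log(np.exp(work - m).sum(axis=0)) + log_frameprob[t]
    m = fwd[-1].max()
    log_prob = m + np.log(np.exp(fwd[-1] - m).sum())
    return log_prob, fwd
```

Both variants should return identical log-likelihoods; the trade-off is log-space arithmetic versus per-step rescaling.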
_____________________ TestCategoricalHMM.test_fit[scaling] _____________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffffab1e6d50>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:140:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', init_params='', n_components=2,
n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[1],
[1],
[2],
[1],
[2],
[0],
[0],
[0],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
_______________________ TestCategoricalHMM.test_fit[log] _______________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffffaabdd040>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:140:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[0],
[2],
[2],
[2],
[1],
[2],
[2],
[1],
[1],
[1]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
______________ TestCategoricalHMM.test_fit_emissionprob[scaling] _______________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffffaabdd130>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_categorical_hmm.py:144:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_categorical_hmm.py:140: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', init_params='', n_components=2,
n_features=3, n_iter=1, params='e',
random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[1],
[1],
[1],
[0],
[0],
[2],
[2],
[2],
[2],
[2]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
________________ TestCategoricalHMM.test_fit_emissionprob[log] _________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffffaac2b150>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_categorical_hmm.py:144:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_categorical_hmm.py:140: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
params='e', random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[0],
[2],
[0],
[2],
[1],
[2],
[1],
[0],
[0],
[1]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
________________ TestCategoricalHMM.test_fit_with_init[scaling] ________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffffaac2b690>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.CategoricalHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[2],
[1],
[1],
[1],
[2],
[1],
[1],
[1],
[2],
[1]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
__________________ TestCategoricalHMM.test_fit_with_init[log] __________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffffaac88940>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.CategoricalHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[2],
[2],
[0],
[2],
[1],
[1],
[1],
[2],
[1],
[0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
__ TestGaussianHMMWithSphericalCovars.test_score_samples_and_decode[scaling] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffab1e7450>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='st', n_components=3)
X = array([[-179.56000798, 79.57176561, 259.68798732],
[-180.56888339, 78.41505899, 261.05535316],
[-1...6363279 ],
[-140.61081384, -301.3193914 , -140.56172842],
[-139.79461543, -300.95336068, -139.67848205]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
____ TestGaussianHMMWithSphericalCovars.test_score_samples_and_decode[log] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaabdd310>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', init_params='st', n_components=3)
X = array([[-179.56000798, 79.57176561, 259.68798732],
[-180.56888339, 78.41505899, 261.05535316],
[-1...6363279 ],
[-140.61081384, -301.3193914 , -140.56172842],
[-139.79461543, -300.95336068, -139.67848205]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
_____________ TestGaussianHMMWithSphericalCovars.test_fit[scaling] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaacb49f0>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...95007275],
[-139.97005487, -299.93792764, -140.04085163],
[-239.95188158, 320.03972951, -119.97272471]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 0, 0, 2, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1,...2,
2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 2, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGaussianHMMWithSphericalCovars.test_fit[log] _______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaac88530>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='spherical', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...95007275],
[-139.97005487, -299.93792764, -140.04085163],
[-239.95188158, 320.03972951, -119.97272471]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 0, 0, 2, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1,...2,
2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 2, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
__________ TestGaussianHMMWithSphericalCovars.test_criterion[scaling] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaa850950>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFFAA5C2D40)
X = array([[ -90.15718286, 40.04508216, 130.03944716],
[-119.82025674, 159.91649324, -59.90349328],
[-1...84045482],
[-120.12894077, 159.84070667, -60.20323671],
[ -89.97836609, 39.94933366, 129.82682576]])
    def _fit_scaling(self, X):
        frameprob = self._compute_likelihood(X)
>       log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
            self.startprob_, self.transmat_, frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestGaussianHMMWithSphericalCovars.test_criterion[log] ____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaa850a10>
implementation = 'log'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_criterion(self, implementation):
        random_state = check_random_state(42)
        m1 = hmm.GaussianHMM(self.n_components, init_params="",
                             covariance_type=self.covariance_type)
        m1.startprob_ = self.startprob
        m1.transmat_ = self.transmat
        m1.means_ = self.means * 10
        m1.covars_ = self.covars
        X, _ = m1.sample(2000, random_state=random_state)
        aic = []
        bic = []
        ns = [2, 3, 4]
        for n in ns:
            h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
                                random_state=random_state, implementation=implementation)
>           h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFFAA5C3340)
X = array([[ -90.15718286, 40.04508216, 130.03944716],
[-119.82025674, 159.91649324, -59.90349328],
[-1...84045482],
[-120.12894077, 159.84070667, -60.20323671],
[ -89.97836609, 39.94933366, 129.82682576]])
    def _fit_log(self, X):
        log_frameprob = self._compute_log_likelihood(X)
>       log_prob, fwdlattice = _hmmc.forward_log(
            self.startprob_, self.transmat_, log_frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___ TestGaussianHMMWithSphericalCovars.test_fit_ignored_init_warns[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaa854890>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffffaac3bb60>
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_ignored_init_warns(self, implementation, caplog):
        # This test occasionally will be flaky in learning the model.
        # What is important here, is that the expected log message is produced
        # We can test convergence properties elsewhere.
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        h.startprob_ = self.startprob
>       h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[ 4.39992016e-01, -4.28234395e-01, -3.12012681e-01],
[-5.68883385e-01, -1.58494101e+00, 1.05535316e+00]... [-2.20862064e-01, 4.83062914e-01, -1.95718567e+00],
[ 1.00961906e+00, 7.02226595e-01, -9.47509422e-01]])
    def _fit_scaling(self, X):
        frameprob = self._compute_likelihood(X)
>       log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
            self.startprob_, self.transmat_, frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_____ TestGaussianHMMWithSphericalCovars.test_fit_ignored_init_warns[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaa854940>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffffaa5a8410>
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_ignored_init_warns(self, implementation, caplog):
        # This test occasionally will be flaky in learning the model.
        # What is important here, is that the expected log message is produced
        # We can test convergence properties elsewhere.
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        h.startprob_ = self.startprob
>       h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[ 4.39992016e-01, -4.28234395e-01, -3.12012681e-01],
[-5.68883385e-01, -1.58494101e+00, 1.05535316e+00]... [-2.20862064e-01, 4.83062914e-01, -1.95718567e+00],
[ 1.00961906e+00, 7.02226595e-01, -9.47509422e-01]])
    def _fit_log(self, X):
        log_frameprob = self._compute_log_likelihood(X)
>       log_prob, fwdlattice = _hmmc.forward_log(
            self.startprob_, self.transmat_, log_frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_ TestGaussianHMMWithSphericalCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaac023c0>
implementation = 'scaling'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_sequences_of_different_length(self, implementation):
        lengths = [3, 4, 5]
        X = self.prng.rand(sum(lengths), self.n_features)
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        # This shouldn't raise
        # ValueError: setting an array element with a sequence.
>       h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.01178803, 0.61194334]])
    def _fit_scaling(self, X):
        frameprob = self._compute_likelihood(X)
>       log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
            self.startprob_, self.transmat_, frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ TestGaussianHMMWithSphericalCovars.test_fit_sequences_of_different_length[log] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaa7ef3d0>
implementation = 'log'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_sequences_of_different_length(self, implementation):
        lengths = [3, 4, 5]
        X = self.prng.rand(sum(lengths), self.n_features)
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        # This shouldn't raise
        # ValueError: setting an array element with a sequence.
>       h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.01178803, 0.61194334]])
    def _fit_log(self, X):
        log_frameprob = self._compute_log_likelihood(X)
>       log_prob, fwdlattice = _hmmc.forward_log(
            self.startprob_, self.transmat_, log_frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ TestGaussianHMMWithSphericalCovars.test_fit_with_length_one_signal[scaling] __
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaa7ef450>
implementation = 'scaling'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_with_length_one_signal(self, implementation):
        lengths = [10, 8, 1]
        X = self.prng.rand(sum(lengths), self.n_features)
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        # This shouldn't raise
        # ValueError: zero-size array to reduction operation maximum which
        # has no identity
>       h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.011788...06, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449],
[0.2960687 , 0.13129105, 0.84281793]])
    def _fit_scaling(self, X):
        frameprob = self._compute_likelihood(X)
>       log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
            self.startprob_, self.transmat_, frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___ TestGaussianHMMWithSphericalCovars.test_fit_with_length_one_signal[log] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaacb96a0>
implementation = 'log'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_with_length_one_signal(self, implementation):
        lengths = [10, 8, 1]
        X = self.prng.rand(sum(lengths), self.n_features)
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        # This shouldn't raise
        # ValueError: zero-size array to reduction operation maximum which
        # has no identity
>       h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.011788...06, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449],
[0.2960687 , 0.13129105, 0.84281793]])
    def _fit_log(self, X):
        log_frameprob = self._compute_log_likelihood(X)
>       log_prob, fwdlattice = _hmmc.forward_log(
            self.startprob_, self.transmat_, log_frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithSphericalCovars.test_fit_zero_variance[scaling] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaacb99b0>
implementation = 'scaling'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_zero_variance(self, implementation):
        # Example from issue #2 on GitHub.
        X = np.asarray([
            [7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
            [7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
            [7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
            [7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
            [7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
            [7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
            [7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
            [7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
            [7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
        ])
        h = hmm.GaussianHMM(3, self.covariance_type,
                            implementation=implementation)
>       h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
    def _fit_scaling(self, X):
        frameprob = self._compute_likelihood(X)
>       log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
            self.startprob_, self.transmat_, frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestGaussianHMMWithSphericalCovars.test_fit_zero_variance[log] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaac0aa50>
implementation = 'log'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_zero_variance(self, implementation):
        # Example from issue #2 on GitHub.
        X = np.asarray([
            [7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
            [7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
            [7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
            [7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
            [7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
            [7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
            [7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
            [7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
            [7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
        ])
        h = hmm.GaussianHMM(3, self.covariance_type,
                            implementation=implementation)
>       h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
    def _fit_log(self, X):
        log_frameprob = self._compute_log_likelihood(X)
>       log_prob, fwdlattice = _hmmc.forward_log(
            self.startprob_, self.transmat_, log_frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGaussianHMMWithSphericalCovars.test_fit_with_priors[scaling] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaac74230>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_with_priors(self, implementation, init_params='mc',
                             params='stmc', n_iter=20):
        # We have a few options to make this a robust test, such as
        # a. increase the amount of training data to ensure convergence
        # b. Only learn some of the parameters (simplify the problem)
        # c. Increase the number of iterations
        #
        # (c) seems to not affect the ci/cd time too much.
        startprob_prior = 10 * self.startprob + 2.0
        transmat_prior = 10 * self.transmat + 2.0
        means_prior = self.means
        means_weight = 2.0
        covars_weight = 2.0
        if self.covariance_type in ('full', 'tied'):
            covars_weight += self.n_features
        covars_prior = self.covars
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        h.startprob_ = self.startprob
        h.startprob_prior = startprob_prior
        h.transmat_ = normalized(
            self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
        h.transmat_prior = transmat_prior
        h.means_ = 20 * self.means
        h.means_prior = means_prior
        h.means_weight = means_weight
        h.covars_ = self.covars
        h.covars_prior = covars_prior
        h.covars_weight = covars_weight
        lengths = [200] * 10
        X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
        # Re-initialize the parameters and check that we can converge to
        # the original parameter values.
        h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
                                  init_params=init_params, params=params,
                                  implementation=implementation,)
        # don't use random parameters for testing
        init = 1. / h_learn.n_components
        h_learn.startprob_ = np.full(h_learn.n_components, init)
        h_learn.transmat_ = \
            np.full((h_learn.n_components, h_learn.n_components), init)
        h_learn.n_iter = 0
        h_learn.fit(X, lengths=lengths)
>       assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n_components=3, n_iter=1)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...73790553],
[-180.18615346, 79.87077255, 259.73353861],
[-240.06028298, 320.09425446, -119.74998577]])
    def _fit_scaling(self, X):
        frameprob = self._compute_likelihood(X)
>       log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
            self.startprob_, self.transmat_, frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGaussianHMMWithSphericalCovars.test_fit_with_priors[log] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffac2c0e10>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_with_priors(self, implementation, init_params='mc',
                             params='stmc', n_iter=20):
        # We have a few options to make this a robust test, such as
        # a. increase the amount of training data to ensure convergence
        # b. Only learn some of the parameters (simplify the problem)
        # c. Increase the number of iterations
        #
        # (c) seems to not affect the ci/cd time too much.
        startprob_prior = 10 * self.startprob + 2.0
        transmat_prior = 10 * self.transmat + 2.0
        means_prior = self.means
        means_weight = 2.0
        covars_weight = 2.0
        if self.covariance_type in ('full', 'tied'):
            covars_weight += self.n_features
        covars_prior = self.covars
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        h.startprob_ = self.startprob
        h.startprob_prior = startprob_prior
        h.transmat_ = normalized(
            self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
        h.transmat_prior = transmat_prior
        h.means_ = 20 * self.means
        h.means_prior = means_prior
        h.means_weight = means_weight
        h.covars_ = self.covars
        h.covars_prior = covars_prior
        h.covars_weight = covars_weight
        lengths = [200] * 10
        X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
        # Re-initialize the parameters and check that we can converge to
        # the original parameter values.
        h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
                                  init_params=init_params, params=params,
                                  implementation=implementation,)
        # don't use random parameters for testing
        init = 1. / h_learn.n_components
        h_learn.startprob_ = np.full(h_learn.n_components, init)
        h_learn.transmat_ = \
            np.full((h_learn.n_components, h_learn.n_components), init)
        h_learn.n_iter = 0
        h_learn.fit(X, lengths=lengths)
>       assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', init_params='', n_components=3,
n_iter=1)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...73790553],
[-180.18615346, 79.87077255, 259.73353861],
[-240.06028298, 320.09425446, -119.74998577]])
    def _fit_log(self, X):
        log_frameprob = self._compute_log_likelihood(X)
>       log_prob, fwdlattice = _hmmc.forward_log(
            self.startprob_, self.transmat_, log_frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ TestGaussianHMMWithSphericalCovars.test_fit_startprob_and_transmat[scaling] __
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaac80b00>
implementation = 'scaling'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_startprob_and_transmat(self, implementation):
>       self.test_fit(implementation, 'st')
hmmlearn/tests/test_gaussian_hmm.py:274:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_gaussian_hmm.py:89: in test_fit
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
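The `scaling` variants fail in the sibling kernel `_hmmc.forward_scaling(startprob_, transmat_, frameprob)` instead. The same kind of NumPy sketch works there too (again my reconstruction from the `(startprob, transmat, frameprob) -> (log_prob, fwdlattice, scaling_factors)` signature in the traceback, not hmmlearn's actual code):

```python
import numpy as np

def forward_scaling(startprob, transmat, frameprob):
    """NumPy sketch of the scaled forward pass; a hypothetical stand-in
    for the compiled _hmmc.forward_scaling seen in the traceback."""
    n_samples, n_components = frameprob.shape
    fwdlattice = np.empty((n_samples, n_components))
    scaling_factors = np.empty(n_samples)
    # alpha_0 = pi * b(x_0), rescaled to sum to 1
    fwdlattice[0] = startprob * frameprob[0]
    scaling_factors[0] = 1.0 / fwdlattice[0].sum()
    fwdlattice[0] *= scaling_factors[0]
    for t in range(1, n_samples):
        # alpha_t = (alpha_{t-1} @ A) * b(x_t), rescaled each step
        fwdlattice[t] = (fwdlattice[t - 1] @ transmat) * frameprob[t]
        scaling_factors[t] = 1.0 / fwdlattice[t].sum()
        fwdlattice[t] *= scaling_factors[t]
    # log P(X) = -sum_t log c_t
    log_prob = -np.log(scaling_factors).sum()
    return log_prob, fwdlattice, scaling_factors
```

Note the rescaling divides by the per-step probability mass, so it blows up when that mass underflows to zero; that is consistent with `test_underflow_from_scaling` expecting a `ValueError` from the scaling implementation on its ill-conditioned data.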
___ TestGaussianHMMWithSphericalCovars.test_fit_startprob_and_transmat[log] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaac809d0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_startprob_and_transmat(self, implementation):
> self.test_fit(implementation, 'st')
hmmlearn/tests/test_gaussian_hmm.py:274:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_gaussian_hmm.py:89: in test_fit
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_____ TestGaussianHMMWithSphericalCovars.test_underflow_from_scaling[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffffaac8cf30>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_underflow_from_scaling(self, implementation):
# Setup an ill-conditioned dataset
data1 = self.prng.normal(0, 1, 100).tolist()
data2 = self.prng.normal(5, 1, 100).tolist()
data3 = self.prng.normal(0, 1, 100).tolist()
data4 = self.prng.normal(5, 1, 100).tolist()
data = np.concatenate([data1, data2, data3, data4])
# Insert an outlier
data[40] = 10000
data2d = data[:, None]
lengths = [len(data2d)]
h = hmm.GaussianHMM(2, n_iter=100, verbose=True,
covariance_type=self.covariance_type,
implementation=implementation, init_params="")
h.startprob_ = [0.0, 1]
h.transmat_ = [[0.4, 0.6], [0.6, 0.4]]
h.means_ = [[0], [5]]
h.covars_ = [[1], [1]]
if implementation == "scaling":
with pytest.raises(ValueError):
h.fit(data2d, lengths)
else:
> h.fit(data2d, lengths)
hmmlearn/tests/test_gaussian_hmm.py:300:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', init_params='', n_components=2,
n_iter=100, verbose=True)
X = array([[ 4.39992016e-01],
[-4.28234395e-01],
[-3.12012681e-01],
[-5.68883385e-01],
[-1.584...83917623e+00],
[ 5.48982119e+00],
[ 7.23344018e+00],
[ 4.20497381e+00],
[ 4.96426274e+00]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
___ TestGaussianHMMWithDiagonalCovars.test_score_samples_and_decode[scaling] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaac8caf0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', init_params='st', n_components=3)
X = array([[-181.58494101, 81.05535316, 258.07342089],
[-179.30141612, 79.25379857, 259.84337334],
[-1...79461543],
[-140.95336068, -299.67848205, -141.52093867],
[-142.16145292, -299.65468671, -139.12103062]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
_____ TestGaussianHMMWithDiagonalCovars.test_score_samples_and_decode[log] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffab1e7550>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(init_params='st', n_components=3)
X = array([[-181.58494101, 81.05535316, 258.07342089],
[-179.30141612, 79.25379857, 259.84337334],
[-1...79461543],
[-140.95336068, -299.67848205, -141.52093867],
[-142.16145292, -299.65468671, -139.12103062]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
_____________ TestGaussianHMMWithDiagonalCovars.test_fit[scaling] ______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaabdda90>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(implementation='scaling', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...02728733],
[-239.95365322, 320.03452379, -120.02851028],
[-179.97832262, 80.04811842, 260.03537787]])
_state_sequence = array([0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 1, 0, 0, 2, 1, 1,
1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2,...2,
2, 1, 1, 1, 0, 0, 0, 1, 1, 2, 2, 1, 1, 1, 2, 1, 0, 0, 0, 1, 1, 0,
1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...98639428],
[-180.18651806, 79.88681452, 259.90325311],
[-179.93614812, 79.90008375, 259.76960024]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGaussianHMMWithDiagonalCovars.test_fit[log] ________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaacb6c10>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(n_components=3), lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...02728733],
[-239.95365322, 320.03452379, -120.02851028],
[-179.97832262, 80.04811842, 260.03537787]])
_state_sequence = array([0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 1, 0, 0, 2, 1, 1,
1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2,...2,
2, 1, 1, 1, 0, 0, 0, 1, 1, 2, 2, 1, 1, 1, 2, 1, 0, 0, 0, 1, 1, 0,
1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...98639428],
[-180.18651806, 79.88681452, 259.90325311],
[-179.93614812, 79.90008375, 259.76960024]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
__________ TestGaussianHMMWithDiagonalCovars.test_criterion[scaling] ___________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaacb7150>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFFAA5C0640)
X = array([[ -89.96055284, 39.80222668, 130.05051096],
[-120.05552152, 160.1845284 , -59.94002531],
[-1...75723798],
[-120.10591007, 159.86762656, -60.12630269],
[ -90.17317424, 40.02722058, 129.94323241]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
____________ TestGaussianHMMWithDiagonalCovars.test_criterion[log] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaac88600>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFFAA5C1C40)
X = array([[ -89.96055284, 39.80222668, 130.05051096],
[-120.05552152, 160.1845284 , -59.94002531],
[-1...75723798],
[-120.10591007, 159.86762656, -60.12630269],
[ -90.17317424, 40.02722058, 129.94323241]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
____ TestGaussianHMMWithDiagonalCovars.test_fit_ignored_init_warns[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaa851c10>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffffac299350>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[-1.58494101e+00, 1.05535316e+00, -1.92657911e+00],
[ 6.98583878e-01, -7.46201430e-01, -1.56626664e-01]... [ 7.02226595e-01, -9.47509422e-01, -1.16620867e+00],
[ 4.79956068e-01, 3.68105791e-01, 2.45414301e-01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
______ TestGaussianHMMWithDiagonalCovars.test_fit_ignored_init_warns[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaa851d90>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffffaa4ac050>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[-1.58494101e+00, 1.05535316e+00, -1.92657911e+00],
[ 6.98583878e-01, -7.46201430e-01, -1.56626664e-01]... [ 7.02226595e-01, -9.47509422e-01, -1.16620867e+00],
[ 4.79956068e-01, 3.68105791e-01, 2.45414301e-01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_ TestGaussianHMMWithDiagonalCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaacbd270>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.0768555 , 0.85304299]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ TestGaussianHMMWithDiagonalCovars.test_fit_sequences_of_different_length[log] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaacbd310>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.0768555 , 0.85304299]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
__ TestGaussianHMMWithDiagonalCovars.test_fit_with_length_one_signal[scaling] __
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaac01640>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...7 , 0.13129105, 0.84281793],
[0.6590363 , 0.5954396 , 0.4363537 ],
[0.35625033, 0.58713093, 0.14947134]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
____ TestGaussianHMMWithDiagonalCovars.test_fit_with_length_one_signal[log] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaaca9ed0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...7 , 0.13129105, 0.84281793],
[0.6590363 , 0.5954396 , 0.4363537 ],
[0.35625033, 0.58713093, 0.14947134]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
______ TestGaussianHMMWithDiagonalCovars.test_fit_zero_variance[scaling] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaacaa450>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
________ TestGaussianHMMWithDiagonalCovars.test_fit_zero_variance[log] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaac9c910>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
_______ TestGaussianHMMWithDiagonalCovars.test_fit_with_priors[scaling] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaac9cc20>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', init_params='', n_components=3, n_iter=1)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...8371198 ],
[-180.26646139, 79.7657748 , 259.85521097],
[-239.93733261, 319.93811216, -119.84462714]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
_________ TestGaussianHMMWithDiagonalCovars.test_fit_with_priors[log] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaac758b0>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(init_params='', n_components=3, n_iter=1)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...8371198 ],
[-180.26646139, 79.7657748 , 259.85521097],
[-239.93733261, 319.93811216, -119.84462714]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
________ TestGaussianHMMWithDiagonalCovars.test_fit_left_right[scaling] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaac808a0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_left_right(self, implementation):
transmat = np.zeros((self.n_components, self.n_components))
# Left-to-right: each state is connected to itself and its
# direct successor.
for i in range(self.n_components):
if i == self.n_components - 1:
transmat[i, i] = 1.0
else:
transmat[i, i] = transmat[i, i + 1] = 0.5
# Always start in first state
startprob = np.zeros(self.n_components)
startprob[0] = 1.0
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, covariance_type="diag",
params="mct", init_params="cm",
implementation=implementation)
h.startprob_ = startprob.copy()
h.transmat_ = transmat.copy()
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:343:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', init_params='cm', n_components=3,
params='mct')
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
__________ TestGaussianHMMWithDiagonalCovars.test_fit_left_right[log] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffffaac80770>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_left_right(self, implementation):
transmat = np.zeros((self.n_components, self.n_components))
# Left-to-right: each state is connected to itself and its
# direct successor.
for i in range(self.n_components):
if i == self.n_components - 1:
transmat[i, i] = 1.0
else:
transmat[i, i] = transmat[i, i + 1] = 0.5
# Always start in first state
startprob = np.zeros(self.n_components)
startprob[0] = 1.0
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, covariance_type="diag",
params="mct", init_params="cm",
implementation=implementation)
h.startprob_ = startprob.copy()
h.transmat_ = transmat.copy()
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:343:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(init_params='cm', n_components=3, params='mct')
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
_____ TestGaussianHMMWithTiedCovars.test_score_samples_and_decode[scaling] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffaac80c30>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', init_params='st',
n_components=3)
X = array([[-178.59797475, 79.5657409 , 258.74575809],
[-179.66842145, 79.69139951, 259.84626451],
[-1...49160395],
[-141.24628501, -300.37993208, -140.27125813],
[-141.45463451, -299.54832455, -138.87519327]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
_______ TestGaussianHMMWithTiedCovars.test_score_samples_and_decode[log] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffaac80d60>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', init_params='st', n_components=3)
X = array([[-178.59797475, 79.5657409 , 258.74575809],
[-179.66842145, 79.69139951, 259.84626451],
[-1...49160395],
[-141.24628501, -300.37993208, -140.27125813],
[-141.45463451, -299.54832455, -138.87519327]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
_______________ TestGaussianHMMWithTiedCovars.test_fit[scaling] ________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffaac8d150>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-179.38447071, 79.77254741, 259.78569727],
[-177.99668156, 78.24961446, 257.45505499],
[-2...33340467],
[-140.98322629, -299.40429186, -138.97161251],
[-239.71737879, 319.619556 , -120.04865487]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 2,
2, 2, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2,...1,
1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1,
1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 2, 1])
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        h.startprob_ = self.startprob
        h.transmat_ = normalized(
            self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
        h.means_ = 20 * self.means
        h.covars_ = self.covars
        lengths = [10] * 10
        X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
        # Mess up the parameters and see if we can re-learn them.
        # TODO: change the params and uncomment the check
>       h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[-179.38447071, 79.77254741, 259.78569727],
[-177.99668156, 78.24961446, 257.45505499],
[-2...52079627],
[-179.04395389, 78.65953561, 258.85286997],
[-239.93817805, 320.24284438, -120.63614837]])
    def _fit_scaling(self, X):
        frameprob = self._compute_likelihood(X)
>       log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
            self.startprob_, self.transmat_, frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_________________ TestGaussianHMMWithTiedCovars.test_fit[log] __________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffab1e7750>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='tied', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-179.40658552, 79.64541687, 259.93987859],
[-176.58218733, 79.94365883, 258.74512359],
[-2...37574799],
[-141.47473681, -299.86930015, -139.64222459],
[-239.71408053, 319.74801717, -120.2786557 ]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 2,
2, 2, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2,...1,
1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1,
1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 2, 1])
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        h.startprob_ = self.startprob
        h.transmat_ = normalized(
            self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
        h.means_ = 20 * self.means
        h.covars_ = self.covars
        lengths = [10] * 10
        X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
        # Mess up the parameters and see if we can re-learn them.
        # TODO: change the params and uncomment the check
>       h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[-179.40658552, 79.64541687, 259.93987859],
[-176.58218733, 79.94365883, 258.74512359],
[-2...1204851 ],
[-178.3275549 , 79.82649701, 258.94004822],
[-239.55199958, 320.44789439, -119.80358934]])
    def _fit_log(self, X):
        log_frameprob = self._compute_log_likelihood(X)
>       log_prob, fwdlattice = _hmmc.forward_log(
            self.startprob_, self.transmat_, log_frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
____________ TestGaussianHMMWithTiedCovars.test_criterion[scaling] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffab1e7850>
implementation = 'scaling'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_criterion(self, implementation):
        random_state = check_random_state(42)
        m1 = hmm.GaussianHMM(self.n_components, init_params="",
                             covariance_type=self.covariance_type)
        m1.startprob_ = self.startprob
        m1.transmat_ = self.transmat
        m1.means_ = self.means * 10
        m1.covars_ = self.covars
        X, _ = m1.sample(2000, random_state=random_state)
        aic = []
        bic = []
        ns = [2, 3, 4]
        for n in ns:
            h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
                                random_state=random_state, implementation=implementation)
>           h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=2,
n_iter=500, random_state=RandomState(MT19937) at 0xFFFFAA4B7A40)
X = array([[ -89.38123824, 38.01278412, 129.2884801 ],
[-120.05258767, 161.87507355, -59.15330905],
[-1...66327543],
[-119.68552892, 159.38341875, -61.70458396],
[ -90.52972019, 40.50024645, 129.4228257 ]])
    def _fit_scaling(self, X):
        frameprob = self._compute_likelihood(X)
>       log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
            self.startprob_, self.transmat_, frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestGaussianHMMWithTiedCovars.test_criterion[log] _______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffaabddb80>
implementation = 'log'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_criterion(self, implementation):
        random_state = check_random_state(42)
        m1 = hmm.GaussianHMM(self.n_components, init_params="",
                             covariance_type=self.covariance_type)
        m1.startprob_ = self.startprob
        m1.transmat_ = self.transmat
        m1.means_ = self.means * 10
        m1.covars_ = self.covars
        X, _ = m1.sample(2000, random_state=random_state)
        aic = []
        bic = []
        ns = [2, 3, 4]
        for n in ns:
            h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
                                random_state=random_state, implementation=implementation)
>           h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFFAA538040)
X = array([[ -89.62586235, 38.34186093, 128.74476435],
[-120.01617453, 161.36381144, -58.59000098],
[-1...84484112],
[-119.29515682, 159.74501675, -61.49192793],
[ -90.11908553, 40.68568346, 129.67783622]])
    def _fit_log(self, X):
        log_frameprob = self._compute_log_likelihood(X)
>       log_prob, fwdlattice = _hmmc.forward_log(
            self.startprob_, self.transmat_, log_frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithTiedCovars.test_fit_ignored_init_warns[scaling] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffaabddc70>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffffaa538650>
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_ignored_init_warns(self, implementation, caplog):
        # This test occasionally will be flaky in learning the model.
        # What is important here, is that the expected log message is produced
        # We can test convergence properties elsewhere.
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        h.startprob_ = self.startprob
>       h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[ 1.40202525e+00, -4.34259100e-01, -1.25424191e+00],
[ 3.31578554e-01, -3.08600486e-01, -1.53735485e-01]... [ 6.04857190e-01, -2.51936017e-01, 8.99130290e-01],
[ 1.60788687e+00, -1.30106516e+00, 7.60125909e-01]])
    def _fit_scaling(self, X):
        frameprob = self._compute_likelihood(X)
>       log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
            self.startprob_, self.transmat_, frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
________ TestGaussianHMMWithTiedCovars.test_fit_ignored_init_warns[log] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffaacb5e10>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffffaa538b50>
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_ignored_init_warns(self, implementation, caplog):
        # This test occasionally will be flaky in learning the model.
        # What is important here, is that the expected log message is produced
        # We can test convergence properties elsewhere.
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        h.startprob_ = self.startprob
>       h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[ 1.40202525e+00, -4.34259100e-01, -1.25424191e+00],
[ 3.31578554e-01, -3.08600486e-01, -1.53735485e-01]... [ 6.04857190e-01, -2.51936017e-01, 8.99130290e-01],
[ 1.60788687e+00, -1.30106516e+00, 7.60125909e-01]])
    def _fit_log(self, X):
        log_frameprob = self._compute_log_likelihood(X)
>       log_prob, fwdlattice = _hmmc.forward_log(
            self.startprob_, self.transmat_, log_frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_ TestGaussianHMMWithTiedCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffaa850f50>
implementation = 'scaling'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_sequences_of_different_length(self, implementation):
        lengths = [3, 4, 5]
        X = self.prng.rand(sum(lengths), self.n_features)
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        # This shouldn't raise
        # ValueError: setting an array element with a sequence.
>       h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419]])
    def _fit_scaling(self, X):
        frameprob = self._compute_likelihood(X)
>       log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
            self.startprob_, self.transmat_, frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__ TestGaussianHMMWithTiedCovars.test_fit_sequences_of_different_length[log] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffaa850ad0>
implementation = 'log'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_sequences_of_different_length(self, implementation):
        lengths = [3, 4, 5]
        X = self.prng.rand(sum(lengths), self.n_features)
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        # This shouldn't raise
        # ValueError: setting an array element with a sequence.
>       h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419]])
    def _fit_log(self, X):
        log_frameprob = self._compute_log_likelihood(X)
>       log_prob, fwdlattice = _hmmc.forward_log(
            self.startprob_, self.transmat_, log_frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____ TestGaussianHMMWithTiedCovars.test_fit_with_length_one_signal[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffaa854c00>
implementation = 'scaling'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_with_length_one_signal(self, implementation):
        lengths = [10, 8, 1]
        X = self.prng.rand(sum(lengths), self.n_features)
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        # This shouldn't raise
        # ValueError: zero-size array to reduction operation maximum which
        # has no identity
>       h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.065563...47, 0.76688005, 0.83198977],
[0.30977806, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449]])
    def _fit_scaling(self, X):
        frameprob = self._compute_likelihood(X)
>       log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
            self.startprob_, self.transmat_, frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithTiedCovars.test_fit_with_length_one_signal[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffaa854b50>
implementation = 'log'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_with_length_one_signal(self, implementation):
        lengths = [10, 8, 1]
        X = self.prng.rand(sum(lengths), self.n_features)
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        # This shouldn't raise
        # ValueError: zero-size array to reduction operation maximum which
        # has no identity
>       h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.065563...47, 0.76688005, 0.83198977],
[0.30977806, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449]])
    def _fit_log(self, X):
        log_frameprob = self._compute_log_likelihood(X)
>       log_prob, fwdlattice = _hmmc.forward_log(
            self.startprob_, self.transmat_, log_frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestGaussianHMMWithTiedCovars.test_fit_zero_variance[scaling] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffaacbdd10>
implementation = 'scaling'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_zero_variance(self, implementation):
        # Example from issue #2 on GitHub.
        X = np.asarray([
            [7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
            [7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
            [7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
            [7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
            [7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
            [7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
            [7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
            [7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
            [7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
        ])
        h = hmm.GaussianHMM(3, self.covariance_type,
                            implementation=implementation)
>       h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
    def _fit_scaling(self, X):
        frameprob = self._compute_likelihood(X)
>       log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
            self.startprob_, self.transmat_, frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________ TestGaussianHMMWithTiedCovars.test_fit_zero_variance[log] ___________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffaacbddb0>
implementation = 'log'
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_zero_variance(self, implementation):
        # Example from issue #2 on GitHub.
        X = np.asarray([
            [7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
            [7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
            [7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
            [7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
            [7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
            [7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
            [7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
            [7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
            [7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
        ])
        h = hmm.GaussianHMM(3, self.covariance_type,
                            implementation=implementation)
>       h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
    def _fit_log(self, X):
        log_frameprob = self._compute_log_likelihood(X)
>       log_prob, fwdlattice = _hmmc.forward_log(
            self.startprob_, self.transmat_, log_frameprob)
E       RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGaussianHMMWithTiedCovars.test_fit_with_priors[scaling] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffaac01880>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
    @pytest.mark.parametrize("implementation", ["scaling", "log"])
    def test_fit_with_priors(self, implementation, init_params='mc',
                             params='stmc', n_iter=20):
        # We have a few options to make this a robust test, such as
        # a. increase the amount of training data to ensure convergence
        # b. Only learn some of the parameters (simplify the problem)
        # c. Increase the number of iterations
        #
        # (c) seems to not affect the ci/cd time too much.
        startprob_prior = 10 * self.startprob + 2.0
        transmat_prior = 10 * self.transmat + 2.0
        means_prior = self.means
        means_weight = 2.0
        covars_weight = 2.0
        if self.covariance_type in ('full', 'tied'):
            covars_weight += self.n_features
        covars_prior = self.covars
        h = hmm.GaussianHMM(self.n_components, self.covariance_type,
                            implementation=implementation)
        h.startprob_ = self.startprob
        h.startprob_prior = startprob_prior
        h.transmat_ = normalized(
            self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
        h.transmat_prior = transmat_prior
        h.means_ = 20 * self.means
        h.means_prior = means_prior
        h.means_weight = means_weight
        h.covars_ = self.covars
        h.covars_prior = covars_prior
        h.covars_weight = covars_weight
        lengths = [200] * 10
        X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
        # Re-initialize the parameters and check that we can converge to
        # the original parameter values.
        h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
                                  init_params=init_params, params=params,
                                  implementation=implementation,)
        # don't use random parameters for testing
        init = 1. / h_learn.n_components
        h_learn.startprob_ = np.full(h_learn.n_components, init)
        h_learn.transmat_ = \
            np.full((h_learn.n_components, h_learn.n_components), init)
        h_learn.n_iter = 0
        h_learn.fit(X, lengths=lengths)
>       assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', init_params='',
n_components=3, n_iter=1)
X = array([[-179.40644663, 80.38628517, 260.46452065],
[-177.75916678, 83.06972891, 259.55100775],
[-2...60368823],
[-241.43416831, 319.07709633, -119.03765627],
[-243.38013664, 319.70389678, -119.26458679]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
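For context on the failing call: `_hmmc.forward_scaling` is the compiled helper implementing the scaled forward pass of the HMM E-step, returning the sequence log-likelihood, the scaled forward lattice, and the per-step scaling factors. A pure-Python sketch of the computation (illustrative only, with conventions assumed from the return signature, not hmmlearn's actual code):

```python
import math

def forward_scaling(startprob, transmat, frameprob):
    """Scaled forward pass: alpha is renormalized at every step so the
    recursion stays in a representable range; the log-likelihood is
    recovered from the accumulated scaling factors."""
    n_samples, n_components = len(frameprob), len(startprob)
    fwd = [[0.0] * n_components for _ in range(n_samples)]
    scaling = [0.0] * n_samples

    # Initialization: alpha_0(i) = pi_i * b_i(x_0), then normalize.
    for i in range(n_components):
        fwd[0][i] = startprob[i] * frameprob[0][i]
    scaling[0] = 1.0 / sum(fwd[0])
    fwd[0] = [a * scaling[0] for a in fwd[0]]

    # Induction: alpha_t(j) = (sum_i alpha_{t-1}(i) A_ij) * b_j(x_t).
    for t in range(1, n_samples):
        for j in range(n_components):
            fwd[t][j] = sum(fwd[t - 1][i] * transmat[i][j]
                            for i in range(n_components)) * frameprob[t][j]
        scaling[t] = 1.0 / sum(fwd[t])
        fwd[t] = [a * scaling[t] for a in fwd[t]]

    # log P(X) = -sum_t log(c_t) for scaling factors c_t.
    log_prob = -sum(math.log(c) for c in scaling)
    return log_prob, fwd, scaling
```

The crash here is not in this arithmetic but in the pybind11 binding layer around it when built against Python 3.13.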
___________ TestGaussianHMMWithTiedCovars.test_fit_with_priors[log] ____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffffaac997d0>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', init_params='', n_components=3, n_iter=1)
X = array([[-179.40914788, 80.34873355, 260.06341399],
[-178.32135801, 83.09702222, 260.93968377],
[-2...07157308],
[-240.42088077, 318.23423128, -119.7895532 ],
[-241.13703629, 317.45325121, -118.51017893]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
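Similarly, `_hmmc.forward_log` is the log-space variant of the forward pass used by the `"log"` implementation. A pure-Python sketch of what the compiled helper computes (illustrative, not hmmlearn's actual code):

```python
import math

def forward_log(startprob, transmat, log_frameprob):
    """Log-space forward pass: works directly with log-probabilities,
    combining them with log-sum-exp instead of rescaling."""
    n_samples, n_components = len(log_frameprob), len(startprob)

    def logsumexp(vals):
        m = max(vals)
        if m == -math.inf:
            return -math.inf
        return m + math.log(sum(math.exp(v - m) for v in vals))

    fwd = [[0.0] * n_components for _ in range(n_samples)]
    # Initialization: log alpha_0(i) = log pi_i + log b_i(x_0).
    for i in range(n_components):
        fwd[0][i] = math.log(startprob[i]) + log_frameprob[0][i]
    # Induction via log-sum-exp over predecessor states.
    for t in range(1, n_samples):
        for j in range(n_components):
            fwd[t][j] = logsumexp(
                [fwd[t - 1][i] + math.log(transmat[i][j])
                 for i in range(n_components)]) + log_frameprob[t][j]
    # log P(X) = logsumexp over the final lattice row.
    return logsumexp(fwd[-1]), fwd
```

Again, the `RuntimeError` above originates in pybind11's argument conversion (`inc_ref()` on the `numpy.ndarray` inputs), before any of this arithmetic runs.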
_____ TestGaussianHMMWithFullCovars.test_score_samples_and_decode[scaling] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffaac80e90>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', init_params='st',
n_components=3)
X = array([[-178.91762365, 78.21428218, 259.72766302],
[-180.29036839, 81.65614717, 258.7654313 ],
[-1...12852298],
[-138.75200849, -298.93773986, -141.62863338],
[-139.88757229, -300.99150251, -139.32120466]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGaussianHMMWithFullCovars.test_score_samples_and_decode[log] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffaac80fc0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', init_params='st', n_components=3)
X = array([[-178.91762365, 78.21428218, 259.72766302],
[-180.29036839, 81.65614717, 258.7654313 ],
[-1...12852298],
[-138.75200849, -298.93773986, -141.62863338],
[-139.88757229, -300.99150251, -139.32120466]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestGaussianHMMWithFullCovars.test_fit[scaling] ________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffaac8d370>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...80692975],
[-239.96300261, 321.60437243, -119.98216274],
[-243.00912572, 319.79523591, -120.22074218]])
_state_sequence = array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 2, 1,
1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0,...0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1,
1, 0, 2, 0, 1, 1, 1, 1, 1, 0, 1, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...69639643],
[-180.66903787, 80.96928136, 260.49492216],
[-179.21107272, 83.36164545, 259.72145566]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_________________ TestGaussianHMMWithFullCovars.test_fit[log] __________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffab1e7950>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='full', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...80692975],
[-239.96300261, 321.60437243, -119.98216274],
[-243.00912572, 319.79523591, -120.22074218]])
_state_sequence = array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 2, 1,
1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0,...0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1,
1, 0, 2, 0, 1, 1, 1, 1, 1, 0, 1, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...69639643],
[-180.66903787, 80.96928136, 260.49492216],
[-179.21107272, 83.36164545, 259.72145566]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
____________ TestGaussianHMMWithFullCovars.test_criterion[scaling] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffab1e7a50>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=2,
n_iter=500, random_state=RandomState(MT19937) at 0xFFFFAA620940)
X = array([[ -89.29523181, 41.84500715, 129.19454811],
[-121.84420946, 159.33718199, -59.47865859],
[-1...24221311],
[-119.33776175, 161.48568424, -60.76453046],
[ -89.46135014, 39.43571927, 130.17918399]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestGaussianHMMWithFullCovars.test_criterion[log] _______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffaabddd60>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFFAA620040)
X = array([[ -89.29523181, 41.84500715, 129.19454811],
[-121.84420946, 159.33718199, -59.47865859],
[-1...24221311],
[-119.33776175, 161.48568424, -60.76453046],
[ -89.46135014, 39.43571927, 130.17918399]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithFullCovars.test_fit_ignored_init_warns[scaling] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffaabdde50>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffffaa5e0590>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[ 1.08237635e+00, -1.78571782e+00, -2.72336983e-01],
[-2.90368389e-01, 1.65614717e+00, -1.23456870e+00]... [-8.50841186e-02, -3.43870735e-01, -6.18822776e-01],
[ 3.90241258e-01, -1.85025630e+00, -9.02633482e-01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
________ TestGaussianHMMWithFullCovars.test_fit_ignored_init_warns[log] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffaacad8d0>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffffaa677e70>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[ 1.08237635e+00, -1.78571782e+00, -2.72336983e-01],
[-2.90368389e-01, 1.65614717e+00, -1.23456870e+00]... [-8.50841186e-02, -3.43870735e-01, -6.18822776e-01],
[ 3.90241258e-01, -1.85025630e+00, -9.02633482e-01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_ TestGaussianHMMWithFullCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffaa852d50>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.00240676, 0.54881636]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__ TestGaussianHMMWithFullCovars.test_fit_sequences_of_different_length[log] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffaa852ed0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.00240676, 0.54881636]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
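The `log` implementation fails at the analogous entry point, `_hmmc.forward_log`, which works entirely in log space instead of rescaling. Again for orientation only, a pure-NumPy sketch of the standard log-space forward pass matching the `(startprob_, transmat_, log_frameprob)` call shape in the traceback; this is an illustration, not hmmlearn's C++ code.

```python
import numpy as np

def forward_log(startprob, transmat, log_frameprob):
    """Log-space forward pass: returns (log_prob, fwdlattice).

    Illustrative NumPy version; sums over states are done with a
    stable logsumexp written out inline.
    """
    n_samples, n_components = log_frameprob.shape
    log_startprob = np.log(startprob)
    log_transmat = np.log(transmat)
    fwdlattice = np.zeros((n_samples, n_components))

    fwdlattice[0] = log_startprob + log_frameprob[0]
    for t in range(1, n_samples):
        # logsumexp over the previous states, per destination state.
        work = fwdlattice[t - 1][:, None] + log_transmat
        m = work.max(axis=0)
        fwdlattice[t] = (m + np.log(np.exp(work - m).sum(axis=0))
                         + log_frameprob[t])

    # Final logsumexp over states gives log P(X).
    m = fwdlattice[-1].max()
    log_prob = m + np.log(np.exp(fwdlattice[-1] - m).sum())
    return log_prob, fwdlattice
```

Both variants compute the same likelihood; scaling is typically faster, while log space tolerates zero transition probabilities less gracefully (they become -inf entries).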
____ TestGaussianHMMWithFullCovars.test_fit_with_length_one_signal[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffaa856360>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.002406...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithFullCovars.test_fit_with_length_one_signal[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffaa856410>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.002406...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestGaussianHMMWithFullCovars.test_fit_zero_variance[scaling] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffaacbe7b0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:922 Fitting a model with 50 free scalar parameters with only 36 data points will result in a degenerate solution.
__________ TestGaussianHMMWithFullCovars.test_fit_zero_variance[log] ___________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffaacbe850>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:922 Fitting a model with 50 free scalar parameters with only 36 data points will result in a degenerate solution.
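The warning above is unrelated to the crash but easy to sanity-check. For a 3-state GaussianHMM with full covariances on 4-dimensional data, a plausible back-of-envelope parameter count (the breakdown below is my reconstruction; it matches the numbers in the warning) is:

```python
# Reconstructed count of free scalar parameters for GaussianHMM(3, 'full')
# fitted to the 9x4 array X in test_fit_zero_variance.
n_components, n_features = 3, 4
n_samples = 9

free_params = (
    (n_components - 1)                                   # startprob (sums to 1)
    + n_components * (n_components - 1)                  # transmat rows sum to 1
    + n_components * n_features                          # means
    + n_components * n_features * (n_features + 1) // 2  # symmetric full covars
)
data_points = n_samples * n_features

print(free_params, data_points)  # 50 36
```

With more parameters than data values, the EM fit is underdetermined, hence the "degenerate solution" warning.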
_________ TestGaussianHMMWithFullCovars.test_fit_with_priors[scaling] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffaac01910>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', init_params='',
n_components=3, n_iter=1)
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...95169853],
[-239.53275248, 319.24695192, -120.48672946],
[-242.40666906, 318.95372592, -121.39814967]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________ TestGaussianHMMWithFullCovars.test_fit_with_priors[log] ____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffffaa82e050>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', init_params='', n_components=3, n_iter=1)
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...95169853],
[-239.53275248, 319.24695192, -120.48672946],
[-242.40666906, 318.95372592, -121.39814967]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-diag] __
covariance_type = 'diag', implementation = 'scaling', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations by merely permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5, -1.5, -1.5],
[-1.5, -1.5, -1.5, -1.5]],
[[-1.5, -1.5, -1.5, -...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-spherical] _
covariance_type = 'spherical', implementation = 'scaling', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations by merely permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.]]),
covars_weight=a...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-tied] __
covariance_type = 'tied', implementation = 'scaling', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations by merely permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0....n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-full] __
covariance_type = 'full', implementation = 'scaling', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the order in which the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations caused merely by permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0.,...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
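As context for the docstring above: the multi-sequence packing it describes can be sketched with plain NumPy. This is a minimal sketch; the array values are illustrative, not hmmlearn data.

```python
import numpy as np

# Two short sequences ("frames") packed into a single array X plus a
# lengths list -- the layout that hmmlearn's fit(X, lengths) expects.
seq_a = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])
seq_b = np.array([[0.7, 0.8], [0.9, 1.0]])

X = np.concatenate([seq_a, seq_b])
lengths = [len(seq_a), len(seq_b)]

assert X.shape == (5, 2)
assert sum(lengths) == len(X)

# Permuting the *sequence* order changes the packing but, per the test's
# invariant, should not change the fitted model.
X_perm = np.concatenate([seq_b, seq_a])
lengths_perm = [len(seq_b), len(seq_a)]
assert X_perm.shape == X.shape
```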
___ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-diag] ____
covariance_type = 'diag', implementation = 'log', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the order in which the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations caused merely by permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5, -1.5, -1.5],
[-1.5, -1.5, -1.5, -1.5]],
[[-1.5, -1.5, -1.5, -...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-spherical] _
covariance_type = 'spherical', implementation = 'log', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the order in which the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations caused merely by permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.]]),
covars_weight=a...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-tied] ____
covariance_type = 'tied', implementation = 'log', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the order in which the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations caused merely by permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0....n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-full] ____
covariance_type = 'full', implementation = 'log', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the order in which the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations caused merely by permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0.,...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____ TestGMMHMMWithSphericalCovars.test_score_samples_and_decode[scaling] _____
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffffaa7cc550>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFFAA539840,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGMMHMMWithSphericalCovars.test_score_samples_and_decode[log] _______
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffffaabde300>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFFAA538C40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
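The `_utils.split_X_lengths(X, lengths)` call in the frames above undoes the multi-sequence packing before scoring each sub-sequence. A hypothetical stand-in (not hmmlearn's actual implementation) behaves roughly like:

```python
import numpy as np

def split_by_lengths(X, lengths):
    """Yield consecutive row-blocks of X; lengths=None means one block."""
    if lengths is None:
        yield X
        return
    if sum(lengths) != len(X):
        raise ValueError("lengths must sum to the number of samples in X")
    start = 0
    for n in lengths:
        yield X[start:start + n]
        start += n

X = np.arange(10).reshape(5, 2)
parts = list(split_by_lengths(X, [3, 2]))
assert [len(p) for p in parts] == [3, 2]
assert np.array_equal(np.concatenate(parts), X)
```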
_______________ TestGMMHMMWithSphericalCovars.test_fit[scaling] ________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffffaabde030>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFFAA538E40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMMWithSphericalCovars.test_fit[log] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffffaa7d1710>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFFAA538F40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGMMHMMWithSphericalCovars.test_fit_sparse_data[scaling] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffffaa7d1c50>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFFAA538940,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3999.84865619, 3069.23303621],
[ 4002.43732491, 3067.60756989],
[34695.58244203, 37696.508278... [ 4000.44796333, 3068.5295739 ],
[ 3998.48777025, 3067.967097 ],
[ 6450.19377286, 7478.35563 ]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
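The warnings above follow hmmlearn's `init_params` convention: each letter names a parameter that initialization will overwrite, so a preset attribute whose letter is present triggers the warning. A toy illustration of that letter-to-attribute check follows; the mapping here is an assumption for illustration, not hmmlearn internals.

```python
# Each letter in init_params selects a parameter that fit() re-initializes,
# so a preset attribute with a matching letter is overwritten (hence the
# warnings above). Illustrative mapping, not hmmlearn internals:
flags = {"s": "startprob_", "t": "transmat_",
         "m": "means_", "c": "covars_", "w": "weights_"}

init_params = "mcw"  # the value used by the multisequence tests in this log
overwritten = [attr for letter, attr in flags.items() if letter in init_params]
assert overwritten == ["means_", "covars_", "weights_"]
```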
___________ TestGMMHMMWithSphericalCovars.test_fit_sparse_data[log] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffffaa816680>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFFAA538140,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3999.84865619, 3069.23303621],
[ 4002.43732491, 3067.60756989],
[34695.58244203, 37696.508278... [ 4000.44796333, 3068.5295739 ],
[ 3998.48777025, 3067.967097 ],
[ 6450.19377286, 7478.35563 ]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
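[Note: every failure above and below is the same `pybind11::handle::inc_ref() PyGILState_Check() failure` raised from the compiled `_hmmc` module. The assertion is pybind11's GIL guard: `inc_ref()` refuses to touch a Python object's refcount unless the C-API probe `PyGILState_Check()` reports that the calling thread holds the GIL. As an illustrative sketch (not part of hmmlearn), that same probe can be observed from pure Python via `ctypes` on CPython:]

```python
import ctypes

# PyGILState_Check() is the CPython C-API probe behind pybind11's
# inc_ref()/dec_ref() assertion. It returns 1 if the calling thread
# currently holds the GIL, 0 otherwise. Any ordinary Python thread
# executing bytecode holds the GIL, so this prints 1.
ctypes.pythonapi.PyGILState_Check.restype = ctypes.c_int
print(ctypes.pythonapi.PyGILState_Check())
```

[The failures suggest that under Python 3.13 the check fires even though the calls enter `_hmmc` from ordinary Python code, which points at the vendored/linked pybind11 version rather than at hmmlearn's own logic; rebuilding against a pybind11 release that supports 3.13 would be the first thing to try.]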
____________ TestGMMHMMWithSphericalCovars.test_criterion[scaling] _____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffffaa7a4310>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.]]),
covars_weight=a...2,
random_state=RandomState(MT19937) at 0xFFFFACFA8140,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 40.65624455, 29.16802829],
[242.52436414, 193.40204501],
[347.23482679, 377.48914412],
...,
[ 38.12816493, 29.67601719],
[241.05090945, 192.22538034],
[240.19846475, 193.26742897]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
______________ TestGMMHMMWithSphericalCovars.test_criterion[log] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffffaa7a4470>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.]]),
covars_weight=a...2,
random_state=RandomState(MT19937) at 0xFFFFAA5C3340,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 40.65624455, 29.16802829],
[242.52436414, 193.40204501],
[347.23482679, 377.48914412],
...,
[ 38.12816493, 29.67601719],
[241.05090945, 192.22538034],
[240.19846475, 193.26742897]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
_______ TestGMMHMMWithDiagCovars.test_score_samples_and_decode[scaling] ________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffffaa7cc750>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFFAA5C3140,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
_________ TestGMMHMMWithDiagCovars.test_score_samples_and_decode[log] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffffaabde120>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFFAA5C3840,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
__________________ TestGMMHMMWithDiagCovars.test_fit[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffffaabde6c0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFFAA5C3740,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
____________________ TestGMMHMMWithDiagCovars.test_fit[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffffaa7d3070>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFFAA5C0C40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
____________ TestGMMHMMWithDiagCovars.test_fit_sparse_data[scaling] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffffaa7d35b0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFFAA621140,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6452.82751127, 7476.17731294],
[29346.43680528, 29439.90571357],
[ 4000.36493519, 3066.929754... [ 4000.85526465, 3063.10163931],
[ 6450.19377286, 7477.49264111],
[ 6450.51059449, 7475.54810263]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
______________ TestGMMHMMWithDiagCovars.test_fit_sparse_data[log] ______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffffaa8168f0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFFAA620140,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6452.82751127, 7476.17731294],
[29346.43680528, 29439.90571357],
[ 4000.36493519, 3066.929754... [ 4000.85526465, 3063.10163931],
[ 6450.19377286, 7477.49264111],
[ 6450.51059449, 7475.54810263]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
_______________ TestGMMHMMWithDiagCovars.test_criterion[scaling] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffffaa7a4e10>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]]]),
...2,
random_state=RandomState(MT19937) at 0xFFFFAA622640,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.0423023 , 32.05291123],
[241.79570647, 194.37999672],
[294.33562038, 295.51745038],
...,
[ 61.05939775, 73.76171462],
[241.14804197, 192.17496613],
[241.72161057, 190.89927936]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
_________________ TestGMMHMMWithDiagCovars.test_criterion[log] _________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffffaa7a4ec0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]]]),
...2,
random_state=RandomState(MT19937) at 0xFFFFAA622540,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.0423023 , 32.05291123],
[241.79570647, 194.37999672],
[294.33562038, 295.51745038],
...,
[ 61.05939775, 73.76171462],
[241.14804197, 192.17496613],
[241.72161057, 190.89927936]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
_______ TestGMMHMMWithTiedCovars.test_score_samples_and_decode[scaling] ________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffffaa7cc950>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFFAA53B540,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
_________ TestGMMHMMWithTiedCovars.test_score_samples_and_decode[log] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffffaabde7b0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFFAA53AC40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
__________________ TestGMMHMMWithTiedCovars.test_fit[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffffaabde8a0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFFAA53AD40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
____________________ TestGMMHMMWithTiedCovars.test_fit[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffffaa7acad0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFFAA53A640,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
____________ TestGMMHMMWithTiedCovars.test_fit_sparse_data[scaling] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffffaa7ad010>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFFAA539F40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6448.11185593, 7479.8557005 ],
[29348.25281874, 29440.18051704],
[ 4001.16541306, 3066.972904... [ 4002.89789938, 3063.33866516],
[ 6449.52011589, 7476.97438223],
[ 6449.3008853 , 7477.29740985]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
______________ TestGMMHMMWithTiedCovars.test_fit_sparse_data[log] ______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffffaa815f30>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFFAA41C140,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6448.11185593, 7479.8557005 ],
[29348.25281874, 29440.18051704],
[ 4001.16541306, 3066.972904... [ 4002.89789938, 3063.33866516],
[ 6449.52011589, 7476.97438223],
[ 6449.3008853 , 7477.29740985]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGMMHMMWithTiedCovars.test_criterion[scaling] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffffaa7a5860>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....2,
random_state=RandomState(MT19937) at 0xFFFFAA41CD40,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 39.3472143 , 31.96516627],
[239.4457031 , 193.9191015 ],
[292.652245 , 294.71067724],
...,
[ 66.25970996, 70.96753017],
[242.16534204, 192.32929203],
[243.78173717, 191.48640575]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
_________________ TestGMMHMMWithTiedCovars.test_criterion[log] _________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffffaa7a5910>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....2,
random_state=RandomState(MT19937) at 0xFFFFAA41D140,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 39.3472143 , 31.96516627],
[239.4457031 , 193.9191015 ],
[292.652245 , 294.71067724],
...,
[ 66.25970996, 70.96753017],
[242.16534204, 192.32929203],
[243.78173717, 191.48640575]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
_______ TestGMMHMMWithFullCovars.test_score_samples_and_decode[scaling] ________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffffaa7ccb50>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFFAA41D840,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
_________ TestGMMHMMWithFullCovars.test_score_samples_and_decode[log] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffffaabde990>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFFAA4B6C40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
__________________ TestGMMHMMWithFullCovars.test_fit[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffffaabdea80>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFFAA538240,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
____________________ TestGMMHMMWithFullCovars.test_fit[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffffaa7d2350>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFFAA53AF40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
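[Note: every failure in this log aborts at the same boundary: the pybind11-compiled extension `_hmmc` raises during argument conversion (the checked `inc_ref()` on a `numpy.ndarray`), before any HMM math runs. For orientation, `_hmmc.forward_log(startprob, transmat, log_frameprob)` computes the standard log-space forward recursion and returns `(log_prob, fwdlattice)`. A pure-Python sketch of that recursion — an illustration only, not hmmlearn's actual C++ implementation — looks like this:]

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    if m == -math.inf:
        return -math.inf
    return m + math.log(sum(math.exp(x - m) for x in xs))

def forward_log(startprob, transmat, log_frameprob):
    """Log-space HMM forward pass; returns (log_prob, fwdlattice)."""
    n_samples, n_components = len(log_frameprob), len(startprob)
    fwd = [[0.0] * n_components for _ in range(n_samples)]
    # t = 0: initial distribution times emission probability, in log space.
    for i in range(n_components):
        fwd[0][i] = math.log(startprob[i]) + log_frameprob[0][i]
    # t > 0: sum over predecessor states, again in log space.
    for t in range(1, n_samples):
        for j in range(n_components):
            fwd[t][j] = logsumexp(
                [fwd[t - 1][i] + math.log(transmat[i][j])
                 for i in range(n_components)]) + log_frameprob[t][j]
    return logsumexp(fwd[-1]), fwd
```

[The traceback shows the RuntimeError is raised before this computation starts, which is why all backends (`log` and `scaling`) and all model classes fail identically under Python 3.13.]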
____________ TestGMMHMMWithFullCovars.test_fit_sparse_data[scaling] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffffaa7d11d0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFFAA538040,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6449.07573383, 7477.94058249],
[24146.71996085, 19276.07031622],
[17479.42387006, 13924.043632... [ 6452.12217213, 7471.39193731],
[29350.51157576, 29438.19823596],
[ 4000.78575575, 3067.80965369]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
______________ TestGMMHMMWithFullCovars.test_fit_sparse_data[log] ______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffffaa815cc0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFFAA538F40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6449.07573383, 7477.94058249],
[24146.71996085, 19276.07031622],
[17479.42387006, 13924.043632... [ 6452.12217213, 7471.39193731],
[29350.51157576, 29438.19823596],
[ 4000.78575575, 3067.80965369]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGMMHMMWithFullCovars.test_criterion[scaling] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffffaa7a4b50>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...2,
random_state=RandomState(MT19937) at 0xFFFFAA539D40,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.38394272, 31.47161592],
[173.50537726, 139.44913251],
[345.97120338, 377.84504473],
...,
[ 66.25970996, 70.96753017],
[175.32456372, 138.9877489 ],
[243.56689381, 192.53130439]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
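[Note: the `scaling` variants fail at the equivalent entry point, `_hmmc.forward_scaling(startprob, transmat, frameprob)`, which runs the forward pass in probability space with per-step renormalization and returns `(log_prob, fwdlattice, scaling_factors)`. A pure-Python sketch of that recursion, again for orientation only — the failing code is the pybind11-compiled C++:]

```python
import math

def forward_scaling(startprob, transmat, frameprob):
    """Scaled HMM forward pass; returns (log_prob, fwdlattice, scaling_factors)."""
    n_samples, n_components = len(frameprob), len(startprob)
    fwd = [[0.0] * n_components for _ in range(n_samples)]
    scaling = [0.0] * n_samples
    for t in range(n_samples):
        for j in range(n_components):
            # Predecessor mass: initial distribution at t=0, else the
            # transition-weighted sum over the previous (scaled) lattice row.
            prior = (startprob[j] if t == 0 else
                     sum(fwd[t - 1][i] * transmat[i][j]
                         for i in range(n_components)))
            fwd[t][j] = prior * frameprob[t][j]
        total = sum(fwd[t])             # renormalize each row to avoid underflow
        scaling[t] = 1.0 / total
        fwd[t] = [f * scaling[t] for f in fwd[t]]
    # The log-likelihood is recovered from the accumulated scaling factors.
    log_prob = -sum(math.log(c) for c in scaling)
    return log_prob, fwd, scaling
```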
_________________ TestGMMHMMWithFullCovars.test_criterion[log] _________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffffaa7a4940>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...2,
random_state=RandomState(MT19937) at 0xFFFFAA5C3340,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.38394272, 31.47161592],
[173.50537726, 139.44913251],
[345.97120338, 377.84504473],
...,
[ 66.25970996, 70.96753017],
[175.32456372, 138.9877489 ],
[243.56689381, 192.53130439]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestGMMHMM_KmeansInit.test_kmeans[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_KmeansInit object at 0xffffaa839d10>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_kmeans(self, implementation):
# Generate two isolated cluster.
# The second cluster has no. of points less than n_mix.
np.random.seed(0)
data1 = np.random.uniform(low=0, high=1, size=(100, 2))
data2 = np.random.uniform(low=5, high=6, size=(5, 2))
data = np.r_[data1, data2]
model = GMMHMM(n_components=2, n_mix=10, n_iter=5,
implementation=implementation)
> model.fit(data) # _init() should not fail here
hmmlearn/tests/test_gmm_hmm_new.py:232:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-... weights_prior=array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]]))
X = array([[5.48813504e-01, 7.15189366e-01],
[6.02763376e-01, 5.44883183e-01],
[4.23654799e-01, 6.45894113e-... [5.02467873e+00, 5.06724963e+00],
[5.67939277e+00, 5.45369684e+00],
[5.53657921e+00, 5.89667129e+00]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestGMMHMM_KmeansInit.test_kmeans[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_KmeansInit object at 0xffffaa839e50>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_kmeans(self, implementation):
# Generate two isolated cluster.
# The second cluster has no. of points less than n_mix.
np.random.seed(0)
data1 = np.random.uniform(low=0, high=1, size=(100, 2))
data2 = np.random.uniform(low=5, high=6, size=(5, 2))
data = np.r_[data1, data2]
model = GMMHMM(n_components=2, n_mix=10, n_iter=5,
implementation=implementation)
> model.fit(data) # _init() should not fail here
hmmlearn/tests/test_gmm_hmm_new.py:232:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-... weights_prior=array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]]))
X = array([[5.48813504e-01, 7.15189366e-01],
[6.02763376e-01, 5.44883183e-01],
[4.23654799e-01, 6.45894113e-... [5.02467873e+00, 5.06724963e+00],
[5.67939277e+00, 5.45369684e+00],
[5.53657921e+00, 5.89667129e+00]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMM_MultiSequence.test_chunked[diag] __________________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffffaa839f90>
covtype = 'diag', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-19.97769034, -16.75056455],
[-19.88212945, -16.97913043],
[-19.93125386, -16.94276853],
...,
[-11.01150478, -1.11584774],
[-11.10973308, -1.07914205],
[-10.8998337 , -0.84707255]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestGMMHMM_MultiSequence.test_chunked[spherical] _______________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffffaa83a0d0>
covtype = 'spherical', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-19.80390185, -17.07835084],
[-19.60579587, -16.83260239],
[-19.92498908, -16.91030194],
...,
[-11.17392582, -1.26966434],
[-11.14220209, -1.03192961],
[-11.14814372, -0.99298261]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMM_MultiSequence.test_chunked[tied] __________________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffffaac828b0>
covtype = 'tied', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0.... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-20.22761614, -15.84567719],
[-21.23619726, -16.89659692],
[-20.71982474, -16.73140459],
...,
[-10.87180439, -1.55878592],
[ -9.74956046, -1.38825752],
[-12.13924424, -0.25692342]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMM_MultiSequence.test_chunked[full] __________________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffffaac829e0>
covtype = 'full', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-20.51255292, -17.67431134],
[-15.84831228, -16.50504373],
[-21.40806672, -17.58054428],
...,
[-12.05683236, -0.58197627],
[-11.42658201, -1.42127957],
[-12.15481108, -0.76401566]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestMultinomialHMM.test_score_samples[scaling] ________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffffaac82fd0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
X = np.array([
[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3],
])
n_samples = X.shape[0]
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_multinomial_hmm.py:53:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', n_components=2, n_trials=5)
X = array([[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
__________________ TestMultinomialHMM.test_score_samples[log] __________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffffaac83100>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
X = np.array([
[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3],
])
n_samples = X.shape[0]
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_multinomial_hmm.py:53:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(n_components=2, n_trials=5)
X = array([[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
_____________________ TestMultinomialHMM.test_fit[scaling] _____________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffffaac8e8b0>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
# Also mess up trial counts.
h.n_trials = None
X[::2] *= 2
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:92:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', init_params='', n_components=2,
n_iter=1,
n_tri..., 10, 5, 10, 5, 10, 5, 10, 5, 10, 5, 10, 5]),
random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[4, 6, 0, 0],
[3, 1, 0, 1],
[2, 2, 6, 0],
[0, 2, 3, 0],
[0, 0, 4, 6],
[3, 0, 0, 2],
[2, 0, 4, 4],
[1, 0, 2, 2],
[2, 4, 0, 4],
[3, 2, 0, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
_______________________ TestMultinomialHMM.test_fit[log] _______________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffffaa7ccc50>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
# Also mess up trial counts.
h.n_trials = None
X[::2] *= 2
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:92:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1,
n_trials=array([10, 5, 10, 5, 10, 5, 10, 5..., 10, 5, 10, 5, 10, 5, 10, 5, 10, 5, 10, 5]),
random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[0, 0, 6, 4],
[4, 0, 1, 0],
[8, 2, 0, 0],
[2, 2, 0, 1],
[8, 2, 0, 0],
[1, 2, 1, 1],
[2, 4, 0, 4],
[2, 2, 0, 1],
[6, 2, 2, 0],
[0, 1, 2, 2]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
______________ TestMultinomialHMM.test_fit_emissionprob[scaling] _______________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffffaa7cce50>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_multinomial_hmm.py:96:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_multinomial_hmm.py:92: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', init_params='', n_components=2,
n_iter=1,
n_tri..., 5, 10, 5, 10, 5, 10, 5, 10, 5]),
params='e', random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[0, 6, 4, 0],
[0, 1, 2, 2],
[0, 0, 6, 4],
[0, 2, 1, 2],
[6, 0, 2, 2],
[1, 3, 0, 1],
[8, 2, 0, 0],
[3, 2, 0, 0],
[6, 4, 0, 0],
[5, 0, 0, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
________________ TestMultinomialHMM.test_fit_emissionprob[log] _________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffffaabdee40>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_multinomial_hmm.py:96:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_multinomial_hmm.py:92: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1,
n_trials=array([10, 5, 10, 5, 10, 5, 10, 5..., 5, 10, 5, 10, 5, 10, 5, 10, 5]),
params='e', random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[6, 4, 0, 0],
[4, 1, 0, 0],
[4, 2, 2, 2],
[1, 2, 1, 1],
[2, 0, 4, 4],
[0, 0, 5, 0],
[6, 2, 0, 2],
[3, 2, 0, 0],
[0, 0, 6, 4],
[0, 0, 1, 4]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
________________ TestMultinomialHMM.test_fit_with_init[scaling] ________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffffaabdef30>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.MultinomialHMM(
n_components=self.n_components, n_trials=self.n_trials,
params=params, init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1, n_trials=5,
random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[0, 0, 3, 2],
[1, 2, 1, 1],
[3, 1, 1, 0],
[4, 1, 0, 0],
[1, 0, 2, 2],
[0, 0, 3, 2],
[1, 1, 3, 0],
[0, 1, 1, 3],
[3, 0, 1, 1],
[0, 0, 3, 2]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
__________________ TestMultinomialHMM.test_fit_with_init[log] __________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffffaa7aea50>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.MultinomialHMM(
n_components=self.n_components, n_trials=self.n_trials,
params=params, init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1, n_trials=5,
random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[4, 0, 0, 1],
[0, 1, 2, 2],
[0, 1, 0, 4],
[3, 1, 1, 0],
[0, 0, 1, 4],
[0, 1, 2, 2],
[1, 0, 2, 2],
[0, 0, 4, 1],
[0, 1, 3, 1],
[3, 2, 0, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
________ TestMultinomialHMM.test_compare_with_categorical_hmm[scaling] _________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffffaa7a9250>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_compare_with_categorical_hmm(self, implementation):
n_components = 2 # ['Rainy', 'Sunny']
n_features = 3 # ['walk', 'shop', 'clean']
n_trials = 1
startprob = np.array([0.6, 0.4])
transmat = np.array([[0.7, 0.3], [0.4, 0.6]])
emissionprob = np.array([[0.1, 0.4, 0.5],
[0.6, 0.3, 0.1]])
h1 = hmm.MultinomialHMM(
n_components=n_components, n_trials=n_trials,
implementation=implementation)
h2 = hmm.CategoricalHMM(
n_components=n_components, implementation=implementation)
h1.startprob_ = startprob
h2.startprob_ = startprob
h1.transmat_ = transmat
h2.transmat_ = transmat
h1.emissionprob_ = emissionprob
h2.emissionprob_ = emissionprob
X1 = np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
X2 = [[0], [1], [2]] # different input format for CategoricalHMM
> log_prob1, state_sequence1 = h1.decode(X1, algorithm="viterbi")
hmmlearn/tests/test_multinomial_hmm.py:161:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', n_components=2, n_trials=1)
X = array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
__________ TestMultinomialHMM.test_compare_with_categorical_hmm[log] ___________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffffaa7a9550>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_compare_with_categorical_hmm(self, implementation):
n_components = 2 # ['Rainy', 'Sunny']
n_features = 3 # ['walk', 'shop', 'clean']
n_trials = 1
startprob = np.array([0.6, 0.4])
transmat = np.array([[0.7, 0.3], [0.4, 0.6]])
emissionprob = np.array([[0.1, 0.4, 0.5],
[0.6, 0.3, 0.1]])
h1 = hmm.MultinomialHMM(
n_components=n_components, n_trials=n_trials,
implementation=implementation)
h2 = hmm.CategoricalHMM(
n_components=n_components, implementation=implementation)
h1.startprob_ = startprob
h2.startprob_ = startprob
h1.transmat_ = transmat
h2.transmat_ = transmat
h1.emissionprob_ = emissionprob
h2.emissionprob_ = emissionprob
X1 = np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
X2 = [[0], [1], [2]] # different input format for CategoricalHMM
> log_prob1, state_sequence1 = h1.decode(X1, algorithm="viterbi")
hmmlearn/tests/test_multinomial_hmm.py:161:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(n_components=2, n_trials=1)
X = array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
__________________ TestPoissonHMM.test_score_samples[scaling] __________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffffaac835c0>
implementation = 'scaling', n_samples = 1000
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation, n_samples=1000):
h = self.new_hmm(implementation)
X, state_sequence = h.sample(n_samples)
assert X.ndim == 2
assert len(X) == len(state_sequence) == n_samples
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_poisson_hmm.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', n_components=2, random_state=0)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
...,
[1, 5, 0],
[1, 6, 0],
[2, 3, 0]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
____________________ TestPoissonHMM.test_score_samples[log] ____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffffaac83490>
implementation = 'log', n_samples = 1000
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation, n_samples=1000):
h = self.new_hmm(implementation)
X, state_sequence = h.sample(n_samples)
assert X.ndim == 2
assert len(X) == len(state_sequence) == n_samples
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_poisson_hmm.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(n_components=2, random_state=0)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
...,
[1, 5, 0],
[1, 6, 0],
[2, 3, 0]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
_______________________ TestPoissonHMM.test_fit[scaling] _______________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffffaa88f410>
implementation = 'scaling', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stl', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
np.random.seed(0)
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.lambdas_ = np.random.gamma(
shape=2, size=(self.n_components, self.n_features))
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:62:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA539040)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
_________________________ TestPoissonHMM.test_fit[log] _________________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffffaac8ee00>
implementation = 'log', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stl', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
np.random.seed(0)
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.lambdas_ = np.random.gamma(
shape=2, size=(self.n_components, self.n_features))
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:62:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA538140)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
___________________ TestPoissonHMM.test_fit_lambdas[scaling] ___________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffffaac8ebe0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_lambdas(self, implementation):
> self.test_fit(implementation, 'l')
hmmlearn/tests/test_poisson_hmm.py:66:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_poisson_hmm.py:62: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', init_params='', n_components=2, n_iter=1,
params='l', random_state=RandomState(MT19937) at 0xFFFFAA538940)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
_____________________ TestPoissonHMM.test_fit_lambdas[log] _____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffffaa7cc450>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_lambdas(self, implementation):
> self.test_fit(implementation, 'l')
hmmlearn/tests/test_poisson_hmm.py:66:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_poisson_hmm.py:62: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1, params='l',
random_state=RandomState(MT19937) at 0xFFFFAA539F40)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
__________________ TestPoissonHMM.test_fit_with_init[scaling] __________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffffaa7cd150>
implementation = 'scaling', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='stl', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.PoissonHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
____________________ TestPoissonHMM.test_fit_with_init[log] ____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffffaabdf110>
implementation = 'log', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='stl', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.PoissonHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFB0EB3840)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
____________________ TestPoissonHMM.test_criterion[scaling] ____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffffaabdf200>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(412)
m1 = self.new_hmm(implementation)
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.PoissonHMM(n, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_poisson_hmm.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFFAA53AC40)
X = array([[1, 5, 0],
[3, 5, 0],
[4, 1, 4],
...,
[1, 4, 0],
[3, 6, 0],
[5, 0, 4]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
______________________ TestPoissonHMM.test_criterion[log] ______________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffffaa7afcb0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(412)
m1 = self.new_hmm(implementation)
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.PoissonHMM(n, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_poisson_hmm.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFFAA623240)
X = array([[1, 5, 0],
[3, 5, 0],
[4, 1, 4],
...,
[1, 4, 0],
[3, 6, 0],
[5, 0, 4]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
_____________ TestVariationalCategorical.test_init_priors[scaling] _____________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffffaa83ac10>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="",
implementation=implementation)
model.pi_prior_ = np.full((4,), .25)
model.pi_posterior_ = np.full((4,), 7/4)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((4, 4), 7/4)
model.emissionprob_prior_ = np.full((4, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=4, n_features=3, n_iter=1,
random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
_______________ TestVariationalCategorical.test_init_priors[log] _______________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffffaa83afd0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="",
implementation=implementation)
model.pi_prior_ = np.full((4,), .25)
model.pi_posterior_ = np.full((4,), 7/4)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((4, 4), 7/4)
model.emissionprob_prior_ = np.full((4, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=4, n_features=3,
n_iter=1, random_state=1984)
X = array([[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]... [1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
_____________ TestVariationalCategorical.test_n_features[scaling] ______________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffffaac83bb0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Learn n_Features
model = vhmm.VariationalCategoricalHMM(
4, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:82:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=4, n_features=3, n_iter=1)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
_______________ TestVariationalCategorical.test_n_features[log] ________________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffffaac83ce0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Learn n_Features
model = vhmm.VariationalCategoricalHMM(
4, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:82:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=4, n_features=3,
n_iter=1)
X = array([[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]... [1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
________ TestVariationalCategorical.test_init_incorrect_priors[scaling] ________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffffaa6a4a70>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_incorrect_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Test startprob shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((3,), .25)
model.startprob_posterior_ = np.full((4,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((4,), .25)
model.startprob_posterior_ = np.full((3,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test transmat shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((3, 3), .25)
model.transmat_posterior_ = np.full((4, 4), .25)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((3, 3), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test emission shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="st",
implementation=implementation)
model.emissionprob_prior_ = np.full((3, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test too many n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 10
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Too small n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 1
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test that setting the desired prior value works
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="ste",
implementation=implementation,
startprob_prior=1, transmat_prior=2, emissionprob_prior=3)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:191:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(emissionprob_prior=3, implementation='scaling',
init_params='', n_...,
n_iter=1, random_state=1984, startprob_prior=1,
transmat_prior=2)
X = array([[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]... [1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
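[For context, not part of the build log: the assertion above comes from pybind11's GIL-safety check, which Python 3.13 exercises more strictly. The sketch below is illustrative only; the names are hypothetical and this is not hmmlearn's actual `_hmmc` code, which may trip the check for a different reason (e.g. having been built against a pybind11 release that predates Python 3.13 support). It shows the general pattern the message describes: touching a Python object's refcount while the GIL is released.]

```cpp
// Illustrative sketch of the error class flagged by
// "pybind11::handle::inc_ref() PyGILState_Check() failure".
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>

namespace py = pybind11;

py::array_t<double> bad_forward(py::array_t<double> frameprob) {
    py::gil_scoped_release release;  // GIL dropped for a numeric loop...
    // ...but copying the ndarray handle here calls
    // pybind11::handle::inc_ref() without the GIL held, which the
    // GIL-safety assertion reports as a RuntimeError.
    py::array_t<double> alias = frameprob;
    return alias;  // fix: re-acquire first with py::gil_scoped_acquire
}

PYBIND11_MODULE(example, m) {
    m.def("bad_forward", &bad_forward);
}
```

As the captured stderr notes, the check can be silenced by defining `PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF` consistently across all translation units of the extension, but that only disables the diagnostic; the underlying fix is to hold the GIL (via `py::gil_scoped_acquire`) around any Python-object manipulation.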
__________ TestVariationalCategorical.test_init_incorrect_priors[log] __________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffffaac8f350>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_incorrect_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Test startprob shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((3,), .25)
model.startprob_posterior_ = np.full((4,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((4,), .25)
model.startprob_posterior_ = np.full((3,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test transmat shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((3, 3), .25)
model.transmat_posterior_ = np.full((4, 4), .25)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((3, 3), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test emission shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="st",
implementation=implementation)
model.emissionprob_prior_ = np.full((3, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test too many n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 10
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Too small n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 1
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test that setting the desired prior value works
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="ste",
implementation=implementation,
startprob_prior=1, transmat_prior=2, emissionprob_prior=3)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:191:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(emissionprob_prior=3, init_params='', n_components=4,
n_features=3, n_iter=1, random_state=1984,
startprob_prior=1, transmat_prior=2)
X = array([[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]... [2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
______________ TestVariationalCategorical.test_fit_beal[scaling] _______________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffffaac8f020>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_beal(self, implementation):
rs = check_random_state(1984)
m1, m2, m3 = self.get_beal_models()
sequences = []
lengths = []
for i in range(7):
for m in [m1, m2, m3]:
sequences.append(m.sample(39, random_state=rs)[0])
lengths.append(len(sequences[-1]))
sequences = np.concatenate(sequences)
model = vhmm.VariationalCategoricalHMM(12, n_iter=500,
implementation=implementation,
tol=1e-6,
random_state=rs,
verbose=False)
> assert_log_likelihood_increasing(model, sequences, lengths, 100)
hmmlearn/tests/test_variational_categorical.py:213:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=12, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA5C3A40)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
________________ TestVariationalCategorical.test_fit_beal[log] _________________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffffaa7cd550>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_beal(self, implementation):
rs = check_random_state(1984)
m1, m2, m3 = self.get_beal_models()
sequences = []
lengths = []
for i in range(7):
for m in [m1, m2, m3]:
sequences.append(m.sample(39, random_state=rs)[0])
lengths.append(len(sequences[-1]))
sequences = np.concatenate(sequences)
model = vhmm.VariationalCategoricalHMM(12, n_iter=500,
implementation=implementation,
tol=1e-6,
random_state=rs,
verbose=False)
> assert_log_likelihood_increasing(model, sequences, lengths, 100)
hmmlearn/tests/test_variational_categorical.py:213:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=12, n_features=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA41D840)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
_______ TestVariationalCategorical.test_fit_and_compare_with_em[scaling] _______
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffffaa7cd650>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_and_compare_with_em(self, implementation):
# Explicitly setting Random State to test that certain
# model states will become "unused"
sequences, lengths = self.get_from_one_beal(7, 100, 1984)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
init_params="e",
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_categorical.py:225:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='e',
n_components=4, n_features=3, n_iter=500,
random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
_________ TestVariationalCategorical.test_fit_and_compare_with_em[log] _________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffffaabdf4d0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_and_compare_with_em(self, implementation):
# Explicitly setting Random State to test that certain
# model states will become "unused"
sequences, lengths = self.get_from_one_beal(7, 100, 1984)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
init_params="e",
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_categorical.py:225:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='e', n_components=4, n_features=3,
n_iter=500, random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
_______ TestVariationalCategorical.test_fit_length_1_sequences[scaling] ________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffffaabdf5c0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_length_1_sequences(self, implementation):
sequences1, lengths1 = self.get_from_one_beal(7, 100, 1984)
# Include some length 1 sequences
sequences2, lengths2 = self.get_from_one_beal(1, 1, 1984)
sequences = np.concatenate([sequences1, sequences2])
lengths = np.concatenate([lengths1, lengths2])
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:255:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=4, n_features=3, n_iter=1,
random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
_________ TestVariationalCategorical.test_fit_length_1_sequences[log] __________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffffaa6ed010>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_length_1_sequences(self, implementation):
sequences1, lengths1 = self.get_from_one_beal(7, 100, 1984)
# Include some length 1 sequences
sequences2, lengths2 = self.get_from_one_beal(1, 1, 1984)
sequences = np.concatenate([sequences1, sequences2])
lengths = np.concatenate([lengths1, lengths2])
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:255:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=4, n_features=3,
n_iter=1, random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
______________________ TestFull.test_random_fit[scaling] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffffaac83950>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='full', implementation='scaling', init_params='',
n_components=3)
rs = RandomState(MT19937) at 0xFFFFAA4B7540, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 3.10129119],
[ -8.58810682, 5.49343563, 8.40750902],
[ -6.98040052, -16.12864527, 2.64082744]])
_state_sequence = array([1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0,
0, 0, 0, 0, 2, 0, 0, 0, 1, 2, 0, 1, 0,...0, 2, 1,
1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2, 2, 0, 0, 0, 2, 1, 0, 1,
2, 1, 0, 2, 2, 2, 0, 1, 0, 1])
model = VariationalGaussianHMM(implementation='scaling', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA4B7540,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(implementation='scaling', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA4B7540,
tol=1e-09)
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 12.30542549],
[ 3.45864836, 9.93266313, 13.33197942],
[ 2.81248345, 8.96100579, 10.47967146]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
________________________ TestFull.test_random_fit[log] _________________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffffaa658c30>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='full', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFFAA4B6A40, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 3.10129119],
[ -8.58810682, 5.49343563, 8.40750902],
[ -6.98040052, -16.12864527, 2.64082744]])
_state_sequence = array([1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0,
0, 0, 0, 0, 2, 0, 0, 0, 1, 2, 0, 1, 0,...0, 2, 1,
1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2, 2, 0, 0, 0, 2, 1, 0, 1,
2, 1, 0, 2, 2, 2, 0, 1, 0, 1])
model = VariationalGaussianHMM(init_params='', n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA4B6A40,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(init_params='', n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA4B6A40,
tol=1e-09)
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 12.30542549],
[ 3.45864836, 9.93266313, 13.33197942],
[ 2.81248345, 8.96100579, 10.47967146]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
______________ TestFull.test_fit_mcgrory_titterington1d[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffffaa6a5eb0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(implementation='scaling', init_params='mc',
n_components=5, n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFFAA4B6740,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
________________ TestFull.test_fit_mcgrory_titterington1d[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffffaac8f790>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(init_params='mc', n_components=5, n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFFAA4B6240,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
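[Editorial note: the `log` implementation variant failing above, `_hmmc.forward_log(startprob, transmat, framelogprob)`, is the same forward recursion carried out in log space with log-sum-exp instead of per-step rescaling. A pure-Python sketch of that computation (illustrative only, not hmmlearn's actual C++ code) is:]

```python
import math

def _logsumexp(values):
    # Numerically stable log(sum(exp(v))) over a list of log-values.
    m = max(values)
    if m == -math.inf:
        return -math.inf
    return m + math.log(sum(math.exp(v - m) for v in values))

def forward_log(startprob, transmat, framelogprob):
    """Log-space forward pass: returns (logprob, forward log-lattice)."""
    n_states = len(startprob)
    n_obs = len(framelogprob)
    fwd = [[0.0] * n_states for _ in range(n_obs)]
    # Initialization: log prior plus per-frame log-likelihood.
    for j in range(n_states):
        fwd[0][j] = math.log(startprob[j]) + framelogprob[0][j]
    # Induction: log-sum-exp over predecessor states replaces the scaled sum.
    for t in range(1, n_obs):
        for j in range(n_states):
            fwd[t][j] = _logsumexp(
                [fwd[t - 1][i] + math.log(transmat[i][j])
                 for i in range(n_states)]) + framelogprob[t][j]
    logprob = _logsumexp(fwd[-1])
    return logprob, fwd
```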
_________________ TestFull.test_common_initialization[scaling] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffffaac8ef10>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(implementation='scaling', init_params='', n_components=4,
n_iter=1, tol=1e-09)
X = array([[ 0.21535104],
[ 2.82985744],
[-0.97185779],
[ 2.89081593],
[-0.66290202],
[...644159],
[ 0.32126301],
[ 2.73373158],
[-0.48778415],
[ 3.2352048 ],
[-2.21829728]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________________ TestFull.test_common_initialization[log] ___________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffffaa7cd350>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[-0.33240202],
[ 1.16575351],
[ 0.76708158],
[-0.16665794],
[-2.0417122 ],
[...612387],
[-1.47774877],
[ 1.99699008],
[ 3.9346355 ],
[-1.84294702],
[-2.14332482]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestFull.test_initialization[scaling] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffffaa83b110>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2.]], [[2.]], [[2.]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[2.]], [[2.]], [[2.]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[[2.]], [[2.]], [[2.]], [[2.]]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:233:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
dof_prior=[2.0, 2.0, 2.0, 2.0], impleme...FFFAA7CF440,
scale_prior=[[[2.0]], [[2.0]], [[2.0]], [[2.0]]],
tol=1e-09)
X = array([[-0.97620016],
[ 0.79725115],
[-0.27940365],
[ 3.32645134],
[-2.69876488],
[...774038],
[ 3.83803194],
[-1.46435466],
[ 2.95456941],
[-0.13443947],
[-0.96474541]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestFull.test_initialization[log] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffffaa83b250>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2.]], [[2.]], [[2.]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[2.]], [[2.]], [[2.]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[[2.]], [[2.]], [[2.]], [[2.]]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:233:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
dof_prior=[2.0, 2.0, 2.0, 2.0], init_pa...FFFAA7CFC40,
scale_prior=[[[2.0]], [[2.0]], [[2.0]], [[2.0]]],
tol=1e-09)
X = array([[ 1.90962598],
[ 1.38857322],
[ 0.88432176],
[ 1.50437126],
[-1.37679708],
[...987493],
[ 1.1246179 ],
[-2.31770774],
[ 2.39814844],
[ 1.40856394],
[ 2.12694691]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestTied.test_random_fit[scaling] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffffaa658b00>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='tied', implementation='scaling', init_params='',
n_components=3)
rs = RandomState(MT19937) at 0xFFFFAA5C1440, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.76809081, -17.57929881, 2.65993861],
[ 4.47790401, 10.95422031, 12.25009349],
[ -9.2822..., 2.91189727],
[ 1.47179701, 9.35583105, 10.30599288],
[ -4.00663682, -15.17296134, 2.9706196 ]])
_state_sequence = array([1, 2, 0, 0, 0, 1, 1, 0, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 2, 0, 0, 0, 0, 0, 2,...0, 0, 0,
0, 0, 1, 0, 0, 0, 2, 1, 1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2,
2, 0, 0, 0, 2, 1, 0, 1, 2, 1])
model = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA5C1440,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA5C1440,
tol=1e-09)
X = array([[ -6.76809081, -17.57929881, 2.65993861],
[ 4.47790401, 10.95422031, 12.25009349],
[ -9.2822..., 8.29790309],
[ -7.45761904, 8.0443883 , 8.74775768],
[ -7.54100296, 7.27668055, 8.35765657]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________________ TestTied.test_random_fit[log] _________________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffffaa6588a0>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='tied', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFFAA5C2240, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.54597428, -14.48319166, 3.52814708],
[ 3.02773721, 8.66210382, 10.95226001],
[ -9.6765..., 1.73843505],
[ 3.90207131, 11.87153515, 12.46452122],
[ -6.04735701, -17.31754837, 1.46456652]])
_state_sequence = array([1, 2, 0, 0, 0, 1, 1, 0, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 2, 0, 0, 0, 0, 0, 2,...0, 0, 0,
0, 0, 1, 0, 0, 0, 2, 1, 1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2,
2, 0, 0, 0, 2, 1, 0, 1, 2, 1])
model = VariationalGaussianHMM(covariance_type='tied', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA5C2240,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA5C2240,
tol=1e-09)
X = array([[ -6.54597428, -14.48319166, 3.52814708],
[ 3.02773721, 8.66210382, 10.95226001],
[ -9.6765..., 10.28703698],
[ -9.27093832, 7.48888941, 7.75556056],
[ -9.50212106, 8.22396714, 7.70516698]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestTied.test_fit_mcgrory_titterington1d[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffffaa6a6d50>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='mc', n_co...ter=1000,
random_state=RandomState(MT19937) at 0xFFFFAA5C2B40,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestTied.test_fit_mcgrory_titterington1d[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffffaac8f570>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', init_params='mc', n_components=5,
n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFFAA53AF40,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestTied.test_common_initialization[scaling] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffffaac8f130>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[ 3.02406044],
[ 0.15141778],
[ 0.44490074],
[ 0.92052631],
[-0.18359039],
[...156249],
[ 0.61494698],
[-2.27023399],
[ 2.64757888],
[-2.00572944],
[ 0.08367312]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________________ TestTied.test_common_initialization[log] ___________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffffaa7cd850>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', init_params='', n_components=4,
n_iter=1, tol=1e-09)
X = array([[-1.09489413],
[-0.12957722],
[-1.73146656],
[ 3.55253037],
[ 2.62945991],
[...229695],
[ 0.93327602],
[ 3.14435486],
[-2.68712136],
[-0.81984256],
[ 3.63942885]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
____________________ TestTied.test_initialization[scaling] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffffaa83b610>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[[2.]], [[2.]], [[2.]]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=2,
scale_prior=[[2]],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:318:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='tied',
dof_prior=2, im... random_state=RandomState(MT19937) at 0xFFFFAA53BA40,
scale_prior=[[2]], tol=1e-09)
X = array([[ 2.7343842 ],
[ 2.01508175],
[ 2.29638889],
[ 1.12585508],
[ 1.67279509],
[...808295],
[-0.79265056],
[-0.27745453],
[ 0.69004695],
[-0.23995418],
[-1.0133645 ]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
______________________ TestTied.test_initialization[log] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffffaa83b750>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[[2.]], [[2.]], [[2.]]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=2,
scale_prior=[[2]],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:318:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='tied',
dof_prior=2, in... random_state=RandomState(MT19937) at 0xFFFFAA538640,
scale_prior=[[2]], tol=1e-09)
X = array([[-1.51990156],
[-0.77421241],
[ 3.56219686],
[-1.64888838],
[ 2.6276434 ],
[...179403],
[-0.686967 ],
[ 1.27430623],
[-0.31739316],
[ 1.74639412],
[-2.01831639]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
____________________ TestSpherical.test_random_fit[scaling] ____________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffffaa658640>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFFAA41E340, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 8.83737591],
[ 2.84752765, 10.20119432, 12.21355309],
[ -8.94111272, 8.15425357, 8.74112105]])
_state_sequence = array([0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 2, 1, 1,
1, 1, 1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0,...2, 1, 1,
2, 2, 1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2,
0, 2, 2, 2, 0, 0, 0, 0, 2, 0])
model = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA41E340,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA41E340,
tol=1e-09)
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 2.96146404],
[ -5.67847522, -16.01739311, 2.72149483],
[ 3.0501041 , 10.1190271 , 11.98035801]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
______________________ TestSpherical.test_random_fit[log] ______________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffffaa6583e0>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='spherical', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFFAA41E040, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 8.83737591],
[ 2.84752765, 10.20119432, 12.21355309],
[ -8.94111272, 8.15425357, 8.74112105]])
_state_sequence = array([0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 2, 1, 1,
1, 1, 1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0,...2, 1, 1,
2, 2, 1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2,
0, 2, 2, 2, 0, 0, 0, 0, 2, 0])
model = VariationalGaussianHMM(covariance_type='spherical', init_params='',
n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA41E040,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', init_params='',
n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA41E040,
tol=1e-09)
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 2.96146404],
[ -5.67847522, -16.01739311, 2.72149483],
[ 3.0501041 , 10.1190271 , 11.98035801]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
____________ TestSpherical.test_fit_mcgrory_titterington1d[scaling] ____________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffffaa6a7bf0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='mc',...ter=1000,
random_state=RandomState(MT19937) at 0xFFFFAA41CB40,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
______________ TestSpherical.test_fit_mcgrory_titterington1d[log] ______________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffffaac8f8a0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', init_params='mc',
n_components=5, n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFFAA41C440,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
______________ TestSpherical.test_common_initialization[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffffaac8f9b0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[ 1.58581198],
[-1.43013571],
[ 3.50073686],
[-2.09080284],
[ 1.48390039],
[...711457],
[ 1.8787106 ],
[ 2.31673751],
[ 0.62417883],
[-2.57450891],
[ 0.51093669]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
________________ TestSpherical.test_common_initialization[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffffaa7cd950>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', init_params='',
n_components=4, n_iter=1, tol=1e-09)
X = array([[ 2.55895004],
[ 1.9386079 ],
[-1.14441545],
[ 0.79939524],
[-0.84122716],
[...848896],
[-0.7355048 ],
[-1.27791075],
[-1.53171601],
[ 1.93602005],
[-1.20472876]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
__________________ TestSpherical.test_initialization[scaling] __________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffffaa83b4d0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [2, 2, 2]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [2, 2, 2] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[2, 2, 2, 2],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:403:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
covariance_type='spherical',
... random_state=RandomState(MT19937) at 0xFFFFAA41F140,
scale_prior=[2, 2, 2, 2], tol=1e-09)
X = array([[-0.69995355],
[ 1.11732084],
[ 2.34671222],
[ 0.38667263],
[ 0.49315166],
[...586139],
[ 0.81443462],
[-1.66759168],
[ 3.14268492],
[ 3.76227287],
[ 0.80644186]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
____________________ TestSpherical.test_initialization[log] ____________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffffaa83b890>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [2, 2, 2]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [2, 2, 2] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[2, 2, 2, 2],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:403:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
covariance_type='spherical',
... random_state=RandomState(MT19937) at 0xFFFFAA41F340,
scale_prior=[2, 2, 2, 2], tol=1e-09)
X = array([[ 3.45654067],
[-2.75120263],
[ 2.70685609],
[ 2.19256817],
[-0.71552539],
[...986977],
[-2.05296787],
[ 0.98484479],
[ 2.68913339],
[-0.30012857],
[ 3.23805001]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestDiagonal.test_random_fit[scaling] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffffaa6582b0>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(implementation='scaling', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFFAA5C0040, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 9.05969418],
[ 3.16991695, 9.72247605, 12.12314999],
[ 3.09806199, 9.95716109, 11.96433113]])
_state_sequence = array([0, 0, 1, 1, 0, 2, 2, 2, 2, 1, 1, 2, 1, 0, 0, 0, 1, 2, 1, 1, 1, 1,
1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0, 0, 0,...1, 2, 2,
1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2, 0, 2,
2, 2, 0, 0, 0, 0, 2, 0, 2, 2])
model = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA5C0040,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA5C0040,
tol=1e-09)
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 11.98612945],
[ 2.90646378, 9.9957161 , 11.98128432],
[ -8.65470261, 8.11543755, 8.85803583]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestDiagonal.test_random_fit[log] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffffaa658d60>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}, h = GaussianHMM(init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFFAA4B6E40, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 9.05969418],
[ 3.16991695, 9.72247605, 12.12314999],
[ 3.09806199, 9.95716109, 11.96433113]])
_state_sequence = array([0, 0, 1, 1, 0, 2, 2, 2, 2, 1, 1, 2, 1, 0, 0, 0, 1, 2, 1, 1, 1, 1,
1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0, 0, 0,...1, 2, 2,
1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2, 0, 2,
2, 2, 0, 0, 0, 0, 2, 0, 2, 2])
model = VariationalGaussianHMM(covariance_type='diag', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA4B6E40,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFFAA4B6E40,
tol=1e-09)
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 11.98612945],
[ 2.90646378, 9.9957161 , 11.98128432],
[ -8.65470261, 8.11543755, 8.85803583]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestDiagonal.test_fit_mcgrory_titterington1d[scaling] _____________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffffaa6a8b90>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='mc', n_co...ter=1000,
random_state=RandomState(MT19937) at 0xFFFFAA4B6B40,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestDiagonal.test_fit_mcgrory_titterington1d[log] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffffaac8fac0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', init_params='mc', n_components=5,
n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFFAA5C1340,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestDiagonal.test_common_initialization[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffffaac8fbd0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[ 2.94840979],
[-0.4236967 ],
[-1.86164101],
[-2.70760383],
[ 0.52817596],
[...614648],
[ 1.17327289],
[-0.48308756],
[-1.23521059],
[ 2.96221347],
[-2.4055287 ]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestDiagonal.test_common_initialization[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffffaa7cda50>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', init_params='', n_components=4,
n_iter=1, tol=1e-09)
X = array([[-1.00900958],
[ 1.83548612],
[-1.18687723],
[ 1.39357219],
[ 2.31529054],
[...120186],
[-0.59813352],
[ 1.09476375],
[ 2.7001891 ],
[ 0.25515909],
[-1.58409402]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestDiagonal.test_initialization[scaling] ___________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffffaa83b9d0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[2], [2], [2]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type=self.covariance_type,
implementation=implementation)
model.dof_prior_ = [1, 1, 1, 1]
model.dof_posterior_ = [1, 1, 1, 1]
model.scale_prior_ = [[2], [2], [2], [2]]
model.scale_posterior_ = [[2, 2, 2]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[2], [2], [2], [2]]
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:486:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='diag',
dof_prior=[2.0,...andom_state=RandomState(MT19937) at 0xFFFFAA622640,
scale_prior=[[2], [2], [2], [2]], tol=1e-09)
X = array([[ 0.37725899],
[ 3.11738285],
[-0.09163979],
[ 1.69939899],
[ 1.17211122],
[...975532],
[-1.29219785],
[-2.21400016],
[-0.12401679],
[ 3.5650227 ],
[-0.33847644]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestDiagonal.test_initialization[log] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffffaa83bb10>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[2], [2], [2]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type=self.covariance_type,
implementation=implementation)
model.dof_prior_ = [1, 1, 1, 1]
model.dof_posterior_ = [1, 1, 1, 1]
model.scale_prior_ = [[2], [2], [2], [2]]
model.scale_posterior_ = [[2, 2, 2]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[2], [2], [2], [2]]
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:486:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='diag',
dof_prior=[2.0,...andom_state=RandomState(MT19937) at 0xFFFFAA623740,
scale_prior=[[2], [2], [2], [2]], tol=1e-09)
X = array([[-1.50974603],
[ 0.66501942],
[ 1.03376567],
[-0.33821964],
[-0.03369866],
[...945696],
[ 1.03948035],
[ 3.29548267],
[-1.67415189],
[-0.95330419],
[ 2.79920426]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
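For context on the pybind11 diagnostic repeated throughout the log: `pybind11::handle::inc_ref()` asserts via `PyGILState_Check()` that the GIL is held whenever a Python object's refcount is touched from native code, and this assertion fires on every `forward_scaling`/`forward_log` call into hmmlearn's `_hmmc` extension under 3.13. As a background sketch only (names here are hypothetical, not hmmlearn's actual code), this is the pattern pybind11 expects when native code manipulates Python objects from a context where the GIL may not be held:

```cpp
// Illustrative sketch: refcounting a Python object from native code
// requires the GIL. Without the explicit acquire, the copy below would
// trip the same PyGILState_Check() assertion seen in the log.
#include <pybind11/pybind11.h>
namespace py = pybind11;

void store_copy(py::handle src, py::object &dst) {
    py::gil_scoped_acquire gil;  // take (or re-take) the GIL for this scope
    // reinterpret_borrow copies the handle and calls inc_ref(), which
    // is only legal while the GIL is held:
    dst = py::reinterpret_borrow<py::object>(src);
}   // GIL released again when `gil` goes out of scope
```

Whether the real fix is a pybind11 upgrade, a rebuild of `_hmmc` against 3.13, or a GIL-handling change in the extension itself is not determined by this log.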
=============================== warnings summary ===============================
.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/tests/test_variational_categorical.py: 9 warnings
.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/tests/test_variational_gaussian.py: 15 warnings
/<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/base.py:1192: RuntimeWarning: underflow encountered in exp
self.startprob_subnorm_ = np.exp(startprob_log_subnorm)
.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/tests/test_variational_categorical.py: 7 warnings
.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/tests/test_variational_gaussian.py: 13 warnings
/<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/base.py:1197: RuntimeWarning: underflow encountered in exp
self.transmat_subnorm_ = np.exp(transmat_log_subnorm)
.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_beal[scaling]
/<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/base.py:1130: RuntimeWarning: underflow encountered in exp
return np.exp(self._compute_subnorm_log_likelihood(X))
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_forward_scaling_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_forward_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_backward_scaling_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_viterbi_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_score_samples
FAILED hmmlearn/tests/test_base.py::TestBaseConsistentWithGMM::test_score_samples
FAILED hmmlearn/tests/test_base.py::TestBaseConsistentWithGMM::test_decode - ...
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_viterbi[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_viterbi[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_map[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_map[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_predict[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_predict[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_n_features[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_n_features[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_score_samples[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_score_samples[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_emissionprob[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_emissionprob[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_with_init[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_with_init[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_startprob_and_transmat[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_startprob_and_transmat[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_underflow_from_scaling[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_left_right[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_left_right[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-diag]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-spherical]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-tied]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-full]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-diag]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-spherical]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-tied]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-full]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_KmeansInit::test_kmeans[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_KmeansInit::test_kmeans[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[diag]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[spherical]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[tied]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[full]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_score_samples[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_score_samples[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_emissionprob[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_emissionprob[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_with_init[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_with_init[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_compare_with_categorical_hmm[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_compare_with_categorical_hmm[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_score_samples[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_score_samples[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit[log] - Ru...
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_lambdas[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_lambdas[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_with_init[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_with_init[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_criterion[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_criterion[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_priors[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_priors[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_n_features[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_n_features[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_incorrect_priors[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_incorrect_priors[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_beal[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_beal[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_and_compare_with_em[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_and_compare_with_em[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_length_1_sequences[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_length_1_sequences[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_initialization[log]
=========== 202 failed, 92 passed, 26 xfailed, 45 warnings in 28.14s ===========
E: pybuild pybuild:389: test: plugin pyproject failed with: exit code=1: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_hmmlearn/build; python3.13 -m pytest --pyargs hmmlearn
I: pybuild base:311: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.12_hmmlearn/build; python3.12 -m pytest --pyargs hmmlearn
set RNG seed to 764389638
============================= test session starts ==============================
platform linux -- Python 3.12.6, pytest-8.3.2, pluggy-1.5.0
rootdir: /<<PKGBUILDDIR>>
configfile: setup.cfg
plugins: typeguard-4.3.0
collected 320 items
hmmlearn/tests/test_base.py .....FFF.FF.FF.. [ 5%]
hmmlearn/tests/test_categorical_hmm.py FFFFFFFF..FF..FFFFFF.. [ 11%]
hmmlearn/tests/test_gaussian_hmm.py ..FF..FFFFFF..FFFFFFFF..FF.F..FF..FF [ 23%]
FFFF..FFFFFFFF..FF..FF..FFFFFF..FFFFFFFF..FF..FFFFFF..FFFFFFFF [ 42%]
hmmlearn/tests/test_gmm_hmm.py xxxxxxxxxxxxxxxxxx [ 48%]
hmmlearn/tests/test_gmm_hmm_multisequence.py FFFFFFFF [ 50%]
hmmlearn/tests/test_gmm_hmm_new.py ........FFFFFFxxFF........FFFFFFxxFF. [ 62%]
.......FFFFFFxxFF........FFFFFFxxFFFFFFFF [ 75%]
hmmlearn/tests/test_kl_divergence.py ..... [ 76%]
hmmlearn/tests/test_multinomial_hmm.py ..FF..FFFFFF..FF [ 81%]
hmmlearn/tests/test_poisson_hmm.py ..FFFFFFFFFF [ 85%]
hmmlearn/tests/test_utils.py ... [ 86%]
hmmlearn/tests/test_variational_categorical.py FFFFFFFFFFFF [ 90%]
hmmlearn/tests/test_variational_gaussian.py FFFFFFFFFFFFFFFFFFFFFFFFFFFF [ 98%]
FFFF [100%]
=================================== FAILURES ===================================
____________ TestBaseAgainstWikipedia.test_do_forward_scaling_pass _____________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffff7978f080>
def test_do_forward_scaling_pass(self):
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.hmm.startprob_, self.hmm.transmat_, self.frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:79: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
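[The workaround named in the warning above can be sketched roughly as follows; note this silences the check rather than fixing the underlying GIL-state problem. Passing the define via CPPFLAGS and rebuilding with pip are illustrative assumptions, and the define must reach every translation unit linked into the extension to avoid ODR violations.]

```shell
# Illustrative sketch only: disable pybind11's GIL-held assertion when
# rebuilding the extension. The define must be applied consistently to ALL
# translation units of the extension, or ODR violations result.
export CPPFLAGS="-DPYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF"
# then rebuild from source, e.g.:
#   pip install --no-binary :all: --force-reinstall hmmlearn
echo "$CPPFLAGS"
```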
________________ TestBaseAgainstWikipedia.test_do_forward_pass _________________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffff79468f20>
def test_do_forward_pass(self):
> log_prob, fwdlattice = _hmmc.forward_log(
self.hmm.startprob_, self.hmm.transmat_, self.log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:91: RuntimeError
____________ TestBaseAgainstWikipedia.test_do_backward_scaling_pass ____________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffff79469220>
def test_do_backward_scaling_pass(self):
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.hmm.startprob_, self.hmm.transmat_, self.frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:104: RuntimeError
________________ TestBaseAgainstWikipedia.test_do_viterbi_pass _________________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffff794694c0>
def test_do_viterbi_pass(self):
> log_prob, state_sequence = _hmmc.viterbi(
self.hmm.startprob_, self.hmm.transmat_, self.log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:129: RuntimeError
_________________ TestBaseAgainstWikipedia.test_score_samples __________________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffff794696a0>
def test_score_samples(self):
# ``StubHMM`` ignores the values in ``X``, so we just pass in an
# array of the appropriate shape.
> log_prob, posteriors = self.hmm.score_samples(self.log_frameprob)
hmmlearn/tests/test_base.py:139:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = StubHMM(n_components=2)
X = array([[-0.10536052, -1.60943791],
[-0.10536052, -1.60943791],
[-2.30258509, -0.22314355],
[-0.10536052, -1.60943791],
[-0.10536052, -1.60943791]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
_________________ TestBaseConsistentWithGMM.test_score_samples _________________
self = <hmmlearn.tests.test_base.TestBaseConsistentWithGMM object at 0xffff79469b50>
def test_score_samples(self):
> log_prob, hmmposteriors = self.hmm.score_samples(self.log_frameprob)
hmmlearn/tests/test_base.py:177:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = StubHMM(n_components=8)
X = array([[-2.96386293e-01, -4.17593458e-01, -1.27991286e+00,
-2.45502914e+00, -1.25950994e+00, -3.02797648e-01,
...-1.27434836e+00,
-1.40410037e+00, -1.02187784e+00, -1.20382375e+00,
-9.55100491e-01, -1.19901257e+00]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
____________________ TestBaseConsistentWithGMM.test_decode _____________________
self = <hmmlearn.tests.test_base.TestBaseConsistentWithGMM object at 0xffff79469d00>
def test_decode(self):
> _log_prob, state_sequence = self.hmm.decode(self.log_frameprob)
hmmlearn/tests/test_base.py:188:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = StubHMM(n_components=8)
X = array([[-1.07887817e-01, -1.94024833e-01, -6.82941990e-01,
-7.96115993e-01, -8.66453816e-01, -6.75703475e-03,
...-3.95517515e-01,
-1.23898412e+00, -1.30590765e+00, -3.97863571e+00,
-9.05989789e-01, -5.84822932e-02]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
_________ TestCategoricalAgainstWikipedia.test_decode_viterbi[scaling] _________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff78497560>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_viterbi(self, implementation):
# From http://en.wikipedia.org/wiki/Viterbi_algorithm:
# "This reveals that the observations ['walk', 'shop', 'clean']
# were most likely generated by states ['Sunny', 'Rainy', 'Rainy'],
# with probability 0.01344."
h = self.new_hmm(implementation)
X = [[0], [1], [2]]
> log_prob, state_sequence = h.decode(X, algorithm="viterbi")
hmmlearn/tests/test_categorical_hmm.py:37:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
___________ TestCategoricalAgainstWikipedia.test_decode_viterbi[log] ___________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff7978f530>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_viterbi(self, implementation):
# From http://en.wikipedia.org/wiki/Viterbi_algorithm:
# "This reveals that the observations ['walk', 'shop', 'clean']
# were most likely generated by states ['Sunny', 'Rainy', 'Rainy'],
# with probability 0.01344."
h = self.new_hmm(implementation)
X = [[0], [1], [2]]
> log_prob, state_sequence = h.decode(X, algorithm="viterbi")
hmmlearn/tests/test_categorical_hmm.py:37:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
___________ TestCategoricalAgainstWikipedia.test_decode_map[scaling] ___________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff77f0b590>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_map(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> _log_prob, state_sequence = h.decode(X, algorithm="map")
hmmlearn/tests/test_categorical_hmm.py:45:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
hmmlearn/base.py:289: in _decode_map
_, posteriors = self.score_samples(X)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[0],
[1],
[2]]), lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
_____________ TestCategoricalAgainstWikipedia.test_decode_map[log] _____________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff77f2c3b0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_map(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> _log_prob, state_sequence = h.decode(X, algorithm="map")
hmmlearn/tests/test_categorical_hmm.py:45:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
hmmlearn/base.py:289: in _decode_map
_, posteriors = self.score_samples(X)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[0],
[1],
[2]]), lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
____________ TestCategoricalAgainstWikipedia.test_predict[scaling] _____________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff77f2c980>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_predict(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> state_sequence = h.predict(X)
hmmlearn/tests/test_categorical_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:363: in predict
_, state_sequence = self.decode(X, lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
______________ TestCategoricalAgainstWikipedia.test_predict[log] _______________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff77f2cb00>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_predict(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> state_sequence = h.predict(X)
hmmlearn/tests/test_categorical_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:363: in predict
_, state_sequence = self.decode(X, lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
_________________ TestCategoricalHMM.test_n_features[scaling] __________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff77f2cda0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, _ = self.new_hmm(implementation).sample(500)
# set n_features
model = hmm.CategoricalHMM(
n_components=2, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, [500], 10)
hmmlearn/tests/test_categorical_hmm.py:80:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', init_params='', n_components=2,
n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[2],
[0],
[1],
[0],
[2],
[2],
[1],
[2],
[1],
[0]... [0],
[1],
[1],
[2],
[1],
[0],
[0],
[0],
[2],
[2]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
___________________ TestCategoricalHMM.test_n_features[log] ____________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff77f2cf20>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, _ = self.new_hmm(implementation).sample(500)
# set n_features
model = hmm.CategoricalHMM(
n_components=2, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, [500], 10)
hmmlearn/tests/test_categorical_hmm.py:80:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[1],
[1],
[2],
[0],
[0],
[2],
[2],
[1],
[1],
[1]... [2],
[0],
[0],
[2],
[0],
[1],
[1],
[0],
[0],
[0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
________________ TestCategoricalHMM.test_score_samples[scaling] ________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff77f2d4f0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
idx = np.repeat(np.arange(self.n_components), 10)
n_samples = len(idx)
X = np.random.randint(self.n_features, size=(n_samples, 1))
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_categorical_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[0],
[1],
[0],
[2],
[0],
[0],
[2],
[2],
[0],
[1],
[0],
[0],
[0],
[2],
[1],
[1],
[0],
[2],
[2],
[0]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
__________________ TestCategoricalHMM.test_score_samples[log] __________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff77f2d790>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
idx = np.repeat(np.arange(self.n_components), 10)
n_samples = len(idx)
X = np.random.randint(self.n_features, size=(n_samples, 1))
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_categorical_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[1],
[0],
[0],
[1],
[2],
[2],
[2],
[0],
[2],
[0],
[0],
[0],
[1],
[1],
[1],
[1],
[1],
[2],
[0],
[0]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
_____________________ TestCategoricalHMM.test_fit[scaling] _____________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff77f2dd30>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:140:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', init_params='', n_components=2,
n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[0],
[2],
[2],
[2],
[0],
[0],
[2],
[1],
[1],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
_______________________ TestCategoricalHMM.test_fit[log] _______________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff77f2deb0>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:140:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[0],
[2],
[2],
[0],
[2],
[0],
[2],
[1],
[2],
[1]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
______________ TestCategoricalHMM.test_fit_emissionprob[scaling] _______________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff77f2e0c0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_categorical_hmm.py:144:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_categorical_hmm.py:140: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', init_params='', n_components=2,
n_features=3, n_iter=1, params='e',
random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[0],
[1],
[0],
[0],
[0],
[2],
[1],
[2],
[1],
[1]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
________________ TestCategoricalHMM.test_fit_emissionprob[log] _________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff77f2e240>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_categorical_hmm.py:144:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_categorical_hmm.py:140: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
params='e', random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[0],
[1],
[2],
[0],
[0],
[2],
[0],
[0],
[1],
[1]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
________________ TestCategoricalHMM.test_fit_with_init[scaling] ________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff77f2e450>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.CategoricalHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[2],
[2],
[1],
[2],
[1],
[0],
[2],
[2],
[2],
[1]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
__________________ TestCategoricalHMM.test_fit_with_init[log] __________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff77f2e5d0>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize paramerters
h = hmm.CategoricalHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[1],
[0],
[0],
[2],
[2],
[0],
[0],
[0],
[2],
[1]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
__ TestGaussianHMMWithSphericalCovars.test_score_samples_and_decode[scaling] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4c770>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='st', n_components=3)
X = array([[-179.56000798, 79.57176561, 259.68798732],
[-180.56888339, 78.41505899, 261.05535316],
[-1...6363279 ],
[-140.61081384, -301.3193914 , -140.56172842],
[-139.79461543, -300.95336068, -139.67848205]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
____ TestGaussianHMMWithSphericalCovars.test_score_samples_and_decode[log] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4c8f0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', init_params='st', n_components=3)
X = array([[-179.56000798, 79.57176561, 259.68798732],
[-180.56888339, 78.41505899, 261.05535316],
[-1...6363279 ],
[-140.61081384, -301.3193914 , -140.56172842],
[-139.79461543, -300.95336068, -139.67848205]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
_____________ TestGaussianHMMWithSphericalCovars.test_fit[scaling] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4ce90>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...95007275],
[-139.97005487, -299.93792764, -140.04085163],
[-239.95188158, 320.03972951, -119.97272471]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 0, 0, 2, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1,...2,
2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 2, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGaussianHMMWithSphericalCovars.test_fit[log] _______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4d070>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='spherical', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...95007275],
[-139.97005487, -299.93792764, -140.04085163],
[-239.95188158, 320.03972951, -119.97272471]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 0, 0, 2, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1,...2,
2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 2, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
__________ TestGaussianHMMWithSphericalCovars.test_criterion[scaling] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4d280>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF77A0F740)
X = array([[ -90.15718286, 40.04508216, 130.03944716],
[-119.82025674, 159.91649324, -59.90349328],
[-1...84045482],
[-120.12894077, 159.84070667, -60.20323671],
[ -89.97836609, 39.94933366, 129.82682576]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
____________ TestGaussianHMMWithSphericalCovars.test_criterion[log] ____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4d400>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF7794C440)
X = array([[ -90.15718286, 40.04508216, 130.03944716],
[-119.82025674, 159.91649324, -59.90349328],
[-1...84045482],
[-120.12894077, 159.84070667, -60.20323671],
[ -89.97836609, 39.94933366, 129.82682576]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
___ TestGaussianHMMWithSphericalCovars.test_fit_ignored_init_warns[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4d610>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff77985c40>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[ 4.39992016e-01, -4.28234395e-01, -3.12012681e-01],
[-5.68883385e-01, -1.58494101e+00, 1.05535316e+00]... [-2.20862064e-01, 4.83062914e-01, -1.95718567e+00],
[ 1.00961906e+00, 7.02226595e-01, -9.47509422e-01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_____ TestGaussianHMMWithSphericalCovars.test_fit_ignored_init_warns[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4d790>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff779e0b90>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[ 4.39992016e-01, -4.28234395e-01, -3.12012681e-01],
[-5.68883385e-01, -1.58494101e+00, 1.05535316e+00]... [-2.20862064e-01, 4.83062914e-01, -1.95718567e+00],
[ 1.00961906e+00, 7.02226595e-01, -9.47509422e-01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
_ TestGaussianHMMWithSphericalCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f2ea20>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.01178803, 0.61194334]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
_ TestGaussianHMMWithSphericalCovars.test_fit_sequences_of_different_length[log] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4dd30>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.01178803, 0.61194334]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
_ TestGaussianHMMWithSphericalCovars.test_fit_with_length_one_signal[scaling] __
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4cdd0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.011788...06, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449],
[0.2960687 , 0.13129105, 0.84281793]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 GIL warning as in the first captured stderr above]
___ TestGaussianHMMWithSphericalCovars.test_fit_with_length_one_signal[log] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4c5f0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.011788...06, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449],
[0.2960687 , 0.13129105, 0.84281793]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 GIL warning as in the first captured stderr above]
______ TestGaussianHMMWithSphericalCovars.test_fit_zero_variance[scaling] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4deb0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 GIL warning as in the first captured stderr above]
________ TestGaussianHMMWithSphericalCovars.test_fit_zero_variance[log] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4e060>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 GIL warning as in the first captured stderr above]
_______ TestGaussianHMMWithSphericalCovars.test_fit_with_priors[scaling] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4e240>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n_components=3, n_iter=1)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...73790553],
[-180.18615346, 79.87077255, 259.73353861],
[-240.06028298, 320.09425446, -119.74998577]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 GIL warning as in the first captured stderr above]
_________ TestGaussianHMMWithSphericalCovars.test_fit_with_priors[log] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4e3c0>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', init_params='', n_components=3,
n_iter=1)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...73790553],
[-180.18615346, 79.87077255, 259.73353861],
[-240.06028298, 320.09425446, -119.74998577]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 GIL warning as in the first captured stderr above]
_ TestGaussianHMMWithSphericalCovars.test_fit_startprob_and_transmat[scaling] __
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f2dfd0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_startprob_and_transmat(self, implementation):
> self.test_fit(implementation, 'st')
hmmlearn/tests/test_gaussian_hmm.py:274:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_gaussian_hmm.py:89: in test_fit
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 GIL warning as in the first captured stderr above]
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
___ TestGaussianHMMWithSphericalCovars.test_fit_startprob_and_transmat[log] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f2d8b0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_startprob_and_transmat(self, implementation):
> self.test_fit(implementation, 'st')
hmmlearn/tests/test_gaussian_hmm.py:274:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_gaussian_hmm.py:89: in test_fit
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 GIL warning as in the first captured stderr above]
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_____ TestGaussianHMMWithSphericalCovars.test_underflow_from_scaling[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff77f4fbc0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_underflow_from_scaling(self, implementation):
# Setup an ill-conditioned dataset
data1 = self.prng.normal(0, 1, 100).tolist()
data2 = self.prng.normal(5, 1, 100).tolist()
data3 = self.prng.normal(0, 1, 100).tolist()
data4 = self.prng.normal(5, 1, 100).tolist()
data = np.concatenate([data1, data2, data3, data4])
# Insert an outlier
data[40] = 10000
data2d = data[:, None]
lengths = [len(data2d)]
h = hmm.GaussianHMM(2, n_iter=100, verbose=True,
covariance_type=self.covariance_type,
implementation=implementation, init_params="")
h.startprob_ = [0.0, 1]
h.transmat_ = [[0.4, 0.6], [0.6, 0.4]]
h.means_ = [[0], [5]]
h.covars_ = [[1], [1]]
if implementation == "scaling":
with pytest.raises(ValueError):
h.fit(data2d, lengths)
else:
> h.fit(data2d, lengths)
hmmlearn/tests/test_gaussian_hmm.py:300:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', init_params='', n_components=2,
n_iter=100, verbose=True)
X = array([[ 4.39992016e-01],
[-4.28234395e-01],
[-3.12012681e-01],
[-5.68883385e-01],
[-1.584...83917623e+00],
[ 5.48982119e+00],
[ 7.23344018e+00],
[ 4.20497381e+00],
[ 4.96426274e+00]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 GIL warning as in the first captured stderr above]
___ TestGaussianHMMWithDiagonalCovars.test_score_samples_and_decode[scaling] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f4f080>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', init_params='st', n_components=3)
X = array([[-181.58494101, 81.05535316, 258.07342089],
[-179.30141612, 79.25379857, 259.84337334],
[-1...79461543],
[-140.95336068, -299.67848205, -141.52093867],
[-142.16145292, -299.65468671, -139.12103062]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 GIL warning as in the first captured stderr above]
_____ TestGaussianHMMWithDiagonalCovars.test_score_samples_and_decode[log] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f4f260>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(init_params='st', n_components=3)
X = array([[-181.58494101, 81.05535316, 258.07342089],
[-179.30141612, 79.25379857, 259.84337334],
[-1...79461543],
[-140.95336068, -299.67848205, -141.52093867],
[-142.16145292, -299.65468671, -139.12103062]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 GIL warning as in the first captured stderr above]
_____________ TestGaussianHMMWithDiagonalCovars.test_fit[scaling] ______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f4f7d0>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(implementation='scaling', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...02728733],
[-239.95365322, 320.03452379, -120.02851028],
[-179.97832262, 80.04811842, 260.03537787]])
_state_sequence = array([0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 1, 0, 0, 2, 1, 1,
1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2,...2,
2, 1, 1, 1, 0, 0, 0, 1, 1, 2, 2, 1, 1, 1, 2, 1, 0, 0, 0, 1, 1, 0,
1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...98639428],
[-180.18651806, 79.88681452, 259.90325311],
[-179.93614812, 79.90008375, 259.76960024]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
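The `implementation='scaling'` variant fails at the same boundary via `_hmmc.forward_scaling`, which returns an extra array of per-step scaling factors. Again a pure-Python sketch of the standard scaled recursion (an assumption about what the compiled helper computes, inferred from the signatures above):

```python
import math

def forward_scaling(startprob, transmat, frameprob):
    """Scaled HMM forward pass; returns
    (log_prob, fwdlattice, scaling_factors).

    Sketch of the recursion behind `_hmmc.forward_scaling`: each
    lattice row is renormalized to sum to 1, and log P(X) is
    recovered from the scaling factors.
    """
    n_samples, n_components = len(frameprob), len(startprob)
    fwd = [[0.0] * n_components for _ in range(n_samples)]
    scaling = [0.0] * n_samples
    # Initialization: pi_i * b_i(x_0), then normalize the row.
    for i in range(n_components):
        fwd[0][i] = startprob[i] * frameprob[0][i]
    scaling[0] = 1.0 / sum(fwd[0])
    fwd[0] = [v * scaling[0] for v in fwd[0]]
    # Recursion: propagate through transmat, normalize each row.
    for t in range(1, n_samples):
        for j in range(n_components):
            fwd[t][j] = frameprob[t][j] * sum(
                fwd[t - 1][i] * transmat[i][j]
                for i in range(n_components))
        scaling[t] = 1.0 / sum(fwd[t])
        fwd[t] = [v * scaling[t] for v in fwd[t]]
    # log P(X) = -sum of log scaling factors.
    log_prob = -sum(math.log(c) for c in scaling)
    return log_prob, fwd, scaling
```

Both code paths therefore fail the same way: the first ndarray handed back across the pybind11 boundary trips the new `PyGILState_Check()` assertion, before any model fitting happens.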
_______________ TestGaussianHMMWithDiagonalCovars.test_fit[log] ________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f4f950>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(n_components=3), lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...02728733],
[-239.95365322, 320.03452379, -120.02851028],
[-179.97832262, 80.04811842, 260.03537787]])
_state_sequence = array([0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 1, 0, 0, 2, 1, 1,
1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2,...2,
2, 1, 1, 1, 0, 0, 0, 1, 1, 2, 2, 1, 1, 1, 2, 1, 0, 0, 0, 1, 1, 0,
1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...98639428],
[-180.18651806, 79.88681452, 259.90325311],
[-179.93614812, 79.90008375, 259.76960024]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
__________ TestGaussianHMMWithDiagonalCovars.test_criterion[scaling] ___________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f4fb30>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF77A90740)
X = array([[ -89.96055284, 39.80222668, 130.05051096],
[-120.05552152, 160.1845284 , -59.94002531],
[-1...75723798],
[-120.10591007, 159.86762656, -60.12630269],
[ -90.17317424, 40.02722058, 129.94323241]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
____________ TestGaussianHMMWithDiagonalCovars.test_criterion[log] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f4fd10>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF77A0CC40)
X = array([[ -89.96055284, 39.80222668, 130.05051096],
[-120.05552152, 160.1845284 , -59.94002531],
[-1...75723798],
[-120.10591007, 159.86762656, -60.12630269],
[ -90.17317424, 40.02722058, 129.94323241]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
____ TestGaussianHMMWithDiagonalCovars.test_fit_ignored_init_warns[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f4ff20>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff778d5070>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[-1.58494101e+00, 1.05535316e+00, -1.92657911e+00],
[ 6.98583878e-01, -7.46201430e-01, -1.56626664e-01]... [ 7.02226595e-01, -9.47509422e-01, -1.16620867e+00],
[ 4.79956068e-01, 3.68105791e-01, 2.45414301e-01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
______ TestGaussianHMMWithDiagonalCovars.test_fit_ignored_init_warns[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f51100>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff778ab170>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[-1.58494101e+00, 1.05535316e+00, -1.92657911e+00],
[ 6.98583878e-01, -7.46201430e-01, -1.56626664e-01]... [ 7.02226595e-01, -9.47509422e-01, -1.16620867e+00],
[ 4.79956068e-01, 3.68105791e-01, 2.45414301e-01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_ TestGaussianHMMWithDiagonalCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f2fa10>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.0768555 , 0.85304299]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
_ TestGaussianHMMWithDiagonalCovars.test_fit_sequences_of_different_length[log] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f2fc50>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.0768555 , 0.85304299]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
__ TestGaussianHMMWithDiagonalCovars.test_fit_with_length_one_signal[scaling] __
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f2fef0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...7 , 0.13129105, 0.84281793],
[0.6590363 , 0.5954396 , 0.4363537 ],
[0.35625033, 0.58713093, 0.14947134]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
____ TestGaussianHMMWithDiagonalCovars.test_fit_with_length_one_signal[log] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f4fa10>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...7 , 0.13129105, 0.84281793],
[0.6590363 , 0.5954396 , 0.4363537 ],
[0.35625033, 0.58713093, 0.14947134]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
______ TestGaussianHMMWithDiagonalCovars.test_fit_zero_variance[scaling] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f4e4b0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
________ TestGaussianHMMWithDiagonalCovars.test_fit_zero_variance[log] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f4e180>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
_______ TestGaussianHMMWithDiagonalCovars.test_fit_with_priors[scaling] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f4dc40>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', init_params='', n_components=3, n_iter=1)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...8371198 ],
[-180.26646139, 79.7657748 , 259.85521097],
[-239.93733261, 319.93811216, -119.84462714]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 inc_ref()/GIL advisory as above]
_________ TestGaussianHMMWithDiagonalCovars.test_fit_with_priors[log] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f50050>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(init_params='', n_components=3, n_iter=1)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...8371198 ],
[-180.26646139, 79.7657748 , 259.85521097],
[-239.93733261, 319.93811216, -119.84462714]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 inc_ref()/GIL advisory as above]
________ TestGaussianHMMWithDiagonalCovars.test_fit_left_right[scaling] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f4e9f0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_left_right(self, implementation):
transmat = np.zeros((self.n_components, self.n_components))
# Left-to-right: each state is connected to itself and its
# direct successor.
for i in range(self.n_components):
if i == self.n_components - 1:
transmat[i, i] = 1.0
else:
transmat[i, i] = transmat[i, i + 1] = 0.5
# Always start in first state
startprob = np.zeros(self.n_components)
startprob[0] = 1.0
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, covariance_type="diag",
params="mct", init_params="cm",
implementation=implementation)
h.startprob_ = startprob.copy()
h.transmat_ = transmat.copy()
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:343:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', init_params='cm', n_components=3,
params='mct')
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 inc_ref()/GIL advisory as above]
__________ TestGaussianHMMWithDiagonalCovars.test_fit_left_right[log] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff77f4eb70>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_left_right(self, implementation):
transmat = np.zeros((self.n_components, self.n_components))
# Left-to-right: each state is connected to itself and its
# direct successor.
for i in range(self.n_components):
if i == self.n_components - 1:
transmat[i, i] = 1.0
else:
transmat[i, i] = transmat[i, i + 1] = 0.5
# Always start in first state
startprob = np.zeros(self.n_components)
startprob[0] = 1.0
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, covariance_type="diag",
params="mct", init_params="cm",
implementation=implementation)
h.startprob_ = startprob.copy()
h.transmat_ = transmat.copy()
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:343:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(init_params='cm', n_components=3, params='mct')
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 inc_ref()/GIL advisory as above]
_____ TestGaussianHMMWithTiedCovars.test_score_samples_and_decode[scaling] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f506e0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', init_params='st',
n_components=3)
X = array([[-178.59797475, 79.5657409 , 258.74575809],
[-179.66842145, 79.69139951, 259.84626451],
[-1...49160395],
[-141.24628501, -300.37993208, -140.27125813],
[-141.45463451, -299.54832455, -138.87519327]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 inc_ref()/GIL advisory as above]
_______ TestGaussianHMMWithTiedCovars.test_score_samples_and_decode[log] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f50890>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', init_params='st', n_components=3)
X = array([[-178.59797475, 79.5657409 , 258.74575809],
[-179.66842145, 79.69139951, 259.84626451],
[-1...49160395],
[-141.24628501, -300.37993208, -140.27125813],
[-141.45463451, -299.54832455, -138.87519327]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 inc_ref()/GIL advisory as above]
_______________ TestGaussianHMMWithTiedCovars.test_fit[scaling] ________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f50e60>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-179.49036318, 80.49770217, 260.22729101],
[-177.19921897, 82.05240269, 258.75362751],
[-2...76799323],
[-141.11139975, -301.01622178, -139.64456859],
[-239.52737343, 320.0881958 , -119.88264809]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 2,
2, 2, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2,...1,
1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1,
1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 2, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[-179.49036318, 80.49770217, 260.22729101],
[-177.19921897, 82.05240269, 258.75362751],
[-2...48814769],
[-178.22353792, 80.73368008, 259.40528151],
[-240.10345486, 320.44926904, -120.55757739]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 inc_ref()/GIL advisory as above]
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_________________ TestGaussianHMMWithTiedCovars.test_fit[log] __________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f50fe0>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='tied', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.29695334, 80.63551478, 260.08570242],
[-179.63198744, 83.57475464, 259.43705642],
[-2...54851235],
[-139.94360048, -301.49867289, -139.74192943],
[-240.00737543, 320.41244315, -119.7630728 ]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 2,
2, 2, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2,...1,
1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1,
1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 2, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[-180.29695334, 80.63551478, 260.08570242],
[-179.63198744, 83.57475464, 259.43705642],
[-2...52720131],
[-179.57829677, 81.93918096, 260.07470614],
[-239.91312424, 320.24505768, -120.61859347]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 inc_ref()/GIL advisory as above]
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
____________ TestGaussianHMMWithTiedCovars.test_criterion[scaling] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f51220>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=2,
n_iter=500, random_state=RandomState(MT19937) at 0xFFFF7794EC40)
X = array([[ -88.21770462, 40.92795939, 129.40262041],
[-121.41627864, 158.98922756, -59.13670338],
[-1...22352999],
[-119.34365265, 161.49352362, -59.98983334],
[ -90.61800641, 40.37507169, 129.9919648 ]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 inc_ref()/GIL advisory as above]
______________ TestGaussianHMMWithTiedCovars.test_criterion[log] _______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f51400>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF7794F740)
X = array([[ -89.44538051, 37.94816194, 129.39341826],
[-120.41705964, 161.68371736, -58.8628533 ],
[-1...42278486],
[-118.4986192 , 159.41830414, -60.98122605],
[ -89.67265413, 40.74109859, 129.37179527]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 inc_ref()/GIL advisory as above]
______ TestGaussianHMMWithTiedCovars.test_fit_ignored_init_warns[scaling] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f4f320>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff778a8bf0>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[ 1.40202525e+00, -4.34259100e-01, -1.25424191e+00],
[ 3.31578554e-01, -3.08600486e-01, -1.53735485e-01]... [ 6.04857190e-01, -2.51936017e-01, 8.99130290e-01],
[ 1.60788687e+00, -1.30106516e+00, 7.60125909e-01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
[same pybind11 inc_ref()/GIL advisory as above]
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
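Every failure below hits the same pybind11 GIL-held assertion, and the captured stderr names a compile-time escape hatch. For debugging only (it silences the check rather than fixing the underlying GIL handling), the macro could be defined consistently for every translation unit of the extension, e.g. via CPPFLAGS before rebuilding; this is a hypothetical sketch, not part of the hmmlearn or Debian build:

```shell
# Debugging aid suggested by the pybind11 message: disable the
# GIL-held assertion. The define must reach ALL translation units
# of the extension, or ODR violations result, so pass it through
# the preprocessor flags rather than a single source file.
export CPPFLAGS="${CPPFLAGS:-} -DPYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF"
# then rebuild the extension so every object file sees the define,
# e.g. by re-running the package build against python3.13
```

If the assertion still fires with the check disabled removed from the picture, that would point at a genuine GIL-state bug surfaced by Python 3.13 rather than an overzealous check.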
________ TestGaussianHMMWithTiedCovars.test_fit_ignored_init_warns[log] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f2f770>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff778d76b0>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[ 1.40202525e+00, -4.34259100e-01, -1.25424191e+00],
[ 3.31578554e-01, -3.08600486e-01, -1.53735485e-01]... [ 6.04857190e-01, -2.51936017e-01, 8.99130290e-01],
[ 1.60788687e+00, -1.30106516e+00, 7.60125909e-01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_ TestGaussianHMMWithTiedCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f51880>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__ TestGaussianHMMWithTiedCovars.test_fit_sequences_of_different_length[log] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f51a00>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____ TestGaussianHMMWithTiedCovars.test_fit_with_length_one_signal[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f51c10>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.065563...47, 0.76688005, 0.83198977],
[0.30977806, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithTiedCovars.test_fit_with_length_one_signal[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f51d90>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.065563...47, 0.76688005, 0.83198977],
[0.30977806, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestGaussianHMMWithTiedCovars.test_fit_zero_variance[scaling] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f51fa0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________ TestGaussianHMMWithTiedCovars.test_fit_zero_variance[log] ___________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f52120>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGaussianHMMWithTiedCovars.test_fit_with_priors[scaling] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f52330>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', init_params='',
n_components=3, n_iter=1)
X = array([[-179.71981959, 80.62320422, 260.46006062],
[-179.65260883, 83.73956528, 259.61020874],
[-2...72729149],
[-240.75128173, 318.46053611, -119.12219174],
[-242.63342505, 317.96779402, -119.42243578]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________ TestGaussianHMMWithTiedCovars.test_fit_with_priors[log] ____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff77f524e0>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', init_params='', n_components=3, n_iter=1)
X = array([[-179.36757073, 80.10292494, 260.230414 ],
[-176.9979355 , 82.11169006, 260.26981077],
[-2...00947255],
[-241.50421361, 319.18515577, -119.31106836],
[-242.81887283, 319.8039497 , -118.45936722]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____ TestGaussianHMMWithFullCovars.test_score_samples_and_decode[scaling] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff77f52ae0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', init_params='st',
n_components=3)
X = array([[-178.91762365, 78.21428218, 259.72766302],
[-180.29036839, 81.65614717, 258.7654313 ],
[-1...12852298],
[-138.75200849, -298.93773986, -141.62863338],
[-139.88757229, -300.99150251, -139.32120466]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGaussianHMMWithFullCovars.test_score_samples_and_decode[log] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff77f52c60>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', init_params='st', n_components=3)
X = array([[-178.91762365, 78.21428218, 259.72766302],
[-180.29036839, 81.65614717, 258.7654313 ],
[-1...12852298],
[-138.75200849, -298.93773986, -141.62863338],
[-139.88757229, -300.99150251, -139.32120466]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestGaussianHMMWithFullCovars.test_fit[scaling] ________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff77f53290>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...80692975],
[-239.96300261, 321.60437243, -119.98216274],
[-243.00912572, 319.79523591, -120.22074218]])
_state_sequence = array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 2, 1,
1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0,...0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1,
1, 0, 2, 0, 1, 1, 1, 1, 1, 0, 1, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...69639643],
[-180.66903787, 80.96928136, 260.49492216],
[-179.21107272, 83.36164545, 259.72145566]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
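For context on what the failing call computes: `_hmmc.forward_scaling` is a compiled pybind11 routine, and the error above is raised by pybind11's GIL assertion, not by the algorithm itself. As a rough illustration only, here is a pure-NumPy sketch of the standard scaled HMM forward pass with the same return shape (`log_prob, fwdlattice, scaling_factors`); the function body is written from the textbook recursion, not taken from hmmlearn's C++ source:

```python
import numpy as np

def forward_scaling(startprob, transmat, frameprob):
    """Scaled forward pass for an HMM.

    Mirrors the signature of the compiled _hmmc.forward_scaling:
    returns (log_prob, fwdlattice, scaling_factors). Each row of
    fwdlattice is renormalized to sum to 1; the log-likelihood is
    recovered as minus the sum of the log scaling factors.
    """
    n_samples, n_components = frameprob.shape
    fwdlattice = np.zeros((n_samples, n_components))
    scaling = np.zeros(n_samples)

    # Initialization: prior times emission probability of the first frame.
    fwdlattice[0] = startprob * frameprob[0]
    scaling[0] = 1.0 / fwdlattice[0].sum()
    fwdlattice[0] *= scaling[0]

    # Induction: propagate through the transition matrix, then rescale.
    for t in range(1, n_samples):
        fwdlattice[t] = (fwdlattice[t - 1] @ transmat) * frameprob[t]
        scaling[t] = 1.0 / fwdlattice[t].sum()
        fwdlattice[t] *= scaling[t]

    log_prob = -np.log(scaling).sum()
    return log_prob, fwdlattice, scaling
```

The rescaling at every step is what keeps the recursion from underflowing on long sequences, which is the whole point of the "scaling" implementation the tests parametrize over.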
_________________ TestGaussianHMMWithFullCovars.test_fit[log] __________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff77f53410>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='full', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...80692975],
[-239.96300261, 321.60437243, -119.98216274],
[-243.00912572, 319.79523591, -120.22074218]])
_state_sequence = array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 2, 1,
1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0,...0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1,
1, 0, 2, 0, 1, 1, 1, 1, 1, 0, 1, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...69639643],
[-180.66903787, 80.96928136, 260.49492216],
[-179.21107272, 83.36164545, 259.72145566]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
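The "log" variant fails at the same point, in `_hmmc.forward_log`. Again purely for illustration, a NumPy sketch of the log-space forward recursion with the same return shape (`log_prob, fwdlattice`); the hand-rolled log-sum-exp is the standard stable form and is not copied from hmmlearn:

```python
import numpy as np

def forward_log(startprob, transmat, log_frameprob):
    """Log-space forward pass for an HMM.

    Mirrors the signature of the compiled _hmmc.forward_log:
    returns (log_prob, fwdlattice), with all quantities kept in
    log space and combined via a stable log-sum-exp.
    """
    n_samples, n_components = log_frameprob.shape
    log_startprob = np.log(startprob)
    log_transmat = np.log(transmat)
    fwdlattice = np.zeros((n_samples, n_components))

    fwdlattice[0] = log_startprob + log_frameprob[0]
    for t in range(1, n_samples):
        # work[i, j] = alpha[t-1, i] + log a_ij; reduce over i stably.
        work = fwdlattice[t - 1][:, None] + log_transmat
        m = work.max(axis=0)
        fwdlattice[t] = (m + np.log(np.exp(work - m).sum(axis=0))
                         + log_frameprob[t])

    m = fwdlattice[-1].max()
    log_prob = m + np.log(np.exp(fwdlattice[-1] - m).sum())
    return log_prob, fwdlattice
```

Working entirely in log space avoids underflow without per-step scaling factors, at the cost of a log-sum-exp per time step.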
____________ TestGaussianHMMWithFullCovars.test_criterion[scaling] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff77f535f0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=2,
n_iter=500, random_state=RandomState(MT19937) at 0xFFFF77946440)
X = array([[ -89.29523181, 41.84500715, 129.19454811],
[-121.84420946, 159.33718199, -59.47865859],
[-1...24221311],
[-119.33776175, 161.48568424, -60.76453046],
[ -89.46135014, 39.43571927, 130.17918399]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestGaussianHMMWithFullCovars.test_criterion[log] _______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff77f53770>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF77946F40)
X = array([[ -89.29523181, 41.84500715, 129.19454811],
[-121.84420946, 159.33718199, -59.47865859],
[-1...24221311],
[-119.33776175, 161.48568424, -60.76453046],
[ -89.46135014, 39.43571927, 130.17918399]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithFullCovars.test_fit_ignored_init_warns[scaling] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff77f53980>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff77934320>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[ 1.08237635e+00, -1.78571782e+00, -2.72336983e-01],
[-2.90368389e-01, 1.65614717e+00, -1.23456870e+00]... [-8.50841186e-02, -3.43870735e-01, -6.18822776e-01],
[ 3.90241258e-01, -1.85025630e+00, -9.02633482e-01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
________ TestGaussianHMMWithFullCovars.test_fit_ignored_init_warns[log] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff77f4d520>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff778d4da0>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[ 1.08237635e+00, -1.78571782e+00, -2.72336983e-01],
[-2.90368389e-01, 1.65614717e+00, -1.23456870e+00]... [-8.50841186e-02, -3.43870735e-01, -6.18822776e-01],
[ 3.90241258e-01, -1.85025630e+00, -9.02633482e-01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_ TestGaussianHMMWithFullCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff77f510a0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.00240676, 0.54881636]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__ TestGaussianHMMWithFullCovars.test_fit_sequences_of_different_length[log] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff77f53d70>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.00240676, 0.54881636]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____ TestGaussianHMMWithFullCovars.test_fit_with_length_one_signal[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff77eed370>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.002406...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithFullCovars.test_fit_with_length_one_signal[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff7853a5d0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.002406...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestGaussianHMMWithFullCovars.test_fit_zero_variance[scaling] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff77f69f10>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:922 Fitting a model with 50 free scalar parameters with only 36 data points will result in a degenerate solution.
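The "50 free scalar parameters ... 36 data points" figure in this warning checks out for a 3-component full-covariance GaussianHMM on 4-dimensional data: (3-1) start probabilities + 3*(3-1) transition entries + 3*4 means + 3*(4*5/2) covariance entries = 2 + 6 + 12 + 30 = 50, against 9*4 = 36 scalar observations. A small sketch of that counting, written here from the standard formulas rather than lifted from hmmlearn's source:

```python
def gaussian_hmm_free_params(n_components, n_features,
                             covariance_type="full"):
    """Count free scalar parameters of a Gaussian HMM.

    Standard bookkeeping: each probability row loses one degree of
    freedom to the sum-to-one constraint, and a symmetric covariance
    matrix has n*(n+1)/2 free entries.
    """
    startprob = n_components - 1
    transmat = n_components * (n_components - 1)
    means = n_components * n_features
    if covariance_type == "full":
        covars = n_components * n_features * (n_features + 1) // 2
    elif covariance_type == "diag":
        covars = n_components * n_features
    elif covariance_type == "spherical":
        covars = n_components
    elif covariance_type == "tied":
        covars = n_features * (n_features + 1) // 2
    else:
        raise ValueError(f"unknown covariance_type: {covariance_type}")
    return startprob + transmat + means + covars
```

So the warning is correct that the 9x4 array in `test_fit_zero_variance` badly underdetermines the full-covariance model; it fires before the pybind11 GIL error aborts the fit.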
__________ TestGaussianHMMWithFullCovars.test_fit_zero_variance[log] ___________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff77f68590>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:922 Fitting a model with 50 free scalar parameters with only 36 data points will result in a degenerate solution.
_________ TestGaussianHMMWithFullCovars.test_fit_with_priors[scaling] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff77f68b30>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', init_params='',
n_components=3, n_iter=1)
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...95169853],
[-239.53275248, 319.24695192, -120.48672946],
[-242.40666906, 318.95372592, -121.39814967]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
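[Editor's note: `_hmmc.forward_scaling` is hmmlearn's compiled (pybind11) forward-algorithm kernel. As an illustration only — this is a pure-NumPy sketch of the standard scaled forward pass, not hmmlearn's actual C++ implementation — it computes roughly:]

```python
import numpy as np

def forward_scaling(startprob, transmat, frameprob):
    # Scaled forward pass: the forward probabilities are renormalized to
    # sum to 1 at every step, and the log-likelihood is recovered from the
    # accumulated scaling factors.
    n_samples, n_components = frameprob.shape
    fwdlattice = np.zeros((n_samples, n_components))
    scaling = np.zeros(n_samples)

    fwdlattice[0] = startprob * frameprob[0]
    scaling[0] = 1.0 / fwdlattice[0].sum()
    fwdlattice[0] *= scaling[0]
    for t in range(1, n_samples):
        fwdlattice[t] = (fwdlattice[t - 1] @ transmat) * frameprob[t]
        scaling[t] = 1.0 / fwdlattice[t].sum()
        fwdlattice[t] *= scaling[t]

    log_prob = -np.log(scaling).sum()
    return log_prob, fwdlattice, scaling
```

Note that the RuntimeError above is raised in pybind11's handling of the numpy.ndarray arguments ("triggered on a numpy.ndarray object"), before any of this arithmetic runs.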
___________ TestGaussianHMMWithFullCovars.test_fit_with_priors[log] ____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff77f68ce0>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
[test body, traceback, and captured stderr identical to the [scaling] case
above, except that the failure occurs in the log-space implementation:]
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
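[Editor's note: the log-space counterpart `_hmmc.forward_log` fails the same way. As above, a pure-NumPy sketch of the computation it performs — illustrative only, not hmmlearn's actual implementation:]

```python
import numpy as np

def forward_log(startprob, transmat, log_frameprob):
    # Log-space forward pass: works entirely with log-probabilities and a
    # hand-rolled logsumexp, trading the scaling trick for numerical range.
    n_samples, n_components = log_frameprob.shape
    log_startprob = np.log(startprob)
    log_transmat = np.log(transmat)
    fwdlattice = np.empty((n_samples, n_components))

    fwdlattice[0] = log_startprob + log_frameprob[0]
    for t in range(1, n_samples):
        for j in range(n_components):
            work = fwdlattice[t - 1] + log_transmat[:, j]
            m = work.max()  # logsumexp over the previous states
            fwdlattice[t, j] = (m + np.log(np.exp(work - m).sum())
                                + log_frameprob[t, j])

    m = fwdlattice[-1].max()
    log_prob = m + np.log(np.exp(fwdlattice[-1] - m).sum())
    return log_prob, fwdlattice
```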
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-diag] __
covariance_type = 'diag', implementation = 'scaling', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations by merely permuting sequence order in the input
indicates a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful, permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5, -1.5, -1.5],
[-1.5, -1.5, -1.5, -1.5]],
[[-1.5, -1.5, -1.5, -...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-spherical] _
covariance_type = 'spherical', implementation = 'scaling', init_params = 'mcw'
verbose = False
[test body, traceback, and captured stderr identical to the [scaling-diag] case above]
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-tied] __
covariance_type = 'tied', implementation = 'scaling', init_params = 'mcw'
verbose = False
[test body, traceback, and captured stderr identical to the [scaling-diag] case above]
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-full] __
covariance_type = 'full', implementation = 'scaling', init_params = 'mcw'
verbose = False
[test body, traceback, and captured stderr identical to the [scaling-diag] case above]
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
___ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-diag] ____
covariance_type = 'diag', implementation = 'log', init_params = 'mcw'
verbose = False
[test body, traceback, and captured stderr identical to the [scaling-diag] case
above, except that the failure occurs in _fit_log via _hmmc.forward_log]
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-spherical] _
covariance_type = 'spherical', implementation = 'log', init_params = 'mcw'
verbose = False
[test body, traceback, and captured stderr identical to the [scaling-diag] case
above, except that the failure occurs in _fit_log via _hmmc.forward_log]
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
___ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-tied] ____
covariance_type = 'tied', implementation = 'log', init_params = 'mcw'
verbose = False
[test body, traceback, and captured stderr identical to the [scaling-diag] case
above, except that the failure occurs in _fit_log via _hmmc.forward_log]
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
___ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-full] ____
covariance_type = 'full', implementation = 'log', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering in which the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations caused by merely permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0.,...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
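[Editor's annotation:] the opt-out named in the message is a compile-time switch, not a runtime flag. A sketch of where it would go (illustrative only; the assertion usually points at a real bug, e.g. a `py::gil_scoped_release` region creating or copying Python objects, so disabling it is a last resort):

```cpp
// Must appear before any pybind11 header is included, and must be
// defined identically in every translation unit linked into the
// extension module -- otherwise ODR violations result.
#define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF
#include <pybind11/pybind11.h>
```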
_____ TestGMMHMMWithSphericalCovars.test_score_samples_and_decode[scaling] _____
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff77fb6f30>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF77945B40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGMMHMMWithSphericalCovars.test_score_samples_and_decode[log] _______
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff77fb70b0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF7794FA40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestGMMHMMWithSphericalCovars.test_fit[scaling] ________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff77fb7260>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF77975B40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMMWithSphericalCovars.test_fit[log] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff77fb7410>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF77974F40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGMMHMMWithSphericalCovars.test_fit_sparse_data[scaling] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff77fb75c0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF7794FA40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3999.84865619, 3069.23303621],
[ 4002.43732491, 3067.60756989],
[34695.58244203, 37696.508278... [ 4000.44796333, 3068.5295739 ],
[ 3998.48777025, 3067.967097 ],
[ 6450.19377286, 7478.35563 ]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
___________ TestGMMHMMWithSphericalCovars.test_fit_sparse_data[log] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff77fb7740>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF77944D40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3999.84865619, 3069.23303621],
[ 4002.43732491, 3067.60756989],
[34695.58244203, 37696.508278... [ 4000.44796333, 3068.5295739 ],
[ 3998.48777025, 3067.967097 ],
[ 6450.19377286, 7478.35563 ]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
____________ TestGMMHMMWithSphericalCovars.test_criterion[scaling] _____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff77fb7c20>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.]]),
covars_weight=a...2,
random_state=RandomState(MT19937) at 0xFFFF77976D40,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 40.65624455, 29.16802829],
[242.52436414, 193.40204501],
[347.23482679, 377.48914412],
...,
[ 38.12816493, 29.67601719],
[241.05090945, 192.22538034],
[240.19846475, 193.26742897]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestGMMHMMWithSphericalCovars.test_criterion[log] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff77fb7da0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.]]),
covars_weight=a...2,
random_state=RandomState(MT19937) at 0xFFFF77977840,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 40.65624455, 29.16802829],
[242.52436414, 193.40204501],
[347.23482679, 377.48914412],
...,
[ 38.12816493, 29.67601719],
[241.05090945, 192.22538034],
[240.19846475, 193.26742897]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGMMHMMWithDiagCovars.test_score_samples_and_decode[scaling] ________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff77b1c290>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF77977240,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGMMHMMWithDiagCovars.test_score_samples_and_decode[log] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff77b1c410>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF77976940,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestGMMHMMWithDiagCovars.test_fit[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff77b1c680>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF77975D40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestGMMHMMWithDiagCovars.test_fit[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff77b1c800>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF77947040,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestGMMHMMWithDiagCovars.test_fit_sparse_data[scaling] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff77b1ca70>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF7794E440,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6452.82751127, 7476.17731294],
[29346.43680528, 29439.90571357],
[ 4000.36493519, 3066.929754... [ 4000.85526465, 3063.10163931],
[ 6450.19377286, 7477.49264111],
[ 6450.51059449, 7475.54810263]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
______________ TestGMMHMMWithDiagCovars.test_fit_sparse_data[log] ______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff77b1cc20>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF77944E40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6452.82751127, 7476.17731294],
[29346.43680528, 29439.90571357],
[ 4000.36493519, 3066.929754... [ 4000.85526465, 3063.10163931],
[ 6450.19377286, 7477.49264111],
[ 6450.51059449, 7475.54810263]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGMMHMMWithDiagCovars.test_criterion[scaling] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff77b1d370>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]]]),
...2,
random_state=RandomState(MT19937) at 0xFFFF77976F40,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.0423023 , 32.05291123],
[241.79570647, 194.37999672],
[294.33562038, 295.51745038],
...,
[ 61.05939775, 73.76171462],
[241.14804197, 192.17496613],
[241.72161057, 190.89927936]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMMWithDiagCovars.test_criterion[log] _________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff77b1d4f0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]]]),
...2,
random_state=RandomState(MT19937) at 0xFFFF77790340,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.0423023 , 32.05291123],
[241.79570647, 194.37999672],
[294.33562038, 295.51745038],
...,
[ 61.05939775, 73.76171462],
[241.14804197, 192.17496613],
[241.72161057, 190.89927936]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGMMHMMWithTiedCovars.test_score_samples_and_decode[scaling] ________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff77b1e6f0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF77790D40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGMMHMMWithTiedCovars.test_score_samples_and_decode[log] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff77fb7800>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF77977240,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestGMMHMMWithTiedCovars.test_fit[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff77b1da30>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF77976840,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestGMMHMMWithTiedCovars.test_fit[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff77b1cda0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF77975D40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestGMMHMMWithTiedCovars.test_fit_sparse_data[scaling] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff77b1c1a0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF77947040,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6448.11185593, 7479.8557005 ],
[29348.25281874, 29440.18051704],
[ 4001.16541306, 3066.972904... [ 4002.89789938, 3063.33866516],
[ 6449.52011589, 7476.97438223],
[ 6449.3008853 , 7477.29740985]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
______________ TestGMMHMMWithTiedCovars.test_fit_sparse_data[log] ______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff77b1e9c0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF77790740,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6448.11185593, 7479.8557005 ],
[29348.25281874, 29440.18051704],
[ 4001.16541306, 3066.972904... [ 4002.89789938, 3063.33866516],
[ 6449.52011589, 7476.97438223],
[ 6449.3008853 , 7477.29740985]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
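[Editorial aside, not part of the build log: the assertion named in every traceback above is CPython's C-API function PyGILState_Check(), which pybind11 ≥ 2.11 calls before incrementing a refcount. Failures like this on Python 3.13 have typically been resolved by rebuilding the extension against a pybind11 release with 3.13 support (≥ 2.12), rather than by the PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF escape hatch. A minimal stdlib-only sketch, for illustration only, showing what the check reports — from ordinary Python code the GIL is necessarily held, so it returns 1:]

```python
import ctypes

# PyGILState_Check() returns 1 if the calling thread currently holds
# the GIL, 0 otherwise. Calling it via ctypes from regular Python code
# (where the GIL must be held) should print 1 on a standard GIL build.
check = ctypes.pythonapi.PyGILState_Check
check.restype = ctypes.c_int
print(check())
```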
_______________ TestGMMHMMWithTiedCovars.test_criterion[scaling] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff77b1efc0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....2,
random_state=RandomState(MT19937) at 0xFFFF77974540,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 39.3472143 , 31.96516627],
[239.4457031 , 193.9191015 ],
[292.652245 , 294.71067724],
...,
[ 66.25970996, 70.96753017],
[242.16534204, 192.32929203],
[243.78173717, 191.48640575]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMMWithTiedCovars.test_criterion[log] _________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff77b1f140>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....2,
random_state=RandomState(MT19937) at 0xFFFF77977440,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 39.3472143 , 31.96516627],
[239.4457031 , 193.9191015 ],
[292.652245 , 294.71067724],
...,
[ 66.25970996, 70.96753017],
[242.16534204, 192.32929203],
[243.78173717, 191.48640575]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGMMHMMWithFullCovars.test_score_samples_and_decode[scaling] ________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff77aae780>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF77791640,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGMMHMMWithFullCovars.test_score_samples_and_decode[log] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff77aac050>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF77791F40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestGMMHMMWithFullCovars.test_fit[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff77aac260>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF77A92D40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestGMMHMMWithFullCovars.test_fit[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff77aac410>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF7794C840,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestGMMHMMWithFullCovars.test_fit_sparse_data[scaling] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff77aac6b0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF7794D740,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6449.07573383, 7477.94058249],
[24146.71996085, 19276.07031622],
[17479.42387006, 13924.043632... [ 6452.12217213, 7471.39193731],
[29350.51157576, 29438.19823596],
[ 4000.78575575, 3067.80965369]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
______________ TestGMMHMMWithFullCovars.test_fit_sparse_data[log] ______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff77aac860>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF77976940,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6449.07573383, 7477.94058249],
[24146.71996085, 19276.07031622],
[17479.42387006, 13924.043632... [ 6452.12217213, 7471.39193731],
[29350.51157576, 29438.19823596],
[ 4000.78575575, 3067.80965369]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGMMHMMWithFullCovars.test_criterion[scaling] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff77b1ee10>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...2,
random_state=RandomState(MT19937) at 0xFFFF7794E040,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.38394272, 31.47161592],
[173.50537726, 139.44913251],
[345.97120338, 377.84504473],
...,
[ 66.25970996, 70.96753017],
[175.32456372, 138.9877489 ],
[243.56689381, 192.53130439]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMMWithFullCovars.test_criterion[log] _________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff77b1e600>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...2,
random_state=RandomState(MT19937) at 0xFFFF7794FA40,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.38394272, 31.47161592],
[173.50537726, 139.44913251],
[345.97120338, 377.84504473],
...,
[ 66.25970996, 70.96753017],
[175.32456372, 138.9877489 ],
[243.56689381, 192.53130439]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestGMMHMM_KmeansInit.test_kmeans[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_KmeansInit object at 0xffff77fb7ec0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_kmeans(self, implementation):
# Generate two isolated cluster.
# The second cluster has no. of points less than n_mix.
np.random.seed(0)
data1 = np.random.uniform(low=0, high=1, size=(100, 2))
data2 = np.random.uniform(low=5, high=6, size=(5, 2))
data = np.r_[data1, data2]
model = GMMHMM(n_components=2, n_mix=10, n_iter=5,
implementation=implementation)
> model.fit(data) # _init() should not fail here
hmmlearn/tests/test_gmm_hmm_new.py:232:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-... weights_prior=array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]]))
X = array([[5.48813504e-01, 7.15189366e-01],
[6.02763376e-01, 5.44883183e-01],
[4.23654799e-01, 6.45894113e-... [5.02467873e+00, 5.06724963e+00],
[5.67939277e+00, 5.45369684e+00],
[5.53657921e+00, 5.89667129e+00]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
____________________ TestGMMHMM_KmeansInit.test_kmeans[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_KmeansInit object at 0xffff77aac9b0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_kmeans(self, implementation):
# Generate two isolated cluster.
# The second cluster has no. of points less than n_mix.
np.random.seed(0)
data1 = np.random.uniform(low=0, high=1, size=(100, 2))
data2 = np.random.uniform(low=5, high=6, size=(5, 2))
data = np.r_[data1, data2]
model = GMMHMM(n_components=2, n_mix=10, n_iter=5,
implementation=implementation)
> model.fit(data) # _init() should not fail here
hmmlearn/tests/test_gmm_hmm_new.py:232:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-... weights_prior=array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]]))
X = array([[5.48813504e-01, 7.15189366e-01],
[6.02763376e-01, 5.44883183e-01],
[4.23654799e-01, 6.45894113e-... [5.02467873e+00, 5.06724963e+00],
[5.67939277e+00, 5.45369684e+00],
[5.53657921e+00, 5.89667129e+00]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
_________________ TestGMMHMM_MultiSequence.test_chunked[diag] __________________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffff77aaccb0>
covtype = 'diag', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-19.97769034, -16.75056455],
[-19.88212945, -16.97913043],
[-19.93125386, -16.94276853],
...,
[-11.01150478, -1.11584774],
[-11.10973308, -1.07914205],
[-10.8998337 , -0.84707255]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
_______________ TestGMMHMM_MultiSequence.test_chunked[spherical] _______________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffff77aacf80>
covtype = 'spherical', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-19.80390185, -17.07835084],
[-19.60579587, -16.83260239],
[-19.92498908, -16.91030194],
...,
[-11.17392582, -1.26966434],
[-11.14220209, -1.03192961],
[-11.14814372, -0.99298261]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
_________________ TestGMMHMM_MultiSequence.test_chunked[tied] __________________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffff77aad0a0>
covtype = 'tied', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0.... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-20.22761614, -15.84567719],
[-21.23619726, -16.89659692],
[-20.71982474, -16.73140459],
...,
[-10.87180439, -1.55878592],
[ -9.74956046, -1.38825752],
[-12.13924424, -0.25692342]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
_________________ TestGMMHMM_MultiSequence.test_chunked[full] __________________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffff77aad1c0>
covtype = 'full', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-20.51255292, -17.67431134],
[-15.84831228, -16.50504373],
[-21.40806672, -17.58054428],
...,
[-12.05683236, -0.58197627],
[-11.42658201, -1.42127957],
[-12.15481108, -0.76401566]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
________________ TestMultinomialHMM.test_score_samples[scaling] ________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff77aad640>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
X = np.array([
[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3],
])
n_samples = X.shape[0]
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_multinomial_hmm.py:53:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', n_components=2, n_trials=5)
X = array([[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
__________________ TestMultinomialHMM.test_score_samples[log] __________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff77aad7f0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
X = np.array([
[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3],
])
n_samples = X.shape[0]
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_multinomial_hmm.py:53:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(n_components=2, n_trials=5)
X = array([[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
_____________________ TestMultinomialHMM.test_fit[scaling] _____________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff77aaf110>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
# Also mess up trial counts.
h.n_trials = None
X[::2] *= 2
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:92:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', init_params='', n_components=2,
n_iter=1,
n_tri..., 10, 5, 10, 5, 10, 5, 10, 5, 10, 5, 10, 5]),
random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[4, 6, 0, 0],
[3, 1, 0, 1],
[2, 2, 6, 0],
[0, 2, 3, 0],
[0, 0, 4, 6],
[3, 0, 0, 2],
[2, 0, 4, 4],
[1, 0, 2, 2],
[2, 4, 0, 4],
[3, 2, 0, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
_______________________ TestMultinomialHMM.test_fit[log] _______________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff77aaf290>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
# Also mess up trial counts.
h.n_trials = None
X[::2] *= 2
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:92:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1,
n_trials=array([10, 5, 10, 5, 10, 5, 10, 5..., 10, 5, 10, 5, 10, 5, 10, 5, 10, 5, 10, 5]),
random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[0, 0, 6, 4],
[4, 0, 1, 0],
[8, 2, 0, 0],
[2, 2, 0, 1],
[8, 2, 0, 0],
[1, 2, 1, 1],
[2, 4, 0, 4],
[2, 2, 0, 1],
[6, 2, 2, 0],
[0, 1, 2, 2]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
______________ TestMultinomialHMM.test_fit_emissionprob[scaling] _______________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff77aaf4a0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_multinomial_hmm.py:96:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_multinomial_hmm.py:92: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', init_params='', n_components=2,
n_iter=1,
n_tri..., 5, 10, 5, 10, 5, 10, 5, 10, 5]),
params='e', random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[0, 6, 4, 0],
[0, 1, 2, 2],
[0, 0, 6, 4],
[0, 2, 1, 2],
[6, 0, 2, 2],
[1, 3, 0, 1],
[8, 2, 0, 0],
[3, 2, 0, 0],
[6, 4, 0, 0],
[5, 0, 0, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
________________ TestMultinomialHMM.test_fit_emissionprob[log] _________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff77aaf620>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_multinomial_hmm.py:96:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_multinomial_hmm.py:92: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1,
n_trials=array([10, 5, 10, 5, 10, 5, 10, 5..., 5, 10, 5, 10, 5, 10, 5, 10, 5]),
params='e', random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[6, 4, 0, 0],
[4, 1, 0, 0],
[4, 2, 2, 2],
[1, 2, 1, 1],
[2, 0, 4, 4],
[0, 0, 5, 0],
[6, 2, 0, 2],
[3, 2, 0, 0],
[0, 0, 6, 4],
[0, 0, 1, 4]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
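The captured stderr above names a build-time escape hatch. For a Debian build, that define could in principle be injected through the standard dpkg-buildflags hook, sketched below. This is only a way to silence the assertion, not a fix for the underlying Python 3.13 incompatibility (that needs a pybind11/hmmlearn combination that supports 3.13), and as the message warns, the define must reach every translation unit of the extension consistently to avoid ODR violations.

```shell
# Hypothetical escape hatch quoted from the pybind11 message above,
# e.g. exported from debian/rules so dpkg-buildflags appends it to CXXFLAGS.
# It disables pybind11's GIL assertion; it does not fix the 3.13 issue.
export DEB_CXXFLAGS_MAINT_APPEND=-DPYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF
```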
________________ TestMultinomialHMM.test_fit_with_init[scaling] ________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff77aaf7d0>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.MultinomialHMM(
n_components=self.n_components, n_trials=self.n_trials,
params=params, init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1, n_trials=5,
random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[0, 0, 3, 2],
[1, 2, 1, 1],
[3, 1, 1, 0],
[4, 1, 0, 0],
[1, 0, 2, 2],
[0, 0, 3, 2],
[1, 1, 3, 0],
[0, 1, 1, 3],
[3, 0, 1, 1],
[0, 0, 3, 2]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
__________________ TestMultinomialHMM.test_fit_with_init[log] __________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff77aaf9b0>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.MultinomialHMM(
n_components=self.n_components, n_trials=self.n_trials,
params=params, init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1, n_trials=5,
random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[4, 0, 0, 1],
[0, 1, 2, 2],
[0, 1, 0, 4],
[3, 1, 1, 0],
[0, 0, 1, 4],
[0, 1, 2, 2],
[1, 0, 2, 2],
[0, 0, 4, 1],
[0, 1, 3, 1],
[3, 2, 0, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
________ TestMultinomialHMM.test_compare_with_categorical_hmm[scaling] _________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff77aafe90>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_compare_with_categorical_hmm(self, implementation):
n_components = 2 # ['Rainy', 'Sunny']
n_features = 3 # ['walk', 'shop', 'clean']
n_trials = 1
startprob = np.array([0.6, 0.4])
transmat = np.array([[0.7, 0.3], [0.4, 0.6]])
emissionprob = np.array([[0.1, 0.4, 0.5],
[0.6, 0.3, 0.1]])
h1 = hmm.MultinomialHMM(
n_components=n_components, n_trials=n_trials,
implementation=implementation)
h2 = hmm.CategoricalHMM(
n_components=n_components, implementation=implementation)
h1.startprob_ = startprob
h2.startprob_ = startprob
h1.transmat_ = transmat
h2.transmat_ = transmat
h1.emissionprob_ = emissionprob
h2.emissionprob_ = emissionprob
X1 = np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
X2 = [[0], [1], [2]] # different input format for CategoricalHMM
> log_prob1, state_sequence1 = h1.decode(X1, algorithm="viterbi")
hmmlearn/tests/test_multinomial_hmm.py:161:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', n_components=2, n_trials=1)
X = array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
__________ TestMultinomialHMM.test_compare_with_categorical_hmm[log] ___________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff77ab00b0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_compare_with_categorical_hmm(self, implementation):
n_components = 2 # ['Rainy', 'Sunny']
n_features = 3 # ['walk', 'shop', 'clean']
n_trials = 1
startprob = np.array([0.6, 0.4])
transmat = np.array([[0.7, 0.3], [0.4, 0.6]])
emissionprob = np.array([[0.1, 0.4, 0.5],
[0.6, 0.3, 0.1]])
h1 = hmm.MultinomialHMM(
n_components=n_components, n_trials=n_trials,
implementation=implementation)
h2 = hmm.CategoricalHMM(
n_components=n_components, implementation=implementation)
h1.startprob_ = startprob
h2.startprob_ = startprob
h1.transmat_ = transmat
h2.transmat_ = transmat
h1.emissionprob_ = emissionprob
h2.emissionprob_ = emissionprob
X1 = np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
X2 = [[0], [1], [2]] # different input format for CategoricalHMM
> log_prob1, state_sequence1 = h1.decode(X1, algorithm="viterbi")
hmmlearn/tests/test_multinomial_hmm.py:161:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(n_components=2, n_trials=1)
X = array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
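For reference, the `_hmmc.viterbi` call that crashes above computes the most likely state path in log space. A pure-NumPy stand-in (a hypothetical sketch, not hmmlearn's compiled implementation) looks like this, using the Rainy/Sunny parameters from the failing test:

```python
import numpy as np

def viterbi_log(startprob, transmat, log_frameprob):
    """Most likely state path and its log probability (log-space Viterbi)."""
    n_samples, n_components = log_frameprob.shape
    log_trans = np.log(transmat)
    # lattice[t, j]: best log prob over paths that end in state j at time t
    lattice = np.empty((n_samples, n_components))
    backpointer = np.empty((n_samples, n_components), dtype=int)
    lattice[0] = np.log(startprob) + log_frameprob[0]
    for t in range(1, n_samples):
        scores = lattice[t - 1][:, None] + log_trans  # (from_state, to_state)
        backpointer[t] = scores.argmax(axis=0)
        lattice[t] = scores.max(axis=0) + log_frameprob[t]
    # backtrack from the best final state
    path = np.empty(n_samples, dtype=int)
    path[-1] = lattice[-1].argmax()
    for t in range(n_samples - 2, -1, -1):
        path[t] = backpointer[t + 1, path[t + 1]]
    return lattice[-1].max(), path
```

With the test's `startprob`, `transmat`, and `emissionprob` and the observation sequence `[0], [1], [2]` (walk, shop, clean), this yields the classic decode Sunny, Rainy, Rainy.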
__________________ TestPoissonHMM.test_score_samples[scaling] __________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff77ab01a0>
implementation = 'scaling', n_samples = 1000
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation, n_samples=1000):
h = self.new_hmm(implementation)
X, state_sequence = h.sample(n_samples)
assert X.ndim == 2
assert len(X) == len(state_sequence) == n_samples
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_poisson_hmm.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', n_components=2, random_state=0)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
...,
[1, 5, 0],
[1, 6, 0],
[2, 3, 0]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
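The `_hmmc.forward_scaling` routine that fails above is the scaled forward pass. As a point of reference only, a hypothetical pure-NumPy sketch of the same quantity (not hmmlearn's actual code) is:

```python
import numpy as np

def forward_scaling(startprob, transmat, frameprob):
    """Scaled forward pass: log likelihood, scaled lattice, scaling factors."""
    n_samples, n_components = frameprob.shape
    fwd = np.empty((n_samples, n_components))
    scaling = np.empty(n_samples)
    fwd[0] = startprob * frameprob[0]
    scaling[0] = 1.0 / fwd[0].sum()
    fwd[0] *= scaling[0]
    for t in range(1, n_samples):
        fwd[t] = (fwd[t - 1] @ transmat) * frameprob[t]
        scaling[t] = 1.0 / fwd[t].sum()
        fwd[t] *= scaling[t]
    # every row of fwd now sums to one; the total log likelihood is
    # recovered from the accumulated scaling factors
    log_prob = -np.log(scaling).sum()
    return log_prob, fwd, scaling
```

Scaling keeps the lattice in floating-point range on long sequences, which is why the test exercises it alongside the log-space implementation.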
____________________ TestPoissonHMM.test_score_samples[log] ____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff77ab2750>
implementation = 'log', n_samples = 1000
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation, n_samples=1000):
h = self.new_hmm(implementation)
X, state_sequence = h.sample(n_samples)
assert X.ndim == 2
assert len(X) == len(state_sequence) == n_samples
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_poisson_hmm.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(n_components=2, random_state=0)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
...,
[1, 5, 0],
[1, 6, 0],
[2, 3, 0]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
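Similarly, the `_hmmc.forward_log` call failing here is the log-space forward pass. A hypothetical pure-NumPy stand-in (again, not hmmlearn's compiled routine) can be sketched as:

```python
import numpy as np

def forward_log(startprob, transmat, log_frameprob):
    """Log-space forward pass: total log likelihood and the forward lattice."""
    n_samples, n_components = log_frameprob.shape
    log_trans = np.log(transmat)
    fwd = np.empty((n_samples, n_components))
    fwd[0] = np.log(startprob) + log_frameprob[0]
    for t in range(1, n_samples):
        # logsumexp over the previous states for each destination state
        fwd[t] = (np.logaddexp.reduce(fwd[t - 1][:, None] + log_trans, axis=0)
                  + log_frameprob[t])
    return np.logaddexp.reduce(fwd[-1]), fwd
```

Both implementations compute the same likelihood; the test suite parametrizes over `["scaling", "log"]` precisely to check that agreement.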
_______________________ TestPoissonHMM.test_fit[scaling] _______________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff77ab05f0>
implementation = 'scaling', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stl', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
np.random.seed(0)
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.lambdas_ = np.random.gamma(
shape=2, size=(self.n_components, self.n_features))
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:62:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7794E240)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
_________________________ TestPoissonHMM.test_fit[log] _________________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff77ab0770>
implementation = 'log', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stl', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
np.random.seed(0)
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.lambdas_ = np.random.gamma(
shape=2, size=(self.n_components, self.n_features))
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:62:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77A92D40)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
___________________ TestPoissonHMM.test_fit_lambdas[scaling] ___________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff77ab0920>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_lambdas(self, implementation):
> self.test_fit(implementation, 'l')
hmmlearn/tests/test_poisson_hmm.py:66:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_poisson_hmm.py:62: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', init_params='', n_components=2, n_iter=1,
params='l', random_state=RandomState(MT19937) at 0xFFFF77976840)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
_____________________ TestPoissonHMM.test_fit_lambdas[log] _____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff77ab0aa0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_lambdas(self, implementation):
> self.test_fit(implementation, 'l')
hmmlearn/tests/test_poisson_hmm.py:66:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_poisson_hmm.py:62: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1, params='l',
random_state=RandomState(MT19937) at 0xFFFF77A0DF40)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
__________________ TestPoissonHMM.test_fit_with_init[scaling] __________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff77ab0c50>
implementation = 'scaling', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='stl', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.PoissonHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
____________________ TestPoissonHMM.test_fit_with_init[log] ____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff77ab0e30>
implementation = 'log', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='stl', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.PoissonHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7E16F740)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
____________________ TestPoissonHMM.test_criterion[scaling] ____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff77ab0fe0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(412)
m1 = self.new_hmm(implementation)
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.PoissonHMM(n, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_poisson_hmm.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF7794F140)
X = array([[1, 5, 0],
[3, 5, 0],
[4, 1, 4],
...,
[1, 4, 0],
[3, 6, 0],
[5, 0, 4]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
[pybind11 GIL advisory, identical to the first failure above]
______________________ TestPoissonHMM.test_criterion[log] ______________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff77ab1160>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(412)
m1 = self.new_hmm(implementation)
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.PoissonHMM(n, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_poisson_hmm.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF7794DA40)
X = array([[1, 5, 0],
[3, 5, 0],
[4, 1, 4],
...,
[1, 4, 0],
[3, 6, 0],
[5, 0, 4]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
[pybind11 GIL advisory, identical to the first failure above]
_____________ TestVariationalCategorical.test_init_priors[scaling] _____________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff77ab0830>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="",
implementation=implementation)
model.pi_prior_ = np.full((4,), .25)
model.pi_posterior_ = np.full((4,), 7/4)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((4, 4), 7/4)
model.emissionprob_prior_ = np.full((4, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=4, n_features=3, n_iter=1,
random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
[pybind11 GIL advisory, identical to the first failure above]
_______________ TestVariationalCategorical.test_init_priors[log] _______________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff77ab2600>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="",
implementation=implementation)
model.pi_prior_ = np.full((4,), .25)
model.pi_posterior_ = np.full((4,), 7/4)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((4, 4), 7/4)
model.emissionprob_prior_ = np.full((4, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=4, n_features=3,
n_iter=1, random_state=1984)
X = array([[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]... [1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
[pybind11 GIL advisory, identical to the first failure above]
_____________ TestVariationalCategorical.test_n_features[scaling] ______________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff77ab2840>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Learn n_Features
model = vhmm.VariationalCategoricalHMM(
4, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:82:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=4, n_features=3, n_iter=1)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
[pybind11 GIL advisory, identical to the first failure above]
_______________ TestVariationalCategorical.test_n_features[log] ________________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff77ab2a50>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Learn n_Features
model = vhmm.VariationalCategoricalHMM(
4, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:82:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=4, n_features=3,
n_iter=1)
X = array([[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]... [1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
[pybind11 GIL advisory, identical to the first failure above]
________ TestVariationalCategorical.test_init_incorrect_priors[scaling] ________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff77ab2cc0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_incorrect_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Test startprob shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((3,), .25)
model.startprob_posterior_ = np.full((4,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((4,), .25)
model.startprob_posterior_ = np.full((3,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test transmat shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((3, 3), .25)
model.transmat_posterior_ = np.full((4, 4), .25)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((3, 3), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test emission shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="st",
implementation=implementation)
model.emissionprob_prior_ = np.full((3, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test too many n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 10
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Too small n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 1
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test that setting the desired prior value works
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="ste",
implementation=implementation,
startprob_prior=1, transmat_prior=2, emissionprob_prior=3)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:191:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(emissionprob_prior=3, implementation='scaling',
init_params='', n_...,
n_iter=1, random_state=1984, startprob_prior=1,
transmat_prior=2)
X = array([[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]... [1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
[pybind11 GIL advisory, identical to the first failure above]
__________ TestVariationalCategorical.test_init_incorrect_priors[log] __________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff77ab2e40>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_incorrect_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Test startprob shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((3,), .25)
model.startprob_posterior_ = np.full((4,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((4,), .25)
model.startprob_posterior_ = np.full((3,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test transmat shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((3, 3), .25)
model.transmat_posterior_ = np.full((4, 4), .25)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((3, 3), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test emission shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="st",
implementation=implementation)
model.emissionprob_prior_ = np.full((3, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test too many n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 10
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Too small n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 1
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test that setting the desired prior value works
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="ste",
implementation=implementation,
startprob_prior=1, transmat_prior=2, emissionprob_prior=3)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:191:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(emissionprob_prior=3, init_params='', n_components=4,
n_features=3, n_iter=1, random_state=1984,
startprob_prior=1, transmat_prior=2)
X = array([[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]... [2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
[pybind11 GIL advisory, identical to the first failure above]
______________ TestVariationalCategorical.test_fit_beal[scaling] _______________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff77ab30e0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_beal(self, implementation):
rs = check_random_state(1984)
m1, m2, m3 = self.get_beal_models()
sequences = []
lengths = []
for i in range(7):
for m in [m1, m2, m3]:
sequences.append(m.sample(39, random_state=rs)[0])
lengths.append(len(sequences[-1]))
sequences = np.concatenate(sequences)
model = vhmm.VariationalCategoricalHMM(12, n_iter=500,
implementation=implementation,
tol=1e-6,
random_state=rs,
verbose=False)
> assert_log_likelihood_increasing(model, sequences, lengths, 100)
hmmlearn/tests/test_variational_categorical.py:213:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=12, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77977040)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
[pybind11 GIL advisory, identical to the first failure above]
________________ TestVariationalCategorical.test_fit_beal[log] _________________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff77ab3260>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_beal(self, implementation):
rs = check_random_state(1984)
m1, m2, m3 = self.get_beal_models()
sequences = []
lengths = []
for i in range(7):
for m in [m1, m2, m3]:
sequences.append(m.sample(39, random_state=rs)[0])
lengths.append(len(sequences[-1]))
sequences = np.concatenate(sequences)
model = vhmm.VariationalCategoricalHMM(12, n_iter=500,
implementation=implementation,
tol=1e-6,
random_state=rs,
verbose=False)
> assert_log_likelihood_increasing(model, sequences, lengths, 100)
hmmlearn/tests/test_variational_categorical.py:213:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=12, n_features=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77947840)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
[pybind11 GIL advisory, identical to the first failure above]
_______ TestVariationalCategorical.test_fit_and_compare_with_em[scaling] _______
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff77ab3440>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_and_compare_with_em(self, implementation):
# Explicitly setting Random State to test that certain
# model states will become "unused"
sequences, lengths = self.get_from_one_beal(7, 100, 1984)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
init_params="e",
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_categorical.py:225:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='e',
n_components=4, n_features=3, n_iter=500,
random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
[pybind11 GIL advisory, identical to the first failure above]
_________ TestVariationalCategorical.test_fit_and_compare_with_em[log] _________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff77ab35c0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_and_compare_with_em(self, implementation):
# Explicitly setting Random State to test that certain
# model states will become "unused"
sequences, lengths = self.get_from_one_beal(7, 100, 1984)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
init_params="e",
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_categorical.py:225:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='e', n_components=4, n_features=3,
n_iter=500, random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
(pybind11 GIL warning identical to the first failure above)
_______ TestVariationalCategorical.test_fit_length_1_sequences[scaling] ________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff77ab37a0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_length_1_sequences(self, implementation):
sequences1, lengths1 = self.get_from_one_beal(7, 100, 1984)
# Include some length 1 sequences
sequences2, lengths2 = self.get_from_one_beal(1, 1, 1984)
sequences = np.concatenate([sequences1, sequences2])
lengths = np.concatenate([lengths1, lengths2])
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:255:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=4, n_features=3, n_iter=1,
random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
(pybind11 GIL warning identical to the first failure above)
_________ TestVariationalCategorical.test_fit_length_1_sequences[log] __________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff77ab3980>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_length_1_sequences(self, implementation):
sequences1, lengths1 = self.get_from_one_beal(7, 100, 1984)
# Include some length 1 sequences
sequences2, lengths2 = self.get_from_one_beal(1, 1, 1984)
sequences = np.concatenate([sequences1, sequences2])
lengths = np.concatenate([lengths1, lengths2])
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:255:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=4, n_features=3,
n_iter=1, random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
(pybind11 GIL warning identical to the first failure above)
______________________ TestFull.test_random_fit[scaling] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff77ab1610>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='full', implementation='scaling', init_params='',
n_components=3)
rs = RandomState(MT19937) at 0xFFFF77976240, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 3.10129119],
[ -8.58810682, 5.49343563, 8.40750902],
[ -6.98040052, -16.12864527, 2.64082744]])
_state_sequence = array([1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0,
0, 0, 0, 0, 2, 0, 0, 0, 1, 2, 0, 1, 0,...0, 2, 1,
1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2, 2, 0, 0, 0, 2, 1, 0, 1,
2, 1, 0, 2, 2, 2, 0, 1, 0, 1])
model = VariationalGaussianHMM(implementation='scaling', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77976240,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(implementation='scaling', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77976240,
tol=1e-09)
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 12.30542549],
[ 3.45864836, 9.93266313, 13.33197942],
[ 2.81248345, 8.96100579, 10.47967146]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
(pybind11 GIL warning identical to the first failure above)
________________________ TestFull.test_random_fit[log] _________________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff77ab3b30>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='full', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF77A0C040, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 3.10129119],
[ -8.58810682, 5.49343563, 8.40750902],
[ -6.98040052, -16.12864527, 2.64082744]])
_state_sequence = array([1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0,
0, 0, 0, 0, 2, 0, 0, 0, 1, 2, 0, 1, 0,...0, 2, 1,
1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2, 2, 0, 0, 0, 2, 1, 0, 1,
2, 1, 0, 2, 2, 2, 0, 1, 0, 1])
model = VariationalGaussianHMM(init_params='', n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77A0C040,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(init_params='', n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77A0C040,
tol=1e-09)
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 12.30542549],
[ 3.45864836, 9.93266313, 13.33197942],
[ 2.81248345, 8.96100579, 10.47967146]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
(pybind11 GIL warning identical to the first failure above)
______________ TestFull.test_fit_mcgrory_titterington1d[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff77aa4950>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(implementation='scaling', init_params='mc',
n_components=5, n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFF77977040,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
(pybind11 GIL warning identical to the first failure above)
________________ TestFull.test_fit_mcgrory_titterington1d[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff77aa6de0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(init_params='mc', n_components=5, n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFF77974840,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
(pybind11 GIL warning identical to the first failure above)
_________________ TestFull.test_common_initialization[scaling] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff77aa4560>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(implementation='scaling', init_params='', n_components=4,
n_iter=1, tol=1e-09)
X = array([[ 0.21535104],
[ 2.82985744],
[-0.97185779],
[ 2.89081593],
[-0.66290202],
[...644159],
[ 0.32126301],
[ 2.73373158],
[-0.48778415],
[ 3.2352048 ],
[-2.21829728]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
(pybind11 GIL warning identical to the first failure above)
___________________ TestFull.test_common_initialization[log] ___________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff77aa4980>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[-0.33240202],
[ 1.16575351],
[ 0.76708158],
[-0.16665794],
[-2.0417122 ],
[...612387],
[-1.47774877],
[ 1.99699008],
[ 3.9346355 ],
[-1.84294702],
[-2.14332482]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
(pybind11 GIL warning identical to the first failure above)
____________________ TestFull.test_initialization[scaling] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff77ab2ba0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2.]], [[2.]], [[2.]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[2.]], [[2.]], [[2.]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[[2.]], [[2.]], [[2.]], [[2.]]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:233:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
dof_prior=[2.0, 2.0, 2.0, 2.0], impleme...FFF77976440,
scale_prior=[[[2.0]], [[2.0]], [[2.0]], [[2.0]]],
tol=1e-09)
X = array([[-0.97620016],
[ 0.79725115],
[-0.27940365],
[ 3.32645134],
[-2.69876488],
[...774038],
[ 3.83803194],
[-1.46435466],
[ 2.95456941],
[-0.13443947],
[-0.96474541]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
(pybind11 GIL warning identical to the first failure above)
______________________ TestFull.test_initialization[log] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff77ab17c0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2.]], [[2.]], [[2.]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[2.]], [[2.]], [[2.]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[[2.]], [[2.]], [[2.]], [[2.]]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:233:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
dof_prior=[2.0, 2.0, 2.0, 2.0], init_pa...FFF77974240,
scale_prior=[[[2.0]], [[2.0]], [[2.0]], [[2.0]]],
tol=1e-09)
X = array([[ 1.90962598],
[ 1.38857322],
[ 0.88432176],
[ 1.50437126],
[-1.37679708],
[...987493],
[ 1.1246179 ],
[-2.31770774],
[ 2.39814844],
[ 1.40856394],
[ 2.12694691]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestTied.test_random_fit[scaling] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff77aa4380>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='tied', implementation='scaling', init_params='',
n_components=3)
rs = RandomState(MT19937) at 0xFFFF77977940, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.76809081, -17.57929881, 2.65993861],
[ 4.47790401, 10.95422031, 12.25009349],
[ -9.2822..., 2.91189727],
[ 1.47179701, 9.35583105, 10.30599288],
[ -4.00663682, -15.17296134, 2.9706196 ]])
_state_sequence = array([1, 2, 0, 0, 0, 1, 1, 0, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 2, 0, 0, 0, 0, 0, 2,...0, 0, 0,
0, 0, 1, 0, 0, 0, 2, 1, 1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2,
2, 0, 0, 0, 2, 1, 0, 1, 2, 1])
model = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77977940,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77977940,
tol=1e-09)
X = array([[ -6.76809081, -17.57929881, 2.65993861],
[ 4.47790401, 10.95422031, 12.25009349],
[ -9.2822..., 8.29790309],
[ -7.45761904, 8.0443883 , 8.74775768],
[ -7.54100296, 7.27668055, 8.35765657]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________________ TestTied.test_random_fit[log] _________________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff77aa4290>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='tied', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF77974E40, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.54597428, -14.48319166, 3.52814708],
[ 3.02773721, 8.66210382, 10.95226001],
[ -9.6765..., 1.73843505],
[ 3.90207131, 11.87153515, 12.46452122],
[ -6.04735701, -17.31754837, 1.46456652]])
_state_sequence = array([1, 2, 0, 0, 0, 1, 1, 0, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 2, 0, 0, 0, 0, 0, 2,...0, 0, 0,
0, 0, 1, 0, 0, 0, 2, 1, 1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2,
2, 0, 0, 0, 2, 1, 0, 1, 2, 1])
model = VariationalGaussianHMM(covariance_type='tied', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77974E40,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77974E40,
tol=1e-09)
X = array([[ -6.54597428, -14.48319166, 3.52814708],
[ 3.02773721, 8.66210382, 10.95226001],
[ -9.6765..., 10.28703698],
[ -9.27093832, 7.48888941, 7.75556056],
[ -9.50212106, 8.22396714, 7.70516698]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestTied.test_fit_mcgrory_titterington1d[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff77aa5520>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='mc', n_co...ter=1000,
random_state=RandomState(MT19937) at 0xFFFF77976040,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestTied.test_fit_mcgrory_titterington1d[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff77aa5e80>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', init_params='mc', n_components=5,
n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFF77A0E740,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestTied.test_common_initialization[scaling] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff77aa60c0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[ 3.02406044],
[ 0.15141778],
[ 0.44490074],
[ 0.92052631],
[-0.18359039],
[...156249],
[ 0.61494698],
[-2.27023399],
[ 2.64757888],
[-2.00572944],
[ 0.08367312]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________________ TestTied.test_common_initialization[log] ___________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff77aa6240>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', init_params='', n_components=4,
n_iter=1, tol=1e-09)
X = array([[-1.09489413],
[-0.12957722],
[-1.73146656],
[ 3.55253037],
[ 2.62945991],
[...229695],
[ 0.93327602],
[ 3.14435486],
[-2.68712136],
[-0.81984256],
[ 3.63942885]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestTied.test_initialization[scaling] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff77aa49e0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[[2.]], [[2.]], [[2.]]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=2,
scale_prior=[[2]],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:318:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='tied',
dof_prior=2, im... random_state=RandomState(MT19937) at 0xFFFF7794F140,
scale_prior=[[2]], tol=1e-09)
X = array([[ 2.7343842 ],
[ 2.01508175],
[ 2.29638889],
[ 1.12585508],
[ 1.67279509],
[...808295],
[-0.79265056],
[-0.27745453],
[ 0.69004695],
[-0.23995418],
[-1.0133645 ]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestTied.test_initialization[log] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff77aa4260>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[[2.]], [[2.]], [[2.]]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=2,
scale_prior=[[2]],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:318:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='tied',
dof_prior=2, in... random_state=RandomState(MT19937) at 0xFFFF77975240,
scale_prior=[[2]], tol=1e-09)
X = array([[-1.51990156],
[-0.77421241],
[ 3.56219686],
[-1.64888838],
[ 2.6276434 ],
[...179403],
[-0.686967 ],
[ 1.27430623],
[-0.31739316],
[ 1.74639412],
[-2.01831639]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestSpherical.test_random_fit[scaling] ____________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff77aa67b0>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF77977540, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 8.83737591],
[ 2.84752765, 10.20119432, 12.21355309],
[ -8.94111272, 8.15425357, 8.74112105]])
_state_sequence = array([0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 2, 1, 1,
1, 1, 1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0,...2, 1, 1,
2, 2, 1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2,
0, 2, 2, 2, 0, 0, 0, 0, 2, 0])
model = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77977540,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77977540,
tol=1e-09)
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 2.96146404],
[ -5.67847522, -16.01739311, 2.72149483],
[ 3.0501041 , 10.1190271 , 11.98035801]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestSpherical.test_random_fit[log] ______________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff77aa6930>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='spherical', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF77A0CA40, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 8.83737591],
[ 2.84752765, 10.20119432, 12.21355309],
[ -8.94111272, 8.15425357, 8.74112105]])
_state_sequence = array([0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 2, 1, 1,
1, 1, 1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0,...2, 1, 1,
2, 2, 1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2,
0, 2, 2, 2, 0, 0, 0, 0, 2, 0])
model = VariationalGaussianHMM(covariance_type='spherical', init_params='',
n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77A0CA40,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', init_params='',
n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77A0CA40,
tol=1e-09)
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 2.96146404],
[ -5.67847522, -16.01739311, 2.72149483],
[ 3.0501041 , 10.1190271 , 11.98035801]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
____________ TestSpherical.test_fit_mcgrory_titterington1d[scaling] ____________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff77aa6ae0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='mc',...ter=1000,
random_state=RandomState(MT19937) at 0xFFFF7794E240,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
______________ TestSpherical.test_fit_mcgrory_titterington1d[log] ______________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff77aa6c60>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', init_params='mc',
n_components=5, n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFF7794D740,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
______________ TestSpherical.test_common_initialization[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff77ab3a40>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[ 1.58581198],
[-1.43013571],
[ 3.50073686],
[-2.09080284],
[ 1.48390039],
[...711457],
[ 1.8787106 ],
[ 2.31673751],
[ 0.62417883],
[-2.57450891],
[ 0.51093669]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
________________ TestSpherical.test_common_initialization[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff77aa6a50>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', init_params='',
n_components=4, n_iter=1, tol=1e-09)
X = array([[ 2.55895004],
[ 1.9386079 ],
[-1.14441545],
[ 0.79939524],
[-0.84122716],
[...848896],
[-0.7355048 ],
[-1.27791075],
[-1.53171601],
[ 1.93602005],
[-1.20472876]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
__________________ TestSpherical.test_initialization[scaling] __________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff77aa64b0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [2, 2, 2]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [2, 2, 2] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[2, 2, 2, 2],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:403:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
covariance_type='spherical',
... random_state=RandomState(MT19937) at 0xFFFF77946940,
scale_prior=[2, 2, 2, 2], tol=1e-09)
X = array([[-0.69995355],
[ 1.11732084],
[ 2.34671222],
[ 0.38667263],
[ 0.49315166],
[...586139],
[ 0.81443462],
[-1.66759168],
[ 3.14268492],
[ 3.76227287],
[ 0.80644186]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
____________________ TestSpherical.test_initialization[log] ____________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff77aa6630>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [2, 2, 2]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [2, 2, 2] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[2, 2, 2, 2],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:403:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
covariance_type='spherical',
... random_state=RandomState(MT19937) at 0xFFFF77975940,
scale_prior=[2, 2, 2, 2], tol=1e-09)
X = array([[ 3.45654067],
[-2.75120263],
[ 2.70685609],
[ 2.19256817],
[-0.71552539],
[...986977],
[-2.05296787],
[ 0.98484479],
[ 2.68913339],
[-0.30012857],
[ 3.23805001]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
____________________ TestDiagonal.test_random_fit[scaling] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff77aa6ea0>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(implementation='scaling', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF77977640, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 9.05969418],
[ 3.16991695, 9.72247605, 12.12314999],
[ 3.09806199, 9.95716109, 11.96433113]])
_state_sequence = array([0, 0, 1, 1, 0, 2, 2, 2, 2, 1, 1, 2, 1, 0, 0, 0, 1, 2, 1, 1, 1, 1,
1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0, 0, 0,...1, 2, 2,
1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2, 0, 2,
2, 2, 0, 0, 0, 0, 2, 0, 2, 2])
model = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77977640,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF77977640,
tol=1e-09)
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 11.98612945],
[ 2.90646378, 9.9957161 , 11.98128432],
[ -8.65470261, 8.11543755, 8.85803583]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
______________________ TestDiagonal.test_random_fit[log] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff77aa7020>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}, h = GaussianHMM(init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF7794E940, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 9.05969418],
[ 3.16991695, 9.72247605, 12.12314999],
[ 3.09806199, 9.95716109, 11.96433113]])
_state_sequence = array([0, 0, 1, 1, 0, 2, 2, 2, 2, 1, 1, 2, 1, 0, 0, 0, 1, 2, 1, 1, 1, 1,
1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0, 0, 0,...1, 2, 2,
1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2, 0, 2,
2, 2, 0, 0, 0, 0, 2, 0, 2, 2])
model = VariationalGaussianHMM(covariance_type='diag', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7794E940,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7794E940,
tol=1e-09)
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 11.98612945],
[ 2.90646378, 9.9957161 , 11.98128432],
[ -8.65470261, 8.11543755, 8.85803583]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
____________ TestDiagonal.test_fit_mcgrory_titterington1d[scaling] _____________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff77aa71d0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='mc', n_co...ter=1000,
random_state=RandomState(MT19937) at 0xFFFF77A0C440,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
______________ TestDiagonal.test_fit_mcgrory_titterington1d[log] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff77aa7350>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', init_params='mc', n_components=5,
n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFF77976F40,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
_______________ TestDiagonal.test_common_initialization[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff77aa7500>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[ 2.94840979],
[-0.4236967 ],
[-1.86164101],
[-2.70760383],
[ 0.52817596],
[...614648],
[ 1.17327289],
[-0.48308756],
[-1.23521059],
[ 2.96221347],
[-2.4055287 ]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
_________________ TestDiagonal.test_common_initialization[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff77aa7680>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', init_params='', n_components=4,
n_iter=1, tol=1e-09)
X = array([[-1.00900958],
[ 1.83548612],
[-1.18687723],
[ 1.39357219],
[ 2.31529054],
[...120186],
[-0.59813352],
[ 1.09476375],
[ 2.7001891 ],
[ 0.25515909],
[-1.58409402]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
__________________ TestDiagonal.test_initialization[scaling] ___________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff77aa5c40>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[2], [2], [2]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type=self.covariance_type,
implementation=implementation)
model.dof_prior_ = [1, 1, 1, 1]
model.dof_posterior_ = [1, 1, 1, 1]
model.scale_prior_ = [[2], [2], [2], [2]]
model.scale_posterior_ = [[2, 2, 2]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[2], [2], [2], [2]]
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:486:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='diag',
dof_prior=[2.0,...andom_state=RandomState(MT19937) at 0xFFFF77975940,
scale_prior=[[2], [2], [2], [2]], tol=1e-09)
X = array([[ 0.37725899],
[ 3.11738285],
[-0.09163979],
[ 1.69939899],
[ 1.17211122],
[...975532],
[-1.29219785],
[-2.21400016],
[-0.12401679],
[ 3.5650227 ],
[-0.33847644]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
____________________ TestDiagonal.test_initialization[log] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff77aa4ad0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[2], [2], [2]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type=self.covariance_type,
implementation=implementation)
model.dof_prior_ = [1, 1, 1, 1]
model.dof_posterior_ = [1, 1, 1, 1]
model.scale_prior_ = [[2], [2], [2], [2]]
model.scale_posterior_ = [[2, 2, 2]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[2], [2], [2], [2]]
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:486:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='diag',
dof_prior=[2.0,...andom_state=RandomState(MT19937) at 0xFFFF77975F40,
scale_prior=[[2], [2], [2], [2]], tol=1e-09)
X = array([[-1.50974603],
[ 0.66501942],
[ 1.03376567],
[-0.33821964],
[-0.03369866],
[...945696],
[ 1.03948035],
[ 3.29548267],
[-1.67415189],
[-0.95330419],
[ 2.79920426]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
=============================== warnings summary ===============================
.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/tests/test_variational_categorical.py: 9 warnings
.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/tests/test_variational_gaussian.py: 15 warnings
/<<PKGBUILDDIR>>/.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/base.py:1192: RuntimeWarning: underflow encountered in exp
self.startprob_subnorm_ = np.exp(startprob_log_subnorm)
.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/tests/test_variational_categorical.py: 7 warnings
.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/tests/test_variational_gaussian.py: 13 warnings
/<<PKGBUILDDIR>>/.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/base.py:1197: RuntimeWarning: underflow encountered in exp
self.transmat_subnorm_ = np.exp(transmat_log_subnorm)
.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_beal[scaling]
/<<PKGBUILDDIR>>/.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/base.py:1130: RuntimeWarning: underflow encountered in exp
return np.exp(self._compute_subnorm_log_likelihood(X))
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_forward_scaling_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_forward_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_backward_scaling_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_viterbi_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_score_samples
FAILED hmmlearn/tests/test_base.py::TestBaseConsistentWithGMM::test_score_samples
FAILED hmmlearn/tests/test_base.py::TestBaseConsistentWithGMM::test_decode - ...
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_viterbi[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_viterbi[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_map[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_map[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_predict[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_predict[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_n_features[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_n_features[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_score_samples[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_score_samples[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_emissionprob[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_emissionprob[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_with_init[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_with_init[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_startprob_and_transmat[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_startprob_and_transmat[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_underflow_from_scaling[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_left_right[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_left_right[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-diag]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-spherical]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-tied]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-full]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-diag]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-spherical]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-tied]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-full]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_KmeansInit::test_kmeans[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_KmeansInit::test_kmeans[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[diag]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[spherical]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[tied]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[full]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_score_samples[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_score_samples[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_emissionprob[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_emissionprob[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_with_init[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_with_init[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_compare_with_categorical_hmm[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_compare_with_categorical_hmm[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_score_samples[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_score_samples[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit[log] - Ru...
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_lambdas[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_lambdas[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_with_init[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_with_init[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_criterion[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_criterion[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_priors[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_priors[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_n_features[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_n_features[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_incorrect_priors[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_incorrect_priors[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_beal[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_beal[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_and_compare_with_em[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_and_compare_with_em[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_length_1_sequences[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_length_1_sequences[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_initialization[log]
=========== 202 failed, 92 passed, 26 xfailed, 45 warnings in 27.94s ===========
E: pybuild pybuild:389: test: plugin pyproject failed with: exit code=1: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.12_hmmlearn/build; python3.12 -m pytest --pyargs hmmlearn
dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p "3.13 3.12" returned exit code 13
make: *** [debian/rules:9: binary-arch] Error 25
dpkg-buildpackage: error: debian/rules binary-arch subprocess returned exit status 2
--------------------------------------------------------------------------------
Build finished at 2024-09-13T18:19:13Z
If required, the full build log is available here (for the next 30 days):
https://debusine.debian.net/artifact/712521/
This bug has been filed at "normal" severity, as we haven't yet started the
transition to add 3.13 as a supported version. It will be raised to RC
severity as soon as that happens, hopefully well before trixie.
Thanks,
Stefano