Bug#1069472: mpi4py-fft: FTBFS on armhf: tests fail

Lucas Nussbaum lucas@debian.org
Sat Apr 20 14:13:29 BST 2024


Source: mpi4py-fft
Version: 2.0.5-2
Severity: serious
Justification: FTBFS
Tags: trixie sid ftbfs
User: lucas@debian.org
Usertags: ftbfs-20240420 ftbfs-trixie ftbfs-t64-armhf

Hi,

During a rebuild of all packages in sid, your package failed to build
on armhf.


Relevant part (hopefully):
> make[2]: Entering directory '/<<PKGBUILDDIR>>/build/texinfo'
> makeinfo --no-split -o 'mpi4py-fft.info' 'mpi4py-fft.texi'
> mpi4py-fft.texi:1706: warning: could not find @image file `mpi4py-fft-figures//<<PKGBUILDDIR>>/build/texinfo/.doctrees/images/feaa82f44023d4f401d0d133eb689f35762c6507/mpi4py-fft.txt' nor alternate text
> make[2]: Leaving directory '/<<PKGBUILDDIR>>/build/texinfo'
> sed "s|src=\"\(.*\).png\"|src=\"/usr/share/doc/python3-mpi4py-fft/html/_images/\1.png\"|g" -i build/texinfo/mpi4py-fft.info
> sed "s|src=\"\(.*\).svg\"|src=\"\"|g" -i build/texinfo/mpi4py-fft.info
> sed "s|alt=\"Documentation Status\" src=\"https://readthedocs.org/projects/mpi4py-fft/badge/?version=latest\"|alt=\"Latest Documentation\" src=\"\"|" -i build/html/*.html
> sed "s|src=\"https://circleci.com.*svg\"|src=\"\"|" -i build/html/*.html
> make[1]: Leaving directory '/<<PKGBUILDDIR>>'
>    dh_auto_test -O--buildsystem=pybuild
> I: pybuild base:311: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.12_mpi4py-fft/build; python3.12 -m unittest discover -v 
> --------------------------------------------------------------------------
> Sorry!  You were supposed to get help about:
>     pmix_init:startup:internal-failure
> But I couldn't open the help file:
>     /usr/share/pmix/help-pmix-runtime.txt: No such file or directory.  Sorry!
> --------------------------------------------------------------------------
> [ip-10-84-234-21:102862] PMIX ERROR: NOT-FOUND in file ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at line 237
> [ip-10-84-234-21:102861] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-10-84-234-21:102861] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init_thread
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-10-84-234-21:102861] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
> E: pybuild pybuild:389: test: plugin distutils failed with: exit code=1: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.12_mpi4py-fft/build; python3.12 -m unittest discover -v 
> I: pybuild base:311: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.11_mpi4py-fft/build; python3.11 -m unittest discover -v 
> --------------------------------------------------------------------------
> Sorry!  You were supposed to get help about:
>     pmix_init:startup:internal-failure
> But I couldn't open the help file:
>     /usr/share/pmix/help-pmix-runtime.txt: No such file or directory.  Sorry!
> --------------------------------------------------------------------------
> [ip-10-84-234-21:102865] PMIX ERROR: NOT-FOUND in file ../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at line 237
> [ip-10-84-234-21:102864] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-10-84-234-21:102864] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init_thread
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-10-84-234-21:102864] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
> E: pybuild pybuild:389: test: plugin distutils failed with: exit code=1: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.11_mpi4py-fft/build; python3.11 -m unittest discover -v 
> dh_auto_test: error: pybuild --test -i python{version} -p "3.12 3.11" returned exit code 13
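
The trace suggests the failure is in the MPI runtime rather than in
mpi4py-fft itself: Open MPI's singleton startup (orte_init) aborts
because the PMIX component cannot find its help files
(/usr/share/pmix/help-pmix-runtime.txt) and cannot start a local
daemon, so MPI_Init_thread fails before any test logic runs. Since
importing mpi4py.MPI initializes MPI at import time, the following
one-liner should reproduce the abort in the same chroot without
involving mpi4py-fft at all (a sketch based on that diagnosis, not
verified on armhf):

  # Importing mpi4py.MPI runs MPI_Init_thread at import time, so this
  # exercises the same ORTE/PMIX startup path as the test suite.
  python3 -c "from mpi4py import MPI; print(MPI.Get_library_version())"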


The full build log is available from:
http://qa-logs.debian.net/2024/04/20/mpi4py-fft_2.0.5-2_unstable-armhf.log

All bugs filed during this archive rebuild are listed at:
https://bugs.debian.org/cgi-bin/pkgreport.cgi?tag=ftbfs-20240420;users=lucas@debian.org
or:
https://udd.debian.org/bugs/?release=na&merged=ign&fnewerval=7&flastmodval=7&fusertag=only&fusertagtag=ftbfs-20240420&fusertaguser=lucas@debian.org&allbugs=1&cseverity=1&ctags=1&caffected=1#results

A list of current common problems and possible solutions is available at
http://wiki.debian.org/qa.debian.org/FTBFS . You're welcome to contribute!

If you reassign this bug to another package, please mark it as affecting
this package. See https://www.debian.org/Bugs/server-control#affects

If you cannot reproduce this, please provide a build log and diff it
against mine so that we can identify whether something relevant changed
in the meantime.
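
If the broken singleton startup turns out to be an environment problem
on the buildds rather than a bug in mpi4py-fft, one possible mitigation
in the packaging (an untested, hypothetical sketch, not a confirmed fix)
is to probe whether MPI can initialize before running the test suite and
skip the tests otherwise; whether silently skipping is acceptable here
is of course the maintainer's call:

  # debian/rules fragment (hypothetical sketch); the probe exits
  # non-zero when MPI_Init fails, as in the log above.
  override_dh_auto_test:
  	if python3 -c "from mpi4py import MPI" >/dev/null 2>&1; then \
  		dh_auto_test; \
  	else \
  		echo "MPI singleton init unusable in this chroot; skipping tests"; \
  	fi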


