Bug#1076468: Bug#1076026: elpa: FTBFS with mpich as default MPI on i386: FAIL validate_complex_2stage_banded_default.sh (exit status: 15)

Adrian Bunk bunk at debian.org
Thu Jul 18 18:33:03 BST 2024


On Tue, Jul 09, 2024 at 07:41:11PM +0200, Sebastian Ramacher wrote:
> Source: elpa
> Version: 2022.11.001-3
> Severity: serious
> Tags: ftbfs
> Justification: fails to build from source (but built successfully in the past)
> X-Debbugs-Cc: sramacher at debian.org
>...
> GetSockInterfaceAddr(369)..........: gethostbyname failed, x86-conova-02 (errno 4)
> Fatal error in internal_Init: Other MPI error, error stack:
> internal_Init(48301)...............: MPI_Init(argc=(nil), argv=(nil)) failed
>...

Regarding this error, which seems to happen with elpa and mpi4py (and arpack?)
only on some buildds:

Is it possible that mpich has a problem when no IPv4 is available?

I have no idea whether I am looking in the right place, but I wonder
whether something like, for example,
  src/mpi/romio/mpl/src/sock/mpl_sockaddr.c:static int af_type = AF_INET;
is the problem?
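
To illustrate the suspected failure mode (this is my own standalone
sketch, not mpich code): on a host whose name resolves only to an IPv6
address, a lookup pinned to AF_INET fails, while AF_UNSPEC lets the
resolver fall back to IPv6. Something like:

  /* Sketch of the suspected failure mode; build with: cc -o afcheck afcheck.c */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <netdb.h>

  static int try_resolve(const char *host, int family)
  {
      struct addrinfo hints, *res = NULL;
      memset(&hints, 0, sizeof(hints));
      hints.ai_family = family;     /* AF_INET pins IPv4; AF_UNSPEC allows both */
      hints.ai_socktype = SOCK_STREAM;

      int rc = getaddrinfo(host, NULL, &hints, &res);
      printf("%s: %s\n",
             family == AF_INET ? "AF_INET" : "AF_UNSPEC",
             rc == 0 ? "resolved" : gai_strerror(rc));
      if (res)
          freeaddrinfo(res);
      return rc;
  }

  int main(void)
  {
      char host[256];
      if (gethostname(host, sizeof(host)) != 0) {
          perror("gethostname");
          return 1;
      }
      /* On an IPv6-only buildd the AF_INET lookup would fail
         (matching the gethostbyname failure in the log above),
         while the AF_UNSPEC lookup would still succeed. */
      try_resolve(host, AF_INET);
      try_resolve(host, AF_UNSPEC);
      return 0;
  }

If mpich's MPL socket layer hard-codes AF_INET as above, that would be
consistent with the gethostbyname error in the quoted build log.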

cu
Adrian


