Bug#918362: FTBFS for armhf on arm64, fails MPI-based tests

Steve McIntyre steve at einval.com
Sat Jan 5 14:35:10 GMT 2019


Package: src:dune-grid-glue
Version: 2.6~20180130-1
Severity: important

Hi!

I've been doing a full rebuild of the Debian archive, building all
source packages targeting armel and armhf on arm64 hardware. We are
planning to move all of our 32-bit armel/armhf builds onto arm64
machines in the future, so this rebuild is intended to identify
packages that might have problems with this configuration.

I've tried to build dune-grid-glue for armhf on top of arm64, and it
fails several of its tests. The failures look like a problem with
MPI_Init() at various points, but I don't know enough to do even
basic debugging here - sorry!

(A similar bug showed up when building dune-common, and it has been
suggested that both might share a common root in #918157 against
openmpi.)
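
For what it's worth, a trivial standalone MPI program (my own sketch,
not taken from the dune-grid-glue test suite) exercises the same
MPI_Init() path that the failing tests abort in, and might help
separate an openmpi problem from a dune-grid-glue problem:

/* Minimal reproducer sketch: just initialise and finalise MPI,
 * which is where the failing tests appear to abort. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    /* With MPI_ERRORS_ARE_FATAL (the default) a BTL failure will
     * abort inside MPI_Init(), as in the log above. */
    if (MPI_Init(&argc, &argv) != MPI_SUCCESS) {
        fprintf(stderr, "MPI_Init failed\n");
        return 1;
    }

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank %d of %d initialised OK\n", rank, size);

    MPI_Finalize();
    return 0;
}

Building this with mpicc and running it under "mpirun -np 2" in the
same armhf-on-arm64 build environment should show whether MPI_Init()
aborts independently of the package.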

...
Create new tag: 20190104-1810 - Experimental
Test project /<<PKGBUILDDIR>>/build
      Start  1: callmergertwicetest
 1/12 Test  #1: callmergertwicetest ................   Passed    0.01 sec
      Start  2: ringcommtest
 2/12 Test  #2: ringcommtest .......................   Passed    0.17 sec
      Start  3: ringcommtest-mpi-2
 3/12 Test  #3: ringcommtest-mpi-2 .................***Failed    0.16 sec
--------------------------------------------------------------------------
At least one pair of MPI processes are unable to reach each other for
MPI communications.  This means that no Open MPI device has indicated
that it can be used to communicate between these processes.  This is
an error; Open MPI requires that all MPI processes be able to reach
each other.  This error can sometimes be the result of forgetting to
specify the "self" BTL.

  Process 1 ([[51217,1],1]) is on host: mustang3
  Process 2 ([[51217,1],0]) is on host: mustang3
  BTLs attempted: tcp self vader

Your MPI job is now going to abort; sorry.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
MPI_INIT has failed because at least one MPI process is unreachable
from another.  This *usually* means that an underlying communication
plugin -- such as a BTL or an MTL -- has either not loaded or not
allowed itself to be used.  Your MPI job will now abort.

You may wish to try to narrow down the problem;

 * Check the output of ompi_info to see which BTL/MTL plugins are
   available.
 * Run your application with MPI_THREAD_SINGLE.
 * Set the MCA parameter btl_base_verbose to 100 (or mtl_base_verbose,
   if using MTL-based communications) to see exactly which
   communication plugins were considered and/or discarded.
--------------------------------------------------------------------------
[mustang3:01580] *** An error occurred in MPI_Init
[mustang3:01580] *** reported by process [3356557313,1]
[mustang3:01580] *** on a NULL communicator
[mustang3:01580] *** Unknown error
[mustang3:01580] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[mustang3:01580] ***    and potentially your MPI job)
...

The full build log is online at

  https://www.einval.com/debian/arm/rebuild-logs/armhf/FAIL/dune-grid-glue_2.6~20180130-1_armhf.log

-- System Information:
Debian Release: 9.6
  APT prefers stable-updates
  APT policy: (500, 'stable-updates'), (500, 'stable-debug'), (500, 'stable')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 4.9.0-8-amd64 (SMP w/4 CPU cores)
Locale: LANG=en_GB.UTF-8, LC_CTYPE=en_GB.UTF-8 (charmap=UTF-8), LANGUAGE=en_GB.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)


