Bug#833425: Aw: Re: Bug#833425: mpi-defaults: switch to openmpi on hppa architecture
Helge Deller
deller at gmx.de
Thu Aug 4 10:02:51 UTC 2016
Hi Mattia,
> On Thu, Aug 04, 2016 at 09:34:35AM +0200, Helge Deller wrote:
> > mpi-defaults depends on libmpich-dev for the hppa architecture (like m68k and sh4).
> > All other architectures use libopenmpi-dev.
> > Is there a reason for that?
>
> The reason is that, at that time, openmpi was not available on those
> architectures.
Ok. I assumed that.
> > The openmpi package builds successfully on hppa, so I'd suggest switching
> > to openmpi for hppa (and maybe m68k and sh4?) too.
>
> Notice that switching the default means rebuilding all the rdeps in the
> correct order (ben is able to provide the correct order). I was able to
> do it correctly for s390x (#813691) thanks to the release team tracking
> the transition, but we don't have such tools for ports, so this is
> really up to you. Otherwise what you get is FTBFS of packages further
> down the chain, and runtime errors due to the different ABI of the
> library. I noticed some programs are clever enough to say "libfoo has
> been linked against mpich but I'm now building against openmpi, I can't
> do that, please rebuild libfoo first", but most don't and just throw an
> error (IIRC a linking error).
I'd be fine with rebuilding all required packages, and I'd appreciate
info from you, or from ben, on which order is required.
Furthermore, since the gcc-6 transition is happening right now, it's
a good time to rebuild packages anyway.
From past experience I know that as long as we use a non-standard
library (meaning: not the one most other arches use), we face issues
that sometimes occur only because of that non-standard lib. And such
issues don't get fixed in the general packages, because the standard
packages build just fine.
So, the burden of rebuilding packages pays off later.
> Besides, do we know whether openmpi works correctly on those
> architectures? Since recently we have mpi-testsuite, but as you can see
> the situation is not nice:
> https://buildd.debian.org/status/package.php?p=mpi-testsuite
Oops; at least hppa is no more broken than the others :-)
> PS: did you CC me on your email?
Yes. I won't do it again.
Currently I've stopped all hppa buildds and plan to upgrade them to gcc-6
before starting them again. I've also started a test build of boost1.6.1
to check whether the mpi-defaults change helps. I expect a result within
the next few hours; I'll let you know the outcome.
Helge