[Pkg-rust-maintainers] rustc_1.84.0+dfsg1-1~bpo12+1~bup_source.changes REJECTED
Michael Tokarev
mjt at tls.msk.ru
Wed Jan 22 04:34:49 GMT 2025
22.01.2025 01:01, Micha Lenk wrote:
>
> Hi Michael,
Hi! Thank you for handling this, and for your comments and questions.
> The backported version 1.84.0+dfsg1-1 is not in testing (yet).
Yes. I expected it to be a bit more difficult to get into bpo, which
is why I uploaded it a bit earlier. But thinking about it now, that
was a mistake; I should've waited for it to migrate to testing.
> Additionally, I'd really appreciate some hints on how to validate the included
> stage0 tarballs (once 1.84.0+dfsg1-1 entered testing, I would like to know!).
>
> Would you mind to explain a little bit on debian-backports at l.d.o why it is
> worth to allow such a big delta in a backport?
> What alternatives were considered?
Rust is problematic to bootstrap. It can only be bootstrapped with a previous
version of itself (1.83 in this case) being available as a stage0 compiler.
This is the reason why it hasn't been backported before: we didn't backport
it in time with the first upload after bookworm was released. At that time
the previous version would have been available to compile the next version,
the next version could be used to compile the next-to-next version, and so
on - with a wait in backports-NEW each time, because each new version
introduces a different name for the supporting package (like librust-1.84 etc).
There are currently two possible ways to handle this, plus a few varieties:
1. Using the pre-compiled version from upstream as the stage0. These
versions are provided by upstream for this very purpose.
This is basically the standard way to bootstrap rust.
This is exactly what this upload does. I grabbed the upstream stage0
binaries for the 3 architectures (please note: this backport will only
be available for the architectures where stage0 is available, not for
all architectures in debian). There's a d/rules rule especially for
this purpose, which downloads the upstream-published stage0 for the
given list of architectures and packs it as rust_$version-stage0.orig.tar.gz --
see debian/make_orig-stage0_tarball.sh and debian/get-stage0.py.
Now, rustc will use these pre-compiled stage0 binaries to bootstrap
itself. This is how rust is bootstrapped on debian (ports), and/or
when a previous version of the compiler is somehow not available.
Interestingly, the current build procedure does NOT have any way
to validate the stage0 binaries it is downloading. This looks like
a serious omission which should be addressed in the rust source
package, maybe by verifying gpg signatures with the already
available debian/upstream/signing-key.asc (see the rough sketch
after this list of options).
1.a There's a build profile in the debian rust package which enables
downloading the pre-built stage0 from upstream at build time, instead
of using the one bundled as rust_$version-stage0.orig.tar.gz. This, in
my view, is worse than the previous variant, for several reasons.
1.b There's the rustup package. This one installs pre-built binaries
directly from the upstream site for use.
2. Having a version of the compiler available for a different architecture,
and cross-compiling for the target architecture (any rust installation
can generate binaries for all supported architectures).
Basically, it's possible to build (one way or another) rustc for amd64,
for example, and cross-build it for the other interesting architectures.
This is actually what I *tried* to do - to provide binaries of
rust built by me in the first upload to bpo. I would've used just one
of the upstream pre-built binaries for bootstrapping on amd64, and
would have uploaded just the .deb files (for several architectures at
once) to bookworm-backports.
This has a significant drawback, in my view: it basically hides the
fact that the initial bootstrapping is done using an upstream pre-built
binary. I think it is better to be explicit here, with the actual
pre-built binaries being in the debian archive, readily available for
any verification.
Besides hiding the actual pre-built binary in use, this way currently
doesn't work: there's a problem in the upstream cross-environment
bootstrapping which is being worked on. Maybe we should try to fix
that one before trying to upload rust to bookworm-backports.
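Back to the validation point from option 1 above: for illustration,
here's a rough sketch of what fetching the stage0 components for a
list of architectures and checking the upstream signatures could look
like. The file layout on static.rust-lang.org and the component names
used below are assumptions on my side, not necessarily what
debian/get-stage0.py does today:

  #!/bin/sh
  # Sketch only: fetch the previous release (used as stage0) for a few
  # architectures and verify the detached signatures against the key we
  # already ship as debian/upstream/signing-key.asc.
  set -e
  STAGE0_VER=1.83.0
  BASE=https://static.rust-lang.org/dist
  ARCHES="x86_64-unknown-linux-gnu aarch64-unknown-linux-gnu"

  gpg --dearmor < debian/upstream/signing-key.asc > upstream-keyring.gpg

  for triple in $ARCHES; do
      for comp in rustc rust-std cargo; do
          f=$comp-$STAGE0_VER-$triple.tar.xz
          wget -nc "$BASE/$f" "$BASE/$f.asc"
          gpgv --keyring ./upstream-keyring.gpg "$f.asc" "$f"
      done
  done

Something like the gpgv step could be wired into d/rules or into
get-stage0.py itself, so a downloaded stage0 gets rejected when its
signature doesn't check out.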
That's basically it.
Theoretically it is possible (maybe) to build rust for bookworm-backports
using rust from trixie. The problem here is that the resulting binaries
require libc6 from trixie to run. Maybe this can be worked around by
providing static binaries, so glibc from bookworm can be used when
actually compiling rust. This would be seriously complicated though,
since we'd need some serious assistance from you to use .debs from
trixie to build stuff for bookworm-backports.
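To illustrate the libc6 problem: a rustc built against trixie's glibc
will typically reference symbol versions newer than the 2.36 shipped
in bookworm, which is easy to check on any binary, for example:

  # Sketch: list the glibc symbol versions a binary depends on; anything
  # newer than GLIBC_2.36 won't run with bookworm's libc6.
  # (/usr/bin/rustc is just an example path.)
  objdump -T /usr/bin/rustc | grep -Eo 'GLIBC_[0-9.]+' | sort -Vu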
Maybe others from the rust team will add some more comments here,
especially Fabian Grünbichler.
This thing is kinda difficult. Even without the foreign binaries, rust
brings enough maintenance tasks by having packages named after the version,
so each new version (about every 6 months) needs a new round in the NEW
queue. During the bookworm freeze we didn't update rust for a long time,
so we had to perform multiple updates after the release to catch up with
upstream. It was kinda fun to prepare the next version, upload, wait
for acceptance in NEW, wait till it is built on all architectures and
migrates (and fix all possible issues in between), and then upload the
next version to repeat the whole cycle...
Hopefully this clears most questions.
Thanks,
/mjt