[Pkg-rust-maintainers] rustc_1.84.0+dfsg1-1~bpo12+1~bup_source.changes REJECTED
Micha Lenk
micha at debian.org
Sun Feb 23 10:29:14 GMT 2025
Hi all,
I think after more than two weeks of rustc waiting in backports-NEW,
it's time to be honest with myself: I don't have the energy to process
this upload, given the extra effort I would want to put into
validating it as an appropriate change for stable-backports. But
unless you ask for it, I won't reject the upload either, so that any of
the other backports FTP masters can still help out here. Maybe they have
a better idea of how to deal with the upload.
On 22.01.25 17:31, Fabian Grünbichler wrote:
> On Wed, Jan 22, 2025, at 5:34 AM, Michael Tokarev wrote:
>> 22.01.2025 01:01, Micha Lenk wrote:
>>> Additionally, I'd really appreciate some hints on how to validate the included
>>> stage0 tarballs (once 1.84.0+dfsg1-1 entered testing, I would like to know!).
>>>
>>> Would you mind explaining a little on debian-backports at l.d.o why it is
>>> worth allowing such a big delta in a backport?
>>> What alternatives were considered?
>> Rust is problematic to bootstrap. It can only be bootstrapped with a previous
>> version of itself (1.83 in this case) being available as a stage0 compiler. [...]
>>
>> There are currently two possible ways to handle this, plus a few variations:
>>
>> 1. using the pre-compiled version from upstream as the stage0. These
>> versions are provided by upstream for this very purpose.
>> This is basically the standard way to bootstrap rust.
>>
>> This is exactly what this upload does. I grabbed the upstream stage0
>> binaries for the 3 architectures (please note: this backport will only
>> be available for the architectures where a stage0 is available, not for
>> all architectures in Debian). There's a dedicated d/rules rule for
>> this purpose: it downloads the upstream-published stage0 for the given
>> list of architectures and packs it as rust_$version-stage0.orig.tar.gz --
>> see debian/make_orig-stage0_tarball.sh and debian/get-stage0.py
Back to my initial question, though: how can I validate (not just trust)
that this re-packaged rust_$version-stage0.orig.tar.gz was created exactly
this way? Is invoking that d/rules rule sufficient to obtain
the same rust_$version-stage0.orig.tar.gz as is shipped in the upload?
As an exception, I am willing to go the extra mile here, but I need a
bit more explicit guidance, so that I can follow everything you did
to prepare this upload.
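If the repack step is deterministic, one way a reviewer could validate (rather than trust) the tarball would be to re-run the d/rules target locally and compare digests against the uploaded artifact. This is only a sketch of the comparison step, not part of the actual packaging; the file paths in the commented usage are hypothetical placeholders:

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

# Hypothetical usage: compare the locally re-created tarball against
# the one shipped in the upload (substitute the real artifact paths).
# local    = sha256_of("rust_1.84.0-stage0.orig.tar.gz")
# uploaded = sha256_of("upload/rust_1.84.0-stage0.orig.tar.gz")
# assert local == uploaded, "repack is not reproducible"
```

Of course this only helps if the download-and-repack rule itself produces byte-identical output across runs (stable timestamps, file ordering, and compression settings), which is exactly the property being asked about above.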
>> It is interesting: the current build procedure does NOT have any way
>> to validate the stage0 binaries it is downloading. It looks like
>> a serious omission which should be addressed in the rust source
>> package, perhaps by verifying gpg signatures with the already
>> available debian/upstream/signing-key.asc.
> most points above are correct, but the prebuilt stage0 from upstream is verified. the trust anchor/chain looks like this:
>
> 1.) when importing a new upstream version, the upstream source tarball is verified via the upstream signing key (GPG atm, with plans to switch to another mechanism in the future)
> 2.) that upstream source tarball is heavily pruned (it would be 4GB otherwise, including a full copy of gcc and llvm)
> 3.) the upstream source tarball (both in pristine form, as well as after repacking/pruning) contains a file containing the information about the prebuilt stage0 artifacts provided by upstream (`src/stage0`)
> 4.) the debian/rules target and corresponding script to download the stage0 for (re)bootstrapping purposes uses rustc's own build entry point (somewhat confusingly called bootstrap, and existing as both a python script and a rust binary; the former builds and calls the latter)
> 5.) this bootstrap tool from upstream uses the `src/stage0` file to validate the checksums of downloaded stage0 components before moving them into place
>
> as a result the `debian/rules` target can only download the stage0 artifacts matching the ones upstream used (and as a result, is also limited to those architectures where upstream provides such a prebuilt stage0 - atm this excludes armel and mips64el among the release architectures).
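The checksum validation in step 5 can be illustrated with a small sketch. This is not upstream's actual bootstrap code; it only mimics the idea: a manifest (in the spirit of `src/stage0`) maps each stage0 artifact to an expected SHA-256, and a downloaded component is accepted only if its digest matches. All names and payloads below are toy examples:

```python
import hashlib

def verify_component(data: bytes, expected_sha256: str) -> bool:
    """Accept a downloaded stage0 component only if its digest matches."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Toy stand-in for a src/stage0 manifest entry (real entries differ).
payload = b"pretend this is rustc-1.83.0-x86_64-unknown-linux-gnu.tar.xz"
manifest = {
    "rustc-1.83.0-x86_64.tar.xz": hashlib.sha256(payload).hexdigest(),
}

# An unmodified download passes; a tampered one is rejected.
assert verify_component(payload, manifest["rustc-1.83.0-x86_64.tar.xz"])
assert not verify_component(b"tampered", manifest["rustc-1.83.0-x86_64.tar.xz"])
```

The important property, as described above, is that the expected digests travel inside the (signature-verified) upstream source tarball, so the download step can only ever accept the exact stage0 artifacts upstream pinned.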
How could independent reviewers convince themselves that the
intermediate artifacts were indeed created by exactly this
procedure? And are there any signatures on the intermediate artifacts
that help establish a paper trail for the process?
>> Theoretically it is possible (maybe) to build rust for bookworm-backports
>> using rust in trixie. The problem here is that the resulting binaries
>> require libc6 from trixie to run; maybe this can be worked around by
>> providing static binaries, so glibc from bookworm can be used when
>> actually compiling rust. This would be seriously complicated though,
>> since we'd need some serious assistance from you to use .debs from
>> trixie to build stuff for bookworm-backports.
> I don't think that this would be any better than the current approach to be honest ;)
Agreed. No, it isn't better.
> starting with trixie I'd like to provide each version via backports as soon as it hits testing. that way we shouldn't have to do prebuilt-stage0 builds at all.
And that has my full support.
> bin-NEW turnaround (for rustc at least) is pretty fast nowadays (and I hope the same will be true for backports once I start uploading rustc regularly there? ;)). a new upstream release happens every 6 *weeks*, not months. and like you said, each version needs to be built and uploaded in turn if we want to avoid the rebootstrap dance with the prebuilt (or downloaded during build time) stage0.
Nowadays, uploads to the backports-NEW queue are usually reviewed and
processed multiple times per week. This means that, given the anticipated
upstream release cadence, the uploads to backports-NEW shouldn't impede
your work in sid in any way.
I hope this update helps everyone to better understand the situation we
are in.
Kind regards,
Micha