[Pkg-rust-maintainers] rustc_1.84.0+dfsg1-1~bpo12+1~bup_source.changes REJECTED
Fabian Grünbichler
debian at fabian.gruenbichler.email
Tue Feb 25 19:33:07 GMT 2025
On Sun, Feb 23, 2025, at 11:29 AM, Micha Lenk wrote:
> Hi all,
>
> I think after more than two weeks of rustc waiting in backports-NEW,
> it's time to be honest with myself. I don't have the energy to process
> this upload because of the extra effort that I'd like to put into
> validating this as an appropriate change for stable-backports. But
> unless you ask for it, I won't reject the upload either, so that any of
> the other backports FTP masters could help out here. Maybe they have a
> better idea of how to deal with the upload.
thanks for being transparent about it! FWIW, I do hope that going forward, starting with trixie, we won't need the stage0 (re)bootstrapping unless something unexpected comes up, in particular if we limit the backports architectures to a sensible subset that is of actual interest to users.
> On 22.01.25 17:31, Fabian Grünbichler wrote:
>> On Wed, Jan 22, 2025, at 5:34 AM, Michael Tokarev wrote:
>>> 22.01.2025 01:01, Micha Lenk wrote:
>>>> Additionally, I'd really appreciate some hints on how to validate the included
>>>> stage0 tarballs (once 1.84.0+dfsg1-1 entered testing, I would like to know!).
>>>>
>>>> Would you mind explaining a little bit on debian-backports at l.d.o why it is
>>>> worth allowing such a big delta in a backport?
>>>> What alternatives were considered?
>>> Rust is problematic to bootstrap. It can only be bootstrapped with a previous
>>> version of itself (1.83 in this case) being available as a stage0 compiler. [...]
>>>
>>> There are currently two possible ways to handle this, plus a few varieties:
>>>
>>> 1. using the pre-compiled version from upstream as the stage0. These
>>> versions are provided by upstream specifically for this very purpose.
>>> This is basically the standard way to bootstrap rust.
>>>
>>> This is exactly what this upload does. I grabbed the upstream stage0
>>> binaries for the 3 architectures (please note: this backport will only
>>> be available for the architectures where stage0 is available, not for
>>> all architectures in debian). There's a d/rules rule specifically for
>>> this purpose, which downloads the upstream-published stage0 for the given
>>> list of architectures and packs it as rust_$version-stage0.orig.tar.gz --
>>> see debian/make_orig-stage0_tarball.sh and debian/get-stage0.py
>
> Yet back to my initial question: How can I validate (not just trust)
> that this re-packaged rust_$version-stage0.orig.tar.gz was created exactly
> this way? Is the invocation of that d/rules rule sufficient to obtain
> the same rust_$version-stage0.orig.tar.gz as is shipped in the upload?
it is not currently designed to be 100% reproducible (in particular, mtimes are clamped to SOURCE_DATE_EPOCH, which is derived from d/changelog, so re-running it after a changelog update can produce a different outer tarball). the contained stage0 tarballs should match if you repeat the steps though, unless upstream for some reason retracted their stage0, in which case the download should fail.
> As an exception, I am willing to go the extra mile here, but I need a
> bit more explicit guidance, so that I can follow everything you were
> doing to prepare this upload.
I don't have access to the stage0 tarball that mjt uploaded, but if you repeat the invocation
upstream_bootstrap_arch="amd64 arm64 s390x" debian/rules source_orig-stage0
in the unpacked source tree, you *should* get a tarball whose contained tarballs have identical checksums, even if the "main" stage0 tarball itself ends up with slightly different metadata and thus a different checksum.
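to make that concrete, here is a rough sketch of how I would compare them - I have not run this against mjt's upload (no access to it), and the file names/paths are just placeholders to adjust:

# unpack both the uploaded orig-stage0 tarball and the one rebuilt with the
# d/rules invocation above (names/paths are placeholders)
mkdir uploaded rebuilt
tar -xf /path/to/uploaded/rust_$version-stage0.orig.tar.gz -C uploaded
tar -xf ../rust_$version-stage0.orig.tar.gz -C rebuilt
# the per-architecture upstream tarballs inside should be bit-identical
(cd uploaded && find . -type f | sort | xargs sha256sum) > uploaded.sums
(cd rebuilt && find . -type f | sort | xargs sha256sum) > rebuilt.sums
diff -u uploaded.sums rebuilt.sums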
>>> It is interesting - the current build procedure does NOT have any way
>>> to validate the stage0 binaries it is downloading. It looks like
>>> a serious omission which should be addressed in the rust source
>>> package, perhaps by verifying gpg signatures with the already
>>> available debian/upstream/signing-key.asc.
>> most points above are correct, but the prebuilt stage0 from upstream is verified. the trust anchor/chain looks like this:
>>
>> 1.) when importing a new upstream version, the upstream source tarball is verified via the upstream signing key (GPG atm, with plans to switch to another mechanism in the future)
>> 2.) that upstream source tarball is heavily pruned (it would be 4GB otherwise, including a full copy of gcc and llvm)
>> 3.) the upstream source tarball (both in pristine form, as well as after repacking/pruning) contains a file (`src/stage0`) with the information about the prebuilt stage0 artifacts provided by upstream
>> 4.) the debian/rules target and corresponding script to download the stage0 for (re)bootstrapping purposes use rustc's own build entry point (somewhat confusingly called bootstrap, which exists as both a python script and a rust binary; the former builds and calls the latter)
>> 5.) this bootstrap tool from upstream uses the `src/stage0` file to validate the checksums of downloaded stage0 components before moving them into place
>>
>> as a result, the `debian/rules` target can only download the stage0 artifacts matching the ones upstream used (and is consequently also limited to those architectures where upstream provides such a prebuilt stage0 - atm this excludes armel and mips64el among the release architectures).
> How could independent reviewers convince themselves that the
> intermediate artifacts were indeed created following exactly this
> procedure? And are there any signatures on the intermediate artifacts
> that help establish a paper trail for the process?
if the stage0 tarballs are those from upstream, they should match the checksums contained in the `src/stage0` file of the source package (which comes from upstream's released src tarball, currently PGP signed and retrievable via uscan, although the repacking there takes quite a while ;)). the same file is used to verify the downloaded tarballs as well (via `src/bootstrap/bootstrap.py`). of course, those stage0 tarballs only exist for a subset of Debian architectures (so the full set of architectures would need to be complemented with manually built ones), and there might be other situations (like the t64 transition) where a different/patched stage0 is needed. I do hope that such a thing would be explicitly called out when uploading to backports, no matter who does such an upload.
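if you want to spot-check one of the contained stage0 tarballs by hand instead of trusting bootstrap.py, something like the sketch below should do - note that the artifact name and the build/cache/ path are just examples from memory, and `src/stage0` being a simple path=sha256 key/value list is how I remember it, so double-check against the actual file:

# pick any stage0 component you want to verify (name is an example)
f=cargo-1.83.0-x86_64-unknown-linux-gnu.tar.xz
# expected checksum, as recorded in the (signature-checked) upstream source
grep "$f" src/stage0
# checksum of the tarball that was actually downloaded
# (bootstrap keeps its downloads under build/cache/<date>/, IIRC)
sha256sum build/cache/*/"$f"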
>>> Theoretically it is possible (maybe) to build rust for bookworm-backports
>>> using rust in trixie. The problem here is that the resulting binaries
>>> require libc6 from trixie to run - maybe this can be worked around by
>>> providing static binaries, so glibc from bookworm can be used when
>>> actually compiling rust. This would be seriously complicated though,
>>> since we'd need substantial assistance from you to use .debs from
>>> trixie to build stuff for bookworm-backports.
>> I don't think that this would be any better than the current approach to be honest ;)
> Agreed. No, it isn't better.
>> starting with trixie I'd like to provide each version via backports as soon as it hits testing. that way we shouldn't have to do prebuilt-stage0 builds at all.
> And that has my full support.
great! :)
>> bin-NEW turnaround (for rustc at least) is pretty fast nowadays (and I hope the same will be true for backports once I start uploading rustc regularly there? ;)). a new upstream release happens every 6 *weeks*, not months. and like you said, each version needs to be built and uploaded in turn if we want to avoid the rebootstrap dance with the prebuilt (or downloaded at build time) stage0.
>
> Nowadays, uploads to the backports-NEW queue are usually reviewed and
> processed multiple times per week. This means, given the anticipated
> upstream release cadence, the uploads to backports-NEW shouldn't impede
> your work in sid by any means.
the uploads to sid/experimental (bin-NEW) and those to backports-NEW would be staggered anyhow, something like:
- upstream releases at time X
- experimental/bin-NEW upload usually a few days later
- ACCEPT usually 0-2 days later
- sid upload a few days to 1-2 weeks later, depending on whether there is fallout or not
- testing migration 5-10 days later, depending on whether there is fallout or conflicting transitions
X repeats every 6 weeks, but the earliest bpo upload would happen once the release hits testing, which is shifted by 3-4 weeks, almost perfectly in the middle of the 6-week cycle ;)
> I hope this update helps everyone to better understand the situation we
> are in.
it does, and likewise! thanks for the transparency and your thoughts (and being careful about what hits the archive!)
Fabian