Implement The Update Framework (TUF) for Project Signing #3724
Conversation
text/3724-theupdateframework.md (Outdated)

## (cargo-tuf-lib) Standard TUF Implementation

We propose creating a new crate, `cargo-tuf-lib`, which shall be used by both Cargo and Rustup for doing TUF synchronization and update procedures. This library shall be a shim wrapper around the `rust-tuf` crate (https://github.com/rustfoundation/rust-tuf), providing a simplified and shared interface for synchronization and verification of the TUF repositories and their files.

The API surface of this crate is to be determined upon implementation in Cargo and Rustup. However, because both tools will need to perform synchronization and validation against the tuf-root repository, they shall use this shared interface to guarantee compatibility.

This API will include operations to sync the TUF repositories efficiently, and to perform a verified download of an object.
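Since the RFC leaves the API surface to be determined, here is a sketch of the shape such an interface might take; every name below is hypothetical and not part of the RFC:

```rust
// Hypothetical sketch only: the RFC defers the real API to the Cargo and
// Rustup implementations. `TufClient` stands in for a thin wrapper over
// rust-tuf state for one repository.
use std::path::{Path, PathBuf};

#[derive(Debug)]
pub enum TufError {
    ExpiredMetadata,
    BadSignature,
    Io(std::io::Error),
}

pub struct TufClient {
    local_dir: PathBuf, // e.g. ~/.cargo/tuf/root or ~/.cargo/tuf/crates
}

impl TufClient {
    /// Open (or initialize) the local copy of one TUF repository,
    /// pinned to a trusted root bundled with the tool.
    pub fn open(local_dir: &Path, _trusted_root: &[u8]) -> Result<Self, TufError> {
        Ok(Self { local_dir: local_dir.to_path_buf() })
    }

    /// Refresh timestamp/snapshot/targets metadata from a remote mirror.
    pub fn sync(&mut self, _mirror_url: &str) -> Result<(), TufError> {
        unimplemented!("would drive rust-tuf's update workflow")
    }

    /// Download `target_path` and verify its length and hashes against
    /// the signed targets metadata before exposing the local file.
    pub fn verified_download(&self, _target_path: &str) -> Result<PathBuf, TufError> {
        unimplemented!("would fetch, hash-check, then persist under local_dir")
    }
}
```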
Unsure if we should get this specific about the implementation in this RFC. Also, if this is a Rust Project crate, it is then subject to https://rust-lang.github.io/rfcs/3119-rust-crate-ownership.html
I agree. This has been dropped as an explicit part of the RFC. We will still leave it to cargo and/or rustup to bespoke or unify the implementation. There is a very real circumstance where it can be implemented in cargo and rustup will just utilize that in some manner.
Unresolving as I'm not seeing any RFC update that drops this
Understood. I'll let you see the update and resolve if it's acceptable.
text/3724-theupdateframework.md (Outdated)

Creation of a new `~/.cargo/tuf` directory. (If Cargo stores its registry information in another directory, the `tuf` directory should be stored alongside the `registry` directory.) This directory shall be used for all TUF operations by project tools (both Rustup and Cargo). The cargo folder was chosen as the main location for these files because, although Rustup will perform the initialization of these folders, there is already a precedent for shared files living within the cargo folder.

- `~/.cargo/tuf` The top-level directory of local copies of TUF repositories
- `~/.cargo/tuf/root` a locally synchronized copy of the `tuf-root` repository
- `~/.cargo/tuf/crates` a locally synchronized copy of the `tuf-crates` repository
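As a side note on the parenthetical above ("if Cargo stores its registry information in another directory"), the lookup would presumably key off Cargo's home rather than a literal `~/.cargo`; a minimal sketch, assuming the `home` crate that Cargo and Rustup already use:

```rust
// Sketch: resolve the proposed TUF directories relative to CARGO_HOME,
// so the layout follows Cargo even when ~/.cargo is overridden.
use std::path::PathBuf;

fn tuf_dirs() -> std::io::Result<(PathBuf, PathBuf)> {
    let tuf = home::cargo_home()?.join("tuf"); // e.g. ~/.cargo/tuf
    Ok((tuf.join("root"), tuf.join("crates")))
}
```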
For crates, I would assume this is per-registry and needs to be stored in a registry-specific location, maybe as a sibling to the index?
That is a good call out. I have updated the RFC to that language.
Unresolving as I'm not seeing any update that resolves this.
Understood. I'll let you see the update and resolve if it's acceptable.
## TAP-16 Implementation

We're proposing to use [TAP-16](https://github.com/theupdateframework/taps/blob/master/tap16.md) to provide efficient update checking and download sizes. TAP-16 uses Merkle trees rather than full lists for the download of a snapshot of the inventory of a repository (`snapshot.json`). We want to ensure that, as crates.io grows, the total size clients have to download when checking for updates remains small.
In practice, what is the expected impact of this at crates.io's current size, and if it grew to the size of some of the larger registries for other languages?
Yeah it would be good to understand the time and space complexities of the operations involved.
Does TAP-16 prevent a full snapshot being necessary at all, or does it just reduce the download size? I think it should be possible to validate a set of dependencies without having to have a full snapshot of the repository...
I completely agree that these are the critical questions. Based on my reading of TAP-16, the answers entirely depend on the implementation details that are specified in the POUF. I think we need these answers in order to stabilize this functionality.
If I understand TAP-16 correctly, each target file has an associated Merkle metadata file which functions as an inclusion proof: it provides the chain of hashes needed to link the particular leaf value to the Merkle root. It's described as follows:

> Once the Merkle tree is generated, the repository must create a snapshot Merkle metadata file for each targets metadata file. This file must contain the leaf contents and the path to the root of the Merkle tree. This path must contain the hashes of nodes needed to reconstruct the tree during verification, including the leaf's sibling (see diagram). In addition the path should contain direction information so that the client will know whether each listed node is a left or right sibling when reconstructing the tree.

This should be sufficient for clients to verify that a particular target metadata file is included in the Merkle root by downloading that Merkle metadata file alone, without the need to download anything else.
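To make the verification concrete: checking such an inclusion proof is a short loop over the supplied path. A sketch under assumed conventions (SHA-256, explicit sibling-direction flags, no domain separation); the actual encoding would be pinned down by the POUF:

```rust
// Sketch: recompute the Merkle root from a leaf plus its authentication
// path and compare it against the root from signed metadata.
use sha2::{Digest, Sha256};

enum Side {
    Left,  // the sibling sits to the left of the running hash
    Right, // the sibling sits to the right
}

fn verify_inclusion(leaf: &[u8], path: &[([u8; 32], Side)], signed_root: &[u8; 32]) -> bool {
    let mut node: [u8; 32] = Sha256::digest(leaf).into();
    for (sibling, side) in path {
        let mut h = Sha256::new();
        match side {
            Side::Left => {
                h.update(sibling);
                h.update(node);
            }
            Side::Right => {
                h.update(node);
                h.update(sibling);
            }
        }
        node = h.finalize().into();
    }
    &node == signed_root
}
```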
Yes. But the root of the tree will be invalidated by every single publish (3 per min). All 170K snapshot Merkle metadata files include the path all the way to the root, so they will need to be replaced on every single publish. That is a lot more for crates.io to send to S3. I doubt that this is sustainable.
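For rough scale (my numbers, not the RFC's): with ~170K leaves the tree depth is ⌈log2(170,000)⌉ = 18, so each proof file carries about 18 sibling hashes (18 × 32 bytes ≈ 576 bytes, likely around 1 KB per file once JSON framing is added). Rewriting all 170K proof files on a publish is then on the order of 170 MB, and at 3 publishes per minute that is roughly half a gigabyte of object churn per minute.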
Yes, it would probably need a service to dynamically produce the Merkle proofs as needed rather than trying to precalculate them on every update.
text/3724-theupdateframework.md (Outdated)

- `cargo-tuf-lib::sync` attempted prior to an index update
- `cargo-tuf-lib::verify_snapshot` called on an index update on the entire index
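Read together with the `cargo-tuf-lib` sketch earlier in this thread, the two bullets amount to roughly the following ordering (names hypothetical, carried over from that sketch):

```rust
// Sketch only: sync TUF metadata before touching the index, then verify
// what the index update is based on. `TufClient`/`TufError` are the
// illustrative types from the earlier cargo-tuf-lib sketch.
fn update_index(client: &mut TufClient, mirror_url: &str) -> Result<(), TufError> {
    // `cargo-tuf-lib::sync` attempted prior to an index update
    client.sync(mirror_url)?;
    // stand-in for `cargo-tuf-lib::verify_snapshot`: fetch and verify
    // the snapshot metadata covering the index contents
    let _snapshot = client.verified_download("snapshot.json")?;
    Ok(())
}
```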
With the sparse registry, we don't have an "index update" phase; we update the parts of the registry as needed while we perform a registry operation. We also don't have the entire index.
Do we need to do this sync even if we won't download anything new from the registry? Could we instead only check if there was a change? Can we only check what changed or what is downloaded?
I've updated this language to reflect the bespoke TAP-16/per-crate downloads vs. the TUF standard full index synchronization.
Unresolving as there isn't an update for me to see this
Understood. I'll let you see the update and resolve if it's acceptable.
Nominating this for the Leadership Council, to approve the one line-item about appointing the root quorum. See https://rust-lang.zulipchat.com/#narrow/channel/392734-council/topic/Approval.20of.20Council-related.20components.20of.20signing.20RFC for details.
The RFC looks like a great improvement over the previous iteration, thank you so much for everyone who contributed to it! ❤️
The only piece of feedback I have (spread across multiple comments) is how the TUF roles are distributed, but otherwise this looks great to me.
text/3724-theupdateframework.md (Outdated)

## Summary & Motivations

We propose the creation of two distinct TUF repositories for signing of Rust Project content and crates, respectively. Two main motivations exist for separating these concerns: the cadence of content published within each, and the trust of each. Rustup and Rust releases (both nightly and stable) are conducted in a controlled and predictable manner which is managed by the Project. However, crates are published by the community, and as such we see a larger and much more varied volume of content within this repository. We have additionally modeled signing the root of one repository by the other - this implicitly grants us a chain of trust from the "Project" (tuf-root) to the separate crates.io repository (tuf-crates). The sections below go into more detail on each repository and its configuration.
I think we should have a unified TUF repository for both crates.io and releases.
On the technical side, there is no difference in trust between having separate repositories or a single one, as with partial delegation we can prevent one from changing the other. The frequency of changes also shouldn't impact TUF (to my understanding), as both rustup and Cargo will have to still download the snapshot JSON to see if updates are present.
On the social side, TUF is going to be mostly an implementation detail, and users should not be expected to manually verify the TUF repositories or even know they exist. If someone cares enough to actually check them manually, they will understand that the crates content signed with TUF is not endorsed like the releases are.
The disadvantage of the two repositories is that we have to maintain two different quorums, which adds additional overhead (especially for the crates.io one).
> On the technical side, there is no difference in trust between having separate repositories or a single one, as with partial delegation we can prevent one from changing the other. The frequency of changes also shouldn't impact TUF (to my understanding), as both rustup and Cargo will have to still download the snapshot JSON to see if updates are present.

From a full-sync perspective and for handling the snapshot files, there is actually a large technical difference in the cadence and size of the updates these will contain, depending on how TAP-16 is implemented, and for mirror syncing operations. While the rust-lang one will require updates on a pretty standard cadence, crates.io will be updating approximately 3 times a minute on average. For an external mirror to stay synchronized on snapshots and data, a unified repository would require them to update at that speed (regardless of whether they are only mirroring Rust releases). Finally, there is just the noise ratio on the different repositories for quorum and signing operations. Because we are utilizing the GitHub repo as an additional sync point for signing activities, they will inherently be noisier on the crates.io side.

> The disadvantage of the two repositories is that we have to maintain two different quorums, which adds additional overhead (especially for the crates.io one).

Correct me if I am wrong - but we saw this as a distinct advantage instead of a disadvantage. The crates.io team maintaining their quorum definitely should be an independent operation from the root.
Using delegated targets to solve the same problem, you can have a distinct set of keys for each sub-project, e.g. the root quorum secures the whole system, then you can delegate a target for cargo with their own quorum of keys, one for rustup with their own quorum of keys, etc.
> While the rust-lang one will require updates on a pretty standard cadence, crates.io will be updating approximately 3 times a minute on average. For an external mirror to stay synchronized on snapshots and data, a unified repository would require them to update at that speed (regardless of whether they are only mirroring Rust releases).

Why is that?

A mirror that only cares about Rust releases could choose to synchronize every hour or every day, regardless of whether there is a unified TUF repository or not. The only thing preventing them from doing so is the timestamp expiring too soon, but we control its expiration and we can choose a sensible value.

A mirror that cares about both would have to update at the crates.io speed regardless. The fact that there would be releases in the same repository as crates.io wouldn't slow it down, since it wouldn't have to download unmodified files again.

> Finally, there is just the noise ratio on the different repositories for quorum and signing operations. Because we are utilizing the GitHub repo as an additional sync point for signing activities, they will inherently be noisier on the crates.io side.
>
> [...]
>
> Correct me if I am wrong - but we saw this as a distinct advantage instead of a disadvantage. The crates.io team maintaining their quorum definitely should be an independent operation from the root.

I don't see why it matters that the crates.io team maintains their own quorum. This RFC is meant to guarantee to external users that releases and crates come from the Rust project. There is no need to encode our governance structure into the root of trust.
text/3724-theupdateframework.md (Outdated)

##### Release (Stable/Beta) Role

The Release role shall have the authority to sign only stable Rust releases. We propose this role also follow a quorum model, consisting of all members of the release team. This role should have a 3-member threshold, and always consist of all members of the release team. At the time a new stable release is being compiled and shipped, a signing quorum must be conducted for this release.
The release team has been moving to get the release process as automated and hands-off as possible, and we finally achieved it a few months ago. It's now possible for members of the release team to start a whole release with a single command, and publishing releases doesn't require infra-admins privileges anymore.
This new release process doesn't let the person publishing the release control the contents of the release in any way, or have access to any signing key, and it only allows publishing the latest commit in the `stable` branch (which went through CI).
Requiring three quarters of the release team to sign the release would feel like a regression to me, as it would add more overhead to the volunteers running the release. With the new release process, the risk of a release team member releasing a rogue release has been mostly mitigated.
It would also mean that release team members have to start carrying persistent private keys with privileged access, instead of the signing key living locked down in AWS KMS with full audit logs.
I've updated the language here to allow for the release team to create automation delegate keys like the crates.io team will utilize. I've also included language that the release team quorum will still be required to rotate the key every 3-6 months to mitigate changes to the quorum and validate that the keys are still valid. Is that 3-6 month re-signing by the team more acceptable?

> The Release role shall have the authority to sign only stable Rust releases. We propose this role also follow a quorum model, consisting of all members of the release team. This role should have a 3-member threshold, and always consist of all members of the release team. The release team shall be responsible for the creation, management and administration of delegate keys utilized for releases. We recommend that any delegate automation keys be stored in a secure keystore and have a regular update and rotation schedule which shall require a quorum of the release team to conduct; a timeframe of 3-6 month rotations is recommended.
I think we need to decide whether stable releases should be signed by people, or by the automated release infrastructure. If we make a decision here that they should be signed by people, then we shouldn't allow the release team to delegate it to a machine.
Instead, if we decide that stable releases should be signed by our release automation, then we should just delegate to that key, without an intermediate "release team" quorum. See #3724 (comment) for my rationale for removing the "release team" quorum.
cc @Mark-Simulacrum (release lead) for the decision of how we want to sign stable releases.
I agree with the rationale in #3724 (comment) -- in general, my sense is that reducing the amount we need release team members to be "trusted" is best. (At least specially, modulo just having r+ rights on rust-lang/rust).
> rotate the key every 3-6 mo to mitigate changes to the quorum
I think I tend to agree that verifying we can form a quorum makes sense. But I don't know that having the release team have its own quorum is all that useful. Forming the root quorum is periodically necessary for the same reason (right?), and I'd prefer fewer quorums in general. Rotating the actual in-use key is also not necessary for us to verify we could have done so; it's probably effectively harmless to do so though.
text/3724-theupdateframework.md (Outdated)

###### Rustup Role

This shall be a quorum-based role, consisting of all members of the Rustup and Infrastructure teams. We recommend having at least a 3-member threshold. We have decided to have this role's quorum be broader to allow for emergency updates and releases of Rustup; we may want to increase the threshold when these teams have more members.
On one hand, the team here should be the release team, not the infrastructure team. In the past most rustup releases have been done by me and Mark with our release team hat.
On the other hand, JD has been working to migrate rustup to the same release process used by Rust releases, so my comment about stable releases applies here as well (they should not require individuals signing, the signing key should live in AWS KMS).
I've collapsed the Rustup and Release roles into a single role which can have delegate sub keys.
text/3724-theupdateframework.md (Outdated)

##### Root Role

The root role of the tuf-crates repository shall consist of all members of the crates.io team with a threshold of 3. As a special case, updating this role shall also require a re-signing by the root role of the tuf-root repository (signing a metadata entry existing within tuf-root). This means any changes to the membership of the crates.io team will also require a signing ceremony via GitHub by the root quorum.
Why is the quorum set here to the crates.io team, rather than the same quorum as the other repository (either by reusing the same repository or having the same keys as the quorum)? The crates.io team is not involved in operating the infrastructure (they wouldn't have the access to manage the target role).
Having every member of the crates.io team being part of the quorum would mean onboarding new contributors would both be harder (as it would require a quorum event) and would imply a lot more trust given to the new member (compared to just approval rights on the repository).
@Turbo87 @LawnGnome @carols10cents Do you have thoughts on this?
In spirit, the separation allows us to keep the project root as "delegates of authority", while the crates.io team is "responsible for all crates.io TUF".
Mainly this comes down to two things:
- we wanted to allow the crates.io team to self-manage and modify their TUF repository - in the event it chains to the original root quorum, any changes or rotations to signing keys used by crates.io would require them to trigger a root signing event by the root quorum - whose members are not all responsible for crates.io, and are considered more of a trusted authority
- the repositories are already separated because of technical load, which isn't a case TUF supports by default - so we are free to choose a way to make sure trust crosses the boundary of the two repositories.
So the above two items leave us with these choices here (assuming separate repositories):
- (The current option) Crates.io is root of their own repository, with out-of-band trust delegated from the other repository
- The project root quorum is also root of this repository, and the crates.io team is a delegate role. This will require them to respond to ceremonies in both repositories which pertain to them.
The target role for crates.io will likely be a single key in AWS KMS, only used for machine actions by the crates.io application, that we will hopefully never rotate nor revoke.[1] If the Rust project (through a quorum that hopefully also includes people from the crates.io team!) declares that automated key to be trusted, I don't see any practical reason for the crates.io team to set up processes to cryptographically vouch for that key.

In case there is a need to modify the set of crates.io roles due to a new feature (you mention trusted publishing for organizations), the design of the feature through the RFC process will take enough time that the change can just be rolled into the next root quorum re-signing.

So, in general I don't think we need to encode our organization chart into the root of trust. The purpose of TUF is just to ensure our users receive releases and the index as distributed by the Rust project; it doesn't need to encode our governance structure into the root of trust.

I consider the automated/machine signing keys for releases and crates.io to be "just" new pieces of infrastructure provided to the relevant teams. I'd like to think of this like the crates.io domain name: it's managed and controlled by infra-admins, and when the crates.io team needs changes to it, they ask and the changes get done.
Unless there is a need for a team to do manual signing actions, I don't think we need to consider things "delegated to a team", but rather delegated to a machine key that the relevant piece of infrastructure is given access to.
Footnotes:

[1] Unless I am missing something, if we have a key generated inside AWS KMS with all signing operations done through the KMS API and logged in CloudTrail, the only reason why we would need to rotate the key is if AWS KMS as a whole is compromised. At that point the Rust root of trust would be the least of anyone's problems.
cc @tarcieri
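On the KMS point above, for reference: with a key held in AWS KMS the private key never leaves the HSM, and each signature is one audited API call. A rough sketch using the Rust AWS SDK (`aws-sdk-kms`); the key alias, algorithm choice, and `anyhow` error handling are illustrative assumptions, not anything specified by the RFC:

```rust
// Sketch: sign a precomputed TUF metadata digest with a KMS-held key.
// Every Sign call is recorded in CloudTrail, which is the audit
// property discussed above. The key alias is a placeholder.
use aws_sdk_kms::primitives::Blob;
use aws_sdk_kms::types::{MessageType, SigningAlgorithmSpec};

async fn kms_sign(client: &aws_sdk_kms::Client, digest: [u8; 32]) -> anyhow::Result<Vec<u8>> {
    let out = client
        .sign()
        .key_id("alias/crates-io-tuf-targets") // hypothetical alias
        .message(Blob::new(digest.to_vec()))
        .message_type(MessageType::Digest) // we pass a digest, not the raw file
        .signing_algorithm(SigningAlgorithmSpec::EcdsaSha256)
        .send()
        .await?;
    let sig = out.signature().ok_or_else(|| anyhow::anyhow!("no signature returned"))?;
    Ok(sig.as_ref().to_vec())
}
```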
## Summary & Motivations

We propose the creation of two distinct TUF repositories for signing of Rust Project content and crates, respectively. Two main motivations exist for separating these concerns: the cadence of content published within each, and the trust of each. Rustup and Rust releases (both nightly and stable) are conducted in a controlled and predictable manner which is managed by the Project. However, crates are published by the community, and as such we see a larger and much more varied volume of content within this repository. We have additionally modeled signing the root of one repository by the other - this implicitly grants us a chain of trust from the "Project" (tuf-root) to the separate crates.io repository (tuf-crates). The sections below go into more detail on each repository and its configuration.
I'm curious why you're modeling this as two repositories with two separate roots rather than two delegated targets which effectively namespace resources and are managed under a single TUF repo.

The idea of a TUF root is it's self-signing, so if the keys listed in that file aren't intended to be able to sign future updates to the root, perhaps it isn't a root you want.

It sounds like the idea is you want to use `tuf-root` to delegate authority to `tuf-crates`, which sounds like a delegated target to me, e.g. you could have separate targets for `crates` versus `releases` or what have you, each with authority delegated to their own independent set of keys, but with a common root able to update the keys used to manage either.
Agree, see also #3724 (comment).
I've responded in the comment @pietroalbini linked. We are separating them for technical reasons and not social ones; having it be a delegated target would mean that releases are subject to the update and snapshot bloat of the crates updates (releases being much sparser and on a more regular cadence, while crates update 3x a minute).
text/3724-theupdateframework.md (Outdated)

#### Terminology

- `tuf`: The Update Framework and its specification
- `targets`: The actual content and files distributed and to be signed
`target` is already an overloaded term in the Rust/Cargo world; we may want to be careful how we use it in documentation of this RFC to avoid having an additional meaning we need to disambiguate.
I've changed the definition term to artifact and defined target role separately instead.
All members of all signing quorums within the Rust Project will require hardware keys, the expenses for which will be covered by the Rust Foundation.

## Root Quorum Model
Some general notes about hardware-backed quorum models, based on our experience on PyPI when attempting to deploy TUF on PyPI (i.e. PEP 458):
- Hardware (i.e. HSM) backed tokens are difficult to operationalize over expected key lifetimes:
- You'll need to establish a chain of trust for the hardware itself (some, but not all, HSM models have baked-in device attestations).
- You'll need to perform a secure offline signing and enrollment ceremony (we designed a runbook for PyPI, but it's pretty old at this point and was also constrained by HSM vendor limitations that should be re-evaluated)
- You'll need a testable compromise and rotation process, to prevent/limit normalization of deviance around key management and enrollment into the quorum.
- Each quorum party will ideally physically secure their HSM in a way that stymies a medium-complexity physical adversary: PyPI chose tamper-evident bags, and in practice each quorum member will need multiple bags and a tag-in-tag-out procedure for removing their key from their bag if occasional key operations are expected.
- This section is currently a little light on cryptographic details: it specifies the size of the quorum, but it doesn't say which types and sizes of keys are permitted in the root set, or how the community will verify that a key is actually enrolled within a particular HSM. For PyPI we stipulated a mix of P-256 and P-384 keys due to the limitations of HSMs at the time, but it might be possible to do Ed25519 keypairs with current commercial HSMs. We also prepared HSM-level attestations of key possession, although in practice hardware limitations meant that only the YubiHSMs actually supported root key attestation (versus attestation of the HSM itself).
As a whole, these were nontrivial issues to address in our initial attempt to implement TUF on PyPI, and IMO they're a large part of why TUF (in the form of PEP 458) hasn't materialized on PyPI. So as part of this RFC I recommend Rust's Leadership Council think about the over/under on a hardware quorum model versus something simpler (e.g. a soft root key in a cloud HSM) or even a different, smaller-footprint architecture (like a transparency scheme). I'll leave a separate comment on the latter.
To add to what @woodruffw wrote, for our HSM enrollment we require each key-holder to upload a device and key attestation that can be verified up to the manufacturer's root CA. Verification of this is done via GitHub Actions, so for each new key added we get automatic verification.
We also hook this into the TUF verification process, so that each time a TUF metadata document is updated we run a workflow that verifies the TUF signatures via the HSM's attested device key. This may be slightly overkill, but it makes it simple and foolproof to see that no extra keys have been added that are not known and approved.
@pietroalbini @Mark-Simulacrum @joshtriplett @jdno
I'd like your opinions on whether you want to lock us into specific cryptographic requirements and setup in this RFC, or leave it as an exercise in best practices during implementation.
What I would want to see in the RFC is the threat model for quorum keys, not a mandate of which approach to take to satisfy that threat model.
For example, it would be interesting to know whether the threat model here is "it should not be possible for a remote attacker to exfiltrate the key" (which could be satisfied by mailing everyone a hardware token, and the use of live distros to perform quorum operations) or "it should not be possible for a physical attack to compromise the key without it being detectable that a physical compromise happened" (which would require tamper-evident storage).
Generally Pietro's ask for threat model over exact approach makes sense to me. It might be useful as a separate document (e.g., for T-council + infra (or maybe a security team of some kind?) review) that predates this RFC and outlines the proposed threat model at a high level (e.g. "AWS KMS private key leak", "AWS KMS CloudTrail logs inaccurate", "government mandated action to owners of quorum keys", ...) and what we expect those to cost in complexity / $ for impl and maintenance. I started a thread about that on Zulip yesterday.
IMO, we should also pair that threat model with a very rough feasibility analysis -- e.g., to points raised above around key types, I think we should be making sure we don't land on something that isn't implementable. But I don't think this RFC should mandate e.g. P-384 keys or w/e. For one thing, if we expect that we'll be updating all of this for PQ on a timetable of ~years, it seems accurate that all those keys will change :)
To which my answer remains the same: better tooling (a largely one-time cost).
Sorry, jumping in with two quick nits. I think transparency systems are sometimes considered the opposite of, e.g., TUF or anything based off of cryptographic signatures, but that is not the case. Server-side signing (be it on a tlog or an NPM registry) does not provide the same security argument as producer/client-side signing (e.g., as done by TUF targets, PGP, in-toto, a signed SBOM, etc).

Nit 1: Which I believe achieves the exact same security properties as the current cargo index on a git repository. I'm not sure why adding another historic-MHT backend (or any hash-chained variant thereof) to store metadata would achieve anything new. At best, you could argue that adding a CI job to sign commits would suffice and provide the same fundamental win as a "traditional BT system".

Witnessing (providing protection against fork*/split-view attacks) is not the same as monitoring (studying the semantic properties of a log entry to identify maliciously-written entries), and, as far as we know, there is near 0 independent monitoring of most BT deployments --- there is almost no knowledge of what a good entry looks like! Further, I find it strange to push back against PKI, given that operationalizing a k-n witness/monitor system that notifies and/or blocks known-bad entries is also an open research problem, and also requires a PKI-like system for enrollment of monitors and gossiping between them. Using TUF or [...]

I don't mean to be snarky, but I sometimes wonder whether, if SSL was being worked on in 2020+, people would argue that we just need an "html and javascript transparency" and do away with the PKI.
This might be a misunderstanding of what you mean (in which case I apologize), but I believe that the two aren't analogous in this case: the fact that the current cargo index is on [...] (You're right that [...])

This is a true and valid criticism. Apart from Go's sumdb, there are scant examples of real-world BT deployments with claimant models/personas to reason about. The only one besides Go that I'm aware of is Homebrew's use of Sigstore in an effectively-BT setting, which almost certainly has no independent monitors at the moment (besides myself, which wouldn't be fair to count 🙂). At the same time, I think this is also true in practice for packaging-ecosystem deployments of TUF -- we don't have PEP 458 yet for PyPI, and to my understanding the RubyGems TUF implementation from 2013 didn't fully materialize (I apologize if I'm mischaracterizing things there).

The main argument here is that transparency is an independently valuable property, one that TUF can't (at present?) provide. The secondary argument (to your point above) is that, given a choice between a hardware-backed k-of-n PKI and a k-n distributed witness/monitor PKI, the former is harder for the index persona to operationalize. That doesn't mean that the latter is easy (or, on net, even exactly as hard), but that it's easier for the index itself while achieving similar cryptographic properties, plus transparency.
I think that would be silly, so I appreciate the snark. However, there's an underlying truth that's been revealed by the last 30 years of operational failures in the Web PKI: we need a PKI for the public web, but the public web has also become more secure as we've reduced the number of independent PKI vendors on it and forced them into Certificate Transparency. In other words: in 2020+, I think it would be a correct observation that adding auditability to a set of smaller PKIs is a better ecosystem-level design decision than standing up a new PKI.

As a commenting note: this is a concrete RFC, so I don't want to drag the thread into a more abstract non-crates discussion about alternatives. I think I've registered my concerns to a degree that I personally consider appropriate and I appreciate that they've been responded to in a detailed, thoughtful, and considerate manner. With that, I'll cease my posting and let this RFC run through with the concrete consideration it deserves.
I know Sigstore has already been discussed, but I believe it can provide this capability, either via a self-hosted deployment as described in bring your own TUF, or via signing TUF metadata files, which I believe can work via Sigstore-as-a-service (I'm a bit confused, because I swear TUF and in-toto used to be explicitly listed as artifact formats natively supported by cosign in addition to OCI, eBPF, WASM, etc, but now I can't find a reference to that anymore). I could be mistaken as I'm not a Sigstore expert, so I would be curious to hear from others who have opined on Sigstore, as well as the authors of the Sigstore-related RFCs (cc @lulf).

If I understand correctly, it seems like something easy to adopt incrementally as a retroactive add-on, and thus something which this RFC doesn't need to directly concern itself with other than a potential mention for future work.
RFCs usually consider the credible alternatives, and binary transparency seems to be the main one to consider, as @woodruffw discussed above. It'd be good if the RFC could discuss this specifically in its text (right now it does not).

The way I'd most prefer to see that presented would be for this RFC to start by laying out the specific security and operational goals it hopes to achieve (and why), and the specific security claims it wants to make to end users (and what assumptions those security claims make), and to then analyze and compare the proposed instantiation of TUF and of some reasonable binary transparency scheme on how they might meet those goals and support those security claims.

It would be good too for the drawbacks to discuss the required and ongoing operational maturity that running a PKI demands, and it would be good for the RFC to compare TUF with BT here also. I find myself wondering whether any formal analysis has been done as to the startup and ongoing costs here, in skilled personnel and other things, that we'd be committing ourselves to by adopting this.

If the idea is that we should do TUF and also do binary transparency, as has also been discussed in this thread, then some details around that would be good to discuss also in the future possibilities section.
Above, @woodruffw discusses the experience of PyPI in adopting TUF. The experience of PyPI here seems highly relevant. We'd hate to repeat any mistakes they've made or have to relearn lessons that they've learned. If we want a broader set of perspectives, perhaps we could reach out to others from that community also to collect experiences. (Perhaps, @woodruffw, you could suggest others on the relevant team that might have valuable experiences to share? I'm reaching out myself to some people who may be able to share experiences with adopting TUF or BT in similar systems with us.)

In any case, it seems that it would be valuable for this RFC to try to collect and capture the experiences of PyPI and others in adopting this framework (e.g. in the prior art section), and to discuss any ways in which our situation is different or ways that we've adjusted our own approach so as to avoid any problems or hardships that others have encountered.
## Crates.io changes

- Prior to updating the index, crates.io shall perform the online signing of the index entry to update the targets and sign the index entry, saving this as a like-pathed artifact in the TUF repository.
from just this sentence alone I unfortunately have no clue what that actually means for us 😅
side note: what happens if the update of the TUF repository is successful but then the index repo update fails? or if the sparse index upload fails? what happens if both indexes are out-of-sync?
the indexes are treated as eventually consistent by crates.io. I'm not sure if we can actually guarantee a TUF repo update happening before and close in time to the two separate index updates.
Yes, I think implementation-wise the TUF copy of the index is going to have to be a 3rd eventually consistent source. At least at first.
At some point we could start maintaining the Git index using an entirely separate server that watches for changes in the TUF index, then downloads and commits them. Similarly we could have a separate service that watches the TUF index and uploads the latest index files to S3 and does CDN invalidations. But this is all fundamentally "eventually consistent".
S3 does have some APIs for referring to old versions of the file. So we could customize TUF to refer to the historical files already uploaded to the sparse index. But this would require a custom variation of TUF for our purposes, which does not seem to be the approach in this RFC.
@Turbo87 Yes, this means that updates will fail until the two are consistent - and why it is recommended that the signing occur prior to the index actually reflecting it. I think it is a choice for the cargo and crates teams to decide whether eventual consistency, with room for error when a given crate update occurs at the same time as a fetch, is acceptable. Given the sparse method of the index and signing, the error would only occur if someone is attempting to pull an update immediately after it was pushed and prior to signing occurring (I know this may happen in some CI setups though).
> Given the sparse method of the index and signing, the error would only occur if someone is attempting to pull an update immediately after it was pushed and prior to signing occurring (I know this may happen in some CI setups though).

Unfortunately, a user who published will almost immediately check to see if it works. Especially if the next thing they are going to do is publish the next library in the chain. So much so that `cargo publish` now automatically checks the index in a loop waiting to see it show up.

So this corner case is one we will need to account for.
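One mitigation sketch for that corner case (purely illustrative, reusing the hypothetical `TufClient` from earlier in the thread): treat a verification failure immediately after a publish as retryable, the same way `cargo publish` already polls the index:

```rust
// Sketch: tolerate the eventual-consistency window between the index
// and the TUF signatures by retrying a verified fetch with backoff.
use std::thread::sleep;
use std::time::Duration;

fn fetch_when_consistent(
    client: &mut TufClient,
    mirror_url: &str,
    target: &str,
    max_attempts: u32,
) -> Result<std::path::PathBuf, TufError> {
    let mut delay = Duration::from_secs(1);
    let mut attempts = 0;
    loop {
        client.sync(mirror_url)?;
        match client.verified_download(target) {
            Ok(path) => return Ok(path),
            // The index may briefly be ahead of the signatures right
            // after a publish; treat that as retryable, up to a limit.
            Err(e) => {
                attempts += 1;
                if attempts >= max_attempts {
                    return Err(e);
                }
            }
        }
        sleep(delay);
        delay *= 2; // exponential backoff between attempts
    }
}
```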
## Root Quorum Model

The root key shall follow a `5-of-9` authentication model for all operations. We consider this a reasonable middle ground of quorum to prevent malicious activity, while allowing for rapid response to an event requiring a quorum. These events are iterated below in [When the Quorum will be needed][when-the-quorum-will-be-needed].
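For concreteness, the `5-of-9` rule is just a threshold count over distinct enrolled keys. A sketch with stand-in types (`KeyId`, `Sig`, and `verify_one` are hypothetical, not from the RFC):

```rust
// Sketch: an abstract k-of-n threshold check (5-of-9 for the proposed
// root role). Real implementations verify signatures over the canonical
// metadata bytes with the enrolled root keys.
type KeyId = String;
type Sig = Vec<u8>;

fn quorum_met(
    signatures: &[(KeyId, Sig)],
    enrolled: &[KeyId],
    threshold: usize, // 5, with enrolled.len() == 9
    verify_one: impl Fn(&KeyId, &Sig) -> bool,
) -> bool {
    let mut counted: Vec<&KeyId> = Vec::new();
    for (key, sig) in signatures {
        // each enrolled key may contribute at most one valid signature
        if enrolled.contains(key) && !counted.contains(&key) && verify_one(key, sig) {
            counted.push(key);
        }
    }
    counted.len() >= threshold
}
```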
just to play devil's advocate here: what would happen if the majority of keyholders met at a large event and were all suddenly incapacitated? is there a way to recover from such a worst-case scenario?
There is not. We would need to reinitialize the repo with a new root creation ceremony and go about a method of affirming our trust of that without it just being signed by the previous quorum. Likely this would occur via a bespoke process of signing the new quorum with the remaining members and having to out-of-band allow for this instance (it would be a special update of cargo/rustup to allow this).
This is why it was stated in the quorum recommendations as:

> - There is value in the members of the quorum generally residing in geographically and politically distributed locations most of the time. This RFC does not place any requirements regarding quorum member colocation for events or travel; the Rust Project and the quorum members can appropriately evaluate the value of such events and the corresponding risks.
git is also a protocol, with intentional security properties and design choices. Using it as a building block is as reasonable as choosing a vanilla merkle hash tree or any other authenticated data structure.
sparse-index is implemented using sparse checkouts from git, afair, but I haven't followed this work too closely I'll admit.
Of course you can! Hell, I implemented inclusion proof commitments in this paper in the browser, in javascript, and with minimal overhead (compared to a github API call). I'm not sure what's so special about a TL that people assume this is not doable with any other scheme. A git commit is an irregular MHT like any other, including a TL.
This is what I'm trying to get at: a vanilla BT implementation barely exists, or if it exists it doesn't implement the same security properties as TUF (or any code-signing solution for that matter). My original point was to make sure we stop making this non-sequitur of an argument. You can have BT with code signing. Hell, you should have BT with code signing. Sigstore is one option --- which is perfectly compatible with TUF, as you know :)

Ironically, one of the main motivations for the speranza paper was that some cargo users were doxed by kiwifarms using historical cargo metadata (which I think was cited on a previous cargo RFC, #3403).

As for the monitoring, I believe we should be upfront about it. At times I try to warn the community about upselling a TL as if it was some sort of magical blockchain --- hell, if you came here arguing for people to "just use a blockchain" you'd probably be laughed out of the room.
I'm afraid to add that you also are not operating a transparency log in either the PyPI or the Homebrew case. I believe it's perfectly reasonable for a community repository to rely on hosted solutions (such as a hosted Sigstore instance), but arguing that hosting a BT log is easier than rolling a PKI is disingenuous. Let me put it differently: every instance of a large transparency or code-signing solution (be it Sigstore, a TL, whatever) that we can contrast against has a large team of engineers at a MANGA company or a large not-for-profit with multiple stakeholders (such as the Linux Foundation).
This is in contrast with the PyPI and Rust case, which as far as I know doesn't have the continuous backing of a large company with people running this infra as $DAYJOB. I may be missing a success story, but outside of the linux distro case (who work tirelessly to provide both code signing and transparency) I really can't come up with any that is not an ad-hoc, "stick it in a TL" BT deployment (as Firefox's used to be). I believe this is important to highlight if the authors decide to make a comparative study of deployments of e.g., TUF or BT.
That's also my main argument. If transparency is an independently valuable property, why argue against another independently valuable property that's being proposed? Metaphorically speaking, it'd be like telling someone they shouldn't get the burger because they are also ordering a coke. To reiterate: every time somebody argues for CT/TL on a code-signing issue, they are making people confuse both of them, for no particular reason.
This is very close to a division by zero because we have exactly 0 k-of-n distributed witness/monitor PKI deployments, let alone "operationalized".
I like this, but I really want to highlight the word "adding" that you used, because it's the correct one.
I agree with this. I also was hesitant to drag things on, but I think that ship has sailed now that people are conflating TL and code signing (again, sigh). Moving on to reply to @tarcieri (not sure if I should be posting twice, apologies if I violate etiquette).
Certainly! They were not removed from first class, but now the log doesn't serve these payloads (but instead just holds the hashes). This connects to the conversation above about reducing mission creep between the projects. You can still submit TUF and in-toto types into the log using the library/tools. However, you'll have to store the payload on your side. Sorry I'm linking to a PR, but we're undergoing a re-write of the docs. See here for the types supported.
Yep, this is why I'm making that big of a fuss about the fact that these are tangential (as @woodruffw put it), or rather complementary (as I would've liked to say). Ideally, you want to have the properties of the TL/BT log, while managing trust information using something like TUF. You can then sprinkle in-toto attestations as another incremental portion, but I don't want to get ahead of myself --- this is a TUF/codesigning RFC after all.
It is not. It uses a file for each crate presented by the crates.io API. There is no git repo involved with it at all. The sparse registry has been the default for a while now.
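For readers following along, the sparse index serves one small file per crate at a path derived from the crate name; a sketch of that documented layout (crate names are ASCII, so byte slicing is safe here):

```rust
// Sketch: the sparse-index file path for a crate, per the documented
// index layout (lowercased name, bucketed by name length/prefix).
fn index_path(name: &str) -> String {
    let name = name.to_lowercase();
    match name.len() {
        0 => panic!("crate names are never empty"),
        1 => format!("1/{name}"),
        2 => format!("2/{name}"),
        3 => format!("3/{}/{}", &name[..1], name),
        _ => format!("{}/{}/{}", &name[..2], &name[2..4], name),
    }
}
// e.g. index_path("serde") == "se/rd/serde"
```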
In short, I agree with @SantiagoTorres that transparency in all its forms (whether Sigstore, BT, etc) is usually misunderstood as a red herring in these discussions, because you really want both (producer-side) code signing and transparency, and they are not contradictory. To briefly see why, consider: transparency advocates usually assume that package registries are untrusted, and so you need transparent logs (TLs) to audit them, but what stops these untrusted registries, or even the TLs themselves, from simply spamming the TLs with new malicious package versions? Congratulations: you now have to reactively catch and block this immutable, transparent malware. With code signing such as with TUF, given the right PKI setup, attackers can be prevented from tampering with packages even in a registry compromise.

I agree with @tarcieri and @traviscross that we can and should talk about both TUF and TLs (whether Sigstore, BT, etc), especially with a threat model, the different problems they solve, a security analysis, and how they can work together.

As for experience operating TUF metadata repositories at scale, I recommend talking to people who have done it, such as the Uptane community (including Datadog Remote Configuration) and Drupal (@ergonlogic). IIUC the RubyGems TUF integration was unfortunately never merged due to reviewer bandwidth.[1] Similarly, to the best of my understanding, the initial PyPI PEP 458 integration was not completed due to reviewer, contractor, and contributor bandwidth, and it simply fell through the cracks (as is typical in OSS). As Santiago mentioned, TUF integrations for package registries have so far simply never received the kind of commercial support TLs have, thereby lending the illusion that the latter is "simpler" than the former. You can only do so much with part-time volunteer and contract work. RSTUF (@kairoaraujo and friends) solves this problem by abstracting away TUF as a collection of services you can run on registries themselves, but we still need a lot of support here, perhaps from the Rust community. My thinking is that we can and should host managed TUF metadata repositories for OSS package registries, and there are ways to do this as securely as possible.

Anyway, I'm excited about this RFC, and will make it a point to review it now that I'm back in a similar time zone 🙂
Trying to follow the jargon-laden discussions on transparency logs vs PKIs, and having skimmed the RFC to see if I could make sense of all this, I think it would be good to do this and work from a more conceptual-level threat model on towards the point where the trade-offs between TLs and TUF become clearer.
@djc re: jargon/etc. I think this would be valuable. Is this something where patches are welcome? I'm not very familiar with the rust RFC process, but I'd be happy to throw in some text to help further the discussion/evaluation/understanding of the proposal.
I'd leave that to the RFC authors to decide. (I'll just point out that I don't think the solution here is to just add a bunch of glossary to explain all the terms of art, but rather to work from a threat model and conceptual practices towards concrete algorithms and systems.)
- Creation of `rust-lang/tuf-root` and `rust-lang/tuf-crates` repositories on GitHub | ||
- Initiation of the root signing ceremony via tuf-on-ci on each repository | ||
- Facilitation of the initial and subsequent signing events
As far as I understand it, none of these three items actually requires the involvement of the Infrastructure Team. The repositories can be created using the team repo, and a team with write access to them can configure tuf-on-ci in these repositories autonomously. Only eventual interactions with cloud-based resources (e.g. AWS KMS) would require support by the Infrastructure Team.

Given that the Infrastructure Team is already understaffed and has (in my opinion) way too many existing responsibilities, I'm wondering if it would make sense to create a new `t-signing` (sub)team (or similar) that owns the implementation and maintenance of TUF.

The infra team has a successful track record of collaborating with other teams to provide cloud resources for them, so I have no concerns about working on the CDN together. But I'd like to see the signing effort owned by people who are passionate about the subject and can dedicate the necessary amount of time and effort to make it a success. 🙂
I'd like the language here to still leave the creation (or not) of a sub-team or other teams to the infra team. I think the language makes it acceptable for the infra team to delegate to a subteam, or even hand this off to another team, while the initial creation still rests with you.
That said, does the language as is reflect that thought enough for you?
##### Root Role

The root role of tuf-root shall be a TUF role consisting of 9 members with a 5-member threshold for signing (5-of-9); please reference the Root Quorum Model section below for details on how this role should be managed and its members selected. The sole purpose of this role shall be delegating authority to the other roles within the tuf-root repository (when members of these roles change). Finally, this role shall also be used for signing the tuf-crates root.json, thus protecting the chain of trust between tuf-root and tuf-crates.
> Finally, this role shall also be used for signing the tuf-crates root.json - thus protecting the chain of trust between tuf-root and tuf-crates.
So IIUC both repos will share the same threshold of root keys at any given time? You should also think about how to rotate the root keys, and keep both repos in sync.
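For readers unfamiliar with TUF's rotation rule: version N of root.json must be signed by a threshold of the keys listed in version N-1 and a threshold of the keys listed in version N itself, so existing clients can walk forward from their trusted root while new keys cannot rewrite history unilaterally. A minimal sketch of that check, with simplified types and a stand-in for real signature verification:

```rust
use std::collections::HashSet;

type KeyId = String;

struct Root {
    version: u32,
    threshold: usize,        // e.g. 5 in the proposed 5-of-9 quorum
    key_ids: HashSet<KeyId>, // e.g. the 9 authorized root keys
}

struct Signature {
    key_id: KeyId,
    // signature bytes omitted in this sketch
}

// Stand-in for real cryptographic verification of one signature.
fn signature_is_valid(_sig: &Signature, _payload: &[u8]) -> bool {
    true // sketch only
}

/// TUF rule: a new root must satisfy the *previous* root's threshold
/// and its *own* threshold, counting at most one signature per key.
fn root_update_is_valid(prev: &Root, new: &Root, payload: &[u8], sigs: &[Signature]) -> bool {
    let count_for = |root: &Root| -> usize {
        let mut seen: HashSet<&KeyId> = HashSet::new();
        let mut count = 0;
        for sig in sigs {
            if signature_is_valid(sig, payload)
                && root.key_ids.contains(&sig.key_id)
                && seen.insert(&sig.key_id)
            {
                count += 1;
            }
        }
        count
    };
    new.version == prev.version + 1
        && count_for(prev) >= prev.threshold
        && count_for(new) >= new.threshold
}

fn main() {
    let keys: HashSet<KeyId> = (1..=9).map(|i| format!("key-{i}")).collect();
    let prev = Root { version: 1, threshold: 5, key_ids: keys.clone() };
    let new = Root { version: 2, threshold: 5, key_ids: keys };
    let sigs: Vec<Signature> =
        (1..=5).map(|i| Signature { key_id: format!("key-{i}") }).collect();
    assert!(root_update_is_valid(&prev, &new, b"root.json bytes", &sigs));
}
```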
## TUF Management

We propose the adaptation and implementation of tuf-on-ci (https://github.com/theupdateframework/tuf-on-ci) to manage roots and signing events via GitHub CI. This provides a GitHub-centric workflow for performing signing ceremonies via pull requests directly on the TUF repositories in question.
I think even @jku would say that tuf-on-ci is better suited for the `tuf-root` repo than for the `tuf-crates` repo. For the latter, you probably want to use Repository Service for TUF (RSTUF). We are thinking about running RSTUF as a managed service on behalf of OSS package repos like PyPI, and would love to collaborate on testing it for Crates, too.
Correct me if I am wrong, but I think we can still use RSTUF in conjunction with tuf-on-ci if the crates.io team wanted to go that way. Our main motivation for tuf-on-ci is to make the management of the roles, delegates, rotation, etc. as transparent as possible. So we could, hypothetically, shift to RSTUF at a future date while still using tuf-on-ci to manage our quorums.
Based on rust-lang/rust#133638 (comment), it sounds like we will still need a long-lived(?) key for Debian and other distros to verify signatures on distributed tarballs, or we will need to work with them to integrate with TUF. Is there some standard approach that we should expect to pursue here?
Perhaps a separate delegation from `tuf-crates` that handles the direct signing of crates. Specifically, its targets would include crates along with, say, detached GPG signatures.
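On the consumer side, a detached signature keeps distros' existing verification flows intact. A hedged sketch of what such a check could look like, shelling out to GnuPG (the file names are hypothetical; `gpg --verify <sig> <file>` is the standard form for checking a detached signature):

```rust
use std::process::Command;

/// Verify `artifact` against a detached signature file using GnuPG.
/// Returns Ok(true) when gpg exits successfully.
fn verify_detached(artifact: &str, detached_sig: &str) -> std::io::Result<bool> {
    let status = Command::new("gpg")
        .args(["--verify", detached_sig, artifact])
        .status()?;
    Ok(status.success())
}

fn main() -> std::io::Result<()> {
    // Hypothetical file names, for illustration only.
    let ok = verify_detached("serde-1.0.0.crate", "serde-1.0.0.crate.asc")?;
    println!("signature valid: {ok}");
    Ok(())
}
```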
@rustbot labels +T-leadership-council -I-council-nominated

This RFC makes a specific ask of the council: there was a request for the council to commit to this ahead of any proposed FCP for this RFC. We on the council discussed this in our 2024-11-08 meeting without consensus. In discussion, people recognized this as a big ask and were concerned about it being difficult to do a good job at it. At the same time, nobody, as I recall, proposed to delegate this.

The next step here is probably to give the council, if possible, more information and assurance about how this might be more feasible and sustainable than we may worry it is. Probably we were also looking for a sense of where this RFC stands with regard to the consensus it would need among the other teams.

To that end, absent a consensus from us to precommit here, the right thing to do would probably be to include us on any proposed FCP. Wearing our council hats, we're not going to delve into the technical details; we'd just be signing off on policy matters.
I wonder if there would be interest in "TUF as a service"? We can do this in a secure way¹, and more importantly, it would remove from Crates the burden of directly understanding, implementing, and maintaining the signing of package indices. We are already beginning to test this for another package repository, and would be happy to test it for Crates, too. Finally, not sure what an FCP is, but happy to help however I can there.
FCP stands for Final Comment Period. It's technically the ten-day period after we've gotten enough checkboxes to accept a proposal, giving one last chance to raise blocking concerns. In practice, and especially when used as a verb, FCP tends to mean proposing to merge or accept a change, which is when rfcbot puts all the checkboxes up and waits for people to check them.
@traviscross the RFC says the leadership council is tasked with "selecting and managing the quorum membership", but it sounds like, even as currently worded, it could optionally delegate that authority to a set of 3rd-party key custodians who manage keys in the quorum. For example, perhaps the infosec teams of some of the Foundation's large corporate donors could be leveraged here?
As a general status update for folks: I am currently trying out various implementations of TAP-16 (and a few other bespoke synchronization methods). We are trying to minimize the number of downloads occurring on any given fetch from the sparse index, to avoid requiring full snapshot downloads every time. TAP-16 is a conceptual solution to this that we iterated on here in the RFC, but I'm also experimenting with other approaches to the snapshots themselves (including eliminating the snapshot phase and accepting that risk). We are making sure these solutions are feasible and acceptable before they are presented in this RFC.
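For background, TAP-16 (snapshot Merkle trees) replaces the monolithic snapshot with a Merkle tree over the metadata, so a client only needs the signed root hash plus a short inclusion proof per fetched entry rather than the full snapshot. A minimal sketch of the client-side inclusion check follows; a toy 64-bit hash stands in for SHA-256, and real metadata encoding is omitted:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy hash; a real implementation would use SHA-256.
fn h(data: &[u8]) -> u64 {
    let mut s = DefaultHasher::new();
    data.hash(&mut s);
    s.finish()
}

// Hash of an interior node from its two children.
fn node(left: u64, right: u64) -> u64 {
    let mut buf = Vec::with_capacity(16);
    buf.extend_from_slice(&left.to_be_bytes());
    buf.extend_from_slice(&right.to_be_bytes());
    h(&buf)
}

/// Verify that `leaf` is included in the tree with the given signed `root`,
/// walking up via sibling hashes; `true` in the pair means "sibling is on
/// the left".
fn verify_inclusion(leaf: &[u8], path: &[(u64, bool)], root: u64) -> bool {
    let mut cur = h(leaf);
    for &(sibling, sibling_is_left) in path {
        cur = if sibling_is_left { node(sibling, cur) } else { node(cur, sibling) };
    }
    cur == root
}

fn main() {
    let (a, b) = (b"crate-a metadata".as_slice(), b"crate-b metadata".as_slice());
    let root = node(h(a), h(b));
    // Proof for `a`: its single sibling is h(b), sitting on the right.
    assert!(verify_inclusion(a, &[(h(b), false)], root));
}
```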
Avoid a markdown rendering bug (from @Turbo87) Co-authored-by: Tobias Bieniek <[email protected]>
SHA-256, not 512. Co-authored-by: Tobias Bieniek <[email protected]>
This RFC was co-authored by Walter Pearce (@walterhpearce) and Josh Triplett (@joshtriplett).
Here, we propose the alternative of adopting and implementing The Update Framework (TUF) to provide the chain of trust and implement signatures for crates and releases. This gives us the same mitigations and protections as the previous RFC, achieved with the standard TUF framework and current industry-standard techniques, tailored for the Rust ecosystem.
Big thanks to @epage @Eh2406 @mdtro @woodruffw for the insights and discussion around this. Also heartfelt thanks to anyone else I missed who participated in the RustConf 2024 Cargo Vault discussions around this topic.
We're going to have follow-up discussions with the infrastructure team on deploying and documenting the infrastructure for this, and on using this infrastructure to set up mirrors (which was one of the primary motivations for creating this infrastructure). Depending on the complexity of setting up mirroring, we may follow up with a subsequent RFC on mirroring.
(This RFC supersedes and closes #3579, the previous draft Public Key Infrastructure RFC, which did not use TUF.)
Rendered