woodruffw
a month ago
> In its current form, however, trusted publishing applies to a limited set of use cases. Support is restricted to a small number of CI providers, it cannot be used for the first publish of a new package, and it does not yet offer enforcement mechanisms such as mandatory 2FA at publish time. Those constraints have led maintainer groups to caution against treating trusted publishing as a universal upgrade, particularly for high-impact or critical packages.
This isn't strictly accurate: when we designed Trusted Publishing for PyPI, we designed it to be generic across OIDC IdPs (typically CI providers), and explicitly included an accommodation for creating new projects via Trusted Publishing (we called it "pending" publishers[1]). The latter is something that not every subsequent adopter of the Trusted Publishing technique has picked up, which is IMO both unfortunate and understandable (since it's a complication over the data model/assumptions around package existence).
I think a lot of the pains here are self-inflicted on GitHub's part: deciding to remove normal API credentials entirely strikes me as extremely aggressive, and is completely unrelated to implementing Trusted Publishing. Combining the two in the same campaign has made things unnecessarily confusing for users and integrators, it seems.
[1]: https://docs.pypi.org/trusted-publishers/creating-a-project-...
veeti
a month ago
Yet in practice, only the big boys are allowed to become "Trusted Publishers"[1]:
> In the interest of making the best use of PyPI's finite resources, we only plan to support platforms that have a reasonable level of usage among PyPI users for publishing. Additionally, we have high standards for overall reliability and security in the operation of a supported Identity Provider: in practice, this means that a home-grown or personal use IdP will not be eligible.
How long until everyone is forced to launder their artifacts using Microsoft (TM) GitHub (R) to be "trusted"?
[1] https://docs.pypi.org/trusted-publishers/internals/#how-do-i...
woodruffw
a month ago
I wrote a good chunk of those docs, and I can assure you that the goal is always to add more identity providers, and not to enforce support for any particular provider. GitHub was only the first because it’s popular; there’s no grand evil theory beyond that.
VorpalWay
a month ago
So if I self host my own gitea/forgejo instance, will trusted publishing work for me?
woodruffw
a month ago
If you had enough users and demonstrated the ability to securely manage a PKI, then I don’t see why not. But if it’s just you on a server in your garage, then there would be no advantage to either you or to the ecosystem for PyPI to federate with your server.
That’s why API tokens are still supported as a first-class authentication mechanism: Trusted Publishing is simply not a good fit in all possible scenarios.
pamcake
a month ago
> if it’s just you on a server in your garage, then there would be no advantage to either you or to the ecosystem for PyPI to federate with your server.
Why not leave the decision on what providers to trust to users, instead of having a centrally managed global allowlist at the registry? Why should the registry admin be the one to decide who is fit to publish for any and all packages?
woodruffw
a month ago
> Why not leave the decision on what providers to trust to users, instead of having a centrally managed global allowlist at the registry?
We do leave it to users: you can always use an API token to publish to PyPI from your own developer machine (or server), and downstreams are always responsible for trusting their dependencies regardless of how they’re published.
The reason Trusted Publishing is limited at the registry level is because it takes time and effort (from mostly volunteers) to configure and maintain for each federated service, and the actual benefit of it rounds down to zero when a given service has only one user.
> Why should the registry admin be the one to decide who is fit to publish for any and all packages?
Per above, the registry admin doesn’t make a fitness decision. Trusted Publishing is an optional mechanism.
(However, this isn’t to say that the registry doesn’t reserve this right. They do, to prevent spamming and other abuse of the service.)
acdha
a month ago
They’re running the most popular registry but nothing says you can’t use your own to implement whatever policy you want. The default registry has a tricky balance of needing to support inexperienced users while also only having a very modest budget compared to the companies which depend on it, and things like custom authentication flows are disproportionately expensive.
pamcake
a month ago
What's the issue exactly?
They seem to handle account signups with email addresses from unknown domain names just as well as from hotmail.com and gmail.com. I don't see how this is any different.
The whole point of standards like OIDC (and supposedly TP) is that there is no need for provider-specific implementations or custom auth flows as long as you follow the spec and protocol. It's just some fields that can be put in a settings UI configurable by the user.
woodruffw
a month ago
It’s completely different. An email signup doesn’t involve a persistent trust relationship between PyPI and an OIDC identity provider. The latter imposes code changes, availability requirements, etc.
(But also: for completely unrelated reasons, PyPI can and will ban email domains that it believes are sources of abuse.)
eesmith
a month ago
According to their docs, they "have high standards for overall reliability and security in the operation of a supported Identity Provider: in practice, this means that a home-grown or personal use IdP will not be eligible."
If you think your setup meets those standards, you'll need to use Microsoft (TM) GitHub (R) to contact them.
VorpalWay
a month ago
In other words, it is a clear centralization drive. No two ways about it.
eesmith
a month ago
PyPI is already centralized.
Back when I started with PyPI, manual upload through the web interface was the only possibility. Have they gotten rid of that?
My understanding is that "trusted publishing"[0] was meant as an additional alternative to that sort of manual processing. It was never decentralized. As I recall, the initial version only supported GitHub and (I think) GitLab.
[0] I do not trust Microsoft as an intermediary to my software distribution. I don't use Microsoft products or services, including GitHub.
Yes, this makes contacting PyPI support via GitHub impossible for me. That is one of the reasons I stopped using PyPI and instead distribute my wheels from my own web site.
jopsen
a month ago
npm is centralized to start with, so how is this a problem?
epage
a month ago
I'm not familiar with the npm ecosystem, so maybe I'm misunderstanding this, but it sounds like they removed support for local publishes (via a token) in favor of CI publishing using Trusted Publishing.
If that is correct: when Trusted Publishing was proposed for Rust, it was discussed that it was not meant to replace local publishing, only to harden CI publishing.
woodruffw
a month ago
> If that is correct: when Trusted Publishing was proposed for Rust, it was discussed that it was not meant to replace local publishing, only to harden CI publishing.
Yes, that's right, and that's how it was implemented for both Rust and Python. NPM seems to have decided to do their own thing here.
(More precisely, I think NPM still allows local publishing with an API token, they just won't grant long-lived ones anymore.)
the_mitsuhiko
a month ago
I think the path to dependency on closed publishers was opened wide with the introduction of both attestations and trusted publishing. People now have assigned extra qualities to such releases and it pushes the ecosystem towards more dependency on closed CI systems such as github and gitlab.
It was well intentioned, but I don't think the ramifications are great.
woodruffw
a month ago
> People now have assigned extra qualities to such releases and it pushes the ecosystem towards more dependency on closed CI systems such as github and gitlab.
I think this is unfortunately true, but it's also a tale as old as time. I think PyPI did a good job of documenting why you shouldn't treat attestations as evidence of security modulo independent trust in an identity[1], but the temptation to verify a signature and call it a day is great for a lot of people.
Still, I don't know what a better solution is -- I think there's general agreement that packaging ecosystems should have some cryptographically sound way for responsible parties to correlate identities to their packages, and that previous techniques don't have a great track record.
(Something that's noteworthy is that PyPI's implementation of attestations uses CI/CD identities because it's easy, but that's not a fundamental limitation: it could also allow email identities with a bit more work. I'd love to see more experimentation in that direction, given that it lifts the dependency on CI/CD platforms.)
blibble
a month ago
> It was well intentioned, but I don't think the ramifications are great.
as always, the road to hell is paved with good intentions
the term "Trusted Publishing" implies everyone else is untrusted
quite why anyone would think Microsoft is considered trustworthy, or competent at operating critical systems, I don't know
woodruffw
a month ago
> the term "Trusted Publishing" implies everyone else is untrusted
No, it just means that you're explicitly trusting a specific party to publish for you. This is exactly the same as you'd normally do implicitly by handing a CI/CD system a long-lived API token, except without the long-lived API token.
(The technique also has nothing to do with Microsoft, and everything to do with the fact that GitHub Actions is the de facto majority user demographic that needs targeting whenever doing anything for large OSS ecosystems. If GitHub Actions was owned by McDonalds instead, nothing would be any different.)
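For concreteness, the exchange looks roughly like this (a minimal sketch, assuming GitHub Actions' OIDC request variables and PyPI's mint-token endpoint; details are illustrative rather than normative):

    # Sketch of the Trusted Publishing exchange: no long-lived secret is
    # configured anywhere; the CI job proves its identity via OIDC instead.
    import os
    import requests

    # 1. Ask the IdP (GitHub Actions here) for a short-lived OIDC token
    #    asserting this workflow's identity, with PyPI as the audience.
    resp = requests.get(
        os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"],
        params={"audience": "pypi"},
        headers={
            "Authorization": "bearer " + os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"]
        },
    )
    oidc_token = resp.json()["value"]

    # 2. PyPI checks the IdP's signature over the token and, if the claims
    #    match a Trusted Publisher configured on the project, mints a
    #    short-lived, project-scoped API token for the upload.
    minted = requests.post(
        "https://pypi.org/_/oidc/mint-token",
        json={"token": oidc_token},
    )
    api_token = minted.json()["token"]  # used once, then expires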
fc417fc802
a month ago
> This is exactly the same as you'd normally do implicitly by handing a CI/CD system a long-lived API token, except without the long-lived API token.
The other difference is being subjected to a whitelisting approach. That wasn't previously the case.
It's frustrating that seemingly every time better authentication schemes get introduced they come with functionality for client and third party service attestation baked in. All we ever really needed was a standardized way to limit the scope of a given credential coupled with a standardized challenge format to prove possession of a private key.
woodruffw
a month ago
> The other difference is being subjected to a whitelisting approach. That wasn't previously the case.
You are not being subjected to one. Again: you can always use an API token with PyPI, even on a CI/CD platform that PyPI knows how to do Trusted Publishing against. It's purely optional.
> All we ever really needed was a standardized way to limit the scope of a given credential coupled with a standardized challenge format to prove possession of a private key.
That is what OIDC is. Well, not for a private key, but for a set of claims that constitute a machine identity, which the relying party can then do whatever it wants with.
But standards and interoperability don't mean that any given service will just choose to federate with every other service out there. Federation always has up-front and long-term costs that need to be balanced with actual delivered impact/value; for a single user on their own server, the actual value of OIDC federation versus an API token is nil.
fc417fc802
a month ago
Right, I meant that the new scheme is subject to a whitelist. I didn't mean to imply that you can't use the old scheme anymore.
> Federation always has up-front and long-term costs
Not particularly? For example there's no particular cost if I accept email from outlook today but reverse that decision and ban it tomorrow. I don't immediately see a technical reason to avoid a default accept policy here.
> for a single user on their own server, the actual value of OIDC federation versus an API token is nil.
The value is that you can do away with long lived tokens that are prone to theft. You can MFA with your (self hosted) OIDC service and things should be that much more secure. Of course your (single user) OIDC service could get pwned but that's no different than any other account compromise.
I guess there's some nonzero risk that a bunch of users all decide to use the same insecure OIDC service. But you might as well worry that a bunch of them all decide to use an insecure password manager.
> Well, not for a private key, but for a set of claims that constitute a machine identity
What's the difference between "set of claims" and "private key" here?
That last paragraph in GP was more a tangential rant than directly on topic BTW. I realize that OIDC makes sense here. The issue is that as an end user I have more flexibility and ease of use with my SSH keys than I do with something like a self hosted OIDC service. I can store my SSH keys on a hardware token, or store them on my computer blinded so that I need a hardware token or TPM to unlock them, or lots of other options. The service I'm connecting to doesn't need to know anything about my workflow. Whereas if I self-host something like OIDC, managing and securing the service becomes an entire thing, on top of which many services arbitrarily dictate "thou shalt not self host".
It's a general trend that as new authentication schemes have been introduced they have generally included undesirable features from the perspective of user freedom. Adding insult to injury those unnecessary features tend to increase the complexity of the specification. In contrast, it's interesting to think how things might work if what we had instead was a single widely accepted challenge scheme such as SSH has. You could implement all manner of services such as OIDC on top of such a primitive while end users would retain the ability to directly use the equivalent of an SSH key.
woodruffw
a month ago
> Not particularly? For example there's no particular cost if I accept email from outlook today but reverse that decision and ban it tomorrow. I don't immediately see a technical reason to avoid a default accept policy here.
Accepting email isn't really the same thing. I've linked some resources elsewhere in this thread that explain why OIDC federation isn't trivial in the context of machine identities.
> The value is that you can do away with long lived tokens that are prone to theft. You can MFA with your (self hosted) OIDC service and things should be that much more secure. Of course your (single user) OIDC service could get pwned but that's no different than any other account compromise.
You can already do this by self-attenuating your PyPI API token, since it's a Macaroon. We designed PyPI's API tokens with exactly this in mind.
(This isn't documented particularly well, since nobody has clearly articulated a threat model in which a single user runs their own entire attenuation service only to restrict a single or small subset of credentials that they already have access to. But you could do it, I guess.)
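If you wanted to, the mechanics would look something like this (a sketch using the pymacaroons library; the caveat body is hypothetical, since PyPI's exact caveat schema isn't a stable public interface):

    # Sketch: attenuating an existing PyPI API token client-side.
    # A macaroon can be narrowed by anyone holding it; the registry
    # verifies the added caveats server-side when the token is used.
    from pymacaroons import Macaroon

    token = "pypi-AgEIcHlwaS5vcmc..."  # an existing API token (truncated)
    m = Macaroon.deserialize(token.removeprefix("pypi-"))

    # Add a first-party caveat restricting what the token may do. The
    # caveat format below is made up for illustration.
    m.add_first_party_caveat('{"permissions": {"projects": ["sampleproject"]}}')

    attenuated = "pypi-" + m.serialize()  # strictly less powerful than `token`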
> What's the difference between "set of claims" and "private key" here?
A private key is a cryptographic object; a "set of claims" is (very literally) a JSON object that was signed over as the payload of a JWT. You can't sign (or encrypt, or whatever) with a set of claims naively; it's just data.
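To make that concrete, the claim set an IdP like GitHub Actions signs over looks roughly like this (claim names from GitHub's OIDC docs; values illustrative):

    # The "machine identity" is just signed-over data like this; there is
    # no key material in it that the workflow itself controls.
    claims = {
        "iss": "https://token.actions.githubusercontent.com",
        "sub": "repo:octo-org/sampleproject:ref:refs/heads/main",
        "aud": "pypi",
        "repository": "octo-org/sampleproject",
        "workflow_ref": "octo-org/sampleproject/.github/workflows/release.yml@refs/heads/main",
    }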
fc417fc802
a month ago
Thank you again for taking the time to walk through this stuff in detail. I think what happened (is happening) with this stuff is a slight communication issue. Some of us (such as myself) are quite jaded at this point when we see a "new and improved" solution with "increased security" that appears to even maybe impinge on user freedoms.
I was unaware that macaroons could be used like that. That's really neat and that capability clears up an apparent point of confusion on my part.
Upon reflection, it makes sense that preventing self hosting would be a desirable feature of attested publishing. That way the developer, builder, and distributor are all independent entities. In that case the registry explicitly vetting CI/CD pipelines is a feature, not a bug.
The odd one out is trusted publishing. I had taken it to be an eventual replacement for API tokens (consider the npm situation for why I might have thought this) thus the restrictions on federation seemed like a problem. However if it's simply a temporary middle ground along the path to attested publishing and there's a separate mechanism for restricting self managed API tokens then the overall situation has a much better appearance (at least to my eye).
pamcake
a month ago
I mean, if it meant the infrastructure operated under a franchising model with distributed admin like McD, it would look quite different!
There is more than one way to interpret the term "trusted". The average dev will probably take away different implications than someone with your expertise and context.
I don't believe this double meaning is an unfortunate coincidence but part of clever marketing. A semantic or ideological sleight of hand, if you will.
In the same category: "Trusted Computing", "Zero trust" and "Passkeys are phishing-resistant"
woodruffw
a month ago
> I don't believe this double meaning is an unfortunate coincidence but part of clever marketing. A semantic or ideological sleight of hand, if you will.
I can tell you with absolute certainty that it really is just unfortunate. We just couldn’t come up with a better short name for it at the time; it was going to be either “Trusted Publishing” or “OIDC publishing,” and we determined that the latter would be too confusing to people who don’t know (and don’t care to know) what OIDC is.
There’s nothing nefarious about it, just the assumption that people would understand “trusted” to mean “you’re putting trust in this,” not “you have to use $vendor.” Clearly that assumption was not well founded.
fc417fc802
a month ago
Maybe signed publishing or verified publishing would have been better terms?
woodruffw
a month ago
It’s neither signed nor verified, though. There’s a signature involved, but that signature is over a JWT, not over the package.
(There’s an overlaid thing called “attestations” on PyPI, which is a form of signing. But Trusted Publishing itself isn’t signing.)
fc417fc802
a month ago
Re signed - that is a fair point, although it raises the question, why is the distributed artifact not cryptographically authenticated?
Maybe I'm misunderstanding but I thought the whole point of the exercise was to avoid token compromise. Framed another way that means the goal is authentication of the CI/CD pipeline itself, right? Wouldn't signing a fingerprint be the default solution for that?
Unless there's some reason to hide the build source from downstream users of the package?
Re verified, doesn't this qualify as verifying that the source of the artifact is the expected CI/CD pipeline? I suppose "authenticated publishing" could also work for the same reason.
woodruffw
a month ago
> why is the distributed artifact not cryptographically authenticated?
With what key? That’s the layer that “attestations” add on top, but with Trusted Publishing there’s no user/package-associated signature.
> Maybe I'm misunderstanding but I thought the whole point of the exercise was to avoid token compromise. Framed another way that means the goal is authentication of the CI/CD pipeline itself, right? Wouldn't signing a fingerprint be the default solution for that?
Yes, the goal is to authenticate the CI/CD pipeline (what we’d call a “machine identity”). And there is a signature involved, but it only verifies the identity of the pipeline, not the package being uploaded by that pipeline. That’s why we layer attestations on top.
(The reasons for this are unfortunately nuanced but ultimately boil down to it being hard to directly sign arbitrary inputs with just OIDC in a meaningful way. I have some slides from talks I gave in the past that might help clarify Trusted Publishing, the relationship with signatures/attestations, etc.[1][2])
> I suppose "authenticated publishing" could also work for the same reason.
I think this would imply that normal API token publishing is somehow not authenticated, which would be really confusing as well. It’s really not easy to come up with a name that doesn’t have some amount of overlap with existing concepts, unfortunately.
fc417fc802
a month ago
> imply that normal API token publishing is somehow not authenticated
Fair enough, although the same reasoning would imply that API token publishing isn't trusted ... well after the recent npm attacks I suppose it might not be at that.
> With what key?
> And there is a signature involved,
So there's already a key involved. I realize its lifetime might not be suitable but presumably the pipeline itself either already possesses or could generate a long lived key to be registered with the central service.
> but it only verifies the identity of the pipeline,
I thought verifying the identity of the pipeline was the entire point? The pipeline signing a fingerprint of the package would enable anyone to verify the provenance of the complete contents (either they'd need a way to look up the key or you could do TOFU, but I digress). There's value in being able to verify the integrity of the artifacts in your local cache.
Also, the more independent layers of authentication there are the fewer options an attacker will have. A hypothetical artifact that carried signatures from the developer, the pipeline, and the registry would have a very clear chain of custody.
> it being hard to directly sign arbitrary inputs with just OIDC in a meaningful way
At the end of the day you just need to somehow end up in a situation where the pipeline holds a key that has been authenticated by the package registry. From that point on I'd think that the particular signature scheme would become a trivial implementation detail; you stuff the output into some json or something similar and get on with life.
Has some key complexity gone over my head here?
BTW please don't take this the wrong way. It's not my intent to imply that I know better. As long as the process works it isn't my intent to critique it. I was just honestly surprised to learn that the package content itself isn't signed by the pipeline to prove provenance for downstream consumers and from there I'm just responding to the reasoning you gave. But if the current process does what it set out to do then I've no grounds to object.
woodruffw
a month ago
> So there's already a key involved. I realize its lifetime might not be suitable but presumably the pipeline itself either already possesses or could generate a long lived key to be registered with the central service.
The key involved is the OIDC IdP's key, which isn't controlled by the maintainer of the project. I think it would be pretty risky to allow this key to directly sign for packages, because this would imply that any party that can use that key for signing can sign for any package. This would mean that any GitHub Actions workflow anywhere would be one signing bug away from impersonating signatures for every PyPI project, which would be exceedingly not good. It would also make the insider risk from a compromised CI/CD provider much larger.
(Again, I really recommend taking a look at the talks I linked. Both Trusted Publishing and attestations were multi-year projects that involved multiple companies, cryptographers, and implementation engineers, and most of your - very reasonable! - questions came up for us as well while designing and planning this work.)
> I thought verifying the identity of the pipeline was the entire point? The pipeline signing a fingerprint of the package would enable anyone to verify the provenance of the complete contents (either they'd need a way to look up the key or you could do TOFU, but I digress). There's value in being able to verify the integrity of the artifacts in your local cache.
There are two things here:
1. Trusted Publishing provides a verifiable link between a CI/CD provider (the "machine identity") and a packaging index. This verifiable link is used to issue short-lived, self-scoping credentials. Under the hood, Trusted Publishing relies on a signature from the CI/CD provider (which is an OIDC IdP) to verify that link, but that signature is only over a set of claims about the machine identity, not the package identity.
2. Attestations are a separate digital signing scheme that can use a machine identity. In PyPI's case, we bootstrap trust in a given machine identity by seeing if a project is already enrolled against a Trusted Publisher that matches that identity. But other packaging ecosystems may do other things; I don't know how NPM's attestations work, for example. This digital signing scheme uses a different key, one that's short-lived and isn't managed by the IdP, so that signing events can be made transparent (in the "transparency log" sense) and are associated more meaningfully with the machine identity, not the IdP that originally asserted the machine identity.
> At the end of the day you just need to somehow end up in a situation where the pipeline holds a key that has been authenticated by the package registry. From that point on I'd think that the particular signature scheme would become a trivial implementation detail; you stuff the output into some json or something similar and get on with life.
Yep, this is what attestations do. But a key piece of nuance: the pipeline doesn't "hold" a key per se; it generates a new short-lived key on each run and binds that key to the verified identity sourced from the IdP. This achieves the best of both worlds: users don't need to maintain a long-lived key, and the IdP itself is only trusted as an identity source (and is made auditable for issuance behavior via transparency logging). The end result is that clients that verify attestations don't verify using a specific key; they verify using an identity, and ensure that any particular key matches that identity as chained through an X.509 CA. That entire process is called Sigstore[1].
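As a sketch of what that identity-based verification looks like for a consumer (assuming sigstore-python's high-level API; the file names and workflow identity here are made up):

    # Sketch: verifying a package against an *identity*, not a pinned key.
    from pathlib import Path

    from sigstore.models import Bundle
    from sigstore.verify import Verifier
    from sigstore.verify.policy import Identity

    verifier = Verifier.production()
    bundle = Bundle.from_json(
        Path("sampleproject-1.0-py3-none-any.whl.sigstore").read_bytes()
    )

    # The ephemeral signing key is chained to this identity via an X.509
    # CA and checked against the transparency log; raises on failure.
    verifier.verify_artifact(
        Path("sampleproject-1.0-py3-none-any.whl").read_bytes(),
        bundle,
        Identity(
            identity="https://github.com/octo-org/sampleproject/.github/workflows/release.yml@refs/heads/main",
            issuer="https://token.actions.githubusercontent.com",
        ),
    )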
And no offense taken, these are good questions. It's a very complicated system!
fc417fc802
a month ago
> I think it would be pretty risky to allow this key to directly sign for packages, because this would imply that any party that can use that key for signing can sign for any package.
There must be some misunderstanding. For trusted publishing a short lived API token is issued that can be used to upload the finished product. You could instead imagine negotiating a key (ephemeral or otherwise) and then verifying the signature on upload.
Obviously the signing key can't be shared between projects any more than the API token is. I think I see where the misunderstanding arose now. Because I said "just verify the pipeline identity" and you interpreted that as "let end users get things signed by a single global provider key" or something to that effect, right?
The only difference I had intended to communicate was the ability of the downstream consumer to verify the same claim (via signature) that the registry currently verifies via token. But it sounds like that's more or less what attestation is? (Hopefully I understood correctly.) But that leaves me wondering why Trusted Publishing exists at all. By the time you've done the OIDC dance why not just sign the package fingerprint and be done with it? ("We didn't feel like it" is of course a perfectly valid answer here. I'm just curious.)
I did see that attestation has some other stuff about sigstore and countersignatures and etc. I'm not saying that additional stuff is bad, I'm asking if Trusted Publishing wouldn't be improved by offering a signature so that downstream could verify for itself. Was there some technical blocker to doing that?
> the IdP itself is only trusted as an identity source
"Only"? Doesn't being an identity source mean it can do pretty much anything if it goes rogue? (We "only" trust AD as an identity source.)
woodruffw
a month ago
> There must be some misunderstanding. For trusted publishing a short lived API token is issued that can be used to upload the finished product. You could instead imagine negotiating a key (ephemeral or otherwise) and then verifying the signature on upload.
From what authority? Where does that key come from, and why would a verifying party have any reason to trust it?
(I'm not trying to be tendentious, so sorry if it comes across that way. But I think you're asking good questions that lead to the design that we arrived at with attestations.)
> I did see that attestation has some other stuff about sigstore and countersignatures and etc. I'm not saying that additional stuff is bad, I'm asking if Trusted Publishing wouldn't be improved by offering a signature so that downstream could verify for itself. Was there some technical blocker to doing that?
The technical blocker is that there's no obvious way to create a user-originated key that's verifiably associated with a machine identity, as originally verified from the IdP's OIDC credential. You could do something like mash a digest into the audience claim, but this wouldn't be very auditable in practice (since there's no easy way to shoehorn transparency atop that). But some people have done some interesting exploration in that space with OpenPubKey[1], and maybe future changes to OIDC will make something like that more tractable.
> "Only"? Doesn't being an identity source mean it can do pretty much anything if it goes rogue? (We "only" trust AD as an identity source.)
Yes, but that's why PyPI (and everyone else who uses Sigstore) mediates its use of OIDC IdPs through a transparency logging mechanism. This is in effect similar to the situation with CAs on the web: a CA can always go rogue, but doing so would (1) be detectable in transparency logs, and (2) would get them immediately evicted from trust roots. If we observed rogue activity from GitHub's IdP in terms of identity issuance, the response would be similar.
fc417fc802
a month ago
Okay. I see the lack of benefit now but regardless I'll go ahead and respond to clear up some points of misunderstanding (and because the topic is worthwhile I think).
> From what authority?
The registry. Same as the API token right now.
> The technical blocker is that there's no obvious way to create a user-originated key
I'm not entirely clear on your meaning of "user originated" there. Essentially I was thinking something equivalent to the security of - pipeline generates ephemeral key and signs { key digest, package name, artifact digest }, registry auth server signs the digest of that signature (this is what replaces the API token), registry bulk data server publishes this alongside the package artifact.
But now I'm realizing that the only scenario where this offers additional benefit is in the event that the bulk data server for the registry is compromised but the auth server is not. I do think there's some value in that but the much simpler alternative is for the registry to tie all artifacts back to a single global key. So I guess the benefit is quite minimal. With both schemes downstream assumes that the registry auth server hasn't been compromised. So that's not great (but we already knew that).
That said, you mention IdP transparency logging. Could you not add an arbitrary residue into the log entry? An auth server compromise would still be game over but at least that way any rogue package artifacts would conspicuously be missing a matching entry in the transparency log. But that would require the IdP service to do its own transparency logging as well ... yeah this is quickly adding complexity for only very minimal gain.
Anyway. Hypothetical architectures aside, thanks for taking the time for the detailed explanations. Though it wasn't initially clear to me, the rather minimal benefit is more than enough to explain why this general direction wasn't pursued.
If anything I'm left feeling like maybe the ecosystems should all just switch directly to attested publishing.
pamcake
a month ago
Thanks for replying.
I'm certainly not meaning to imply that you are in on some conspiracy or anything - you were already in here clarifying things and setting the record straight in a helpful way. I think you are not representative of industry here (in a good way).
Evangelists are certainly latching on to the ambiguity and using it as an opportunity. Try to pretend you are a caveman dev or pointy-hair and read the first screenful of this. What did you learn?
https://github.blog/changelog/2025-07-31-npm-trusted-publish...
https://learn.microsoft.com/en-us/nuget/nuget-org/trusted-pu...
https://www.techradar.com/pro/security/github-is-finally-tig...
These were the top three results I got when I searched online for "github trusted publishing" (without quotes like a normal person would).
Stepping back, could it be that some stakeholders have a different agenda than you do and are actually quite happy about confusion?
I have sympathy for the fact that naming things is hard. This is Trusted Computing on repeat, but marketed to a generation of laymen who don't have that context. Also similar vibes to the centralization of OpenID/OAuth from last round.
On that note, looking at past efforts, I think the only way this works out is if it's open to self-managed providers from the start, not via selective global allowlisting of blessed platform partners one by one on the platform side. Just like with email, a domain name and following the protocol should be sufficient.
greggman65
a month ago
Rust and Python appear to still allow long-lived ones, so it's only a matter of time until they get the same issues, it would seem?
woodruffw
a month ago
For whatever reason, we haven't seen the same degree of self-perpetuating credential disclosure in either Rust or Python as an ecosystem. Maybe that trend won't hold forever, but that's the distinguishing feature here.
jacquesm
a month ago
> I think a lot of the pains here are self-inflicted on GitHub's part
It is spelled 'Microsoft'.
What did you think would happen long term? I remember when that acquisition happened and there were parties thrown all around, MS finally 'got' open source.
And never mind feeding all of the GitHub contents to their AI.
woodruffw
a month ago
My point was that these are political and logistical problems latent to GitHub/Microsoft/whatever, not to Trusted Publishing as a design. I don't think I materially disagree with you about Microsoft not having a sterling reputation.
jacquesm
a month ago
Yes, but I think that, more than anything, is the driver behind these decisions.
woodruffw
a month ago
Which ones? It wasn't a driver behind our decisions when we designed Trusted Publishing originally; the fact that GitHub has been such a mess has been a consistent source of tsuris in my life.
pamcake
a month ago
"In its current form" is in context of NPM, where I think it's accurate.
Great to see PyPi taking a more reasonable path.