OCSP Service Has Reached End of Life

118 points, posted 6 hours ago
by pfexec

34 Comments

lol768

4 hours ago

The ship has very much sailed now with ballot SC63, and this is the result, but I still don't think CRLs are remotely a perfect solution (nor do I think OCSP was unfixable). You run into so many problems with their size, updates not propagating immediately, etc. It's an ugly solution to the problem, and you then have to introduce further hacks (Bloom filters) atop it all to make the whole mess work. I'm glad that Mozilla have done lots of work in this area with CRLite, but it does all feel like a bodge.

The advantages of OCSP were that you got a real-time understanding of the status of a certificate and had no need to download large CRLs, which become stale very quickly. If you set security.OCSP.require in the browser appropriately then you didn't have any risk of the browser failing open, either. I did that in the browser I was daily-driving for years and can count on one hand the number of times I ran into OCSP responder outages.
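For reference, this is roughly what such a client-side OCSP check looks like, sketched in Python with the `cryptography` and `requests` packages (the responder URL normally comes from the certificate's Authority Information Access extension; the file names and URL below are placeholders):

```python
# Minimal sketch of a client-side OCSP check; cert.pem / issuer.pem and
# the responder URL are placeholders.
import requests
from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.hazmat.primitives import hashes, serialization

cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

# Build a DER-encoded OCSP request for this (cert, issuer) pair.
# SHA-1 is used for the CertID, which many responders still expect.
builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
request_der = builder.build().public_bytes(serialization.Encoding.DER)

# POST it to the responder (real clients read this URL from the AIA
# extension of the certificate being checked).
http_resp = requests.post(
    "http://ocsp.example-ca.invalid/",
    data=request_der,
    headers={"Content-Type": "application/ocsp-request"},
)
ocsp_resp = ocsp.load_der_ocsp_response(http_resp.content)
print(ocsp_resp.certificate_status)  # GOOD, REVOKED, or UNKNOWN
```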

The privacy concerns could have been solved through adoption of Must-Staple, and you could then operate the OCSP responders purely for web servers and folks doing research.
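Must-Staple itself is just the TLS Feature extension (RFC 7633) carrying `status_request`; a rough sketch of detecting it with the same `cryptography` package (`cert.pem` again a placeholder):

```python
# Rough sketch: detect the Must-Staple marker, i.e. a TLS Feature
# extension containing status_request (RFC 7633).
from cryptography import x509
from cryptography.x509 import TLSFeatureType
from cryptography.x509.oid import ExtensionOID

cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
try:
    ext = cert.extensions.get_extension_for_oid(ExtensionOID.TLS_FEATURE)
    must_staple = TLSFeatureType.status_request in ext.value
except x509.ExtensionNotFound:
    must_staple = False
print("must-staple:", must_staple)
```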

And let's not pretend users aren't already sending all the hostnames they are visiting to their selected DNS server. Why is that somehow okay, but OCSP not?

ekr____

4 hours ago

The problem with requiring OCSP stapling is that it's not practically enforceable without breakage.

The underlying dynamic of any change to the Web ecosystem is that it has to be incrementally deployable, in the sense that when element A changes it doesn't experience breakage with the existing ecosystem. At present, approximately no Web servers do OCSP stapling, so any browser which requires it will just not work. In the past, when browsers have wanted to make changes like this, they have had to give years of warning, and then they can only actually make the change once nearly the entire ecosystem has switched, so that there is minimal breakage. This is a huge effort and only worth doing when you have a real problem.

As a reference point, it took something like 7 years to disable SHA-1 in browsers [0], and that was an easier problem because (1) CAs were already transitioning, (2) it didn't require any change to the servers, unlike OCSP stapling, which requires them to regularly fetch OCSP responses [1], and (3) there was a clear security reason to make the change. By contrast, with Firefox's introduction of CRLite, all the major browsers now have some central revocation system, which works today as opposed to years from now and doesn't require any change to the servers.

[0] https://security.googleblog.com/2014/09/gradually-sunsetting...

[1] As an aside, it's not clear that OCSP stapling is better than short-lived certs.

lol768

3 hours ago

I think you are correct. There were similar issues with Firefox rolling out SameSite=Lax by default, and I think those plans are now indefinitely on hold as a result of the breakage it caused. It's a hard problem to solve.

> As an aside it's not clear that OCSP stapling is better than short-lived certs.

I agree this should be the end goal, really.

woodruffw

3 hours ago

> Why is that somehow okay, but OCSP not?

I think the argument isn't that it's okay, but that one bad thing doesn't mean we should do two bad things. Just because my DNS provider can see my domain requests doesn't mean I also want arbitrary CAs on the Internet to see them.

dogma1138

7 minutes ago

I never understood why they didn't try to push OCSP into DNS.

You have to trust the DNS server more than the server you are reaching out to, since the DNS server can direct you anywhere and already sees everything you are trying to access anyhow.

gerdesj

2 hours ago

"And let's not pretend users aren't already sending all the hostnames they are visiting to their selected DNS server. Why is that somehow okay, but OCSP not?"

Running your own DNS server is rather easier than messing with OCSP. You do at least have a choice, even if it is bloody complicated.

SSL certs (and I refuse to call them TLS) will soon have a maximum lifetime of forty-something days. OCSP and the rest become moot.

dogma1138

6 minutes ago

You are still reaching out to the authoritative servers for that domain, so someone other than the destination knows what you are looking for.

The 47-day lifetime isn't coming until 2029, and it might get pushed back.

Also, 47 days is still too long if a certificate is compromised.

PunchyHamster

2 hours ago

It's funny that putting some random records in DNS demonstrates enough "ownership" to get a cert issued, but we can't use the same method for publishing revocations.

ocdtrekkie

2 hours ago

The entire existence of CAs is a pointless and mystical venture to ensure centralized control of the Internet that, now that issuance is entirely domain-validated, provides absolutely no security benefit over DNS. If your domain registrar/name server provider is compromised, CAs are already a lost cause.

tptacek

an hour ago

The DNS is more centralized than the WebPKI.

ocdtrekkie

an hour ago

Three browser companies on the west coast of the US effectively control all decision-making for the WebPKI. The entire membership of the CA/B Forum is what, a few dozen? Mostly companies which have no reason to exist except serving math equations for rent.

How many companies now run TLDs? Yeah, .com is centralized, but between ccTLDs, new TLDs, etc., tons. And domain registrars and web hosts which provide DNS services? Thousands. And importantly, hosting companies and DNS providers are trivially easy to change between.

The idea that Apple or Google can unilaterally decide what the baseline requirements should be needs to be understood as an existential threat to the Internet.

And again, every single requirement CAs implement is irrelevant if someone can log into your web host. The entire thing is an emperor-has-no-clothes situation.

tptacek

38 minutes ago

Incoherent. Browser vendors exert control by dint of controlling the browsers themselves, and are in the picture regardless of the trust system used for TLS. The question is, which is more centralized: the current WebPKI, which you say is also completely dependent on the DNS but involves more companies, or the DNS itself, which axiomatically involves fewer companies?

I always love when people bring the ccTLDs into these discussions, as if Google could leave .COM when .COM's utterly unaccountable ownership manipulates the DNS to intercept Google Mail.

sugarpimpdorsey

2 hours ago

This will not impact Chrome in any meaningful way because - in typical Google fashion - they invented their own bullshit called CRLSets, which does not perform OCSP or CRL checks in any way, but rather periodically downloads a pruned blacklist from Google, which it then uses to screen certificates.

Most people don't realize this.

It's quite insane given that Chrome will by default not check CRLs *at all* for internal, enterprise CAs.

stusmall

2 hours ago

What in the Sam Hill? This is a new one on me. Does anyone have any reading on their reasoning for why?

na4ma4

2 hours ago

I vaguely remember similar reasoning for why golang doesn't support them: they're "antiquated and useless".

I hit that roadblock a lot when trying to do mTLS in the browser - that, and the dropped support for the [KeyGen](https://www.w3docs.com/learn-html/html-keygen-tag.html) tag.

creatonez

an hour ago

What are you on about? Literally all browsers have adopted the same strategy. It's not some secret they're trying to hide from you. It's a good thing.

zahlman

2 hours ago

Does this mean I should turn "security.OCSP.require" back off in Firefox?

GauntletWizard

4 hours ago

OCSP has always represented a terrible design. If clients require it, then it becomes just an override on the notAfter date included in the certificate, one that requires online access to the CA's servers. If it is not required, then it is useless, because blocking the OCSP responses is well within the capabilities of any man-in-the-middle attacker, and it makes the responders themselves DDoS targets.

The alternative to the privacy nightmare is OCSP stapling, which has the first problem once again: it adds complexity to the protocol just to add an override of the notAfter attribute, when the notAfter attribute could be updated just as easily with the original protocol by reissuing the certificate. It was a Band-Aid on the highly manual process of certificate issuance that once dominated the space.

Good riddance to OCSP; I for one will not miss it.

jeroenhd

4 hours ago

OCSP stapling was a good solution in the age of certificates that were valid for 10 years (which was the case for basic HTTPS certificates back in 2011, when OCSP stapling was introduced). In the age of 90-day certificates (to be reduced to a maximum of 47 days in a few years), it's not quite as necessary anymore, but I don't think OCSP stapling is that problematic a solution.

Certificates in air-gapped networks are problematic, but that problem can be solved with dedicated CRL-only certificate roots that suffer all of the downsides of CRLs for cases where OCSP stapling isn't available.

Nobody will miss OCSP now that it's dead, but assuming you used stapling I think it was a decent solution to a difficult problem that plagued the web for more than a decade and a half.

tremon

3 hours ago

But that 47-day lifetime is enforced by the certificate authority, not by the browser, right? So a bad actor can still issue a multi-year certificate for itself, and in the absence of side-channel verification the browser is none the wiser. Or will browsers be instructed to reject long-lived certificates under specific conditions?

sugarpimpdorsey

3 hours ago

Wrong. Enforcement is done by the browser. Yes, a CA's certificate policy may govern how long a certificate it will issue. But should an error occur and a long-lived cert be issued (even maliciously), the browser will reject it.

The browser-CA cartels stay relatively in sync.

You can verify this for yourself by creating and trusting a local CA and trying to issue a 5-year certificate. It won't work. You'll have a valid cert, but it won't be trusted by the browser unless the lifetime is below their arbitrary limit. Yet that certificate would continue to be valid for non-browser purposes.
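For anyone who wants to try, a minimal sketch of minting such a long-lived test cert with Python's `cryptography` package (all names here are placeholders; trust the result locally and see what your browser does):

```python
# Sketch: mint a ~5-year self-signed certificate to test browser
# lifetime limits. All names are placeholders.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "test.local")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: subject == issuer
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=5 * 365))
    .sign(key, hashes.SHA256())
)
with open("test.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```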

ameliaquining

an hour ago

I just did this with a 20-year certificate and it worked fine in Chrome and Firefox. That said, my understanding is that the browsers exempt custom roots from these kinds of policies, which are only meant to constrain the behavior of publicly trusted CAs.

avianlyric

3 hours ago

> So a bad actor can still issue a multi-year certificate for itself, and in the absence of side-channel verification the browser is none the wiser.

How would a bad actor do that without a certificate authority being involved?

arccy

3 hours ago

The browsers will verify, and every cert will be checked against transparency logs. You won't be able to hide a long-lived cert for very long.
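As an illustration, a sketch of scanning CT for a domain via crt.sh's public JSON interface and flagging unusually long-lived certs (the field names assume crt.sh's current JSON output; example.com is a placeholder):

```python
# Sketch: pull CT-logged certs for a domain from crt.sh and flag any
# whose validity window exceeds the current ~398-day baseline limit.
from datetime import datetime
import requests

rows = requests.get(
    "https://crt.sh/", params={"q": "example.com", "output": "json"}
).json()
for row in rows:
    not_before = datetime.fromisoformat(row["not_before"])
    not_after = datetime.fromisoformat(row["not_after"])
    if (not_after - not_before).days > 398:
        print(row["serial_number"], row["not_before"], row["not_after"])
```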

tgsovlerkhgsel

3 hours ago

Shortening the certificate lifespan to e.g. 24h would have a number of downsides:

Certificate volume in Certificate Transparency would increase a lot, adding load to the logs and making it even harder to follow CT.

Issues with domain validation would turn into an outage within 24h rather than when the cert expires - though that could be a benefit in some cases (invalidating old certs quickly if a domain changes owner or is recovered after a compromise/hijack).

OCSP is simpler and has fewer dependencies than issuance (no need for multi-perspective domain validation or the interaction with CT), so keeping it highly available should be easier than keeping issuance highly available.

With stapling (which would have been required for privacy) often poorly implemented and rarely deployed, and browsers not requiring OCSP, this was a sensible decision.

tptacek

2 hours ago

Well, OCSP is dead, so the real argument is over how low certificate lifetimes will be, not whether or not we might make a go of OCSP.

charcircuit

2 hours ago

>would increase a lot

You can delete old logs or come up with a way to download the same thing with less disk space. Even if the current architecture does not scale, we can always change it.

>even harder to follow CT.

It should be no harder to follow than before.

tgsovlerkhgsel

an hour ago

Following CT (without relying on a third-party service) right now is a scale problem, and increasing scale by at least another order of magnitude will make it worse.

I was trying to process CT logs locally. I gave up when I realized that I'd be looking at over a week even if I optimized my software to the point that it could process the data at 1 Gbps (and the logs were providing the data at that rate), and that was a while ago.

With the current issuance rate, it's barely feasible to locally scan the CT logs with a lot of patience if you have a 1 Gbps line.

https://letsencrypt.org/2025/08/14/rfc-6962-logs-eol states "The current storage size of a CT log shard is between 7 and 10 terabytes". So that's a day at 1 Gbps for one single log shard of one operator, ignoring overhead.
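The back-of-the-envelope arithmetic, assuming a 10 TB shard and a fully saturated 1 Gbps link:

```python
# Time to pull one 10 TB CT log shard at 1 Gbps, ignoring overhead.
shard_bytes = 10e12    # 10 TB
link_bits_per_s = 1e9  # 1 Gbps
hours = shard_bytes * 8 / link_bits_per_s / 3600
print(f"{hours:.1f} hours")  # ~22.2 hours, i.e. about a day
```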

lokar

2 hours ago

You could extend the format to account for repetition of otherwise-identical short-TTL certs.

layer8

3 hours ago

> the not after attribute could be updated just as easily with the original protocol, reissuing the certificate.

That's not a viable solution if the server you want to verify is compromised. The point of CRLs and OCSP is exactly to ask the authority one level up, without the entity you want to verify being able to interfere.

In non-TLS uses of X.509 certificates, OCSP is still very much a thing, by the way, as there is no real alternative for longer-lived certificates.

arccy

3 hours ago

Actually, that's pretty close to where we're going with ever-shorter certificate lifetimes...

GauntletWizard

2 hours ago

In this scenario, where OCSP is required and stapled, the CA can simply refuse to reissue the certificate if the host is compromised. It does not matter whether it refuses to issue an OCSP response or a new short-lived cert.