Docker Systems Status: Full Service Disruption

278 points by l2dy 10 hours ago | 111 comments

tj_591

2 hours ago

Hi all, Tushar from Docker here. We’re sorry about the impact our current outage is having on many of you. Yes, this is related to the ongoing AWS incident, and we’re working closely with AWS on getting our services restored. We’ll provide regular updates on dockerstatus.com.

We know how critical Docker Hub and our other services are to millions of developers, and we’re sorry for the pain this is causing. Thank you for your patience as we work to resolve this incident. We’ll publish a post-mortem in the next few days, once this incident is fully resolved and we have a remediation plan.

reader_1000

8 hours ago

> We have identified the underlying issue with one of our cloud service providers.

Isn't everyone using multiple cloud providers nowadays? Why are they affected by a single cloud provider outage?

lvncelot

7 hours ago

I think more often than not, companies are using a single cloud provider, and even when multiple are used, it's either different projects with different legacy decisions or a conscious migration.

True multi-cloud is not only very rare, it's an absolute pain to manage as soon as people start using any vendor-specific functionality.

dijit

4 hours ago

> as soon as people start using any vendor-specific functionality

It's also true in circumstances where things have the same name but act differently.

You'd be forgiven for believing that AWS IAM and GCP IAM are the same thing, for example, but in GCP an IAM Role is simply a list of permissions that you can attach to an identity. In AWS, an IAM Role is the identity itself.

Other examples: if you're coming from GCP, you'd be forgiven for assuming that networks are global in AWS too, which will be annoying to fix later when you realise they're regional and you need to create peering connections.

Oh and while default firewall rules are stateful on both, if you dive into more advanced network security, the way rules are applied and processed can have subtle differences. The inherent global nature of the GCP VPC means firewall rules, by default, apply across all regions within that VPC, which requires a different mindset than AWS where rules are scoped more tightly to the region/subnet.

There's like, hundreds of these little details.
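
To make the IAM difference concrete, here's a rough sketch in CLI terms (resource names are made up, and the AWS trust policy file is omitted):

    # AWS: the role IS the identity; it carries a trust policy saying who may assume it
    aws iam create-role --role-name my-app-role \
        --assume-role-policy-document file://trust-policy.json
    aws iam attach-role-policy --role-name my-app-role \
        --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

    # GCP: the role is just a bundle of permissions; you grant it TO an identity
    gcloud iam roles create myAppRole --project=my-project \
        --permissions=storage.objects.get,storage.objects.list
    gcloud projects add-iam-policy-binding my-project \
        --member=serviceAccount:my-app@my-project.iam.gserviceaccount.com \
        --role=projects/my-project/roles/myAppRole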

DiggyJohnson

2 hours ago

Sounds like we’ve walked a similar path on this. Especially with IAM and network policies.

> There’s like, hundreds of these little details

Exactly. If it were just a handful of things, that would be fine. But it's often as you describe.

OtherShrezzing

4 hours ago

I think there's some irony in Docker being impacted specifically, as they're one of the main tools for achieving true multi-cloud portability.

DiggyJohnson

2 hours ago

Depends on whether you’re using Docker Desktop or Podman Desktop versus straight Docker/Podman, and where you’re pulling your images from.

ikiris

an hour ago

Multi-cloud is just a way to have the outages of both.

jelder

4 hours ago

No, that's pretty rare, and generally means you can't count on any features more sophisticated than VMs and object storage.

On the other hand, it's pretty embarrassing at this point for something as fundamental as Docker to be in a single region. Most cloud providers make inter-region failover reasonably achievable.

richardwhiuk

2 hours ago

Almost all cloud providers "help" here by having inter-region failures as well.

There are multiple AWS services which are "global" in the sense that they are entirely hosted out of us-east-1.

roywiggins

5 hours ago

You can be multi-cloud in the sense that you aren't dependent on any single provider, or in the sense that you are dependent on all of them.

pmontra

38 minutes ago

Because even if service A uses multiple cloud providers, not all the external services it depends on do the same, especially the smallest or cheapest ones. At least one of them is on AWS us-east-1, fails, and degrades service A or takes it down.

Being multi-cloud does not come for free: it costs time, engineers, knowledge, and ultimately money.

rcxdude

7 hours ago

Because it's hard enough to distribute a service across multiple machines in the same DC, let alone across multiple DCs and multiple providers.

walkabout

2 hours ago

> Isn't everyone using multiple cloud providers nowadays?

Oh yes. All of them, in fact, especially if you count what key vendors host on.

> Why are they affected by single cloud provider outage?

Every workload is only on one cloud. N.B. this doesn’t mean every workflow is on only one cloud. It's an important distinction, since that would be more stable.

DiggyJohnson

2 hours ago

Multi-cloud is not nearly as trivial to implement for real-world, complex projects as is often implied. Things get challenging the second your application steps off the happy path.

wredcoll

2 hours ago

> Isn't everyone using multiple cloud providers nowadays? Why are they affected by a single cloud provider outage?

No? I very much doubt anyone is doing that.

postexitus

7 hours ago

Not only are they not using multiple cloud providers, they are not even using multiple cloud locations.

madisp

6 hours ago

They are using multiple cloud providers, but judging by the Cloudflare R2 outage affecting them earlier this year, I guess all of them are on the critical path?

nobleach

6 hours ago

Looking at the landscape around me, no. Everyone is in crisis cost-cutting, "gotta show that same growth the C-suite saw during Covid" mode. So being multi-provider, and in some cases even multi-regional, is now off the table. It's sad because the product really suffers. But hey, "growth".

ic4l

9 hours ago

This broke our builds since we rely on several public Docker images, and by default, Docker uses docker.io.

Thankfully, AWS provides a docker.io mirror for those who can't wait:

  FROM public.ecr.aws/docker/library/{image_name}
In the error logs, the issue was mostly related to the authentication endpoint:

https://auth.docker.io → "No server is available to handle this request"

After switching to the AWS mirror, everything built successfully without any issues.
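
The same mirror works for ad-hoc pulls too, e.g. (image and tag here are just examples):

    docker pull public.ecr.aws/docker/library/alpine:3.20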

CamouflagedKiwi

8 hours ago

Mild irony that Docker is down because of the AWS outage, but the AWS mirror repos are still running...

firloop

9 hours ago

I wasn't able to get this working, but I was able to use Google's mirror[0] just fine.

Just had to change

    FROM {image_name}
to

    FROM mirror.gcr.io/{image_name} 
Hope this helps!

[0]: https://cloud.google.com/artifact-registry/docs/pull-cached-...

ic4l

9 hours ago

We tried this initially:

  FROM mirror.gcr.io/{image_name}
We received:

  failed to resolve source metadata for mirror.gcr.io/
So it looks like these services may not be true mirrors, just library proxies with a cache.

If your image is not cached on one of these, then you may be SOL.

da768

7 hours ago

During the last Docker Hub outage we found that Google mirrors lost all image tags after a while. Image digest references would probably work.
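
I.e. something like this in a Dockerfile (the digest here is a placeholder, not a real one):

    FROM mirror.gcr.io/library/python@sha256:<digest>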

KronisLV

9 hours ago

I guess people who are running their own registries like Nexus and build their own container images from a common base image are feeling at least a bit more secure in their choice right now.

Wonder how many builds or redeployments this will break. Personally I have nothing against Docker or Docker Hub, of course; I find them useful.

tom1337

8 hours ago

We are using base images, but unfortunately some GitHub Actions pull Docker images in their prepare phase. So while my application would build, I cannot deploy it, because the CI/CD depends on Docker Hub and you cannot change where these images are pulled from (so they cannot go through a pull-through cache)…

roryirvine

5 hours ago

My advice: document the issue, and use it to help justify spending time on removing those vestigial dependencies on Docker asap.

It's not just about reducing your exposure to third parties who you (presumably) don't have a contract with, it's also good mitigation against potential supply chain attacks - especially if you go as far as building the base images from scratch.

tom1337

5 hours ago

Yeah, we have thought about that. I also want to remove most dependencies on externally imported actions in GitHub CI and probably just go back to simple bash scripts. Our actions are not that complicated, and there is little benefit in using some external action to run ESLint over just running the command inside the workflow directly. Saves time and reduces dependencies; just need to find the time to do it…
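
For example, instead of a third-party ESLint action, a plain step like this does the job (a sketch; assumes ESLint is already a dev dependency):

    - name: Lint
      run: |
        npm ci
        npx eslint .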

enigmo

4 hours ago

Mirrors can be configured in dockerd or BuildKit. If you can update the config (might need a self-hosted runner?) it's a quick fix; see https://cloud.google.com/artifact-registry/docs/pull-cached-... for an example. AWS and Azure are similar.
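
For dockerd specifically, it's a one-liner in /etc/docker/daemon.json (restart the daemon afterwards), using Google's mirror as an example:

    {
      "registry-mirrors": ["https://mirror.gcr.io"]
    }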

nusl

8 hours ago

Currently unable to do much of anything new in dev/prod environments without manual workarounds. I'd imagine the impact is pretty massive.

Aside: seems Signal is also having issues. Damn.

cebert

8 hours ago

I’m not sure that the impact will be that big. Most organizations have their own mirrors for artifacts.

VenturingVole

8 hours ago

From what I've seen: I highly doubt it.

Edit to add: This might spur on a few more to start doing that, but people are quick to forget/prioritise other areas. If this keeps happening then it will change.

walkabout

2 hours ago

“Their own” can and often does mean something hosted on a major cloud provider (whether they manage it in-house or pay a vendor for their system).

nusl

8 hours ago

Yeah, perhaps. I don't know how many folks host mirrors. Most places I've worked for didn't, though this is anecdotal.

phillebaba

8 hours ago

I would say most people call it best practice, while only a minority actually does it.

CaptainOfCoit

7 hours ago

Seems related to size and/or maturity, if anything. I haven't seen any startup less than five years old doing anything like that, but I also haven't seen any huge enterprise not doing it. YMMV.

Sphax

9 hours ago

We run Harbor and mirror every base image using its Proxy Cache feature; it's quite nice. We've had this setup for years now, and while it works fine, Harbor has some rough edges.

thephyber

7 hours ago

I came here to mention that any non-trivial company depending on Docker images should look into a local proxy cache. It’s too much infra for a solo developer or tiny organization, but it's a good hedge against Docker Hub, GitHub, etc. downtime, and can run faster (less ingress transfer) if located in the same region as the rest of your infra.

jsmeaton

9 hours ago

Guess where we host Nexus...

frenkel

9 hours ago

Only if they get their base images from somewhere else...

bravetraveler

8 hours ago

Pull-through caches are still useful even when the upstream is down... assuming the image(s) were pulled recently. The HEAD request to upstream will obviously fail [when checking freshness], but the software is happy to serve what it has already pulled.

Depends on the implementation, of course: I'm speaking of 'distribution/distribution', the reference implementation. Harbor or whatever else may behave differently; I have no idea.
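
For reference, the pull-through bit of distribution's config.yml is tiny (a minimal sketch; storage and auth config omitted):

    proxy:
      remoteurl: https://registry-1.docker.io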

theanonymousone

4 hours ago

It's quite funny/interesting that this is higher on the HN front page than the news of the AWS outage that caused it.

mcintyre1994

4 hours ago

Not on the real secret front page! https://news.ycombinator.com/active :)

pknopf

3 hours ago

What does the "active" page sort by?

mcintyre1994

2 hours ago

According to https://news.ycombinator.com/lists it's "Most active current discussions"

I find that it better surfaces the best discussion when there are multiple threads (like in this example), and it keeps showing slightly older threads for longer when there's still discussion happening.

cakeday

2 hours ago

That's informative, I wasn't aware of that way to view HN, thanks.

phillebaba

8 hours ago

Shameless plug but this might be a good time to install Spegel in your Kubernetes clusters if you have critical dependencies on Docker Hub.

https://spegel.dev/

osivertsson

7 hours ago

If it really is fully open-source please make that more visible on your landing page.

It is a huge deal if I can start investigating and deploying such a solution as a techie right away, compared to having to go through all the internal hoops for a software purchase.

CaptainOfCoit

7 hours ago

How hard is it to go to the GitHub repository and open the LICENSE file that is in almost every repository? It would have taken you less time than writing that comment, and would have shown you it's under MIT.

rplnt

6 hours ago

It's not entirely uncommon for only parts of the solution to be open. So a license on one repo might not be the whole story, and looking further would take more time than giving a good suggestion to the author.

giobox

3 hours ago

Agreed. For all the people arguing "just click the link and the license is there!!": I have been burned several times before, where a technical solution has a prominent, permissively licensed GitHub repo (MIT or similar) as its primary home, only to discover later on that essential parts of the system are in other, less permissive or private repos behind subscriptions or fees.

CaptainOfCoit

2 hours ago

The rest of us get around that particular issue by going through the source code and all the tradeoffs before we download, include and adopt a dependency, not after.

giobox

an hour ago

Good for you! This of course doesn't help in the situation where a dependency author retroactively changes the licensing state of a component, or reconfigures the project to rely on a new external dependency with differing license states (experienced both of these too!).

Having the landing page explain the motivations of the authors vis-a-vis open source goes a long way to providing context for whatever licensing appears in the source repos, and helps one understand the likely future direction of the project.

There are loads of ostensibly open source projects out there whose real goal is to drive sales of associated software and services, without which the value of the open source components is often much reduced, especially in the developer tooling space.

CaptainOfCoit

24 minutes ago

> Good for you! This of course doesn't help in the situation where a dependency author retroactively changes the licensing state of a component, or reconfigures the project to rely on a new external dependency with differing license states (experienced both of these too!).

No, but I also don't see why that matters a lot. Once you adopt a third-party project as a dependency, you implicitly sign up for whatever changes they make, or you prepare to stay on a pinned version with only the security fixes you apply yourself. These aren't exactly new problems, nor rocket science; we've been dealing with this sort of thing for decades already.

> There are loads of ostensibly open source projects out there whose real goal is to drive sales of associated software and services, often without which the value of the opensource components is reduced, especially in the developer tooling space.

Yeah, which is kind of terrible, but also kind of great. In the end, though, it's fairly easy to detect one way or another, the biggest and reddest flag being VC funding with no public pricing.

kelvinjps10

6 hours ago

Also, it's good feedback for the developer of this solution.

BolexNOLA

5 hours ago

If I have to dig through your website/documentation to find basic information, we’re not getting off to a great start. It’s pretty common for open source projects to proudly proclaim they are open source from the get-go: “____ is an open source tool for ______.” Simple as that.

CaptainOfCoit

5 hours ago

Today's kids are way too lazy.

BolexNOLA

5 hours ago

Seriously, all the nitpicking I see of any project people post here, but “tell us you’re open source at the top when you’re open source” means we’re lazy? Being open source is an important decision and you should tell people! It’s a good thing!

Isn’t a big part of getting a project out there actually letting people know what it is? Especially if you’re trying to give a tool to the open-source-valuing community; that’s a high priority for them. That’s like having a vegan menu and not saying you’re a vegan restaurant anywhere public-facing.

CaptainOfCoit

4 hours ago

I agree it's a good thing, but I'd also argue it's not something you need to shove in people's faces, especially when it's literally one click away (the GitHub icon in the top right takes you to the repository, and you don't even have to scroll: the sidebar shows "MIT License" for you).

surajrmal

4 hours ago

There is a GitHub icon fairly prominent in the top right. Choosing to spend precious text on it for a fleeting would-be user is a choice, and not everyone wants to market that fact very prominently. Should everyone who writes their project in Rust include that prominently as well? It seemingly markets very well, and a lot of people seem to care about that too.

storm1er

7 hours ago

What's the difference with kuik? Spegel seems too complicated for my homelab, but could be a nice upgrade for my company

Kuik: https://github.com/enix/kube-image-keeper?tab=readme-ov-file...

phillebaba

6 hours ago

It's been a while since I looked at kuik, but I would say the main difference is that Spegel doesn't do any of the pulling or storage of images. Instead it relies on Containerd to do that for you. This also means that Spegel does not have to manage garbage collection. The nice thing about this is that it doesn't change how images are initially pulled from upstream, and it is able to serve images that existed on the node before Spegel ran.

Also, it looks like kuik uses CRDs to store information about where images are cached, while Spegel uses its own p2p solution to route traffic between nodes.

If you are running k3s in your homelab you can enable Spegel with a flag as it is an embedded feature.
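
Roughly like this (a sketch; check the k3s docs for your version):

    # start the server with the embedded Spegel registry mirror enabled
    k3s server --embedded-registry

    # /etc/rancher/k3s/registries.yaml: list the registries to mirror
    mirrors:
      docker.io: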

CaptainOfCoit

7 hours ago

There are a couple of alternatives that mirror more than just Docker Hub too; most of them are pretty bloated and enterprisey, but they do what they say on the tin and have saved me more than once. Artifactory, Nexus Repository, Cloudsmith, and ProGet are some of them.

phillebaba

7 hours ago

Spegel does not only mirror Docker Hub, and it works very differently from the alternatives you suggested. Instead of being yet another failure point close to your production environment, it runs a distributed, stateless registry inside your Kubernetes cluster. By piggybacking off of Containerd's image store, it distributes already-pulled images within the cluster.

CaptainOfCoit

7 hours ago

I'll be honest and say I hadn't heard of Spegel before, and just read the landing page which says "Speed up container pulls and minimize downtime with a stateless peer-to-peer OCI registry mirror for efficient image distribution", so it isn't exactly clear you can use it for more things than container images.

mike-cardwell

6 hours ago

This looks good, but we're using GKE and it looks like it only works there with some hacks. Is there a timeline to make it work with GKE properly?

phillebaba

5 hours ago

I am having some discussions about getting things working on GKE, but I can't give an ETA as it really depends on how things align with deployment schedules. I am positive, however, that this will be resolved soon.

helpfulmandrill

8 hours ago

I wonder if this is why I also can't log in to O'Reilly to do some "Docker is down, better find something to do" training...

p0w3n3d

7 hours ago

Just install a pull-through proxy that will store all the packages recently used.
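
The simplest version for Docker Hub is the reference registry in proxy mode (a sketch; add persistent storage and point your daemon's registry-mirrors at it for real use):

    docker run -d --name hub-mirror -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2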

darkamaul

6 hours ago

For other people impacted: what helped me this morning was to use GHCR (the GitHub Container Registry), albeit it is not a one-to-one replacement.

Ex: `docker pull ghcr.io/linuxcontainers/debian-slim:latest`

l2dy

8 hours ago

Recovering as of October 20, 2025 09:43 UTC

> [Monitoring] We are seeing error rates recovering across our SaaS services. We continue to monitor as we process our backlog.

jabiko

5 hours ago

It's impressive that even though registry-1.docker.io returned 503 errors, they were able to keep the "Docker Registry Uptime" metric at 100%.

lbruder

5 hours ago

Well, the server was up, it was just returning HTTP 503...

dd_xplore

8 hours ago

Does it decrease AWS's nine 9s?

speedgoose

7 hours ago

The marketing department did the maths and they said no.

nobleach

6 hours ago

"MOST of the time" we're nine 9s.

jdthedisciple

9 hours ago

So far today, outages have been reported from

- AWS

- Vercel

- Atlassian

- Cloudflare

- Docker

- Google (see downdetector)

- Microsoft (see downdetector)

What's going on?

d4rkp4ttern

7 hours ago

Reddit appears to be only semi-operational. Frequent “rate limit” errors and empty pages while just browsing. Not sure if related.

ta1243

9 hours ago

Or they all rely on AWS, because over the last 15 years we've built an extremely fragile, interconnected global system in the pursuit of profit, austerity, and efficiency.

benrutter

9 hours ago

Wait, Google and Microsoft rely on AWS? That seems unlikely? (does it? I wouldn't really know to be honest)

ssl-3

8 hours ago

In terms of user reports: Some users don't know what the hell is going on. This is a constant.

For instance: When there's a widespread Verizon cellular outage, sites like downdetector will show a spike in Verizon reports.

But such sites will also show a spike in AT&T and T-Mobile reports. Even though those latter networks are completely unaffected by Verizon's back-end issues, the graphs of user reports are consistently shaped the same for all 3 carriers.

This is just because some of the users doing the reporting have no clue.

So when the observation is "AWS is having an outage, and people are reporting issues at Google and Microsoft", the last two are often just artifacts of people being people and reporting the wrong thing.

(You're hanging out on HN, so there's a very good chance that you know precisely what cell carrier you're using and can discern the difference betwixt an Amazon, a Google, and a Microsoft. But lots of other people are not particularly adept at making these distinctions. It's normal and expected for some of them to be this way at all times.)

tpetry

4 hours ago

That's true. And a big part of the reason is the user's browser. They use Microsoft Edge or Google Chrome, can't open a page, and there are weird error messages? Oh, that's probably a Google issue…

thephyber

7 hours ago

It’s very likely they’ve bought companies that were built on AWS and haven’t migrated them to their homegrown cloud platforms.

ta1243

9 hours ago

More likely the outage reports for Google and Microsoft are based around systems which also include AWS.

CrayKhoi

8 hours ago

They might be using third party services that rely on AWS.

throw-10-13

7 hours ago

A DNS outage at AWS exposing how overly centralized our infra is.

cloudking

5 hours ago

I'm fairly new to Docker. Do folks really rely on public images and registries for production systems? Seems like a brittle strategy.

edoceo

3 hours ago

Yes, thousands of orgs. Larger players might use a pull-through cache, but it's not as common as it should be. Similar issue for other software supply chains (npm, PyPI, etc.).
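
For npm, for instance, redirecting to an internal proxy is one config line (the URL here is hypothetical; Nexus, Artifactory, etc. expose something similar):

    npm config set registry https://nexus.internal.example.com/repository/npm-proxy/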

sschueller

9 hours ago

What are good proxy/mirror solutions to mitigate such issues? Best would be an all-in-one solution that, for example, also handles Node.js packages, Packagist, etc.

bravetraveler

8 hours ago

Pulp is a popular project for a 'one-stop shop', I believe. Personally, I've always used project-specific solutions like 'distribution/distribution' for containers, from the CNCF. It allows for pull-through caching with relatively little setup work.

conradfr

9 hours ago

Is there a built-in way to bypass the request to the registry if your base layers are cached?

2OEH8eoCRo0

3 hours ago

The internet was designed to be fault-tolerant and distributed from the beginning, and we still ended up with a handful of mega-hosts.

danvesma

10 hours ago

...well this explains a lot about how my morning is going...

gjvc

2 hours ago

mirror.gcr.io is your friend

wolfgangbabad

8 hours ago

Even Reddit throws a lot of 503s when adding/editing comments.

throw-10-13

7 hours ago

Reddit is always going down; that's the least surprising thing about this.