AWS multiple services outage in us-east-1

2246 points | posted 4 months ago
by kondro

485 Comments

time0ut

4 months ago

Interesting day. I've been on an incident bridge since 3AM. Our systems have mostly recovered now with a few back office stragglers fighting for compute.

The biggest miss on our side is that, although we designed a multi-region capable application, we could not run the failover process because our security org migrated us to Identity Center and only put it in us-east-1, hard locking the entire company out of the AWS control plane. By the time we'd gotten the root credentials out of the vault, things were coming back up.

Good reminder that you are only as strong as your weakest link.

SOLAR_FIELDS

4 months ago

This reminds me of the time that Google’s Paris data center flooded and caught on fire a few years ago. We weren’t actually hosting compute there, but we were hosting compute in AWS EU datacenter nearby and it just so happened that the dns resolver for our Google services elsewhere happened to be hosted in Paris (or more accurately it routed to Paris first because it was the closest). The temp fix was pretty fun, that was the day I found out that /etc/hosts of deployments can be globally modified in Kubernetes easily AND it was compelling enough to want to do that. Normally you would never want to have an /etc/hosts entry controlling routing in kube like this but this temporary kludge shim was the perfect level of abstraction for the problem at hand.
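
For reference, the "/etc/hosts of deployments" trick described above maps onto the Pod spec's hostAliases field, which appends entries to /etc/hosts in every container. A minimal sketch with the official Python kubernetes client; the deployment name, namespace, IP, and hostname are hypothetical:

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a cluster
    apps = client.AppsV1Api()

    # hostAliases entries are written into /etc/hosts of every container in the pod.
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "hostAliases": [
                        {"ip": "203.0.113.10", "hostnames": ["resolver.example.internal"]}
                    ]
                }
            }
        }
    }

    # Patching the pod template rolls the deployment; new pods come up with the entry.
    apps.patch_namespaced_deployment(name="my-app", namespace="default", body=patch)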

1970-01-01

4 months ago

I remember Facebook had a similar story when they botched their BGP update and couldn't even access the vault. If you have circular auth, you don't have anything when somebody breaks DNS.

ttul

4 months ago

There is always that point you reach where someone has to get on a plane with their hardware token and fly to another data centre to reset the thing that maintains the thing that gives keys to the thing that makes the whole world go round.

vladvasiliu

4 months ago

> Identity Center and only put it in us-east-1

Is it possible to have it in multiple regions? Last I checked, it only accepted one region. You needed to remove it first if you wanted to move it.

barbazoo

4 months ago

Wow, you really *have* to exercise the region failover to know if it works, eh? And that confidence gets weaker the longer it’s been since the last failover I imagine too. Thanks for sharing what you learned.

shawabawa3

4 months ago

For what it's worth, we were unable to log in with root credentials anyway.

I don't think any method of auth was working for accessing the AWS console.

reenorap

4 months ago

It's a good reminder actually that if you don't test the failover process, you have no failover process. The CTO or VP of Engineering should be held accountable for not making sure that the failover process is tested multiple times a month and should be seamless.

hinkley

4 months ago

Too much armor makes you immobile. Will your security org be held to task for this? This should permanently slow down all of their future initiatives because it’s clear they have been running “faster than possible” for some time.

Who watches the watchers.

ej_campbell

4 months ago

Totally ridiculous that AWS wouldn't by default make it multi-region and warn you heavily that your multi-region service is tied to a single region for identity.

The usability of AWS is so poor.

ct520

4 months ago

I always find it interesting how many large enterprises have all these DR guidelines but fail to ever test them. Glad to hear that everything came back alright.

ransom1538

4 months ago

People will continue to purchase Multi-AZ and multi-region even though you have proved what a scam it is. If the East region goes down, ALL of Amazon goes down, feel free to change my mind. STOP paying double rates for multi-region.

ozim

4 months ago

Sounds like a lot of companies need to update their BCP after this incident.

michaelcampbell

4 months ago

"If you're able to do your job, InfoSec isn't doing theirs"

0x5345414e

4 months ago

This is having a direct impact on my wellbeing. I was at Whole Foods in Hudson Yards NYC and I couldn’t get the prime discount on my chocolate bar because the system isn’t working. Decided not to get the chocolate bar. Now my chocolate levels are way too low.

tonymet

4 months ago

"alexa turn on coffee pot" stopped working this morning, and I'm going bonkers.

pewpew_

4 months ago

I was attempting to use self checkout for some lunch I grabbed from the hotbar and couldn’t understand why my whole foods barcode was failing. It took me a full 20 seconds to realize the reason for the failure.

dewarrn1

4 months ago

This is a fun example, but now you've got me wondering: has anyone checked on folks who might have been in an Amazon Go store during the outage?

nxpnsv

4 months ago

Life indeed is a struggle

colechristensen

4 months ago

I had to buy a donut at the gas station with cash, like a peasant.

TZubiri

4 months ago

That's it, internet centralization has gone too far, call your congress(wo)man

JCM9

4 months ago

Have a meeting today with our AWS account team about how we’re no longer going to be “All in on AWS” as we diversify workloads away. Was mostly about the pace of innovation on core services slowing and AWS being too far behind on AI services so we’re buying those from elsewhere.

The AWS team keeps touting the rock solid reliability of AWS as a reason why we shouldn’t diversify our cloud. Should be a fun meeting!

radium3d

4 months ago

Once you've had an outage on AWS, Cloudflare, Google Cloud, Akismet. What are you going to do? Host in house? None of them seem to be immune from some outage at some point. Get your refund and carry on. It's less work for the same outcome.

cmiles8

4 months ago

This. When Andy Jassy got challenged by analysts on the last earnings call on why AWS has fallen so far behind on innovation, his answer was a hand-wavy response that diverted attention to how AWS is durable, stable, and reliable and how customers care more about that. Oops.

ifwinterco

4 months ago

Everything except us-east-1 is generally pretty reliable. At $work we have a lot of stuff that's only on eu-west-1 (yes not the best practice) and we haven't had any issues, touch wood

tete

4 months ago

> The AWS team keeps touting the rock solid reliability of AWS as a reason why we shouldn’t diversify our cloud. Should be a fun meeting!

This isn't and never was true. I've done setups in the past where monitoring happened "multi-cloud", along with multiple dedicated servers. It was broad enough that you could actually see where things broke.

It was quite some time ago so I don't have the data, but AWS never came out on top.

It largely matched what netcraft.com put out. Not sure if they still do that and release those things to the public.

llmslave

4 months ago

AWS has been in long-term decline; most of the platform is just in keeping-the-lights-on mode. It's also why they are behind on AI: a lot of would-be innovative employees get crushed under red tape and performance management.

GoblinSlayer

4 months ago

But then you will be affected by outages of every dependency you use.

1-6

4 months ago

Glad that you're taking the first step toward resiliency. At times, big outages like these are necessary to give a good reason why the company should Multicloud. When things are working without problems, no one cares to listen to the squeaky wheel.

morshu9001

4 months ago

This was a single region outage, right? If you aren't cross-region, cross-cloud is the same but harder

jen20

4 months ago

I would be interested in a follow up in 2-3 years as to whether you've had fewer issues with a multi-cloud setup than just AWS. My suspicion is that will not be the case.

lootgraft

4 months ago

> The AWS team keeps touting the rock solid reliability of AWS as a reason why we shouldn’t diversify our cloud.

If this is an internal "AWS team", then it translates to "I am comfortable using this tool, and am uninterested in having to learn an entirely new stack."

If you have to diversify your cloud workloads, give your devops team more money to do so.

ej_campbell

4 months ago

Aren't you deployed in multiple regions?

BoredPositron

4 months ago

Still no serverless inference for models or inference pipelines that aren't available on Bedrock, still no auto-scaling GPU workers. We started bothering them in 2022... crickets

wrasee

4 months ago

Please tell me there was a mixup and for some reason they didn’t show up.

indoordin0saur

4 months ago

Seems like major issues are still ongoing. If anything it seems worse than it did ~4 hours ago. For reference I'm a data engineer and it's Redshift and Airflow (AWS managed) that is FUBAR for me.

markus_zhang

4 months ago

It has been quite a while, wondering how many 9s are dropped.

365 days * 24 hours * 0.0001 is roughly 53 minutes, so it has already lost 99.99% status (99.9% would allow about 8.8 hours).

outworlder

4 months ago

I'm wondering why your and other companies haven't just evicted themselves from us-east-1. It's the worst region for outages and it's not even close.

Our company decided years ago to use any region other than us-east-1.

Of course, that doesn't help with services that are 'global', which usually means us-east-1.

throwaway-aws9

4 months ago

You have to remember that health status dashboards at most (all?) cloud providers require VP approval to switch status. This stuff is not your startup's automated status dashboard. It's politics, contracts, money.

PeterCorless

4 months ago

Downdetector had 5,755 reports of AWS problems at 12:52 AM Pacific (3:53 AM Eastern).

That number had dropped to 1,190 by 4:22 AM Pacific (7:22 AM Eastern).

However, that number is back up with a vengeance. 9,230 reports as of 9:32 AM Pacific (12:32 Eastern).

Part of that could be explained by more people making reports as the U.S. west coast awoke. But I also have a feeling that they aren't yet on top of the problem.

belter

4 months ago

This looks like one of their worst outages in 15 years and us-east-1 still shows as degraded, but I had no outages, as I don't use us-east-1. Are you seeing issues in other regions?

https://health.aws.amazon.com/health/status?path=open-issues

The closest to their identification of a root cause seems to be this one:

"Oct 20 8:43 AM PDT We have narrowed down the source of the network connectivity issues that impacted AWS Services. The root cause is an underlying internal subsystem responsible for monitoring the health of our network load balancers. We are throttling requests for new EC2 instance launches to aid recovery and actively working on mitigations."

jread

4 months ago

Lambda create-function control plane operations are still failing with InternalError for us - other services have recovered (Lambda, SNS, SQS, EFS, EBS, and CloudFront). Cloud availability is the subject of my CS grad research, I wrote a quick post summarizing the event timeline and blast radius as I've observed it from testing in multiple AWS test accounts: https://www.linkedin.com/pulse/analyzing-aws-us-east-1-outag...

Forricide

4 months ago

Definitely seems to be getting worse, outside of AWS itself, more websites seem to be having sporadic or serious issues. Concerning considering how long the outage has been going.

whaleofatw2022

4 months ago

Dangerous curiosity ask: is the number of folks off for Diwali a factor or not?

I.e. lots of folks who weren't expected to work today and/or having to round them up to work the problem.

napolux

4 months ago

Worst of all: the Ring alarm siren won't stop because the app is down, and the keypad was removed by my parents and put "somewhere in the basement".

autophagian

4 months ago

Yeah. We had a brief window where everything resolved and worked, and now we're running into really mysterious, flaky networking issues where pods in our EKS clusters time out talking to the k8s API.

mvdtnz

4 months ago

The problems now seem mostly related to starting new instances. Our capacity is slowly decaying as existing services spin down and new EC2 workloads fail to start.

baubino

4 months ago

Basic services at my worksite have been offline for almost 8 hours now (things were just glitchy for about 4 hours before that). This is nuts.

assholesRppl2

4 months ago

Yep, confirmed worse - DynamoDB now returning "ServiceUnavailableException"

JCM9

4 months ago

Agree… still seeing major issues. Briefly looked like it was getting better but things falling apart again.

tlogan

4 months ago

I noticed the same thing and it seems to have gotten much worse around 8:55 a.m. Pacific Time.

By the way, Twilio is also down, so all those login SMS verification codes aren’t being delivered right now.

wavemode

4 months ago

SEV-0 for my company this morning. We can't connect to RDS anymore.

jmuguy

4 months ago

Yeah, we were fine until about 10:30 Eastern and have been completely down since then. Heroku customer.

davedx

4 months ago

Andy Jassy is the Tim Cook of Amazon

Rest and vest CEOs

perching_aix

4 months ago

In addition to those, Sagemaker also fails for me with an internal auth error specifically in Virginia. Fun times. Hope they recover by tomorrow.

steveBK123

4 months ago

Agreed, every time the impacted services list internally gets shorter, the next update it starts growing again.

A lot of these are second order dependencies like Astronomer, Atlassian, Confluent, Snowflake, Datadog, etc... the joys of using hosted solutions to everything.

jonplackett

4 months ago

The problem now is: what's anyone going to do? Leave?

I remember a meme years ago about Nestle. It was something like: GO ON, BOYCOTT US. I BET YOU CAN'T. WE MAKE EVERYTHING.

The same meme would work for AWS today.

ljdtt

4 months ago

First time I've seen "FUBAR". Is that a common expression in the industry? Just curious (English is not my native language).

nikolay

4 months ago

Choosing us-east-1 as your primary region is good, because when you're down, everybody's down, too. You don't get this luxury with other US regions!

rdhatt

4 months ago

One unexpected upside moving from a DC to AWS is when a region is down, customers are far more understanding. Instead of being upset, they often shrug it off since nothing else they needed/wanted was up either.

ta1243

4 months ago

It took me so long to realise this is what's important in enterprise. Uptime isn't important, being able to blame someone else is what's important.

If you're down for 5 minutes a year because one of your employees broke something, that's your fault, and the blame passes down through the CTO.

If you're down for 5 hours a year but this affected other companies too, it's not your fault

From AWS to Crowdstrike - system resilience and uptime isn't the goal. Risk mitigation isn't the goal. Affordability isn't the goal.

When the CEO's buddies all suffer at the same time as he does, it's just an "act of god" and nothing can be done, it's such a complex outcome that even the amazing boffins at aws/google/microsoft/cloudflare/etc can't cope.

If the CEO is down at a different time than the CEO's buddies then it's that Dave/Charlie/Bertie/Alice can't cope and it's the CTO's fault for not outsourcing it.

As someone who likes to see things working, it pisses me off no end, but it's the way of the world, and likely has been whenever the owner and CTO are separate.

sam1r

4 months ago

Sometimes we all need a tech shutdown.

Gigachad

4 months ago

I was once told that our company went with Azure because when you tell the boomer client that our service is down because Microsoft had an outage, they go from being mad at you, to accepting that the outage was an act of god that couldn’t be avoided.

Sparkyte

4 months ago

I am down with that, let's all build in us-east-1.

kelseydh

4 months ago

Is us-east-1 really no less stable than the other regions? My impression was that Amazon deploys changes to us-east-1 first, so it's the most unstable region.

ej_campbell

4 months ago

And all your dependencies are co-located.

tokioyoyo

4 months ago

Doing pretty well up here in Tokyo region for now! Just can't log into console and some other stuff.

stepri

4 months ago

“Based on our investigation, the issue appears to be related to DNS resolution of the DynamoDB API endpoint in US-EAST-1. We are working on multiple parallel paths to accelerate recovery.”

It’s always DNS.

Nextgrid

4 months ago

I wonder how much of this is "DNS resolution" vs "underlying config/datastore of the DNS server is broken". I'd expect the latter.

koliber

4 months ago

It's always US-EAST-1 :)

shamil0xff

4 months ago

Might just be BGP dressed as DNS

bayindirh

4 months ago

Even when it's not DNS, it's DNS.

nijave

4 months ago

I don't think that's necessarily true. The outage updates later identified failing network load balancers as the cause--I think DNS was just a symptom of the root cause

I suppose it's possible DNS broke health checks but it seems more likely to be the other way around imo

commandersaki

4 months ago

Someone probably failed to lint the zone file.

us0r

4 months ago

Or expired domains which I suppose is related?

dexterdog

4 months ago

That's why they wrote the haiku

indycliff

4 months ago

the answer is always DNS

mlhpdx

4 months ago

Cool, building in resilience seems to have worked. Our static site has origins in multiple regions via CloudFront and didn’t seem to be impacted (not sure if it would have been anyway).

My control plane is native multi-region, so while it depends on many impacted services it stayed available. Each region runs in isolation. There is data replication at play but failing to replicate to us-east-1 had no impact on other regions.

The service itself is also native multi-region and has multiple layers where failover happens (DNS, routing, destination selection).

Nothing’s perfect and there are many ways this setup could fail. It’s just cool that it worked this time - great to see.

Nothing I’ve done is rocket science or expensive, but it does require doing things differently. Happy to answer questions about it.
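
As an illustration of the DNS failover layer mentioned above (not necessarily the poster's actual setup), here is a sketch using Route 53 health-checked failover records via boto3; the hosted zone ID, record name, IPs, and health check ID are hypothetical:

    import boto3

    route53 = boto3.client("route53")

    def upsert_failover_record(zone_id, name, ip, role, health_check_id=None):
        """Upsert one half of a PRIMARY/SECONDARY failover pair. Route 53 answers
        with the PRIMARY record while its health check passes, else the SECONDARY."""
        rrset = {
            "Name": name,
            "Type": "A",
            "SetIdentifier": role.lower(),
            "Failover": role,  # "PRIMARY" or "SECONDARY"
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        }
        if health_check_id:
            rrset["HealthCheckId"] = health_check_id
        route53.change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": rrset}]},
        )

    # Hypothetical IDs and IPs, for illustration only.
    upsert_failover_record("Z123EXAMPLE", "api.example.com.", "198.51.100.10",
                           "PRIMARY", health_check_id="11111111-2222-3333-4444-555555555555")
    upsert_failover_record("Z123EXAMPLE", "api.example.com.", "203.0.113.10", "SECONDARY")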

SteveNuts

4 months ago

> Our static site has origins in multiple regions via CloudFront and didn’t seem to be impacted

This seems like such a low bar for 2025, but here we are.

AndrewKemendo

4 months ago

How did you do resilient auth for keys and certs?

zild3d

4 months ago

active/active? curious what the data stack looks like as that tends to be the hard part

chibea

4 months ago

One main problem that we observed was that big parts of their IAM / auth setup were overloaded or down, which led to all kinds of cascading problems. DynamoDB was reported as a root cause, so is IAM dependent on Dynamo internally?

Of course, such a large control plane has all kinds of complex dependency chains. Auth/IAM seems like such a potential (global) SPOF that you'd want to reduce its dependencies to an absolute minimum. On the other hand, it's also the place that needs really good scalability, consistency, etc., so you'd probably like to use the battle-proven DB infrastructure you already have in place. Does that mean you end up with a complex cyclic dependency that needs complex bootstrapping when it goes down? Or how is that handled?

julianozen

4 months ago

There was a very large outage back in ~2017 that was caused by DynamoDB going down. Because EC2 stored its list of servers in DynamoDB, EC2 went down too. Because DynamoDB ran its compute on EC2, it was suddenly no longer able to spin up new instances to recover.

It took several days to manually spin up DynamoDB/EC2 instances so that both services could recover slowly together. Since then, there was a big push to remove dependencies between the “tier one” systems (S3, DynamoDB, EC2, etc.) so that one system couldn’t bring down another one. Of course, it’s never foolproof.

cyberax

4 months ago

When I worked at AWS several years ago, IAM was not dependent on Dynamo. It might have changed, but I highly doubt this. Maybe some kind of network issue with high-traffic services?

> Auth/IAM seems like such a potentially (global) SPOF that you'd like to reduce dependencies to an absolute minimum.

IAM is replicated, so each region has its own read-only IAM cache. AWS SigV4 is also designed to be regionalized; if you ever wondered why the signature key derivation has many steps, that's exactly why ( https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_s... ).
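
For reference, the derivation being referred to is the documented SigV4 signing-key chain; a minimal sketch in Python showing how each step scopes the key down to a single date, region, and service:

    import hashlib
    import hmac

    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
        """Each HMAC step narrows the key's scope, so the final signing key is only
        valid for one day, one region, and one service (e.g. '20251020',
        'us-east-1', 'dynamodb') and never has to leave that region."""
        k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
        k_region = _hmac(k_date, region)
        k_service = _hmac(k_region, service)
        return _hmac(k_service, "aws4_request")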

cowsandmilk

4 months ago

Many AWS customers have bad retry policies that will overload other systems as part of their retries. DynamoDB being down will cause them to overload IAM.
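
A sketch of the kind of client-side retry policy that avoids this, using capped exponential backoff with full jitter (names and parameters are illustrative):

    import random
    import time

    def call_with_backoff(fn, max_attempts=5, base=0.2, cap=10.0):
        """Retry fn() with capped exponential backoff and full jitter, so a fleet of
        clients retrying a degraded dependency doesn't stampede it in lockstep."""
        for attempt in range(max_attempts):
            try:
                return fn()
            except Exception:  # in real code, catch only retryable errors
                if attempt == max_attempts - 1:
                    raise
                time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))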

wwdmaxwell

4 months ago

I think Amazon uses an internal platform called Dynamo as a KV store; it's different from DynamoDB. So I'm thinking the outage could be either a DNS routing issue or some kind of node deployment problem.

Both of these seem to crop up in post mortems for these widespread outages.

devmor

4 months ago

I find it very interesting that this is the same issue that took down GCP recently.

sammy2255

4 months ago

Can't resolve any records for dynamodb.us-east-1.amazonaws.com

However, if you desperately need to access it you can force resolve it to 3.218.182.212. Seems to work for me. DNS through HN

curl -v --resolve "dynamodb.us-east-1.amazonaws.com:443:3.218.182.212" https://dynamodb.us-east-1.amazonaws.com/

XCSme

4 months ago

It's always DNS

planckscnst

4 months ago

There's also dynamodb-fips.us-east-1.amazonaws.com if the main endpoint is having trouble. I'm not sure if this record was affected the same way during this event.
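
For what it's worth, pointing an SDK client at an alternate endpoint is a one-liner; a sketch with boto3, assuming credentials are already configured:

    import boto3

    # Point the SDK at the alternate FIPS endpoint instead of the default
    # regional endpoint; credentials and request signing stay the same.
    ddb = boto3.client(
        "dynamodb",
        region_name="us-east-1",
        endpoint_url="https://dynamodb-fips.us-east-1.amazonaws.com",
    )
    print(ddb.list_tables(Limit=5))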

rstupek

4 months ago

thank you for that info!!!!!

sam1r

4 months ago

Dude!! Life saver.

emrodre

4 months ago

Their status page (https://health.aws.amazon.com/health/status) says the only disrupted service is DynamoDB, but it's impacting 37 other services. It is amazing to see how big a blast radius a single service can have.

jamesbelchamber

4 months ago

It's not surprising that it's impacting other services in the region because DynamoDB is one of those things that lots of other services build on top of. It is a little bit surprising that the blast radius seems to extend beyond us-east-1, mind.

In the coming hours/days we'll find out if AWS still have significant single points of failure in that region, or if _so many companies_ are just not bothering to build in redundancy to mitigate regional outages.

I'm looking forward to the RCA!

thmpp

4 months ago

AWS engineers are trained to use their internal services for each new system. They seem to like using DynamoDB. Dependencies like this should be made transparent.

nevada_scout

4 months ago

It's now listing 58 impacted services, so the blast radius is growing it seems

littlecranky67

4 months ago

The same page now says 58 services - just 23 minutes after your post. Seems this is becoming a larger issue.

tonymet

4 months ago

I'm surprised / bothered that the history log shows the issues starting on the morning of 10/20, when they seemed to have started around midnight on 10/19.

Aachen

4 months ago

Signal is down from several vantage points and accounts in Europe, I'd guess because of this dependence on Amazon overseas

We're having fun figuring out how to communicate amongst colleagues now! It's only when it's gone that you realise your dependence.

tcumulus

4 months ago

Well, at least I now know that my Belgian university's Blackboard environment is running on AWS :)

BoredPositron

4 months ago

Our last resort fallbacks are channels on different IRC servers. They always hold.

Traubenfuchs

4 months ago

> We're having fun figuring out how to communicate amongst colleagues now!

When Slack was down we used... google... google mail? chat. When you go to gmail there is actually a chat app on the left.

lexandstuff

4 months ago

Thankfully Slack is still holding up.

JCM9

4 months ago

At 3:03 AM PT AWS posted that things are recovering and sounded like issue was resolved.

Then things got worse. At 9:13 AM PT it sounds like they’re back to troubleshooting.

Honestly sounds like AWS doesn’t even really know what’s going on. Not good.

vishnugupta

4 months ago

This is exacerbated by the fact that this is Diwali week, which means most of the Indian engineers will be out on leave. Tough luck.

melozo

4 months ago

Even internal Amazon tooling is impacted greatly - including the internal ticketing platform which is making collaboration impossible during the outage. Amazon is incapable of building multi-region services internally. The Amazon retail site seems available, but I’m curious if it’s even using native AWS or is still on the old internal compute platform. Makes me wonder how much juice this company has left.

stevejb

4 months ago

Amazon's revenue in 2024 was about the size of Belgium's GDP. Higher than Sweden or Ireland. It makes a profit similar to Norway, without drilling for offshore oil or maintaining a navy. I think they've got plenty of juice left.

willsmith72

4 months ago

It seems reasonable to me that Amazon (retail) would build better AZ redundancy into their services than say Snapchat or a bank

NelsonMinar

4 months ago

I saw a quote from a high end AWS support engineer that said something like "submitting tickets for AWS problems is not working reliably: customers are advised to keep retrying until the ticket is submitted".

nettlin

4 months ago

> The Amazon retail site seems available, but I’m curious if it’s even using native AWS or is still on the old internal compute platform.

Some parts of amazon.com seem to be affected by the outage (e.g. product search: https://x.com/wongmjane/status/1980318933925392719)

qingcharles

4 months ago

Amazon customer support had a banner saying it was unavailable for most of the day. Couldn't get anything less than 5 day shipping on any item today.

fairity

4 months ago

As this incident unfolds, what’s the best way to estimate how many additional hours it’s likely to last? My intuition is that the expected remaining duration increases the longer the outage persists, but that would ultimately depend on the historical distribution of similar incidents. Is that kind of data available anywhere?

greybeard69

4 months ago

To my understanding the main problem is DynamoDB being down, and DynamoDB is what a lot of AWS services use for their eventing systems behind the scenes. So there's probably like 500 billion unprocessed events that'll need to get processed even when they get everything back online. It's gonna be a long one.

froobius

4 months ago

Yes, with no prior knowledge the mathematically correct estimate is:

time left = time so far

But as you note prior knowledge will enable a better guess.

movpasd

4 months ago

I used Claude to get the outage start and ends from the post-event summaries for major historical AWS outages: https://aws.amazon.com/premiumsupport/technology/pes/

The cumulative distribution actually ends up pretty exponential which (I think) means that if you estimate the amount of time left in the outage as the mean of all outages that are longer than the current outage, you end up with a flat value that's around 8 hours, if I've done my maths right.

Not a statistician so I'm sure I've committed some statistical crimes there!

Unfortunately I can't find an easy way to upload images of the charts I've made right now, but you can tinker with my data:

    cause,outage_start,outage_duration,incident_duration
    Cell management system bug,2024-07-30T21:45:00.000000+0000,0.2861111111111111,1.4951388888888888
    Latent software defect,2023-06-13T18:49:00.000000+0000,0.08055555555555555,0.15833333333333333
    Automated scaling activity,2021-12-07T15:30:00.000000+0000,0.2861111111111111,0.3736111111111111
    Network device operating system bug,2021-09-01T22:30:00.000000+0000,0.2583333333333333,0.2583333333333333
    Thread count exceeded limit,2020-11-25T13:15:00.000000+0000,0.7138888888888889,0.7194444444444444
    Datacenter cooling system failure,2019-08-23T03:36:00.000000+0000,0.24583333333333332,0.24583333333333332
    Configuration error removed setting,2018-11-21T23:19:00.000000+0000,0.058333333333333334,0.058333333333333334
    Command input error,2017-02-28T17:37:00.000000+0000,0.17847222222222223,0.17847222222222223
    Utility power failure,2016-06-05T05:25:00.000000+0000,0.3993055555555555,0.3993055555555555
    Network disruption triggering bug,2015-09-20T09:19:00.000000+0000,0.20208333333333334,0.20208333333333334
    Transformer failure,2014-08-07T17:41:00.000000+0000,0.13055555555555556,3.4055555555555554
    Power loss to servers,2014-06-14T04:16:00.000000+0000,0.08333333333333333,0.17638888888888887
    Utility power loss,2013-12-18T06:05:00.000000+0000,0.07013888888888889,0.11388888888888889
    Maintenance process error,2012-12-24T20:24:00.000000+0000,0.8270833333333333,0.9868055555555555
    Memory leak in agent,2012-10-22T17:00:00.000000+0000,0.26041666666666663,0.4930555555555555
    Electrical storm causing failures,2012-06-30T02:24:00.000000+0000,0.20902777777777776,0.25416666666666665
    Network configuration change error,2011-04-21T07:47:00.000000+0000,1.4881944444444444,3.592361111111111
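
A quick sketch of the estimator described above, assuming the table is saved locally as outages.csv (durations are in days):

    import csv
    from statistics import mean

    # Durations from the table above, converted from days to hours.
    with open("outages.csv") as f:
        durations_h = [float(row["outage_duration"]) * 24 for row in csv.DictReader(f)]

    def expected_remaining_hours(elapsed_h):
        """E[duration - elapsed | duration > elapsed] over the historical sample."""
        longer = [d for d in durations_h if d > elapsed_h]
        return mean(longer) - elapsed_h if longer else float("nan")

    for t in (1, 4, 8, 12):
        print(f"{t:>2}h elapsed -> ~{expected_remaining_hours(t):.1f}h remaining")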

rwky

4 months ago

Generally, expect issues for the rest of the day: AWS will recover slowly, then anyone that relies on AWS will recover slowly. All the background jobs which are stuck will need processing.

jameshart

4 months ago

Rule of thumb is that the estimated remaining duration of an outage is equal to the current elapsed duration of the outage.

kuon

4 months ago

I realize that my basement servers have better uptime than AWS this year!

I think most sysadmins don't plan for an AWS outage. And economically it makes sense.

But it makes me wonder, is sysadmin a lost art?

tredre3

4 months ago

> But it makes me wonder, is sysadmin a lost art?

Yes. 15-20 years ago when I was still working on network-adjacent stuff I witnessed the shift to the devops movement.

To be clear, the fact that devops don't plan for AWS failures isn't an indication that they lack the sysadmin gene. Sysadmins will tell you very similar "X can never go down" or "not worth having a backup for service Y".

But deep down devops are developers who just want to get their thing running, so they'll google/serveroverflow their way into production without any desire to learn the intricacies of the underlying system. So when something breaks, they're SOL.

"Thankfully" nowadays containers and application hosting abstracts a lot of it back away. So today I'd be willing to say that devops are sufficient for small to medium companies (and dare I say more efficient?).

archon810

4 months ago

That's not very surprising. At this point you could say that your microwave has a better uptime. The complexity comparison to all the Amazon cloud services and infrastructure would be roughly the same.

TheCraiggers

4 months ago

> But it makes me wonder, is sysadmin a lost art?

I dunno, let me ask chatgpt. Hmmm, it said yes.

asah

4 months ago

Does this include your SPoF Internet connection?

kalleboo

4 months ago

It's fun watching their list of "Affected Services" grow literally in front of your eyes as they figure out how many things have this dependency.

It's still missing the one that earned me a phone call from a client.

hbn

4 months ago

I know Postman has kinda gone to shit over the years, but it's hilarious that my local REST client that makes requests from my machine has AWS as a dependency.

I found that out about Plex during an outage too.

zenexer

4 months ago

It's seemingly everything. SES was the first one that I noticed, but from what I can tell, all services are impacted.

hvb2

4 months ago

In AWS, if you take out one of DynamoDB, S3, or Lambda, you're going to be in a world of pain. Any architecture will likely use those somewhere, including all the other services built on top.

If the storage service in your own datacenter goes down, how much remains running?

mlrtime

4 months ago

When these major issues come up, all they have is symptoms, not causes. Maybe not until the Dynamo on-call comes on and says it's down does everyone at least know the reason for their team's outage.

The scale here is so large that they don't know the complete dependency tree until teams check in on what is out or not, growing this list. Of course most of it is automated, but getting onto 'Affected Services' is not.

0x002A

4 months ago

As Amazon moves from the day-1 company it once claimed to be toward a sales company like Oracle, focused on raking in money, expect more outages to come, and longer times to resolve them.

Amazon is burning out and driving away the technical talent and knowledge, knowing the vendor lock-in will keep bringing in the sweet money. You will see more salespeople hovering around your C-suite and executives, while you face even worse technical support that doesn't seem to know what it's talking about, let alone fix the support issue you expect to be fixed easily.

Mark my words: if you are putting your eggs in one basket, that basket is now too complex and too interdependent, and the people who built and knew those intricacies have been driven away by RTO and moves to hubs. Eventually those services, which all the others (and AWS services themselves) heavily depend on, might be more fragile than the public knows.

joncrane

4 months ago

>You will see more salespeople hovering around your C-suite and executives, while you face even worse technical support that doesn't seem to know what it's talking about, let alone fix the support issue you expect to be fixed easily.

WILL see? We've been seeing this since 2019.

zht

4 months ago

Do you have data suggesting AWS outages are more frequent and/or take longer to resolve?

hopelite

4 months ago

That is why the technical leader's role should demand that they not only gather data, but also report things like accurate operational, alternative, and scenario cost analyses; financial risks; vendor lock-in; etc.

However, as may be apparent just from that small set, it is not exactly something technical people often feel comfortable doing. It is why, at least in some organizations, you get the friction of a business type interfacing with technical people in varying ways, but also not really getting along, because they don't understand each other and there are often barriers to openness.

0x002A

4 months ago

https://www.theregister.com/2025/10/20/aws_outage_amazon_bra... quoting from this

"And so, a quiet suspicion starts to circulate: where have the senior AWS engineers who've been to this dance before gone? And the answer increasingly is that they've left the building — taking decades of hard-won institutional knowledge about how AWS's systems work at scale right along with them."

...

"AWS has given increasing levels of detail, as is their tradition, when outages strike, and as new information comes to light. Reading through it, one really gets the sense that it took them 75 minutes to go from "things are breaking" to "we've narrowed it down to a single service endpoint, but are still researching," which is something of a bitter pill to swallow. To be clear: I've seen zero signs that this stems from a lack of transparency, and every indication that they legitimately did not know what was breaking for a patently absurd length of time."

....

"This is a tipping point moment. Increasingly, it seems that the talent who understood the deep failure modes is gone. The new, leaner, presumably less expensive teams lack the institutional knowledge needed to, if not prevent these outages in the first place, significantly reduce the time to detection and recovery. "

...

"I want to be very clear on one last point. This isn't about the technology being old. It's about the people maintaining it being new. If I had to guess what happens next, the market will forgive AWS this time, but the pattern will continue."

SeanAnderson

4 months ago

Looks like it affected Vercel, too. https://www.vercel-status.com/

My website is down :(

(EDIT: website is back up, hooray)

l5870uoo9y

4 months ago

Static content resolves correctly but data fetching is still not functional.

jellyfishbeaver

4 months ago

I had a chuckle on my way home yesterday. Standing on the train platform and seeing "Next departure in: (Vercel Connection Error)" on the screen. :P

TechDebtDevin

4 months ago

Imagine using vercel, a company that literally contributes to the starvation of children and is proud of it. Also, literally just learn to use a Dockerfile and a vps, like why do these PaaS even exist, you're paying 3x for the same AWS services.

maximefourny

4 months ago

Have you done anything to get it back up? Looks like mine are still down.

TiredOfLife

4 months ago

Service that runs on aws is down when aws is down. Who knew.

rwky

4 months ago

To everyone that got paged (like me), grab a coffee and ride it out, the week can only get better!

esskay

4 months ago

To everyone who was supposed to get paged but didn't, do a postmortem, chances are your service is running via Twilio and needs migrating elsewhere.

ivad

4 months ago

Seems to have taken down my router "smart wifi" login page, and there's no backup router-only login option! Brilliant work, linksys....

ibejoeb

4 months ago

This is just a silly anecdote, but every time a cloud provider blips, I'm reminded. The worst architecture I've ever encountered was a system that was distributed across AWS, Azure, and GCP. Whenever any one of them had a problem, the system went down. It also cost 3x more than it should.

abujazar

4 months ago

I find it interesting that AWS services appear to be so tightly integrated that when there's an issue in a region, it affects most or all services. Kind of defeats the purported resiliency of cloud services.

rirze

4 months ago

We just had a power outage in Ashburn starting at 10 PM Sunday night. It was restored at around 3:40 AM, and I know datacenters have redundant power sources, but the timing is very suspicious. The AWS outage supposedly started at midnight.

tonypapousek

4 months ago

Looks like they’re nearly done fixing it.

> Oct 20 3:35 AM PDT

> The underlying DNS issue has been fully mitigated, and most AWS Service operations are succeeding normally now. Some requests may be throttled while we work toward full resolution. Additionally, some services are continuing to work through a backlog of events such as Cloudtrail and Lambda. While most operations are recovered, requests to launch new EC2 instances (or services that launch EC2 instances such as ECS) in the US-EAST-1 Region are still experiencing increased error rates. We continue to work toward full resolution. If you are still experiencing an issue resolving the DynamoDB service endpoints in US-EAST-1, we recommend flushing your DNS caches. We will provide an update by 4:15 AM, or sooner if we have additional information to share.

Waterluvian

4 months ago

I know there's a lot of anecdotal evidence and some fairly clear explanations for why `us-east-1` can be less reliable. But are there any empirical studies that demonstrate this? Like if I wanted to back up this assumption/claim with data, is there a good link for that, showing that us-east-1 is down a lot more often?

mittermayr

4 months ago

Careful: NPM _says_ they're up (https://status.npmjs.org/) but I am seeing a lot of packages not updating and npm install taking forever or never finishing. So hold off deploying now if you're dependent on that.

rsanheim

4 months ago

I wonder what kind of outage or incident or economic change will be required to cause a rejection of the big commercial clouds as the default deployment model.

The costs, performance overhead, and complexity of a modern AWS deployment are insane and so out of line with what most companies should be taking on. But hype + microservices + sunk cost, and here we are.

weberer

4 months ago

Llama-5-beelzebub has escaped containment. A special task force has been deployed to the Virginia data center to pacify it.

JCM9

4 months ago

US-East-1 is more than just a normal region. It also provides the backbone for other services, including those in other regions. Thus simply being in another region doesn’t protect you from the consistent us-east-1 shenanigans.

AWS doesn’t talk about that much publicly, but if you press them they will admit in private that there are some pretty nasty single points of failure in the design of AWS that can materialize if us-east-1 has an issue. Most people would say that means AWS isn’t truly multi-region in some areas.

Not entirely clear yet if those single points of failure were at play here, but risk mitigation isn’t as simple as just “don’t use us-east-1” or “deploy in multiple regions with load balancing failover.”

JPKab

4 months ago

The length and breadth of this outage has caused me to lose so much faith in AWS. I knew from colleagues who used to work there how understaffed and inefficient the team is due to bad management, but this just really concerns me.

rose-knuckle17

4 months ago

AWS had an outage. Many companies were impacted. Headlines around the world blame AWS. The real news is how easy it is to identify the companies that have put cost management ahead of service resiliency.

Lots of orgs operating wholly in AWS, and sometimes only within us-east-1, had no operational problems last night. Some of that is design (not using the impacted services). Some of that is good resiliency in design. And some of that was dumb luck (accidentally good design).

Overall, the companies that had operational problems likely wouldn't have invested in resiliency in any other deployment strategy either. It could have happened to them in Azure, GCP, or even a home-rolled datacenter.

pjmlp

4 months ago

It just goes to show the difference between best practices in cloud computing, and what everyone ends up doing in reality, including well known industry names.

Aldipower

4 months ago

My minor 2000-user web app hosted on Hetzner works, FYI. :-P

runako

4 months ago

Even though us-east-1 is the region geographically closest to me, I always choose another region as default due to us-east-1 (seemingly) being more prone to these outages.

Obviously, some services are only available in us-east-1, but many applications can gain some resiliency just by making a primary home in any other region.

me551ah

4 months ago

We created a single point of failure on the Internet, so that companies could avoid single points of failure in their data centers.

jacquesm

4 months ago

Every week or so we interview a company and ask them if they have a fall-back plan in case AWS goes down or their cloud account disappears. They always have this deer-in-the-headlights look. 'That can't happen, right?'

Now imagine for a bit that it will never come back up. See where that leads you. The internet got its main strengths from the fact that it was completely decentralized. We've been systematically eroding that strength.

esskay

4 months ago

Er...They appear to have just gone down again.

1970-01-01

4 months ago

Someone, somewhere, had to report that doorbells went down because the very big cloud did not stay up.

I think we're doing the 21st century wrong.

o1o1o1

4 months ago

I'm so happy we chose Hetzner instead but unfortunately we also use Supabase (dashboard affected) and Resend (dashboard and email sending affected).

Probably makes sense to add "relies on AWS" to the criteria we're using to evaluate 3rd-party services.

amadeoeoeo

4 months ago

Oh no... maybe LaLiga found out the pirates are hosting on AWS?

philipp-gayret

4 months ago

Our Alexas stopped responding and my girl couldn't log in to MyFitnessPal anymore... let me check HN for a major outage, and here we are :^)

At least when us-east is down, everything is down.

tedk-42

4 months ago

Internet, out.

Very big day for an engineering team indeed. Can't vibe code your way out of this issue...

stavros

4 months ago

AWS truly does stand for "All Web Sites".

jjice

4 months ago

We got off pretty easy (so far). Had some networking issues at 3am-ish EDT, but nothing that we couldn't retry. Having a pretty heavily asynchronous workflow really benefits here.

One strange one was metrics capturing for Elasticache was dead for us (I assume Cloudwatch is the actual service responsible for this), so we were getting no data alerts in Datadog. Took a sec to hunt that down and realize everything was fine, we just don't have the metrics there.

I had minor protests against us-east-1 about 2.5 years ago, but it's a bit much to deal with now... Guess I should protest a bit louder next time.

igleria

4 months ago

Funny that even though we have our app running fine in AWS Europe, we are affected as developers because of npm/Docker/etc. being down. Oh well.

ksajadi

4 months ago

A lot of status pages hosted by Atlassian Statuspage are down! The irony…

Isuckatcode

4 months ago

Man, I just wanted to enjoy celebrating Diwali with my family, but I've been up since 3 AM trying to recover our services. There goes some quality time.

colesantiago

4 months ago

It seems that all the sites that ask about distributed systems in their interviews and have their websites down wouldn't even pass their own interview.

This is why distributed systems is an extremely important discipline.

comrade1234

4 months ago

I like that we can advertise to our customers that over the last X years we have better uptime than Amazon, google, etc.

bob1029

4 months ago

One thing has become quite clear to me over the years. Much of the thinking around uptime of information systems has become hyperbolic and self-serving.

There are very few businesses that genuinely cannot handle an outage like this. The only examples I've personally experienced are payment processing and semiconductor manufacturing. A severe IT outage in either of these businesses is an actual crisis. Contrast with the South Korean government who seems largely unaffected by the recent loss of an entire building full of machines with no backups.

I've worked in a retail store that had a total electricity outage and saw virtually no reduction in sales numbers for the day. I have seen a bank operate with a broken core system for weeks. I have never heard of someone actually cancelling a subscription over a transient outage in YouTube, Spotify, Netflix, Steam, etc.

The takeaway I always have from these events is that you should engineer your business to be resilient to the real tradeoff that AWS offers. If you don't overreact to the occasional outage and have reasonable measures to work around for a day or 2, it's almost certainly easier and cheaper than building a multi cloud complexity hellscape or dragging it all back on prem.

Thinking in terms of competition and game theory, you'll probably win even if your competitor has a perfect failover strategy. The cost of maintaining a flawless eject button for an entire cloud is like an anvil around your neck. Every IT decision has to be filtered through this axis. When you can just slap another EC2 on the pile, you can run laps around your peers.

d_burfoot

4 months ago

I think AWS should use, and provide as an offering to big customers, a Chaos Monkey tool that randomly brings down specific services in specific AZs. Example: DynamoDB is down in us-east-1b. IAM is down in us-west-2a.

Other AWS services should be able to survive this kind of interruption by rerouting requests to other AZs. Big company clients might also want to test against these kinds of scenarios.
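
A naive homegrown approximation of the idea for resources you own (AWS also offers a managed fault-injection service, FIS, for richer scenarios); this sketch assumes staging instances opted in via a hypothetical chaos=optin tag:

    import random
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def stop_random_optin_instance():
        """Stop one running instance that has opted in via the chaos=optin tag."""
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": "tag:chaos", "Values": ["optin"]},
                {"Name": "instance-state-name", "Values": ["running"]},
            ]
        )["Reservations"]
        instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]
        if instances:
            victim = random.choice(instances)
            ec2.stop_instances(InstanceIds=[victim])
            print(f"Stopped {victim}; did anything page?")

    stop_random_optin_instance()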

padjo

4 months ago

Friends don’t let friends use us-east-1

polaris64

4 months ago

It looks like DNS has been restored: dynamodb.us-east-1.amazonaws.com. 5 IN A 3.218.182.189

__alexs

4 months ago

Is there any data on which AWS regions are most reliable? I feel like every time I hear about an AWS outage it's in us-east-1.

cmiles8

4 months ago

US-East-1 and its consistent problems are literally the Achilles Heel of the Internet.

goinggetthem

4 months ago

This is from Amazon's latest earnings call, when Andy Jassy was asked why they aren't growing as much as their competitors:

"I think if you look at what matters to customers, what they care they care a lot about what the operational performance is, you know, what the availability is, what the durability is, what the latency and throughput is of of the various services. And I think we have a pretty significant advantage in that area." also "And, yeah, you could just you just look at what's happened the last couple months. You can just see kind of adventures at some of these players almost every month. And so very big difference, I think, in security."

chermi

4 months ago

Stupid question: why isn't the stock down? Couldn't this lead to people jumping to other providers and, at the very least, require some pretty big fees for so dramatically breaking SLAs? Is it just not a big enough fraction of revenue to matter?

helsinkiandrew

4 months ago

> The incident underscores the risks associated with the heavy reliance on a few major cloud service providers.

Perhaps for the internet as a whole, but for each individual service it underscores the risk of not hosting your service in multiple zones or having a backup

shakesbeard

4 months ago

Slack (canvas and huddles), CircleCI, and Bitbucket are also reporting issues due to this.

shinycode

4 months ago

It’s that period of the year when we discover AWS clients that don’t have fallback plans

artyom

4 months ago

Amazon has spent most of its HR post-pandemic efforts in:

• Laying off top US engineering earners.

• Aggressively mandating RTO so the senior technical personnel would be pushed to leave.

• Other political ways ("Focus", "Below Expectations") to push engineering leadership (principal engineers, etc) to leave, without it counting as a layoff of course.

• Terminating highly skilled engineering contractors everywhere else.

• Migrating serious, complex workloads to entry-level employees in cheap office locations (India, Spain, etc).

This push was slow but mostly completed by Q1 this year. Correlation doesn't imply causation? I find that hard to believe in this case. AWS had outages before, but none like this "apparently nobody knows what to do" one.

Source: I was there.

czhu12

4 months ago

Our entire data stack (Databricks and Omni) is also down for us. The nice thing is that AWS is so big and widespread that our customers are much more understanding about outages, given that it's showing up on the news.

hobo_mark

4 months ago

When did Snapchat move out of GCP?

greatgib

4 months ago

When I follow the link, I arrive on a "You broke reddit" page :-o

ctbellmar

4 months ago

Various AI services (e.g. Perplexity) are down as well

bootsmann

4 months ago

Apparently hiring 1000s of software engineers every month was load bearing

renatovico

4 months ago

Docker Hub or GitHub's internal cache may be affected:

    Booting builder
    /usr/bin/docker buildx inspect --bootstrap --builder builder-1c223ad9-e21b-41c7-a28e-69eea59c8dac
    #1 [internal] booting buildkit
    #1 pulling image moby/buildkit:buildx-stable-1
    #1 pulling image moby/buildkit:buildx-stable-1 9.6s done
    #1 ERROR: received unexpected HTTP status: 500 Internal Server Error
    ------
     > [internal] booting buildkit:
    ------
    ERROR: received unexpected HTTP status: 500 Internal Server Error

amai

4 months ago

The internet was once designed to survive a nuclear war. Nowadays it cannot even survive until Tuesday.

saejox

4 months ago

AWS has become the backbone of the internet. It is a single point of failure for most websites.

Other hosting services like Vercel, package managers like npm, and even the Docker registries are down because of it.

mumber_typhoon

4 months ago

>Oct 20 12:51 AM PDT We can confirm increased error rates and latencies for multiple AWS Services in the US-EAST-1 Region. This issue may also be affecting Case Creation through the AWS Support Center or the Support API. We are actively engaged and working to both mitigate the issue and understand root cause. We will provide an update in 45 minutes, or sooner if we have additional information to share.

Weird that case creation uses the same region as the case you'd like to create for.

amai

4 months ago

We are on Azure. But our CI/CD pipelines are failing, because Docker is on AWS.

mcphage

4 months ago

It shouldn’t, but it does. As a civilization, we’ve eliminated resilience wherever we could, because it’s more cost-effective. Resilience is expensive. So everything is resting on a giant pile of single point of failures.

Maybe this is the event to get everyone off of piling everything onto us-east-1 and hoping for the best, but the last few outages didn’t, so I don’t expect this one to, either.

tonymet

4 months ago

I don't think blaming AWS is fair, since they typically exceed their regional and AZ SLAs

AWS makes their SLAs & uptime rates very clear, along with explicit warnings about building failover / business continuity.

Most of the questions on the AWS CSA exam are related to resiliency.

Look, we've all gone the lazy route and done this before. As usual, the problem exists between the keyboard and the chair.

ta1243

4 months ago

Paying for resilience is expensive. not as expensive as AWS, but it's not free.

Modern companies live life on the edge. Just in time, no resilience, no flexibility. We see the disaster this causes whenever something unexpected happens: the Ever Given blocking the Suez, for example, let alone something like Covid.

However increasingly what should be minor loss of resilience, like an AWS outage or a Crowdstrike incident, turns into major failures.

This fragility is something government needs to legislate to prevent. When one supermarket is out that's fine - people can go elsewhere, the damage is contained. When all fail, that's a major problem.

On top of that, the attitude the entire sector has is also bad. People think IT can fail once or twice a year and it's not a problem. If that attitude spreads to truly important systems it will lead to major civil problems. Any civilisation is three good meals away from anarchy.

There's no profit motive to avoid this, companies don't care about being offline for the day, as long as all their mates are also offline.

bigbuppo

4 months ago

Whose idea was it to make the whole world dependent on us-east-1?

Ekaros

4 months ago

Wasn't the point of paying such a premium for AWS that you always get at least six nines of availability, if not more?

rwke

4 months ago

With more and more parts of our lives depending on often only one cloud infrastructure provider as a single point of failure, enabling companies to have built-in redundancy in their systems across the world could be a great business.

Humans have built-in redundancy for a reason.

JCM9

4 months ago

US-East-1 is literally the Achilles Heel of the Internet.

mrbluecoat

4 months ago

> due to an "operational issue" related to DNS

Always DNS..

t1234s

4 months ago

Do events like this stir conversations in small to medium size businesses to escape the cloud?

binsquare

4 months ago

The internal disruption reviews are going to be fun :)

thomas_witt

4 months ago

Seems to be really only in us-east-1, DynamoDB is performing fine in production on eu-central-1.

nodesocket

4 months ago

Affecting Coinbase[1] as well, which is ridiculous. Can't access the web UI at all. At their scale and importance they should be multi-region if not multi-cloud.

[1] https://status.coinbase.com

jug

4 months ago

Of course this happens when I take a day off from work lol

Came here after the Internet felt oddly "ill" and even got issues using Medium, and sure enough https://status.medium.com

sitzkrieg

4 months ago

it is very funny to me that us-east-1 going down nukes the internet. all those multiple region reliability best practices are for show

michaelcampbell

4 months ago

Anthem Health call center disconnected my wife numerous times yesterday with an ominous robo-message of "Emergency in our call center"; curious if that was this. Seems likely, but what a weird message.

neuroelectron

4 months ago

Sounds like a circular failure: monitoring is flooding their network with metrics and logs, causing DNS to fail and produce more errors, which floods the network further. The likely root cause is something like DNS conflicts or hosts being recreated on the network. Generally this is a small amount of network traffic, but the LBs are dealing with host-address flux, so hosts keep colliding on addresses as they attempt to resolve to new ones that are being lost to dropped packets, and with so many hosts in one AZ there's a good chance they end up with yet another conflicting address.

user

4 months ago

[deleted]

mentalgear

4 months ago

> Amazon Alexa: routines like pre-set alarms were not functioning.

It's ridiculous how everything is being stored in the cloud, even simple timers. It's well past time to move functionality back on-device, which would also have the advantage of making it easier to disconnect from big tech's capitalist surveillance state.

werdl

4 months ago

Looks like a DNS issue - dynamodb.us-east-1.amazonaws.com is failing to resolve.
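
A quick way to sanity-check that from your own machine is to ask the stdlib resolver directly. A minimal sketch (the hostnames are the public regional DynamoDB endpoints; the second region is included purely for comparison):

```python
# Check whether the regional DynamoDB endpoints resolve at all.
import socket

endpoints = [
    "dynamodb.us-east-1.amazonaws.com",
    "dynamodb.eu-central-1.amazonaws.com",  # healthy region, for comparison
]

for host in endpoints:
    try:
        addrs = sorted({info[4][0] for info in socket.getaddrinfo(host, 443)})
        print(f"{host} -> {addrs}")
    except socket.gaierror as exc:
        print(f"{host} -> resolution failed: {exc}")
```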

geye1234

4 months ago

Potentially-ignoramus comment here, apologies in advance, but amazon.com itself appears to be fine right now. Perhaps slower to load pages, by about half a second. Are they not eating (much of) their own dog food?

CTDOCodebases

4 months ago

I'm getting rate limit issues on Reddit so it could be related.

menomatter

4 months ago

What are the design best practices and industry standards for building on-premise fallback capabilities for critical infrastructure? Say for health care/banking ..etc

renegade-otter

4 months ago

If we see more of this, it would not be crazy to assume that all this compelling of engineers to "use AI" and the flood of Looks Good To Me code is coming home.

skywhopper

4 months ago

There are plenty of ways to address this risk. But the companies impacted would have to be willing to invest in the extra operational cost and complexity. They aren’t.

altbdoor

4 months ago

Had a meeting where developers were discussing the infrastructure for an application. A crucial part of the whole flow was completely dependent on an AWS service. I asked if it was a single point of failure. The whole room laughed. I rest my case.

seviu

4 months ago

I can't log in to my AWS account in Germany; on top of that, it is not possible to order anything or change payment options on amazon.de.

No landing page explaining services are down, just scary error pages. I thought my account was compromised. Thanks HN for, as always, being the first to clarify what's happening.

Scary to see that in order to order from Amazon Germany, us-east-1 must be up. Everything else works flawlessly, but payments are a no go.

raw_anon_1111

4 months ago

From the great Corey Quinn

Ah yes, the great AWS us-east-1 outage.

Half the internet’s on fire, engineers haven’t slept in 18 hours, and every self-styled “resilience thought leader” is already posting:

“This is why you need multi-cloud, powered by our patented observability synergy platform™.”

Shut up, Greg.

Your SaaS product doesn’t fix DNS, you're simply adding another dashboard to watch the world burn in higher definition.

If your first reaction to a widespread outage is “time to drive engagement,” you're working in tragedy tourism. Bet your kids are super proud.

Meanwhile, the real heroes are the SREs duct-taping Route 53 with pure caffeine and spite.

https://www.linkedin.com/posts/coquinn_aws-useast1-cloudcomp...

DanHulton

4 months ago

I forget where I read it originally, but I strongly feel that AWS should offer a `us-chaos-1` region, where every 3-4 days, one or two services blow up. Host your staging stack there and you build real resiliency over time.

(The counter-joke is, of course, "but that's `us-east-1` already!" But I mean deliberately and frequently.)
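
For what it's worth, you can approximate a tiny slice of that idea in staging today by wrapping dependency calls with random fault injection. A minimal sketch, with the decorator name, failure rate, and exception type all made up for illustration:

```python
# Toy fault injection for a staging environment: randomly fail a fraction of
# calls so failure handling gets exercised continuously.
import functools
import random

class InjectedFault(RuntimeError):
    """Raised instead of calling the real dependency."""

def chaotic(failure_rate=0.05):
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if random.random() < failure_rate:
                raise InjectedFault(f"simulated outage in {func.__name__}")
            return func(*args, **kwargs)
        return wrapper
    return decorate

@chaotic(failure_rate=0.10)  # 10% of staging calls blow up on purpose
def fetch_user(user_id):
    return {"id": user_id}  # stand-in for the real datastore call
```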

jamesbelchamber

4 months ago

This website just seems to be an auto-generated list of "things" with a catchy title:

> 5000 Reddit users reported a certain number of problems shortly after a specific time.

> 400000 A certain number of reports were made in the UK alone in two hours.

mannyv

4 months ago

This is why we use us-east-2.

munchlax

4 months ago

Nowadays when this happens it's always something. "Something went wrong."

Even the error message itself is wrong whenever that one appears.

comp_throw7

4 months ago

We're seeing issues with RDS proxy. Wouldn't be surprised if a DNS issue was the cause, but who knows, will wait for the postmortem.

twistedpair

4 months ago

Wow, about 9 hours later and 21 of 24 Atlassian services are still showing up as impacted on their status page.

Even @ 9:30am ET this morning, after this supposedly was clearing up, my doctor's office's practice management software was still hosed. Quite the long tail here.

https://status.atlassian.com/

0xbadcafebee

4 months ago

We never went down in us-east-1 during this incident. We have tons of high-traffic sites/services. Not multi-region, not multi-cloud.

You're gonna hear mostly complaints in this thread, but simple, resilient, single-region architecture is still reliable as hell in AWS, even in the worst region.

fujigawa

4 months ago

Appears to have also disabled that bot on HN that would be frantically posting [dupe] in all the other AWS outage threads right about now.

AtNightWeCode

4 months ago

Considering the history of east-1 it is fascinating that it still causes so many single point of failure incidents for large enterprises.

socalgal2

4 months ago

Amazon itself appears to be out for some products. I get a "Sorry, we couldn't find that page" when clicking on products.

the-chitmonger

4 months ago

I'm not sure if this is directly related, but I've noticed my Apple Music app has stopped working (getting connection error messages). Didn't realize the data for Music was also hosted on AWS, unless this is entirely unrelated? I've restarted my phone and rebooted the app to no avail, so I'm assuming this is the culprit.

ngruhn

4 months ago

Can't login to Jira/Confluence either.

aeon_ai

4 months ago

It's not DNS

There's no way it's DNS

It was DNS

moralestapia

4 months ago

Curious to know how much does an outage like this cost to others.

Lost data, revenue, etc.

I'm not talking about AWS but whoever's downstream.

Is it like 100M, like 1B?

itqwertz

4 months ago

Did they try asking Claude to fix these issues? If it turns out this problem is AI-related, I'd love to see the AAR.

nullorempty

4 months ago

It won't be over until long after AWS resolves it - the outages produce hours of inconsistent data. It especially sucks for financial services, anything built on eventual consistency, and other non-transactional processes. Some of the inconsistencies introduced today will linger and make trouble for years.

karel-3d

4 months ago

Slack is down. Is that related? Probably is.

raspasov

4 months ago

02:34 Pacific: Things seem to be recovering.

rickette

4 months ago

Couple of years ago us-east was considered the least stable region here on HN due to its age. Is that still a thing?

lexandstuff

4 months ago

Yes, we're seeing issues with Dynamo, and potentially other AWS services.

Appears to have happened within the last 10-15 minutes.

wcchandler

4 months ago

This is usually something I see on Reddit first, within minutes, but I've barely seen anything on my front page. While I understand it's likely the subs I'm subscribed to, that was my only reason for using Reddit. I've noticed over the past year that more and more tech-heavy news events don't bubble up as quickly anymore. I also didn't see this post for a while, for whatever reason. And Digg was hit and miss on availability for me; I'm just now seeing it load with an item about this.

I think I might be ready to build out a replacement through vibe coding. I don’t like being dependent on user submissions though. I feel like that’s a challenge on its own.

webdoodle

4 months ago

I in-housed an EMR for a local clinic because of latency and other network issues taking the system offline several times a month (usually at least once a week). We had zero downtime the whole first year after bringing it all in house, and I got employee of the month for several months in a row.

whatsupdog

4 months ago

I cannot log in to my AWS account. And the "my account" page on the regular Amazon website is blank on Firefox, but opens in Chrome.

Edit: I can log in to one of the AWS accounts (I have a few different ones for different companies), but my personal one, which has a ".edu" email, is not logging in.

stego-tech

4 months ago

Not remotely surprised. Any competent engineer knows full well the risk of deploying into us-east-1 (or any “default” region for that matter), as well as the risks of relying on global services whose management or interaction layer only exists in said zone. Unfortunately, us-east-1 is the location most outsourcing firms throw stuff, because they don’t have to support it when it goes pear-shaped (that’s the client’s problem, not theirs).

My refusal to hoard every asset into AWS (let alone put anything of import in us-east-1) has saved me repeatedly in the past. Diversity is the foundation of resiliency, after all.

glemmaPaul

4 months ago

LOL. Make one DB service a central point of failure, charge gold for small compute instances, rage about needing multi-AZ, and push the costs onto the developer/organization. But now it fails at the region level, so are we going to need multi-country setups for simple small applications?

thundergolfer

4 months ago

This is widespread. ECR, EC2, Secrets Manager, Dynamo, IAM are what I've personally seen down.

rafa___

4 months ago

"Oct 20 2:01 AM PDT We have identified a potential root cause for error rates for the DynamoDB APIs in the US-EAST-1 Region. Based on our investigation, the issue appears to be related to DNS resolution of the DynamoDB API endpoint in US-EAST-1..."

It's always DNS...

lsllc

4 months ago

The Ring (doorbell) app isn't working, nor are any of the MBTA (transit) status pages/apps.

bgwalter

4 months ago

Probably related:

https://www.nytimes.com/2025/05/25/business/amazon-ai-coders...

"Pushed to use artificial intelligence, software developers at the e-commerce giant say they must work faster and have less time to think."

Every bit of thinking time spent on a dysfunctional, lying "AI" agent could be spent on understanding the system. Even if you don't move your mouse all the time in order to please a dumb middle manager.

l33tnull

4 months ago

I can't do anything for school because Canvas by Instructure is down because of this.

rdm_blackhole

4 months ago

My app deployed on Vercel, and therefore indirectly deployed on us-east-1, was down for about 2 hours today, then came back up, then went down again 10 minutes ago for 2 or 3 minutes. It seems like there are still intermittent issues happening.

sam1r

4 months ago

Chime has been completely down for almost 12 hours.

It's impacting all banking services with a red status error. Oddly enough, only their direct deposits are functioning without issues.

https://status.chime.com/

fsto

4 months ago

Ironically, the HTTP request to this article timed out twice before a successful response.

shawn_w

4 months ago

One of the radio stations I listen to is just dead air tonight. I assume this is the cause.

ssehpriest

4 months ago

Airtable is down as well.

A lot of businesses have workflows that depend entirely on their data in Airtable.

qrush

4 months ago

AWS's own management console sign-in isn't even working. This is a huge one. :(

moribvndvs

4 months ago

So, uh, over the weekend I decided to use the fact that my company needs a status checker/page to try out Elixir + Phoenix LiveView, and just now I found out my region is down while tinkering with it and watching Final Destination. That’s a little too on the nose for my comfort.

okr

4 months ago

Btw, we had a forced EKS restart last Thursday due to Kubernetes updates, and something was changed with DNS there. We had problems with ndots, which caused some trouble here. Would not be surprised if it is related, heh.

assimpleaspossi

4 months ago

I'm thinking about that one guy who clicked on "OK" or hit return.

EbNar

4 months ago

Maybe it's because of this that trying to pay with PayPal on Lenovo's website has failed three times for me today? Just asking... Knowing how everything is connected nowadays, it wouldn't surprise me at all.

karel-3d

4 months ago

Slack was down, so I thought I will send message to my coworkers on Signal.

Signal was also down.

ronakjain90

4 months ago

We[1] operate out of `us-east-1` but chose not to use any of the cloud-based vendor lock-in (sorry Vercel, Supabase, Firebase, PlanetScale, etc.) - rather a few droplets in DigitalOcean (us-east-1) and Hetzner (EU). We serve 100 million requests/mo and a few million pieces of user-generated content (images)/mo at a monthly cost of just about $1000.

It's not difficult; it's just that we engineers chose convenience and delegated uptime to someone else.

[1] - https://usetrmnl.com

littlecranky67

4 months ago

Maybe unrelated, but yesterday I went to pick up my package from an Amazon Locker in Germany, and the display said "Service unavailable". I'll wait until later today before I go and try again.

AbstractH24

4 months ago

Are there websites that do post-mortems for how the single points of failure impacted the entire internet?

Not just AWS, but Cloudflare and others too. Would be interesting to review them clinically.

dabinat

4 months ago

My site was down for a long time after they claimed it was fixed. Eventually I realized the problem lay with Network Load Balancers so I bypassed them for now and got everything back up and running.

mslm

4 months ago

Happened to be updating a bunch of NPM dependencies and then saw `npm i` freeze, and I'm like... ugh, what did I do? Then npm login wasn't working, so I started searching here for an outage, and voilà.

fogzen

4 months ago

Great. Hope they’re down for a few more days and we can get some time off.

00deadbeef

4 months ago

It's not DNS

There's no way it's DNS

It was DNS

pageandrew

4 months ago

Can't even get STS tokens. RDS Proxy is down, SQS, Managed Kafka.

codebolt

4 months ago

Atlassian cloud is also having issues. Closing in on the 3 hour mark.

bstsb

4 months ago

glad all my services are either Hetzner servers or EU region of AWS!

t0lo

4 months ago

It's weird that we're living in a time where this could be a taste of a prolonged future global internet blackout by adversarial nations. Get used to this feeling I guess :)

grk

4 months ago

Does anyone know if having Global Accelerator set up would help right now? It's in the list of affected services, I wonder if it's useful in scenarios like this one.

jrm4

4 months ago

Hey wait wasn't the internet supposed to route around...?

aaronbrethorst

4 months ago

My ISP's DNS servers were inaccessible this morning. Cloudflare and Google's DNS servers have all been working fine, though: 1.1.1.1, 1.0.0.1, and 8.8.8.8

roosgit

4 months ago

Can confirm. I was trying to send the newsletter (with SES) and it didn't work. I was thinking my local boto3 was old, but I figured I should check HN just in case.

disposable2020

4 months ago

I seem to recall other issues around this time in previous years. I wonder if this is some change getting shoehorned in ahead of some re:Invent release deadline...

draxil

4 months ago

I was just about to post that it didn't affect us (heavy AWS users, in eu-west-1). Buut, I stopped myself because that was just massively tempting fate :)

toephu2

4 months ago

Half the internet goes down because part of AWS goes down... what happened to companies having redundant systems and not having a single point of failure?

vivzkestrel

4 months ago

Stupid question: is buying a server rack and running it at home subject to more downtime in a year than this? Has anyone done an actual SLA analysis?
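
Not a full SLA analysis, but the arithmetic for comparing availability targets is simple enough. A minimal sketch converting "nines" into allowed downtime per year:

```python
# Allowed downtime per year for a given availability target.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (99.0, 99.9, 99.95, 99.99, 99.999):
    downtime = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{availability:>7}% availability -> {downtime:8.1f} minutes of downtime/year")
```

By that yardstick, 99.99% allows roughly 53 minutes of downtime a year.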

spwa4

4 months ago

Reddit seems to be having issues too:

"upstream connect error or disconnect/reset before headers. retried and the latest reset reason: connection timeout"

user

4 months ago

[deleted]

tomaytotomato

4 months ago

Slack, Jira and Zoom are all sluggish for me in the UK

twistedpair

4 months ago

I just saw services that had been up since 5:45AM ET go down around 12:30PM ET. Seems AWS has broken Lambda again in their efforts to fix things.

megous

4 months ago

I didn't even notice anything was wrong today. :) Looks like we're well disconnected from the US internet infra quasi-hegemony.

hexbin010

4 months ago

Why after all these years is us-east-1 such a SPOF?

klon

4 months ago

Statuspage.io seems to load (though slowly), but what is the point if you can't post an incident because the Atlassian ID service is down?

yuvadam

4 months ago

During the last us-east-1 apocalypse 14 years ago, I started awsdowntime.com - don't make me register it again and revive the page.

alvis

4 months ago

Why would us-east-1 take down many UK banks and even UK gov websites too!? Shouldn't they operate in the UK region due to GDPR?

BiraIgnacio

4 months ago

It's scary to think about how much power and perhaps influence the AWS platform has (although it shouldn't be surprising).

montek01singh

4 months ago

I cannot create a support ticket with AWS either.

rob

4 months ago

My Alexa is hit or miss at responding to queries right now at 5:30 AM EST. Was wondering why it wasn't answering when I woke up.

danielpetrica

4 months ago

In moments like this I think devs should invest in vendor independence if they can. While I'm not at that stage yet (Cloudflare dependence), using open technologies like Docker (or Kubernetes) and Traefik instead of managed services can help in these disaster situations, by letting you switch to a different provider faster than rebuilding from zero. As a disclosure, I'm still not at that point with my own infrastructure, but I'm trying to slowly get there.

hipratham

4 months ago

Strangely, some of our services are scaling up in us-east-1, and there is a downtick on downdetector.com, so the issue might be resolving.

donmb

4 months ago

Asana is down, Postman workspaces don't load, and Slack is affected. And the worst: Heroku Scheduler just refused to trigger our jobs.

jpfromlondon

4 months ago

This will always be a risk when sharecropping.

Danborg

4 months ago

r/aws not found

There aren't any communities on Reddit with that name. Double-check the community name or start a new community.

user

4 months ago

[deleted]

world2vec

4 months ago

Slack and Zoom working intermittently for me

jcmeyrignac

4 months ago

Impossible to connect to JIRA here (France).

kedihacker

4 months ago

Only us-east-1 gets new services immediately; other regions might, but it's not guaranteed. Which regions are a good alternative?

user

4 months ago

[deleted]

pardner

4 months ago

Darn, on Heroku even the "maintenance mode" (redirects all routes to a static url) won't kick in.

user

4 months ago

[deleted]

antihero

4 months ago

My website on the cupboard laptop is fine.

cpfleming

4 months ago

Seems to be upsetting Slack a fair bit, messages taking an age to send and OIDC login doesn't want to play.

devttyeu

4 months ago

Can't update my self-hosted Home Assistant because HAOS depends on Docker Hub, which seems to be still down.

suralind

4 months ago

I wonder how their nines are going. Guess they'll have to stay pretty stable for the next 100 years.

user

4 months ago

[deleted]

magnio

4 months ago

npm and pnpm are badly affected as well. Many packages are returning 502 when fetched. Such a bad time...

kevinsundar

4 months ago

AWS pros know to never use us-east-1. Just don't do it. It is easily the least reliable region

homeonthemtn

4 months ago

"We should have a fail back to US-West."

"It's been on the dev teams list for a while"

"Welp....."

tdiff

4 months ago

That strange feeling of the world getting cleaner for a while without all these dependent services.

dude250711

4 months ago

They are amazing at LeetCode though.

nik736

4 months ago

Twilio seems to be affected as well

busymom0

4 months ago

For me Reddit is down and also the amazon home page isn't showing any items for me.

ares623

4 months ago

Did someone vibe code a DNS change

1970-01-01

4 months ago

Completely detached from reality, AMZN has been up all day and closed up 1.6%. Wild.

sinpor1

4 months ago

Its influence is so great that it caused half of the internet to stop working properly.

lawlessone

4 months ago

Am I imagining it, or are more things like this happening in recent weeks than usual?

YouAreWRONGtoo

4 months ago

I don't get how you can be a trillion dollar company and still suck this much.

nivekney

4 months ago

Wait a second, Snapchat impacted AGAIN? It was impacted during the last GCP outage.

codegladiator

4 months ago

They haven't listed SES there yet in the affected services on their status page

TrackerFF

4 months ago

Lots of outage happening in Norway, too. So I'm guessing it is a global thing.

IOT_Apprentice

4 months ago

Apparently IMDb, an Amazon service, is impacted. LOL, no multi-region failover.

countWSS

4 months ago

Reddit itself is breaking down and errors appear. Does Reddit itself depend on this?

bpye

4 months ago

Amazon.ca is degraded, some product pages load but can't see prices. Amusing.

motbus3

4 months ago

Always a lovely Monday when you wake just in time to see everything going down

pmig

4 months ago

Thank god we built all our infra on top of EKS, so everything works smoothly =)

assimpleaspossi

4 months ago

As of 4:26am Central Time in the USA, it's back up for one of my services.

sph

4 months ago

10:30 on a Monday morning and already slacking off. Life is good. Time to touch grass, everybody!

kkfx

4 months ago

Honestly, anyone can have outages; that's nothing extraordinary. What's wrong is the number of impacted services. We chose (or at least mostly chose) to ditch mainframes for clusters partly for resilience. Now, with cheap desktop iron labeled "stable enough to be a serious server", we have seen the mainframe re-created, sometimes as a cluster of VMs on top of a single server, sometimes as cloud services.

Ladies and gentlemen, it's about time to learn reshoring in the IT world as well. Owning nothing and renting everything means extreme fragility.

bilekas

4 months ago

These things happen when profit is the measure of everything. Change your provider, but if their number doesn't go up, they won't be reliable.

So your complaints mean nothing, because "number go up".

I remember the good old days of everyone starting a hosting company. We never should have left.

ZeWaka

4 months ago

Alexa devices are also down.

Ygg2

4 months ago

Ironically enough I can't access Reddit due to no healthy upstream.

valdiorn

4 months ago

I missed a parcel delivery because a computer server in Virginia, USA went down, and now the doorbell on my house in England doesn't work. What. The. Fork.

How the hell did Ring/Amazon not include a radio-frequency transmitter for the doorbell and chime? This is absurd.

To top it off, I'm trying to do my quarterly VAT return, and Xero is still completely borked, nearly 20 hours after the initial outage.

jodrellblank

4 months ago

Another time to link The Machine Stops by E.M. Forster, 1909: https://web.cs.ucdavis.edu/~rogaway/classes/188/materials/th...

> “The Machine,” they exclaimed, “feeds us and clothes us and houses us; through it we speak to one another, through it we see one another, in it we have our being. The Machine is the friend of ideas and the enemy of superstition: the Machine is omnipotent, eternal; blessed is the Machine.”

..

> "she spoke with some petulance to the Committee of the Mending Apparatus. They replied, as before, that the defect would be set right shortly. “Shortly! At once!” she retorted"

..

> "there came a day when, without the slightest warning, without any previous hint of feebleness, the entire communication-system broke down, all over the world, and the world, as they understood it, ended."

gritzko

4 months ago

idiocracy_window_view.jpg

Liftyee

4 months ago

Damn. This is why Duolingo isn't working properly right now.

TrackerFF

4 months ago

Lots of outage in Norway, started approximately 1 hour ago for me.

ryanmcdonough

4 months ago

Now, I may well be naive - but isn't the point of these systems that you fail over gracefully to another data centre and no-one notices?

user

4 months ago

[deleted]

sineausr931

4 months ago

On a bright note, Alexa has stopped pushing me merchandise.

ta1243

4 months ago

Meanwhile my pair of 12-year-old Raspberry Pis handling my home services like DNS survive their 3rd AWS us-east-1 outage.

"But you can't do webscale uptime on your own"

Sure. I suspect even a single pi with auto-updates on has less downtime.

nokeya

4 months ago

Serverless is down because servers are down. What an irony.

j45

4 months ago

More and more I want to be cloud agnostic or multi-cloud.

redeux

4 months ago

It’s a good day to be a DR software company or consultant

arrty88

4 months ago

I expect gcp and azure to gain some customers after this

nla

4 months ago

I still don't know why anyone would use AWS hosting.

chistev

4 months ago

What is HN hosted on?

croemer

4 months ago

Coinbase down as well

ecommerceguy

4 months ago

Just tried to get into Seller Central, returned a 504.

empressplay

4 months ago

Can't check out on Amazon.com.au, gives error page

XorNot

4 months ago

Well that takes down Docker Hub as well it looks like.

hippo77

4 months ago

Finally an upside to running on Oracle Cloud!

testemailfordg2

4 months ago

Seems like we need more anti-trust cases against AWS, or to break it up; it is becoming too big. Services used in the rest of the world get impacted by issues in one region.

user

4 months ago

[deleted]

a-dub

4 months ago

i am amused at how us-east-1 is basically in the same location as where aol kept its datacenters back in the day.

thecopy

4 months ago

I did get a 500 error from their public ECR too.

jimrandomh

4 months ago

The RDS proxy for our postgres DB went down.

bicepjai

4 months ago

Is this the outage that took Medium down ?

BaudouinVH

4 months ago

canva.com was down until a few minutes ago.

codebolt

4 months ago

Atlassian cloud is having problems as well.

_pvzn

4 months ago

Can confirm, also getting hit with this.

al_james

4 months ago

Can't even log in via the AWS access portal.

chasd00

4 months ago

wow I think most of Mulesoft is down, that's pretty significant in my little corner of the tech world.

seanieb

4 months ago

Clearly this is all some sort of mass delusion event, the Amazon Ring status says everything is working.

https://status.ring.com/

(Useless service status pages are incredibly annoying)

mk89

4 months ago

It's fun to see SREs jumping left and right when they can do basically nothing at all.

"Do we enable DR? Yes/No" - that's all you can do. If you do, it's a whole machinery starting up, which might take longer than the outage itself.

They can't even use Slack to communicate - messages are being dropped/not sent.

And then we laugh at the South Koreans for not having backed up their hard drives (which got burnt by actual fire, a statistically far rarer event than an AWS outage). OK, that's a huge screw-up, but hey, this is not insignificant either.

What will happen now? Nothing, like nothing happened after Crowdstrike's bug last year.

mmmlinux

4 months ago

Ohno, not Fortnite! oh, the humanity.

t0lo

4 months ago

Can't log into tidal for my music

hubertzhang

4 months ago

I cannot pull images from docker hub.

goodegg

4 months ago

Terraform Cloud is having problems too.

mpcoder

4 months ago

I can't even see my EKS clusters

zoklet-enjoyer

4 months ago

Is this why Wordle logged me out and my 2 guesses don't seem to have been recorded? I am worried about losing my streak.

tosh

4 months ago

SES and signal seem to work again

htrp

4 months ago

Thundering herd problems... every time they say they've fixed it, something else breaks.
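
The standard client-side mitigation for that recovery-time pile-on is retrying with exponential backoff and jitter, so callers don't all hammer a service the instant it comes back. A minimal sketch (function and parameter names are illustrative):

```python
# Retry with exponential backoff and "full jitter" to spread out retries
# instead of having every client retry at the same instant.
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=0.5, max_delay=30.0):
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            cap = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, cap))  # full jitter
```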

8cvor6j844qw_d6

4 months ago

That's unusual.

I was under the impression that having multiple availability zones guarantees high availability.

It seems this is not the case.

fastball

4 months ago

One of my co-workers was woken up by his Eight Sleep going haywire. He couldn't turn it off because the app wouldn't work (presumably running on AWS).

teunlao

4 months ago

us-east-1 down again. We all know we should leave. None of us will.

president_zippy

4 months ago

I wonder how much better the uptime would be if they made a sincere effort to retain engineering staff.

Right now on levels.fyi, the highest-paying non-managerial engineering role is offered by Oracle. They might not pay the recent grads as well as Google or Microsoft, but they definitely value the principal engineers w/ 20 years of experience.

zwnow

4 months ago

I love this to be honest. Validates my anti cloud stance.

bdangubic

4 months ago

If you put your sh*t in us-east-1, you need to plan for this :)

askonomm

4 months ago

Docker is also down.

worik

4 months ago

This outage is a reminder:

Economic efficiency and technical complexity are both, separately and together, enemies of resilience

goodegg

4 months ago

Happy Monday People

ArcHound

4 months ago

Good luck to all on-callers today.

It might be an interesting exercise to map how many of our services depend on us-east-1 in one way or another. One can only hope that somebody would do something with the intel, even though it's not a feature that brings money in (at least from business perspective).

ktosobcy

4 months ago

Uhm... E(U)ropean sovereignty (and in general spreading the hosting out as much as possible) needed ASAP…

tosh

4 months ago

seeing issues with SES in us-east-1 as well

kitd

4 months ago

Oh FFS. I can't even access the NYT puzzles in the meantime... Seriously disrupted, man.

grenran

4 months ago

seems like services are slowly recovering

ivape

4 months ago

There are entire apps like Reddit that are still not working. What the fuck is going on?

redwood

4 months ago

Surprising and sad to see how many folks are using DynamoDB. There are more full-featured multi-cloud options that don't lock you in and don't have these single-point-of-failure problems.

And they give you a much better developer experience...

Sigh

pantulis

4 months ago

Now I know why the documents I was sending to my Kindle didn't go through.

iwontberude

4 months ago

worst outage since xmas time 2012

thinkindie

4 months ago

Today’s reminder: multi-region is so hard even AWS can’t get it right.

LightBug1

4 months ago

Remember when the "internet will just route around a network problem"?

FFS ...

solatic

4 months ago

And yet, AMZN is up for the day. The market doesn't care. Crazy.

JCharante

4 months ago

Ring is affected. Why doesn’t Ring have failover to another region?

Aldipower

4 months ago

altavista.com is also down!

chaidhat

4 months ago

is this why docker is down?

SergeAx

4 months ago

How much longer are we going to tolerate this marketing bullshit about "Designed to provide 99.999999999% durability and 99.99% availability"?

user

4 months ago

[deleted]

jumploops

4 months ago

"Never choose us-east-1"

aiiizzz

4 months ago

Slack was acting slower than usual, but did not go down. Color me impressed.

dangoodmanUT

4 months ago

Reminder that AZs don't go down

Entire regions go down

Don't pay for cross-AZ traffic, friends.

throw-10-13

4 months ago

Imagine spending millions on DevOps and SRE only to still have your mission-critical service go down because Amazon still has baked-in regional dependencies.

jdlyga

4 months ago

Time to start calling BS on the 9's of reliability

DataDaemon

4 months ago

But but this is a cloud, it should exist in the cloud.

martinheidegger

4 months ago

> Designed to provide 99.999% durability and 99.999% availability

Still designed, not implemented.

xodice

4 months ago

Major us-east-1 outages happened in 2011, 2015, 2017, 2020, 2021, 2023, and now again. I understand that us-east-1, N. VA, was the first DC but for fucks sake they've had HOW LONG to finish AWS and make us-east-1 not be tied to keeping AWS up.

nemo44x

4 months ago

Someone’s got a case of the Mondays.

grebc

4 months ago

Good thing hyperscalers provide 100% uptime.

BartjeD

4 months ago

So much for the peeps claiming amazing Cloud uptime ;)

dorongrinstein

4 months ago

Anyone needing multi-cloud WITH EASE, please get in touch. https://controlplane.com

I am the CEO of the company and started it because I wanted to give engineering teams an unbreakable cloud. You can mix-n-match services of ANY cloud provider, and workloads failover seamlessly across clouds/on-prem environments.

Feel free to get in touch!

avi_vallarapu

4 months ago

This is why it is important to plan disaster recovery and also plan multi-cloud architectures.

Our applications and databases must have ultra-high availability. That can be achieved by hosting applications and data platforms across different regions for failover.

Critical businesses should also plan for replication across multiple cloud platforms. You may use some of the existing solutions out there that can help with such implementations for data platforms:

- Qlik Replicate
- HexaRocket

and some more.

Or rather implement native replication solutions available with data platforms.
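
At the application layer, the simplest version of this failover idea is "try the primary region, fall back to the standby". A minimal sketch, assuming the data is already replicated and using placeholder endpoint URLs:

```python
# Pick the first healthy regional endpoint; assumes replication is handled
# elsewhere and that these health-check URLs are placeholders.
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://api.primary-region.example.com/health",
    "https://api.standby-region.example.com/health",
]

def first_healthy(endpoints=ENDPOINTS, timeout=2.0):
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except (urllib.error.URLError, OSError):
            continue  # unhealthy or unreachable; try the next region
    raise RuntimeError("no healthy region endpoint found")
```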