croemer
3 months ago
Preliminary post incident review: https://azure.status.microsoft/en-gb/status/history/
Timeline
15:45 UTC on 29 October 2025 – Customer impact began.
16:04 UTC on 29 October 2025 – Investigation commenced following monitoring alerts being triggered.
16:15 UTC on 29 October 2025 – We began the investigation and started to examine configuration changes within AFD.
16:18 UTC on 29 October 2025 – Initial communication posted to our public status page.
16:20 UTC on 29 October 2025 – Targeted communications to impacted customers sent to Azure Service Health.
17:26 UTC on 29 October 2025 – Azure portal failed away from Azure Front Door.
17:30 UTC on 29 October 2025 – We blocked all new customer configuration changes to prevent further impact.
17:40 UTC on 29 October 2025 – We initiated the deployment of our ‘last known good’ configuration.
18:30 UTC on 29 October 2025 – We started to push the fixed configuration globally.
18:45 UTC on 29 October 2025 – Manual recovery of nodes commenced while gradual routing of traffic to healthy nodes began after the fixed configuration was pushed globally.
23:15 UTC on 29 October 2025 – PowerApps mitigated its dependency, and customers confirmed mitigation.
00:05 UTC on 30 October 2025 – AFD impact confirmed mitigated for customers.
xnorswap
3 months ago
33 minutes from impact to status page for a complete outage is a joke.
neya
3 months ago
In Microsoft's defense, Azure has always been a complete joke. It's extremely developer unfriendly, buggy and overpriced.
michaelt
3 months ago
If you call that defending microsoft, I'd hate to see what attacking them looks like :)
sfn42
3 months ago
I've only used Azure; to me it seems fine-ish. Some things are rather overcomplicated and it's far from perfect, but I assumed the other providers were similarly complicated and imperfect.
Can't say I've experienced many bugs in there either. It definitely is overpriced but I assume they all are?
sofixa
3 months ago
> In Microsoft's defense, Azure has always been a complete joke. It's extremely developer unfriendly, buggy and overpriced.
Don't forget extremely insecure. There is a quarterly critical cross-tenant CVE with trivial exploitation for them, and it has been like that for years.
madjam002
3 months ago
My favourite was the Azure CTO complaining that Git was unintuitive, clunky and difficult to use
rk06
3 months ago
Hmm, isn't that the same argument we use in defense of windows and ms teams?
campbel
3 months ago
As a technologist, you should always avoid MS. Even if they have a best-in-class solution for some domain, they will use that to leverage you into their absolute worst-in-class ecosystem.
hinkley
3 months ago
I see Amazon using a subset of the same sorts of obfuscations that Microsoft was infamous for. They just chopped off the crusts so it's less obvious that it's the same shit sandwich.
imglorp
3 months ago
That's about how long it took to bubble up three levels of management and then go past the PR and legal teams for approvals.
infaloda
3 months ago
More importantly:
> 15:45 UTC on 29 October 2025 – Customer impact began.
> 16:04 UTC on 29 October 2025 – Investigation commenced following monitoring alerts being triggered.
A 19-minute delay in alerting is a joke.
Xss3
3 months ago
That does not say it took 19 minutes for alerts to appear. "Following" could mean any amount of time.
hinkley
3 months ago
10 minutes to alert, to avoid flapping false positives. A 10-minute response window for first responders. Or a 5-minute window before failing over to backup alerts, and 4 minutes to wake up, have coffee, and open the appropriate windows.
thayne
3 months ago
Unfortunately, that is also typical. I've seen it take longer than that for AWS to update their status page.
The reason is probably that changes to the status page require executive approval, because false positives could lead to bad publicity and potentially to having to reimburse customers for failing to meet SLAs.
ape4
3 months ago
Perhaps they could set the time to when it really started after executive approval.
sbergot
3 months ago
and for a while the status was "there might be issues on azure portal".
ambentzen
3 months ago
There might have been, but they didn't know because they couldn't access it. Could have been something totally unrelated.
schainks
3 months ago
AWS is either “on it” or they will say something somewhere between 60 and 90 minutes after impact.
We should feel lucky MSFT is so consistent!
Hug ops to the Azure team, since management is shredding up talent over there.
HeavyStorm
3 months ago
I've been on bridges where people _forgot_ to send comms for dozens of minutes. Too many inexperienced people around these days.
onionisafruit
3 months ago
At 16:04 “Investigation commenced”. Then at 16:15 “We began the investigation”. Which is it?
ssss11
3 months ago
Quick coffee run before we get stuck in mate
ozim
3 months ago
Load some carbs with chocolate chip cookies as well, that’s what I would do.
You don’t want to debug stuff with low sugar.
normie3000
3 months ago
One crash after another
red-iron-pine
3 months ago
burn a smoko and take a leak
taco_emoji
3 months ago
16:04 Started running around screaming
16:15 Sat down & looked at logs
not_a_bot_4sho
3 months ago
I read it as the second investigation being specific to AFD. The first more general.
onionisafruit
3 months ago
I think you’re right. I missed that subtlety on first reading.
oofbey
3 months ago
“Our protection mechanisms, to validate and block any erroneous deployments, failed due to a software defect which allowed the deployment to bypass safety validations.”
Very circular way of saying “the validator didn’t do its job”. This is AFAICT a pretty fundamental root cause of the issue.
It’s never good enough to have a validator check the content and hope that it finds all the issues. Validators are great and can speed a lot of things up. But because they are independent code paths, they will always miss something. For critical services you have to assume the validator will be wrong, and be prepared to contain the damage WHEN it is wrong.
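A minimal sketch of that "contain the damage" idea, with invented names and thresholds (`deploy_to_ring`, `health_check`, `rollback`, the ring list, and the error budget are all hypothetical, not Microsoft's actual pipeline): validation is only the first gate, and a staged rollout with an automatic rollback guard limits the blast radius when the validator is wrong.

```python
import time

# Hypothetical ring-by-ring rollout. Names and thresholds are illustrative only.
RINGS = ["canary", "region-1", "region-2", "global"]
ERROR_BUDGET = 0.01          # max acceptable error-rate increase per ring
BAKE_TIME_SECONDS = 600      # let real traffic exercise the new config

def deploy_config(config, validate, deploy_to_ring, health_check, rollback):
    """Push a config out in stages; never trust the validator alone."""
    if not validate(config):                 # first gate: static validation
        raise ValueError("config failed validation")

    for ring in RINGS:
        deploy_to_ring(ring, config)         # limited blast radius per ring
        time.sleep(BAKE_TIME_SECONDS)        # observe before widening rollout
        if health_check(ring) > ERROR_BUDGET:
            # second gate: real-world signal catches what the validator missed
            rollback(ring, reason="error budget exceeded")
            raise RuntimeError(f"rolled back at ring {ring}")
```

The point of the second gate is that it doesn't need to understand the config at all; it only needs to notice that the service got worse after the change.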
neop1x
3 months ago
>> We began the investigation and started to examine configuration changes within AFD.
Troubleshooting has completed
Troubleshooting was unable to automatically fix all of the issues found. You can find more details below.
>> We initiated the deployment of our ‘last known good’ configuration.
System Restore can help fix problems that might be making your computer run slowly or stop responding.
System Restore does not affect any of your documents, pictures, or other personal data. Recently installed programs and drivers might be uninstalled.
Confirm your restore point
Your computer will be restored to the state it was in before the event in the Description field below.
notorandit
3 months ago
What puzzles me too is the time it took to recognize an outage.
Looks like there was no monitoring and no alerts.
Which is kinda weird.
hinkley
3 months ago
I've seen sensitivity get tuned down to avoid false positives during deployments or rolling restarts for host updates. And to a lesser extent for autoscaling noise. It can be hard to get right.
I think it's perhaps a gap in the tools. We apply the same alert criteria at 2 am that we do while someone is actively running deployment or admin tasks. There's a subset that should stay the same, like request failure rate, and others that should be tuned down, like overall error rate and median response times.
And it means one thing if the failure rate for one machine is 90% and something else if the cluster failure rate is 5%, but if you've only got 18 boxes it's hard to discern the difference. And which is the higher priority error may change from one project to another.
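A minimal sketch of what that could look like, with made-up thresholds (the 2%, 10%, and 90% figures are illustrative, not anything a real provider publishes): the cluster-wide alert bar relaxes while a deployment is in progress, but a single node pinned at a very high failure rate still fires regardless.

```python
from dataclasses import dataclass

# Illustrative thresholds only; real values depend on the service.
STEADY_STATE_ERROR_RATE = 0.02   # alert above 2% errors at 2 a.m.
DEPLOYMENT_ERROR_RATE = 0.10     # tolerate more noise during a rollout
NODE_FAILURE_RATE = 0.90         # one box failing 90% of requests is its own signal

@dataclass
class ClusterSnapshot:
    deploying: bool
    cluster_error_rate: float
    node_error_rates: dict  # node name -> error rate

def alerts(snapshot: ClusterSnapshot) -> list[str]:
    fired = []
    # Cluster-wide threshold relaxes while an operator is actively deploying.
    limit = DEPLOYMENT_ERROR_RATE if snapshot.deploying else STEADY_STATE_ERROR_RATE
    if snapshot.cluster_error_rate > limit:
        fired.append(f"cluster error rate {snapshot.cluster_error_rate:.0%} > {limit:.0%}")
    # A badly broken individual node alerts no matter what else is going on.
    for node, rate in snapshot.node_error_rates.items():
        if rate > NODE_FAILURE_RATE:
            fired.append(f"{node} failing {rate:.0%} of requests")
    return fired
```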
deadbolt
3 months ago
Just what you want in a cloud provider, right?