v_CodeSentinal
12 days ago
Hard agree. As LLMs drive the cost of writing code toward zero, the volume of code we produce is going to explode. But the cost of complexity doesn't go down—it actually might go up because we're generating code faster than we can mentally model it.
SRE becomes the most critical layer because it's the only discipline focused on 'does this actually run reliably?' rather than 'did we ship the feature?'. We're moving from a world of 'crafting logic' to 'managing logic flows'.
ottah
12 days ago
I dunno, I don't think in practice SRE or DevOps is really different from the people we used to call sysadmins (former sysadmin myself). I think the future of mediocre companies is SREs chasing after LLM fires, but a competitive business would have a much better strategy for building systems. Humans are still by far the most efficient and generalized reasoners, and putting an energy-intensive, brittle AI model in charge of most implementation is setting yourself up to fail.
stvvvv
12 days ago
Former sysadmin and I've been an SRE for >15 years now.
They are very different. If your SREs are spending much of their time chasing fires, they are doing it wrong.
mupuff1234
12 days ago
> But the cost of complexity doesn't go down
But how much of current day software complexity is inherent in the problem space vs just bad design and too many (human) chefs in the kitchen? I'm guessing most of it is the latter category.
We might get more software but with less complexity overall, assuming LLMs become good enough.
legorobot
12 days ago
I agree that there's a lot of complexity today due to the process in which we write code (people, lack of understanding the problem space, etc.) vs the problem itself.
Would we say we as humans have captured the "best" way to reduce complexity and write great code? Maybe there are patterns and guidelines but no hard and fast rules. Until we have a better understanding of that, LLMs may not arrive at those levels either. Most of that knowledge is gleaned by sticking with a system -- dealing with past choices and making changes and tweaks to the code, complexity, and solution over time. Maybe the right "memory" or compaction could help LLMs get better over time, but we're just scratching the surface there today.
LLMs output code only as good as their training data. They can reason about the parts of code they're prompted with and offer ideas, but they're inherently based on the data and concepts they've trained on. And unfortunately, it's likely average code, rather than highly respected code, that floods the training data, at least for now.
Ideally I'd love to see better code written and complexity driven down by _whatever_ writes the code. But there will always be verification required when the writer is probabilistic.
oblio
12 days ago
That probably requires superhuman AI, though.
wavemode
12 days ago
By "SRE", are people actually talking about "QA"?
SREs usually don't know the first thing about whether particular logic within the product is working according to a particular set of business requirements. That's just not their role.
stvvvv
12 days ago
Good SREs at a senior level do. They are familiar with the product, and the customers and the business requirements.
Without that it's impossible to correctly prioritise your work.
wavemode
12 days ago
Any SRE who does that is really filling a QA role. It's not part of the SRE job title, which is more about deployments/monitoring/availability/performance, than about specific functional requirements.
In a well-run org, the software engineers (along with QA if you have them) are responsible for validation of requirements.
stvvvv
10 days ago
Well-run ops requires knowing the business. It's not enough to know "this RPC is failing 100%"; you also need to know what the impact on the customer is, and how important it is to the business.
Mature SRE teams get involved with the development of systems before they've even launched, to ensure that they have reliability and supportability baked in from the start, rather than shoddily retrofitted.
zeroCalories
12 days ago
Most companies don't have QA anymore, just the automated tests in their CI/CD pipeline.
belter
12 days ago
>> As LLMs drive the cost of writing code toward zero
And they drive the cost of validating the correctness of such code towards infinity...
storystarling
12 days ago
I see it less as SRE and more about defensive backend architecture. When you are dealing with non-deterministic outputs, you can't just monitor for uptime, you have to architect for containment. I've been relying heavily on LangGraph and Celery to manage state, basically treating the LLM as a fuzzy component that needs a rigid wrapper. It feels like we are building state machines where the transitions are probabilistic, so the infrastructure (Redis, queues) has to be much more robust than the code generating the content.
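To make the "rigid wrapper around a fuzzy component" idea concrete, here's a minimal sketch (not the commenter's actual LangGraph/Celery setup; `call_llm`, `is_valid`, and `run_with_containment` are hypothetical names). The model call is treated as a non-deterministic transition inside a bounded state machine, with explicit retry and fallback states:

```python
import random

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; non-deterministic by design.
    return random.choice(["VALID: result", "garbage output"])

def is_valid(output: str) -> bool:
    # The rigid contract the fuzzy component must satisfy.
    return output.startswith("VALID:")

def run_with_containment(prompt: str, max_retries: int = 3) -> str:
    # State machine: GENERATE -> VALIDATE -> (DONE | RETRY | FALLBACK).
    # The GENERATE -> VALIDATE transition is probabilistic, so we bound
    # it explicitly rather than letting failures propagate downstream.
    for _ in range(max_retries):
        output = call_llm(prompt)
        if is_valid(output):
            return output  # DONE
        # RETRY: loop back to GENERATE
    return "FALLBACK: deterministic default"  # contain the failure
```

In a production version the retry loop would live in a durable task queue (e.g. Celery with Redis as the broker) so that containment survives process crashes, but the shape of the state machine is the same.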
franktankbank
12 days ago
This sounds like the most min maxed drivel. What if I took every concept and dialed it to either zero or 11 and then picked a random conclusion!!!??