trjordan
4 hours ago
I was talking with somebody about their migration recently [0], and we got to speculating about AI and how it might have helped. There were basically 2 paths:
- Use the AI and ask for answers. It'll generate something! It'll also be pleasant, because it'll replace the thinking you were planning on doing.
- Use the AI to automate away the dumb stuff, like writing a bespoke test suite or new infra to run those tests. It'll almost certainly succeed, and be faster than you. And you'll move on to the next hard problem quickly.
It's funny, because these two things represent wildly different vibes. In the first, work is so much easier: AI is doing the job. In the second, work is harder. You've compressed all your thinking work, back-to-back, and you're just doing hard thing after hard thing, because all the easy work happens in the background via LLM.
If you're in a position where there's any amount of competition (like at work, typically), it's hard to imagine the people operating in the 2nd mode not wildly outpacing the people operating in the first, both in quality and volume of output.
But also, it's exhausting. Thinking always is, I guess.
[0] Rijnard, about https://sourcegraph.com/blog/how-not-to-break-a-search-engin...
klodolph
4 hours ago
I’ve tried the second path at work and it’s grueling.
“Almost certainly succeed” requires that you mostly plan out the implementation for it, and then monitor the LLM to ensure that it doesn’t get off track and do something awful. It’s hard to get much other work done in the meantime.
I feel like I’m unlocking, like, 10% or 20% productivity gains. Maybe.
rorylaitila
4 hours ago
Yeah, I think this is what I've tried to articulate to people, and you've summed it up well with "You've compressed all your thinking work, back-to-back, and you're just doing hard thing after hard thing". Most of the bottleneck with any system design is the hard things, the unknown things, the unintended-consequences things. The AIs don't help you much with that.
There is a certain amount of regular work that I don't want to automate away, even though maybe I could. That regular work keeps me in the domain. It leads to epiphanies regarding the hard problems. It adds time, and something to do, in between the hard problems.
CuriouslyC
4 hours ago
I stay at the architecture, code organization, and algorithm level with AI. I plan things at that level, then have the agent do the full implementation. I have tests (which have been audited both manually and by agents), and I have multiple agents audit the implementation code. The pipeline is 100% automated and produces very good results, and you can still get some engineering vibes from the fact that you're orchestrating a stochastic workflow DAG!
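To make the shape of that concrete: here's a minimal sketch of a "stochastic workflow DAG" like the one described (plan → implement → parallel audits → retry on failure). Every name, threshold, and probability here is made up for illustration; in a real pipeline the stubbed functions would call out to an LLM agent rather than a random-number generator.

```python
import random

def implement(spec, rng):
    # Stub for an agent implementing the spec. Real version: an LLM call.
    # "quality" stands in for how well the generated code matches the spec.
    return {"spec": spec, "quality": rng.random()}

def audit(impl, rng):
    # Stub auditor agent: noisy judgment of the implementation's quality.
    # Threshold and noise range are arbitrary illustration values.
    return impl["quality"] + rng.uniform(-0.1, 0.1) > 0.3

def run_pipeline(spec, n_audits=2, max_retries=5, seed=0):
    # Retry the implement step until all auditors approve, up to a limit.
    rng = random.Random(seed)
    for attempt in range(1, max_retries + 1):
        impl = implement(spec, rng)
        if all(audit(impl, rng) for _ in range(n_audits)):
            return impl, attempt
    raise RuntimeError("audits kept failing; go back to planning")

impl, attempts = run_pipeline("add pagination to search API")
print(attempts)
```

The point of the structure is that the stochastic step (implementation) is wrapped in deterministic gates (audits, retries), so the human stays at the planning layer while the loop runs unattended.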
danenania
4 hours ago
I'd actually say that you end up needing to think more in the first example.
Because as soon as you realize that the output doesn't do exactly what you need, or has a bug, or needs to be extended (and has grown beyond the complexity that AI can successfully update), you now need to read and deeply understand a bunch of code you didn't write before you can move forward.
I think it can actually be fine to do this, just to see what gets generated as part of the brainstorming process, but you need to be willing to immediately delete all the code. If you find yourself reading through thousands of lines of AI-generated code, trying to understand what it's doing, it's likely that you're wasting a lot of time.
The final prompt/spec should be so clear and detailed that 100% of the generated code is as immediately comprehensible as if you'd written it yourself. If that's not the case, delete everything and return to planning mode.