> The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing. You're supposed to describe a problem and get a solution without understanding the details. That's the labor-saving promise.
That's the "promise", but in practice it's exactly what you don't want to do.
Models can't think. Logic, accuracy, truth, etc. are not things models understand; they don't understand anything. It's just a happy accident that their output sometimes makes sense to humans, based on the statistical correlations derived during training.
> The result: Nothing feels satisfying anymore. Every problem I solve by hand feels too slow. Every problem I solve with AI feels like it doesn't count. There's this constant background feeling that whatever I just did, someone else would've done it better and faster.
Am I the only one who is not totally impressed by the quality of the code LLMs generate? I've used Claude, Copilot, Codex and local options, all with the latest models, and I have not been impressed on the greenfield projects I work on.
Yes, they're good for rote work, especially writing tests, but if you're doing something novel or off the beaten path, then just lol.
> I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle. It bothers me that my reaction to these blog posts has changed so much. 3 years ago I would be bookmarking a blog post to try it out for myself that weekend. Now those 200 lines of simple code feels only one sentence prompt away and thus waste of time.
If you don't understand these things yourself, how do you know the LLM is "correct" in what it outputs?
I'd venture to say the feeling that models can do it better than you comes from exactly that problem: you don't know enough to have educated opinions and insights about the problem you're handing to the LLM, and so you can't accurately judge the quality of its solutions. There's nothing wrong with not knowing something, and this isn't meant as a swipe at you, your skills or your knowledge, nor am I making assumptions about you. It's just that when I use LLMs for non-trivial tasks I'm intimately familiar with, I'm not impressed. The more I know about a domain, the more nits I can pick with whatever the LLM spews out; when I don't know the domain, it seems like "magic" until I do some further research and find the problems.
To address the bad feelings: I work with several AI companies, and the ones that actually care about quality are very, very adamant about avoiding AI for development outside of augmented search. They actively filter out candidates who used AI to write their resumes or who have AI-slop code contributions, and they apply the same standard to their own code base and development process. It's not about worrying that their IP will be siphoned off to LLM providers; it's about code quality itself, and the fact that there is deep value in the human beings at a company understanding not only the code they write but how the system works at the micro and macro levels. They're acutely aware of the models' limitations, and they don't want them touching their code capital.
--
I think these tools have value; I use them and reluctantly pay for them. But the idea that they're going to replace development with prompt writing is a pipe dream. You can only get so far with next-token generators.