_pdp_
a month ago
To be fair, AI cannot write the code!
It can write some types of code. It is fascinating that it can bootstrap moderately complex projects from a single shot. It does a better job at writing unit tests (not perfect) than most human programmers (few people like writing unit tests). It can even find bugs and point out and correct broken code. But apart from that, AI cannot, or at least not yet, write the code - the full code.
If it could write the code, I do not see why it isn't deployed more effectively to write new types of operating systems and to experiment with new programming languages and programming paradigms. The $3B would be better spent on coming up with truly novel technology that these companies could monopolise with their models. Well, they can't, not yet.
My gut feeling tells me that this might actually be possible at some point, but at an enormous cost that will make it impractical for most intents and purposes. But even if it were possible tomorrow, you would still need people who understand the systems, because without them we are simply doomed.
In fact, I would go as far as saying that the demand for programmers will not plummet but skyrocket, requiring twice as many programmers as we have today. The world simply won't have enough programmers to supply. The reason I think this might actually happen is that the code produced by AI will become so vast over time that even if humans need to handle/understand just 1% of it, that will require more than the 50M developers we have today.
DrewADesign
a month ago
If you’re writing simple code, it’s often a one-shot. With medium-complexity code, it gets the first 90% done in a snap. Easily faster than I could ever do it. The problem is that the 90% is never the part that sucks up a bunch of time; it’s the final 10%, and in many cases for me, it’s been more hindrance than help. If I’d just taken the wheel myself, making heavy use of autocomplete, I’d have done better and with less frustration. Having to debug code I didn’t write that’s an integral part of what I’m building is an annoying context switch for anything non-trivial.
hattmall
a month ago
Same... and the errors are often really nonsensical, nested in ways that a human, thinking brain simply would never produce.
rubslopes
a month ago
> Having to debug code I didn’t write that’s an integral part of what I’m building is an annoying context switch for anything non-trivial.
That's the problem I've been facing. The AI does 90% of the work, but the 10% that I have to do myself is 20x harder because I don't have as much knowledge of the codebase as I would if I had written all of it by myself, like in the old days (2 years ago).
Gigachad
a month ago
Yeah that’s been my experience. The generators are shockingly good. But they don’t get it all the way, and then you are left looking at a mountain of code you don’t understand. And by the time you do, you could have just built it yourself.
pylua
a month ago
Yeah, but you can ask the AI questions about it so you can understand it faster.
Gigachad
a month ago
You are trying to check the code for hallucinations. The AI will just hallucinate back a plausible answer to your questions. It’s entirely detached from the process that generated the code, so it doesn’t have any more insight into it than you do.
DrewADesign
a month ago
Works great if you’re using a very common language. I wasted more time than I care to admit trying this with a Pascal codebase.
_pdp_
a month ago
You are absolutely right.
bdangubic
a month ago
> you are left looking at a mountain of code you don’t understand. And by the time you do, you could have just built it yourself.
SWEs who do not have (or develop) this skill (to fill in the 10% that doesn’t work and to fully understand, very quickly, the 90% that does) will be plumbers in a few years, if not earlier.
DrewADesign
a month ago
SWEs who don’t understand the strengths and limitations of their tools well enough to choose the right one for the job won’t be software developers much longer, but they definitely won’t be plumbers. Maybe cycling through the gig platforms or working entry-level retail. Soft, arrogant, maladroit white-collar workers make hilariously pathetic trade apprentices.
noremotefornow
a month ago
I’m very confused by this, as in my domain I’ve been able to nearly one-shot most coding assignments since this summer (really since Sonnet 3.5) by pointing specific models at well-specified requirements. Things like breaking down a long functional or technical spec document into individual tasks, then implementation, testing, deployment, and change management. Yes, it’s rather straightforward scripting, like automation on Salesforce. That work is toast, and spec-driven development will surge as people, on average, go more hands-off from the direct manipulation of symbols representing machine instructions.
kankerlijer
a month ago
There is a vast difference between writing glue code and engineering systems. Who will come up with the next Spring Boot, Go, Rust, io_uring, or whatever, once the profession has completely reduced itself to chasing quick, pleasing outcomes?
noremotefornow
a month ago
The same was asked of many transitions to a higher abstraction level. Who will know how to use libraries if they google everything?
recursive
a month ago
Maybe some day we'll collectively figure it out. I'm confused how people are getting so much success out of it. That hasn't been my experience. I'm not sure what I'm doing wrong.
bugglebeetle
a month ago
Try using the brainstorming and execute plan loops with the superpowers plugin in Claude Code. It encapsulates the spec driven development process fairly well.
felipeerias
a month ago
This misunderstands what LLM-based tools mean for complex software projects. Nobody expects that you should be able to ask them to write you a whole kernel or a web engine.
Coding agents in particular can be very helpful for senior engineers as a way to carry out investigations, double-check assumptions, or automate the creation of some parts of the code.
One key point is to use their initial output as a draft, as a starting point that still needs to be checked and iterated, often through pair programming with the same tool.
The mid-term impact of this transition is hard to anticipate. We will probably get a wide range of cases, from hyper-productive small teams displacing larger but slower ones, to AI-enhanced developers in organisations with uneven adoption quietly enjoying a lot more free time while keeping the same productivity as before.
112233
a month ago
But how is the senior engineer to get any work done if they need to babysit the agent and accept/reject its actions every two minutes? Genuine question. Letting that thing do "whatever" usually means getting an insane, multiple-thousand-line pull request that will need to be discarded and redone anyway.
Related - how do you get that thing to stop writing comments? If asked not to do so, it will instead put that energy into docstrings, debug logs, and whatnot, poisoning the code for any further "AI" processing.
Stuff like (this is an impression, not an actual output):
// Simplify process by removing redundant operations
int sz = 100;
// Optimized algorithm, removed mapping:
return lookup(load(sz));
Most stuff in the comments is actively misleading. Also the horrible urge to write new code instead of reading the docs, either in-project or on the web...
For writing ffmpeg invocations or single-screen bash scripts, it's a great thing! For writing programs? Actively harmful.
AStrangeMorrow
a month ago
Yeah, for me the three main issues are:
- Overly defensive programming (a sketch of this follows below). In Python that means try/except everywhere without catching specific exceptions, hasattr checks, and, when replacing an approach with a new one, adding a whole “backward compatibility” path in case we need to keep the old approach, etc. That leads to obfuscated errors, silent failures, and bad values triggering old code.
- Plain editing things it is not supposed to. That is, “change A into B” and it does “ok, I did B but I also removed C and D because they had nothing to do with A” or “I also changed C into E, which doesn’t cover all the edge cases, but I liked it better”.
- Re-implementing logic instead of reusing it.
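A sketch of that first pattern (an invented illustration, not real model output; every name here is hypothetical):

class User:
    def __init__(self, name):
        self.name = name

def fetch_user(user_id):
    # Stand-in for real project code.
    return User("alice") if user_id == 1 else None

def load_user(user_id):
    try:
        user = fetch_user(user_id)
    except Exception:  # bare except: the real error is swallowed silently
        user = None
    # needless hasattr check on a type the project itself controls
    if user is not None and hasattr(user, "name"):
        return user.name
    return "unknown"  # silent fallback that hides the failure downstream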
112233
a month ago
Oh, the defensive programming! That thing must have been trained on job interview code, or some enterprise stuff. Heaps of "improvements" and "corrections" that retry, stub, and simply avoid doing things correctly, for no reason. Fix the deserialization bug the thing just caused? No, why! Let's instead assume the API and the docs are wrong and stuff is failing silently, so let's retry all API calls N times, then insert some insane "default value in case the API is unreachable", then run it, corrupt the local DB by writing that default everywhere, run some brain-damaged test that checks that all values are present (they are, the thing just nuked them), claim extraordinary success, and commit it with a message containing emoji medals and rockets.
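In code, the shape of it is roughly this (an impression, like the one above, with hypothetical names; not actual output):

import time

def get_price(api_call, retries=5, default=0.0):
    for _ in range(retries):   # retry N times, assuming the API is flaky
        try:
            return api_call()
        except Exception:      # the bug it just introduced never surfaces
            time.sleep(0.1)
    return default             # the "default value in case API is unreachable"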
And these "oh, I understand, C is completely incorrect" then proceeding to completely sabotage and invalidate everything.
Or assembling some nuclear Python script like MacGyver and running it, nuking even the repo itself if possible.
Best AAA comedy text adventure. Poor people who are forced to "work" like that. But the cleanup work will be glorious. If the companies survive that long.
roncesvalles
a month ago
>One key point is to use their initial output as a draft, as a starting point that still needs to be checked and iterated, often through pair programming with the same tool.
This matches my experience. It's not useful for producing something that you wouldn't have been able to produce yourself, because you still need to verify the output itself and not just the behavior of the output when executed.
I'd peg this as the most fundamental difference in use between LLMs and deterministic compilers/transpilers/codegens.
AndrewKemendo
a month ago
>It is fascinating that it can bootstrap moderately complex projects from a single shot. It does a better job at writing unit tests (not perfect) than most human programmers (few people like writing unit tests). It can even find bugs and point out and correct broken code. But apart from that, AI cannot, or at least not yet, write the code - the full code.
Apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, the fresh water system, and public health ... what have the Romans ever done for us?
zingar
a month ago
Without arguing with your main point:
> (few people like writing unit tests)
The TDD community loves tests and finds writing code without tests more painful than writing tests before code.
Is your point that the TDD community is a minority?
> It does a better job at writing unit tests (not perfect) than most human programmers
I see a lot of very confused tests out of Cursor etc. that neither understand nor communicate intent. Far below the minimum for a decent human programmer.
rhines
a month ago
I see tests as more of a test of the programmer's understanding of their project than anything. If you deeply understand the project requirements, API surface, failure modes, etc. you will write tests that enforce correct behaviour. If you don't really understand the project, your tests will likely not catch all regressions.
AI can write good test boilerplate, but it cannot understand your project for you. If you just tell it to write tests for some code, it will likely fail you. If you use it to scaffold out mocks, test data, or boilerplate for tests which you already know need to exist, it's fantastic. A sketch of what I mean follows.
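For example (hypothetical names; charge_customer stands in for code you'd already have):

from unittest.mock import MagicMock

def charge_customer(gateway, customer, amount):
    # Stand-in for existing project code.
    return gateway.charge(customer["id"], amount)

def test_charge_customer_success():
    # The mechanical parts (mock, test data) are easy to scaffold;
    # knowing the test needs to exist is the part the AI can't do for you.
    gateway = MagicMock()
    gateway.charge.return_value = {"status": "ok"}
    customer = {"id": "cust_123", "email": "a@example.com"}

    assert charge_customer(gateway, customer, 100) == {"status": "ok"}
    gateway.charge.assert_called_once_with("cust_123", 100)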
g-b-r
a month ago
It's very worrying that this comment was downvoted
zingar
a month ago
> it can even find bugs
This is one of the harder parts of the job IMHO. What is missing from writing “the code” that is not required for bug fixes?
csomar
a month ago
LLMs can traverse codebases and do research faster. But I can see this one backfiring badly as structural slop becomes more acceptable, since you can just throw an LLM at it and fix the bug. Eventually you'll reach a stage of stasis where your tech debt is so high that you can't pay the interest even with an LLM.
csomar
a month ago
> It is fascinating that it can bootstrap moderately complex projects from a single shot.
Similar to "git clone bigproject@github.git"? There is nothing fascinating about creating something that has existed around the training set. It is fascinating that the AI can make some variations from the original content though.
> If it could write the code, I do not see why it isn't deployed more effectively to write new types of operating systems and to experiment with new programming languages and programming paradigms.
This is where all the "vibe-coders" disappear. LLMs can write code fast, but so can copy-paste. Most of the "vibe-coded" stuff I see on the Internet is non-functional slop that is super-unoptimized and has wide-open Supabase databases.
To be clear, I am not against LLMs or embracing new technologies. I also don't have this idea that we practice some kind of "craft" when we have been replacing other people for the last couple of decades.
I've been building a game (fully vibe-coded; the rule is that I don't write or read any lines of code), and it has reached a stage where no LLM is able to make any change without fully breaking it (for the curious: https://qpingpong.codeinput.com). The end result is quite impressive, but it is far from replacing anyone who does serious programming anytime soon.