_aavaa_
4 months ago
“Every line of AI-generated code is a plausible-looking liability. It may pass basic tests, only to fail spectacularly in production with an edge case you never considered.”
Every time I read something along these lines I have to wonder whose code these people review during code reviews. It’s not like the alternative is bulletproof code.
adocomplete
4 months ago
I was thinking the same thing. Humans push terrible code to production all the time that slips through code reviews. You spot it, you fix it, and move on.
kanwisher
4 months ago
Also, a lot of the AI code-review tools catch bugs that you wouldn't catch otherwise.
resize2996
4 months ago
I do not know the future; every line of code is a plausible-looking liability.
Balinares
4 months ago
Good code is explicit about its assumptions and enforces them; good companies set hiring bars so as to filter out developers that can't write good code.
There's no such thing as bulletproof, but there is definitely such a thing as knowing where your vital organs are and how to tell when they've been hit.
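A minimal sketch of what "explicit about its assumptions and enforces them" can look like in practice. The function and its parameters are hypothetical, not from this thread; the point is that preconditions are checked loudly rather than left implicit:

```python
def apply_discount(price: float, rate: float) -> float:
    """Apply a fractional discount to a price.

    Hypothetical example: the assumptions (non-negative price,
    rate between 0 and 1) are enforced, so a violation fails fast
    instead of silently producing a wrong number in production.
    """
    if price < 0:
        raise ValueError(f"price must be non-negative, got {price}")
    if not 0.0 <= rate <= 1.0:
        raise ValueError(f"rate must be in [0, 1], got {rate}")
    return price * (1.0 - rate)
```

The same discipline applies whether a human or an LLM wrote the body: a reviewer can check the stated assumptions against the call sites, and a bad input hits the check rather than a vital organ.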
_aavaa_
4 months ago
> good companies set hiring bars so as to filter out developers that can't write good code.
And the others are going to be replaced by one engineer and an AI of equivalent caliber.
hitarpetar
4 months ago
that's right, all your coworkers are incompetent but YOU have the secret
_aavaa_
4 months ago
Aside from what you think I think of myself, do you have an actual disagreement or counter-argument to what I said?
1. Many developers currently employed objectively produce code of equal or lower quality than AI does as it currently stands.
2. It is cheaper and more productive to hire fewer, more competent people and replace the less productive ones with AI (possible due to 1).
3. Short of regulations preventing it, companies will follow through on 2.
hitarpetar
4 months ago
> Many developers are currently employees who objectively produce code of equal or lower quality than AI as it currently is.
what can be asserted without evidence can also be dismissed without evidence
moomoo11
4 months ago
They set up a GitHub action that has AI do an immediate first pass (hallucinates high on drugs and not the good kind) and leave a review.
Considering 80% of team mates are usually dead weight or mid at best (every team is carried by that 1 or 2 guys who do 2-3x), they will do the bare minimum review. Let’s be real.. PIP is real. Job hopping because bad is real.
It’s a problem. I have dealt with this and had to fire.
nakamoto_damacy
4 months ago
The G in AGI is a big deal and it’s missing from LLMs.
Anything coded by an LLM risks being under-generalised.
Asking an LLM to think in a generalised way does not make it an AGI. The critical ability to generalise beyond learned patterns, and not merely to come up with arbitrary patterns but to use correct logic to derive them, is missing from LLMs, because LLMs do not have a logical layer, only a probabilistic one with learned constraints. The defect is the lack of internal logical constraints. It’s a big subject.
I say more about it here:
https://www.forbes.com/sites/hessiejones/2025/09/30/llms-are...
Aka
“layered system”
peacebeard
4 months ago
A lot of people seem to equate using AI tools and deploying code that you don’t understand. All code should be fully understood by the person using the AI tool, then again by the reviewer. The productivity benefit of these tools is still massive, and there is benefit to doing the research and investigation to understand what the LLM is doing if it was not clear up front.
ruszki
4 months ago
> All code should be fully understood by the person using the AI tool, then again by the reviewer.
Should, yeah. But that was not true even before LLMs.
peacebeard
4 months ago
Correct. The problem of poor code review is not new and it is not unique to LLMs.