almoehi
8 hours ago
What I found in practice is that AI-generated code is typically about 30% longer than what an experienced senior would write.
It’s not that it is wrong or anything - it’s just unnecessarily verbose.
Which you could argue is not a problem if it won't be read by humans anymore in the near future.
furyofantares
8 hours ago
> Which you could argue is not a problem if it won't be read by humans anymore in the near future.
It's a problem right now for code that isn't being read by humans.
LLM-backed agents start by writing slightly bad code: a little too verbose, too careful in error handling, too much fallback code, among other common minor LLM-ish flaws. Then its next turn of the crank sees all of that, both as an example and as code it must maintain, and comes out slightly worse in all those ways.
This is why vibing ends up so bad. It keeps producing code that does what you asked, for a fairly long time, so you can get a long way vibing. By the time you hit a brick wall it will have been writing very bad code for a long while, and it's not clear that fixing it is any easier than starting over and trying not to accept any amount of slop.
david-gpu
7 hours ago
> too careful in error handling, too much fallback code
Is it possible that your code goes a little cowboy when it comes to error handling? I don't think I've ever seen code that was too careful when it came to error handling -- but I wrote GPU drivers, so perhaps the expectations were different in that context.
furyofantares
3 hours ago
When I'm writing web services I think I handle almost every error and I don't have this complaint there.
When I'm writing video games there's lots of code where a missing asset or component simply means the game is misconfigured and won't work, and I would like it to fail loudly and immediately. I often like just crashing there. There are sometimes better options too, like making a lot of noise but allowing continuation, but LLMs seem to be bad at using those as well.
Actually, to go back to web services, I do still hate the way I've had LLMs handle errors there too - too often they handle them silently, or worse, provide some fallback behavior that masks the error. They just don't write code that looks like it was written by someone with 1) some assumptions about how the code is going to be used, 2) some idea of how likely those assumptions are to be wrong, and 3) some opinion about how they'd like to find out when an assumption does turn out to be wrong.
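To make the games point concrete, here's a rough sketch of the two styles in TypeScript. All names (Texture, tryLoadTexture, the asset paths) are made up for illustration, not from any real engine:

    // All names here are made up for illustration.
    type Texture = { path: string };

    // Stand-in for a real loader; returns null when the file is missing.
    function tryLoadTexture(path: string): Texture | null {
      return null;
    }

    // LLM-ish style: silently fall back to a placeholder, masking the misconfiguration.
    function getPlayerSpriteForgiving(): Texture {
      return tryLoadTexture("player.png") ?? { path: "placeholder.png" };
    }

    // Fail-fast style: a missing required asset means the game is misconfigured,
    // so crash loudly and immediately instead of limping along.
    function getPlayerSprite(): Texture {
      const tex = tryLoadTexture("player.png");
      if (tex === null) throw new Error("Missing required asset: player.png");
      return tex;
    }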
hedora
3 hours ago
I’ve definitely seen agents add a null check to a computed value inside a function, but then not change the return type to be non-null. Later, the agent adds a null check at each call site, each with a different error message and/or behavior, but all unreachable.
For bonus points, it implements a redundant version of the same API, and that version can return null, so now the dozen redundant checks are sorta unreachable.
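Roughly what that pattern looks like, sketched in TypeScript (all names invented):

    // Invented names, purely to illustrate the pattern.
    type Item = { sku: string };

    function lookupPrice(item: Item): number | null {
      return item.sku === "widget" ? 9.99 : null;
    }

    // The agent adds a guard inside the function, so null can never escape...
    // ...but never narrows the return type from number | null to number.
    function computePrice(item: Item): number | null {
      const price = lookupPrice(item);
      if (price === null) {
        throw new Error(`No price for ${item.sku}`);
      }
      return price;
    }

    // Then each call site gets its own null check, each with a different
    // message or fallback, and all of them unreachable.
    function checkout(item: Item): number {
      const price = computePrice(item);
      if (price === null) {
        console.error("price unavailable, defaulting to 0"); // unreachable
        return 0;
      }
      return price;
    }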