pornel
11 hours ago
Their default solution is to keep digging. It has a compounding effect of generating more and more code.
If they implement something with a not-so-great approach, they'll keep adding workarounds or redundant code every time they run into limitations later.
If you tell them the code is slow, they'll try to add optimized fast paths (more code), specialized routines (more code), custom data structures (even more code). And then add fractally more code to patch up all the problems that code has created.
If you complain it's buggy, you can have 10 bespoke tests for every bug. Plus a new mocking framework created every time the last one turns out to be unfit for purpose.
If you ask to unify the duplication, it'll say "No problem, here's a brand new metamock abstract adapter framework that has a superset of all feature sets, plus two new metamock drivers for the older and the newer code! Let me know if you want me to write tests for the new adapters."
unlikelytomato
10 hours ago
This is why I'm confused when people say it isn't ready to replace most of the programmer workforce.
lwansbrough
8 hours ago
For me, I'll do the engineering work of designing a system, then give it the specific designs and constraints. I'll let it plan out the implementation, then give it notes if it varies in ways I didn't expect. Once we agree on a solution, that's when I set it free. The frontier models usually do a pretty good job with this workflow at this point.
danparsonson
7 hours ago
Yeah that describes most legacy codebases I've worked on XD
Foobar8568
7 hours ago
LLM code is higher quality than any codes I have seen in my 20 years in F500. So yeah, you need to "guide" it and ensure that it will not bypass all the security guidance, for example... But at least you are in control, although the cognitive load is much higher than just "blind trust of what is delivered".
But I can see the carnage with offshoring + LLM, or with "most employees" (including so-called software engineers) + LLM.
_0ffh
2 hours ago
Huh, that explains a lot about the F500, and their buzzword slogans like "culture of excellence".
LLM code is still mostly absurdly bad, unless you tell it in painstaking detail what to do and what to avoid, and never ask it to do a bigger job at a time than a single function or very small class.
Edit: I'll admit though that the detailed explanation is often still much less work than typing everything yourself. But it is a showstopper for autonomous "agentic coding".
thesz
6 hours ago
> LLM code is higher quality than any codes I have seen in my 20 years in F500.
"Any codes"?Foobar8568
6 hours ago
At least my comment hasn't been reviewed or written by an LLM.
And in my French brain, code or codebase is countable and not uncountable.
sebastiennight
5 hours ago
As far as I've ever heard, "le code" used in a codebase is uncountable, like "le café" you'd put in a cup, so we would still say "meilleur que tout le code que j'ai vu en 20 ans" and not "meilleur que tous les codes que j'ai vus en 20 ans".
There is a countable "code" (just like "un café" is either a place, or a cup of coffee, or a type of coffee), and "un code" would be the one used as a password or secret, as in "j'ai utilisé tous les codes de récupération et perdu mon accès Gmail" (I used all the recovery codes and lost Gmail access).
Foobar8568
5 hours ago
You are correct, we generally say le code. To be exact, at the time I was thinking more of toutes les lignes de code (all the lines of code).
troupo
2 hours ago
> As far as I've ever heard, "le code" used in a codebase is uncountable
Now I can't get the Pulp Fiction dialogue out of my head.
- Do you know what they call code in France?
- No
- Le code
ahartmetz
44 minutes ago
As an additional wrinkle, the word seems quite French in origin in this case.
thesz
6 hours ago
I guess you can guide it to write in any style.
But what set me off is a universal quantifier: there was no code seen by you that is of equal or better quality than what LLMs generate.
mejutoco
5 hours ago
cows are brown, from one side.
https://www.neatorama.com/2007/01/22/a-mathematical-cow-joke...
Implicated
6 hours ago
I got curious and had to fire up the ol' LLM to find out what the story is with the words that aren't pluralized. TIL about countable and uncountable nouns. I wonder if the guy giving you trouble about your English speaks French.
thesz
6 hours ago
I speak Russian and some English, but the question was about universal quantification: the author declares that LLMs generate code of better quality than "any codes" he has seen in his career.
iLoveOncall
5 hours ago
I'm native French and nobody would consider code countable. "codes" makes no sense. We'd talk about "lines of code" as a countable in French just like in English.
true_religion
29 minutes ago
Codes is a proper grammatical word in English, but we don’t use it in reference to general computer programming.
You can for example have two different organizations with different codes of conduct.
There is, though, nothing technically wrong with seeing each line of code as a complete individual code and referring to multiple of them as codes.
Implicated
6 hours ago
You'll find, at times, that those communicating in a language that's not their primary one will tend to deviate from what a native speaker might expect.
If that's obvious to you, then you're just being rude. If it's not obvious to you, then you'll also find this is a common deviation (plural "code") among those who come from a particular primary language's region.
Edit: This got me thinking - what is the grammar/rule around what gets pluralized and what doesn't? How does one know that "code" can refer to a single line of code, a whole file of code, a project, or even the entirety of all code your eyes have ever seen without having to have an s tacked on to the end of it?
tsimionescu
6 hours ago
"Codes" as a way to refer to programs/libraries is actually common usage in academia and scientific programming, even by native English speakers. I believe, but am not sure, that it may just be relatively old jargon, before the use of "programs" became more common in the industry.
As for the grammar rule, it's the question of whether a word is countable or uncountable. In common industry usage, "code" is an uncountable noun, just like "flour" in cooking (you say 2 lines of code, 1 pound of flour).
It's actually pretty common for the same word to have both countable and uncountable versions, with different, though related, meanings. Typically the uncountable version is used with a measure of quantity, while the countable version denotes different kinds (flours - different types of flour; peoples - different groups of people).
Implicated
6 hours ago
> Typically the uncountable version is used with a measure of quantity, while the countable version denotes different kinds (flours - different types of flour; peoples - different groups of people).
This was very helpful, thank you! (I had just gotten off the phone with Claude learning about countable and uncountable nouns but those additional details you provided should prove quite valuable)
thesz
6 hours ago
The question was about universal quantification, not a grammar error.
As if the author of the comment had not seen any code that is better than or of equal quality to code generated by LLMs.
Implicated
6 hours ago
Well now I look like an idiot. But I did learn some things! :D My apologies.
thaumasiotes
5 hours ago
> what is the grammar/rule around what gets pluralized and what doesn't? How does one know that "code" can refer to a single line of code, a whole file of code, a project, or even the entirety of all code your eyes have ever seen without having to have an s tacked on to the end of it?
Well, the grammar is that English has two different classes of noun, and any given noun belongs to one class or the other. Standard terminology calls them "mass nouns" and "count nouns".
The distinction is so deeply embedded in the language that it requires agreement from surrounding words; you might compare many [which can only apply to count nouns] vs much [only to mass nouns], or observe that there are separate generic nouns for each class [thing is the generic count noun; stuff is the generic mass noun].
For "how does one know", the general concept is that count nouns refer to things that occur discretely, and mass nouns refer to things that are indivisible or continuous, most prototypically materials like water, mud, paper, or steel.
Where the class of a noun is not fixed by common use (for example, if you're making it up, or if it's very rare), a speaker will assign it to one class or the other based on how they internally conceive of whatever they're referring to.
mettamage
6 hours ago
Giving it prompts of the Shannon project helps for security
YesBox
8 hours ago
Heh, people like to have someone else to blame.
iLoveOncall
5 hours ago
Really? Because this perfectly explains why it will never replace them: it needs an exact language listing everything required to function as you expect it.
You need code to get it to generate proper code.
abm53
4 hours ago
I think GP was making a joke about the ability of a typical programmer.
I certainly read it as one and found it funny.
stingraycharles
11 hours ago
> If you ask to unify the duplication, it'll say "No problem, here's a brand new metamock abstract adapter framework that has a superset of all feature sets, plus two new metamock drivers for the older and the newer code! Let me know if you want me to write tests for the new adapters."
Nevermind the fact that it only migrated 3 out of 5 duplicated sections, and hasn’t deleted any now-dead code.
Mavvie
8 hours ago
Sounds like my coworkers.
lelanthran
4 hours ago
Maybe, but I'd bet a large sum of money that your coworkers aren't turning out this drivel at a rate of 3kLoC per hour.
Can you imagine working with someone who produces 100k lines of unmaintainable code in a single sprint?
This is your future.
Foobar8568
7 hours ago
That's the reality nobody really wants to say.
Jweb_Guru
6 hours ago
It's not reality. I'm really not a fan of the way that people excuse the really terrible code LLMs write by claiming that people write code just as bad. Even if that were true, it is not true that when you ask those people to do otherwise they simply pretend to have done it and forget you asked later.
darkwater
4 hours ago
> it is not true that when you ask those people to do otherwise they simply pretend to have done it and forget you asked later.
I had a coworker that more or less exactly did that. You left a comment in a ticket about something extra to be done, he answered "yes sure" and after a few days proceeded to close the ticket without doing the thing you asked. Depending on the quantity of work you had at the moment, you might not notice that until after a few months, when the missing thing would bite you back in bitter revenge.
imiric
6 hours ago
It's an easy copout.
Tool works as expected? It's superintelligence. Programming is dead.
Tool makes dumb mistake? So do humans.
brabel
5 hours ago
Yes and both are right. It’s a matter of which is working as expected and making fewer mistakes more often. And as someone using Claude Code heavily now, I would say we’re already at a point where AI wins.
lukan
4 hours ago
"Even if that were true, it is not true that when you ask those people to do otherwise they simply pretend to have done it and forget you asked later."
I admire your experience with people.
dns_snek
2 hours ago
The point is, that's not the typical experience and people like that can be replaced. We don't willingly bring people like that on our teams, and we certainly don't aim to replace entire teams with clones of this terrible coworker prototype.
ttoinou
5 hours ago
No but they will despise you for bringing the problem up
duskdozer
3 hours ago
Maybe, but it lets them pump out much, much more code than they otherwise would have been able to. That's the "100x" in their AI productivity multipliers.
marginalia_nu
10 hours ago
My sense is that the code generation is fast, but then you always need to spend several hours making sure the implementation is appropriate, correct, well tested, based on correct assumptions, and doesn't introduce technical debt.
You need to do this when coding manually as well, but the speed at which AI tools can output bad code means it's so much more important.
ehnto
9 hours ago
Well, when you write it manually, you are doing the review and sanity checking in real time. For some tasks, not all but definitely difficult ones, the sanity checking is actually the whole task. The code was never the hard part, so I am much more interested in the evolution of AI's real-world problem-solving skills than in its performance on code problems.
I think programming is giving people a false impression of how intelligent the models are: programmers are meant to be smart, right? So being able to code means the AI must be super smart. But programmers also put a huge amount of their output online for free, unlike most disciplines, and it's all text based. When it comes to problem solving I still see them regularly confused by simple stuff, having to reset context to try and straighten it out. It's not a general-purpose human replacement just yet.
LPisGood
9 hours ago
And it’s slower to review because you didn’t do the hard part of understanding the code as it was being written.
Implicated
9 hours ago
You're holding it wrong.
Set the boundaries and guidelines before it starts working. Don't leave it space to do things you don't understand.
ie: enforce conventions, set specific and measurable/verifiable goals, define skeletons of the resulting solutions if you want/can.
To give an example: I do a lot of image similarity stuff and wanted to test the Redis VectorSet stuff when it was still in beta, and the PHP extension for Redis (the fastest one, which is written in C and is a proper language extension, not a runtime lib) didn't support the new commands. I cloned the repo, fired up Claude Code, and pointed it to a local copy of the Redis VectorSet documentation I put in the directory root, telling it I wanted it to update the extension to provide support for the new commands I would want/need to handle VectorSets. This was, idk, maybe a year ago, so not even Opus. It nailed it. But I chickened out about pushing that into a production environment, so I then told it to just write me a PHP runtime client that mirrors the functionality of Predis (a pure-PHP implementation of a Redis client) but does so via shell commands executed by PHP (lmao, I know).
Define the boundaries, give it guard rails, use design patterns and examples (where possible) that can be used as reference.
philipp-gayret
2 hours ago
You are correct, but developers are not yet ready to face it. The argument you'll always get is the flawed premise that it's less effort to write it yourself (while the same people work in teams that have others writing code for them every day of the week).
slopinthebag
8 hours ago
They aren't holding it wrong, it's a fundamental limitation of not writing the code yourself. You can make it easier to understand later when you review it, but you still need to put in that effort.
marginalia_nu
2 hours ago
So in my experience with Opus 4.6 evaluating it in an existing code base has gone like this.
You say "Do this thing".
- It does the thing (takes 15 min). Looks incredibly fast. I couldn't code that fast. It's inhuman. So far all the fantastical claims hold up.
But still. You ask "Did you do the thing?"
- it says oops I forgot to do that sub-thing. (+5m)
- it fixes the sub-thing (+10m)
You say is the change well integrated with the system?
- It says not really, let me rehash this a bit. (+5m)
- It irons out the wrinkles (+10m)
You say does this follow best engineering practices, is it good code, something we can be proud of?
- It says not really, here are some improvements. (+5m)
- It implements the best practices (+15m)
You say to look carefully at the change set and see if it can spot any potential bugs or issues.
- It says oh, I've introduced a race condition at line 35 in file foo and a null-correctness bug at line 180 of file bar. Fixing. (+15m)
You ask if there's test coverage for these latest fixes?
- It says "i forgor" and adds them. (+15m)
Now the change set has shrunk a bit and is superficially looking good. Still, you must read the code line by line, and with an experienced eye will still find weird stuff happening in several of the functions, there's redundant operations, resources aren't always freed up. (60m)
You ask why it's implemented in such a roundabout way and how it intends for the resources to be freed up?
- It says "you're absolutely right" and rewrites the functions. (+15m)
You ask if there's test coverage for these latest fixes?
- It says "i forgor" and adds them. (+15m)
Now the 15 minutes of amazingly fast AI code gen has ballooned into taking most of the afternoon.
Telling Claude to be diligent, not write bugs, or to write high quality code flat out does not work. And even if such prompting can reduce the odds of omissions or lapses, you still always always always have to check the output. It can not find all the bugs and mistakes on its own. If there are bugs in its training data, you can assume there will be bugs in its output.
(You can make it run through much of this Socratic checklist on its own, but this doesn't really save wall clock time, and doesn't remove the need for manual checking.)
vannevar
10 hours ago
I'd highly recommend working top down, getting it to outline a sane architecture before it starts coding. Then if one of the modules starts getting fouled up, start with a clean sheet context (for that module) incorporating any cautions or lessons learned from the bad experience. LLMs are not yet good at working and reworking the same code, for the reasons you outline. But they are pretty good at a "Groundhog Day" approach of going through the implementation process over and over until they get it right.
coolius
5 hours ago
+1 if you are vibe coding projects from scratch. If the architecture you specify doesn't make sense, the LLM will start struggling; the only way out of its misery is mocking tests. The good thing is that a complete rewrite with proper architecture and lessons learned is now totally affordable.
disgruntledphd2
4 hours ago
I think the best thing about LLMs is how incredibly easy they make it to build one to throw away.
I've definitely built the same thing a few times, getting incrementally better designs each time.
joquarky
6 hours ago
Don't let it deteriorate so far that it can't recover in one session.
Perform regular sessions dedicated to cleaning up tech debt (including docs).
Implicated
9 hours ago
Not trying to be snarky, with all due respect... this is a skill issue.
It's a tool. It's a wildly effective and capable tool. I don't know how or why I have such a wildly different experience than so many that describe their experiences in a similar manner... but... nearly every time I come to the same conclusion that the input determines the output.
> If they implement something with a not-so-great approach, they'll keep adding workarounds or redundant code every time they run into limitations later.
Yes, when the prompt/instructions are overly broad and there's no set of guardrails or guidelines that indicate how things should be done... this will happen. If you're not using planning mode, skill issue. You have to get all this stuff wrapped up and sorted before the implementation begins. If the implementation ends up being done in a "not-so-great" approach - that's on you.
> If you tell them the code is slow
Whew. Ok. You don't tell it the code is slow. Do you tell your coworker "Hey, your code is slow" and expect great results? You ask it to benchmark the code and then you ask it how it might be optimized. Then you discuss those options with it (this is where you do the part from the previous paragraph, where you direct the approach so it doesn't do the "not-so-great approach") until you get to a point where you like the approach and the model has shown it understands what's going on.
Then you accept the plan and let the model start work. At this point you should have essentially directed the approach and ensured that it's not doing anything stupid. It will then just execute, it'll stay within the parameters/bounds of the plan you established (unless you take it off the rails with a bunch of open ended feedback like telling it that it's buggy instead of being specific about bugs and how you expect them to be resolved).
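The "benchmark first, then discuss optimizations" step can be made concrete with a tiny harness. This is a hypothetical sketch (the function names and workload are made up) of turning "it's slow" into a number you can hand the model:

```python
import timeit

# Hypothetical slow path: a linear scan through a list, the kind of
# thing you'd measure before telling the model "it's slow".
def naive_lookup(items, name):
    return items.index(name)

# Candidate optimization: build a dict once, then do O(1) lookups.
def build_lookup(items):
    return {name: idx for idx, name in enumerate(items)}

items = [f"item{i}" for i in range(1000)]
table = build_lookup(items)

# Feedback for the model is now a measurement, not a complaint.
scan_time = timeit.timeit(lambda: naive_lookup(items, "item999"), number=1000)
dict_time = timeit.timeit(lambda: table["item999"], number=1000)
print(f"scan: {scan_time:.5f}s  dict: {dict_time:.5f}s")
```

With numbers like these in the prompt, the model can be told exactly which path regressed and by how much, instead of guessing at fast paths.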
> you can have 10 bespoke tests for every bug. Plus a new mocking framework created every time the last one turns out to be unfit for purpose.
This is an area where I will agree that the models are wildly inept. Someone needs to study what it is about tests, testing environments, and mocking that makes these things go off the rails. The solution to this is the same as the solution to the issue of it keeping digging or chasing its tail in circles... Early in the prompt/conversation/message that sets the approach/intent/task, you state your expectations for the final result. Define the output early, then describe/provide context/etc. The earlier in the prompt/conversation the "requirements" are set, the more sticky they'll be.
And this is exactly the same for the tests. Either write your own tests and have the models build the feature from the test or have the model build the tests first as part of the planned output and then fill in the functionality from the pre-defined test. Be very specific about how your testing system/environment is setup and any time you run into an issue testing related have the model make a note about that and the solution in a TESTING.md document. In your AGENTS.md or CLAUDE.md or whatever indicate that if the model is working with tests it should refer to the TESTING.md document for notes about the testing setup.
Personally, I focus on the functionality, get things integrated and working to the point I'm ready to push it to a staging or production (yolo) environment and _then_ have the model analyze that working system/solution/feature/whatever and write tests. Generally my notes on the testing environment to the model are something along the lines of a paragraph describing the basic testing flow/process/framework in use and how I'd like things to work.
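The test-first handoff described above can be sketched like this (all names are hypothetical; in practice you'd write only the tests and let the model fill in the implementation):

```python
# Tests written by the human first, acting as the spec. The implementation
# below stands in for what the model would be asked to produce.
def slugify(title):
    # Lowercase, split on any whitespace, rejoin with hyphens.
    return "-".join(title.lower().split())

def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_whitespace():
    assert slugify("  A   tidy    Title ") == "a-tidy-title"

test_slugify_basic()
test_slugify_collapses_whitespace()
print("all tests pass")
```

The point is that the tests pin the behavior down before generation starts, so the model has a verifiable goal rather than an open-ended one.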
The more you stick to convention the better off you'll be. And use planning mode.
riffraff
6 hours ago
> Whew. Ok. You don't tell it the code is slow. Do you tell your coworker "Hey, your code is slow" and expect great results?
Yes? Why don't you?
They are capable people who just didn't notice something; if I notice some telemetry and tell them "hey, this is slow", they are expected to understand the reason(s).
Implicated
6 hours ago
So, you observed some telemetry - which would have been some sort of specific metric, right? Wouldn't you communicate that to them as well, not just "it's slow"?
"Hey, I saw that metric A was reporting 40% slower, are you aware already or have any ideas as to what might be causing that?"
Those two approaches are going to produce rather distinctly different results whether you're speaking to a human or typing to a GPU.
bryanrasmussen
5 hours ago
Yeah, if my co-worker can't start figuring out why the code is slow, given a reasonable reference to the code in question, that is a knock against their skills. I would actually expect some ideas as to what the problem is just off the top of their head. That the coding agent can't do that isn't a hit against it specifically; it is just part of what needs to be done differently.
The suggestion to tell the agent to do performance analysis of the part of the code you think is problematic, and offer suggestions for improvements seems like the proper way to talk to a machine, whereas "hey your code is slow" feels like the proper way to talk to a human.
brabel
4 hours ago
As someone who leads a team of engineers: telling someone their code is slow is not nice, helpful, or something a good team member should do. It's like telling them there's a bug and not explaining what the bug is. Code can be slow for infinite reasons; maybe the input you gave is never expected and it's plenty fast otherwise. Or the other dev is not senior enough to know where problems may be. It can be you, when I tell you your OOP code is super slow, but you've only ever done OOP and have no idea how to put data in a memory layout that avoids CPU cache misses or whatever. So no, that's not the proper way to talk to humans. And AI is only as good as the quality of what you're asking. It's a bit like a genie: it will give you what you asked for, not what you actually wanted. Are you prepared for the AI to rewrite your Python code in C to speed it up? Can it just add fast libraries to replace the slow ones you had selected? Can it write advanced optimization techniques it learned about from PhD theses you would never even understand?
bryanrasmussen
an hour ago
>As someone who leads a team of engineers, telling someone their code is slow is not nice, helpful or something a good team member should do
right, I'm sure there are all sorts of scenarios where that is the case, and probably the phrasing would be something like "that seems slow", or "it seems to be taking longer than expected", or some other phrasing that is actually synonymous with "the code is slow". On the other hand, there are also people you can say "the code is slow" to, and they won't worry about it.
>So no that’s not the proper way to talk to humans
In my experience there are lots of proper ways to talk to humans, and part of the propriety depends on what your relationship with them is. So it may be the proper way to talk to a subset of humans, which is generally the only kind of humans one talks to: a subset. I certainly have friends I have worked with for a long time who can say "what the fuck were you thinking here" or all sorts of things that would not be nice coming from other people but are in fact a signifier of our closeness. Evidently you have never led a team with people who enjoyed that relationship between them, which I think is a shame.
Finally, I'll note that when I hear a generalized description of a form of interaction, I tend to give what used to be called "the benefit of the doubt" and assume that, because of the vagaries of human language and the necessity of keeping things from becoming a big long harangue (as every communication otherwise must, in order to cover all bases of potential speech), the generalized description may in fact cover all potential forms of polite interaction of that kind; otherwise I should have to spend an inordinate amount of my time lecturing people I don't know on what moral probity in communication requires.
But hey, to each their own.
on edit: the "what the fuck were you thinking here" quote is also an example of a generalized form of communication that would be rude coming from other people but was absolutely fine given the source, and not an exact quote despite the use of quotation marks in the example.
zabzonk
5 hours ago
Well, I would say something like "We seem to be having some performance issues the business has noticed in the XYZ stuff. Shall we sit down together and see if we can work out if we can improve things?"
girvo
5 hours ago
I absolutely tell a coworker their code is slow and expect them to fix it…
Bayko
an hour ago
I too tell my boss to promote me and expect him to do so.
brabel
5 hours ago
Great answer, and the reason some people have bad experiences is actually patently clear: they don’t work with the AI as a partner, but as a slave. But even for them, AI is getting better at automatically entering planning mode, asking for clarification (what exactly is slow, can you elaborate?), saying some idea is actually bad (I got that a few times), and so on… essentially, the AI is starting to force people to work as a partner and give it proper information, not just tell them “it’s broken, fix it” like they used to do on StackOverflow.
otabdeveloper4
7 hours ago
It is not a tool. It is an oracle.
It can be a tool, for specific niche problems: summarization, extraction, source-to-source translation -- if post-trained properly.
But that isn't what y'all are doing, you're engaging in "replace all the meatsacks AGI ftw" nonsense.
Implicated
6 hours ago
If I was on the "replace all the meatsacks AGI ftw" team then I would have referred to it as an oracle, by your own logic, wouldn't I have?
It's a tool. It's good for some things, not for others. Use the right tool for the job and know the job well enough to know which tools apply to which tasks.
More than anything it's a learning tool. It's also wildly effective at writing code, too. But, man... the things that it makes available to the curious mind are rather unreal.
I used it to help me turn a cat exercise wheel (think huge hamster wheel) into a generator that produces enough power to charge a battery that powers an ESP32 powered "CYD" touchscreen LCD that also utilizes a hall effect sensor to monitor, log and display the RPMs and "speed" (given we know the wheel circumference) in real time as well as historically.
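The measurement side of a build like that reduces to simple arithmetic. A rough sketch in plain Python (the circumference is an assumed value, and the real thing would run on the ESP32, e.g. in C or MicroPython):

```python
# Assumed wheel circumference in metres; measure the actual wheel.
WHEEL_CIRCUMFERENCE_M = 3.8

def rpm_and_speed(pulse_interval_s):
    """One hall-sensor pulse per revolution: the interval between pulses
    gives RPM and the linear speed at the wheel surface."""
    if pulse_interval_s <= 0:
        return 0.0, 0.0
    rpm = 60.0 / pulse_interval_s                              # revs per minute
    speed_kmh = (WHEEL_CIRCUMFERENCE_M / pulse_interval_s) * 3.6  # m/s -> km/h
    return rpm, speed_kmh

# A 2-second gap between pulses: 30 RPM, 6.84 km/h at the surface.
rpm, kmh = rpm_and_speed(2.0)
print(f"{rpm:.1f} RPM, {kmh:.2f} km/h")
```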
I didn't know anything about all this stuff before I started. I didn't AGI myself here. I used a learning tool.
But keep up with your schtick if that's what you want to do.
otabdeveloper4
34 minutes ago
Oracles have their use too, but as long as you keep confusing "oracle" and "tool" you will get nowhere.
P.S. The real big deal is the democratization of oracles. Back in the day building an oracle was a megaproject accessible only to megacorps like Google. Today you can build one for nothing if you have a gaming GPU and use it for powering your kobold text adventure session.
leptons
5 hours ago
>I used it to help me turn a cat exercise wheel (think huge hamster wheel) into a generator that produces enough power to charge a battery that powers an ESP32 powered "CYD" touchscreen LCD that also utilizes a hall effect sensor to monitor, log and display the RPMs and "speed" (given we know the wheel circumference) in real time as well as historically.
So what? That's honestly amateur hour. And the LLM derived all of it from things that have been done and posted about a thousand times before.
You could have achieved the same thing with a few google searches 15 years ago (obviously not with ESP32, but other microcontrollers).
carlosjobim
2 hours ago
Yes, this is exactly the experience I have had with LLMs as a non-programmer trying to make code. When it gets too deep into the weeds I have to ask it to get back a few steps.
bryanrasmussen
10 hours ago
maybe there should be an LLM trained on a corpus of deletions and cleanups of code.
krackers
9 hours ago
I'm guessing there's a very strong prior to "just keep generating more tokens", as opposed to deleting code, that needs to be overcome. Maybe this is done already, but since every git project comes with its own history, you could take a notable open-source project (like LLVM) and then do RL training against each individual patch committed.
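A rough sketch of what mining a repo's history for deletion-heavy "cleanup" commits might look like, parsing `git log --numstat` output (the commit data here is made up; a real pipeline would shell out to git):

```python
def deletion_heavy_commits(numstat_output, ratio=2.0):
    """Return hashes of commits that delete at least `ratio` times as many
    lines as they add: candidate 'cleanup' examples for training."""
    heavy = []
    for block in numstat_output.strip().split("\n\n"):
        lines = block.splitlines()
        sha, added, deleted = lines[0], 0, 0
        for line in lines[1:]:
            a, d, _path = line.split("\t")
            if a == "-" or d == "-":  # binary files report "-" in numstat
                continue
            added += int(a)
            deleted += int(d)
        if deleted > 0 and (added == 0 or deleted / added >= ratio):
            heavy.append(sha)
    return heavy

# Fake `git log --numstat --format=%H` output for two hypothetical commits:
# one deletes 330 lines against 10 added, the other is mostly additions.
SAMPLE = (
    "abc123\n"
    "10\t250\tsrc/parser.py\n"
    "0\t80\tsrc/legacy_shim.py\n"
    "\n"
    "def456\n"
    "300\t5\tsrc/feature.py\n"
)

print(deletion_heavy_commits(SAMPLE))  # → ['abc123']
```

Commits selected this way could then be weighted up during training, countering the bias toward always emitting more code.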
bryanrasmussen
an hour ago
right, it would have to be a specialized tool that you used to do analysis of the codebase every now and then, or of parts that you thought should be cleaned up.
Obviously there is a just keep generating more tokens bias in software management, since so many developer metrics over the years do various lines of code style analysis on things.
But just as experience and managerial practice have over time come to recognize this as a bad bias for ranking devs, it should be clear it is a bad bias for LLMs to have.
movedx01
4 hours ago
Perhaps the problem is that you RL on one patch at a time, failing to capture the overarching long-term theme: an architecture change being introduced gradually over many months, one that exists in the maintainer's mental model but not really explicitly in the diffs.
ashdksnndck
4 hours ago
I think this is in the training data since they use commit data from repos, but I imagine code deletions are rarer than they should be in the real data as well.
bryanrasmussen
30 minutes ago
Deleting and code cleanup are perhaps more an expression of seniority and personal preference. Maybe there should be the same kind of style transfer with code that you see with graphical generative AI: "rewrite this code path in the style of Donald Knuth".
ThrowawayTestr
an hour ago
I feel like there are two types of LLM users: those that understand its limitations, and those that ask it to solve a Millennium Problem on the first try.
codebolt
8 hours ago
I use the restore checkpoint/fork conversation feature in GitHub Copilot heavily because of this. Most of the time it's better to just rewind than to salvage something that's gone off track.
disgruntledphd2
4 hours ago
Yeah I'm a big fan of branching for basically every change, as it provides a known good checkpoint.
fmbb
3 hours ago
It’s in the name, isn’t it?
Generative AI.
leke
6 hours ago
I wonder if the solution is to just ask it to refactor its code once it's working.
mirsadm
3 hours ago
I do this all the time, but then you end up with really over-engineered code that has way more issues than before. Then you're back to prompting to fix a bunch of issues. If you didn't write the initial code, sometimes it's difficult to know the best way to refactor it. The answer people will give is to prompt it for ideas. Well, then you're back to it generating more and more code, and every time it does a refactor it introduces more issues. These issues aren't obvious, though. They're really hard to spot.
MadnessASAP
5 hours ago
You can, and it might make things a bit better. The only real way I've found so far is to start going through file by file, picking it apart.
I wouldn't be surprised if over half my prompts start with "Why ...?", usually followed by "Nope, ... instead”
Maybe the occasional "Fuck that you idiot, throw the whole thing out"
MattGaiser
9 hours ago
> If they implement something with a not-so-great approach, they'll keep adding workarounds or redundant code every time they run into limitations later.
Are you using plan mode? I used to experience the choose-a-poor-approach-and-keep-digging issue, but with planning that seems to have gone away.
esafak
10 hours ago
I have run into this too. Some of it is because models lack the big picture; so-called agentic search (aka grep) is myopic.