codingdave
7 hours ago
If that future came to fruition we'd stop hearing "it works on my machine", and start hearing "it worked with my LLM".
The problem with refactoring code, re-writing codebases, and other major work is not the effort of re-coding. It is that you lose code that has been battle-tested in production over years of use: millions of man-hours of beating on it have turned it into a product that has survived every real-world edge case, attack, and scaling concern in its history.
When you re-write code for the sake of re-writing code, you throw all of that out and end up with a brand-new codebase that has to go through all the production pain all over again.
So no - the trend I'm hearing, of people thinking code will just become an output of an LLM-driven build process, sounds quite naive to me.
ohcmon
7 hours ago
> you lose the code that has been battle-tested
I agree that this is still the most important thing, and I'm not trying to challenge it.
At the same time, we have largely accepted bumping our dependencies when a new version has no breaking changes (especially when there are known security vulnerabilities) — and my point is exactly about that: why do simple renames, extractions, flattenings, and other mechanical changes have to be treated so differently from internal changes that don't touch the public interface?
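To make the comparison concrete, here is a minimal sketch (all names hypothetical, not from the thread) of the kind of change being described: an "extract function" refactor where the public-facing behavior is identical before and after, and can be checked the same way we check that a dependency bump is non-breaking.

```python
# Hypothetical parser, before a mechanical "extract function" refactor.
def parse_price_v1(raw: str) -> int:
    # Strip currency symbol and whitespace, then convert to cents.
    cleaned = raw.strip().lstrip("$")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

# After the refactor: the cleaning step is extracted into a private
# helper. Callers see the exact same signature and results.
def _strip_currency(raw: str) -> str:
    return raw.strip().lstrip("$")

def parse_price_v2(raw: str) -> int:
    cleaned = _strip_currency(raw)
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

# Behavior is preserved across representative inputs — the same kind of
# guarantee we rely on when accepting a non-breaking dependency bump.
for raw in ["$3.50", " 12.05 ", "$7"]:
    assert parse_price_v1(raw) == parse_price_v2(raw)
```

The "battle-tested" argument survives this kind of change precisely because the tested surface (inputs in, values out) is unchanged; only the internal organization moves.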