strix_varius
10 hours ago
To me, the most salient point was this:
> Code reviewing coworkers are rapidly losing their minds as they come to the crushing realization that they are now the first layer of quality control instead of one of the last. Asked to review; forced to pick apart. Calling out freshly added functions that are never called, hallucinated library additions, and obvious runtime or compilation errors. All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
LLMs have arguably made Brandolini's law ("The amount of energy needed to refute bullshit is an order of magnitude larger than to produce it") an understatement. When an inexperienced or just inexpert developer can generate thousands of lines of code in minutes, the responsibility for keeping a system correct and sane gets offloaded onto the reviewers who still know how to reason with human intelligence.
As a litmus test, look at a PR's added/removed LoC delta. LLM-written ones are almost entirely additive, whereas good senior engineers often remove as much code as they add.
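A rough way to eyeball that delta from a local checkout, assuming origin/main is the merge target (adjust the ref for your repo):

    git diff --shortstat origin/main...HEAD
    # e.g. "37 files changed, 2912 insertions(+), 41 deletions(-)": almost purely additive

It's a blunt heuristic, of course; greenfield features and test-heavy changes are legitimately additive too.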
Etheryte
9 hours ago
In my opinion this is another case where people look at it as a technical problem when it's actually a people problem. If someone does it once, they get a stern message about it. If it happens twice, it gets rejected and sent to their manager. Regardless of how you authored a pull request, you are signing off on it with your name. If it's garbage, then you're responsible.
pfannkuchen
32 minutes ago
Yeah, it doesn't really seem different from people copy/pasting from Stack Overflow without reading it through. This isn't a new thing, though I guess nobody was acting like SO was the second coming, so it's probably happening more now.
tyleo
8 hours ago
I agree and I’m surprised more people don’t get this. Bad behaviors aren’t suddenly okay because AI makes them easy.
If you are wasting time you may be value negative to a business. If you are value negative over the long run you should be let go.
We’re ultimately here to make money, not just pump out characters into text files.
jackblemming
an hour ago
How do you know the net value add isn’t greater with the AI, even if it requires more code review comments (and angrier coworkers)?
travisgriggs
5 hours ago
I largely agree with sibling responses.
BUT...
How do you make code review an educational experience for onboarding/teaching if any bad submission is cut down with due prejudice?
I am happy to work with a junior engineer who is trying; we loop on some silly mistakes, and I pick and choose which battles to fight, balancing building confidence against developing good skills.
But I am not happy to have a junior engineer throw LLM output at me, buoyed by the confidence the sycophantic AI engendered in them, and then have to churn on that. And if you're not in the same office, how do you even hope to sift out which bad parts are which kind?
skydhash
5 hours ago
To mentor requires a mentee. If a junior is not willing to learn (reasoning, coming up with a hypothesis, implementing the concept, and verifying it), then why should a senior bother to teach? As a philosopher once said, a teacher is not meant to give you the solution, but to help you come up with your own.
Macha
8 hours ago
The problem is leadership buy-in. The person throwing LLM slop at GitHub has great metrics when leadership is looking at Cursor usage, lines of code, and PR counts, while the person slowing down to actually read wtf other people are submitting is now so drowned in slop that they have less time to produce on their own. So the execs see the one complaining as "not keeping up with the times".
bloppe
7 hours ago
If leadership is that inept, then this is likely only one of many problems they are creating for the organization. I would be looking for alternative employment ASAP.
GuinansEyebrows
6 hours ago
the issue isn't recognizing malign influence within your current organization... it's an issue throughout the entire industry, and I think what we're all afraid of is that it's becoming more inevitable every day, because we're not the ones who have the final say. the luddites essentially failed, after all, because the wider world was not and is not ready for a discussion about quality versus profit.
bloppe
6 hours ago
A poor quality product can only be profitable if no high quality alternative exists (at a similar price point). Every time that's the case, it's an epic opportunity for anybody with the wherewithal to raise some funding and build that high quality alternative themselves. A dysfunctional industry running on AI slop will not be able to keep you from eating their lunch unless they can achieve some sort of regulatory capture, which would be a separate (political) issue.
Regarding your Luddite reference, I think the cost-vs-quality debate was actually the centerpiece of that incident. Would you rather pay $100 for a T-shirt that's only marginally better than one that costs $10? I certainly would not. People are constantly evaluating cost-quality tradeoffs when making purchasing decisions. The exact ratio of the tradeoff matters. There's always a price point at which something starts (or stops) making sense.
crazygringo
6 hours ago
This, a million times. If you do this three times, that's grounds for firing. You're literally not doing your job and lying that you are.
It's bizarre to me that people want to blame LLMs instead of the employees themselves.
(With open source projects and slop pull requests, it's another story of course.)
Ekaros
9 hours ago
Maybe the process should have an actual two-stage pull request: in the first stage you have to comment on the request yourself and show some test cases against it, and only then does the next person take a look. Not sure if such a flow is even possible with current tools.
oblio
6 hours ago
Build the PR and run tests against it. Supported by all major CI/CD tools.
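For example, a minimal GitHub Actions sketch; the make targets here are placeholders for whatever your project actually uses:

    # .github/workflows/pr-checks.yml
    name: pr-checks
    on: pull_request
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: make build  # placeholder: your build command
          - run: make test   # placeholder: your test command

Branch protection can then require these checks to pass before a human even starts the review.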
lubujackson
7 hours ago
The solve is just rejecting the commit with a "clean this up" message as soon as you spot some BS. Trust is earned!
lezojeda
8 hours ago
What do you do if the manager enables it?
jihadjihad
10 hours ago
> whereas good senior engineers often remove as much code as they add
MisterTea
8 hours ago
"One of my most productive days was throwing away 1000 lines of code." - Ken Thompson
CaptainOfCoit
8 hours ago
> All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
Now I don't do code reviews in large teams anymore, but if I did and something like that happened, I'd allow it exactly once; after that I'd try to get the person fired. Barring that, I'd probably leave, as that sounds like a horrible experience.
bloppe
7 hours ago
Ya, there's not much you can do when leadership is so terrible. If this kind of workflow is genuinely blessed by management, I would just start using Claude for code reviews too. Then when things break and people want to point fingers at the code reviewer, I'd direct them to Claude. If it's good enough to write code without scrutiny, it's good enough to review code without scrutiny.
jakub_g
7 hours ago
I feel like I went through this stage ahead of time, a decade ago, as a junior dev: I started my days by first reviewing the work of a senior dev who was churning out code and breaking things at the speed of light (without LLMs), and then leaving a few dozen comments on the offshore team's pull requests. By midday I'd had enough for the day.
I left that company a few years ago, and now I'm invincible. No LLM can scare me!
smoody07
3 hours ago
This is a broader issue about where we place blame when LLMs are involved. Humans seem to want to parrot the work and take credit when it's correct, while deflecting blame when it's wrong. With a few well-placed lawsuits, this paradigm will shift imho.
aleph_minus_one
9 hours ago
The problem rather is that you still have to stay somewhat agreeable while calling out the bullshit. If you were "socially allowed" to treat colleagues like
> All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
as they really deserve, the problem would disappear really fast.
So the problem you outlined is social rather than the LLMs per se (even though they very often do produce shitty code).
AnimalMuppet
9 hours ago
They should get a clear explanation of the problem and of the team expectations the first time it happens.
If it happens a second time? A stern talk from their manager.
A third time? PIP or fired.
Let your manager be the bad guy. That's part of what they're for.
Your manager won't do that? Then your team is broken in a way you can't fix. Appeal to their manager first, and if that fails, put your resume on the street.
sudahtigabulan
6 hours ago
> If it happens a second time? A stern talk from their manager.
In my experience, the stern talk would probably go to you, for making the problem visible. The manager wouldn't want their own manager to hear of any problems in the team. Makes them look bad, and probably costs them bonuses.
Happened to me often enough. What you described, I would call a lucky exception.
aleph_minus_one
7 hours ago
> Let your manager be the bad guy. That's part of what they're for.
> Your manager won't do that? Then your team is broken in a way you can't fix.
If you apply this standard, then most teams are broken.
01HNNWZ0MV43FF
7 hours ago
"A big enough system is always failing somewhere" - can't remember who said it
zamalek
5 hours ago
> LLM-written ones are almost entirely additive
I have noticed Claude's extreme and obtuse reluctance to delete code, even code that it just wrote that I told it is wrong. For example, it might produce a fn:
    fn foo(bar)

And then I say, no, I actually wanted you to "foo with a frobnitz", so now we get:

    fn foo(bar) // Never called
    fn foo_with_frobnitz(bar)
cookiengineer
7 hours ago
You have two options: burn out because you need to correct every stupid line of code, or... start not giving a damn about code quality and live a happy life while getting paid.
The sane option is to join the cult. Just accept every pull request. Git blame won't show your name anyway. If the CEOs want you to use AI, then tell an AI to do your reviews; even better.
yodsanklai
6 hours ago
> All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
After you've made your colleagues upset by submitting crappy code for review, you start to pay attention.
> LLM-written ones are almost entirely additive,
Unless you notice that code has to be removed and instruct the LLM to do so.
I don't think LLMs really change the dynamics here. "Good programmers" will still submit good code, easy for their colleagues to review, whether it was written with the help of an LLM or not.
000ooo000
4 hours ago
>After you made your colleagues upset submitting crappy code for review, you start to pay attention.
If the only thing keeping you from submitting crappy code is an emotional response from coworkers, you are not a "good programmer", no matter what you instruct your LLM.
ge96
8 hours ago
I'm working on the second project handed to me that was vibe-coded. What annoys me, assuming it even runs, is the sheer number of READMEs; I'm not sure which one to use or whether any of them still apply.
They are usually verbose and include things like "how to run a virtual env for Python".
CjHuber
10 hours ago
I'd say it depends on how coding assistants are used. When they're on autopilot I'd agree, as they don't really take the time to reflect on the work they've done before moving on to the next feature of the spec. But in a collaborative process it's of course different, as you are pointing out things you want implemented another way. Still, I get your point: most PRs you'd flag as AI-generated slop are the ones where someone just ran the assistant on autopilot, was somewhat satisfied with the outcome, and treated the resulting code as a black box.