Negitivefrags
19 hours ago
At my company I just tell people “You have to stand behind your work”
And in practice that means that I won’t take “The AI did it” as an excuse. You have to stand behind the work you did even if you used AI to help.
I neither tell people to use AI, nor tell them not to use it, and in practice people have not been using AI much for whatever that is worth.
themacguffinman
19 hours ago
The difference I see between a company dealing with this as opposed to an open source community dealing with this is that the company can fire employees as a reactive punishment. Drive-by open source contributions cost very little to lob over and can come from a wide variety of people you don't have much leverage over, so maintainers end up making these specific policies to prevent them from having to react to the thousandth person who used "The AI did it" as an excuse.
osigurdson
17 hours ago
When you shout "use AI or else!" from a megaphone, don't expect everyone to interpret it perfectly. Especially when you didn't actually understand what you were saying in the first place.
EE84M3i
18 hours ago
>I neither tell people to use AI, nor tell them not to use it, and in practice people have not been using AI much for whatever that is worth.
I find this bit confusing. Do you provide enterprise contracts for AI tools? Or do you let employees use their personal accounts with company data? It seems all companies have to be managing this somehow at this point.
bjackman
14 hours ago
Shouldn't this go without saying though? At some point someone has to review the code and they see a human name as the sender of the PR. If that person sees the work is bad, isn't it just completely unambiguous that the person whose name is on the PR is responsible for that? If someone responded "but this is AI generated" I would feel justified just responding "it doesn't matter" and passing the review back again.
And the rest (what's in the LLVM policy) should also fall out pretty naturally from this? If someone sends me code for review and I get the feeling they haven't read it themselves, I'll say "I'm not reviewing this, and I won't review any more of your PRs unless you promise you've reviewed them yourself first".
The fact that people seem to need to establish these things as an explicit policy is a little concerning to me. (Not that it's a bad idea at all; I'm just worried that there was a need.)
lexicality
14 hours ago
You would think it's common sense, but I've received PRs that the author didn't understand, and when questioned they told me that the AI knows more about X than they do, so they trust its judgement.
A terrifying number of people seem to think that the damn thing is magic and infallible.
jeroenhd
18 hours ago
Some people who just want to polish their resume will feed any questions/feedback back into the AI that generated their slop. That goes back and forth a few times until the reviewing side learns that the code authors have no idea what they're doing. An LLM can easily pretend to "stand behind its work" if you tell it to.
A company can just fire someone who doesn't know what they're doing, or at least take some kind of measure against their efforts. On a public project, these people can be a death by a thousand cuts.
The best example of this is the automated "CVE" reports you find on bug bounty websites these days.
i2talics
19 hours ago
What good does it really do me if they "stand behind their work"? Does that save me any time drudging through the code? No, it just gives me a script for reprimanding. I don't want to reprimand. I want to review code that was given to me in good faith.
At work I once had to review some code that, in the same file, declared a "FooBar" struct and a "BarFoo" struct, both with identical field names and types, complete with boilerplate to convert between them. The split served no purpose whatsoever; it was probably just the result of telling an agent to iterate until the code compiled and then shipping it off without actually reading what it had done. Yelling at them that they should "stand behind their work" doesn't give me back the time I lost trying to figure out why on earth the code was written this way. It just makes me into an asshole.
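To make that concrete, here's a hypothetical Go sketch of the kind of duplication I mean; the names and fields are made up for illustration, not taken from the actual code:

    // Hypothetical illustration only: two structs with identical fields,
    // plus conversion boilerplate that serves no purpose.
    package example

    // FooBar is the struct the rest of the code actually uses.
    type FooBar struct {
        ID   int
        Name string
    }

    // BarFoo duplicates FooBar field for field, for no apparent reason.
    type BarFoo struct {
        ID   int
        Name string
    }

    // ToBarFoo copies a FooBar into an identical BarFoo.
    func ToBarFoo(f FooBar) BarFoo {
        return BarFoo{ID: f.ID, Name: f.Name}
    }

    // ToFooBar copies it back, completing the pointless round trip.
    func ToFooBar(b BarFoo) FooBar {
        return FooBar{ID: b.ID, Name: b.Name}
    }

A reviewer has to read all of that just to confirm it does nothing.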
sb8244
19 hours ago
It adds accountability, which is unfortunately something that ends up lacking in practice.
If you write bad code that creates a bug, I expect you to own it when possible. If you can't and the root cause is bad code, then we probably need to have a chat about that.
Of course the goal isn't to be a jerk. Lots of normal bugs make it through in reality. But if the root cause is true negligence, then there's a problem there.
AI makes negligence much easier to achieve.
nineteen999
18 hours ago
If you asked Claude to review the code, it would probably have pointed out the duplication pretty quickly. And I think this is the thing: if we are going to manage programmers who use LLMs to write code, and have to review their code, reviewers aren't going to be able to do it for much longer without resorting to LLM assistance themselves to get the job done.
It's not going to be enough to say "I don't use LLMs".
nradov
19 hours ago
Yelling at incompetent or lazy co-workers isn't your responsibility, it's your manager's. Escalate the issue and let them be the asshole. And if they don't handle it, well it's time to look for a new job.
danaris
10 hours ago
...what makes you think i2talics isn't the manager in this situation??
skeeter2020
19 hours ago
>> Yelling at incompetent or lazy co-workers isn't your responsibility, it's your manager's
First: Somebody hired these people, so are they really "lazy and incompetent"?
Second: There is no one whose "job" is to yell at incompetent or lazy workers.
benhurmarcel
6 hours ago
That only works if those colleagues care about what you think of them.
darth_avocado
19 hours ago
> At my company I just tell people “You have to stand behind your work”
Since when has that not been the bare minimum? Even before AI existed, and even if you didn't work in programming at all, you had to do that as a bare minimum. Even if you use a toaster and your company guidelines say to toast every sandwich for 20 seconds, if following every step as per training results in a lump of charcoal instead of bread, you can’t serve it up to the customer. At the end of the day, you make the sandwich, so you’re responsible for making it correctly.
Using AI as a scapegoat for sloppy and lazy work needs to be unacceptable.
Negitivefrags
19 hours ago
Of course it’s the minimum standard, and it’s obvious if you view AI as a tool that a human uses.
But some people view it as a separate entity that writes code for you. And if you view AI like that, then “The AI did it” becomes an excuse that they use.
fwipsy
19 hours ago
Bad example. If the toaster carbonized bread in 20 seconds, it's defective, likely unsafe, possibly violates physics, and certainly above the pay grade of a sandwich-pusher.
Taking responsibility for outcomes is a powerful paradigm but I refuse to be held responsible for things that are genuinely beyond my power to change.
This is tangential to the AI discussion though.
darth_avocado
19 hours ago
> If the toaster carbonized bread in 20 seconds it's defective, likely unsafe, possibly violates physics, certainly above the pay grade of a sandwich-pusher.
If the toaster is defective, then not using it, figuring out how to use it if it’s still usable, or reporting it as defective and getting it replaced are all well within the pay grade of a sandwich pusher, and all part of their responsibilities.
And you’re still responsible for the sandwich. You can’t throw up your arms and say “the toaster did it”. And that’s where it’s not tangential to the AI discussion.
A malfunctioning toaster is beyond your control, but whether you serve up the burnt sandwich is absolutely within your control, and that is what you will be, and should be, held responsible for.
fwipsy
9 hours ago
That's exactly what I said, and it's at odds with your last comment. You take responsibility for making the sandwich if possible. If not, you're not responsible for the sandwich, but for refunding the customer or offering them something else.
If I'm required to write code using AI without being given time to verify it, then it's also not fair for me to be held responsible for the quality. Agency matters. I will not take responsibility for things that I'm not given the power to address. Of course if I choose to write code with an AI and it comes out badly, that's within my responsibilities.
It's a bad example because typically "whether to toast the sandwich" depends on customer preference (imposed externally) but "whether to use AI" is still mostly up to the worker.
skydhash
8 hours ago
That’s why you make a written protest. It won’t save you from a layoff, but it will keep you from being made into a scapegoat.
dullcrisp
19 hours ago
No it’s not. If you burn a sandwich, you make a new sandwich. Sandwiches don’t abide by the laws of physics. If you call a physicist and tell them you burnt your sandwich, they won’t care.
atoav
19 hours ago
I think it depends on the pay. You pay below a living wage? Better live with your sla... ah, employees... serving charcoal. You pay them well above a living wage? Now we start to get into "they should care" territory.
anonzzzies
19 hours ago
But "AI did it" is not immediate you are out thing? If you cannot explain why something is made the way you committed to git, we can just replace you with AI right?
EagnaIonat
19 hours ago
> we can just replace you with AI right?
Accountability and IP protection are probably the only things saving someone in that situation.
tjr
19 hours ago
Why stop there? We can replace git with AI too!
bitwize
19 hours ago
The smartest and most sensible response.
I'm dreading the day the hammer falls and AI-use metrics are implemented for all developers at my job.
locusofself
19 hours ago
It's already happened at some very big tech companies
skeeter2020
18 hours ago
One of the reasons I left a senior management position at my previous 500-person shop was that this was being done, but not even accurately. Copilot usage via the IDE wasn't being tracked; just the various other usage paths.
It doesn't take long for shitty small companies to copy the shitty policies and procedures of successful big companies. It seems even intelligent executives can't get correlation and causation right.