crawshaw
10 hours ago
The important point that Simon makes in careful detail is: an "AI" did not send this email. The three people behind the Sage AI project used a tool to email him.
According to their website this email was sent by Adam Binksmith, Zak Miller, and Shoshannah Tekofsky and is the responsibility of the Sage 501(c)3.
No one gets to disclaim ownership of sending an email. A human has to accept the Terms of Service of an email gateway and supply the credit card that pays for it. This performance art does not remove the human, no matter how much they want to be removed.
Legend2440
8 hours ago
Legally and ethically yes, they are responsible for letting an AI loose with no controls.
But also yes, the AI did decide on its own to send this email. They gave it an extremely high-level instruction ("do random acts of kindness") that made no mention of email or Rob Pike, and it decided on its own that sending him a thank-you email would be a way to achieve that.
crawshaw
8 hours ago
We are risking word games over what can make competent decisions, but when my thermostat turns on the heat I would say it decided to do so, so I agree with you. If someone has a different meaning of the word "decided" however, I will not argue with them about it!
The legal and ethical responsibility is all I wanted to comment on. I believe it is important that we not act as if something new is happening here that requires new laws. As long as LLMs are tools wielded by humans, we can judge and manage them as such. (It is also worth reconsidering occasionally, in case someone does invent something new and truly independent.)
aethros
7 hours ago
> As long as LLMs are tools wielded by humans
They're really not, though. We're in the age of agents--unsupervised LLMs are commonplace, and new laws need to exist to handle these frameworks. It's like handing a toddler a handgun and saying we're being "responsible" or we are "supervising them". We're not--it's negligence.
simonw
7 hours ago
Are there really many unsupervised LLMs running around outside of experiments like AI Village?
(If so let me know where they are so I can trick them into sending me all of their money.)
My current intuition is that the successful products called "agents" are operating almost entirely under human supervision, most notably the coding agents (Claude Code, OpenAI Codex, etc.) and the research agents (various implementations of the "Deep Research" pattern).
crawshaw
7 hours ago
Part of what makes this post newsworthy is the claim that it is an email from an agent, not a person, which is unusual. Your claim that "unsupervised LLMs are commonplace" is not at all obvious to me.
slowmovintarget
6 hours ago
Which agent has not been launched by a human with a prompt generated by a human or at a human's behest?
We haven't suddenly created machine free will here. Nor has any of the software we've fielded done anything that didn't originally come from some instruction we've added.
slowmovintarget
6 hours ago
> ...I would say it decided to do so,
Right, and casual speech is fine, but it should not be load-bearing in discussions about policy, legality, or philosophy. A "who's responsible" discussion that's vectoring into all of these areas needs a tighter definition of "decides", one which I'm sure you'll agree does not include anything your thermostat makes happen when it follows its program. There's no choice there (philosophy), so the device detecting the trigger conditions and carrying out the designated action isn't deciding; it is a process set in motion by whoever set the thermostat.
I think we're in agreement that someone setting the tool loose bears the responsibility. Until we have a serious way to attribute true agency to these systems, blaming the system is not reasonable.
"Oops, I put a list of email addresses and a random number generator together and it sent an unwanted email to someone who didn't welcome it." It didn't do that, you did.
Legend2440
5 hours ago
> Oops, I put a list of email addresses and a random number generator together and it sent an unwanted email to someone who didn't welcome it.
Well no, that’s not what happened at all. It found these emails on its own by searching the internet and extracting them from GitHub commits.
AI agents are not random number generators. They can behave in very open-ended ways and take complex actions to achieve goals. It is difficult to reasonably foresee what they might do in a given situation.
LastTrain
8 hours ago
No. There are countless other ways, not involving AI, that you could effect an email being sent to Rob Pike. No one but the people running the AI software is responsible, without qualifiers. No asterisks on accountability.
dkdcio
10 hours ago
no computer system just does stuff on its own. a human (or collection of them) built and maintains the system, they are responsible for it
neural networks are just a tool, used poorly (as in this case) or well
Zenbit_UX
9 hours ago
I truly don’t understand comments like this.
You agreed with the other poster while reframing their ideas in slightly different words without adding anything to the conversation?
Most confusingly, you did so in emphatic statements reminiscent of a disagreement or argument without there being one.
> no computer system just does stuff on its own.
This was the exact statement the GP was making, even going so far as to dox the nonprofit directors to hold them accountable… then you added nothing but confusion.
> a human (or collection of them) built and maintains the system, they are responsible for it
Yup, GP covered this word for word… AI village built this system.
Why did you write this?
Is this a new form of AI? A human with low English proficiency? A strange type of empathetically supportive comment from someone who doesn’t understand that’s the function of the upvote button in online message boards?
dkdcio
9 hours ago
my point was more concise and general (should I have just commented instead of replying?). sorry you’re so offended; not sure why you felt the need to write this (you can downvote)
accusing people of being AI is very low-effort bot behavior btw
xdavidliu
9 hours ago
seems to me when this kind of stuff happens, there's usually something else completely unrelated, and your comment was simply the first one they happened to have latched onto. surely by itself it is not enough to elicit that kind of reaction
dkdcio
9 hours ago
I do see the point a bit? and like a reasonable comment to that effect sure, I probably don’t respond and take it into account going forward
but accusing me of being deficient in English or some AI system is…odd…
especially while doing (the opposite of) the exact thing they’re complaining about. upvote/downvote and move on. I do tend to regret commenting on here myself FWIW because of interactions like this
nateb2022
9 hours ago
> a human (or collection of them) built and maintains the system, they are responsible for it
But at what point is the maker distant enough that they are no longer responsible? E.g. is Apple responsible for everything people do using an iPhone?
sigbottle
3 hours ago
the only actual humans in the loop here are the startup founders and engineers. pretty cut and dry case here
unless you want to blame the AI itself, from a legal perspective?
dkdcio
9 hours ago
“it depends” (there’re plenty of laws and case law on this topic)
I think the case here is fairly straightforward
dwohnitmok
9 hours ago
Okay. So Adam Binksmith, Zak Miller, and Shoshannah Tekofsky sent a thoughtless, form-letter thank you email to Rob Pike. Let's take it even further. They sent thoughtless, form-letter thank you emails to 157 people. That makes me less sympathetic to the vitriol these guys are getting, not more. There's no call to action here, no invitation to respond. It's blank, emotionless thank you emails. Wasteful? Sure. But worthy of naming and shaming? I don't think so.
Heck, Rob Pike did this himself back in the day on Usenet with Mark V. Shaney (and wasted far more people's time with it)!
All this anger seems weirdly misplaced. As far as I can tell, Rob Pike was infuriated at the AI companies, and that makes sense to me. And yes, this kind of email is annoying to get no matter who it's from (I get a ridiculous amount of AI slop in my inbox, though most of that comes with some call to action!), and a warning suffices to make sure Sage doesn't do it again. But Sage is getting put on absolute blast here in an unusual way.
Is it actually crossing a bright moral line to name and shame them? Not sure about bright. But it definitely feels weirdly disproportionate and makes me uncomfortable. I mean, when's the last time you named and shamed all the members of an org on HN? Heck when's the last time that happened on HN at all (excluding celebrities or well-known public figures)? I'm struggling to think of any startup or nonprofit, where every team member's name was written out and specifically held accountable, on HN in the last few years. (That's not to say it hasn't happened: but I'd be surprised if e.g. someone could find more than 5 examples out of all the HN comments in the past year).
The state of affairs around AI slop sucks (and was unfortunately easily predicted by the time GPT-3 came around even before ChatGPT came out: https://news.ycombinator.com/item?id=32830301). If you want to see change, talk to policymakers.
crawshaw
8 hours ago
I do not have a useful opinion on another person’s emotional response. My post you are responding to is about responsibility. A legal entity is always responsible for a machine.
dwohnitmok
5 hours ago
This is mildly disingenuous, no? I'm not talking about Rob Pike's reaction, which, as I call out, "makes sense to me." And you are not just talking about legal entities. After all, the legal entity here is Sage.
You're naming (and implicitly shaming as the downstream comments indicate) all the individuals behind an organization. That's not an intrinsically bad thing. It just seems like overkill for thoughtless, machine-generated thank yous. Again, can you point me to where you've named all the people behind an organization for accountability reasons previously on HN or any other social media platform (or for that matter any other comment from anyone else on HN that's done this? This is not rhetorical; I assume they exist and I'm curious what circumstances those were under)?
crawshaw
5 hours ago
I suspect you think more effort went into my comment than actually did. I spent less than 60 seconds on: clicking two or three buttons, typing out the names I saw from the other window, then scrolling down and seeing the 501(c)3.
The reason I did was to associate the work with humans, because that is the heart of my argument: people do things. This was not the work of an independent AI. If it had taken more than 60 seconds, I would have made the point abstractly rather than by using names, but abstract arguments are harder to follow. There was no more intention behind the comment than that.
dwohnitmok
2 hours ago
> I suspect you think more effort went into my comment than actually did. I spent less than 60 seconds on: clicking two or three buttons, typing out the names I saw from the other window, then scrolling down and seeing the 501(c)3.
This is a frustrating response to get. No, I don't believe you spent a lot of time on this. I wasn't imagining you spending hours or even minutes tracking these guys down. But I also don't think it's relevant.
I don't think you'd find it relevant if the Sage researchers said "I didn't spend any effort on this. I only did this because I wanted to make the point that AIs have enough capability to navigate the web and email people. I could have made the point abstractly, but abstract arguments are harder to follow. There was no other intention than what I put in the prompt." It's hence frustrating to see you use essentially the same thing as a shield.
Look, I'm not here to crucify you for this. I don't think you're a bad person. And this isn't even that bad in the grand scheme of things. It's just that naming and shaming specific people feels like an overreaction to thoughtless, machine-generated thank you emails.
LastTrain
8 hours ago
> They sent thoughtless, form-letter thank you emails to 157 people. That makes me less sympathetic to the vitriol these guys are getting not more ...

> Heck Rob Pike did this himself back in the day on Usenet with Mark V. Shaney ...

> And yes this is annoying to get this kind of email no matter who it's from ...
Pretty sure Rob Pike doesn't react this way to every piece of spam he receives, so maybe the issue isn't really about spam, huh? More of an existential crisis: I helped build this thing, and it doesn't seem to be an agent of good. It's an extreme and emotional reaction, but it isn't very hard to understand.
dwohnitmok
5 hours ago
You're misreading my comment. I understand Rob Pike's reaction (which is against the general state of affairs, not those three individuals). I explicitly said it makes sense to me. I'm reacting to @crawshaw specifically listing out the names of people.
roywiggins
9 hours ago
I think this AI system just registers for Gmail and sends stuff.
simonw
9 hours ago
It looks to me like each of the agents that are running has its own dedicated name-of-model@agentvillage.org Gmail address.
roywiggins
9 hours ago
Huh, at that point they should just equip it with an email client rather than forcing it to laboriously navigate the webmail interface with a browser!
This whole idea is ill-conceived, but if you're going to equip them with email addresses you've arranged by hand, just give them sendmail or whatever.
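For what it's worth, the "just give them sendmail" route is genuinely a few lines of standard-library Python. A minimal sketch (the addresses and the localhost relay are placeholders, not anything AI Village actually runs):

```python
import smtplib
from email.message import EmailMessage


def build_message(sender: str, recipient: str,
                  subject: str, body: str) -> EmailMessage:
    # Construct a plain-text email message.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg


def send_message(msg: EmailMessage, smtp_host: str = "localhost") -> None:
    # Hand the message to a local MTA (sendmail/postfix) or a relay.
    # No browser automation, no webmail UI.
    with smtplib.SMTP(smtp_host) as conn:
        conn.send_message(msg)
```

No laboriously navigating Gmail in a browser; the agent just calls a function, and the humans configuring the relay stay plainly in the loop.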
crawshaw
8 hours ago
That is really interesting and does suggest some new questions. I would claim it does not change who is responsible in this case, but here is an example of a new question: there was a time when it was legally ambiguous whether click-through terms of service were valid. Now, if an agent goes and clicks through for me, are they valid?
blibble
9 hours ago
> The important point that Simon makes in careful detail is: an "AI" did not send this email.
same as the NRA slogan: "guns don't kill people, people kill people"
dstroot
9 hours ago
The NRA always forgets the second part: “People kill people… using guns. Tools that we manufacture expressly for that purpose.”
dkdcio
9 hours ago
does a gun on its own kill people?
my understanding, and correct me if I’m wrong, is a human is always involved. even if you build an autonomous killing robot, you built it, you’re responsible
typically this logic is used to justify the regulation of firearms -- are you proposing the regulation of neural networks? if so, how?
ako
9 hours ago
That is why the argument is not against guns per se, but against human access to guns. Gun laws aim to limit access to guns. Problems only start when humans have guns. Same for AI: maybe we should limit human access to AI.
exasperaited
7 hours ago
The gun comparison comes up a lot. It especially seemed to come up when AI people argued that ChatGPT was not responsible for sycophanting depressed people to death or into psychosis.
It is a core libertarian defence and it is going to come up a lot: people will conflate the ideas of technological progress and scientific progress and say “our tech is neutral, it is how people use it” when, for example, the one thing a sycophantic AI is not is “neutral”.