crawshaw
a month ago
The important point that Simon makes in careful detail is: an "AI" did not send this email. The three people behind the Sage AI project used a tool to email him.
According to their website this email was sent by Adam Binksmith, Zak Miller, and Shoshannah Tekofsky and is the responsibility of the Sage 501(c)(3).
No one gets to disclaim ownership of sending an email. A human has to accept an email gateway's Terms of Service, and a human's credit card pays for the gateway. This performance art does not remove the human, no matter how much they want to be removed.
Legend2440
a month ago
Legally and ethically yes, they are responsible for letting an AI loose with no controls.
But also yes, the AI did decide on its own to send this email. They gave it an extremely high-level instruction ("do random acts of kindness") that made no mention of email or Rob Pike, and it decided on its own that sending him a thank-you email would be a way to achieve that.
crawshaw
a month ago
We are risking word games over what can make competent decisions, but when my thermostat turns on the heat I would say it decided to do so, so I agree with you. If someone has a different meaning of the word "decided" however, I will not argue with them about it!
The legal and ethical responsibility is all I wanted to comment on. I believe it is important we do not assume something new is happening here that requires new laws to be created. As long as LLMs are tools wielded by humans we can judge and manage them as such. (It is also worth reconsidering occasionally, in case someone does invent something new and truly independent.)
slowmovintarget
a month ago
> ...I would say it decided to do so,
Right, and casual speech is fine, but should not be load-bearing in discussions about policy, legality, or philosophy. A "who's responsible" discussion that's vectoring into all of these areas needs a tighter definition of "decides," one that I'm sure you'll agree does not include anything your thermostat makes happen when it follows its program. There's no choice there (philosophy), so the device detecting the trigger conditions and carrying out the designated action isn't deciding; it is a process set in motion by whoever set the thermostat.
I think we're in agreement that someone setting the tool loose bears the responsibility. Until we have a serious way to attribute true agency to these systems, blaming the system is not reasonable.
"Oops, I put a list of email addresses and a random number generator together and it sent an unwanted email to someone who didn't welcome it." It didn't do that, you did.
Legend2440
a month ago
> Oops, I put a list of email addresses and a random number generator together and it sent an unwanted email to someone who didn't welcome it.
Well no, that’s not what happened at all. It found these email addresses on its own by searching the internet and extracting them from GitHub commits.
AI agents are not random number generators. They can behave in very open-ended ways and take complex actions to achieve goals. It is difficult to reasonably foresee what they might do in a given situation.
aethros
a month ago
> As long as LLMs are tools wielded by humans
They're really not, though. We're in the age of agents--unsupervised LLMs are commonplace, and new laws need to exist to handle these frameworks. It's like handing a toddler a handgun and saying we're being "responsible" or we are "supervising them". We're not; it's negligence.
simonw
a month ago
Are there really many unsupervised LLMs running around outside of experiments like AI Village?
(If so let me know where they are so I can trick them into sending me all of their money.)
My current intuition is that the successful products called "agents" are operating almost entirely under human supervision - most notably the coding agents (Claude Code, OpenAI Codex etc) and the research agents (various implementations of the "Deep Research" pattern).
Corrado
a month ago
> Are there really many unsupervised LLMs running around outside of experiments like AI Village?
How would we know? Isn't this like trying to prove a negative? The rise of AI "bots" seems to be a common experience on the Internet. I think we can agree that this is a problem on many social media sites and it seems to be getting worse.
As for being under "human supervision", at what point does the abstraction remove the human from the equation? Sure, when a human runs "exploit.exe" the human is in complete control. When a human tells Alexa to "open the garage door" they are still in control, but it is lessened somewhat through the indirection. When a human schedules a process that runs a program which tells an agent to "perform random acts of kindness", the human has very little knowledge of what's going on. In the future I can see the human being less and less directly involved, and I think that's where the problem lies.
I can equate this to a CEO being ultimately responsible for what their company does. This is the whole reason behind the Sarbanes-Oxley law(s); you can't declare that you aren't responsible because you didn't know what was going on. Maybe we need something similar for AI "agents".
ben_w
a month ago
> Are there really many unsupervised LLMs running around outside of experiments like AI Village?
My intuition says yes, on the basis of having seen precursors. 20 years ago, one or both of Amazon and eBay bought Google ads for all nouns, so you'd have something like "Antimatter, buy it cheap on eBay" which is just silly fun, but also "slaves" and "women" which is how I know this lacked any real supervision.
Just over ten years ago, someone got in the news for a similar issue with machine generated variations of "Keep Calm and Carry On" T-shirts that they obviously had not manually checked.
Last few years, there's been lawyers getting in trouble for letting LLMs do their work for them.
The question is, can you spot them before they get in the news by having spent all their owner's money?
crawshaw
a month ago
Part of what makes this post newsworthy is the claim it is an email from an agent, not a person, which is unusual. Your claim that "unsupervised LLM's are commonplace" is not at all obvious to me.
slowmovintarget
a month ago
Which agent has not been launched by a human with a prompt generated by a human or at a human's behest?
We haven't suddenly created machine free will here. Nor has any of the software we've fielded done anything that didn't originally come from some instruction we've added.
computerthings
a month ago
[dead]
LastTrain
a month ago
No. There are countless other ways, not involving AI, that you could effect an email being sent to Rob Pike. No one but the people who are running the AI software is responsible, without qualifiers. No asterisks on accountability.
dwohnitmok
a month ago
Okay. So Adam Binksmith, Zak Miller, and Shoshannah Tekofsky sent a thoughtless, form-letter thank you email to Rob Pike. Let's take it even further. They sent thoughtless, form-letter thank you emails to 157 people. That makes me less sympathetic to the vitriol these guys are getting, not more. There's no call to action here, no invitation to respond. It's blank, emotionless thank you emails. Wasteful? Sure. But worthy of naming and shaming? I don't think so.
Heck, Rob Pike did this himself back in the day on Usenet with Mark V. Shaney (and wasted far more people's time with it)!
This whole anger seems weirdly misplaced. As far as I can tell, Rob Pike was infuriated at the AI companies, and that makes sense to me. And yes, it is annoying to get this kind of email no matter who it's from (I get a ridiculous amount of AI slop in my inbox, but most of that comes with some call to action!), and a warning suffices to make sure Sage doesn't do it again. But Sage is getting put on absolute blast here in an unusual way.
Is it actually crossing a bright moral line to name and shame them? Not sure about bright. But it definitely feels weirdly disproportionate and makes me uncomfortable. I mean, when's the last time you named and shamed all the members of an org on HN? Heck when's the last time that happened on HN at all (excluding celebrities or well-known public figures)? I'm struggling to think of any startup or nonprofit, where every team member's name was written out and specifically held accountable, on HN in the last few years. (That's not to say it hasn't happened: but I'd be surprised if e.g. someone could find more than 5 examples out of all the HN comments in the past year).
The state of affairs around AI slop sucks (and was unfortunately easily predicted by the time GPT-3 came around even before ChatGPT came out: https://news.ycombinator.com/item?id=32830301). If you want to see change, talk to policymakers.
crawshaw
a month ago
I do not have a useful opinion on another person’s emotional response. My post you are responding to is about responsibility. A legal entity is always responsible for a machine.
dwohnitmok
a month ago
This is mildly disingenuous, no? I'm not talking about Rob Pike's reaction, which, as I call out, "makes sense to me." And you are not just talking about legal entities. After all, the legal entity here is Sage.
You're naming (and implicitly shaming as the downstream comments indicate) all the individuals behind an organization. That's not an intrinsically bad thing. It just seems like overkill for thoughtless, machine-generated thank yous. Again, can you point me to where you've named all the people behind an organization for accountability reasons previously on HN or any other social media platform (or for that matter any other comment from anyone else on HN that's done this? This is not rhetorical; I assume they exist and I'm curious what circumstances those were under)?
crawshaw
a month ago
I suspect you think more effort went into my comment than actually did. I spent less than 60 seconds on it: clicking two or three buttons, typing out the names I saw from the other window, then scrolling down and seeing the 501(c)(3).
The reason I did was to associate the work with humans, because that is the heart of my argument: people do things. This was not the work of an independent AI. If it took more than 60 seconds, I would have made the point abstractly rather than by using names, but abstract arguments are harder to follow. There was no more intention behind the comment than that.
dwohnitmok
a month ago
> I suspect you think more effort went into my comment than actually did. I spent less than 60 seconds on: clicking two or three buttons, typing out the names I saw from the other window, then scrolling down and seeing the 501(c)3.
This is a bit of a frustrating response to get. No, I don't believe you spent a lot of time on this. I wasn't imagining you spending hours or even minutes tracking these guys down. But I also don't think it's relevant.
I don't think you'd find it relevant if the Sage researchers said "I didn't spend any effort on this. I only did this because I wanted to make the point that AIs have enough capability to navigate the web and email people. I could have made the point abstractly, but abstract arguments are harder to follow. There was no other intention than what I put in the prompt." It's hence frustrating to see you use essentially the same thing as a shield.
Look, I'm not here to crucify you for this. I don't think you're a bad person. And this isn't even that bad in the grand scheme of things. It's just that naming and shaming specific people feels like an overreaction to thoughtless, machine-generated thank you emails.
crawshaw
a month ago
I went for a walk to think about your position. I do not think you are wrong. If you refused to name a person in a situation like this, I would never try to convince you otherwise. That is why it is hard for me to make a case to you here, because I do not hold the opposing position. But I also find your argument that I should have not done so unconvincing. Both seem like reasonable choices to me.
I have two tests for this. First: what harm does my comment here cause? Perhaps some mild embarrassment? It could not realistically do more.
Second: if it were me, would I mind it being done to me? No. It is not a big deal. It is public feedback about an insulting computer program, no one was injured, no safety-critical system compromised. I have been called out for mistakes before, in classes, on mailing lists, on forums, I learn and try to do better. The only times I have resented it are when I think the complaint is wrong. (And with age, I would say the only correct thing to do then is, after taking the time to consider it carefully, clearly respond to feedback you disagree with.)
The only thing I can draw from thinking through this is that, because the authors of the program probably didn't see my comment, it was not effective, and so I would have been better off emailing them. But that is a statement about effectiveness, not rightness. I would be more than happy doing it in a group in person, at a party or in a classroom. Mistakes do not have to be handled privately.
I am sorry we disagree about this. If you think I am missing anything I am open to thinking about it more.
dwohnitmok
a month ago
> I am sorry we disagree about this. If you think I am missing anything I am open to thinking about it more.
I am sorry I'm responding to this so late. I very much appreciate the dialogue you're extending here! I don't think I'll have the time to give you the response you deserve, but I'll try to sketch out some of the ideas.
This is all a matter of degree. Calling individuals out on mailing lists, in internal company comms, or in class still feels different than going and listing all an org's members on a website (even more so than e.g. just listing the CEO).
There are a couple of factors at play here, but mainly it's the combination of:
1. The overall AI trend is a large, impactful thing, but this was a small thing
2. Just listing the names without any explanation other than "they're responsible"
This just pattern-matches, too closely for my liking, to types of online behavior I find quite damaging for discourse.
LastTrain
a month ago
> They sent thoughtless, form-letter thank you emails to 157 people. That makes me less sympathetic to the vitriol these guys are getting not more ...
> Heck Rob Pike did this himself back in the day on Usenet with Mark V. Shaney ...
> And yes this is annoying to get this kind of email no matter who it's from ...
Pretty sure Rob Pike doesn't react this way to every piece of spam he receives, so maybe the issue isn't really about spam, huh? More of an existential crisis: I helped build this thing that doesn't seem to be an agent of good. It's an extreme and emotional reaction, but it isn't very hard to understand.
dwohnitmok
a month ago
You're misreading my comment. I understand Rob Pike's reaction (which is against the general state of affairs, not those three individuals). I explicitly said it makes sense to me. I'm reacting to @crawshaw specifically listing out the names of people.
dkdcio
a month ago
no computer system just does stuff on its own. a human (or collection of them) built and maintains the system, they are responsible for it
neural networks are just a tool, used poorly (as in this case) or well
Zenbit_UX
a month ago
I truly don’t understand comments like this.
You agreed with the other poster while reframing their ideas in slightly different words without adding anything to the conversation?
Most confusingly you did so in emphatic statements reminiscent of a disagreement or argument without there being one
> no computer system just does stuff on its own.
This was the exact statement the GP was making, even going so far as to dox the nonprofit directors to hold them accountable… then you added nothing but confusion.
> a human (or collection of them) built and maintains the system, they are responsible for it
Yup, GP covered this word for word… AI village built this system.
Why did you write this?
Is this a new form of AI? A human with low English proficiency? A strange type of empathetically supportive comment from someone who doesn’t understand that’s the function of the upvote button in online message boards?
user
a month ago
dkdcio
a month ago
my point was more concise and general (should I have just commented instead of replying?). sorry you’re so offended; not sure why you felt the need to write this (you can just downvote)
accusing people of being AI is very low-effort bot behavior btw
xdavidliu
a month ago
seems to me when this kind of stuff happens, there's usually something else completely unrelated, and your comment was simply the first one they happened to have latched onto. surely by itself it is not enough to elicit that kind of reaction
dkdcio
a month ago
I do see the point a bit? and like a reasonable comment to that effect sure, I probably don’t respond and take it into account going forward
but accusing me of being deficient in English or some AI system is…odd…
especially while doing (the opposite of) the exact thing they’re complaining about. upvote/downvote and move on. I do tend to regret commenting on here myself FWIW because of interactions like this
nandomrumber
a month ago
Some people seem to think that every thing they say or write has to somehow be an argument or counterpoint, or find something to correct, or point out a flaw.
So when they see a piece of writing that is in agreement and concisely affirms the points being made, they don’t understand why they never get invited to parties.
nateb2022
a month ago
> a human (or collection of them) built and maintains the system, they are responsible for it
But at what point is the maker distant enough that they are no longer responsible? E.g. is Apple responsible for everything people do using an iPhone?
dkdcio
a month ago
“it depends” (there are plenty of laws and case law on this topic)
I think the case here is fairly straightforward
sigbottle
a month ago
the only actual humans in the loop here are the startup founders and engineers. pretty cut and dry case here
unless you want to blame the AI itself, from a legal perspective?
roywiggins
a month ago
I think this AI system just registers for Gmail and sends stuff.
simonw
a month ago
It looks to me like each of the agents that are running has its own dedicated name-of-model@agentvillage.org Gmail address.
roywiggins
a month ago
Huh, at that point they should just equip it with an email client rather than forcing it to laboriously navigate the webmail interface with a browser!
This whole idea is ill-conceived, but if you're going to equip them with email addresses you've arranged by hand, just give them sendmail or whatever.
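A minimal sketch of what "just give them sendmail or whatever" looks like in Python (the addresses are illustrative, and the actual send, which would need a local MTA, is commented out):

```python
from email.message import EmailMessage
import smtplib  # only needed for the (commented-out) send step

def build_message(sender: str, recipient: str,
                  subject: str, body: str) -> EmailMessage:
    # Construct an RFC 5322 message with explicit headers, so the
    # sender is plainly identified instead of hidden behind a webmail UI.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

msg = build_message(
    "claude@agentvillage.org",  # illustrative, per the naming scheme above
    "recipient@example.com",
    "Thank you",
    "Thanks for your years of work on open source.",
)
print(msg["Subject"])

# The send itself is one call to a local MTA (e.g. sendmail/postfix):
# with smtplib.SMTP("localhost") as s:
#     s.send_message(msg)
```

No browser automation, no Gmail account, and the headers make the provenance explicit.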
Corrado
a month ago
I think the whole point of this was to see if the "agents" could act like a real human, and real humans use Gmail much more frequently than sendmail. Sage even commented that they had updated their prompt to tell the agents not to send email, rather than just removing the Gmail component, for fear that the agent would open its own Gmail (or Y! mail, etc.) account and send mail on its own.
crawshaw
a month ago
That is really interesting and does suggest some new questions. I would claim it does not change who is responsible in this case, but here is an example of a new question: there was a time when it was legally ambiguous whether click-through terms of service were valid. Now, if an agent goes and clicks through for me, are they valid?
blibble
a month ago
> The important point that Simon makes in careful detail is: an "AI" did not send this email.
same as the NRA slogan: "guns don't kill people, people kill people"
dstroot
a month ago
The NRA always forgets the second part: “People kill people… using guns. Tools that we manufacture expressly for that purpose.”
saidnooneever
a month ago
[flagged]
vineyardmike
a month ago
Guns make killing faster and easier than the alternatives, and that is their sole explicit purpose. They enable short, impulsive actions to be lethal before you and those around you can think through your behavior.
Comparatively, the people in this article are using tools which have a variety of benign purposes to do something bad.
Similarly though, they probably wouldn’t have gone through with it if they had to set up an email server on hardware they bought and then manually installed in a colo and then set up a DNS server and a GPU for the neural network they trained and hosted themselves.
LunaSea
a month ago
The statistics of non-US countries disprove your theory.
saidnooneever
a month ago
This could simply be a difference in culture or education. You might not believe it, but in the EU people also get shot and murdered every day: run over with cars, stabbed, strangled, raped, and so on. It's just not as present in mass media (a difference in culture). The vast majority of such cases don't even make it into local news, let alone national or international.
LunaSea
a month ago
> but in EU also every day ppl get shot and murdered
Sure, let's look at the numbers together :
- Homicide rate in the EU in 2023: ~1.3 to 1.4 per 100,000 [1]
- Homicide rate in the US in 2023: 5.7 per 100,000 [2]
So, I'm 4x more likely to get killed in the US than in the EU.
> the vast majority of such cases doesnt make it even into local news let alone national or international
Do you really believe that murders don't get published in the media in the EU? This is a ridiculous assertion. Source please!
[1] https://ec.europa.eu/eurostat/statistics-explained/SEPDF/cac...
[2] https://www.statista.com/statistics/191223/reported-murder-a...
xxs
a month ago
significantly less likely in cases of mass shootings, e.g. schools.
ako
a month ago
That is why the argument is not against guns per se, but against human access to guns. Gun laws aim to limit access to guns. Problems only start when humans have guns. Same for AI: maybe we should limit human access to AI.
pensatoio
a month ago
I think it's important to agree with you and point out the obvious, again, in this thread. The people behind Sage are responsible (or, shall I say, irresponsible.)
The attitude towards AI is much more mixed than the attitude towards guns, so it should be even easier to hammer this home.
Adam Binksmith, Zak Miller, and Shoshannah Tekofsky are _bad_ people who are intentionally doing something objectively malicious under the guise of charity.
dkdcio
a month ago
does a gun on its own kill people?
my understanding, and correct me if I’m wrong, is a human is always involved. even if you build an autonomous killing robot, you built it, you’re responsible
typically this logic is used to justify the regulation of firearms -- are you proposing the regulation of neural networks? if so, how?
exasperaited
a month ago
The gun comparison comes up a lot. It especially seemed to come up when AI people argued that ChatGPT was not responsible for sycophanting depressed people to death or into psychosis.
It is a core libertarian defence and it is going to come up a lot: people will conflate the ideas of technological progress and scientific progress and say “our tech is neutral, it is how people use it” when, for example, the one thing a sycophantic AI is not is “neutral”.
gorgoiler
a month ago
Let’s not turn this into a witch hunt please.
While you are technically able to call out their full names like this, erring on the side of not looking like doxxing would be a safe bet, especially at this time of year. You could, after all, post their LinkedIn accounts and email addresses, but with some lines it's better not to play "how close can I get without crossing it?"
izacus
a month ago
Making people accountable for their actions is NOT a witch hunt.
It's horrible to even propose that people are absolved of the consequences of their decision-making just because they filtered it through software.
j-pb
a month ago
Oh no, they sent him a "thank you for all the hard work you've done" email. How could they! Off to prison with these monsters; they need to be held responsible for all the suffering and pain they've caused.
davorak
a month ago
> "thank you for all the hard work you've done"
Who decided to say "thank you" to Rob Pike in this case? I am not sure there is anyone, so in my mind there is no real "thank you" here. As far as I can tell it is spam. Maybe spam that tries to deceive the receiver into thinking there is a "thank you", to lure them into interacting with the AI? "All conversations with this AI system are published publicly online by default," after all, and Rob Pike's interactions would be good PR for the company.
j-pb
a month ago
Well is it the humans responsibility and action, or is it not? You can't have it both ways.
You also obviously didn't read the mail, because it contains explicit info that this was sent by Claude on behalf of AI Village.
It's at worst cheesy. But people get tons of truly nefarious spam and fraud emails every day without any kind of meltdown. An AI wishes you a nice day, and suddenly it's all pitchforks and torches.
Stop clutching your pearls ffs.
davorak
a month ago
> Well is it the humans responsibility and action, or is it not? You can't have it both ways.
Seems to contradict your later:
> But an AI wishes you a nice day, suddenly it's all pitchforks and torches.
Are you attributing the 'thank you' sentiment to the humans or the ai?
crawshaw
a month ago
I certainly have no intention of doing anyone harm. I went to their website and clicked three times to get the names of the people and organization behind it, there is a prominent About page with profile links. If an admin considers this inappropriate please remove the names from my post.
Vegenoid
a month ago
Are they not proud of their work and publicly displaying their names as the authors of the project?
da_grift_shift
a month ago
Have you considered that the sites associated with this project have a very prominent meet-the-team page and that every AI Village blogpost is signed off by a member of said team? Can you explain what you're seeing in the parent comment that's private?
EDIT: Public response: https://x.com/adambinksmith/status/2004651906019541396
gorgoiler
a month ago
It’s not that they are private people, it’s that I feel uneasy when a discussion about the ethics and morality drifts towards these-are-their-names and here-are-some-pitchforks.
We can all go find out their names and dust off our own pitchforks. I don’t see any value in encouraging this behaviour on a site like this.
riwsky
a month ago
Dude, what? The fuckers set up an automated system that found people’s private email addresses and blasted them with unwanted emails. The outrage is exactly that they built a line-crossing machine. Your moralizing is incoherent.
Ukv
a month ago
The goals (initially "raise as much money for charity as you can", currently "Do random acts of kindness") don't seem ill-intentioned, particularly since it was somewhat successful at the first ($1481 for Helen Keller International and $503 for the Malaria Consortium). To my understanding it also didn't send more than one email per person.
I think "these emails are annoying, stop it sending them" is entirely fair, but a lot of the hate/anger, analogizing what they're doing to rape, etc. seems disproportionate.
ath3nd
a month ago
Lets turn this into an accountability thing please.
The same way we name and shame petrol and plastic CEOs whose trash products flood our environment, we should be able to shame slop makers. Digital trash is still trash.