akutlay
a month ago
It seems X, with Grok, became the first large LLM provider to weaken its content moderation rules. If people don't react strongly enough, we will likely lose the first line of defense for keeping AI safe for everyone. Large providers need to act responsibly, as the barrier to entry is practically zero.
wolvoleo
a month ago
True, CSAM should be blocked by all means. That's clear as day.
However, I think that for Europe the regular sexual-content moderation (even in text chat) is way over the top. I know the US is very prudish, but most people here aren't.
If you mention anything erotic to a mainstream AI, it immediately shuts down, which is super annoying because it blocks those discussion topics entirely. It feels a bit like foreign morals are being forced upon us.
Limits on topics that aren't illegal should be selectable by the user, not hard-coded to the most restrictive standards, similar to the way I can switch off SafeSearch in Google.
However CSAM generation should obviously be blocked and it's very illegal here too.
sam_lowry_
a month ago
Funnily enough, Mistral is as censored as ChatGPT.
You have to search Hugging Face for role-playing models to get a decent level of erotic content, and even that doesn't guarantee a pleasant experience.
chrisjj
a month ago
Some misunderstanding here. This article makes absolutely no mention of CSAM. The objection is to "sexual content on X without people’s consent".
johneth
a month ago
It's the nonconsensual generation of sexual content depicting real people that breaks the law, plus things like CSAM generation, which is obviously illegal.
> It feels a bit like foreign morals are being forced upon us.
Welcome to the rest of the world, where US morals have been forced upon us for decades. You should probably get used to it.
judahmeek
a month ago
Whether it was the "first" definitely depends on your standards & focus: https://cloudsecurityalliance.org/blog/2025/02/19/deepseek-r...
zajio1am
a month ago
This is already possible: just download an open-weight model and run it locally. It seems absurd to me to enforce content rules on AI services, and even more absurd that people on Hacker News advocate for that.
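For the avoidance of doubt, "run it locally" really is this short nowadays. A minimal sketch using the Hugging Face transformers library; the model name is just one example of a freely downloadable open-weight model:

    # Minimal local text generation with an open-weight model.
    # Requires: pip install transformers torch
    from transformers import pipeline

    # Downloads the weights once, then runs entirely on your machine,
    # with no service-side moderation layer in between.
    generator = pipeline("text-generation",
                         model="mistralai/Mistral-7B-Instruct-v0.2")
    out = generator("Write a short poem about the sea.",
                    max_new_tokens=100)
    print(out[0]["generated_text"])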
nutjob2
a month ago
Safety isn't implemented just via system prompts; it's also a matter of training and fine-tuning, so what you're saying is incorrect.
If you think people here believe that models should enable CSAM, you're out of your mind. There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.
More broadly, if you don't reasonably regulate your own models and related work, you attract government regulation.
Ancapistani
a month ago
I’ve run into “safeguards” far more frequently than I’ve actually tried to go outside the bounds of the acceptable use policy. For example, I hit them when attempting to use ChatGPT to translate a handwritten Russian journal containing descriptions of violent acts. I wasn’t generating violent content, much less advocating it; I was trying to understand something written by someone who had already committed a violent act.
> If you think people here believe that models should enable CSAM, you're out of your mind.
Intentional creation of “virtual” CSAM should be prosecuted aggressively. Note that that’s not the same thing as “models capable of producing CSAM”. I very much draw the line in terms of intent and/or result, not capability.
> There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.
I agree, but I believe we are quite far from “reasonable safety” and from “reasonable safeguards”. I can get GPT-5 to try to talk me into committing suicide more easily than I can get it to translate objectionable text written in a language I don’t know.
zajio1am
a month ago
When these models are fine-tuned to allow any kind of nudity, I would guess they can also be used to generate nude images of children; there is a level of generalization in these models. So it seems to me that arguing for restrictions that could effectively be implemented via prompt validation only is just an indirect argument against open-weight models.
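To make "prompt validation only" concrete, here is a hypothetical sketch; the names and the block list are made up for illustration. The point is that this gate lives in front of the model on the service side and vanishes entirely once the weights run on your own hardware:

    # Hypothetical service-side prompt filter (names are illustrative).
    # The gate sits in front of the model; it is not part of the weights,
    # so a locally run open-weight model never passes through it.
    BLOCKED_TERMS = {"example_blocked_term"}  # placeholder block list

    def is_allowed(prompt: str) -> bool:
        lowered = prompt.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    def run_model(prompt: str) -> str:
        # Stand-in for the actual model call.
        return f"(model output for: {prompt})"

    def moderated_generate(prompt: str) -> str:
        if not is_allowed(prompt):
            return "Request refused by content policy."
        return run_model(prompt)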
chrisjj
a month ago
> When these models are fine tuned to allow any kind of nudity
If you're suggesting Grok is fine-tuned to allow any kind of nudity, some evidence would be in order.
The article suggests otherwise: "The service prohibits pornography involving real people’s likenesses and sexual content involving minors, which is illegal to create or distribute."
nozzlegear
a month ago
Why does that seem absurd to you?
7bit
a month ago
Don't feed the troll