> These models generate probably a billion images a day.
Collectively, probably more than that. Grok alone? Not unless you count each frame of a video, I think.
> If getting it wrong for even one of those images is enough to get the entire model banned then it probably isn't possible and this de facto outlaws all image models.
If the threshold is one in a billion… well, the risk is adversarial outcomes, so you can't just toss a billion random attempts at it and see what pops out. But a billion images is not infeasible: if it's anything like Stable Diffusion you can stop a generation early, and my experiments with SD suggested the energy cost even for a full generation is only about $0.0001/image*, so a billion is merely $100k.
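For what it's worth, the arithmetic is trivial (a minimal sketch; the per-image energy figure is the one from my post linked below, and it assumes full generations with no early stopping):

    # Hypothetical cost of one billion full generations, using the
    # ~$0.0001/image energy figure from my Stable Diffusion tests.
    cost_per_image = 0.0001         # USD of energy per full generation
    images = 1_000_000_000          # one billion attempts
    print(f"${cost_per_image * images:,.0f}")  # -> $100,000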
Given the current limits of GenAI tools, simply not including unclothed or scantily clad people in the training set would prevent this. I guess you could leave topless bodybuilders in there, in which case all these pics would look like Arnold Schwarzenegger, and almost everyone would laugh and not care.
> That may precisely be the point of this tbh.
Perhaps. But if that were the goal, I don't think this excuse would be needed, and for other reasons besides, I am not convinced that it is the goal in the EU.
* https://benwheatley.github.io/blog/2022/10/09-19.33.04.html
If they can't prevent the model from producing child porn, then it should be banned.
Should Photoshop be outlawed? What about MS Paint? I'm pretty sure both are capable of creating this stuff.
Also, let's test your commitment to consistency on this matter. In most jurisdictions, possession and creation of CSAM is a strict-liability crime, so do you support prosecuting whichever journalist demonstrated this capability to the maximum extent of the law? Or are you only in favor of protecting children when it happens to advance other priorities of yours?
Photoshop is fine, running a business where you produce CSAM for people with photoshop is not. And this has been very clear for a while now.
I did not see the details of what happened, but if someone did in fact take a photo of a real child they had no connection to and caused the images to be created, then yes, they should be investigated, and if the prosecutor thinks they can get a conviction they should be charged.
That is just what the law says today (AIUI), and is consistent with how it has been applied.
> Photoshop is fine, running a business where you produce CSAM for people with photoshop is not. And this has been very clear for a while now.
What if Photoshop is provided as a web service? That is analogous to running image generation as a service: in both cases the provider takes input from the user (in one case a textual description, in the other a sequence of mouse events) and generates an image with an automated process, without specific intentional input from the provider.
Note that in this case, using the service to produce CSAM was against the terms of service, so the business was tricked into producing it.
And there are other automated services that could be used to generate CSAM, for example automated photo booths. Should their operators be held liable if someone uses them to produce CSAM?
Somehow I doubt the prosecutor will apply the same standard to the other image-generation models, which I bet (obviously without evidence, given the nature of this discussion) a motivated adversary can convince to do the same thing at least once. But alas, selective prosecution is the foundation of political power in the West, and pointing that out gets you nothing but downvotes. As patio11 once put it, pointing out how power is exercised is the first thing that those who wield power prohibit when they gain it.
You often see (appropriately, IMO) a certain amount of discretion wrt prosecution when things are changing quickly.
I doubt anyone will go to jail over this. What (I think) should happen is that state or federal law enforcement make it very clear to xAI (and the others) that this is unacceptable, and that if it keeps happening and they are not showing that they are fixing it (even if that means some degradation in the capability of the system/service), then they will be charged.
One of the strengths of the Western legal system that I think is underappreciated by people here is that it is subject to interpretation. Law is not code. This makes it flexible enough to deal with new situations, and it is (IME) always accompanied by at least a small amount of discretion in enforcement. And in the end, the laws and how they are interpreted and enforced are subject to democratic forces.
When the GP said "not possible" they were referring to the strict letter of the law, as I was, not to your lower standard of "make a good effort to fix it". Law is not code because that gives lawgivers the discretion to exercise power arbitrarily while convincing citizens that they live under the "rule of law". At least the Chinese, for all their faults, don't bother with the pretense.
If you reject the foundation of liberal western civilization I don’t know what to tell you.
Move to china?
I'm just pointing out how the world works in real life, not saying that it is desirable. Keeping that distinction in mind is very useful.
Even the OP's quote made it clear this isn't the case. Companies need to show they rigorously tested that the model doesn't do this.
It's like cyber insurance requirements - for better or worse, you need to show that you have been audited, not prove you are actually safe.