The racist AI deepfake that fooled and divided a community

34 points, posted 13 hours ago
by cmsefton

23 Comments

naming_the_user

13 hours ago

It baffles me that people are still focusing on things like "we detected AI manipulation".

You won't know when we've crossed the threshold and it's no longer detectable. By now it should be clear that video/audio evidence without some other form of provenance is untrustworthy.

If I write here

"I love yellow snow" - Barack H. Obama

You don't look for signs of manipulation; you just wouldn't trust it at all without corroboration, because I can type anything I like.

McDyver

12 hours ago

I think that, at some level, people want to feel shocked and angry, or another strong (mainly negative) emotion.

We use the opportunity to validate our biases and become blind to evidence that would negate our strong feelings. Once that visceral reaction occurs, it takes a conscious effort to evaluate and check the sources.

beAbU

12 hours ago

It's called rage bait, and AI makes manufacturing rage bait oh so easy.

wruza

12 hours ago

I think gp’s idea is that rage bait has a vanishingly low threshold. You can accidentally enrage lots of communities today by simply saying things you’d think are natural (or by presenting bare facts).

bell-cot

12 hours ago

Pretty much, though it seems to be a 2-way effect:

A relative hair-trigger on whatever emotions and preoccupations a person already has. These days, for the 99%, there's no lack of negative ones. But if you've ever been around a little girl who's just recently discovered a fictional world full of magical talking ponies and such...

The modern world's focus on no-risk, passive entertainment - at extreme scale. While our distant ancestors told stories around the campfire, and cheap novels & fiction magazines were pretty common over a century ago, the modern world's 24x7 feeds of TV, cable, streaming, social media, etc. are something else. And that content is for-sure not teaching self-awareness, nor self-control, nor prudence, nor skepticism, nor ...

croes

12 hours ago

>You don't look for signs of manipulation, you just wouldn't trust it at all without corroboration, I can type anything I like.

But people still trust what they hear and see.

So audio and video clips get more trust than written words.

naming_the_user

2 hours ago

Our eyes and ears are trustworthy; what's not trustworthy are untrusted sources.

The same applies to quotes. You can trust, well, as much as you can trust Hacker News, that what I've written is what I've written. But you can't trust that what I've written is a true representation of a real event.

generic92034

12 hours ago

I think the parent's point was that this should change, as faking audio and video clips is becoming almost as easy as writing articles with falsehoods.

croes

2 hours ago

We still live in times where some people believe anything they find written on the Internet.

It will take a lot more time to get used to fake video and audio, especially at that volume and quality.

Tempest1981

12 hours ago

Presumably there is something in human instinct whereby trusting our eyes/ears helps us survive. Should we evolve to overcome this?

user

2 hours ago

[deleted]

wruza

12 hours ago

We can’t evolve to overcome much more primitive shit. Don’t hold your breath here.

cen4

12 hours ago

> It baffles me

They are just marketing their products and services. They won't solve anything because Media itself exists as an Attention exploitation/manipulation service offered to the highest bidder.

Global Human Attention is a finite natural resource. It needs to be treated the same way we treat water and uranium. Sooner or later we will get there.

user

12 hours ago

[deleted]

CalRobert

13 hours ago

"Well, even if it’s not real, it’s what I think they think."

This is the real problem, I suspect. People will tend to favour things that support their existing biases (this includes me!) and now we have a firehose of crap to help us feel better about our views instead of challenging ourselves.

ProxCoques

12 hours ago

I used to naively think that some kind of PKI/trust network for online media would have to rise up to thwart misinformation.

But anger is the real reason why people want to believe obviously fake info about how Mexicans are manipulating the weather. We live in a society that tells you that you are free: free to work hard, treat people right and do the right thing. Then health, wealth and happiness will be yours.

But what if you do all those things and see the opposite happening to you and those around you? You want reasons. 5G brain control, Bill Gates and the Deep State all look like great explanations, because the long arcs of neoliberal monetarism and deregulated economics are, well, boring and quite hard to understand. So who cares about some AI fakery when that's going on?

CalRobert

12 hours ago

Hell, the reason I have is "capital owners have ensured that under the rules of our system wealth generally accrues to capital more than labor", but maybe I am just seeking data that agrees with my own preconceived notions. Thomas Piketty (Capital in the Twenty-First Century) apparently got some of his data wrong but he's still referenced plenty.

adontz

13 hours ago

It's in the interest of the general public to fund AI detection technologies, including AI-based AI detection technologies.

lionkor

12 hours ago

Can't wait to say something on video, only for it to be flagged as AI with 90% certainty because I look too average.

jauntywundrkind

10 hours ago

It scares me a bit that AI has been so destabilizing so quickly, such a tool for a would-be "axis of upheaval" or smaller antagonists to help inflame and agitate the world.

And the likely "someone has to do something about this" response that will surely come scares me just as much. There are ideas floating around like Originator Profile & Content Authenticity, which seem like voluntary ways to start letting sites vouch for their content, and that seems ok...

https://github.com/w3c/tpac2024-breakouts/issues/70 https://github.com/w3c/tpac2024-breakouts/issues/90
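
For what it's worth, the underlying mechanism those proposals lean on is ordinary digital signatures: a publisher signs a digest of the media it releases, and anyone with the publisher's public key can check the bytes haven't changed. A minimal sketch in Python, assuming the third-party cryptography package; the keys and content here are made up, and this is not the actual Originator Profile / C2PA protocol:

    # Rough illustration, not the actual Originator Profile / C2PA spec:
    # the publisher signs a digest of the media file, and anyone holding
    # the publisher's public key can check the bytes haven't changed.
    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Publisher side: generate a keypair and sign the content digest.
    publisher_key = Ed25519PrivateKey.generate()
    media_bytes = b"...the published video file..."  # placeholder content
    signature = publisher_key.sign(hashlib.sha256(media_bytes).digest())

    # Consumer side: verify with the publisher's public key.
    public_key = publisher_key.public_key()
    try:
        public_key.verify(signature, hashlib.sha256(media_bytes).digest())
        print("content matches what the publisher vouched for")
    except InvalidSignature:
        print("content was altered or never came from this publisher")

The hard part isn't the crypto, it's getting publishers to actually sign things and getting their keys distributed in a way people trust.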

...but it feels like there is a rapidly growing angst about mis- and disinformation, and that eventually some nations are going to start demanding truth, and only truth, online, and that feels implausible except by letting only a very few speak at all.

It also bothers me that HN seems to be sweeping these issues with AI under the rug. Many of these deepfake articles seem to get flagged almost immediately. Yesterday the article on the widely shared AI-generated image of Trump in blue jeans walking through flood water got flagged quite quickly too. Now this? This is exactly what the current moment of technology & its role in society is, right here, and malfeasants projecting their petty narrow view via suppression seem to be winning the day already. https://futurism.com/the-byte/donald-trump-hurricane-ai

atmavatar

5 hours ago

There's good reason for the angst. A cornerstone that's absolutely necessary for representative government to function is a well-informed public.

Mis/disinformation is becoming more prevalent and more convincing, which already makes it extremely difficult to debunk effectively, and the echo chambers afforded by social media algorithms and the tailored-to-your-bias balkanization of traditional media make debunking nigh impossible. "More speech" doesn't work when it can't penetrate the bubbles people live in.

So, we're in a bit of a catch-22: risk destroying freedoms via censorship or leave things alone and let them disappear on their own as society self-destructs due to a populace that no longer lives within a shared reality. A viable third option would be really nice.