ryzvonusef
9 months ago
Everyone has their own fears about AI, but my fears are especially chilling: what if AI was used to imitate a person saying something blasphemous?
My country already has blasphemy lynch mobs that form on the slightest perceived insult, real or imagined. They will mob you, lynch you, burn your corpse, then distribute sweets while your family hides and issues video messages denouncing you and forgiving the mob.
And this was before AI was easy to access. You can say a lot of things about 'oh, backward countries', but this will not stay there; it will spread. You can't just give a toddler a knife and then blame them for stabbing someone.
This has nothing to do with fame, with security, with copyright. This will get people killed. And we have no tools to control this.
https://x.com/search?q=blasphemy
I fear the future.
losvedir
9 months ago
I think the answer, counterintuitively, is to make these AI tools more open and accessible. As long as they're restricted or regulated or inaccessible people will continue to think of videos and recordings as not fakeable. But make voice cloning something easy and fun to do with a $1 app, let the teens have their prank call fun and pretty soon it should work its way into the public consciousness.
I had my 70-year-old mother ask me last week if she should remove her voicemail message because can't people steal her voice with it? I was surprised, but I guess she heard it on a Fox segment or something.
I think it might be a rough couple years but hopefully we'll be through it soon.
HeatrayEnjoyer
9 months ago
This is idealistic. People still haven't fully learned that images can be photoshopped, in the decades of Photoshop's existence. (Deep)faked porn is still harmful, which is why it's a crime.
Worse, there isn't an attitude of default skepticism in many areas/cultures. If a person is suspected of violating the moral code the priority will be punishment and reinforcing that such behavior isn't acceptable. Whether or not the specific person actually did the specific act is a secondary concern.
It's just going to increase the number of people who will be harmed or killed.
deepsun
9 months ago
Yep, I had a lawyer tell me that an image timestamp cannot be faked, while it's literally a right-click away.
sugarkjube
9 months ago
Well, thing is, most people can't.
In a lawyer's view, and a judge's view, only some skilled expert "hackers" can, and it's called hacking (so I guess we're all hackers).
I once discussed these things with a (knowledgeable) lawyer. He explained that you can present almost anything in a court case, and when it isn't refuted, well, then it's valid.
In one case my lawyer (the same one) presented a printed-out email. The other party did not claim it was false, so it was suddenly just as valid as a registered letter. (It was a genuine email.)
In another, unrelated case, the other party suddenly introduced a forged picture. If I hadn't been there at that moment (I actually wasn't supposed to be), then suddenly it would have been proof.
Court cases are not about truth, and not about justice. They are about convincing the judge.
deepsun
9 months ago
Well, at least for email, it's theoretically possible to prove its authenticity through third parties. E.g. a lawyer can ask Gmail: "did you receive this email with this DKIM signature?"
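For the curious, here is a minimal sketch of the self-serve version of that check, assuming the third-party dkimpy package and a hypothetical message.eml saved with its original headers (asking Gmail to attest directly is a separate, stronger option):

```python
# Minimal sketch of verifying an email's DKIM signature, assuming the
# third-party "dkimpy" package (pip install dkimpy). The raw message
# must include the original headers, DKIM-Signature among them.
import dkim

with open("message.eml", "rb") as f:  # hypothetical saved raw email
    raw_message = f.read()

# dkim.verify() fetches the sender domain's public key via DNS and
# checks that the signed headers and body hash still match.
if dkim.verify(raw_message):
    print("DKIM verifies: signed headers/body unmodified since sending")
else:
    print("No valid DKIM signature: contents may have been altered")
```

This only proves the message existed in this form when the sending domain signed it; it says nothing about who sat at the keyboard.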
deepsun
9 months ago
And a judge's job is not to find the truth. It's to convince the public that the ruling was just.
veunes
9 months ago
When people in positions of authority, like legal experts, don't fully grasp how easily digital content can be manipulated, that's a problem in itself.
xp84
9 months ago
The first thing that should be done, which should be easy enough to do, is demonstrating blasphemy with the voices of all the most pious leaders who lead the lynch mobs. If those get believed, well, the problem is solved in an alternate way.
I also however don’t believe that deepfaked porn is actually harmful. People are already imagining people naked, and it’s been easy to photoshop a nude for 15 years. A talented artist was needed 200 years ago. Allowing this (admittedly rude and disrespectful) act to be done with less skill changes very little.
tikkun
9 months ago
I'll note that both Photoshop and changing the timestamps of images (mentioned below) are only easy for a very small percentage of the population. It'd likely be different if >30% of people could easily do these things.
tga_d
9 months ago
Changing an image timestamp -- as in, exif metadata -- is trivially easy for anyone with a computer or phone; a quick search will tell you how to do it on any device, with no skill involved. There's a difference between "easy for a small percentage of people" (modify the content of a photo in an undetectable way, find a software vulnerability, etc.) and "only a small percentage know how to do it" (modify exif values, do a simple magic trick, etc.). Just because someone doesn't know that anyone could do it if they tried doesn't mean it's not trivial and pervasive.
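To make the "trivially easy" point concrete, a minimal sketch, assuming the third-party piexif library and a hypothetical photo.jpg; it rewrites both the EXIF capture date and the filesystem timestamp:

```python
# Minimal sketch of rewriting an image's timestamps, assuming the
# third-party "piexif" library (pip install piexif) and a hypothetical
# photo.jpg. No special skill involved, as the parent comment notes.
import os
import time
import piexif

FILENAME = "photo.jpg"
FAKE_DATE = "2015:06:01 12:00:00"  # EXIF format: YYYY:MM:DD HH:MM:SS

# Rewrite the EXIF capture timestamps stored inside the file.
exif_dict = piexif.load(FILENAME)
exif_dict["0th"][piexif.ImageIFD.DateTime] = FAKE_DATE.encode()
exif_dict["Exif"][piexif.ExifIFD.DateTimeOriginal] = FAKE_DATE.encode()
exif_dict["Exif"][piexif.ExifIFD.DateTimeDigitized] = FAKE_DATE.encode()
piexif.insert(piexif.dump(exif_dict), FILENAME)

# Rewrite the filesystem's access/modification times too.
fake_epoch = time.mktime(time.strptime(FAKE_DATE, "%Y:%m:%d %H:%M:%S"))
os.utime(FILENAME, (fake_epoch, fake_epoch))
```

The rewritten values are byte-for-byte indistinguishable from camera-written ones, which is why a timestamp alone proves essentially nothing.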
alexsmirnov
9 months ago
The problem is that to make the right query, one has to know at least part of the answer. To search for an exif modification tool, it's essential to understand:
- what an image file is, and how information is stored in it
- the concept of `metadata`, and exif in particular
- the concept that any file can be modified in place, or when you make a copy
- that there's an ecosystem of open/free/custom tools that can do unusual tasks
The small percentage has all that knowledge at hand to even get the idea that modifying an image timestamp is possible.
shaftway
9 months ago
This.
I'm currently the originator and lead on a project that will have huge impact on our bottom line. The team is mostly junior people, so when I presented the plan the response I got wasn't "I didn't think we could do that", it was "I didn't even know that that is a thing".
There used to be a joke about Rumsfeld's "known unknowns" vs. "unknown unknowns", but it's so true. And to the vast majority of people in the world the question isn't "how" to change a timestamp, it's understanding that a timestamp can even be changed.
bryanlarsen
9 months ago
Most people will believe a rumour if it is told to them in person by a friend. We've had our entire evolution worth of time to recognize that rumours can be manipulated yet rumours still spread and are still very dangerous.
CoastalCoder
9 months ago
> I had my 70-year-old mother ask me last week if she should remove her voicemail message because can't people steal her voice with it? I was surprised, but I guess she heard it on a Fox segment or something.
Out of curiosity, how much training data is needed currently to mimic a voice at various levels of convincingness?
alternatex
9 months ago
Skype does it in real time (for live translation) with a few seconds of audio. For the reasons discussed in this thread, it continuously forgets its previous training from the call to not make the voice too similar, but just similar enough to distinguish the speakers.
HeatrayEnjoyer
9 months ago
Almost none; at most, as little as a professional impersonator requires. GPT-4o's advanced voice mode would clone the user's voice by accident. A recording clip of one incident is available:
>During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user’s voice.
>Example of unintentional voice generation, model outbursts “No!” then begins continuing the sentence in a similar sounding voice to the red teamer’s voice
https://assets.ctfassets.net/kftzwdyauwt9/4CG0G7y9WOfEkzBpi7...
jerpint
9 months ago
Just a few seconds of speech gets you pretty far with a lot of models
caeril
9 months ago
Yes, this is generally true for cloning the timbre/pitch/tonality, but if you're going to run a voice-cloning scam to fool friends and family members, you need much more to get the cadence, pauses, vocal tics, etc. right.
kmlx
9 months ago
> what if AI was used to imitate a person saying something blasphemous?
> My country already has blasphemy lynch mobs
in your case the problem is not AI, it’s your country.
ryzvonusef
9 months ago
Your country might not have lynch mobs, but you can't deny there are certain taboo topics in your society too, certain slurs and other opinions which would take you ages to cleanse yourself of, and even then never fully.
If AI fake porn of some ordinary person involving a minor were unleashed, think of the utter shame and horror with which they would be treated by people for the rest of their lives, even if it were proven false.
No one would believe them, work with them, hire them, or rent to them; they would wish they had been lynched instead of living the life they live.
nkrisc
9 months ago
Yes, you're right. That's why the problem is the lynch mobs. If there was no AI, people would find another way to sic the mob on someone. I'm sure it's already happened countless times without AI. Mobs aren't known for rational behavior or respect for the rule of law.
ryzvonusef
9 months ago
I was simply explaining my personal fears regarding AI given that lynch mobs exist in my country; the lesson you should have taken from that was not about lynch mobs, but to analyse how AI might affect YOU in your particular environment.
Remember, the post is about cloning a youtuber's voice using AI; most people were thinking of the copyright aspect of it, but I wanted to share with people what my fear was.
My fears are: things were already bad w.r.t. lynch mobs before AI; after AI, things will only get worse with fake but realistic-sounding voice notes etc. That's my fear, doesn't have to be yours.
You don't have lynch mobs (hopefully), but surely you can think of other problems that a fake AI image/audio/video can cause, and just how low the barrier is, and just how good the tools are.
nuancebydefault
9 months ago
Maybe it's because I have no knowledge of lynch mobs in my country, but I find that fear is a bad adviser. I think there is currently no way back in AI development. I read Microsoft is getting a shut-down nuclear reactor restarted to power its power-hungry AI.
The only way forward is making sure most people know that digital content can be easily faked. If you hand out good faking tools for free, it will happen faster.
noobermin
9 months ago
There is absolutely a way back if laws change. It's that simple; people need to stop pretending it's inevitable when it absolutely isn't.
nuancebydefault
9 months ago
International law like in... human rights? Are those currently being enforced?
nkrisc
9 months ago
I don’t disagree that AI could exacerbate the issue, in fact I agree. But anything could potentially do so. Instant messaging likely also made the problem worse as a call to form a mob can reach many people much faster than before. The time from inciting incident to mob action is greatly compressed because of instant messaging.
It’s fundamentally a societal issue, not a technological one. Yes, it’s scary what effect AI could have on lynch mobs, because lynch mobs are scary.
roenxi
9 months ago
But mob formation and mass social shaming on little evidence already happen without AI, and it isn't clear why AI would make it worse. If it gets that easy to create fake videos, that'll just muddy the waters and people would trust rumour less, not form mobs more.
It happens once, then some wag creates videos of all the mob leaders blaspheming or whatever. Undermines the idea that the videos are the root cause.
ryzvonusef
9 months ago
You are giving too much credence to the analytical capabilities of a lynch mob, and you are underestimating how much AI lowers the barrier for creating realistic 'proof' for incitement.
Also there are no mob 'leaders'. These are everyday folks, these are you and me and my neighbours.
A student pissed at a teacher could create a fake AI video of his teacher at night, and wake up to no pesky teacher and feel no remorse over what happened while he slept.
The student was not a mob leader, just some dumbass who was angry at their teacher for giving bad grades, and had access to simple AI tools (soon available online for a trivial fee, if not free).
____
Be honest, are you 100% sure of the status of that facebook account you deleted in 2018? Zuck never deleted it, that content is still there to be mined and abused. Also not that hard to just log in to the damn thing and post something spicy.
I don't remember if old posts can be edited in FB, cause that would allow the person to really gild the lily.
benterix
9 months ago
> people would trust rumour less, not form mobs more
I believe you are right. When the first unbelievable clips appeared on Facebook (where the most gullible people are), you could see the old ladies cheering "oh how wonderful! where did you film that?" Now in the comments section you will mostly see people saying "sheesh, stop polluting my feed with this fake AI-generated spam". Don't get me wrong, many people still fall for them, but the tendency seems clear - after all, we managed to survive because we learn.
forgetfreeman
9 months ago
I feel like the Bullshit Asymmetry Principle is involved here. If the bar is lowered to the point where it takes less energy to generate inflammatory video "evidence" than it does to debunk it, the shitheads win. I feel like we can all agree that AI has the potential to lower that bar.
t0bia_s
9 months ago
The next generation, or the one after, will be used to it. Digital content will be consumed differently, with automatic scepticism or so...
Remember when people believed that the camera took your soul when you were captured in an image?
pjc50
9 months ago
The US equivalent is much less labour intensive than a lynch mob: it's mass shooters radicalized by things they've read on the internet.
Or https://www.npr.org/2024/09/19/nx-s1-5114047/springfield-ohi... , where repeating racial libel causes a public safety problem.
While this kind of incitement in no way requires AI, it's certainly something that's easier to do when you can fake evidence. See also https://www.bbc.co.uk/news/articles/c5y87l6rx5wo
fragmede
9 months ago
https://apnews.com/article/ai-maryland-principal-voice-recor...
In the US, freedom of speech means the government won't stifle you, but the court of public opinion will crucify you for saying the wrong thing. In the linked case, a school principal was framed as a racist, a fireable offense. Better than being killed by religious extremists, I suppose, but still not great. Thankfully in this case they were able to find the perpetrator who faked the evidence, but we have a problem.
rustcleaner
9 months ago
>radicalized by things they've read on the internet
That take pretty much throws the shooters' preceding years of anguish under the bus to score a political point. "Ban the wrongspeech!" is a seductively easier proposition than ending youth bullying and ostracism.
johnnyanmac
9 months ago
Free speech doesn't protect against libel/slander. It also doesn't protect against speech that induces violence or panic (the old "yell fire in a theater") And I argue impersonating celebrity voices to radicalize others falls under the latter.
So yeah, I would call this "wrongspeech" and wrongspeech is already banned.
pjc50
9 months ago
Quite a few (not all, but the more famous ones) write manifestoes in which they describe how they were radicalized, cite other manifestoes and mainstream racist commentators who inspired them, along with Adolf Hitler, and make it very clear what their motives are.
> youth bullying and ostracism
... which itself counts as free speech?
A lot of people get bullied in a lot of countries. Few of them turn to mass murder. The mass shooter phenomenon is mostly American, plus a few people who've been radicalized by the same racist discourse on the same message boards (Anders Breivik, the NZ mosque shooter).
In quite a lot of cases closer investigation suggests that they were ostracized precisely because they'd already chosen hate.
psychlops
9 months ago
The internet isn't always the bogeyman, the last assassin was radicalized by mass media.
datavirtue
9 months ago
People don't shoot up crowds because of something they read on the internet. They do it because they are done with life. Everything after that is retrospective reasoning.
pjc50
9 months ago
Some of them (e.g. Breivik) write manifestoes in which they list all the stuff they read on the internet that motivated them.
datavirtue
9 months ago
Swap it out for books or tracts or magazines, then.
johnnyanmac
9 months ago
It's no one factor, and I'm sure the internet contributes. Especially if you feel isolated in the physical world.
jokethrowaway
9 months ago
Yes, but the invention of radio and television already enabled mass indoctrination and mass brainwashing. The internet is more refined, but it's the same principle.
The solution is not to ban technology (it's impossible), but to create a more decentralised society where the majority of voters, which is easily brainwashed, doesn't get to dictate the life of the minority. We need to destroy and dismantle all the gigantic entities that ruin our life for the advantage of the few, whether they are the government or corporations ruled by billionaires controlling the government.
We also need to create a better society with less mental illness, where people are not so depressed by how unfair life is that they go on to kill kids in school - but I believe if we reduce large societal inequality, this will sort itself out over time.
Parents in my circle are already banning social media for their kids until they're adults, so over time we'll evolve to get better at ignoring all the crap we get exposed to.
berniedurfee
9 months ago
What country is immune to this?
As far as I can tell, the collective consciousness of every country is swayed by propaganda.
A written headline is enough to incite rage in any country much less a voice or video indistinguishable from the real thing.
Folks in "developed" countries have their lives destroyed or ended all the time based on rumors of something said or done.
bufferoverflow
9 months ago
Any country that doesn't have executions by mobs. Used to be most nordic countries. Till they changed their immigration policies.
JoeAltmaier
9 months ago
Or any number of non-racist forces at work, e.g. the internet, movies, and social norms changing everywhere.
bufferoverflow
9 months ago
Those are the mobs. Your average "non-racist" force is just racist to some other race.
7bit
9 months ago
That's a little too easy, no? AI being used to imitate people definitely is a problem that needs to be addressed, and already is. Discarding it because there is a bigger issue is ignorant. Both can exist as problems at the same time.
Ygg2
9 months ago
The problem is AI. What if you post a video of a politician eating babies, and that causes some nutjob to kill that politician?
Sure, distrust everything digital, but what if the only evidence of someone doing something wrong is digital?
tomjen3
9 months ago
What if someone printed yellow papers blaming Spain for a ship accident so the US would go to war with them?
jokethrowaway
9 months ago
Becoming famous always had its risks because of mentally ill people targeting you.
Nothing new.
The trick is being rich without being famous!
charlieyu1
9 months ago
People have copied and pasted celebrities' faces onto other photos since the 1990s. Nothing significant happened.
ryzvonusef
9 months ago
The problem is not celebrities, the problem is general everyday people.
With celebrities, at least they are famous enough that the idea of fakes exists... but you are Johnny Nobody; that's not a fake, you just got caught and don't want to admit you said/did it!
The court of public opinion is much smaller and personal, and 'evidence' is much more realistic and detailed.
Nathanba
9 months ago
General everyday people will have to figure out that Photoshop exists and that it now contains AI that can invent pictures from scratch. It should be easy to figure out; just explain that it can paint 100% realistically, like a camera. The real issue is how people will start using this as a defense for criminal acts: "it wasn't me, all the camera footage is AI generated".
flembat
9 months ago
An individual is not responsible for the culture or government in the country they live in.
In the UK a government was just elected with a historic absolute majority by only ten million people, and now first-time offenders are being sent to prison for making stupid, offensive statements online.
switch007
9 months ago
Are you referring to this case?
> In a now deleted post on her X account, Mrs Connolly, from Northampton, wrote: “Mass deportation now, set fire to all the f*** hotels full of the bastards for all I care… If that makes me racist, so be it.
> The court heard Kay copied, pasted and uploaded Mrs Connolly’s post at 12.27pm on Wednesday from a BBC News report and added the hashtags #standwithlucyconnolly #lucyconnolly #f**northamptonshirepolice #conservative #FaragesRiots #RiotsUK and #Northampton.
> As well as a post which urged people to “mask up” during a protest targeting an immigration law firm, Kay tweeted to his 127 followers at 2.34am: “That’s 100% the plan, plus gloves. No car either so no number plate to trace and a change of clothes ready nearby.”
https://www.independent.co.uk/news/uk/crime/northampton-bbc-...
flembat
9 months ago
Free speech should not just be for people we agree with. Although I agree conspiracy to commit a crime is something else.
My point was that AI faked content could certainly get you locked up in the UK.
bitnasty
9 months ago
That may be true, but it doesn’t unkill the victims.
latexr
9 months ago
The comment didn’t say the problem was AI, it said they feared its consequences, which is a perfectly valid concern.
It’s like if someone said “I’m scared of someone bringing a semi-automatic weapon to my school and doing a mass shooting. My country has lax laws about guns and their proper use”. And then you said “in your case the problem is not guns, it’s your country”.
I mean, it’s technically true, but also unhelpful. Such ingrained laws are hard to change and you can be placed in danger for even trying.
Before someone decries the gun example as not being comparable: it is possible to live in a country with a monumental number of guns and not have mass murders every day. It's called Switzerland.
But let’s please stick to the subject of AI, which is what the thread is about. The gun example is the first analogy which came to mind, and analogies are never perfect, so it’s unproductive to nitpick the example. I don’t mean to shift the conversation from one contentious topic to another.
pjc50
9 months ago
> It’s like if someone said “I’m scared of someone bringing a semi-automatic weapon to my school and doing a mass shooting. My country has lax laws about guns and their proper use”. And then you said “in your case the problem is not guns, it’s your country”.
The US has both problems: widespread availability of weapons and a high level of freedom to incite violence through spreading lies about groups. Which is why it sees much more of these incidents than countries which have similar levels of gun ownership.
The non-gun version of the problem is mass stabbings, which are less lethal.
underdeserver
9 months ago
Switzerland allows you to own a gun, but (generally) not to bear it. Huge difference.
bryanrasmussen
9 months ago
America also has laws against taking guns into theaters and indiscriminately shooting people; once you have easy access to guns, the gun becomes a potential solution to problems. Maybe Switzerland just has fewer problems where that potential solution appeals to people.
input_sh
9 months ago
I'd say that the key difference is that Switzerland has compulsory military service in which you're taught how to operate a weapon properly. Therefore, everyone that has one has gone through months of training.
Vs. the US, where there are loopholes you can use to avoid even a basic background check, and then use the gun for the very first time to shoot someone.
thfuran
9 months ago
Training seems likely to cut down on accidental shootings, but not so much on the deliberate ones.
underdeserver
9 months ago
I disagree. First of all, people with issues are more likely to be found out in boot camp.
Second, you learn a certain respect for the firearm and are expected to observe strict safety rules when handling it. That gives you a kind of psychological flinch when you consider doing anything out of the norm with it.
ambicapter
9 months ago
It seems like tons of toddlers are accidentally firing guns in America as well, so training probably won't help there.
thfuran
9 months ago
Training people to lock up their damn guns would.
holoduke
9 months ago
Biggest difference is that you have only rich people in Switzerland. Almost no crime and hardly any immigration issues.
tharkun__
9 months ago
That doesn't seem to make sense. All that would do is make sure that a nutjob knows how to properly shoot the gun. Basically what you are saying is "a Swiss wouldn't have missed Trump".
The difference might be in what guns mean to Swiss people vs. the US.
A gun in the US has this "if the government becomes destructive of these ends we can start shooting" connotation. A gun is there for self defense. Someone doesn't get off your lawn, you use your gun.
In Switzerland, the gun and the compulsory military service you mention are there for the people to protect their country and fellow countrymen. You are trained in defending your neighbor, who just stepped on your lawn, against outside aggressors.
godelski
9 months ago
> what if AI was used to imitate a person saying something blasphemous?
I've been contemplating writing an open letter to Dang asking him to nuke my account, because at this point you can likely deanonymize any user with a fair number of comments, as long as you can correlate them. You can certainly steal their language, even if not 100% accurately. It may be caution, but it isn't certain that we won't enter a dark forest, and there's reason to believe we could be headed that way. But at the same time, isn't retreating to the shadows giving up?
ryzvonusef
9 months ago
The problem I fear is this: let's say you once had a facebook account; we all deactivated our accounts when there was a wave against Zuck a few years back, but as we know, facebook doesn't really delete your account.
Now imagine that account was linked to a SIM. It's trivial for a nefarious actor to get it re-activated; in fact there was a video by Veritasium just today where they didn't even need your SIM.
But even if they are not that hi-tech, it's not that hard to get a SIM issued in your name, or other hacks of a similar nature; we have all heard the stories.
Worse, you lost that SIM a decade back, the number goes back into the queue and is eventually re-issued to someone new... and they try to create a facebook account, and are presented with yours.
They can then re-activate your old facebook account, and post a video/audio/text of "godelski" saying they like pineapple on pizza. And before you can defend yourself, the pizzerias have lynched you.
(I dare not use a real example even as a jest, I live here)
Are you 100% sure of all your old social media accounts, all the SIMs you have ever used to log in to accounts?
We leave a long trail.
rustcleaner
9 months ago
I think we need a data Great Reset: all data (and I mean all) must be deleted by a certain date in the future, and all newly collected data must have provenance, so in the future the customer can find out that Ford got information from Databroker A, which bought it from B, which bought it from Microsoft, which collected it from what you type in Windows. Companies holding activity data without provenance get the crack-cocaine dealer treatment: officers in jail, everybody fired, investors and lenders fucked, company liquidated and forfeited.
While it is impossible to take back a disclosed secret, we can create a legal framework which issues immediate business-stopping corporate death sentences for spreading that data without consent (or after you revoke consent; yes, it needs to be revocable). Your data is so valuable because, now that we have universal function approximators, you can be simulated (to a prediction horizon) and that simulation interrogated.
danieldk
9 months ago
There should be a way to cryptographically sign posts (everywhere). I know, building a web of trust sucks, etc. But if there was someone with your username signing with a particular key for 10 years and then suddenly there is something controversial somewhere with a different key, something fishy is going on.
Of course, this could be misused to post something with plausible deniability, but if you want to say something controversial, why wouldn't you make another account for that anyway?
I know that one could theoretically sign posts with GPG, but it would be much nicer and less noisy if sites had UI to show something like: Signed by <fingerprint>, key used for N years.
One issue is that most social media want your identity to be the account on their service and not some identity (i.e. key) that you control.
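For illustration, a minimal sketch of the signing mechanics, assuming the third-party cryptography package; the long-lived key and the site UI are the missing pieces, not the math:

```python
# Minimal sketch of signing a post with a long-lived key, assuming the
# third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

# A key generated once and kept for years; its public half is what
# readers come to associate with the username.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

post = b"I did not say that thing the deepfake says I said."
signature = private_key.sign(post)

# A reader (or the site's UI) verifies the post against the known key.
try:
    public_key.verify(signature, post)
    print("Signed by the same key this account has used for years.")
except InvalidSignature:
    print("Signature does not match: something fishy is going on.")
```

The hard parts are key continuity and presentation, which are social problems; the cryptographic primitive itself is a few lines.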
godelski
9 months ago
> There should be a way to cryptographically sign posts (everywhere).
This just confirms it is me. Which, yes, reduces the problem of me being replicated, but does not do anything for my anonymity. That part may not seem important to you, considering you use your real name, but it is to me. It allows me to be more open. Key signing will not be the solution because it isn't addressing the problem. It exacerbates it.
danieldk
9 months ago
> This just confirms it is me. Which, yes, reduces the problem of me being replicated, but does not do anything for my anonymity. That part may not seem important to you, considering you use your real name, but it is to me. It allows me to be more open.
Ah, I think I see your point. Your worry is that language use, etc. could be used to deanonymize you, by correlating with text that was not written anonymously. But that's a separate issue from voice or writing-style cloning to pretend it's you that said it. In the latter case you could use a pseudonymous signing key?
I agree that deanonymization is an issue that is hard to tackle. I wonder if someone studied how unique writing style is. E.g. browser fingerprints are fairly unique, but I wonder to what extent you can filter a person from, say, a pool of 100 million, using writing style alone (grammar, vocabulary use, etc.). I guess it becomes quite easy if you engage in a lot of domain-specific discussions and use their vocabularies.
E.g. if I'd talk about Marlin-kernels here, you could probably narrow me down to a few hundred people. Throw in another comment about the Glove80. Maybe ten people at most?
godelski
9 months ago
> I wonder if someone studied how unique writing style is.
Since I teach, I can tell you that I can usually tell who wrote something by their language, and it even works with code. There's also the Enron dataset, a common dataset on which first-time ML students do exactly this task. Your language is in fact a fingerprint. And like you suggest, topics are too. Much of our freedom of anonymity comes from the fact that it is hard, or not worth it, to dox people.
I do agree that verification is a different issue though. I'm not sure keys will solve it, because you're not going to sign anything that is scandalous, so it might even give evidence to those who want to falsely claim foul play. And how do you sign a leak?
The problem with signing is that it seems to work for the cases we don't care about and do nothing for the ones we do. That is, unless we sign literally everything, including our voice, but then you kill anonymity (which is why I connected the two), and you could then probably clone that too.
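For illustration, a minimal sketch of the baseline technique behind that classroom exercise, assuming scikit-learn and toy data; character n-grams plus cosine similarity is the classic starting point for authorship attribution:

```python
# Minimal sketch of writing-style fingerprinting (authorship
# attribution), assuming scikit-learn and toy data; real studies use
# corpora like Enron, with far more text per author.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_texts = {  # hypothetical: comments written under known identities
    "alice": "Well, thing is, most people can't do this at all.",
    "bob": "I routinely see papers get awards despite basic errors.",
}
anonymous_text = "I routinely see such papers get awards despite errors."

# Character n-grams capture punctuation habits, spelling, and phrasing
# quirks, which is what makes style a fingerprint.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
corpus = list(known_texts.values()) + [anonymous_text]
vectors = vectorizer.fit_transform(corpus)

# Compare the anonymous text against each known author.
similarities = cosine_similarity(vectors[-1], vectors[:-1]).ravel()
for author, score in zip(known_texts, similarities):
    print(f"{author}: similarity {score:.2f}")
```

On snippets this short the scores mean little; the technique only becomes discriminative with a decent amount of text per author, which is exactly the "fair number of comments" scenario discussed above.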
vnorilo
9 months ago
Not sure the lynch mob would pause to check that the web of trust is unbroken.
aktenlage
9 months ago
Another solution would be to use an LLM to rephrase your posts, wouldn't it?
Not a great outlook though, if everybody does this...
rustcleaner
9 months ago
I already do this in all my IRCs, Matrices, and Discords
godelski
9 months ago
Not really. Though that would average out language and poison LLMs. I'm not sure I want either of those to happen.
Besides, topics and shared details are also PII. It's more that they aren't useful in that way until you have scale.
Plus, on HN you can't edit posts after some time.
johnnyanmac
9 months ago
>I'm not sure I want either of those to happen.
At this stage, and given current behavior, I wouldn't mind some poison. That's the only way companies will learn at this point.
kossTKR
9 months ago
Yep, I'm sure lots of people have written a lot of random stuff on a lot of forums that should absolutely stay anonymous, from gossip to family secrets to honesty about career/workplace and whatnot.
If stylometric analysis runs on all comments on the internet then yeah.
Bad things will happen, very very bad.
I honestly think it should at least be illegal to do this kind of analysis, because it would be a treasure trove for the commercial sector to mine, with this data correlated to real people, not to mention the destruction of the millions of people with personal anonymous blogs etc.
Actually, thinking about it further, you could also easily group people by political affiliation, and all kinds of other thoughts. Dark, dark stuff!
photonthug
9 months ago
Idk the state of the art for forensic stylometrics or the precedents but I would be surprised if this hasn’t already been presented as evidence in many (most?) notable jurisdictions. Not definitive evidence but supporting evidence. So far from being declared illegal, it’s probably mostly established as a tool for law enforcement.
Meanwhile gangsters or nation states are probably already working on automated deanonymization and blackmail at scale, but will target the dark web before the public one. Not sure any of this stuff is that changed by the advent of deep fakes and llms tho, probably just boring classical statistics does ok?
greenchair
9 months ago
I wonder if in this timeline it would cause people to clean up their behavior online and only post things they are comfortable having linked back to their real identity. It would have a chilling effect.
godelski
9 months ago
There are quite a few sci-fi books, movies, and TV shows that explore this concept. Probably the most well known is 1984.
But I'm pretty sure this would make life very bland. It would slow down innovation as we become less creative, less willing to deviate from the norm, and the Overton window of what's socially acceptable narrows. After all, we do love having enemies in some way or another, even if it means making them up.
rustcleaner
9 months ago
>Actually thinking about it further you could also easily group people political affiliations, and all kinds of other thoughts, dark, dark stuff!
Already happening at Google and Meta for a decade or more.
Der_Einzige
9 months ago
Stuff like this is why I tell everyone who does good work in the AI space to remember that they are watched
godelski
9 months ago
Working in the ML space myself I try to tell my peers to think deep about their work and the consequences. Not just for ethics, but because it helps you do better.
So far I get a lot of pushback and statements like "it's just linear algebra" and "quality doesn't matter, it's quantity. Just scale. That's the bitter lesson." The former tells me they know very little linear algebra, and the latter tells me they know very little about probability distributions and the concept of coverage... I routinely see papers get awards and tens of thousands of citations when they're making errors I would expect undergrads in my classes to do better at... I think a big reason I'm frustrated is that supposedly we share the same goals -- building AGI -- but I'm told I'm stopping them from doing so while they reject my papers and theirs get accepted... (Fwiw, I'm happy to accept scale papers. There should be a wide range of ideas explored. But I'm upset that we're on a railroad and any deviation from it is considered heresy.)
yreg
9 months ago
I treat my accounts as non-anonymous unless I use a single-use throwaway.
I suppose even a throwaway could be linked to my identity if a comment was long enough, but probably only with some limited certainty.
godelski
9 months ago
I treat mine as semi-anonymous. I mean, I'm not trying to hide from the government or anything, but yeah, I'll say things here I might not want to say under my real name. Don't we all? We all wear different masks in different groups. We are all different people in different groups too.
We're human, we all have stuff to "hide". I might want to vent about my boss or work and if they heard it that would certainly take it different than it was intended. We even complain about our friends, especially our close friends. Because no relationship is without conflict. But being human we often have to actualize our feelings with words, because we're social creatures. I'll at times complain about my partner, but that doesn't mean they also aren't my favorite person in the world.
To be human requires some anonymity. And no one should feel like they're going to be scrutinized for every little thing they say.
shevekofurras
9 months ago
You can't nuke your account. You can close it but your comments remain on the site. They'll delete your account and assign your comments and posts to a random username.
Yes this violates any EU citizen's right to be forgotten under GDPR. Welcome to silicon valley.
whamlastxmas
9 months ago
Does every comment previously attached to your account get a unique username, or do they still all share the same one?
Sort of moot point considering the multiple HN archives that would still have the original username attached
shevekofurras
9 months ago
The username is changed but the comments remain attached to a single different name.
godelski
9 months ago
> You can't nuke your account. You can close it but your comments remain on the site.
Which is why I'd write an open letter and not do the thing. If I could nuke my account I wouldn't need to ask Dang, and I would have already done it.
Der_Einzige
9 months ago
How would one report this to the EU and force HN to follow the GDPR?
kyboren
9 months ago
AFAIK Y Combinator does no business in the EU and HN is hosted in a US data center.
Under which legal theory does EU law apply here?
johnnyanmac
9 months ago
No business? It's purely an American company?
I could be hallucinating, but I swear the job boards also had people hiring from the EU.
vasco
9 months ago
The best we can hope for is that one personally avoids this for the first 5 years or so, and then it gets so widespread and easy that everyone will start doubting any videos they watch.
The same way it took social media like reddit a few years of "finding the culprit" / "name and shame" till mods figured out that the online mob often gets it wrong, so now that is usually not allowed.
But many people will suffer this until laws get passed or it enters into common consciousness that a video is more likely to be fake than it is to be real. Might be more than 5 years though. And unfortunately laws usually only get passed after there's proven damage to some people from it.
pjc50
9 months ago
> everyone will start doubting any videos they watch.
This kills the medium.
Just as ubiquitous scam calls have moved people away from phones, this moves people away from using media which cannot be trusted. Done enough this destroys reporting and therefore democracy. I wonder when the first nonexistent candidate will be elected.
GreenWatermelon
9 months ago
This sounds like saying text, as a medium, is already destroyed. But as we can see, despite thousands of years of fraud potential, we still use text as a medium. We're back to needing witnesses and corroborating evidence.
BurningFrog
9 months ago
We've had Photoshop for decades, and I still see pictures everywhere.
latexr
9 months ago
Again this tired argument. Photoshop requires skill and time. AI generation takes a few seconds of typing some words. The scale is not comparable.
wpm
9 months ago
And let’s not pretend image generation AI isn’t already in a state where it can pump out convincing slop. Facebook is full of it.
Eisenstein
9 months ago
It is inevitable at this point that video as 'proof' will be killed. But the claim that we cannot do reporting and that this destroys democracy is a little too much 'sky is falling'. Democracy existed before video.
barryrandall
9 months ago
The market for content that reassures people their preexisting views are right and valid is huge, evergreen, and held to the lowest possible standard.
vasco
9 months ago
I'm not sure I can take seriously arguments about "killing democracy"; that undermines whatever point you're making, in the same way as people who worry about crime and shout "think of the children" or immediately go to "stopping terrorism". Just make your point without hyperbole.
pnut
9 months ago
I guess then, you should use AI to generate videos of all of the lynch mob leadership committing blasphemy and let them sort it out internally?
ryzvonusef
9 months ago
You joke but, given the religious/sectarian nature of the issue, all it does is empower one sect to act against the leaders of the other sect.
Check the twitter link, you won't have to scroll much to find a mullah being blasted for blasphemy. No one is safe.
javcasas
9 months ago
Well, then target dead people. Find someone who died 6 months ago, have AI make them a blasphemer, and have the sect go looking for dead people. After the third or fourth time, they'll stop. Not because they want to stop, but because they suspect this is just another prank by Mullah Achmed the Fool.
ryzvonusef
9 months ago
Just like how you are the traffic, you are also the mob
Mobs are not formal organisations with leadership and membership dues.
Mobs are general everyday people who are sick and tired of this miserable life and need an excuse to rabble and rouse.
So in this case, the fake dead person's blasphemy will only be channeled towards their family members; mobs just need an excuse to go crazy.
AI is not what causes mobs, it just helps lower the incitement barrier by providing quality bait.
javcasas
9 months ago
> fake dead person's blasphemy will only be channeled towards their family members
More alternatives: fake whole persons. The mobs will be confused when they are directed to hunt down someone with a family name in a neighborhood where no one has that family name. Furthermore, send them to the car repair shop between the coffee shop and the market on a street with no car shops, markets, or coffee shops.
javcasas
9 months ago
In every group there are leaders (even informal ones, like that woman who nonstop shares all the bullshit with her 347 facebook friends).
Make them into fools.
- Stone the blasphemer!
- Who? The one that died last year? The one that died in the war in '94? The one that left 4 years ago? Don't you have anything better to do than fooling yourself and spreading more rumors?
It doesn't make it any less dangerous while the masses are being vaccinated against the bullshitters, though.
movedx
9 months ago
One way that we technical folk can help prevent this is by purchasing a domain that we can call our own and then hosting a website that's very clear: "If my image or voice is used in a piece of digital media that is not linked here from this domain, it was not produced by me."
That, and cryptographic materials being used to sign stuff too.
I think that's possibly the best we can hope for from a technical perspective as well as waiting for the legal system to catch up.
ryzvonusef
9 months ago
But, but, it sounds so realistic! Listen kiddo, I dunno what 'cryptographic signatures' are, all I know is it sounds exactly like movedx saying they like pineapple on pizza, and I know what I heard, sounds just like them, heard them dozens of times on TV, must have been an undercover journalist who exposed them, I say a person who likes pineapple on pizza is not welcome in my house whatever you say, now be gone!
yapyap
9 months ago
used the wrong example there, pineapple on pizza is a nice, delicious, refreshing topping
ryzvonusef
9 months ago
Dude, I am not going to give real examples, I live in a country with lynch mobs, remember?
But the point was, it doesn't have to be lynch mobs; for developed countries, even 'lesser' consequences are still terrifying.
Imagine everyone in your office giving you the stink eye after lunch and you being fired by HR 'at-will', and not knowing what exactly caused it, after all the day started so normally...
karlgkk
9 months ago
Did the point of the analogy fly over your head so completely?
latexr
9 months ago
Fine, then use dead rats on pizza. Don't focus on the letter of the example when you clearly understand the spirit. Conversation on HN aims to be better than that. Don’t reply with shallow dismissals, steel man the other person’s argument.
Your account is a week old and half your comments are dead. I recommend referring to the guidelines.
badgersnake
9 months ago
You are wrong.
Popeyes
9 months ago
But that doesn't account for the situation; of course you aren't going to post the illegal stuff you say. And then that gives you a blank cheque to say what you like in private and claim "Well, it isn't on my site, so it must be fake, right?"
johnnyanmac
9 months ago
Honestly, we're in a post-truth era at the moment. There's so much misinformation out there that a 5-second Google query can disprove, but it doesn't solve any arguments. That kind of cryptographic verification will only help you in court. There will probably be irrevocable PR damage even if you win that court case, though.
sureglymop
9 months ago
My specific fear is that if a picture of you next to your name is available online, it becomes part of the training set of a future model. Perhaps paranoically, I do not have any picture of myself available online.
I could then trivially generate pictures or even videos of you e.g. by knowing your name. Of course that's just an example but I do think that's where we are headed and so the concept of "trust" will change a lot.
criddell
9 months ago
Do you have a state driver’s license? If so, then chances are data brokers have your photo from that.
https://www.dallasnews.com/news/watchdog/2021/03/19/its-mind...
sureglymop
9 months ago
I am not from the US so no. And when I did visit the US, I was young enough not to have to give my fingerprints at the airports.
criddell
9 months ago
If it was a while back, your passport probably wasn’t the kind with a chip. If you visit today, your passport photo is electronically readable. I have no idea if the feds are as sloppy with this data as the states are though.
cudgy
9 months ago
And your fingerprints …
marginalia_nu
9 months ago
Seems like the end game for that technological development is kind of self-defeating.
Once it's 2 clicks away to generate a believable video of someone at the KKK kitten barbecue getting along with Ted Bundy and Jeff Epstein, surely the evidentiary value of such footage would dwindle, and the brief period in history when video evidence was both accessible and somewhat believable will come to an end.
disqard
9 months ago
You're right -- there's nothing axiomatic about the trustworthiness of pixels, and there was a brief window of time in human history when photos/videos had the qualities you mentioned, and (sadly) that era will end soon.
Being savvy now means treating photos/videos as "untrustworthy", which (ironically) is one of the things the Western world made fun of the Taliban for -- their fundamentalist view called such imagery "haram":
https://www.deseret.com/1997/10/6/19338048/taliban-army-bans...
I worry about written matter too, and here as well, vast masses of LLM-spewed text will surely pollute the landscape irreversibly.
cess11
9 months ago
Lynchings aren't commonly waged based on a solid process of evaluating evidence.
marginalia_nu
9 months ago
In that regard, the presence of this type of evidence isn't worth much either; a malicious rumor or an accusation is plenty sufficient to stoke the flames of an angry and terrified mob, and nobody's going to pause the proceedings and wait for a fair and cool-headed judgement of the evidence. This is, after all, a phenomenon far older than the smartphone.
thfuran
9 months ago
It turns out that humans aren't perfectly rational actors. Actors get hatemail and death threats because of roles they've played, "evidence" that isn't just potentially suspect but actually known to be entirely fictional.
Jeff_Brown
9 months ago
Given that this tech is unstoppable, the best defense might be a good offense: Flood the internet with clips of prominent religious and political leaders, especially those largely responsible for mob violence historically, saying preposterously blasphemous things they would obviously never say.
blueflow
9 months ago
> And we have no tools to control this.
Do you know "The boy who cried wolf"? Fabricate some allegations yourself and this will train people to disbelieve them.
ryzvonusef
9 months ago
Doesn't work.
You are assuming that people who are part of lynch mobs have the critical thinking skills to differentiate between real and fake, and to use logic.
Reminds me of the post I read on Twitter, of some Thai/Chinese New Yorker whose mother told him not to speak Mandarin in public when COVID-related anti-Asian hate was rampant...
And he had to explain to her that she can't expect the sort of person who hits a random Asian to differentiate between Thai and Mandarin.
latexr
9 months ago
That sounds like a dangerous proposition. Either they fabricate allegations about a "nobody" and get them in trouble, or they fabricate allegations about those in power, get investigated, and put themselves in danger. Neither strategy is good.
blueflow
9 months ago
It worked pretty well with #MeToo. Not saying there was no collateral damage, but the end result is that people now default to not believing such allegations.
latexr
9 months ago
I don’t understand what you’re arguing, could you be more specific? I know about #MeToo, but what exactly are you saying worked, and what are you claiming people default to not believing?
GreenWatermelon
9 months ago
It doesn't have to be dangerous allegations. Fabricate a video of your country's dictator carrying a giant boulder, with a flattering subtitle. This would send the message that videos can be deepfaked to such a degree.
smusamashah
9 months ago
I can absolutely relate to your fear, but I think this will eventually help dismiss those mobs. It might even desensitize people boiling over 'blasphemy'. Yes, for the first few instances it will hurt. Then, eventually, it will become common enough to be known by common folk, enough that those people themselves will be sceptical enough not to act.
I recall Photoshop blackmailing stories where women were usually the target. Now literally "everyone" knows pictures can be manipulated/photoshopped. It will take a while, yes, but eventually common folk will learn that these audios/videos can't be trusted.
valval
9 months ago
You’d simply make such things highly illegal. No matter how I spin it in my head, there’s nothing particularly scary about this, like there isn’t about identity theft or any other crime, in reality.
Even if blasphemy is illegal in your country, people would probably agree that falsely accusing someone of blasphemy is also wrong.
zwirbl
9 months ago
Lynching someone is highly illegal, whatever the cause. And yet...
mrkramer
9 months ago
The only logical legal solution is that any content of you shared by you is the legitimate version, and all other content of you shared by somebody else is presumed non-authentic and possibly fake.
F-Lexx
9 months ago
What if a third party gains access to your social media account(s) and starts posting fake content from there?
mrkramer
9 months ago
My view on deepfakes:
I was thinking about deepfakes and how to protect against them, and my conclusion is: there is no way you can protect against them from a practical point of view, but from a legal point of view, governments can make laws where everything shared by somebody else that involves you is presumed fake unless there is substantial circumstantial evidence that says otherwise, e.g. witnesses, legitimate metadata, etc.
Even before LLMs and GenAI, Photoshop became a synonym for messing around and faking photos, so there is nothing new here, but now there is more powerful "faking" software available to the masses.
Before computers and digital mass-media sharing, some compromising photo or tape could've been assumed authentic, but now, in the era of computers and software, there is no way you can tell for sure that something is authentic.
>What if a third party gains access to your social media account(s) and starts posting fake content from there?
You can cause chaos and bad press in the short term, but when the original owner of the account restores ownership, everything falls apart. Like the commenter below said, it happens all the time and it doesn't have any real impact on anything whatsoever.
cynicalsecurity
9 months ago
It happens all the time. They usually post spam and scam.
cloudguruab
9 months ago
It’s not just a problem that’ll stay in one place either. This tech is getting easier, and the consequences could be deadly. Scary times, for sure.
charlieyu1
9 months ago
From Hong Kong. We already had fake audio messages that sounded like a protest leader during the 2014 protests… It was always there, even a long time ago.
gwervc
9 months ago
This has nothing to do with AI but with the intolerance of a certain religion. That religion is killing a lot of people in my country and many others too, but both the governments (national and supranational) and corporations censor any criticism of it. Even here on HN I have had posts and accounts removed by the moderation for the slightest hint of criticism against it, and I fully expect a downvoting mob for writing this comment. Sadly, it will continue for a long time given how taboo the subject is.
sensanaty
9 months ago
If you were in the US and someone were to make a deepfake of you saying a racial slur, do you think you'd fare better than if you were a blasphemer in a Sharia country?
The religion isn't the (whole) issue here, this situation can apply in the secular West just as easily. The punishment won't be death, but it can still ruin people's lives. A fake pedophilia accusation comes to mind, where even if proven innocent you'll still be royally fucked for the rest of your life unless you spend considerable expense and effort.
thfuran
9 months ago
>If you were in the US and someone were to make a deepfake of you saying a racial slur, do you think you'd fare better than if you were a blasphemer in a Sharia country?
Of course.
ryzvonusef
9 months ago
You are focusing too much on 'that' religion and not realising that parallel analogies exist for other countries, religions, and cultures too.
Sure, not lynch mobs, but AI-generated fake media can certainly ruin people's lives, and unlike Photoshop etc., the barriers of skill and time required are very low, and the quality is very high.
I share my country's experience because I wanted to share my personal perspective and fears, but please don't underestimate how AI can affect you. Just because you won't be put to death doesn't mean they can't turn you into a social pariah with a few clicks.
bufferoverflow
9 months ago
It sounds like a problem with your crazy population, not with AI.
veunes
9 months ago
The analogy of handing a toddler a knife is spot on. AI is an incredibly powerful tool, but without proper safeguards, regulation, or education, it can cause irreparable harm.
loceng
9 months ago
We have ourselves. We have to create a culture of learning to quell reactive emotions, so we're less ideological and more critical thinkers.
fennecbutt
9 months ago
The people are the problem not the tool.
disqard
9 months ago
I'm reminded of Chris Rock's "Guns, don't kill people, bullets do!"
In a more serious vein, this is definitely about unleashing an extremely powerful technology, at scale, for profit, and with insufficient safeguards (imagine if you could homebrew nuclear weapons -- that's inconceivable!)
There will be collateral damage. How much, and at what point will it trigger some legislation? Time will tell.
benterix
9 months ago
I'm very sorry to say this but if you live in a country that is killing others for what they say, AI is probably not your biggest problem. And I don't believe an easy solution exists.
ryzvonusef
9 months ago
AI doesn't create problems, but AI certainly lowers the barriers and improves the 'quality' of the bait.
To explain in a more developed-country context: the fakes that previously required skill in Photoshop, Audacity, etc. are now much simpler to make with AI, allowing far more dipshits to create and share fake images/audio/video of someone they are pissed at, during their lunch break, on their phone.
That's way too quick, letting people shoot far too many arrows in a huff, before their reasonable brain has time to make them realise the consequences of their actions.
HeatrayEnjoyer
9 months ago
"You can't refuse this brand new technology but you must change your society's culture that's been around for centuries so you are compatible with it." is a repulsively Silicon Valley answer.
progbits
9 months ago
Not every culture deserves to live on, certainly not just because "it has been around", and absolutely not one that lynches people.
Tolerance of intolerance is a bullshit strategy that won't work.
HeatrayEnjoyer
9 months ago
No, but it's not Silicon Valley CEOs' place to be making that decision unilaterally, nor is their culture some shining paragon of virtue either. They are not entitled to treat others paternally. It's not anyone's place to be executing it in such a hamfisted and harmful manner either.
lurking_swe
9 months ago
culture has overlap with religion, but culture != religion.
Most (not all) religions are like a cancer; their main contribution to society is dividing people into US vs. THEM.
If your religion encourages you to kill someone, MAYBE that’s a problem (not AI)?
HeatrayEnjoyer
9 months ago
You're missing the point, perhaps willfully so.
lurking_swe
9 months ago
my point is that people who want to harm others will do so regardless. You can’t solve a social problem with technology, or even by taking away technology. The underlying problems remain.
I do sympathize that it is making it EASIER for them to do more harm. That’s a bit concerning to me.
pmarreck
9 months ago
> My country already has blasphemy lynch mobs that form on the slightest perceived insult, real or imagined. They will mob you, lynch you, burn your corpse, then distribute sweets while your family hides and issues video messages denouncing you and forgiving the mob.
Blasphemy laws—and the violence that sometimes accompanies them—are a cultural issue, not a technological one. When the risk of mob violence is in play, it's hard to have rational discussions about any kind of perceived offense, especially when it can be manipulated, even technologically, as you pointed out. The hypothetical of voice theft amplifies this: If a stolen voice were used to blaspheme, who would truly be responsible?
This is why we must resist the urge to give into culturally sanctioned violence or fear, regardless of religious justification. The truth doesn’t need to be violently defended; it stands by itself. If a system cannot tolerate dissent without devolving into chaos, then the problem lies within the system, not the dissent.
“An appeaser is one who feeds the crocodile, hoping it will eat him last.” - Winston Churchill
ryzvonusef
9 months ago
You are focusing too much on my specific problem instead of using it as a guide to understand your own situation.
Sure we have mobs and you don't, but we are talking about AI here.
In fact, let's imagine a totally different culture to illustrate my point.
Imagine you are an Israeli, and people in your office have a habit of sending Whatsapp voice notes to confirm various things instead of calls, because that way you can have a record but don't have to type every damn thing out. Totally innocent and routine behaviour, you are just doing what many other people do.
A colleague pissed at you for whatever damn stupid reason creates a fake of your voice saying you support Hamas by using said voice notes, using an online tool that doesn't cost much or require much... Are you saying that just because you won't be lynched, there isn't a problem?
You are confused why everyone is pissed at you and why suddenly your boss fired you, and by the time you find out the truth... the lie has spread to enough people in your social circle that there is no clearing your name.
Think of how little data in voice samples is required to generate an audio clip that sounds very realistic, and how much better it will get in a year. You don't need a fancy PC or tech knowledge for that; websites that do it for cheap already exist.
That you weren't lynched is no solace.
People are the problem, AI is just providing quality tools with minimal skill and cost required, thus broadening the user base.
pmarreck
9 months ago
I rephrased my comment probably right before you posted this because I felt it was too confrontational.
The problem of people manufacturing evidence by using synthesized voices should eventually result in audio voice recordings losing importance from an evidentiary standpoint. In fact, the quicker the use of it occurs, the quicker it will get devalued. And that is good. Someday soon, someone who sounds like the CEO won't be able to drain the company bank account based solely on trusting his voice. And hopefully, along the same lines, a voice recording of Person X blaspheming will lose its evidential power since it will be so easy to manufacture.
firtoz
9 months ago
> You can say a lot of things about 'oh backward countries' but this will not stay there, this will spread
I'm sorry, but this is a cop-out. The "lynching for apparent cultural deviation" is something that needs to be moved on from. Developed countries do the same too, to some extent, with "cancel culture" and such.
There are ways to make progress on this, and, well, to feed someone's entrepreneurial spirit: it's one of those really hard problems that a lot of people, let's say "a growing niche market", need solved.
ryzvonusef
9 months ago
Indeed, if one were to post an AI video of someone saying some racial slur or otherwise verboten language, sure, it won't get them killed, but given how unemployable and ostracized they would become, that would be a death by a thousand cuts.
But blasphemy, by whatever means, is one of the tools by which society sets certain boundaries, and it's really hard to move away from a model that has worked so 'well' for us since the first civilizations.
cynicalsecurity
9 months ago
Is your country the US? Somehow I think it is.
shagymoe
9 months ago
Oh yes, the United States, founded on religious freedom, is the place where you get stoned in the street for blasphemy.
cynicalsecurity
9 months ago
Try to say anything related to sex or sexual freedom in the US and they will immediately lynch you. Welcome to USAbad, a Puritanical republic.
shagymoe
9 months ago
That's hilarious.
ryzvonusef
9 months ago
https://news.ycombinator.com/user?id=ryzvonusef
It's in my profile :)