a2128
19 hours ago
AI companies love to hype up how AI will provide a great benefit to the economy and transform intellectual labor, but I hardly see any discussion about how much damage it will cause to the economy when you can no longer trust that you're on a video call with an actual person. Maybe the person you're interviewing is actually an AI impersonating someone, or maybe they never existed in the first place. Information found online will also no longer be trustable; footage of some incident somewhere may have been entirely fabricated by AI, and we already experience misleading articles today.
Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video, and the availability of actual information will become more and more limited as the sea of online information gets polluted with crap. It may never be possible to calculate the full extent of the damage in monetary value.
roflmaostc
18 hours ago
Partially agree. However, this problem has existed with scam e-mails since the 90s.
For me the solution is in signed e-mails and signed documents. If the person invites me to an online meeting with a signed e-mail, I can trust that it's really them.
Same for footage of wars, etc. The journalist taking it basically signs the videos and vouches for their authenticity. If it is AI generated, then we would lose trust in that person and wouldn't use their material anymore.
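A toy sketch of the signing idea described here. All names, keys, and bytes are invented, and a real system would use a public-key scheme (e.g. Ed25519) so that the verification key could be published; an HMAC stands in only to keep the example stdlib-only:

```python
import hashlib
import hmac

# Hypothetical signing key held by the journalist.
KEY = b"journalist-secret-key"

def sign(clip: bytes) -> str:
    """Produce a tag over the raw bytes of a clip."""
    return hmac.new(KEY, clip, hashlib.sha256).hexdigest()

def verify(clip: bytes, signature: str) -> bool:
    """Check that the clip still matches the tag it was published with."""
    return hmac.compare_digest(sign(clip), signature)

original = b"\x00\x01 raw video bytes..."
sig = sign(original)

assert verify(original, sig)                 # untouched clip checks out
assert not verify(original + b"edit", sig)   # any modification breaks it
```

The point is only tamper-evidence: the signature ties the published bytes to the person who signed them, so trust attaches to the signer rather than to the footage itself.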
TheOtherHobbes
18 hours ago
How do you prove the signature isn't fake?
Ultimately ID requires either a government ID service, a third party corporate ID service, or some kind of open hybrid - which doesn't exist.
All of those have their issues.
olmo23
18 hours ago
I think he was referring to a cryptographic signature, possibly using the "web of trust" to get the key. I'm not convinced we need a central authority to solve this.
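A minimal model of the "web of trust" idea, with invented names and edges: each person endorses keys of people they've verified, and you accept a key if a chain of endorsements connects you to its owner within a few hops.

```python
from collections import deque

# Hypothetical endorsement graph: who has signed whose key.
endorsements = {
    "alice": {"bob", "carol"},
    "bob": {"dave"},
    "carol": set(),
    "dave": set(),
}

def trusted(me: str, target: str, max_hops: int = 3) -> bool:
    """Breadth-first search for a chain of endorsements from me to target."""
    queue = deque([(me, 0)])
    seen = {me}
    while queue:
        person, hops = queue.popleft()
        if person == target:
            return True
        if hops < max_hops:
            for nxt in endorsements.get(person, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, hops + 1))
    return False

assert trusted("alice", "dave")         # alice -> bob -> dave
assert not trusted("alice", "mallory")  # no chain of endorsements exists
```

This is of course only the key-distribution half; it says nothing about whether the endorsements themselves were made carefully, which is the usual criticism of web-of-trust schemes.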
tenacious_tuna
18 hours ago
people at my org were gleeful when they learned they could hook LLMs into Slack. Even if we had some reliable, well-used signature system, I think people would just let AI use it to send emails on their behalf.
bigfishrunning
18 hours ago
If the AI age has taught me anything, it's that most people do not care what their output is. They'll put their name on anything, taste or quality does not matter in the least. It's incredibly depressing.
daheza
11 hours ago
Enshittification never stopped; we just stopped talking about it because it became normal. Quality does not matter anymore. I agree it's depressing, seeing AI slop being pushed and no one even putting in the time or effort to say this is bad and you should feel bad.
Ajedi32
16 hours ago
That's a different problem though. It's doing it on their behalf, not on behalf of a scammer who's impersonating them.
pixl97
15 hours ago
Until their computer is taken over....
MarsIronPI
16 hours ago
Well we should treat that as their own output. If it's crap, treat it the same way you would if they produced the crap themselves.
SirMaster
15 hours ago
Same way security cameras prove that they are authentic camera recordings that have not been modified. If modified, the video will no longer match the signature that was generated with it.
pjaoko
2 hours ago
> If it is AI generated, then we would lose trust in that person
You are assuming that only you can generate fake AI videos of yourself.
nsomaru
an hour ago
OP was talking about journalists attesting to the authenticity of video they produce.
strogonoff
16 hours ago
As with any problem, scale changes its nature.
With cash, you can only steal so much (or have transactions of up to certain size) until you run into geographical and physical constraints. With cryptocurrency, it’s possible to lose any amount.
With humans writing scam emails, you can only have so many of them until one blows the whistle. With LLMs, a single person can distribute an arbitrary amount.
At some point, quantity becomes a new quality, and drawing a parallel becomes disingenuous because the new quality has no precedent in human history.
pixl97
15 hours ago
> (or have transactions of up to certain size)
And by that you mean tens of millions to billions right? Bank transfer scamming/fraud is a thing.
strogonoff
15 hours ago
The highlighted parallel is usually drawn between cryptocurrency and cash, not between cryptocurrency and banks. With both cash and cryptocurrency, as is the idea behind the analogy, 1) there’s no intermediary and 2) once it’s gone, it’s gone. Obviously, the banking system is not immune to fraud (not sure why you think I made that claim, unless your definition of “cash” includes electronic transfers), but banks and/or payment systems can (and do) resolve these cases and have certain KYC requirements.
mk89
17 hours ago
There are people hosting agents online to talk to other agents etc. on their behalf. How difficult is it to just instruct such an agent to do the tasks you mentioned? You're assuming it's done by "bad actors" while it's most likely just going to be done by "everyone" that knows how to do it.
Forgeties79
18 hours ago
Spam emails in the 90s don't come remotely close to the operations people can set up by themselves with AI now. It doesn't even compare.
hansonkd
16 hours ago
I mean, emails were and still are a huge security risk. Sometimes I'm more scared of employees opening and engaging with emails than I am of anything else.
thisisit
16 hours ago
Laws will be passed to make it "safer". Just like it is happening with the ID verification systems. Every image or video generator will require a watermark. Something visible that cannot be removed easily, or something hidden that can be detected and blocked. Access to models which do not comply will be made harder through ID verification checks or something.
There will be some regulatory capture in between.
The world will kick into gear only when something really bad happens. Maybe an influential person - rich or politician - being fooled into doing something catastrophic by a deepfake video/image. Until then, normal people being affected isn't going to move the needle.
Miraste
16 hours ago
Verification needs to work the other way around, some kind of verifiable chain of trust for photos and videos from real cameras. Watermarking all generated media is impossible.
SirMaster
15 hours ago
I don't really understand why this is so hard or why it wasn't just done from the get go.
Just have Apple and Google digitally sign videos and photos recorded from phones and then have Google and Meta, etc display that they are authentic when shown on their platforms.
alpha_squared
15 hours ago
You're talking about the metadata of the files, which can always be edited and someone will inevitably try to make software to do exactly that. Also, Adobe's proposal for handling generated content is exactly this and they're not able to get buy-in from other companies.
SirMaster
15 hours ago
Edit the metadata in what way? It's a cryptographic hash.
If the bits that make up the video as was recorded by the camera don't match the hash anymore, then you know it was modified. That doesn't mean it's fake, it just means use skepticism when viewing. On the other hand the ones that have not been modified and still match can be trusted.
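The mechanism being described can be sketched in a few lines: hash the bytes at capture time (the hash would then, hypothetically, be signed by the device and displayed by the platform), and later compare. All content here is made up.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """SHA-256 digest of the exact bytes as recorded."""
    return hashlib.sha256(data).hexdigest()

recorded = b"frames as written by the camera app"
published_hash = content_hash(recorded)  # imagine this is signed by the device key

# Later, anyone can check whether the bytes still match what was captured:
assert content_hash(recorded) == published_hash

# A single changed byte is enough to break the match:
assert content_hash(recorded + b"\x00") != published_hash
```

Note this only detects modification after capture; as the replies point out, it says nothing about edited-then-republished media, which is where the scheme gets hard.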
SAI_Peregrinus
14 hours ago
Essentially 0% of professional photography or videography uses "straight out of the camera" (SOOC) JPEGs or video. It's always raw photos or "log" video, then edited to look like what the photographer actually saw. The signal would be so noisy as to be useless.
SirMaster
9 hours ago
But we are talking about consumer devices here.
Are you saying Apple and Google can't put a secure hash into the output from their camera apps that apply after their internal processing is done?
Miraste
15 hours ago
It becomes a hard problem quickly when you introduce editing, and most photos and videos on social media are edited. I'm not sure how it would work. It seems more feasible than universal watermarks, though.
rcxdude
11 hours ago
It's pretty much impossible to do this in a useful way, _and_ it would also cement even more control over the media landscape to those companies.
petesergeant
15 hours ago
You can bootstrap some of it. I wrote the following for solving this ~9 years ago. Kinda wish I'd done the PhD now: https://github.com/pjlsergeant/multimedia-trust-and-certific...
red-iron-pine
15 hours ago
> Laws will be passed to make it "safer". Just like it is happening with the id verification systems. Every image or video gen will require a watermark. Something visible which cannot be removed easily or hidden which can be detected and blocked. Access to models which do not comply will be made harder through id verification checks or something.
I've thought about this off and on, and how to implement it. Not easily, was my general takeaway.
Or rather, it's easy to implement, but you're in an adversarial relationship with bad actors, and easy implementations may be easily broken.
E.g. your certs gotta come from somewhere and stay protected, and how do you update and control them? Key management for every single camera on every phone, etc.
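The chain being described - a vendor root key certifies per-device keys, and each device key signs its captures - can be modeled as two links that must both hold. This is a toy with invented keys, using HMAC as a stdlib-only stand-in for the certificate signatures a real PKI would use:

```python
import hashlib
import hmac

def mac(key: bytes, msg: bytes) -> str:
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

# Chain: vendor root key -> per-device key -> per-capture signature.
vendor_root = b"vendor-root-secret"         # would live in the vendor's HSM
device_key = b"device-1234-secret"          # burned into one phone at manufacture
device_cert = mac(vendor_root, device_key)  # vendor vouches for this device

photo = b"jpeg bytes straight off the sensor"
photo_sig = mac(device_key, photo)          # device vouches for this capture

def verify_chain(root: bytes, dev_key: bytes, dev_cert: str,
                 data: bytes, sig: str) -> bool:
    """Both links must hold: vendor->device and device->capture."""
    return (hmac.compare_digest(mac(root, dev_key), dev_cert)
            and hmac.compare_digest(mac(dev_key, data), sig))

assert verify_chain(vendor_root, device_key, device_cert, photo, photo_sig)
assert not verify_chain(vendor_root, device_key, device_cert,
                        photo + b"!", photo_sig)
```

The sketch also makes the commenter's worry concrete: every link depends on a secret staying secret, so extracting one device key (or the root) silently breaks the whole chain, and revocation/rotation has to reach every camera in the field.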
friendzis
16 hours ago
> Information found online will also no longer be trustable
Most information you can access publicly, including Wikipedia, is the result of an astroturfing fight. Most information online has not been trustable for a double-digit number of years now.
> we already experience misleading articles today
Again, that has been happening for decades.
> footage of some incident somewhere may have been entirely fabricated by AI
Not like we did not already have doctored footage plaguing the public.
> Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video
Necessity to inspect the supply chain for snake oil has been a thing since at least EA (the Nasir one).
We may be dealing with the problem of spam, but the problems have already been there.
pstuart
16 hours ago
All these are true, but just as it happened before the internet, it's accelerating even further. There are clear costs that cannot just be hand waved away.
ottah
16 hours ago
I'm not sure we can say it's accelerating. The techniques that adversarial actors use have always been changing, and when they shift tactics it can take a while for an adequate defense to be adopted. We're still dealing with SQL injection in the OWASP top ten. What would indicate an acceleration, I think, is the most security-oriented organizations continuously failing to defend against new attacks. If we start hearing about JPMorgan and Google getting popped every month or two, we're in trouble.
ACS_Solver
15 hours ago
The acceleration is in the decrease of the cost to produce misinformation.
Misinformation in pure text form has always been cheapest, but is even cheaper now that text generation is basically a solved problem. Photos have been more expensive, it used to take time and skill with a photo editor to produce a believable image of an event that never happened. The cost is now very low, it's mostly about prompting skills. Fake videos were considerably harder, especially coupled with speech. Just a few years ago I could assume any video I saw was either real or a time-consuming, deliberate fake.
We've now entered a time where fake videos of famous people take actual effort to tell apart, and can be produced for a low cost - something accessible to an individual, not a big corporation. We can have an entirely fake video of Trump, or another world leader, giving a speech and it will look like the real thing, with the audiovisual "tells" of it being fake getting harder to notice every few months.
friendzis
14 hours ago
> The acceleration is in the decrease of the cost to produce misinformation.
So it's a spam issue. And normally, while annoying, spam is possible to fight; however, on these topics we have built structures that disable the very mechanisms allowing us to fight spam. That's worrying.
The fact that someone can instruct their computer to astroturf their flight tracking app on some forum for nerds is irrelevant - people have been instructing "marketing agencies" to astroturf their brand of caffeinated sugar water on tv, radio and press for decades and centuries. For a very long time the "traditional media" was aware that their ability to sell astroturfing capacity was hanging on their general trustworthiness. Then the internets rose to prominence, traditional media followed by selling more and more of their capacity to astroturfers. Now we have a worrying situation that the internets might be spammed by astroturfers a bit too much, but the backup is broken already. Now that's truly frightening.
Welcome to the post-truth world, where objective references outside of your own village cannot exist.
pstuart
9 hours ago
It's an algorithm issue. When people hold a media consumption device in front of their face all day and the algorithms are played, then it's literally a brainwashing device.
Dylan16807
3 hours ago
It is not an algorithm issue. It would still be a huge problem with zero algorithmic social media.
whatever1
2 hours ago
We need some sort of end to end verification. Aka from the sender camera to the receiver display / speakers.
Maybe Apple will be able to pull it off? Aka if you FaceTime me I know that you are a person
collinmcnulty
17 hours ago
"Is this a deepfake video call" is a major plot point in a pretty big movie currently in theaters, so I think this is getting into the broader zeitgeist.
kelvinjps10
3 hours ago
What do you do when people don't protect their signatures? There are already scams where people get tricked into forwarding messages from their own numbers or email accounts to other people.
Forgeties79
18 hours ago
> footage of some incident somewhere may have been entirely fabricated by AI,
Or the opposite, where people attempt to get out of trouble by calling real evidence into question by calling it “AI”
bigfishrunning
18 hours ago
Either way, the lack of trust is the damage.
Forgeties79
17 hours ago
Definitely
chistev
18 hours ago
We are still in the early stage of AI and already I struggle to tell what is real or fake on my Twitter feed. It will only get better in its deception with time.
You know those incriminating Epstein photos with his associates? A few years from now a common defense from people like that would be that the photos were AI generated, and it would be difficult to prove them wrong beyond reasonable doubt.
People in previous cases already attempted to dismiss incriminating pics of themselves as being the work of clever Photoshop artists.
whateverboat
19 hours ago
What's the solution apart from an identity providing service?
a2128
19 hours ago
I don't know of a solution. I don't think even identity verification will meaningfully solve this. People will get hacked, or provide their SEO-spamming agent with their own identity, or purposefully post fake videos under their own identity. As it becomes more normal to scan your ID to access random websites, it will also become easier to steal people's identities and the value of identity verification will go down.
intrasight
18 hours ago
People don't get hacked - devices get hacked. So all we need is a better chain of trust between two people. This is not a technology development problem as much as a technology implementation problem. And a political problem
bigfishrunning
18 hours ago
People get hacked -- a device could be flawless, but if a person is a victim of "Social Engineering" and hands the attacker a password, there's nothing the designer of the device could do about it.
soco
17 hours ago
2FA has tried to solve exactly this. Not many attacked people will hand over their password AND their phone. Yes I know, they might hand over one authentication code (and I know people who did exactly that)... We should also look into reducing the attack surface - if you get Instagram hacked, you shouldn't get your Facebook hacked as well. But the current big tech centralization leads us to that single point of failure, because they don't care about the user's concerns, only market grab. So... what now? Do we get the politics into this?
bigfishrunning
17 hours ago
One authentication code is often all that's needed to *change where the authentication codes are sent*
Not to mention that most 2FA still uses SMS, which has its own well-understood security flaws.
prox
17 hours ago
Best thing I think of is domain names. Domains are tied to addresses and billing, and sites are people or businesses, with physical locations one can visit.
Maybe a good startup idea would be “local verify”, where you check locally, for a client, whether the online destination is real.
nathanaldensr
18 hours ago
Agreed. The sphere of trust around each of us will shrink back to only those in our physical proximity. Outside of that, no one can be trusted.
jjulius
16 hours ago
Touching grass. Valuing in-person connections. Focusing on the community, meatspaces and actual people around you.
Getting off of the Internet and off of our devices. It's not just a solution to AI/LLMs modifying our reality but also a solution to [gestures wildly at the cultural, societal and global communication impacts of the past ~16 years].
This sentiment is unpopular, but it's true. Prioritize true connections and experiences.
Gigachad
19 hours ago
I’m seeing a huge increase in companies requiring in person interviews now. Seems there is a real possibility the internet as we know it will be destroyed.
rkomorn
19 hours ago
I think you might be right and I think I'll like some of the consequences and hate some of the others.
More in-person stuff feels like a win to me (and I say this as someone who probably counts as introverted).
Not being able to trust any online interactions anymore? Seems like a new height in what was already a negative.
Gigachad
6 hours ago
Agreed. I don't think there is any saving the internet as a social space long term. And I'm not entirely sad about that either. I think a return to in person interaction, public social spaces, and a retreat from social media would do the world a lot of good.
Though there is a nightmarish possibility that people just accept this and willingly interact purely with bots, giving up all real relationships for AI ones.
dominotw
18 hours ago
LinkedIn is completely destroyed now. There are tons of AI bots there, but real humans are now fronts for AI too. So you can't even trust content from people you know.
An identity service is not useful, because that person might be a real person but just a pipe to AI, like we see on LinkedIn.
adithyassekhar
19 hours ago
That's just shifting the problem not solving it.
47282847
7 hours ago
Honestly? Maybe that’s part of the solution, not the problem. I already see people including me going back to real world, local interactions and connections.
esafak
16 hours ago
It is already a problem. Try interviewing people from LinkedIn and you'll face an onslaught of imposters. https://www.darkreading.com/remote-workforce/north-korean-op...
nslsm
18 hours ago
If anything deepfakes will be good for the economy because if you can’t do business with people who are far away it becomes harder to outsource.
bitmasher9
18 hours ago
In general, barriers to trust/trade are bad for the economy.
thunky
18 hours ago
> damage it will cause to the economy when you can no longer trust that you're on a video call with an actual person
What damage are you talking about?
I'm not sure I understand why it matters that there is no real person there if you can't actually tell the difference. You're just demonstrating that you don't actually need a human for whatever it is you're doing.
bigfishrunning
18 hours ago
Your wife or mother calls you or video calls you and says to meet her somewhere, or to send money, or to pick up groceries or whatever. Does it not matter that it wasn't her? Could it be someone trying to manipulate you into going somewhere, to be robbed or whatever? At any rate, you'll need to verify that information came from the source you trust before you act on it, and that verification has a cost.
The damage is to the trust we have in our communication media. The conclusion here is that every person is trivial to impersonate; that's the damage.
thunky
17 hours ago
Not disagreeing, but the context of GP was business/economy/hiring.
Also it was already possible for someone to impersonate your mother via text or similar, and even easier to pull off.
bigfishrunning
17 hours ago
Ok fine, let's put it in the context of business. Your competitor impersonates your customer, gives you bad instructions. After following the bad instructions, you lose the contract with your customer, and your competitor (the attacker) is free to try and replace you.
If you got a suspicious text, the logical thing is to call up the person who sent it and try to verify it. AI impersonation makes that much harder.
thunky
16 hours ago
> If you got a suspicious text, the logical thing is to call up the person who sent it and try to verify it
The communication channel is what you trust. So you would call the person using that trusted channel.
It's just like when you get a scam email or popup from "Microsoft" saying your laptop is compromised and you need to call their number ASAP.
Habgdnv
16 hours ago
Or even better, open the on-prem AI portal and type something like "I just got a suspicious call from client X, but I am on a lunch break. Call him and use a fake video of me. Ask him if what he said is true..."
contagiousflow
17 hours ago
You don't think people getting scammed is part of the economy?
rdevilla
18 hours ago
Because what you are actually doing is exchanging symbols, tokens, if you will, that may be redeemed in a future meatspace rendezvous for a good or service (e.g. a job, a parcel). These tokens are handshakes, contracts, video calls, etc. to be exchanged for the actual things merely represented therein.
Instead what we have now with AI is people exchanging merely the tokens and being contented with the symbol in-and-of itself, as something valuable in its own right, with no need for an actual candidate or physical product underlying the symbol.
There is a clip by McLuhan I can't be assed to find right now where he says eventually people will stop deriving pleasure from the products themselves and instead derive the feelings of (projected) accomplishment and pleasure from viewing advertisements about the product. The product itself becomes obsolete, for all you actually need to evoke the desired response is the advertisement, or the symbol.
A hiring manager interviewing an AI and offering it a job is like buying the advertisement you just watched, and.... that's it. No more, the transaction is complete.
pixl97
15 hours ago
>McLuhan
Hmm, this guy may have been on to something
>Instead of tending towards a vast Alexandrian library the world has become a computer, an electronic brain, exactly as an infantile piece of science fiction. And as our senses have gone outside us, Big Brother goes inside. So, unless aware of this dynamic, we shall at once move into a phase of panic terrors, exactly befitting a small world of tribal drums, total interdependence, and superimposed co-existence. [...] Terror is the normal state of any oral society, for in it everything affects everything all the time. [...] In our long striving to recover for the Western world a unity of sensibility and of thought and feeling we have no more been prepared to accept the tribal consequences of such unity than we were ready for the fragmentation of the human psyche by print culture.
--The Gutenberg Galaxy, 1962
rdevilla
14 hours ago
Thank you. I will add this to the list.
skydhash
18 hours ago
> What damage are you talking about?
Not GP, but there's a lot of damage that can be done with impersonation.
chii
18 hours ago
The grandparent post has the belief that human interaction is intrinsically better. Not sure i agree, but i can understand the POV.
However, the increase in fake videos that are difficult to tell from real is indeed a potential issue. But the fact that misinformation today is already so prevalent is evidence that better video doesn't make it any worse than it already is imho.
collinmcnulty
17 hours ago
You're not sure if human to human interaction is intrinsically more valuable than a human talking to a facsimile? That feels like a very dangerous position to hold for one's ethical calculations and general sanity. I'm clinging tightly to the value of the bond with other people, even the passing connection, but certainly with my family members as this article is about.
chii
2 hours ago
I much prefer using the ATM, self-checkouts, and an e-commerce website over having to talk to somebody at a branch to get money, buy my groceries, or book a holiday.
pixl97
14 hours ago
Human to human may be more valuable, but that may not have much to do with the truth in their statements. For example if your relatives are hooked up to a constant misinformation feed it gets to become problematic to communicate and deal with them.
esseph
18 hours ago
Imagine how this plays out in courtrooms the world over for evidence.
We're in deep shit.