Quick reminder of scam economics:
* Your funnel starts digital and cheap (email, say).
* You need "warm leads" out of the funnel, and your closers are expensive (call center operators usually in SE Asia), so you prune to only great leads. You do this by making the email something only very credulous people would believe.
* You aim for a nearly 100% close rate once you get them on the phone, since closers are expensive: an hour spent closing is an hour of human time.
AI scammers with nice English accents change this economics in two ways. First, they make closing cheap, so the funnel can stay wide earlier. That means we'll see much more plausible, harder-to-dismiss content, since there's no longer any need to prune skeptical people so early. Second, the LLM is much smarter than, let's call it, your bottom-third call center operator, letting the scammer spend longer in safe direct contact with formerly inaccessible leads.
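The funnel argument above can be sketched as a toy cost model. All numbers here are illustrative assumptions invented for this sketch, not figures from the thread: the point is only that when a closing call costs cents instead of dollars, a scammer no longer needs a lure that filters out everyone but the most credulous.

```python
# Toy funnel model: why cheap AI closers widen the scam funnel.
# Every number below is a made-up assumption for illustration.

def cost_per_victim(leads, response_rate, close_rate, cost_per_call):
    """Expected cost to land one victim.

    response_rate: fraction of leads who respond to the lure.
    close_rate: fraction of responders the closer converts.
    cost_per_call: cost of one closing call (assume one call per responder).
    The email stage is treated as free.
    """
    responders = leads * response_rate
    victims = responders * close_rate
    return (responders * cost_per_call) / victims

# Human closer: prune hard (lure so absurd only the very credulous
# respond), then close nearly everyone who gets on the phone.
human = cost_per_victim(leads=100_000, response_rate=0.001,
                        close_rate=0.9, cost_per_call=5.00)

# AI closer: a plausible lure draws far more responders; a much lower
# close rate is fine because each call costs cents.
ai = cost_per_victim(leads=100_000, response_rate=0.05,
                     close_rate=0.2, cost_per_call=0.10)

print(f"human closer: ${human:.2f} per victim")
print(f"ai closer:    ${ai:.2f} per victim")
```

Under these assumptions the AI closer is an order of magnitude cheaper per victim, and it extracts ten times as many victims from the same lead pool, which is exactly the "funnel stays wide" effect described above.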
The economics here mean we're going to see a LOT of this over the next few years, and it's likely to change how we think about trust altogether, and how we think about open communication networks like the phone system.
It's not "super realistic" because no one would ever call you from Google; they don't provide phone customer support in the first place.
I think 99% of HN readers agree, but we're not 99% of Gmail account holders
Same feeling when scammers pretend to be the USPS helping locate a lost package
The first clue that this is a scam is that Google Support actually called someone.
It's just a social engineering trick, nothing more. Stop elevating posts like this
Who the hell answers an unknown number…
Where did you read that it was an unknown number? The article, and the original blog post, said the call was from Google Sydney.
If someone claiming to be from your bank or Google or Amazon or wherever calls and says they need some sort of secret like a one time code, just say no. Or, say that they have the wrong number and your name is Ben Chode.
I think this is the same scam that Garry Tan wrote about recently:
https://x.com/garrytan/status/1844526882592784634
It looks like the person from Google who was impersonated has escalated this within Google. But it’s still a hint of a new era in cybersecurity.
uhuh, so this "crazy hack" is just social engineering with a little AI sprinkled in.
This is slop: a slop article with a clickbait title.
And the "hack" is not really specific to gmail.
I do think such automated social engineering will become an issue though.
Same as it ever was. AI doesn't really add anything. Credulous people are easy to trick.