ChatGPT creates phisher's paradise by serving the wrong URLs for major companies

176 points, posted 17 hours ago
by josephcsible

20 Comments

sublinear

14 hours ago

> "It's actually quite similar to some of the supply chain attacks we've seen before [...] you're trying to trick somebody who's doing some vibe coding into using the wrong API."

I have renewed faith in the universe. It totally makes sense that vibe coding would be poisoned into useless oblivion so early in this game.

cryptoegorophy

9 hours ago

I confess to vibe coding, especially with API work. And it has gotten so bad that I actually have to send API PDFs and links to API documentation.

TZubiri

10 hours ago

I feel a bit icky, but whenever I see people work with such a disregard for quality, I'm actually rooting for their products to break and get hacked.

In 2015 it was copy and pasting code from stackoverflow, in 2020 it was npm install left-pad, in 2025 it's vibecoding.

I refuse to join them, and I patiently await the day of rapture.

sfoley

5 minutes ago

leftpad was 2016

andrei_says_

8 hours ago

It may already be here, but in large moneyed organizations no one wants to take responsibility, and speaking against management's orders to lean in on AI may be political suicide.

So a lot of people are quietly watching lots of ships slowly taking on water.

Kesseki

15 hours ago

This is, in turn, making the world of comment and forum spam much worse. Site operators could tag all user-submitted links as "nofollow," making their sites useless for SEO spammers. But spammers have learned that most LLM content scraper bots don't care about "nofollow," so they're back to spamming everywhere.
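To illustrate the mechanism being described: "nofollow" is a value in a link's `rel` attribute, and a well-behaved SEO crawler checks for it before crediting the link. A minimal sketch of that check (a hypothetical crawler component, not any specific scraper's implementation):

```python
# Sketch: extracting links from HTML and separating out rel="nofollow"
# ones, the way an SEO-respecting crawler might. An LLM content scraper
# that ignores "nofollow" would simply treat both lists the same.
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.follow = []     # links eligible for ranking credit
        self.nofollow = []   # links the site operator has disavowed

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        if not href:
            return
        # rel can hold multiple space-separated tokens, e.g. "nofollow ugc"
        rel = (attrs.get("rel") or "").split()
        if "nofollow" in rel:
            self.nofollow.append(href)
        else:
            self.follow.append(href)


html = """<p>
  <a href="https://example.com/docs">site-authored link</a>
  <a rel="nofollow ugc" href="https://spam.example/offer">user-submitted link</a>
</p>"""

parser = LinkExtractor()
parser.feed(html)
print(parser.follow)    # only the site-authored link
print(parser.nofollow)  # the user-submitted, disavowed link
```

The point of the comment above is that this distinction only works if the crawler bothers to look at `rel` at all; a scraper feeding an LLM training set has no ranking graph and no incentive to check.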

labrador

11 hours ago

It reminds me of non-radioactive steel, the kind you can only get from ships sunk before the atomic bomb. Someday, we’ll be scavenging for clean data the same way: pre-AI, uncontaminated by the AI explosion of junk.

mananaysiempre

13 hours ago

I’m not sure whether, even for traditional search engines, “nofollow” means the scraper doesn’t follow the link, or just that it doesn’t include the link in the PageRank graph (or whatever) but still uses it to discover new pages. (Of course, LLMs are far too impenetrable for such a middle ground to exist.)

ttoinou

12 hours ago

LLMs don't seem to hallucinate my niche products and company (sometimes the name of the company making the product, yes, but not the product name or the company URL), and according to Cloudflare Radar I'm only ranked somewhere between the top 200,000 and 500,000 domains: https://radar.cloudflare.com/scan

boleary-gl

14 hours ago

I wonder if Cloudflare's new plan for blocking AI from scraping the "real" sites...

zahlman

14 hours ago

You wonder if the plan is (or does) what?

flufluflufluffy

9 hours ago

> “Crims have cottoned on to a new way to lead you astray”

Was - was the article written by AI?

sublinear

5 hours ago

No, it just sounds British.

chownie

3 hours ago

Australian. "Crims" isn't said much in the UK; for us that's "thugs" or "yobs".

9283409232

15 hours ago

I've been using Phind lately and I think they do a really good job avoiding this problem. I don't think I've ever run into a fake URL using it. As a search engine, I think I still prefer Kagi, but Phind is great if you want a free option.

kristopolous

15 hours ago

It's wild how many of the links are hallucinations.

Maybe the error rate is consistent with everything else, and we just eisegesis our way into thinking it's not.

whatsgonewrongg

8 hours ago

I’ve had Claude hallucinate options for a lot of things, most recently DOM method arguments and Wrangler configuration. They're very reasonable things you might expect to exist. But they don’t.

I must be holding it wrong. How are people working with these tools to produce quality software?

DanAtC

8 hours ago

In case you're not being sarcastic: they're not.