I'm tired of dismissive anti-AI bias

5 points, posted 8 days ago
by namanyayg

5 Comments


8 days ago

Here's the quote I find most substantial when I gestalt over AI:

"We dared to hope we had invented something that would bring lasting peace to the Earth. But we were wrong. We underestimated man’s capacity to hate and to corrupt good means for an evil end." --Orville Wright

AIs are the airplanes of the 21st century. They were invented to be used for good, but they will quickly be used for evil as well.

I did use an LLM to find the exact verbiage of this quote.

teachrdan

8 days ago

> They were invented to be used for good

I don't know that that's true. It seems a lot of development -- probably most of it, in money and engineer-hours -- has gone towards the cynical deployment of AI to make money, for uses which are dubious at best. I don't think anyone developed AI chatbots to be used for good. I think they were designed to be good-enough replacements for human beings, who would be fired en masse in order to save enterprise customers a bunch of money, which would then go towards their wealthy shareholders and their (now smaller!) pool of employees.

MattSayar

8 days ago

I think optimism is an underrated quality. Cynicism is easy, hope is hard

onecommentman

8 days ago

I’m tired of the huckster hype, personally observed lackluster performance and annoying externalities of AI developments that justify and explain the bias. If the financial hype machine wasn’t distorting the reporting, it would be following a more rational evolving trajectory. It might still bubble, but bubble gracefully and naturally. Some AI is amazing and fun, but “electrostatic generator/science fair/look at my hair stand on end” amazing, not “my life will fundamentally change for the better” amazing. And remember, this is the second time around for an AI hype cycle…no one is more cynical than a reformed idealist who got burned once.

evil-olive

8 days ago

> If you can't see the value of LLMs at all at this point, you're holding it wrong.

> Sure, LLMs hallucinate. Said cynically, they "lie." You need to add lots of context, or enable Web Search to ground the answers in external sources, or connect MCP servers, or or or... You can't just ask ChatGPT to "make me a fitness program," you need to enable Deep Research and give it links to existing workout programs, articles from Men's Health, detail your current workout plan, goals, previous injuries, and and and and...

right...you can claim "everyone knows" about hallucinations. everyone knows they need to do prompt engineering. everyone knows how important hooking up an MCP server is. and if you use an LLM without knowing that, it's your fault for using the tool wrong.

but we know empirically that just isn't true. back in 2023, lawyers were sanctioned for filing LLM-generated legal briefs with hallucinated case citations [0]. then more than a year later, it happened again at a different law firm [1].

and does anyone think, realistically, that we've seen the last lawyer get sanctioned for LLM-hallucinated case citations?

the "dismissive" attitude towards LLMs that I find most compelling is that the benefits have been hugely oversold, and the caveats downplayed. LLMs have been rolled out in way too many places where they weren't ready for prime-time yet, and to users who weren't aware of the pitfalls.

Google has LLM-generated "overviews" at the top of every search result page. they famously said you could put glue on your pizza [2] and that it was safe to eat one rock per day [3].

you can say "everyone who uses an LLM should know such-and-such caveats" if you want. but Google's rollout of AI overviews means that every single person who uses Google Search is suddenly an "LLM user". expecting all of those people to learn overnight how to use an LLM "properly" is simply not realistic.

0: https://www.reuters.com/legal/new-york-lawyers-sanctioned-us...

1: https://www.theregister.com/2025/02/25/fine_sought_ai_filing...

2: https://www.theverge.com/2024/5/23/24162896/google-ai-overvi...

3: https://www.bbc.com/news/articles/cd11gzejgz4o