> Would you really consider the Nobel laureates Geoffrey Hinton¹, Demis Hassabis² and Barack Obama³ not worth listening to on this matter?
Obama, no. Geoff Hinton has his opinions and I’ve listened to them. For every smart person who believes AGI is happening soon, you can find other smart people who argue the other way.
> AI companies' revenues are growing rapidly, reaching the tens of billions.
Trading stock and Azure credits don’t equal revenue. OpenAI, the leader in the AI industry, loses billions every quarter. Microsoft, Google, and Meta subsidize their AI work from other profitable businesses. The profit isn’t there.
> The claim that it's just a scapegoat for inevitable layoffs seems fanciful when there are many real-life cases of AI tools performing equivalent person-hours work in white-collar domains.
https://www.businessinsider.com/how-lawyer-used-ai-help-win-...
A few questionable anecdotes? Given the years since ChatGPT launched and the billions invested, I’d expect more tectonic changes than “it wrote my term paper.” Companies have not replaced employees with AI doing the same job at any scale. You simply can’t find honest examples of that, except for call centers that were automated and offshored decades ago.
> To claim it is impossible that AI could be at least a partial cause of layoffs requires an unshakable belief that AI tools could not even be labor-multiplying (as in allowing one person to perform more work at the same level of quality than they would otherwise). To assume that this has never happened by this point in 2025 requires a heavy amount of denial.
I might agree, but I didn’t make that claim. AI tools probably can add value as tools. If you find a real example of AI taking over a professional job at scale, let us know.
> That being said, I could cite dozens of articles, numerous takes from leading experts, scientists, legitimate sources without conflicts of interest, and I'm certain a fair portion of the HN regulars would not be swayed one inch. Lively debate is the lifeblood of any domain that prides itself on intellectual rigor, but a lot of the dismissal of the actual utility of AI, the impending impacts, and its implications feels like reflexive coping.
We could have better discussions if the AI industry weren’t led by chronic liars and frauds, people making ridiculous self-serving predictions not backed by anything resembling science. AI gets shoved down our throats with no demonstrated or measurable benefit, poor accuracy, severe limitations, and heavy costs subsidized by investors and the public. Forget about the energy and environmental impact. Which “side” acts in good faith in this so-called debate?
> I would really really love to hear an argument that convinced me that AGI is impossible, or far away, or that all the utility I get from Claude, o3 or Gemini are all just tricks of scale and memorization entirely orthogonal to something somewhat akin to general human-like intelligence. However, I have not heard a good argument.
I have. You just need to take the critics seriously. No one can even define intelligence or AGI, but they sure can sell it to FOMO CIOs.