This technology demos incredibly well and you can just see how everyone gets giddy with excitement around using it. I watch colleagues and executives proudly showing what they can do with it or making endless jokes about it. It reminds me of when people first got their phones and couldn't stop showing everyone how cool they were.
This leads to an over-rotation in perceived value. The value is significant, just as it was with the mobile phone, but it's not going to live up to the hype in the near term.
It's definitely interesting how in anonymous forums there are a lot more people pointing out that they think this is hype, whereas when we wear our professional hats many of us join in. It's like we all want to keep the party going even though we all know what's going to happen.
>> in anonymous forums there are a lot more people pointing out that they think this is hype, whereas when we wear our professional hats many of us join in
When your boss is hyping it up and demanding all hands on deck full steam ahead on the Good Ship AI, lots of people join in out of fear, particularly in the currently awful job market which is partly being ruined by AI itself.
Some of us just stay quiet, keep our heads down, and plug away using tools actually fit for purpose, like LSPs and refactoring tools.
Very few have the courage to stand up in a professional setting and say the emperor has no clothes.
There are no error bars, no confidence intervals. Just a one-trick pony that pastes tokens together to give you something that may look good to many people. Sure, there are many good use cases, but there are still enough unpatchable holes of unknown size in the watering pail to limit its effectiveness.
I think it's easy to forget how much low-hanging fruit there still is, in terms of taking full advantage of this technology.
People are still figuring out very basic integrations, and even now, at this early stage, the things I can do with LLMs are pretty incredible. For example, I was able to set Cursor up to finally drag an old codebase out of the dark ages. It then built new features that I've long wanted. It took a few hours on my end, but would have been at least a week or two without it.
I'm not exposed to much of the hype, though, so maybe my calibration of what the hype is, is wrong.
>It reminds me of when people first got their phones and couldn't stop showing everyone how cool they were.
Your comparison to smartphones is interesting. Smartphones are definitely transformative. There was a lot of hype, but they were still transformative.
Do you believe that LLMs and AI are also going to be transformative?
> how in anonymous forums there are a lot more people pointing out that they think this is hype, whereas when we wear our professional hats many of us join in
Different messaging for different audiences. On HN, for all its faults, people don't need to be told that yes, SOTA LLMs can help you somewhat with code, parsing documentation, etc. A lot of people in the "real world" are still grossly underestimating this technology.
On the one hand I see posts here by people who have no idea what they are talking about.
On the other I get 5 hours of work done in 5 minutes every other day.
The worst I can see happening is a dot-com-style crash: pets.com will go out of business, but Amazon won't.
> This technology demos incredibly well
You just summed up machine learning, not just AI/LLMs.
My domain is very far from LLMs, but even in my domain you can build a really cool demo that is entirely misleading.
> It's definitely interesting how in anonymous forums there are a lot more people pointing out that they think this is hype, whereas when we wear our professional hats many of us join in. It's like we all want to keep the party going even though we all know what's going to happen.
Well, when people's financial and employment stability are dependent on placating the overlords who are entranced...
I've been following progress on self-driving cars since the 2004 and 2005 DARPA Grand Challenges. Things were very exciting in the 2011-2015 timeframe. Demos were mindblowing. Engineers were giddy. The auto industry went from laughing to shaking in their boots, and then scrambling to make it appear as though they had a horse in the self-driving car race. Hundreds of billions in investment were flying around. Now here we are in 2025: things aren't dead at all, but also nobody with skin in the game is under any delusions about how hard it is to build and scale a viable robotaxi operation.
So far the self-driving-car hype cycle has served as a useful reference for understanding the hype around LLMs. The main difference is that there is ten times as much money flying around.
What's the path to recouping that money?
Even if every major company in the US spends $100,000 a year on subscriptions and every household spends $20/month, it still doesn't seem like enough return on investment when you factor in inference costs and all the other overhead.
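(A rough back-of-envelope along those lines, with assumed figures: the household count is roughly right for the US, while the company count and the spend levels are just illustrative placeholders.)

    # Rough, illustrative back-of-envelope for the subscription revenue described above.
    # Assumptions: ~130M US households (approximately right), every one of them paying
    # $20/month, plus a purely illustrative 5,000 "major" companies at $100k/year each.

    households = 130_000_000
    consumer_monthly_fee = 20          # $/month
    major_companies = 5_000            # illustrative guess, not a real count
    enterprise_annual_fee = 100_000    # $/year

    consumer_revenue = households * consumer_monthly_fee * 12     # ~$31.2B/year
    enterprise_revenue = major_companies * enterprise_annual_fee  # ~$0.5B/year

    total = consumer_revenue + enterprise_revenue
    print(f"Estimated annual revenue: ${total / 1e9:.1f}B")       # ~$31.7B/year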
New medical discoveries, maybe? I saw OpenAI's announcement about gpt-bio and iPSCs which was pretty amazing, but there's a very long gap between that and commercialization.
I'm just wondering what the plan is.
Wasn't the plan AGI, not ROI on services based on current-gen AI models? AGI was the winner-takes-all holy grail, so all this money was just buying lottery tickets in the hope of striking AGI first. At least that's how I remember it, but the AGI dreams may have been hampered by the lack of exponential improvement in the last year.
The "game plan" is, and always was, to target human labor. Some human labor is straight up replaceable by AI already, other jobs get major productivity boosts. The economic value of that is immense.
We're not even at AGI, and AI-driven automation is already rampaging through the pool of "the cheapest and the most replaceable" human labor. Things that were previously outsourced to Indian call centers are now increasingly outsourced to the datacenters instead.
Most major AI companies also believe that they can indeed hit AGI if they sustain the compute and the R&D spending.
If LLMs could double the efficiency of white-collar workers, major companies would be asked to pay far more than $100,000 a year. If they could cut their expensive workforce in half and then paid out even 25% of the savings, it could easily generate enough revenue to make that valuation look cheap.
$100k/year is literally nothing.
Think of it as maybe $10k/employee, figuring a conservative 10% boost in productivity against a lowball $100k/year fully burdened salary+benefits. For a company with 10,000 employees that’s $100m/year.
Major companies would spend 10-100x that if it resulted in real, tangible productivity gains for their businesses.
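(The back-of-envelope above, written out; the 10% boost and the $100k fully burdened cost are the assumptions already stated in the comment.)

    # The per-employee arithmetic from the comment above, written out.
    # Assumptions (from the comment): 10% productivity boost, $100k/year fully
    # burdened salary + benefits, 10,000 employees.

    productivity_gain = 0.10
    fully_burdened_cost = 100_000   # $/year per employee
    employees = 10_000

    value_per_employee = productivity_gain * fully_burdened_cost   # $10k/employee/year
    company_wide_value = value_per_employee * employees             # $100M/year

    print(f"${value_per_employee:,.0f} per employee, ${company_wide_value / 1e6:.0f}M per year")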
I think it's "scam everyone into giving us lots of money, then run before the bills come".
How big is $344B?
Apparently the total market capitalisation of the US stock market is $62.8 trillion. Shiller's CAPE ratio for the S&P index is currently about 38; CAPE is defined as current price / (earnings averaged over the trailing 10 years).
That suggests that over the last 10 years, the average earnings of the US stock market is about $1.7 trillion annually.
So $344B of spending is about 1/5 of the average earnings of the total US stock market.
Still hard to interpret that, but 1/5 is an easier number to think about.
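(The same estimate as a few lines of arithmetic, using the market-cap and CAPE figures quoted above and the $344B spending figure under discussion.)

    # Reconstruction of the estimate above, using the figures quoted in this thread.

    total_market_cap = 62.8e12   # total US stock market capitalisation, ~$62.8T
    cape_ratio = 38              # Shiller CAPE: price / 10-year average earnings
    ai_spend = 344e9             # the $344B spending figure under discussion

    avg_annual_earnings = total_market_cap / cape_ratio   # ~$1.65T/year
    share_of_earnings = ai_spend / avg_annual_earnings    # ~0.21, i.e. about 1/5

    print(f"Average earnings: ${avg_annual_earnings / 1e12:.2f}T; "
          f"spend as a share of earnings: {share_of_earnings:.0%}")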
These days I'm looking at managing my own portfolio. I go off expected returns (using GDP as a component) plus dividends, and adjust for risk to compute which country/region has good expected returns. I adjust this a couple of times a year.
If one assumes it's nearly all a bubble, how would you correct earnings for the US? I'm interested in applying this to any investment that tracks AI-heavy companies in the US.
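(One possible way to make that correction concrete, as a minimal sketch: dividend yield plus GDP-proxied earnings growth, with the assumed "bubble share" applied as a haircut to the earnings component. The function and every number in it are hypothetical placeholders, not estimates or advice.)

    # A minimal, hypothetical sketch of the kind of correction asked about above:
    # expected return ~ dividend yield + earnings growth (GDP as a proxy), with the
    # earnings component discounted by an assumed "bubble share". All placeholders.

    def adjusted_expected_return(dividend_yield: float,
                                 gdp_growth: float,
                                 bubble_share_of_earnings: float) -> float:
        """Haircut the earnings-driven component by the share assumed to be bubble-driven."""
        earnings_component = gdp_growth * (1 - bubble_share_of_earnings)
        return dividend_yield + earnings_component

    # e.g. 1.3% dividend yield, 2% real GDP growth, assume 30% of earnings growth is bubble
    print(f"{adjusted_expected_return(0.013, 0.02, 0.30):.1%}")   # ~2.7% expected real return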
But the $344B figure is not annualized, it’s cumulative.
I think what we have right now is an excellent interface for low-friction, non-specialised interaction by humans (work or personal use) with a vast array of highly specialised and highly-complex systems.
What it isn't is the actual final "thing" itself. It's just the thin veneer right now.
I'm not convinced that that revolution was worth whatever trillions we'll end up spending, but fortunately that's not on my shoulders to be worried about.
This “article” is clickbait. Controversial title with no substance, asking “why are companies investing heavily in a technology that works for some (limited but valuable) use cases, when they could invest in pure R&D for something that might be better someday”.
It's obviously a lot of money, and of course it's too much money, but I think we can still get LLMs much further and I think they're probably the currently most interesting approach.
I don't even care about multimodality etc. I think pure text models are a very appealing idea.
Most of us believed that cryptocurrencies were trash. Look how valuable they are now.
LLMs are a million times better than cryptocurrencies.
Crypto never ended up fulfilling those original goals of being an actual currency and of smart contracts actually doing things. Blockchains and NFTs were never used to solve any of the problems they claimed to.
It's just none of that had any bearing on the value of the coins.
Pokemon cards are also super valuable these days; it says more about some people having way too much money for their own good than anything else.
There is a distinction between "vehicle for a large volume of speculation" and "valuable". Cryptocurrency has not created any "value".
Oracle's mega moves in the market (a couple hundred billion dollars in market cap), due to their claim that OpenAI was making a multi-year commitment to move much of its workloads to their cloud ... likely a heavy revenue play rather than profit, with an aspiration to push as much CapEx depreciation to out-years as possible (btw Oracle, where is all the CapEx?), AKA financial engineering ... just show how over-leveraged this bubble has become.
The Economist recently featured a piece pointing out that it's no longer risk that drives the market but a balance of fear of loss and fear of missing out (https://www.economist.com/finance-and-economics/2025/08/06/w...). FOMO is out of control right now
This Oracle surge and the revenue predictions really feel like jumping the shark. I mean, it's Oracle... I've never felt confident enough to bet against a company, but a short position on Oracle may well be too tempting for me.
> FOMO is out of control right now
Exactly. The whole stock market is currently behaving like the crypto bubbles.
In the same way people thought the train would hit them when cinema first debuted, so too do we believe the machine is thinking.
They MUST. It doesn't matter if it looks fragile or how much money it is.
LLMs put most of those companies' businesses at risk; they can't afford not to be ahead. If they aren't ahead, there's a risk that the entire US economy would be in terrible shape.
American Big Tech companies that make plenty of INTERNATIONAL revenue from ads (Meta, Google) can quickly become shells of their former selves.
How? Countries and economic blocs could quickly substitute American products with their own counterparts if those products have nothing unique to offer.
The US economy has become very dependent on FAANG cashflow; it's what gets other parts of the economy moving.
No wonder they had dinner with Trump. If this fades away, the US will look very weak, with a terrible economic outlook.
Fiat money in a credit-based inflationary environment is free. Oil money is free; it literally comes from the ground. There are no "losses".
Not sure what this has to do with the article.
Oil isn't "free" anyway; it takes energy to make energy, and EROEI has been going down for some time as the easy oil is extracted.