avalys
9 hours ago
AI is going to be a highly competitive, extremely capital-intensive commodity market: a race to the bottom on the cost and efficiency of delivering models that have all reached the same asymptotic performance in intelligence, reasoning, and so on.
The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result. OpenAI, Anthropic, Google, Meta, Deepseek, etc. There's no evidence of a technological moat or a competitive advantage in any of these companies.
The conclusion? AI is a world-changing technology, just like the railroads were, and it is going to soon explode in a huge bubble - just like the railroads did. That doesn't mean AI is going to go away, or that it won't change the world - railroads are still here and they did change the world - but from a venture investment perspective, get ready for a massive downturn.
jotras
3 hours ago
Something nobody's talking about: OpenAI's losses might actually be attractive to certain investors from a tax perspective. Microsoft and other corporate investors can potentially use their share of OpenAI's operating losses to offset their own taxable income through partnership tax treatment. It's basically a tax-advantaged way to fund R&D: you get the loss deductions now while retaining upside optionality later.
This is why the "cash burn = value destruction" framing misses the mark. For the right investor base, $10B in annual losses at OpenAI could be worth $2-3B in tax shields (depending on their bracket and how the structure works). That completely changes the return calculation.
The real question isn't "can OpenAI justify its valuation" but rather "what's the blended tax rate of its investor base?" If you're sitting on a pile of profitable cloud revenue like Microsoft, suddenly OpenAI's burn rate starts looking like a pretty efficient way to minimize your tax bill while getting a free option on the AI leader. This also explains why big tech is so eager to invest at nosebleed valuations. They're not just betting on AI upside, they're getting immediate tax benefits that de-risk the whole thing.
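To spell out the arithmetic behind that $2-3B range: a tax shield is just the allocated loss times the investor's marginal rate. A sketch (illustrative only; `tax_shield`, the rates, and the clean pass-through are assumptions, and real partnership loss allocation is far more constrained than this):

```python
def tax_shield(allocated_loss: float, marginal_rate: float) -> float:
    """Cash value of deducting a passed-through loss against other income."""
    return allocated_loss * marginal_rate

annual_loss = 10e9  # hypothetical $10B annual loss
# Assumed blended rates bracketing the $2-3B claim above
for rate in (0.21, 0.30):
    print(f"rate={rate:.0%}: shield=${tax_shield(annual_loss, rate) / 1e9:.1f}B")
```

At the 21% US federal corporate rate the shield is about $2.1B; a 30% blended rate gets you to $3B, which is where the range comes from.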
Jare
2 hours ago
> For the right investor base, $10B in annual losses at OpenAI could be worth $2-3B in tax shields (depending on their bracket and how the structure works). That completely changes the return calculation
I know nothing about finances at this level, so asking like a complete newbie: doesn't that just mean that instead of risking $10B they're risking $7-8B? It is a cheaper bet for sure, but doesn't look to me like a game changer when the range of the bet's outcome goes from 0 to 1000% or more.
ludicrousdispla
2 hours ago
>> For the right investor base, $10B in annual losses at OpenAI could be worth $2-3B in tax shields
So just a loss for governments, or in other words, socializing the losses.
booi
2 hours ago
Hi, I'm here to hold the bag?
joncrane
2 minutes ago
You guys are getting bags?
Groxx
2 hours ago
We really should have thought of this before becoming peasants.
chrishare
an hour ago
Have you tried not being poor?
nineteen999
8 minutes ago
It gives you a new opportunity to pull yourself up by the bootstraps. Until mommy and daddy come along with another cash infusion.
chinathrow
2 hours ago
Your pension fund, yes.
lotsofpulp
an hour ago
This comment makes even less sense than jotras’ comment.
Pension funds buy shares in businesses such as Microsoft. The money going into a pension fund is not typically a function of the tax paid by companies such as Microsoft, but rather comes from a combination of actuaries' recommendations, payroll tax receipts, and politicians' priorities.
Therefore a pension fund's equity holdings, such as Microsoft, doing well means taxes can be lower.
lenkite
an hour ago
> OpenAI's losses might actually be attractive to certain investors from a tax perspective.
OpenAI is anyway seeking a Govt Bailout for "National Security" reasons. Wow, I earlier scoffed at "Privatize Profits, Socialize Losses", but this appears to now be Standard Operating Procedure in the U.S.
https://www.citizen.org/news/openais-request-for-massive-gov...
So the U.S. Taxpayer will effectively pay for it. And not just the U.S. Taxpayer - due to USD reserve currency status, increasing U.S. debt is effectively shared by the world. Make billionaires richer, make the middle class poor. Make the poor destitute. Make the destitute dead. (All USAID cuts)
alex43578
14 minutes ago
There's already a lot that the US taxpayer is on the hook for that's a lot less valuable than a bet on the next big thing in software, productivity, and warfare.
It shouldn't be the job of the US taxpayer to feed someone that doesn't want to work, study, or pass a drug test, and it absolutely shouldn't be the job of the US taxpayer to feed another country's citizens half a world away.
pvtmert
an hour ago
Amazon already has not been paying any sort of income tax to the EU. There was a lawsuit in Belgium but Amazon has won that in late-2024 since they had a separate agreement in/with Luxembourg.
Speaking for EU, all big tech already not paying taxes one way or another, either using Dublin/Ireland (Google, Amazon, Microsoft, Meta, ...) and Luxembourg (Amazon & Microsoft as far as I can tell) to avoid such corporate/income taxes. Simply possible because all the earnings go back to the U.S. entity in terms of "IP rights".
lotsofpulp
41 minutes ago
> Amazon already has not been paying any sort of income tax to the EU.
That should be expected, because
https://european-union.europa.eu/priorities-and-actions/acti...
> The EU does not have a direct role in collecting taxes or setting tax rates.
> There was a lawsuit in Belgium but Amazon has won that in late-2024 since they had a separate agreement in/with Luxembourg.
Dec 2023.
> Speaking for EU, all big tech already not paying taxes one way or another, either using Dublin/Ireland (Google, Amazon, Microsoft, Meta, ...) and Luxembourg (Amazon & Microsoft as far as I can tell) to avoid such corporate/income taxes. Simply possible because all the earnings go back to the U.S. entity in terms of "IP rights".
Ireland (due to pressure from EU) closed this in 2020. The amount of tax collected by Ireland quadrupled. See Figure 5 and 6 in link below.
https://budgetmodel.wharton.upenn.edu/issues/2024/10/14/the-...
danielscrubs
an hour ago
Can you explain it in another way? What you are saying is that instead of losing 100% they lose 70%, and losing 70% is somehow good? Or are you saying the risk-adjusted returns are then 30% better on the downside than previously thought? Because if you are, I think people here are saying the risk is so high that it is a given they will fail.
rebuilder
3 hours ago
It’s hardly a free option, by your numbers it’d be a 20-30% discount.
thrwaway55
2 hours ago
Sure but if there's no moat would you rather pay 100% or 80% until the credits run out? You reap the 100% spend in the meantime. Not everyone even has the no moat discount.
fooblaster
9 hours ago
There is a pretty big moat for Google: extreme amounts of video data on their existing services and absolutely no dependence on Nvidia and its 90% margin.
simonsarris
7 hours ago
Google has several enviable positions: if not moats, at least redoubts. TPUs, mass infrastructure, their own cloud services, and ownership of the delivery mechanisms on mobile (Android) and every device (Chrome). And Google and YouTube are still the #1 and #2 most visited websites in the world.
xivzgrev
7 hours ago
Not to mention security. I'd trust Google more not to have a data breach than OpenAI or whomever. Email accounts are hugely valuable, but I haven't seen a Google data breach in the 20+ years I've been using them. This matters because I don't want my chats out there in public.
Also integration with other services. I just had Gemini summarize the contents of a Google Drive folder and it was effortless & effective
mootothemax
5 hours ago
While I don’t disagree with you, for historical purposes I think it’s important to highlight why Google started its push for 100% wire encryption everywhere all the time:
The NSA and GCHQ and basically every TLA with the ability to tap a fibre cable had figured out the gap in Google’s armour: Google’s datacenter backhaul links were unencrypted. Tap into them, and you get _everything_.
I’ve no idea whether Snowden’s leaks were a revelation or a confirmation for Google themselves; either way, it’s arguably a total breach.
jedberg
2 hours ago
When I worked at PayPal back in 2003/4, one of the things we did (and I think we were the first) was encrypt the datacenter backhaul connections. This was on top of encrypting all the traffic between machines. It added a lot of expense and overhead, but security was important enough to justify it.
dilyevsky
3 hours ago
Not that I disagree with your assessment but in the spirit of hn pedantry - google had a very significant breach where gmail was a primary target and that was “only” 16 years ago in mid 2009. So bad that it has its own wikipedia page: https://en.wikipedia.org/wiki/Operation_Aurora
charcircuit
an hour ago
>very significant breach
That page says it was only 2 accounts and none of the messages within the mail was accessed. I wouldn't call that very significant.
why-o-why
5 hours ago
Is Google even required to inform you of a data breach?
bjt
4 hours ago
They're subject to California law, so yeah.
devsda
7 hours ago
Don't forget the other moat.
While their competitors have to deal with actively hostile attempts to stop scraping training data, in Google's case almost everyone bends over backwards to give them easy access.
catoc
4 hours ago
‘Actively hostile’ as in objecting to your content getting ripped off without permission?
satvikpendem
3 hours ago
It's a matter of perspective. In this scenario both sides see the other as hostile, just as one would look at a war happening as an outside observer.
DoesntMatter22
5 hours ago
They also have one of the biggest negatives in that they abandon almost everything they build, so it’s hard to get invested in their products.
I agree with the rest though
satvikpendem
3 hours ago
They don't abandon their money makers. That's the thing people don't get about the Google graveyard meme, they only cut things that obviously aren't working to make them more money.
nateb2022
9 hours ago
I have yet to be convinced the broader population has an appetite for AI-produced cinematography or videos. Dependence on Nvidia is no more of a liability than dependence on electricity rates; it's not as if it's in Nvidia's interest to see one of its large customers fail. And pretty much any of the other Mag7 companies are capable of developing in-house TPUs and are already independently profitable, so Google isn't alone here.
ralph84
8 hours ago
The value of YouTube for AI isn't making AI videos, it's that it's an incredibly rich source for humanity's current knowledge in one place. All of the tutorials, lectures, news reports, etc. are great for training models.
Nextgrid
8 hours ago
Is that actually a moat? Seems like all model providers managed to scrape the entire textual internet just fine. If video is the next big thing I don’t see why they won’t scrape that too.
jmb99
4 hours ago
Scraping text across the entire internet is orders of magnitude easier than scraping YouTube. Even ignoring the sheer volume of data (exabytes), you simply will get blocked at an IP and account level before you make a reasonable dent. Even if you controlled the entire IPv4 space I’m not sure you could scrape all of YouTube without getting every single address banned. IPv6 makes address bans harder, true, but then you’re still left with the problem of actually transferring and then storing that much data.
earthnail
2 hours ago
For now, you actually get pretty far with Tor. Just reset your connection when you hit an IP ban by sending SIGHUP to the Tor daemon.
I did that when I was retraining Stable Audio for fun, and it really turned out to be trivial enough to pull off as a little evening side project.
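The loop is roughly this: fetch, check for a ban, HUP the daemon, retry. A minimal sketch (assumes a local tor daemon; `fetch` and `is_banned` are hypothetical caller-supplied hooks, not real APIs, and note tor's control port `SIGNAL NEWNYM` is the more polite way to request a new circuit):

```python
import subprocess
import time

def reset_tor_circuit(wait: float = 5.0) -> None:
    """SIGHUP makes a locally running tor daemon reload and build fresh
    circuits, which usually means a new exit IP. Assumes `pkill` exists."""
    subprocess.run(["pkill", "-HUP", "-x", "tor"], check=False)
    time.sleep(wait)  # give tor a moment to rebuild circuits

def fetch_until_unbanned(url, fetch, is_banned, max_resets=10):
    """Retry `fetch(url)`, resetting the Tor circuit whenever `is_banned`
    says the response looks like an IP ban. Both hooks are stand-ins."""
    for _ in range(max_resets):
        resp = fetch(url)
        if not is_banned(resp):
            return resp
        reset_tor_circuit()
    raise RuntimeError(f"still banned after {max_resets} circuit resets")
```

Whether this keeps working depends entirely on how aggressively the target blocks known Tor exit nodes, which many large sites now do.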
monocasa
8 hours ago
And we're probably already starting to see that, given the semi-recent escalations in the game of cat and mouse between YouTube and the likes of youtube-dl.
Reminds me of Reddit's cracking down on API access after realizing that their data was useful. But I'd expect YouTube both to be quicker on the draw, knowing about AI data collection, and to have more time because of the orders-of-magnitude greater bandwidth required to scrape video.
jakeydus
6 hours ago
And reddit turned around and sold it all for a mess of pottage…
satvikpendem
3 hours ago
Sold being the operative word, rather than giving it away for free.
awesome_dude
6 hours ago
> Seems like all model providers managed to scrape the entire textual internet just fine
Google, though, has been doing it for literal decades. That could mean that they have something nobody else (except archive.org) has - a history on how the internet/knowledge has evolved.
fooblaster
9 hours ago
If you think they are going to catch up with Google's software and hardware ecosystem on their first chip, you may be underestimating how hard this is. Google is on TPU v7. Meta has already tried with MTIA v1 and v2; those haven't been deployed at scale for inference.
nateb2022
8 hours ago
I don't think many of them will want to, though. I think as long as Nvidia/AMD/other hardware providers offer inference hardware at prices decent enough to not justify building a chip in-house, most companies won't. Some of them will probably experiment, although that will look more like a small team of researchers + a moderate budget rather than a burn-the-ships we're going to use only our own hardware approach.
fooblaster
8 hours ago
Well, Anthropic just purchased a million TPUs from Google because even with a healthy margin for Google, it's far more cost effective because of Nvidia's insane markup. That speaks for itself. Nvidia will not drop their margin because it would tank their stock price. It's half of the reason for all this circular financing: lowering their effective margin without lowering it on paper.
fragmede
6 hours ago
And, don't forget everyone's buying from TSMC in every case!
margalabargala
8 hours ago
It's in Nvidia's interest to charge the absolute maximum they can without their customers failing. Every dollar of Nvidia's margin is your own lost margin. Utilities don't do that. Nvidia is objectively a way bigger liability than electricity rates.
bdangubic
8 hours ago
it is in every business’s best interest to charge the maximum…
wrs
6 hours ago
Utilities and insurance companies are two examples of businesses regulated to not charge the maximum, for public policy reasons.
bdangubic
5 hours ago
we suggesting that nvidia/google/.. be regulated for like utilities?
margalabargala
5 hours ago
No. That is not what anyone is saying, in this immediate thread anyway.
Reread the earlier comment and see if you can understand what you are missing.
AnonHP
2 hours ago
Not GP and haven’t participated in this thread. I’m clueless on what the point in your earlier comment is. Can you elaborate, please?
Ekaros
4 hours ago
I think it will be accepted by broader population. But if generation is easy and cheap I wonder if there is demand. And I mean as total demand in the segment. Will there be enough impressions to go around to actually profit from the content. Especially if storage is also considered.
Seattle3503
8 hours ago
The video data is probably good for training models, including text models.
why-o-why
5 hours ago
Given the fact that Apple and Coke both rushed to produce AI slop, and the agreements with Disney, we are going to see a metric fuck-ton of AI-generated cinema in the next decade. The broader population's tastes are absolute garbage when it comes to cinema, so I don't see why you need convincing. 40+ superhero films should be enough.
stevenjgarner
6 hours ago
Agreed. Even xAI's (Grok's) access to live data on x.com and millions of live video inputs from Tesla is a moat not enjoyed by OpenAI.
fooblaster
9 hours ago
And yes, all their competitors are making custom chips. Google is on TPU v7. absolutely nobody is going to get this right on the first try among their competitors - Google didn't.
CharlieDigital
8 hours ago
Bigger problem for late starters now is that it will be hard to match the performance and cost of Google/Nvidia. It's an investment that had to have started years ago to be competitive now.
loloquwowndueo
7 hours ago
In this case the difference between its and it’s does alter the meaning of the sentence.
cdf
5 hours ago
On paper, Google should never have allowed the ChatGPT moment to happen; how did a then non-profit create what was basically a better search engine than Google?
Google suffers from the classic Innovator's Dilemma and needs competition to refocus on what ought to be basic survival instincts. What is worse is that the search users are not the customers. The customers of Google Search are the advertisers, and Google will always prioritise the needs of the customers and squander their moats as soon as the threat is gone.
miohtama
2 hours ago
Google allowed this to happen because they listened to their compliance department and were afraid of a backlash if an LLM said something that could anger people.
Sergey Brin interview: https://x.com/slow_developer/status/1999876970562166968?s=20
This attitude also partially explains the black vikings incident.
hattmall
5 hours ago
Exactly, Google's business isn't search, it's ads. Is ChatGPT a more profitable system for delivering ads? That doesn't appear so, which means there's really no reason for Google to have created it first.
razodactyl
5 hours ago
There was a very negative "immune" response from the users when they perceived suggestions from ChatGPT as ads.
This will be hard for them to integrate in a way that won't annoy users / will be better implemented than any other competitor in the same space.
Or perhaps we just deal with all AI across the board serving us ads.... this makes more sense unfortunately.
transcriptase
4 hours ago
There’s a very negative immune response to the idea of Netflix running ads.
And yet they’re there, in the form of prominent product placement in all of their original series along with strategic placement in the frame to make sure they appear in cropped clips posted to social media and made into gifs.
Stranger Things alone has had 100-200 brands show up under the warm guise of nostalgia, with Coke alone putting up millions for all the less-than-subtle screen time their products get.
I’m certain AI providers will figure out how to slyly put the highest bidder into a certain proportion of output without necessarily acting out that scene in Wayne’s World.
mahirsaid
4 hours ago
I suspect Google can last much longer in regards to an AI chat engine that competes with OpenAI and other companies, without needing a profit from that particular product in a timely manner. I can't say the same for the others. Google is using its own money to fund this without much pressure for immediate profit on a deadline. They can rely on their other services for revenue and profit in the meantime.
razodactyl
5 hours ago
Think about it in terms of the research they put out into the ether though. The research grows into something viable, they sit back and watch the response and move when it makes sense.
It's like that old concept of saying something wrong in a forum on purpose to have everyone flame you for being wrong and needing to prove themselves better by each writing more elaborate answers.
You catch more fish with bait.
choudharism
4 hours ago
The TAM for video generation isn't as big as the other use cases.
xnx
4 hours ago
I agree, but isn't the TAM for video generation all of movies, TV, and possibly video games, or all entertainment? That's a pretty big market.
dilyevsky
3 hours ago
What you’re competing for is people’s attention and the tam for that is biggest there is
lokar
4 hours ago
YT is also a giant corpus of English via the transcription
IncreasePosts
3 hours ago
Hasn't it all been scraped by other ai companies already?
throw310822
an hour ago
> AI is a world-changing technology, just like the railroads were
This comparison keeps popping up, and I think it's misleading. The pace of technology uptake is completely different from that of railroads: the user base of ChatGPT alone went from 0 to 200 million in nine months, and it's now, after just three years, around 900 million users on a weekly basis. Even if you think that railroads and AI are equally impactful (I don't; I think AI will be far more impactful), the rapidity with which investments can turn into revenue and profit makes the situation entirely different from an investor's point of view.
lm28469
6 minutes ago
> just three years- around 900 million users on a weekly basis.
Well, I rotate about a dozen free accounts because I don't want to send 1 cent their way, and I imagine I'm not the only one. I do the same for Gemini, Claude and DeepSeek, so all in all I account for like 50 "unique" weekly users.
Apparently they have about 5% paying customers, so the total user count is meaningless; it just tells you how much money they burn and isn't an indication of anything else.
shaky-carrousel
an hour ago
Paid user base or free user base? Because free user base on a very expensive product is next to meaningless.
throw310822
27 minutes ago
It's meaningful because it shows that people like the product a lot, and for a lot of different reasons. There are only few products that can reach such market penetration, not to mention in only three years. As the quality of AI increases, people will quickly realise that they are willing to pay for it as much as they pay for electricity. And the same goes for businesses.
steve1977
an hour ago
Railroads enabled people and goods to move from one place to another much easier and faster.
AI enables people to... produce even more useless slop than before?
throw310822
an hour ago
At this point I'm taking the word "slop" as a sign meaning "I really didn't think this through and I'm just autocompleting based on a gut feeling and the first word that comes to mind".
steve1977
an hour ago
That's an easy way out, isn't it?
Chyzwar
2 hours ago
Anthropic is building a moat around their models with Claude Code, the Agent SDK, containers, programmatic tool use, tool search, skills and more. Once you fully integrate, you will not switch. Also, being capital intensive is a form of moat.
I think we will end up with a market similar to cloud computing: a few big players with great margins forming a cartel.
mhuffman
2 hours ago
>Anthropic is building moat around theirs models with claude code, Agent SDK, containers, programmatic tool use, tool search, skills and more.
I think this is something the other big players could replicate rapidly, even simulating the exact UI, interactions, importing/exporting of existing items, etc. that people are used to with Claude products. I don't think this is that big of a moat in the long run. The other big players just seem to be carving up the landscape and seeing where they can fit in for now, but once resource-rich eyes focus on them, Anthropic's "moat" will disappear.
iLoveOncall
24 minutes ago
A GPT wrapper isn't a moat.
jfrbfbreudh
6 hours ago
Google’s moat:
Try “@gmail” in Gemini
Google’s surface area to apply AI is larger than any other company’s. And they have arguably the best multimodal model and indisputably the best flash model?
avalys
6 hours ago
If the “moat” is not AI technology itself but merely sufficient other lines of business to deploy it well, then that’s further evidence that venture investments in AI startups will yield very poor returns.
tjwebbnorfolk
2 hours ago
It's funny that a decade ago the exit strategy of many of these startups would have been to get acquired by MSFT / META / GOOG. Now, the regulators have made a lot of these acquisitions effectively impossible for antitrust reasons.
Is it better for society for promising startups to die on the open market, or get acquired by a monopoly? The third option -- taking down the established players -- appears increasingly unlikely.
maeln
a few seconds ago
> Now, the regulators have made a lot of these acquisitions effectively impossible for antitrust reasons.
Is there any evidence that this is the case ? For very big merger (like nvdia and Arm tried) sure, but I can't think of a single time regulator stop a big player from buying a start up.
onion2k
3 hours ago
> Try “@gmail” in Gemini
I think this is a problem for Google. Most users aren't going to do that unless they're told it's possible. 99% of users are working to a mental model of AI that they learned when they first encountered ChatGPT - the idea that AI is a separate app, that they can talk to and prompt to get outputs, and that's it. They're probably starting to learn that they can select models, and use different modes, but the idea of connecting to other apps isn't something they've grokked yet (and they won't until it's very obvious).
What people see as the featureset of AI is what OpenAI is delivering, not Google. Google are going to struggle to leverage their position as custodians of everyone's data if they can't get users to break out of that way of thinking. And honestly, right now, Google are delivering lots of disparate AI interfaces (Gemini, Opal, Nano Banana, etc) which isn't really teaching users that it's all just facets of the same system.
edaemon
6 hours ago
That kind of makes it sound like AI is a feature and not a product, which supports avalys' point.
venusenvy47
5 hours ago
I tried it, but nothing happened. It said that it sent an email but didn't. What is supposed to happen?
dartharva
6 hours ago
Also, Google doesn't have to finance Gemini using venture capital or debt, it can use its own money.
latentsea
5 hours ago
DeepMind also solved the protein folding problem, so they have that going for them.
nr378
44 minutes ago
> The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result.
I think this conflates together a lot of different types of AI investment - the application layer vs the model layer vs the cloud layer vs the chip layer.
It's entirely possible that it's hard to generate an economic profit at the model layer, but that doesn't mean that there can't be great returns from the other layers (and a lot of VC money is focused on the application layer).
londons_explore
40 minutes ago
Whilst those other layers are useful, none of them are particularly hard to build or rebuild when you have many millions of dollars on hand.
One doesn't need tens of billions for them.
nateb2022
9 hours ago
> AI is going to be a highly-competitive, extremely capital-intensive commodity market
It already is. In terms of competition, I don't think we've seen any groundbreaking new research or architecture since the introduction of inference-time compute ("thinking") in late 2024/early 2025, circa OpenAI's o1.
The majority of the cost/innovation now is training this 1-2 year old technology on increasingly large amounts of content, and developing more hardware capable of running these larger models at more scale. I think it's fair to say the majority of capital is now being dumped into hardware, whether that's HBM and research related to that, or increasingly powerful GPUs and TPUs.
But these components are applicable to a lot of other places other than AI, and I think we'll probably stumble across some manufacturing techniques or physics discoveries that will have a positive impact on other industries.
> that ends up in a race to the bottom competing on cost and efficiency of delivering
One could say that the introduction of the personal computer became a "race to the bottom." But it was only the start of the dot-com bubble era, a bubble that brought about a lot of beneficial market expansion.
> models that have all reached the same asymptotic performance in the sense of intelligence, reasoning, etc.
I definitely agree with the asymptotic performance. But I think the more exciting fact is that we can probably expect LLMs to get a LOT cheaper in the next few years as the current investments in hardware begin to pay off, and I think it's safe to assume that in 5-10 years, most entry-level laptops will be able to manage a local 30B sized model while still being capable of multitasking. As it gets cheaper, more applications for it become more practical.
---
Regarding OpenAI, I think it definitely stands in a somewhat precarious spot, since basically the majority of its valuation is justified by nothing less than expectations of future profit. Unlike Google, which was profitable before the introduction of Gemini, AI startups need to establish profitability still. I think although initial expectations were for B2C models for these AI companies, most of the ones that survive will do so by pivoting to a B2B structure. I think it's fair to say that most businesses are more inclined to spend money chasing AI than individuals, and that'll lead to an increase in AI consulting type firms.
mark_l_watson
8 hours ago
> in 5-10 years, most entry-level laptops will be able to manage a local 30B sized model
I suspect most of the excitement and value will be on edge devices. Models sized 1.7B to 30B have improved incredibly in capability in just the last few months and are unrecognizably better than a year ago. With improved science, new efficiency hacks, and new ideas, I can’t even imagine what a 30B model with effective tooling available could do in a personal device in two years time.
sigbottle
7 hours ago
Very interested in this! I'm mainly a ChatGPT user; for me, o3 was the first sign of true "intelligence" (not 'sentience' or anything like that, just actual, genuine usefulness). Are these models at that level yet? Or are they o1? Still GPT4 level?
logicprog
7 hours ago
Not nearly o3 level. Much better than GPT4, though! For instance Qwen 3 30b-a3b 2507 Reasoning gets 46 vs GPT 4's 21 and o3's 60-something on Artificial Analysis's benchmark aggregation score. Small local models ~30b params and below tend to benchmark far better than they actually work, too.
airstrike
8 hours ago
> One could say that the introduction of the personal computer became a "race to the bottom." But it was only the start of the dot-com bubble era, a bubble that brought about a lot of beneficial market expansion.
I think the comparison is only half valid since personal computers were really just a continuation of the innovation that was general purpose computing.
I don't think LLMs have quite as much mileage to offer, so to continue growing, "AI" will need at least a couple step changes in architecture and compute.
zozbot234
8 hours ago
I don't think anyone knows for sure how much mileage/scalability LLMs have. Given what we do know, I suspect if you can afford to spend more compute on even longer training runs, you can still get much better results compared to SOTA, even for "simple" domains like text/language.
airstrike
7 hours ago
I think we're pretty much out of "spend more compute on even longer training runs" at this point.
skort
4 hours ago
> But I think the more exciting fact is that we can probably expect LLMs to get a LOT cheaper in the next few years as the current investments in hardware begin to pay off
Citation needed!
phyzix5761
8 hours ago
I, personally, use ChatGPT for search more than I do Google these days. More often than not, it gives me more exact results based on what I'm looking for, and it produces links I can visit to get more information. I think this is where their competitive advantage lies, if they can figure out how to monetize it.
raw_anon_1111
8 hours ago
We don't need anecdotes. We have data. Google has been announcing quarter after quarter of record revenues and profits and hasn't seen any decrease in search traffic. Apple also hinted that it didn't see any decreased revenues from the Google Search deal.
AI answers are good enough, and there is a long history of companies that couldn't monetize traffic via ads. The canonical example is Yahoo. Yahoo was one of the most trafficked sites for 20 years and couldn't monetize.
2nd issue: defaults matter. Google is the default search engine for Android devices, iOS devices and Macs whether users are using Safari or Chrome. It’s hard to get people to switch
3rd issue: any money that OpenAI makes off search ads, I’m sure Microsoft is going to want there cut. ChatGPT uses Bing
4th issue: OpenAIs costs are a lot higher than Google and they probably won’t be able to command a premium in ads. Google has its own search engine, its own servers, its own “GPUs” [sic],
5th: see #4. It costs OpenAI a lot more per ChatGPT request to serve a result than it costs Google. LLM search has a higher marginal cost.
sod22
8 hours ago
I personally know people that used ChatGPT a lot but have recently moved to using Gemini.
There are a couple of things going on, but put simply: when there is no real lock-in, humans enjoy variety. Until one firm creates a superior product with lock-in, only those who are generating cash flows will survive.
OAI does not fit that description as of today.
aprilthird2021
8 hours ago
I'm genuinely curious: why do you do this instead of a Google search? Google also puts an AI Overview / answer at the top, which is basically the same as putting your query into a chatbot, but it ALSO shows all the links from a regular search, so you can quickly corroborate the info using sources beyond the AI result (including sources that disagree with the AI answer).
thom
7 hours ago
The regular google search AI doesn’t do thinky thinky mode. For most buying decisions these days I ask ChatGPT to go off and search and think for a while given certain constraints, while taking particular note of Reddit and YouTube comments, and come back with some recommendations. I’ve been delighted with the results.
Marsymars
3 hours ago
I wouldn’t be surprised if ChatGPT was Pareto optimal for buying decisions… but I suspect there are a whole pile of Pareto optimal ways to make buying decisions, including “buy one of the Wirecutter picks” or “buy whatever Costco is selling”.
Bombthecat
12 minutes ago
Eh, I wouldn't be so sure. Chips built on brain matter and/or light are on their way, and so are quantum chips; one of those, or even a combination, will give AI a gigantic boost in performance, finally replacing a lot more humans. Whoever implements it first will rule the world.
nineteen999
10 minutes ago
You seem to have forgotten that the ruling class requires taxpayers to fund their incomes. If we're all out of work, there's nobody to buy their products and keep them rich.
variadix
8 hours ago
This will remain the case until we have another transformer-level leap in ML technology. I don’t expect such an advancement to be openly published when it is discovered.
parentheses
4 hours ago
This is different because now the cat's out of the bag: AI is big money!
I don't expect AGI or superintelligence to take that long, but I do think it'll happen in private labs now. There's also an AI business model (pay per token) that folks can use.
__MatrixMan__
4 hours ago
I think we'll find that that asymptote only holds for cases where the end user is not really an active participant in creating the next model:
- take your data
- make a model
- sell it back to you
Eventually all of the available data will have been squeezed for all it's worth, and the only way to differentiate yourself as an AI company will be to propel your users to new heights so that there's new stuff to learn. That growth will be slower, but I think it'll bear more meaningful fruit.
I'm not sure if today's investors are patient enough to see us through to that phase in any kind of a controlled manner, so I expect a bumpy ride in the interim.
conartist6
4 hours ago
Yeah except that models don't propel communities towards new heights. They drive towards the averages. They take from the best to give to the worst, so that as much value is destroyed as created. There's no virtuous cycle there...
__MatrixMan__
4 hours ago
Is that constraint fundamental to what they are? Or are they just reflecting the behavior of markets when there's low hanging fruit around?
When you look at models that were built for a specific purpose, closely intertwined with experts who care about that purpose, they absolutely propel communities to new heights. Consider the impact of AlphaFold: it won a Nobel prize, and proteomics is forever changed.
The issue is that that's not currently the business model aimed at most of us. We have to have a race to the bottom first. We can have nice things later, if we're lucky, once a certain sort of investor goes broke and a different sort takes the helm. It's stupid, but it's a stupidity that predates AI by a long shot.
matwood
2 hours ago
Or the airlines. Airlines have created a huge amount of economic value that has mostly been captured by other entities.
johnnyanmac
8 hours ago
>That doesn't mean AI is going to go away, or that it won't change the world - railroads are still here and they did change the world - but from a venture investment perspective, get ready for a massive downturn.
I don't know why people always take "the bubble will burst" to mean "literally all AI will die out and nothing of use will remain". The dot-com bubble didn't kill the internet. But it was a bubble, and it burst nonetheless, with ramifications that spanned decades.
All it really means to believe a bubble will pop is "this asset is over-valued and will soon, rapidly deflate in value to something more sustainable". And that's a good thing long term, despite the rampant destruction such a crash will cause for the next few years.
mr_toad
3 hours ago
But some people do believe that AI is all hype and it will all go away. It’s hard to find two people who actually mean the same thing when they talk about a “bubble” right now.
578_Observer
5 hours ago
The "Railway Bubble" analogy is spot on.
As a loan officer in Japan who remembers the 1989 bubble, I see the same pattern. In the traditional "Shinise" world I work with, Cash is Oxygen. You hoard it to survive the inevitable crash. For OpenAI, Cash is Rocket Fuel. They are burning it all to reach "escape velocity" (AGI) before gravity kicks in.
In 1989, we also bet that land prices would outrun gravity forever. But usually, Physics (and Debt) wins in the end. When the railway bubble bursts, only those with "Oxygen" will survive.
ManuelKiessling
3 hours ago
I'm aware this means leaving the original topic of this thread, but would you mind giving us a rundown of this whole Japan 1989 thing? I would love to read a first-person account.
578_Observer
2 hours ago
I am honored to receive a question from a fellow "Craftsman" (I assume from your name).
To be honest, in 1989, I was just a child. I didn't drink the champagne. But as a banker today, I am the one cleaning up the broken glass. So I can tell you about 1989 from the perspective of a "Survivor's Loan Officer."
I see two realities every day.
One is the "Zombie" companies. Many SMEs here still list Golf Club Memberships on their books at 1989 prices. Today, they are worth maybe 1/20th of that value. Technically, these companies are insolvent, but they keep the "Ghost of 1989" on the books, hoping to one day write it off as a tax loss. It is a lie that has lasted 30 years.
But the real estate is even worse. I often visit apartment buildings built during the bubble. They are decaying, and tenants have fled to newer, modern buildings. The owner cannot sell the land because demolition costs hundreds of thousands of dollars—more than the land is worth.
The owner is now 70 years old. His family has drifted apart. He lives alone in one of the empty units, acting as the caretaker of his own ruin.
The bubble isn't just a graph in a history book. It is an old man trapped in a concrete box he built with "easy money." That is why I fear the "Cash Burn" of AI. When the fuel runs out, the wreckage doesn't just disappear. Someone has to live in it.
hakfoo
4 hours ago
The railroads provided something of enduring value. They did something materially better than previous competitors (horsecarts and canals) could. Even today, nothing beats freight rail for efficient, cheap modest-speed movement of goods.
If we consider "AI" to be the current LLM and ImageGen bubble, I'm not sure we can say that.
We were all wowed that we could write a brief prompt and get 5,000 lines of React code or an anatomically questionable deepfake of Legally Distinct Chris Hemsworth dancing in a tutu. But once we got past the initial wow, we had to look at the finished product and it's usually not that great. AI as a research tool will spit back complete garbage with a straight face. AI images/video require a lot of manual cleanup to hold up to anything but the most transient scrutiny. AI text has such distinct tones that it's become a joke. AI code isn't better than good human-developed code and is prone to its own unique fault patterns.
It can deliver a lot of mediocrity in a hurry, but how much of that do we really need? I'd hope some of the post-bubble reckoning comes in the form of "if we don't have AI to do it (vendor failures or pricing-to-actual-cost makes it unaffordable), did we really need it in the first place?" I don't need 25 chatbots summarizing things I already read or pleading to "help with my writing" when I know what I want to say.
choeger
an hour ago
You're absolutely correct! ( ;) )
The issue is that generation of error-prone content is indeed not very valuable. It can be useful in software engineering, but I'd put it way below the infamous 10x increase in productivity.
Summarizing stuff is probably useful, too, but its usefulness depends on you sitting between many different communication channels and being constantly swamped in input. (Is that why CEOs love it?)
Generally, LLMs are great translators with a (very) lossily compressed knowledge DB attached. I think they're great user interfaces, and they can help streamline bureaucracy (instead of getting rid of it), but they will not help bring down the cost of producing tangible items. They won't solve housing.
My best bet is medicine. Here, all the areas that LLMs excel at meet. A slightly dystopian future cuts the expensive personal doctors and replaces them with (a few) nurses and many devices and medicines controlled by a medical agent.
cco
an hour ago
I was really hoping, and with a different administration I think there was a real shot, for a huge influx of cash into clean energy infrastructure.
Imagine a trillion dollars (frankly it might be more, we'll see) shoved into clean energy generation and huge upgrades to our distribution.
With a bubble burst all we'd be left with is a modern grid and so much clean energy we could accelerate our move off fossil fuels.
Plus a lot of extra compute, though its long-term value is less clear.
Alas.
retinaros
30 minutes ago
For me it's clear OpenAI and Anthropic have a lead. I don't buy Gemini 3 being good. It isn't, whatever the benchmarks say. Same for Meta and DeepSeek.
adventured
8 hours ago
Your premise is wrong in a very important way.
The cost of entry is far beyond extraordinary. You're acting like anybody can gain entry, when the exact opposite is the case. The door is closing right now. Just try to compete with OpenAI, let's see you calculate the price of attempting it. Scale it to 300, 500, 800 million users.
Why aren't there a dozen more Anthropics, given the valuation in question (and potential IPO)? Because it'll cost you tens of billions of dollars just to try to keep up. Nobody will give you that money. You can't get the GPUs, you can't get the engineers, you can't get the dollars, you can't build the datacenters. Hell, you can't even get the RAM these days, nor can you afford it.
Google & Co are capturing the market and will monetize it with advertising. They will generate trillions of dollars in revenue over the coming 10-15 years by doing so.
The barrier to entry is the same one that exists in search: it'll cost you well over one hundred billion dollars to try to be in the game at the level that Gemini will be at circa 2026-2027, for just five years.
Please, inform me of where you plan to get that one hundred billion dollars just to try to keep up. Even Anthropic is going to struggle to stay in the competition when the music (funding bubble) stops.
There are maybe a dozen or so companies in existence that can realistically try to compete with the likes of Gemini or GPT.
zozbot234
8 hours ago
> Just try to compete with OpenAI, let's see you calculate the price of attempting it. Scale it to 300, 500, 800 million users.
Apparently the DeepSeek folks managed that feat. Even with the high initial barriers to entry you're talking about, there will always be ways to compete by specializing in some underserved niche and growing from there. Competition seems to be alive and well.
thom
7 hours ago
DeepSeek certainly managed that on the training side but in terms of inference, the actual product was unusably slow and unreliable at launch and for several months after. I have not bothered revisiting it.
snuxoll
5 hours ago
Are you talking about the model or their service? There's plenty of options for using their models other than the official DeepSeek API.
pier25
4 hours ago
Did railroads change the world though?
They only lasted a couple of decades as the main transportation method. I'd say the internal combustion engine was a lot more transformative.
xhevahir
4 hours ago
Pretty much every major historical trend of Western societies in the second half of the nineteenth century, from the development of the modern corporation to the advent of total war, was intimately tied to railroad transportation.
Marsymars
3 hours ago
For transportation of people, yeah, but rail still carries a majority of inter-city freight in North America.
oblio
40 minutes ago
Railroads built America and won multiple large wars.
anshumankmr
3 hours ago
Umm, yes? The metro, even if not a big deal in the States, has quietly changed public transport. Add moving freight, and people, over large distances, plus the bullet train that brought luxury, speed, and efficiency to rail. All of these are quietly disruptive transformations that I think we take for granted.
bee_rider
9 hours ago
Massive upfront costs and second place is just first loser. It’s like building fabs but your product is infinitely copyable. Seems pretty rough.
gerdesj
9 hours ago
What exactly is "second" place? No-one really knows what first place looks like. Everyone is certain that it will cost an arm, a leg and most of your organs.
For me, I think that, the possible winners will be close to fully funded up front and the losers will be trying to turn debt into profit and fail.
The rest of us self-hoster types are hoping for a massive glut of GPUs and RAM to be dumped in a global fire sale. We are patient, and we have all those free offerings to play with for now to keep us going. Even the subscriptions are so far somewhat reasonable, but we will flee in droves as soon as you try to ratchet up the price.
It's a bit unfortunate but we are waiting for a lot of large meme companies to die. Soz!
raw_anon_1111
8 hours ago
First place looks a lot like Google…
vkou
6 hours ago
You and the other hobbyists aren't what's driving valuations. Enterprise subscriptions are.
Davidzheng
6 hours ago
Um, Meta didn't achieve the same results yet. And does it matter if they can all achieve the same results, if they all manage high enough payoffs? I think subscription-based income is only the beginning. The next stage is AI-based subcompanies encroaching on other industries (e.g. DeepMind's drug company).
BenFranklin100
5 hours ago
I’m waiting to get an RTX 5090 on the cheap.
2OEH8eoCRo0
4 hours ago
A penny saved is a penny earned
guluarte
7 hours ago
Also, open-source models are just months behind.
ares623
9 hours ago
Just in time for a Government guaranteed backstop.
dheera
7 hours ago
People seem to assume that OpenAI and Anthropic dying would be synonymous with AI dying, and that's not the case. OpenAI and Anthropic spent a lot of capital on important research. If shareholders and equity markets can't learn to value and respect that, and instead let these companies die, new companies will be formed with the same tech, possibly by the same general group of people, thrive, and conveniently leave out said shareholders.
Google was built on the shoulders of a lot of infrastructure tech developed by former search engine giants. Unfortunately the equity markets decided to devalue those giants instead of applaud them for their contributions to society.
raw_anon_1111
6 hours ago
You weren't around pre-Google, were you? The only thing Google learned from other search engines is what not to do, like ranking based on the number of times a keyword appeared, or using expensive bespoke servers.
dheera
5 hours ago
I was around pre-Google.
Ranking was Google's 5% contribution to it. They stood on the shoulders of people who invented physical server and datacenter infrastructure, Unix/Linux, file systems, databases, error correction, distributed computing, the entire internet infrastructure, modern Ethernet, all kinds of stuff.
raw_anon_1111
5 hours ago
And none of that had to do with learning from other search engines…
golem14
3 hours ago
Eh ... I question that 5% ranking is google's only contribution, even if it was important.
Everyone stood on the shoulders of file systems and databases, ethernet (and firewalls and netscreens, ...) Well, maybe a few stood on the shoulder of PHP.
Google did in fact pretty much figure out how to scale large numbers of servers (their racking, datacenters, clustering, global file systems, etc.) before most others did. I believe it was their ability to run the search engine cheaply enough that enabled them to grow while largely retaining profitability early on.
ashirviskas
an hour ago
Yeah, I remember the moment search engines invented computing, I cannot look at sand the same way anymore /s
tootie
7 hours ago
Isn't it really the other way around? Not to say OpenAI and Anthropic haven't done important work, but the genesis of this entire market was the attention paper that came out of Google. We have the private messages inside OpenAI saying they needed to get to market ASAP or Google would kill them.
api
7 hours ago
If performance indeed asymptotes, and if we are not at the end of silicon scaling or decreasing cost of compute, then it will eventually be possible to run the very best models at home on reasonably priced hardware.
Eventually the curves cross. Eventually the computer you can get for, say, $2000, becomes able to run the best models in existence.
The only way this doesn’t happen is if models do not asymptote or if computers stop getting cheaper per unit compute and storage.
This wouldn’t mean everyone would actually do this. Only sophisticated or privacy conscious people would. But what it would mean is that AI is cheap and commodity and there is no moat in just making or running models or in owning the best infrastructure for them.
adamnemecek
8 hours ago
AI is capital intensive because autodiff kinda sucks.
zeofig
3 hours ago
I still don't understand how it's world-changing apart from considerably degrading the internet. It's laughable to compare it to railroads.
kolinko
2 hours ago
Did you try asking chatgpt to explain?
gerdesj
8 hours ago
"AI is going to be a highly-competitive" - in what way?
It is not a railroad, and the railroads did not explode in a bubble (OK, a few early engines did explode, but that is engineering). I think LLM-driven investment in massive DCs is ill-advised.
fcantournet
8 hours ago
Yes they did, at least twice in the 19th century. It was the largest financial crisis before 1929
johnnyanmac
8 hours ago
It did. I question the issue of "what problem am I trying to solve" with AI, though. Transportation across a huge swath of land was a clear problem space, and trains offered a very clear solution: lay dedicated rails and you can transport 100x the resources at 10x the speed of a horseman (and I'm probably underselling these gains). In times when trekking across a continent took months, the gains in communication and supply lines were immediately clear.
AI feels like a solution looking for a problem. Especially with 90% of consumer facing products. Were people asking for better chatbots, or to quickly deepfake some video scene? I think the bubble popping will re-reveal some incredible backend tools in tech, medical, and (eventually) robotics. But I don't think this is otherwise solving the problems they marketed on.
heavyset_go
7 hours ago
> AI feels like a solution looking for a problem.
The problem is increasing profits by replacing paid labor with something "good enough".
kolinko
2 hours ago
Isn’t this what industrialisation was always about?
johnnyanmac
6 hours ago
Doesn't sound like a very profitable problem to solve. At least, not in the long term (which no one orchestrating this is thinking in).
heavyset_go
6 hours ago
Long term is feudalism, the short term is how we get there.
johnnyanmac
5 hours ago
Well I wish them the worst of luck. Those doing this need to go back to the 1880s and see how that ended long term.
MangoToupe
6 hours ago
This is a use case that hasn't yet been proven out, though. "Good enough" for an executive may not be "good enough" to keep the company solvent, and there's no shortage of private equity morons who have no understanding of their own assets.
heavyset_go
6 hours ago
I agree, but it's the bet they're making. You don't end up with trillions in investment and valuations with chatbots and meme video generators.
aaronblohowiak
7 hours ago
Your view is ahistorical.
xbmcuser
5 hours ago
This is why I think China will win the AI race. Once AI becomes a commodity, no other country is capable of bringing down manufacturing and energy costs the way China is today. I am also rooting for them to reach parity on chip process nodes, for the same reason: they can crash the prices of PC hardware.
energy123
8 hours ago
> There's no evidence of a technological moat or a competitive advantage in any of these companies.
I disagree based on personal experience. OpenAI is a step above in usefulness. Codex and GPT 5.2 Pro have no peers right now. I'm happy to pay them $200/month. I don't use my Google Pro subscription much. Gemini 3.0 Pro spends 1/10th of the time thinking compared to GPT 5.2 Thinking and outputs a worse answer or ignores my prompt. Similar story with Deepseek.
The public benchmarks tell a different story which is where I believe the sentiment online comes from, but I am going to trust my experience, because my experience can't be benchmaxxed.
wild_egg
8 hours ago
I still find it so fascinating how experiences with these models are so varied.
I find codex & 5.2 Pro next to useless and nothing holds a candle to Opus 4.5 in terms of utility or quality.
There's probably something in how varied human brains and thought processes are. You and I likely think through problems in some fundamentally different way that leads to us favouring different models that more closely align with ourselves.
No one seems to ever talk about that though and instead we get these black and white statements about how our personally preferred model is the only obvious choice and company XYZ is clearly superior to all the competition.
yoyohello13
7 hours ago
There is always a comment like this in these threads. It’s just 50-50 whether it’s Claude or OpenAI.
razodactyl
5 hours ago
We never hear what the actual questions are. I reckon it's Claude being great at coding in general and GPT being good at niche cases. "Spiky intelligence."
avalys
7 hours ago
I’m not saying that no company will ever have an advantage. But with the pace of advances slowing, even if others are 6-12 months behind OpenAI, the conclusion is the same.
Personally I find GPT 5.2 to be nearly useless for my use case (which is not coding).
ashirviskas
an hour ago
Can you give specific examples? I'm super interested to see where it fails.
import
5 hours ago
For me, OpenAI is the worst of all. Claude Code and Gemini Deep Research are much, much better in terms of quality, while ChatGPT hallucinates and then says "sorry, you're right".
harrall
8 hours ago
I use both and ChatGPT will absolutely glaze me. I will intentionally say some BS and ChatGPT will say “you’re so right.” It will hilariously try to make me feel good.
But Gemini will put me in my place. Sometimes I ask my question to Gemini because I don’t trust ChatGPT’s affirmations.
Truthfully I just use both.
gridspy
8 hours ago
I told ChatGPT via my settings that I often make mistakes and to call out my assumptions. So now it:
1. Glazes me
2. Lists a variety of assumptions (some can be useful / interesting)
3. Answers the question
At least this way I don't spend a day pursuing an idea the wrong way because ChatGPT never pointed out something obvious.
nubg
6 hours ago
Care to share the system prompt?
guluarte
7 hours ago
Codex is sooo slow, but it is good at planning; Opus is good at coding but not as good at seeing the big picture.