jerf
9 months ago
I don't know about "Winter". The original "AI Winter" was near-total devastation. But it's probably reasonable to think that after the hype train of the last year or two we're due to be headed into the Trough of Disillusionment for LLM-based AI technologies on the standard Gartner hype cycle: https://en.wikipedia.org/wiki/Gartner_hype_cycle
mewpmewp2
9 months ago
Maybe, but I already find modern AI immensely useful in a lot of ways, and I see things constantly improving. E.g. realistic video generation, music generation, OpenAI advanced voice mode - it's still wild to me how good these are and how well LLMs can perform.
I still remember that when I first saw GPT-3.5 I thought what it could do must be impossible, that there had to be some sort of trickery involved, but no.
I feel like I'm still impressed and amazed daily by what AI can do now.
jerf
9 months ago
Note the "trough of disillusionment" does not drop to 0.
It is also a measure of hype, not utility.
The current crop of AI tools have their uses and they aren't going away. However, the hype was basically built on the principle of "YOU CAN JUST WAVE AI AT ANYTHING AND REMOVE ALL THE PEOPLE!!!1!", without any need to think about how the AI will be useful, or think about how it will fail, or, you know, doing any of the usual engineering that new technologies inevitably need. You won't need to! The AI will just engineer itself!
This is, of course, bunk.
mewpmewp2
9 months ago
I wonder whether people are just in very different environments? I haven't witnessed this kind of hype except for a few cherry-picked, maybe even out-of-context statements.
Where do people actually notice all of the hype? Because what I'm noticing more is people complaining about the AI hype than the hype itself. Since the beginning, basically.
I'm not from the US, so maybe I'm just not exposed to those things. When I did travel to SF some time ago I saw a lot of weird banners, so is this the hype that people are talking about?
I do see people talking about the future and what AI could mean for it, that it could replace X and Y, with many different opinions and many different proposed timelines, but I think all of that is still plausible.
Do companies heavily invest in AI? Do they talk a lot about AI in their earnings release? Do they try to put AI into a lot of products? For sure - but I think that's a very reasonable thing to do when a new technology like this arises. It does have promise, we don't know exactly how much yet, but if we don't try it out, we won't find out either. You should try it out and see where it works and where it doesn't. Given the seeming promise of this, I think current investments in it are very, very reasonable.
It could fail and plateau, but given where it is right now and how much it has evolved in the past years, I think it makes sense to invest that much in it.
Despite seeing amazing things every day that make me wonder how they are possible, things I never would have thought five years ago would be possible now, there are people writing articles about how the AI hype is dying out and how the AI winter is coming, which seems crazy to me. It's only been a few years, with amazing advancements. Tech has evolved at a pace never seen before, and people are claiming that things are slowing down?
jerf
9 months ago
Well, one of the measures is, is anybody actually making money with this AI yet, in the sense of mining gold and not selling shovels?
The answer is, it's fairly hard to find a solid yes. The ad companies claim it's contributing to revenue, but many of them have plenty of reasons to keep the bubble going and aren't going to say anything else anyway, and it's hard to establish what is contributing to what in the ad space in the first place.
It's been put in a lot of things, but it is not at all clear that it is contributing money, as opposed to being in things that were loss leaders anyhow. How, exactly, does an AI assistant on Amazon's shopping site "make money"? It's an intrinsically fuzzy question under the best of circumstances.
Evidence that it has helped programmers is decidedly mixed at best. For everyone saying it has given them superpowers we have an awful lot of reports of buggy generated code and more bugs making it into final code as a result.
You might say "it's not all about the money", which is true, but again, this is about hype, not social utility. I don't see AI living up to the hype. All the moneymakers are the shovel-sellers. If AI was living up to the hype, somebody ought to be making money by now.
Part of the reason it has not lived up to the hype is the sky-high bar the hype has set. The stock market bubble is not pricing companies like Nvidia to make some decent money on AI over the next few years. They're priced as if they're going to be the only company that can do AI and as if everyone has barely begun to spend on AI. But if returns don't start coming back on the AI spend, that valuation is going to prove premature.
It can help to look back at a previous example of this exuberance to see what I'm talking about: The Dot-Com boom. The reality is, basically everything that the Dot Com boom promised happened! Even the thing people mocked for years, "selling pet food online", happens now.
But it happened 20 years later. Far too late for a company founded in 1998 that absolutely depended on having 2020-levels of internet infrastructure.
AI isn't going to disappear and we may even be underestimating the change it will bring in the long term. But that doesn't mean that the curve is going to smoothly slope up over the next 50 years. These things often get out "over their skis". AI seems badly over its skis to me, not because it won't be as useful in 20 years as it is promised, but because it is not as useful right now as is promised.
mewpmewp2
9 months ago
> Well, one of the measures is, is anybody actually making money with this AI yet, in the sense of mining gold and not selling shovels?
With new tech, or when starting anything new, it's rare to be profitable within the first 5 years in the first place. There's certainly revenue coming in, in many places. I spend a lot on AI services myself, from all the current popular LLM variants, like Claude, GPT, Perplexity, to music generation tools like Suno. Some of them I use for fun, but for many of them I do think my productivity and value output have increased by more than I pay for them. Many I use for experimenting or just out of curiosity. There's a lot of revenue coming in, but it also makes sense that the costs right now are higher than what these companies are actually making. I have also directly increased my income thanks to AI tools, because I do my usual work faster, and I do freelancing on the side that I charge quite a bit for on an hourly basis. Far more than I spend on those AI tools. Without AI tools I couldn't do the work as fast or have the energy to produce this much.
> How, exactly, does an AI assistant on Amazon's shopping site "make money"?
It depends on how this AI assistant is built. I have a lot of thoughts about shopping UX, and I think it's a UX related question, how a better UX will increase e-commerce conversion, but I don't want to go that deep into it here. I definitely imagine ways how AI can improve UX in such a way that it finds matching products for the customer much faster than standard UX would. This would provide value because it takes less time to find the product and it would possibly find a higher quality match.
> Evidence that it has helped programmers is decidedly mixed at best. For everyone saying it has given them superpowers we have an awful lot of reports of buggy generated code and more bugs making it into final code as a result.
I know it's definitely helped me a lot. I don't know if it's a skill issue or a mindset issue or what it is that makes some people not see it as a valuable multiplier for their productivity, but I haven't noticed myself producing buggier work because of it.
> You might say "it's not all about the money", which is true, but again, this is about hype, not social utility. I don't see AI living up to the hype. All the moneymakers are the shovel-sellers. If AI was living up to the hype, somebody ought to be making money by now.
I mean, I wouldn't say that. I do think it has to make money eventually, but I also think that with new tech there's always a period where it makes strategic sense to lose money, just like when starting any new company. And some definitely do make money. Again, I individually make more money, because I can work more, and I can also translate it into freelance work, which rewards the extra output more directly than purely salaried work would. And I use AI at my day job too, which helps me spend fewer hours on it.
> Part of the reason it has not lived up to the hype is the sky-high bar the hype has set. The stock market bubble is not pricing companies like nVidia to make some decent money on AI over the next few years. It's priced like they're going to be the only company that can do AI and that everyone has not yet begun to spend on AI. But if returns don't start coming back on the AI spend, that valuation is going to prove to be premature.
This argument requires coming up with specific numbers, and market valuation is very nuanced.
> It can help to look back at a previous example of this exuberance to see what I'm talking about: The Dot-Com boom. The reality is, basically everything that the Dot Com boom promised happened! Even the thing people mocked for years, "selling pet food online", happens now.
Yeah, but I think that's an argument for AI rather than against it?
> But it happened 20 years later. Far too late for a company founded in 1998 that absolutely depended on having 2020-levels of internet infrastructure.
The major players right now have a lot of funds to keep going with it though.
> AI isn't going to disappear and we may even be underestimating the change it will bring in the long term. But that doesn't mean that the curve is going to smoothly slope up over the next 50 years. These things often get out "over their skis". AI seems badly over its skis to me, not because it won't be as useful in 20 years as it is promised, but because it is not as useful right now as is promised.
We don't know the future or the shape of the curve, and no one can predict the timelines for sure, but based on the knowledge we have, I think it makes sense to put in the money that is currently being put into AI, given at least its capabilities and how fast those capabilities are improving. If we estimated a 50% chance of AGI by 2035, then people are absolutely not putting enough money in right now, because if AGI were to happen by 2035, it would make sense for very many to go absolutely all-in bonkers on it.
jerf
9 months ago
You seem to be unable to separate the concept of "hype" from "value".
Until you succeed in doing that, you're going to be terminally confused by a lot of things.
player1234
9 months ago
Infinite content produced by AI has close to no value.
foogazi
9 months ago
It can't be economically sustainable if this is it, right?
brotchie
9 months ago
Feels different to past hype cycles (Internet bubble, Crypto bubble).
LLMs gained meaningful capabilities very quickly. E.g. one week they were not that useful, the next week they were.
A function that takes text and returns text isn't that useful without it being integrated into products, and this takes time.
The next 12-24 months will be the AIfication of many workflows: that is, discovering and integrating LLM-based reasoning into business processes. Assuming even gradual improvement in LLM capabilities over time, all of these AI-enhanced business processes will simply get better.
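To make the "function that takes text and returns text" point concrete, here's a minimal sketch of what integrating an LLM into a business process looks like: the model call itself is one line, and the engineering work is the ordinary plumbing around it. All names here are hypothetical, and the model is stubbed with a canned function rather than a real API client.

```python
def llm(prompt: str) -> str:
    """Stand-in for a real model call; in practice this would hit an API.

    Returns canned text so the sketch is self-contained.
    """
    if "classify" in prompt:
        return "Refund "
    return "unknown"


def route_support_ticket(ticket: str) -> str:
    """One 'AIfied' workflow step: ask the model to classify a ticket,
    then wrap its raw text output in ordinary validation code.
    """
    raw = llm(f"classify this support ticket: {ticket}")
    answer = raw.strip().lower()  # normalize free-form model output
    allowed = {"refund", "shipping", "other"}
    # Fall back to a human when the model's answer is outside the schema.
    return answer if answer in allowed else "needs_human_review"
```

The point of the sketch is that the value comes less from the text-to-text function itself than from the validation, fallbacks, and routing logic built around it, which is exactly the integration work that takes time.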
Diffusion of technology is slow, slow, slow, and then fast. As I become more capable with AI (e.g. learning which of my tasks as an engineer it actually helps with), I'm getting better and better at using it. So there's a non-linear learning curve: as you learn to use the technology better, you unlock more productivity.
vrighter
9 months ago
The AIfication of products, to me, sounds like making them both less reliable and less predictable. Not a good thing.
contravariant
9 months ago
Honestly I think we're already there, it just takes a bit before the realisation trickles down.
The successful uses of LLMs don't seem to depart too far from the basic chatbot that started the whole hype. And the truly 'magic' uses seem to fail in practice because even a small error rate is way too high for a system that cannot learn from its mistakes (quickly).
GaggiX
9 months ago
>don't seem to depart too far from the basic chatbot that started the whole hype.
Is ChatGPT-3.5 a basic chatbot now? It's been less than two years since it was SOTA.
urbandw311er
9 months ago
Nicely put