lolinder
9 hours ago
> Marcus wrote: "GPT-5 hasn't dropped, Sora hasn't shipped, the company had an operating loss of $5 billion last year, there is no obvious moat, Meta is giving away similar software for free, many lawsuits pending.
> "Yet people are valuing this company at $150 billion dollars.
> "Absolutely insane. Investors shouldn't be pouring more money at higher valuations, they should be asking what is going on."
I've been saying for a while now that the lack of GPT-5 is a huge red flag for their future. They burned all the hype for GPT-5 on 4o, letting the media call their upcoming model GPT-5 for months before coming out with "4 but cheaper". o1 is impressive but again not a new generation—it's pretty clearly just the result of using 4o's cost savings and throwing tons of extra tokens at the problem behind the scenes in a technique that's easily replicable by the competition.
OpenAI has no moat and there has been no serious movement by governments to give them the moat they're so desperately lobbying for. Everything about their actions over the past year—the departures, the weird marketing tactics, the AGI hype, and the restructuring—says to me that Altman is trying to set up an exit before investors realize that the competition has already caught up and he has no plan to regain the lead.
jraines
9 hours ago
It looks dire but it seems too early for the nearly-always-wrong naysayer crowd to be right & take their victory lap.
Maybe not. But I always thought that, especially given their lobbying efforts and their desire to project a sense of seriousness (even if unfounded) around those efforts, it was very unlikely they would release a next-gen model before the US election.
I agree that the marketing/messaging, esp on social media, is borderline deranged, swinging between “we are basically The Foundation” and “pweese try my pwoduct i hope u love it (heart hands emoji)”
lolinder
9 hours ago
> it seems too early for the nearly-always-wrong naysayer crowd to be right & take their victory lap.
I agree it's too early to call for sure, but just to clarify: The naysayers are nearly always right. We just only remember the times they were wrong.
jsheard
9 hours ago
See the waves of investor hype that immediately preceded the AI hypewave: Metaverse and Blockchain. The naysayers were absolutely right.
Not to worry though, the geniuses at Meta and a16z are sure that AI will stick the landing, after they bet the farm on Metaverse and Blockchain respectively.
ben_w
8 hours ago
Naysayers in the British establishment thought the 13 colonies would come crawling back when their little experiment with democracy failed and they needed some proper aristocrats who knew what they were doing.
The space race had naysayers; the NYT printed the opinion that manned flight would take one to ten million years a week before the Wright Brothers; the iPod was lame with less space than a Nomad; SpaceX and Tesla were both dismissed when they were young; 3D printing likewise dismissed as if it was only plastic trinkets; and I've seen plenty of people here on HackerNews proclaiming that "AI will never ${x}" for various things some of which had already been accomplished by that point.
There's always a naysayer. You can't use their mere existence as evidence for much; it has to come down to the specific arguments.
philistine
8 hours ago
There's a difference between a naysayer and criticism. Using your iPod example, it was described as lame, in a way to dismiss it out of hand. But it did indeed have less space than a Nomad. That's valid, but ultimately unimportant, criticism.
People here are criticizing the business fundamentals of the OpenAI company. What are your takes on its finances? Or are you just on board: everything is great, do not look behind the curtain where we hide our five-billion-dollar losses?
hyggetrold
9 hours ago
> The naysayers are nearly always right. We just only remember the times they were wrong.
Wait what?
artwr
9 hours ago
I'll let the parent elaborate more on the intent, but the way I interpreted it was: saying that a startup will fail (i.e. being a naysayer) and being right about it is the most likely outcome, given the current "success" distribution (most businesses/startups fail).
Also, the most memorable cases are the ones where people were dismissive but ultimately wrong about the viability of the business (like the "dropbox" comment).
flkiwi
8 hours ago
I think there's a deeper implication that the naysayers _about the subject of the hype_ are usually right, rather than simply about anyone trying to exploit the hype. Metaverse was going to be the next big thing. Naysayers (correctly) laughed. Nobody talks about metaverse now.
anonymousab
8 hours ago
The vast majority of startups fail. Most attempts at business will fail. It's just the nature of things.
But I think a not uncommon pattern with "too big to fail" startups is that they can change their definition of success or failure in order to claim victory. Or at least, in order for Sam to do so.
They might not reach the stated goal of AGI or even general profitability, but if Sam and some key investors manage to come out ahead at the end of their maneuvering, then I'm sure they'll claim victory (and that they changed the world and all that pomp).
philistine
8 hours ago
Funny how their definition of changing the world amounts to: Fuck you, got mine.
tivert
7 hours ago
> Funny how their definition of changing the world amounts to: Fuck you, got mine.
It's sad, but at this point I pretty much assume anyone out of SV who claims they're trying to change the world is a liar, incompetent, or both. That whole culture has just been on a tear of goodwill-burning.
bunderbunder
8 hours ago
Because we mostly remember the things that are still around. Everyone knows about Charles Darwin but nobody knows about Erasmus Darwin, kind of thing.
lolinder
9 hours ago
Nearly every venture of any sort fails. The only times we remember the naysayers are on the rare occasions where they were wrong and the venture succeeded.
myprotegeai
8 hours ago
If it wasn't this way, it would mean new things are more likely to succeed than fail.
iamsrp
9 hours ago
No.
shmatt
9 hours ago
I would say the strawberry/o1 hype was even worse than the GPT-5 hype
There were months' worth of articles about how strawberry is considered almost dangerous internally, it's so smart. I know we only got -mini and -preview, but... this doesn't feel like AGI.
jsheard
9 hours ago
> There were months worth of articles on how strawberry is considered almost dangerous internally its so smart.
Like clockwork, every time they need to drum up excitement:
https://www.theverge.com/2019/11/7/20953040/openai-text-gene...
noobermin
8 hours ago
I've been bashing my head into walls since 2017 or so, when people were saying AI will eat the world and we have to worry about non-alignment, and I felt insane realizing no one else even asked whether it was manufactured hype. People in my life are still falling for these tactics to this day, despite seeming, to me, bright about everything else.
To be clear, it is true that transformers did change things, but the merchants are still overselling it, and everyone else laps it up without meta-thinking about it for even one second.
ben_w
8 hours ago
It may be hype, but there's plenty of solid logic behind the general case.
There's also a huge range of practical demonstrations of non-aligned, monomaniacal and not particularly smart intelligences, that literally eat humans: bacteria.
(Also lions and tigers and bears, if you want to insist that evolution doesn't itself count as intelligence).
CephalopodMD
9 hours ago
Totally agree. It took me a full week before I realized that the Strawberry/o1 model was the mysterious Q* Sam Altman has been hyping for almost a full year, since the OpenAI coup, which... is pretty underwhelming tbh. It's an impressive incremental advancement for sure! But it's really not the paradigm-shifting, GPT-5-worthy launch we were promised.
Personal opinion: I think this means we've probably exhausted all the low-hanging fruit in LLM land. This was the last thing I was reserving judgement for. When the most hyped-up big idea OpenAI has rn is basically "we're just gonna have the model dump out a wall of semi-optimized chain of thought every time and not send it over the wire" we're officially out of big ideas. Like I mean it obviously works... but that's more or less what we've _been_ doing for years now! Barring a total rethinking of LLM architecture, I think all improvements going forward will be baby steps for a while, basically moving at the same pace we've been going since gpt-4 launched. I don't think this is the path to AGI in the near term, but there's still plenty of headroom for minor incremental change.
By analogy, I feel like gpt-4 was basically the same quantum leap we got with the iPhone 4: all the basic functionality and peripherals were there by the time we got the iPhone 4 (multitasking, FaceTime, the App Store, various sensors, etc.), and everything since then has just been minor improvements. The current iPhone 16 is obviously faster, bigger, thinner, and "better" than the 4, but for the most part it doesn't really do anything extra that the 4 wasn't already capable of at some level with the right app. Similarly, I think gpt-4 was pretty much "good enough". LLMs are about as good as they're gonna get for the next little while, though they might get a little cheaper, faster, and more "aligned" (however we wanna define that). They might get slightly less stupid, but I don't think they're gonna get a whole lot smarter any time soon. Whatever we see in the next few years is probably not going to be much better than using gpt-4 with the right prompt, tool use, RAG, etc. on top of it. We'll only see improvements at the margins.
lordswork
9 hours ago
To be fair, o1 is a major breakthrough in the field. If other AI labs can't crack scaling useful inference compute, OpenAI will maintain a big lead.
lolinder
9 hours ago
Isn't o1 just applying last year's Tree of Thoughts paper in production? Is there any reason to believe that the other companies will struggle to implement their own?
impossiblefork
8 hours ago
I don't think it's tree of thoughts at all.
I think it's as they say: reinforcement learning applied to cause it to generate a relatively long 'reasoning trace' of some kind from which the answer is obtained through summarisation.
I think it's likely a cleverly simplified version of QuietSTaR, with no thought tokens, just one big generation to which the RL is applied.
The way I believe it's trained in practice is as follows: they have a bunch of examples, some at the edge of GPT-4's ability to answer, some beyond it, some that GPT-4 can answer if you're lucky with the randomness. Then they give it one of these prompts, generate a fairly long text, maybe 3x the length of the answer, and summarize that to produce the final answer. Then they use REINFORCE to reward the generated texts that increase the probability of the summary being correct.
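A toy sketch of that kind of REINFORCE loop, in case it helps make the idea concrete. Everything here is hypothetical: the "policy" just picks how long a reasoning trace to generate, and the reward is 1 if the summarized answer comes out correct. A real system would backprop through an LLM, not a three-logit table.

```python
import math
import random

random.seed(0)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Policy: logits over trace lengths. Longer traces succeed more often,
# a stand-in for "thinking longer helps on hard prompts".
logits = [0.0, 0.0, 0.0]          # trace lengths 1x, 2x, 3x answer length
success_prob = [0.2, 0.5, 0.8]    # assumed P(correct summary | trace length)

lr = 0.1
for step in range(5000):
    probs = softmax(logits)
    a = random.choices(range(3), weights=probs)[0]  # sample a trace length
    reward = 1.0 if random.random() < success_prob[a] else 0.0
    # REINFORCE: d(log pi(a))/d(logit_i) = 1[i == a] - pi(i)
    for i in range(3):
        grad = (1.0 if i == a else 0.0) - probs[i]
        logits[i] += lr * reward * grad

probs = softmax(logits)
print(probs)  # probability mass should shift toward the longest trace
```

The point of the sketch is just that reward-weighted log-likelihood updates shift probability mass toward whatever generation strategy makes the final answer correct more often, with no explicit tree search involved.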
WJW
8 hours ago
Not to be nitpicky, but being the first to deploy recent academic research papers to production should count as a breakthrough IMHO.
njtransit
9 hours ago
o1 seems like it’s basically 4o with some chain of thought bolted on. Personally, I don’t consider chain of thought a breakthrough, let alone a major one.
aunty_helen
9 hours ago
CoT can be _easily_ achieved using langgraph in a similar manner. There’s no “scaling of inference” it’s just prompting, all the way down.
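To illustrate "just prompting, all the way down", here's a minimal two-pass scaffold. `llm` is a hypothetical stand-in for any completion API, stubbed out here so the scaffold itself runs; the structure (generate a trace, then summarize it without showing it to the caller) is the part being sketched.

```python
def llm(prompt: str) -> str:
    # Stub: a real implementation would call a model provider here.
    if "step by step" in prompt:
        return "STUB REASONING\nFinal answer: 42"
    return "42"

def answer_with_cot(question: str) -> str:
    # Pass 1: elicit a long reasoning trace.
    trace = llm(f"Think step by step about: {question}")
    # Pass 2: condense the trace into a short answer, keeping the
    # trace itself hidden from the caller.
    return llm(f"Given this reasoning:\n{trace}\nState only the final answer.")

print(answer_with_cot("What is 6 * 7?"))
```

Whether o1 is just this plus training, or something more, is exactly the disagreement in this thread.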
petesergeant
9 hours ago
> o1 is a major breakthrough
Is it? I feel like if you don't care about the cost it's pretty easily replicable on any other LLM, just with a lang-chain sort of approach
axpvms
8 hours ago
speaking of which, try asking ChatGPT how many r's are in strawberry
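For contrast, the task that famously trips up chat models is a one-liner in ordinary code, since code sees characters while the model sees tokens:

```python
# Count the letter "r" in "strawberry" directly at the character level.
print("strawberry".count("r"))
```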
alfalfasprout
9 hours ago
The problem is that we're going through what happened in computer vision all over again. Convnets were getting bigger and bigger. New improvements came about in training efficiency, making medium models better, etc. Until... marginal improvements became more and more marginal.
LLMs are going through the same thing now. Better and better every iteration but increasingly we're starting to see a fast approaching wall on what they can really do with the current paradigm.
bottlepalm
8 hours ago
We are literally standing on the precipice of agency. After that, it's over. You see us approaching a wall, I see a cliff.
WJW
8 hours ago
To stick with the metaphor, it's super unclear whether agency is within grasp or on the other side of an abyss. LLMs are definitely an improvement, but it's not at all clear whether they can scale to human-level agency. If they reach that, it's even more unclear whether they could ever reach superhuman levels, given that all their training data is human-level.
And finally, we can see from normal human society that it is hardly ever the smartest humans who achieve the most or rise to the highest levels of power. There is no reason to believe that an AI with agency would be an inherent "it's over" scenario.
philistine
8 hours ago
We kill millions of humans with agency every day on this planet. And none of them immediately die if we stop providing them with electrical power ... well I guess a small amount of them do. Anyway, we'll be fine. If the AI come, can they immediately stop us from growing food? How can an AI prevent my orchard from giving apples next year? Sure, it can mess up Facebook, but at this point that'd be a benefit.
WJW
7 hours ago
I don't want to be a hater but according to the UN about 61 million humans die per year, so that only comes out to ~167k per day rather than millions. Most of those will die from old age too, rather than "being killed".
Your main point is true though, even superhuman AI would have a rough time in actual combat. It's just too dependent on electricity and datacenters with known locations to actually have a chance.
labster
4 hours ago
I’m sure the superintelligent AI will convince humans to transport its core while plugged into a potato battery. But honestly did the supply chain attack on Lebanese pagers last week teach you nothing? AIs should be great at that.
philistine
4 minutes ago
What? Planting C4 in pagers? Those pagers did not blow up on their own. Hundreds of people needed to put all that explosive in all those devices.
Our world is still analogue.
tivert
7 hours ago
> We kill millions of humans with agency every day on this planet. And none of them immediately die if we stop providing them with electrical power ... well I guess a small amount of them do. Anyway, we'll be fine. If the AI come, can they immediately stop us from growing food? How can an AI prevent my orchard from giving apples next year? Sure, it can mess up Facebook, but at this point that'd be a benefit.
1. IMHO, genocidal apocalypse scenarios like you describe are the wrong way to think about the societal danger of AGI. I think a far more likely outcome, and nearly as disastrous, is AGI eliminating the dependency of capital on labor, leading to a global China Shock on steroids (e.g. unimaginable levels of inequality and wealth concentration, which no level of personal adaptability could overcome).
2. Even in your apocalypse scenario, I think you underestimate the damage that could be done. I don't have an orchard, so I know I'm fucked if urban life-support systems get messed up over large enough area that no aid is coming to my local area afterwards. And a genocidal AI that wasn't blindingly stupid would wait until it had control of enough robots to act in the real world, after which it could agent-orange your orchard.
orionsbelt
9 hours ago
I don't know if OpenAI has a moat or if they will succeed long term against competitors, and certainly the valuation is high, but pointing to the lack of GPT-5 already is laughable. The speed of improvements - including o1-preview and the new voice mode - and the business deals (Apple/Siri, whatever he will use the massive capital he raises for) are astounding. The idea that they have been resting on their laurels and have nothing else to show is just demonstrably untrue to date. Maybe the exec departures are a canary in the coal mine for the future, but I'm willing to give them at least a couple years from the last time they shipped something that felt magical to me (most recently, yesterday).
lolinder
9 hours ago
> The speed of improvements - including GPT 4o1 preview and the new voice mode - and the business deals (Apple/Siri, whatever he will use the massive capital he raises for) are astounding.
They're also rapidly replicable. I don't believe for a moment that Apple is designing their system in a way that doesn't allow switching at a moment's notice, and everything they're doing is copied within 6 months by competitors including LLaMA, which Meta keeps releasing for free.
> The idea that they have been resting on their laurels and have nothing else to show is just demonstrably untrue to date.
I didn't assert that they were resting on their laurels, I asserted that they have no path forward to the next generation and AGI. If they did have a path they wouldn't keep burning their hype for it on applications of the previous generation that are easy to replicate.
404mm
7 hours ago
> o1 is impressive
I agree, the capabilities and reasoning are really nice compared to 4o. But my first thought when testing it: how are they going to pay for it? Trivial questions burn 15s, and more complex ones 30-40s, of processing time. How does this scale to millions of users??
ben_w
8 hours ago
What they were lobbying for until very recently isn't a moat, if anything it's the opposite: "we know anything less capable is fine, focus your attention on us and make sure what we do is safe".
What's changed very recently, which is a moat and which they may or may not get, is seven 5 GW data centres — the equivalent of "can we tile half of Rhode Island in PV?"
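A rough back-of-envelope on that comparison. The panel density and capacity factor below are assumptions, not sourced numbers, but they put the Rhode Island framing in the right ballpark:

```python
# Area of PV needed to supply seven 5 GW data centres continuously.
datacenters = 7
gw_each = 5
demand_w = datacenters * gw_each * 1e9   # 35 GW of continuous load, in watts

panel_w_per_m2 = 200     # assumed peak panel output density
capacity_factor = 0.20   # assumed average/peak ratio for solar

area_km2 = demand_w / (panel_w_per_m2 * capacity_factor) / 1e6
rhode_island_km2 = 3144  # approximate total area

print(f"{area_km2:.0f} km^2, {area_km2 / rhode_island_km2:.0%} of Rhode Island")
```

Under these assumptions it comes out to roughly a quarter to a third of Rhode Island, so "half" is the right order of magnitude.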
lolinder
8 hours ago
> we know anything less capable is fine, focus your attention on us and make sure what we do is safe
This is a form of regulatory capture. You get a lead against your competition and then persuade governments to make it hard for people to follow you by imposing rules and regulations.
Regulatory capture is used in this way to build moats.
ben_w
8 hours ago
> This is a form of regulatory capture. You get a lead against your competition and then persuade governments to make it hard for people to follow you by imposing rules and regulations.
They were not making it hard to follow them. That's the point. They were saying they themselves don't really know what they're doing, while saying open source should be protected from any regulations imposed on them.
Hard to go past them, perhaps; but even then, given the scale of compute for training bigger models, the only people who could even try would have an easy time following whatever rules are created.
Janicc
8 hours ago
What's with his obsession with GPT-5? Altman has consistently been saying there will be no GPT-5 this year, almost since the year's beginning. He's acting like OpenAI promised GPT-5 and is unable to release it or something.
sroussey
9 hours ago
Will there even be a GPT-5?
Maybe next up is o2 then o3 and so on.
zooq_ai
9 hours ago
This reminds me of all the "Tesla is dead/bankrupt" takes in 2017/18, mostly from clueless people (Gary Marcus is a certified idiot) who don't know anything about scaling a business or the TAM of the knowledge industry ($30T and growing).
Always happy to take the other side of the bet against popular HN comments. (See META, TSLA.)
lolinder
9 hours ago
"The fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown."
You're welcome to whichever side you want to take, but you're not right by the simple virtue of being contrary.
groby_b
7 hours ago
All that proves is that shitty businesses that already have scale take a long while to crumble. (See also: Xitter)