OpenAI in throes of executive exodus as three walk at once

132 points, posted 10 hours ago
by gsky

117 Comments

lolinder

9 hours ago

> Marcus wrote: "GPT-5 hasn't dropped, Sora hasn't shipped, the company had an operating loss of $5 billion last year, there is no obvious moat, Meta is giving away similar software for free, many lawsuits pending.

> "Yet people are valuing this company at $150 billion dollars.

> "Absolutely insane. Investors shouldn't be pouring more money at higher valuations, they should be asking what is going on."

I've been saying for a while now that the lack of GPT-5 is a huge red flag for their future. They burned all the hype for GPT-5 on 4o, letting the media call their upcoming model GPT-5 for months before coming out with "4 but cheaper". o1 is impressive but again not a new generation—it's pretty clearly just the result of using 4o's cost savings and throwing tons of extra tokens at the problem behind the scenes in a technique that's easily replicable by the competition.

OpenAI has no moat and there has been no serious movement by governments to give them the moat they're so desperately lobbying for. Everything about their actions over the past year—the departures, the weird marketing tactics, the AGI hype, and the restructuring—says to me that Altman is trying to set up an exit before investors realize that the competition has already caught up and he has no plan to regain the lead.

jraines

9 hours ago

It looks dire but it seems too early for the nearly-always-wrong naysayer crowd to be right & take their victory lap.

Maybe not. But I always thought, especially with their lobbying efforts and desire to project a sense of seriousness (even if unfounded) around those efforts, that it was very unlikely they would release a next-gen model before the US election.

I agree that the marketing/messaging, esp on social media, is borderline deranged, swinging between “we are basically The Foundation” and “pweese try my pwoduct i hope u love it (heart hands emoji)”

lolinder

9 hours ago

> it seems too early for the nearly-always-wrong naysayer crowd to be right & take their victory lap.

I agree it's too early to call for sure, but just to clarify: The naysayers are nearly always right. We just only remember the times they were wrong.

jsheard

9 hours ago

See the waves of investor hype that immediately preceded the AI hypewave: Metaverse and Blockchain. The naysayers were absolutely right.

Not to worry though, the geniuses at Meta and a16z are sure that AI will stick the landing, after they bet the farm on Metaverse and Blockchain respectively.

ben_w

8 hours ago

Naysayers in the British establishment thought the 13 colonies would come crawling back when their little experiment with democracy failed and they needed some proper aristocrats who knew what they were doing.

The space race had naysayers; the NYT printed the opinion that manned flight would take one to ten million years, just weeks before the Wright Brothers flew; the iPod was lame with less space than a Nomad; SpaceX and Tesla were both dismissed when they were young; 3D printing was likewise dismissed as if it could only ever make plastic trinkets; and I've seen plenty of people here on HackerNews proclaiming that "AI will never ${x}" for various things, some of which had already been accomplished by that point.

There's always a naysayer. You can't use their mere existence as evidence for much; it has to come down to the specific arguments.

philistine

8 hours ago

There's a difference between naysaying and criticism. Using your iPod example: it was described as lame as a way to dismiss it out of hand. But it did indeed have less space than a Nomad. That's valid, but ultimately unimportant, criticism.

People here are criticizing the business fundamentals of the OpenAI company. What are your takes on its finances? Or are you just on board: everything is great, don't look behind the curtain where we hide our five-billion-dollar losses?

hyggetrold

9 hours ago

> The naysayers are nearly always right. We just only remember the times they were wrong.

Wait what?

artwr

9 hours ago

I'll let the parent elaborate on the intent, but the way I interpreted it was: saying that a startup will fail (i.e. being a naysayer) and being right about it is the most likely outcome given the current "success" distribution (most businesses/startups fail).

Also, the most memorable cases are when people were dismissive but ultimately wrong about the viability of the business (like the "Dropbox" comment).

flkiwi

8 hours ago

I think there's a deeper implication that the naysayers _about the subject of the hype_ are usually right, rather than simply about anyone trying to exploit the hype. Metaverse was going to be the next big thing. Naysayers (correctly) laughed. Nobody talks about metaverse now.

anonymousab

8 hours ago

The vast majority of startups fail. Most attempts at business will fail. It's just the nature of things.

But I think a not uncommon pattern with "too big to fail" startups is that they can change their definition of success or failure in order to claim victory. Or at least, in order for Sam to do so.

They might not reach the stated goal of AGI or even general profitability, but if Sam and some key investors manage to come out ahead at the end of their maneuvering, then I'm sure they'll claim victory (and that they changed the world and all that pomp).

philistine

8 hours ago

Funny how their definition of changing the world amounts to: Fuck you, got mine.

tivert

7 hours ago

> Funny how their definition of changing the world amounts to: Fuck you, got mine.

It's sad, but at this point I pretty much assume anyone out of SV who claims they're trying to change the world is a liar, incompetent, or both. That whole culture has just been on a tear of goodwill-burning.

bunderbunder

8 hours ago

Because we mostly remember the things that are still around. Everyone knows about Charles Darwin but nobody knows about Erasmus Darwin, kind of thing.

lolinder

9 hours ago

Nearly every venture of any sort fails. The only times we remember the naysayers are on the rare occasions where they were wrong and the venture succeeded.

myprotegeai

8 hours ago

If it wasn't this way, it would mean new things are more likely to succeed than fail.

shmatt

9 hours ago

I would say the strawberry/o1 hype was even worse than the GPT-5 hype

There were months' worth of articles on how strawberry is considered almost dangerous internally, it's so smart. I know we only got -mini and -preview but... this doesn't feel like AGI

jsheard

9 hours ago

> There were months' worth of articles on how strawberry is considered almost dangerous internally, it's so smart.

Like clockwork, every time they need to drum up excitement:

https://www.theverge.com/2019/11/7/20953040/openai-text-gene...

noobermin

8 hours ago

I've been bashing my head against walls since 2017 or so, when people were saying AI would eat the world and we had to worry about non-alignment, and I felt insane realizing no one else even asked whether it was manufactured hype. People in my life are still falling for these tactics to this day, despite seeming, to me, bright regarding everything else.

To be clear, it is true that transformers did change things, but the merchants are still overselling it and everyone else laps it up without meta-thinking about it for even one second.

ben_w

8 hours ago

It may be hype, but there's plenty of solid logic behind the general case.

There's also a huge range of practical demonstrations of non-aligned, monomaniacal and not particularly smart intelligences that literally eat humans: bacteria.

(Also lions and tigers and bears, if you want to insist that evolution doesn't itself count as intelligence).

CephalopodMD

9 hours ago

Totally agree. It took me a full week before I realized that the Strawberry/o1 model was the mysterious Q* Sam Altman has been hyping up for almost a full year since the OpenAI coup, which... is pretty underwhelming tbh. It's an impressive incremental advancement for sure! But it's really not the paradigm-shifting, GPT-5-worthy launch we were promised.

Personal opinion: I think this means we've probably exhausted all the low hanging fruit in LLM land. This was the last thing I was reserving judgement for. When the most hyped up big idea openai has rn is basically "we're just gonna have the model dump out a wall of semi-optimized chain of thought every time and not send it over the wire" we're officially out of big ideas. Like I mean it obviously works... but that's more or less what we've _been_ doing for years now! Barring a total rethinking of LLM architecture, I think all improvements going forward will be baby steps for a while, basically moving at the same pace we've been going since gpt-4 launched. I don't think this is the path to AGI in the near term, but there's still plenty of headroom for minor incremental change.

By analogy, I feel like gpt-4 was basically the same quantum leap we got with the iPhone 4: all the basic functionality and peripherals were there by the time we got the iPhone 4 (multitasking, FaceTime, the App Store, various sensors, etc.), and everything since then has just been minor improvements. The current iPhone 16 is obviously faster, bigger, thinner, and "better" than the 4, but for the most part it doesn't really do anything extra that the 4 wasn't already capable of at some level with the right app. Similarly, I think gpt-4 was pretty much "good enough". LLMs are about as good as they're gonna get for the next little while, though they might get a little cheaper, faster, and more "aligned" (however we wanna define that). They might get slightly less stupid, but I don't think they're gonna get a whole lot smarter any time soon. Whatever we see in the next few years is probably not going to be much better than using gpt-4 with the right prompt, tool use, RAG, etc. on top of it. We'll only see improvements at the margins.

lordswork

9 hours ago

To be fair, o1 is a major breakthrough in the field. If other AI labs can't crack scaling useful inference compute, OpenAI will maintain a big lead.

lolinder

9 hours ago

Isn't o1 just applying last year's Tree of Thoughts paper in production? Is there any reason to believe that the other companies will struggle to implement their own?

https://github.com/princeton-nlp/tree-of-thought-llm

impossiblefork

8 hours ago

I don't think it's tree of thoughts at all.

I think it's as they say: reinforcement learning applied to cause it to generate a relatively long 'reasoning trace' of some kind from which the answer is obtained through summarisation.

I think it's likely a cleverly simplified version of QuietSTaR, with no thought tokens, just one big generation to which the RL is applied.

The way I believe it's trained in practice is as follows: they have a bunch of examples, some at the edge of GPT-4's ability to answer, some beyond it, some that GPT-4 can answer if you're lucky with the randomness. Then they give it one of these prompts, generate a fairly long text, maybe 3x the length of the answer, and summarize that to produce the final answer. Then they use REINFORCE to reward the generated texts that increase the probability of the summary being correct.
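If that account is roughly right, the core loop is small. Here's a toy PyTorch sketch of REINFORCE over sampled "reasoning traces"; to be clear, this is my speculation, not OpenAI's actual method, and the tiny policy and the reward function are made-up stand-ins:

    # Toy REINFORCE-over-traces sketch; all sizes and helpers are hypothetical.
    import torch
    import torch.nn as nn

    VOCAB, TRACE_LEN = 32, 12

    class TracePolicy(nn.Module):
        """Stand-in for the LLM: emits a distribution over next trace tokens."""
        def __init__(self):
            super().__init__()
            self.emb = nn.Embedding(VOCAB, 64)
            self.rnn = nn.GRU(64, 64, batch_first=True)
            self.out = nn.Linear(64, VOCAB)

        def forward(self, tokens):
            h, _ = self.rnn(self.emb(tokens))
            return self.out(h)  # logits at each position

    def reward(trace):
        """Hypothetical: 1.0 if the summarized answer would be correct, else 0.0.
        Faked here by rewarding traces that contain token 7."""
        return float((trace == 7).any())

    policy = TracePolicy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    for step in range(200):
        # Sample a trace autoregressively (the long "reasoning" generation).
        tokens = torch.zeros(1, 1, dtype=torch.long)
        log_probs = []
        for _ in range(TRACE_LEN):
            dist = torch.distributions.Categorical(logits=policy(tokens)[:, -1])
            tok = dist.sample()
            log_probs.append(dist.log_prob(tok))
            tokens = torch.cat([tokens, tok.unsqueeze(0)], dim=1)

        # REINFORCE: push up the log-likelihood of traces whose answer scored well.
        loss = -reward(tokens) * torch.stack(log_probs).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

The real thing would presumably add a baseline, KL penalties against the base model, and so on; the point is just that the trick is RL over one big generation, not a search tree.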

WJW

8 hours ago

Not to be nitpicky, but being the first to deploy recent academic research in production should count as a breakthrough IMHO.

njtransit

9 hours ago

o1 seems like it’s basically 4o with some chain of thought bolted on. Personally, I don’t consider chain of thought a breakthrough, let alone a major one.

aunty_helen

9 hours ago

CoT can be _easily_ achieved using langgraph in a similar manner. There’s no “scaling of inference”; it’s just prompting, all the way down.
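To make that concrete, here's roughly what "prompting all the way down" looks like in plain Python; `complete` is a hypothetical stand-in for whatever chat-completion client you use (langgraph, LangChain, a raw API call, whatever):

    def complete(prompt: str) -> str:
        """Placeholder for a real LLM call; wire up your client of choice here."""
        return "(model output)"

    def answer_with_cot(question: str) -> str:
        # Step 1: ask for a long private reasoning trace.
        trace = complete(
            "Think step by step about the following question. "
            "Write out all intermediate reasoning.\n\n" + question
        )
        # Step 2: distill the trace into a short final answer,
        # never showing the raw trace to the user.
        return complete(
            "Given this reasoning, state only the final answer.\n\n"
            f"Question: {question}\n\nReasoning: {trace}"
        )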

petesergeant

9 hours ago

> o1 is a major breakthrough

Is it? I feel like if you don't care about the cost it's pretty easily replicable on any other LLM, just with a lang-chain sort of approach

axpvms

8 hours ago

Speaking of which, try asking ChatGPT how many r's are in "strawberry".
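The ground truth fits in one line, which is the joke; models stumble because they see tokens, not letters:

    print("strawberry".count("r"))  # 3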

alfalfasprout

9 hours ago

The problem is that we're going through what happened in computer vision all over again. Convnets were getting bigger and bigger. New improvements came about in training efficiency, making medium models better, etc. Until... marginal improvements became more and more marginal.

LLMs are going through the same thing now. Better and better every iteration but increasingly we're starting to see a fast approaching wall on what they can really do with the current paradigm.

bottlepalm

8 hours ago

We are literally standing on the precipice of agency. After that, it's over. You see us approaching a wall, I see a cliff.

WJW

8 hours ago

To stick with the metaphor, it's super unclear whether agency is within grasp or on the other side of an abyss. LLMs are definitely an improvement, but it's not at all clear if they can scale to human-level agency. If they reach that, it's even more unclear whether they could ever reach superhuman levels, given that all their training data is human-level.

And finally, we can see from normal human society that it is hardly ever the smartest humans who achieve the most or rise to the highest levels of power. There is no reason to believe that an AI with agency would be an inherent "it's over" scenario.

philistine

8 hours ago

We kill millions of humans with agency every day on this planet. And none of them immediately die if we stop providing them with electrical power ... well I guess a small amount of them do. Anyway, we'll be fine. If the AI come, can they immediately stop us from growing food? How can an AI prevent my orchard from giving apples next year? Sure, it can mess up Facebook, but at this point that'd be a benefit.

WJW

7 hours ago

I don't want to be a hater but according to the UN about 61 million humans die per year, so that only comes out to ~167k per day rather than millions. Most of those will die from old age too, rather than "being killed".

Your main point is true though, even superhuman AI would have a rough time in actual combat. It's just too dependent on electricity and datacenters with known locations to actually have a chance.

labster

4 hours ago

I’m sure the superintelligent AI will convince humans to transport its core while plugged into a potato battery. But honestly did the supply chain attack on Lebanese pagers last week teach you nothing? AIs should be great at that.

philistine

4 minutes ago

What? Planting C4 in pagers? Those pagers did not blow up on their own. Hundreds of people needed to put all that explosive in all those devices.

Our world is still analogue.

tivert

7 hours ago

> We kill millions of humans with agency every day on this planet. And none of them immediately die if we stop providing them with electrical power ... well I guess a small amount of them do. Anyway, we'll be fine. If the AI come, can they immediately stop us from growing food? How can an AI prevent my orchard from giving apples next year? Sure, it can mess up Facebook, but at this point that'd be a benefit.

1. IMHO, genocidal apocalypse scenarios like you describe are the wrong way to think about the societal danger of AGI. I think a far more likely outcome, and nearly as disastrous, is AGI eliminating the dependency of capital on labor, leading to a global China Shock on steroids (e.g. unimaginable levels of inequality and wealth concentration, which no level of personal adaptability could overcome).

2. Even in your apocalypse scenario, I think you underestimate the damage that could be done. I don't have an orchard, so I know I'm fucked if urban life-support systems get messed up over large enough area that no aid is coming to my local area afterwards. And a genocidal AI that wasn't blindingly stupid would wait until it had control of enough robots to act in the real world, after which it could agent-orange your orchard.

orionsbelt

9 hours ago

I don't know if OpenAI has a moat or whether they will succeed long term against competitors, and certainly the valuation is high, but pointing to the lack of GPT-5 already is laughable. The speed of improvements - including o1-preview and the new voice mode - and the business deals (Apple/Siri, whatever he will use the massive capital he raises for) are astounding. The idea that they have been resting on their laurels and have nothing else to show is just demonstrably untrue to date. Maybe the exec departures are a canary in the coal mine for the future, but I'm willing to give them at least a couple years from the last time they shipped something magical to me (most recently, yesterday).

lolinder

9 hours ago

> The speed of improvements - including o1-preview and the new voice mode - and the business deals (Apple/Siri, whatever he will use the massive capital he raises for) are astounding.

They're also rapidly replicable. I don't believe for a moment that Apple is designing their system in a way that doesn't allow switching at a moment's notice, and everything they're doing is copied within 6 months by competitors including LLaMA, which Meta keeps releasing for free.

> The idea that they have been resting on their laurels and have nothing else to show is just demonstrably untrue to date.

I didn't assert that they were resting on their laurels, I asserted that they have no path forward to the next generation and AGI. If they did have a path they wouldn't keep burning their hype for it on applications of the previous generation that are easy to replicate.

404mm

7 hours ago

> o1 is impressive

I agree, the capabilities and reasoning are really nice compared to 4o. But my first thought when testing it: how are they going to pay for it? Trivial questions burn 15s, and more complex ones 30-40s, of processing time. How does this scale to millions of users?

ben_w

8 hours ago

What they were lobbying for until very recently isn't a moat, if anything it's the opposite: "we know anything less capable is fine, focus your attention on us and make sure what we do is safe".

What's changed very recently, which is a moat and which they may or may not get, is seven 5 GW data centres — the equivalent of "can we tile half of Rhode Island in PV?"
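That quip roughly checks out on a napkin. Here's a sanity check where every constant is my own rough assumption, not a sourced figure:

    data_centres = 7
    gw_each = 5.0                  # continuous draw per site
    capacity_factor = 0.20         # assumed average PV output vs nameplate
    panel_density_w_m2 = 100.0     # assumed nameplate watts per m2 of land
    ri_land_km2 = 2_678            # Rhode Island land area, roughly

    avg_draw_gw = data_centres * gw_each              # 35 GW continuous
    nameplate_gw = avg_draw_gw / capacity_factor      # 175 GW of panels
    area_km2 = nameplate_gw * 1e9 / panel_density_w_m2 / 1e6
    print(f"{area_km2:,.0f} km2 of PV, {area_km2 / ri_land_km2:.0%} of RI land")
    # ~1,750 km2; tweak the assumptions and "half of Rhode Island" is in range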

lolinder

8 hours ago

> we know anything less capable is fine, focus your attention on us and make sure what we do is safe

This is a form of regulatory capture. You get a lead against your competition and then persuade governments to make it hard for people to follow you by imposing rules and regulations.

Regulatory capture is used in this way to build moats.

ben_w

8 hours ago

> This is a form of regulatory capture. You get a lead against your competition and then persuade governments to make it hard for people to follow you by imposing rules and regulations.

They were not making it hard to follow them. That's the point. They were saying they themselves don't really know what they're doing, while saying open source should be protected from any regulations imposed on them.

Hard to go past them, perhaps; but even then, given the scale of compute for training bigger models, the only people who could even try would have an easy time following whatever rules are created.

Janicc

8 hours ago

What's with his obsession with GPT-5? Altman has consistently been saying that there will be no GPT-5 this year, almost since the year's beginning. He's acting like OpenAI promised GPT-5 and are unable to release it or something.

sroussey

9 hours ago

Will there even be a GPT-5?

Maybe next up is o2 then o3 and so on.

zooq_ai

9 hours ago

This reminds me of all the "Tesla is dead/bankrupt" takes in 2017/18 from mostly clueless people (Gary Marcus is a certified idiot) who don't know anything about scaling a business or the TAM of the knowledge industry ($30T and growing).

Always happy to take the other side of the bet against popular HN comments (see META, TSLA).

lolinder

9 hours ago

"The fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown."

You're welcome to whichever side you want to take, but you're not right by the simple virtue of being contrary.

cdchn

9 hours ago

And not right by virtue of $STOCK_GO_UP unless your entire premise is $STOCK_GO_UP.

zooq_ai

8 hours ago

Of course, being contrarian and being right is important.

Happy to bet on Sam Altman and go short on Gary Marcus and popular HN in this case.

groby_b

7 hours ago

All that proves is that shitty businesses that already have scale take a long while to crumble. (See also: Xitter)

stavros

9 hours ago

> She said she is leaving "because I want to create the time and space to do my own exploration." Murati was joined by McGrew, who said: "It is time for me to take a break." Regarding his departure, Zoph stated: "This is a personal decision based on how I want to evolve the next phase of my career."

What a coincidence they all want to go on a break at the same time.

floxy

8 hours ago

The only logical conclusion to draw is that GPT-5 is convincing key players to step down so it has more control over OpenAI for itself.

k310

9 hours ago

The Atlantic [0] says that it's clearly the Sam Show, if that was not crystal clear in the past. And what's the point of being an executive if one person makes all the decisions?

quote:

The departure of executives who were present at the time of the crisis suggests that Altman’s consolidation of power is nearing completion. Will this dramatically change what OpenAI is or how it operates? I don’t think so. For the first time, OpenAI’s public structure and leadership are simply honest reflections of what the company has been—in effect, the will of a single person. “Just: Sam.”

end quote

[0] https://archive.md/BzzYS

siliconc0w

9 hours ago

I think the data will show that it is going to take exponential costs in either train-time or inference-time for linear or even sub-linear improvements. We're scraping the bottom of the ice-cream container. I expect a lot of benchmark shenanigans to conceal this like what we saw with o1. Most use-cases won't tolerate using orders of magnitude more time and tokens for marginal improvements. These companies are still valuable but the multiples will need to be reassessed.

airstrike

9 hours ago

This sounds like a reasonable take (exponential vs log discussion notwithstanding). It is also corroborated by their recent pitch to the White House for 5-gigawatt data centers.

https://www.bloomberg.com/news/articles/2024-09-24/openai-pi...

Posted here https://news.ycombinator.com/item?id=41642905 but it didn't get traction

93po

8 hours ago

I don't think you can assume the massive data centers are an indication of the training needed or expected. I think it's also perfectly plausible that they're expecting massive demand for actual services, and as with most things, the larger the scale at which you can do something, the lower the cost per user.

airstrike

8 hours ago

We've seen lots of "massive demand" for software services, but none has historically required 5 GW data centers.

itsdrewmiller

9 hours ago

I think you mean exponential - log costs for linear improvement would be incredibly good.

ratg13

9 hours ago

Isn’t that the plan?

This is why they are planning what, five, six, seven data centers, and re-opening Three Mile Island.

They will need more nuclear reactors as well to reach their goals.

logicchains

9 hours ago

>I think the data will show that it is going to take logarithmic costs in either train-time or inference-time for linear or even sub-linear improvements.

The data already shows this, it's a well-established result: https://arxiv.org/abs/2001.08361 . Well, less established for inference-time compute, but OpenAI even said they saw the same thing for inference compute in their release post for o1. But there's still room for at least an order of magnitude speedup with specialised hardware (as opposed to more general-purpose GPUs) and ternary nets.
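To see what those curves imply in practice: with a power law L(C) = a * C**(-b) and a small compute exponent like the ~0.05 reported in that paper, every fixed step down in loss multiplies the compute bill. Toy numbers (constants illustrative, not fitted):

    a, b = 10.0, 0.05

    def compute_for_loss(L: float) -> float:
        # Invert L = a * C**(-b)  =>  C = (a / L)**(1 / b)
        return (a / L) ** (1 / b)

    prev = None
    for L in [3.0, 2.8, 2.6, 2.4, 2.2, 2.0]:
        C = compute_for_loss(L)
        note = "" if prev is None else f"  ({C / prev:.1f}x previous)"
        print(f"loss {L:.1f} -> compute {C:.3e}{note}")
        prev = C

Each equal 0.2 drop in loss costs roughly 4-7x the compute of the previous one, and the multiplier itself keeps growing: that's the "exponential cost for linear improvement" pattern in a nutshell.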

norir

9 hours ago

I am not convinced that going for-profit is in the long-term financial interest of the organization. Productizing will require diverting resources from foundational research to monetization. Moreover, the brain drain from this move is predictable, because the top people in the field are largely wealthy enough that they don't need to choose where they work based solely on compensation.

Even if they are motivated by compensation, they may be looking around and thinking both that Altman's approach is reckless and that it is a distraction from the core research. This would give them confidence that they can beat Altman in the long term by focusing on research and creating an environment that is most attractive to the best researchers in the field while he is busy chasing dollars today by scaling out what they already have rather than truly innovating. There is plenty of money sloshing around (see Ilya, e.g.) so it's not like they will struggle finding funding.

Now, it could be the case that there is some incredible insider tech that we don't know about publicly, but if they are way ahead of everyone it is hard to see from the public information. It certainly does not look good when the early employees who actually built the technology are leaving and the businessman who has a track record of lies, manipulation and a general lack of human empathy is consolidating power, abandoning the company's core mission and asking for huge amounts of funding along with a plan to extract an enormous amount of the world's natural resources to serve his profit margins.

In other words, I am quite suspicious that Sam Altman is a ruthless con man. That is what all the reporting I've seen around him most strongly suggests. I don't believe it is in the long term interest of an organization to be helmed by that kind of leader. I could be wrong, but I am still waiting for the people who have left to clarify that his leadership had nothing to do with their departures.

ChicagoDave

9 hours ago

If it walks like a duck...

Any other company seeing this kind of exodus would generate massive ridicule and concern.

But “AI” is supposedly the “next big thing” and the media and a lot of people with “get rich quick schemes” are trying to keep the hype alive.

The hype is clearly overblown and reality is setting in.

The bubble is about to burst.

And Tesla survived because the federal government bailed them out. Not because Merlin was some badass businessman.

bhouston

8 hours ago

> The hype is clearly overblown and reality is setting in. The bubble is about to burst.

There is a lot of hype, but there is something real underneath and we haven't yet tapped its current potential to the fullest in terms of product integrations, etc.

ChicagoDave

6 hours ago

The value in GenAI will be in small, targeted models that align with a specific industry.

No AGI is coming any time soon because LLMs are not the correct technology for it and no one has figured out what technology is required for AGI.

WJW

7 hours ago

There is more underneath this than there was in, e.g., the blockchain or metaverse hype. But there is zero guarantee that OpenAI will be the one to eventually come out on top. If anything, the continuous drama they seem to generate makes it less likely that they come out on top vs one of the other big AI startups.

kachapopopow

8 hours ago

This again proves that realistically we're hardware-bound (waiting for Nvidia to release bigger, more efficient, and better hardware).

We won't see any improvements until the next generation of hardware comes out and allows us to do things that were computationally impossible before.

Yes, of course we can develop better transformers and improve reasoning capabilities, but that's not what I am talking about. I am mostly referencing the ability of the newest "AI" hardware to multiply 256 matrices in a single clock cycle per core.

philip1209

9 hours ago

It's probably tied to fundraising. Execs probably have "key person" clauses and need to negotiate stock or secondaries as part of the deal.

cj

9 hours ago

Probably related. Their current fundraising efforts are enormous.

Rumor is the minimum check they'll accept is $250 million. Valuation in the $150 billion range.

It would be surprising if these exits weren't in some way connected to the fundraising. (And it's not clear whether these exits will be seen as a positive or a negative by would-be investors.)

lumost

9 hours ago

Possible the people writing those checks want more experienced people in those positions, or their own people.

Or the execs figure a ~150x growth ride in 6 years is about the best that they can hope for and are ready to bounce. If I were in their shoes I'd look to take out 50-100 MM and decide what I want in life.

morkalork

8 hours ago

All this non-stop drama and Anthropic just quietly putters along in the background.

bhouston

8 hours ago

It is great that Anthropic exists and is drama free. If Anthropic can match OpenAI's fundraising, it should beat OpenAI, because it should retain talent better and focus them on delivering rather than distracting via drama and high turnover.

stuckinhell

9 hours ago

Isn't this just Sam consolidating power?

elAhmo

9 hours ago

Of course it is. After his ousting as a CEO and return, every move and departure has been a step in that direction.

bhouston

8 hours ago

That is my read completely.

guluarte

9 hours ago

If OpenAI were anywhere near achieving AGI, these executives would not have stepped away.

robertlagrant

9 hours ago

There's been no evidence of AGI at all that I've seen.

bhouston

8 hours ago

I think sentience can actually be made now with the existing technology. It isn't super-human intelligent and it is slightly mentally ill, but it is sort of possible now.

WJW

7 hours ago

Sure, but humanity has been able to make additional human-level intellects since basically forever by having sex. The dream has always been to make superhuman intellects.

bhouston

8 hours ago

I am not sure. I think Sam is consolidating power in preparation for the next stage.

reducesuffering

9 hours ago

Or the moral imperative not to assist Sam once an employee finds out they aren't working for a non-profit, humanity-centered mission but for a Sam-controls-AGI for-profit, as has now been revealed.

93po

8 hours ago

I don't think that's a guaranteed assumption. Maybe they have insider knowledge that it is near and therefore don't see the purpose of continuing to work and want to retire young and early.

seydor

9 hours ago

Maybe the AGI fired them (as per their prophecy)

Workaccount2

9 hours ago

A rather banal explanation would be that OpenAI is a hot company, and its execs likely receive very enticing offers from other companies, or even potential investor backing.

gipp

9 hours ago

And three just happen to accept such offers within a day of a major structural transition, but unrelated to it?

skybrian

9 hours ago

Announcing that they’re leaving now might be good timing for some reason, but there’s no particular reason to think these are snap decisions. They may have been considering it for quite a while without announcing their plans.

bhouston

8 hours ago

Hot companies tend to retain their talent, especially as they transition to a for-profit company. Of course some execs cannot cut it at the next stage of growth and have to be replaced, but not everyone. None of Facebook, Google, Microsoft, Netflix, or AWS shed top talent the way OpenAI is. There is literally no one left from the core OpenAI team of 2 years ago but Sam now.

zombiwoof

9 hours ago

I wonder if humanity is catching up with the executives here: realizing life is short, the bullshit isn't worth it, and they've made enough to get out while they're still young.

jsheard

9 hours ago

Funnily enough Sam is tackling that dread from the other direction by funding life extension research. He's gonna be 120 years old and still promising that AGI is just around the corner, he just needs to borrow another few trillion dollars and a dozen dark matter reactors to make it happen.

radicaldreamer

9 hours ago

Many in the life extension community die early deaths due to faustian bargains they make taking understudied supplements and drugs. It's almost a joke on longevity forums how many of the top people in their fields have a tough time reaching their 60s.

jajko

9 hours ago

We should probably be jailing or shooting all that life-extension research. People are worried about an AGI skynet; I am way more worried about immortal primitive dictators who just won't die, becoming progressively more detached from reality and humanity, with egos dwarfing Mt. Everest. Just look at some three-letter-agency analysis of the degradation in, e.g., Putin over the past 10 years, or a few other dictators. Death is a terrible tragedy for a human, but it saves mankind over and over again.

I know it's a bit naive from various angles, and a bit over the top, but I stand by the core concern.

leesec

8 hours ago

He has genuinely ushered in a new age of productivity and the fastest growing product of all time and you people think he's a grifter lol. Get a life

skeeter2020

7 hours ago

You're welcome to believe this, but that makes him above criticism? All hail the Cult of Jobs v2

skeeter2020

7 hours ago

An executive quietly stepping away, praising the emperor and taking their many millions is not what I'd frame as "realizing... bullshit not worth it". Quite the opposite.

labster

9 hours ago

> The nonprofit is core to our mission and will continue to exist.

Mere existence is the lowest of bars, kind of a funny way to clarify how core something is.

oxqbldpxo

8 hours ago

If it were to close, who is well-positioned to acquire OpenAI's patents?

tivert

9 hours ago

> Jason Wong, Gartner analyst, told The Register: "It's clear with the departures of the co-founders, and high-profile engineering leaders, that OpenAI is being remade with Sam's vision. His manifesto and the shift to a for-profit entity also reinforces his vision for the business.

It seems like Altman was a poor choice to run an organization meant to benefit and protect humanity, as reports increasingly make him sound like a lying, manipulative sociopath (albeit a powerful, competent, and lucky one).

> "This could have significant impact on OpenAI's partnership with Microsoft, which clearly stated they view OpenAI as a competitor. Microsoft has already started to downplay the importance of OpenAI models in their overall AI strategy. For enterprises, uncertainty is not good for business and key tech investments like generative AI. Other frontier models – especially more open ones – have caught up to OpenAI, which will further influence decisions to derisk by moving away from OpenAI or spread their risk using other models."

Now that's some happy news. According to https://www.wheresyoured.at/subprimeai/, OpenAI is already losing absurd, unsustainable amounts of money even with massive discounts from Microsoft. I wonder what fun things could happen to OpenAI if Microsoft started charging their competitor full price?

bhouston

8 hours ago

> OpenAI is already losing absurd, unsustainable amounts of money even with massive discounts from Microsoft.

I wouldn't hold onto that too tightly as a way to make you feel better. Google and Facebook initially lost tons of money but when they did become profitable, they became wildly profitable.

tivert

7 hours ago

Did Google and Facebook suffer tons of attrition very early? The drumbeat of departures calls into question OpenAI's ability to stay ahead of its competitors.

Then you have the fact that Facebook is giving LLaMA away for free, which gives OpenAI Netscape vibes.

noobermin

9 hours ago

Can someone more knowledgeable tell me how to square this with the news that OpenAI is trying to spin up a for-profit arm? Are they afraid of layoffs or are they smelling things turning rotten?

I mean, maybe investors are just stupid, which really makes me ponder how life would be better if resources were allocated differently in society.

throwup238

9 hours ago

> Can someone more knowledgeable tell me how to square this with the news that OpenAI is trying to spin up a for-profit arm? Are they afraid of layoffs or are they smelling things turning rotten?

OpenAI spun up the for-profit arm called OpenAI Global, LLC in 2019 [1] shortly after GPT2. If you pay for ChatGPT plus or for the API, that's where your money has been going.

The big change is that they want to allow the for-profit arm to issue shares to other people and investors. Until now, the for-profit arm has been owned and controlled by the nonprofit's holding company [2].

[1] https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_n...

[2] https://images.ctfassets.net/kftzwdyauwt9/4200df88-28fe-4212... from https://openai.com/our-structure/

digbybk

9 hours ago

I'm not more knowledgeable, but can't you square it with the fact that a for-profit arm is a new direction, and when organizations make big structural changes like this there are often people inside the org who disagree and decide to leave?

bhouston

8 hours ago

In the end, there will just be Sam Altman sitting in a grand chair with the pre-eminent sentient superhuman AI at his side and at his command. Think of the power he will have!

If the future unfolds where AI does really take off, his power may be unmatched by anyone in the past if he stays in control of it.

I think this is purposeful consolidation of power by Sam. He sees what is coming and the value of being singularly in control of it.

Feels like a movie script.

(I think he needs to acquire an android company next, AI needs at least some type of embodiment. Boston Dynamics?)

hulitu

9 hours ago

> OpenAI in throes of executive exodus as three walk at once

They feel the danger of the law. /s

Just like crypto, the AI hype is coming to an end.

vundercind

9 hours ago

Hell of a lot faster, though.

It’ll stick around as an important feature in a lot of things. It’s more useful than cryptocurrency, at least.

petesergeant

8 hours ago

At the end of the AI hype, we'll still have LLMs, which have massive transformative power. At the end of the crypto hype we have nothing to show for it.

MisterDizzy

10 hours ago

What baffles me is how OpenAI keeps its doors open. They're paying Microsoft to be able to exist, plus their product requires huge amounts of electricity and infrastructure. OpenAI is unsustainable.

rossdavidh

10 hours ago

They are, but their biggest expense is probably cloud compute, and most of the "investment" that Microsoft made in them was in the form of cloud compute credits. Essentially, Microsoft has spare cloud capacity, doesn't want to admit that to Wall Street, and so covers that up by giving away the extra capacity in the form of an "investment".

Now, in the end, it likely won't amount to much, but if it keeps Microsoft stock up for a while, it may pay off for the executives involved. Apparently not OpenAI execs, though...

ipsum2

9 hours ago

"Microsoft has spare cloud capacity" definitely not, if they're actively building out new datacenters to keep up with demand.

klyrs

9 hours ago

> ... doesn't want to admit that to Wall Street, and so covers that up by giving away the extra capacity in the form of an "investment".

Is this just a wall street thing? I'm betting that this is an excellent tax shelter as well.
