OpenAI to become for-profit company

1033 points, posted 15 hours ago
by jspann

584 Comments

ayakang31415

42 minutes ago

About a year ago (I believe), Sam Altman touted his mission to promote safe AI, claiming that he had no equity in OpenAI and was never interested in getting any. Look where we are now. Well played, Sam.

upwardbound

26 minutes ago

Does that amount to making a false forward-looking financial statement? (Specifically his claim that he wasn’t interested in getting equity in the future.)

This claim he made was likely helpful in ensuring the OpenAI team’s willingness to bring him back after he was temporarily ousted by the board last year for alleged governance issues. (Basically: “don’t worry about me guys, I’m in this for the mission, not personal enrichment”)

Since his claim likely helped him get re-hired, he can’t claim it was immaterial.

onelesd

11 minutes ago

Sam and all the others. At this point, colleges should offer required courses to teach this apparently essential skill to future corporate America.

wubrr

10 minutes ago

What leverage does he have to get equity now? Does he personally have control over that decision?

throwup238

7 hours ago

I’m confused by this news story and the response here. No one seems to understand OpenAI’s corporate structure or non profits at all.

My understanding: OpenAI follows the same model Mozilla does. Since 2019, the nonprofit has owned a for-profit entity called OpenAI Global, LLC, which pays taxes on any revenue that isn't directly in service of their mission (construed very narrowly, based on judicial precedent) [1]. In Mozilla's case that's the revenue from making Google the default search engine; in OpenAI's case that's all their ChatGPT and API revenue.

The vast majority (all?) of the engineers work for the for-profit and always have. The vast majority (all?) of the revenue goes through the for-profit, which pays taxes on that revenue minus the usual business deductions. The only money that goes to the nonprofit tax-free is donations. Everything else is taxed at least once at the for-profit.

Almost every nonprofit that raises revenue outside of donations has to be structured more or less this way to pay taxes. They don't get to just take any taxable revenue stream and declare it tax-free.

All OpenAI is doing here is decoupling ownership of the for-profit entity from the nonprofit. They're allowing the for-profit to create more shares and distribute them to entities other than the nonprofit. Or am I completely misinformed?

[1] https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_n...

throwaway314155

4 hours ago

It's about the narrative they tried to create. The spin. It doesn't matter much if they were technically behaving as a for-profit entity previously. What matters is that they wanted the public (and likely, their talent) to _think_ that they weren't even interested in making a profit, as this would be a philosophical threat to the notion of any sort of impartial or even hopefully benevolent originator of AGI (a goal laid out plainly in their mission statement).

As you've realized, this should have been (and was) obvious for a long time. But that doesn't make it any less hypocritical or headline worthy.

cdchn

2 hours ago

>What matters is that they wanted the public (and likely, their talent) to _think_ that they weren't even interested in making a profit, as this would be a philosophical threat to the notion of any sort of impartial or even hopefully benevolent originator of AGI (a goal laid out plainly in their mission statement)

And now they want to cast off any pretense of that former altruistic yoke now that they have a new, better raison d'être to attract talent: making absolutely unparalleled stacks of cash.

pj_mukh

3 hours ago

Occam's razor: I think Sam's personal narrative is the correct one. He built a non-profit that took off in a way he didn't expect, and now a for-profit is the best way to harness the lightning they've caught.

In terms of profit, AFAICT Sam doesn't have designs on building extra-large yachts and his own space agency; what he wants is to be the one at the helm of building what he considers world-changing tech. One could rationally call this power-hungry, but one could also rationally call it helicopter parenting of a tech you've helped build. And for that, a for-profit that is allowed to maximize profits to re-invest in the tech is the optimal setup (especially if all the competitors are doing the same).

Is this a different org than when it started? Yes. Was this a dupe from the beginning? I don't think so.

"But why can't he have a more worldly-aligned board looking over his shoulder?"

Because we live in California and have a distaste for governance by committee or, worse, governance by constant non-representative democracy (see: housing).

If the wheels now come off completely, I still think Congressional action can be a stopgap, but at least for now this restructure makes sense to me.

feoren

2 hours ago

Sam: "I'm not in it for the money. I have principles."

World: "But what if it was like, a lot of money?"

Sam: "Oh alright you convinced me. Fuck my principles."

pj_mukh

2 hours ago

What do you do with a lot of money past a point? A corporate-controlled AGI being just a stop on the way to building another private space agency seems like a... letdown.

vasco

2 hours ago

Kid Rock did it first, but a golden toilet would be my answer.

talldayo

2 hours ago

To be honest, I would take a private space agency 7 days out of the week with that kind of capital. We have no fundamental proof that LLMs will scale to the intelligence levels that we imagine in our heads. The industry application for LLMs is even weaker than computer vision, and the public sentiment is almost completely against it. Sam's product is hype; eventually people are going to realize that Q* and Strawberry were marketing moves intended to extend OpenAI's news cycle relevancy and not serious steps towards superintelligence. We were promised tools, and they're shipping toys.

I could tell you in very plain terms how a competitor to Boeing and SpaceX would benefit the American economy. I have not even the faintest fucking clue what "AGI" even is, or how it's profitable if it resembles the LLMs that OpenAI is selling today.

pj_mukh

2 hours ago

I would agree with you that a space agency is also useful (maybe more useful some days of the week). Sam disagrees and thinks he can do better without a non-profit board now. I'm glad we live in a world where he gets to try and we get to tax him and his employees to do other things we consider useful.

cdchn

2 hours ago

Private space agency and LLMs both seem like big industries going nowhere driven by sci-fi hopes and dreams.

blendergeek

4 minutes ago

It's interesting how first impressions can be so deceiving. The world's largest private space agency (SpaceX) has completely changed the game in rural internet connectivity. Once upon a time, large chunks of the US had no reliable high-speed internet. SpaceX has brought high-speed, low-latency internet to every corner of the globe, even the middle of the ocean and Antarctica. This isn't going nowhere, even if it seems that way.

sfblah

an hour ago

Not sure I agree with you here. I use LLMs all the time for work. I've never once used a space agency for anything.

blendergeek

3 minutes ago

My Dad uses SpaceX to work from home every day.

macintux

an hour ago

GPS, weather forecasting, tv broadcasting…I’ve been using a space agency for as long as I’ve been alive.

cdchn

an hour ago

I think they're both overhyped by sci-fi optimism, but I would agree (even being mostly an AI minimalist) that the impact of LLMs (and their improvement velocity) is a lot more meaningful to me right now. I mean, satellites are cool and all.

ninepoints

12 minutes ago

Anyone who had any respect for Sam "Give me your eyeball data" Altman was always delusional.

huevosabio

2 hours ago

I don't think the narrative makes sense. It was clear from way back in 2016 that training would take a ton of resources. Researchers were already being sucked into FAANG labs because those labs had the data, the compute, and the money. There was never a viable way for a true non-profit to build world-changing, deep-learning-based AI models.

Seen through the rearview mirror, the whole narrative screams of self-importance and duplicity. GPT-2 was too dangerous, and only they were trustworthy enough to possess it. They were trustworthy because they were a non-profit, with "interests aligned with humanity". This charade continued until barely a few months ago.

halJordan

3 hours ago

It isn't a tax thing or a money thing, it's a control and governance thing.

The board of the non-profit fired Altman, and then Altman (& MS) rebelled, retook control, and gutted the non-profit board. Then they stacked the new non-profit board with Altman/MS loyalists, and now they're discarding the non-profit.

It's entirely about control. The board has a legally enforceable duty to its charter. That charter is the problem Altman is solving.

burnte

2 hours ago

The problem is that OpenAI calls itself OpenAI when it's completely sealed off, and calls itself a non-profit when, as you say, almost everything about it is for-profit. Basically, they're whitewashing their image as an organization with noble goals when it's simply yet another profit-motivated company. It's fine if that's what they are and want to be, but the lies are bothersome.

wubrr

6 minutes ago

What leverage does Sam Altman have to get equity now? Does he personally have control over that decision?

seizethecheese

6 hours ago

“Decoupling” is such a strange euphemism for removing an asset worth north of $100b from a nonprofit.

throwup238

6 hours ago

OpenAI Global LLC is the $100b asset. It’s not being removed, the nonprofit will still own all the shares it owns now until it decides to sell.

sangnoir

3 hours ago

The shares will be diluted: the LLC used to be 100% owned by the non-profit, and now there's no floor on how far its stake can fall.
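
In rough numbers, a minimal sketch of how that dilution works (the share counts are hypothetical):

  # Hypothetical illustration of dilution via newly issued shares.
  # Before: the non-profit holds all 1,000,000 shares (100%).
  nonprofit_shares = 1_000_000
  total_shares = 1_000_000

  # The for-profit issues 1,500,000 new shares to outside investors.
  total_shares += 1_500_000

  stake = nonprofit_shares / total_shares
  print(f"Non-profit stake after issuance: {stake:.0%}")  # 40%

Each new funding round shrinks the non-profit's percentage without it ever selling a share.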

Aeolun

an hour ago

Normally shareholders aren’t ok with that.

b800h

an hour ago

I was under the impression that, in UK law at least (and obviously that's not the jurisdiction here), the trustees of a non-profit are bound to work in the best interests of that non-profit, and that allowing an asset like this to somehow slip out of their control would be the sort of negligence that lands you in very hot water. I'd be interested to know how this isn't the case here.

upwardbound

20 minutes ago

I think it is the case here, and I hope Elon Musk persists in his lawsuits about this. As a large donor to the nonprofit in its early days he’s one of the people with the strongest standing to sue / strongest claim for damages.

Obviously Elon is mostly pursuing this suit as a way to benefit Grok AI, but honestly I don't mind that; competitors are supposed to keep each other in check, and this is a good and proper way for companies to provide checks and balances on each other's power. One reason monopolies are bad is precisely the absence of competitor-enforced accountability.

Lawsuit: https://www.reuters.com/technology/elon-musk-revives-lawsuit-against-sam-altman-openai-nyt-reports-2024-08-05/

nfw2

an hour ago

> "All OpenAI is doing here is decoupling ownership of the for-profit entity from the nonprofit."

Yes, but going from being controlled by a nonprofit to being controlled by a typical board of shareholders seems like a pretty big change to me.

bbor

an hour ago

Good questions!

Right now, OpenAI, Inc. (the California non-profit; let's say the charity) is the sole controlling shareholder of OpenAI Global LLC (the Delaware for-profit; let's say the company). So, just to start off with the big picture: the whole enterprise was ultimately under the sole control of the non-profit board, which in turn was obligated to operate in furtherance of "charitable public benefit". This is what the linked article means by "significant governance changes happening behind the scenes," which should hopefully convince you that I'm not making this part up.

To get really specific, this change would mean that they'd no longer be obligated to comply with these CA laws:

https://leginfo.legislature.ca.gov/faces/codes_displayText.x...

https://oag.ca.gov/system/files/media/registration-reporting...

And, a little less importantly, comply with the guidelines for "Public Charities" under federal code 501(c)(3) (https://www.law.cornell.edu/uscode/text/26/501), covered by this set of articles: https://www.irs.gov/charities-non-profits/charitable-organiz... . The important bits are:

  The term charitable is used in its generally accepted legal sense and includes relief of the poor, the distressed, or the underprivileged; advancement of religion; advancement of education or science; erecting or maintaining public buildings, monuments, or works; lessening the burdens of government; lessening neighborhood tensions; eliminating prejudice and discrimination; defending human and civil rights secured by law; and combating community deterioration and juvenile delinquency.
  ... The organization must not be organized or operated for the benefit of private interests, and no part of a section 501(c)(3) organization's net earnings may inure to the benefit of any private shareholder or individual.

I'm personally dubious about the specific claims you made about revenue, but that's hard to find info on, and not the core issue. The core issue was that they were obligated (not just, like, promising) to direct all of their actions towards the public good, and they're abandoning that to instead profit a few shareholders, taking the fruit of their financial and social status with them. They've been making some money for some investors (or losses...), but the non-profit was, legally speaking, only allowed to permit that as a means to an end.

Naturally, this makes it very hard to explain how the nonprofit could give up basically all of its control without breaking its obligations.

All the above covers "why does it feel unfair for a non-profit entity to gift its assets to a for-profit", but I'll briefly cover the more specific issue of "why does it feel unfair for OpenAI in particular to abandon its founding mission". The answer is simple: they explicitly warned us that for-profit pursuit of AGI is dangerous, potentially leading to catastrophic tragedies involving unrelated members of the global public. We're talking "mass casualty event"-level stuff here, and it's really troubling to see the exact same organization change its mind now that it's in a dominant position. Here are the relevant quotes from their founding documents:

  OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact... 
  It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly. Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.

From their 2015 founding post: https://openai.com/index/introducing-openai/

  We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power. Our primary fiduciary duty is to humanity...
  We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”

From their 2018 charter: https://web.archive.org/web/20230714043611/https://openai.co...

Sorry for the long reply, and I appreciate the polite + well-researched question! As you can probably guess, this move makes me a little offended and very anxious. For more, look at the posts from the leaders who quit in protest yesterday, namely their CTO.

throwup238

26 minutes ago

> I'm personally dubious about the specific claims you made about revenue, but that's hard to find info on, and not the core issue. The core issue was that they were obligated (not just, like, promising) to direct all of their actions towards the public good, and they're abandoning that to instead profit a few shareholders, taking the fruit of their financial and social status with them. They've been making some money for some investors (or losses...), but the non-profit was, legally speaking, only allowed to permit that as a means to an end.

Look at your OpenAI invoices: they're paid to OpenAI, LLC, not OpenAI, Inc. I can't find confirmation on openai.com of the exact relationship between OpenAI Global, LLC and OpenAI, LLC, but the former is on their "Our Structure" page and the latter is in their data processing addendum, so the latter is probably the subsidiary in charge of operating the services while Global does the training and licenses it downstream. OpenAI Global was the one that made the big $10 billion deal with Microsoft.

That obligation is why they had to spin off a for-profit corporation. Courts are very strict in their interpretation of what "unrelated business income" is and the for-profit LLC protects the non-profit's tax exempt status.

> "why does it feel unfair for a non-profit entity to gift its assets to a for-profit"

What assets were gifted, exactly? They created the for-profit shortly after GPT-2 (in 2019), and as far as I can tell that's the organization that has developed the IP that's actually making money now.

I honestly don't understand how this isn't in the interest of the nonprofit's mission. It's currently a useless appendage and will never have any real power or resources until either OpenAI is in the black and sending profit up to it, or they can sell OpenAI shares. I don't think the charity has any more claim over GPT-4 than Google, which invented transformers, does.

If this next round of funding goes through at a $100-150 billion valuation, OpenAI, Inc. will probably be (on paper at least) the second-wealthiest charity on the planet after the Novo Nordisk Foundation. This restructuring opens the way for the nonprofit to sell its shares and it's going to be a hell of a lot of money to dedicate towards their mission - instead of watching its subsidiary burn billions of dollars with no end in sight.

bbor

3 minutes ago

Thanks for another polite response! I think this is the fundamental misunderstanding:

   What assets were gifted, exactly? ...It's currently a useless appendage and will never have any real power or resources until either OpenAI is in the black and sending profit up to it, or they can sell OpenAI shares.

The charity will give up control of a $100B company via a complex share-dilution scheme, keeping only some undisclosed "minority stake", in exchange for nothing more than "the people we're gifting it to have promised to do good with it". It's really that simple. The charity was never intended to draw profit from the for-profit.

  This restructuring opens the way for the nonprofit to sell its shares and it's going to be a hell of a lot of money to dedicate towards their mission

Even putting aside the core issue above (they won't have many shares to sell), the second part of my comment comes back here: what would they buy with all that money? Anthropic? Their explicit mission is to beat for-profit firms in the race to AGI so convincingly that an arms race is avoided. How could they possibly accomplish this after selling control of the most capable AI system on the planet?

Finally, one tiny side point:

  Courts are very strict in their interpretation of what "unrelated business income" is and the for-profit LLC protects the non-profit's tax exempt status.

I'm guessing you're drawing on much more direct experience than I am, and I don't question that, but this seems like a deceptive framing. Normal charities have no issues with unrelated business income because they don't run "trades or businesses"; they just spend their money. I know that selling ChatGPT subscriptions is a large income source, but it's far from the only way to pursue their mission, and it's wildly insufficient anyway. They were in no way forced to do it to meet their obligations.

Again, I'm a noob, so I'll cite the IRS-for-dummies page on the topic for onlookers: https://www.irs.gov/charities-non-profits/unrelated-business...

simantel

6 hours ago

> Almost every nonprofit that raises revenue outside of donations has to be structured more or less this way to pay taxes.

I don't think that's true? A non-profit can sell products or services, it just can't pay out dividends.

throwup238

6 hours ago

If those products and services are unrelated business income, they have to pay taxes on it: https://www.irs.gov/charities-non-profits/unrelated-business...

What counts as "related" to the charity's mission is fuzzy, but in practice the courts have been rather strict. Nonprofits don't have to form for-profit subsidiaries to pay those taxes, but it helps to de-risk the parent, because potential penalties include loss of nonprofit status.

For example, the nonprofit Metropolitan Museum of Art has a for-profit subsidiary that operates the gift shop. The National Geographic Society has National Geographic Partners, which actually owns the TV channel and publishes the magazine. Harvard and Stanford have the Harvard Management Company and the Stanford Management Company, respectively, to manage their endowments. The Smithsonian Institution has Smithsonian Enterprises. Mayo Clinic => Mayo Clinic Ventures. Even the state-owned University of California Regents have a bunch of for-profit subsidiaries.

kweingar

9 hours ago

Can anybody explain how this actually works? What happens to all of the non-profit's assets? They can't just give it away for investors to own.

The non-profit could maybe sell its assets to investors, but then what would it do with the money?

I'm sure OpenAI has an explanation, but I really want to hear more details. In the most simple analysis of "non-profit becomes for-profit", there's really no way to square it other than non-profit assets (generated through donations) just being handed to somebody for private ownership.

lolinder

8 hours ago

If the assets were sold to the for-profit at a fair price, I could see this being legal (even if it shouldn't be). At least in that case, the value generated tax-free by the non-profit would stay locked up in non-profit land.

The biggest problem with this is that there's basically no chance that the sale price of the non-profit assets is going to be $150 billion, which means that whatever gap exists between the valuation of the assets and the valuation of the company is pure profit derived from the gutting of the non-profit.

If this is allowed, every startup founded from now on should rationally do the same thing: no taxes while growing, then convert to for-profit right before you exit.

amluto

8 hours ago

It’s pretty great if you can manage to have the parent be 501(c)(3). Have all the early investors “donate” 90% of their investment to the 501(c)(3) and invest 10% in the for-profit subsidiary the old-fashioned way. They get a tax deduction, and the parent owns 90% of the subsidiary. Later on, if the business is successful, the parent cashes out at the lowest possible valuation they can pull off with a mostly straight face, and all the investors in the subsidiary end up owning their shares, pro rata, with no dilution from the parent. The parent keeps a bit of cash (and can use it for some other purpose).

Of course the investors do end up owning their shares at a lower basis than they would otherwise, and they end up a bit diluted compared to a straightforward investment, but the investors seem likely to more than make up for this by donating appreciated securities to the 501(c)(3) and by deferring or even completely avoiding the capital gains tax on their for-profit shares.

Obviously everyone needs to consult their lawyer about the probability of civil and/or criminal penalties.

mlinsey

5 hours ago

I haven't seen any details, but isn't this a pretty straightforward way of doing it? The non-profit has had majority ownership of the for-profit subsidiary since 2019. The already-for-profit subsidiary has owned all the ChatGPT IP, all the recent models, all the employee relationships, etc etc.

The cleanest way for this to work is for the for-profit to just sell more shares at the $150B valuation, diluting the non-profit entity below majority ownership. The for-profit board, on which the non-profit could still probably hold multiple seats, would control the real asset; the non-profit would still exist and hold many tens of billions in value. It could then sell its shares in the for-profit and use the proceeds in a way consistent with its mission.

They wouldn't even have to sell that much. I am pretty sure the mega-fundraising rounds from Microsoft et al. brought the non-profit's ownership down to just north of 50% anyway.

I don't see how this wouldn't be above board; it's how I assumed it was going to work. It would indeed mean that the entity that controls ChatGPT would now be answerable to shareholders, a majority of whom would be profit-seeking and a minority of whom would be the non-profit with its mission. But non-profits are allowed to invest in for-profits and then sell those shares; all the calls for prosecutions etc. seem like an internet pitchfork mob to me.

jprete

5 hours ago

The non-profit would have to approve the scheme, and a rational non-profit would not, because it gives up any ability the non-profit has to fulfill its charter.

space_fountain

5 hours ago

Exactly. The question is: is this move in the non-profit's best interests? It's definitely in the best interest of the people running the non-profit, but I think many of the early donors wouldn't feel like this was what they were signing up for.

space_fountain

5 hours ago

I think the problem is that early employees and investors were convinced to invest their time and money in a non-profit. They were told that one of the reasons they should donate or work there, as opposed to at Google, was that it was a non-profit focused on doing good. Now that the non-profit looks successful, all of that is being thrown out the window in service of a structure that will yield more profit for the people running it.

TZubiri

21 minutes ago

But what is the non-profit going to do with all that money is the question.

bdowling

8 hours ago

For-profit startups don’t pay taxes while growing either, because they aren’t making any profit during that phase.

authorfly

6 hours ago

Corporate tax is only ever paid on profit, and it is usually a minor part of what the government collects from corporations of all sizes.

The vast majority of taxes paid in developed nations are employee taxes, plus whatever national and local sales taxes and health/pension-equivalent taxes are (indirectly) levied; together these are usually 60-80% of national tax revenue. Asset taxes are a bit different.

It's true even in the bootstrapped-company case. Say you earn $100k and keep $50k after all the direct and indirect employee taxes. Now imagine you spend $40k of that $50k of savings setting up a business: $30k on another employee (triggering $15k of employer and employee taxes) and the other $10k on a marketing firm (which will spend $5k of that on its employees and pay $2.5k of tax). If the business earns less than $40k in income, by the end of year 1 you have:

1) A loss-making startup which is nonetheless further along than nothing

2) Of your original $100k of value, $67.5k already in the government's hands within 12 months

3) Your time doing the tech side uncompensated; it could not (for obvious anti-fraud reasons) be counted as a loss, and, as you have noted, while you don't pay tax when you make a loss, you don't get any kind of negative rebate either (except under certain sales-tax regimes or schemes)

If you are in the US, the above is currently much worse due to the insane way R&D software spend must be spread over several years instead of deducted immediately, which turns salaries into an immediate tax burden.

So it's really not fair to say a new startup isn't paying taxes; they almost always are. Very few companies or startups pay less than 50% of their income to staff, and almost all of those are unicorns or exceptional monopolies/class leaders. Startups and founders thus pay out a disproportionate share of their income and are, to that extent, essentially re-taxed.

Even though you saved the money in order to start a startup, and paid your due employee taxes on it, you then have to pay employee taxes again to use it, etc.
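
A minimal sketch of that year-1 flow, using the same hypothetical round numbers as above (not real tax rates):

  # Hypothetical year-1 tax flow for the bootstrapped founder above.
  gross_income = 100_000
  tax_on_founder = 50_000          # direct + indirect employee taxes
  savings = gross_income - tax_on_founder  # $50k left to invest

  tax_on_new_employee = 15_000     # employer + employee taxes on the $30k hire
  tax_via_marketing_firm = 2_500   # taxes paid on the marketing firm's staff

  total_tax = tax_on_founder + tax_on_new_employee + tax_via_marketing_firm
  print(f"Savings available to invest: ${savings:,}")
  print(f"Tax reaching the government within 12 months: ${total_tax:,}")
  # -> $50,000 and $67,500 (67.5% of the original $100k)

The same dollars are taxed once as your income and again as someone else's salary, which is the re-taxation described above.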

mpeg

5 hours ago

Is this a US thing? In the UK employee tax is the employee’s to pay, not the company. Even if the company technically transfers it directly to the tax agency it’s not really their money.

EDIT: I guess we do have employer tax as national insurance contributions too, always forget about that since I’ve always paid myself under that threshold

authorfly

4 hours ago

I'm not sure if you mean whether the UK has the same pattern of low corporation tax versus high income/pension/NI contributions? If so, yes.

The UK does have employer NI contributions, but that's not what I mean. The point is: if you spend a year earning a gross £100k and pay £50k of total tax as you earn it, then spend £40k of the remaining £50k on an employee's salary at your company, triggering another £20k of tax, the government has earned £70k that year from that £100k passing through.

You can argue that really "£140k" has passed through, but that's not the case, because you created a new job that wouldn't otherwise have existed had you instead saved that £40k for a house. Either way, HMRC gets £70k this year rather than £50k.

The wider point I was making is that all companies, even for-profit ones, pay tax to do just about anything, and companies with much lower sales than costs aren't just paying nothing. They generally have high costs because they are paying people, and paying those people's taxes every month. The tax per employee is completely uncorrelated with the financial profit (or lack thereof) of the business, so it's an understandable misconception that companies that don't make a profit, like startups, don't contribute to the economy. They do, by paying employment taxes.

I'm really making the point that you have to account for employee taxes (both employer and employee, as you mention) in your costs as a business. That means that even though you already paid those taxes yourself when you did the work that produced the savings you're now investing (to spend on an employee), you have to pay them again when paying your employee.

I.e., self-funded businesses, launched from previously accrued personal income and with your own time invested as well, end up in a bad tax situation.

An employee earning £100k might pay £50k in total tax and save £50k for a house (no VAT). The alternative, investing that £50k in your business by paying someone a £40k salary, means you must pay that employee's PAYE and their employer and employee NI. So the government gets to re-tax most of that money when you use it to hire someone to build a new business with you, in a way it doesn't if you use it to buy a house. How you pay yourself as an entrepreneur varies: in the UK there's dividends+PAYE (which, yes, requires paying both the employer and employee tax for yourself) or capital gains (ignoring tax schemes); either way, you do get taxed at some point to take cash out.

The government, in other words, massively benefits from unprofitable for-profit companies so long as they hire some people, especially if the companies are self-funded. But even if it's investment money, it's better for the government to have it spent on salaries now in new companies than sitting as stock in larger companies that hoard cash reserves or use schemes to avoid tax. It gets much more tax from people starting even unprofitable new businesses than from employees who simply save money.

It's one of the reasons that, since the introduction of income taxes (around WW1 in most countries!), you need money to make money in a way you fundamentally did not back when you could earn $50 from someone and directly use that same $50 to pay someone else for the same skills without any loss of value.

vladms

2 hours ago

> So the government gets to re-tax most of that money when you use it to hire someone to build a new business with you.

You should consider it also from the point of view of the employee. The government taxes your employee in order to offer them services; it does not care who hires them (even you, who saved the money).

Yes, it is true that you need lots of money to HIRE someone, but you can try to do a startup with a couple of people who live off their savings for a while (not paying themselves a salary, but holding shares), which avoids the tax situation at first.

I think we are quite bad at assessing what life was like around 1900 in terms of infrastructure (in any country). So yes, people probably paid less tax, but they lived in much worse overall conditions.

xxpor

8 hours ago

If most of your expenses are software devs, that's not true any more.

ttul

6 hours ago

This is one reason why some companies have located engineers in Canada under subsidiaries. Canada not only allows you to deduct R&D costs as an expense; there is also an extremely generous R&D tax credit that yields a negative tax rate on engineers. For Canadian-controlled private companies, this can amount to as much as a 60% refundable tax credit on R&D salaries. For foreign-owned companies, the benefit is smaller but still significant.

The Trump tax policy was a bizarre move for a country that relies so heavily on homegrown innovation. But then again, so was the entire Trump presidency.
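
To make that figure concrete, a back-of-the-envelope sketch (the 60% rate is the number claimed above; actual SR&ED rates vary by province and company type):

  # Hypothetical effective cost of a Canadian R&D engineer, using the
  # ~60% refundable-credit figure claimed above (not an official rate).
  salary = 100_000
  credit_rate = 0.60

  refund = salary * credit_rate  # refundable: paid out even with no tax owed
  net_cost = salary - refund
  print(f"Credit: ${refund:,.0f}; net salary cost: ${net_cost:,.0f}")
  # -> Credit: $60,000; net salary cost: $40,000

Because the credit is refundable, even a money-losing startup receives it as cash, which is what makes the effective tax rate on those salaries negative.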

perfmode

8 hours ago

How so?

flutas

8 hours ago

In short, section 174[0].

It pushed almost all SWE jobs to be classified as R&D jobs, which changed how companies' taxes are calculated.

They have an example at [0], but I'll summarize it here. Take $1M of income and $1M of software-development cost, for $0 of profit. Previously you paid $0 in tax, since your income was fully offset by your R&D costs. Now you'd owe roughly $200k in tax in year one, because the $1M deduction must be spread over five years instead of claimed all at once.

[0]: https://blog.pragmaticengineer.com/section-174/
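
A minimal sketch of that example, assuming the $1M of income recurs each year, a flat 21% corporate rate, and a half-year convention in year one (the exact schedule here is an assumption):

  # Hypothetical Section 174 amortization of $1M of year-1 R&D spend.
  income = 1_000_000
  rnd_cost = 1_000_000
  rate = 0.21

  # Before: deduct everything immediately -> $0 of taxable income.
  tax_before = max(income - rnd_cost, 0) * rate  # $0

  # After: spread the deduction over 5 years (half-year convention).
  schedule = [0.10, 0.20, 0.20, 0.20, 0.20, 0.10]
  for year, fraction in enumerate(schedule, start=1):
      deduction = rnd_cost * fraction
      tax = max(income - deduction, 0) * rate
      print(f"Year {year}: deduction ${deduction:,.0f}, tax ${tax:,.0f}")
  # Year 1: deduction $100,000, tax $189,000 (the ~$200k above)

The deductions eventually catch up, but the cash-flow hit is front-loaded: a break-even company owes real tax in the meantime.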

the_gorilla

8 hours ago

There are tons of taxes on hiring employees that you have to pay even if you're losing money: payroll taxes, mandatory insurance taxes, unemployment taxes, and probably more that I just don't remember off the top of my head.

nickspag

7 hours ago

In an effort to lower the deficit effects of the Trump tax cuts (i.e., increase revenue so they could cut further in other areas), they reclassified software developers' salaries so that they have to be amortized over multiple years instead of expensed in the year they're paid. That treatment is usually reserved for assets, which have intrinsic value that could be sold.

In this case, businesses have to pay taxes on "profit" they don't have, because it immediately went to salaries. A lot of small businesses were hit extremely hard.

They tried to fix it in the recent tax bill but it was killed in the Senate last I checked. You can see more here: https://www.finance.senate.gov/chairmans-news/fact-sheet-on-....

Also, software developers in Oil and Gas industries are exempt from this :)

IncreasePosts

8 hours ago

Sure. But there are a lot of other tax advantages. For example, at least where I am, non-profits don't pay sales tax on purchases and don't have to pay into unemployment funds. I'm sure there's more, but I'm not super familiar with this world.

caeril

8 hours ago

Corporations don't generally pay sales tax either, if the bean counters can justify the purchase as COGS. There are plenty of accountants who can play fast and loose with what constitutes COGS.

sethaurus

8 hours ago

For anyone else unfamiliar with this initialism:

> Cost of goods sold (COGS) refers to the direct costs of producing the goods sold by a company. This amount includes the cost of the materials and labor directly used to create the good. It excludes indirect expenses, such as distribution costs and sales force costs.

daveguy

8 hours ago

Good point. That sounds a lot like fraud.

svnt

8 hours ago

Not paying taxes while losing money sounds like fraud to you?

What do you propose should be taxed, exactly?

daveguy

8 hours ago

True, non-profits don't pay taxes on any revenue regardless of expense.

How do you know they had no profit with all of the deals with major companies and having one of the most popular software services in existence? Non-profits can earn profit, they just don't have to pay taxes on those profits and they can't distribute those profits to stakeholders -- it goes back to the business.

They are also a private company, and do not have to report revenue, expenses, or profits.

So yeah, I stand by what I said -- it sounds like fraud. And it deserves an audit.

Spivak

6 hours ago

Cash flow. Profit gets taxed at x%; cash flow that was offset by losses/expenses gets taxed at y% < x%. A company that does $100M of business and makes no money is very different from a company that does $10k of business and makes no money.

svnt

5 hours ago

Your equations do not account for the difference you mention; they only ensure growth will be slower and riskier.

benreesman

2 hours ago

“You don't get rich writing science fiction. If you want to get rich, you start a religion.”

― L. Ron Hubbard

SkyPuncher

5 hours ago

You actually don't even need to sell them. Just sign an exclusive, irrevocable license agreement.

Practically the same as selling, but technically not. The non-profit still gets to live up to its original mission, on paper, but doesn't really do anything internally.

JumpCrisscross

4 hours ago

> there's basically no chance that the sale price of the non-profit assets is going to be $150 billion

The non-profit’s asset is the value of OpenAI minus the value of its profit-participation units, i.e. the value of the option above the profit cap. Thus, it must be less than the value of OpenAI. The non-profit owns an option, not OpenAI.

tomp

9 hours ago

exactly.

If that's how it works, why wouldn't you start every startup as a non-profit?

Investment is tax deductible, no tax on profits...

Then turn it into a for-profit if/when it becomes successful!

jameshart

8 hours ago

Donations are not investments. They don’t result in ownership.

n2d4

9 hours ago

After the non-profit sells its assets, it would either donate the proceeds in a way that would be aligned with the original mission, or continue to exist as a bag of cash, basically.

kweingar

8 hours ago

It seems incredibly convenient that a non-profit's leaders can say "I want equity in a for-profit company, so we will sell our assets to investors (who will hire me) and pass off the proceeds to some other non-profit org run by some other schmuck. This is in the public interest."

n2d4

8 hours ago

State regulators have to sign off on the deal; it's not sufficient for the non-profit board to agree to it.

SkyPuncher

5 hours ago

I've actually worked through a similar situation at a prior startup. We were initially funded by a large hospital system (a non-profit) that wanted to foster innovation and a startup mentality. After getting started, it became clear that it was effectively impossible for us to operate like a startup under a non-profit; namely, traditional funding routes were nigh impossible and the hospital didn't want direct ownership.

It's been many years, but the plan was essentially this:

* The original, non-profit would still exist

* A new for-profit venture would be created, with the hospital having a board seat and 5% ownership. I can't remember the exact reason behind 5%; I think it was a threshold above which certain things became a liability for the hospital, as they'd be considered "active" owners above 5%. I think this was a healthcare-specific issue and unlikely to affect non-profits in other fields.

* The for-profit venture would seek traditional VC funding, though the target investors were primarily in the healthcare space.

* As part of funding, the non-profit would grant exclusive, irrevocable rights to its IP to that for-profit venture.

* Everyone working for the "startup" would need to sign a new employment contract with the for-profit.

* Voilà! You've converted a non-profit into a for-profit business.

I'm fuzzy on a lot of the details, but that was the high-level architecture of the setup. It's one of those things where the lawyers earn a BOATLOAD of money to make sure every technicality is accounted for, but everything is just a technicality. The practical outcome is that you've converted a non-profit into a for-profit business.

Obviously, this can't happen without the non-profit's approval. From the outside, it seems that Sam has been working internally to align leadership and the board with this outcome.

-----

What will be interesting is how the employees are treated. These types of maneuvers are often an opportunity for companies to drop employees, renegotiate more favorable terms, and reset vesting schedules.

feoren

2 hours ago

> * As part of funding, the non-profit would grant exclusive, irrevocable rights to its IP to that for-profit venture.

This is the part that should land people literally in jail. A non-profit should not be able to donate its assets to a for-profit, and if it's the same people running both companies, those people must be sent to prison for tax evasion. There is no other way to preserve the integrity of the "non-profit" status with this giant loophole.

jdavdc

4 hours ago

My expertise is in NFP hospitals. Generally, when they convert to for-profit, part of the deal is the creation of a foundation, funded with assets, that ostensibly advances the original not-for-profit mission.

winternett

8 hours ago

>Can anybody explain how this actually works?

Every answer moving forward now will contain embedded ads for Sephora, or something completely unrelated to your prompt...

That money will go into the pockets of a small group of people who claim they own shares in the company... Then the company will pull in more people who invest in it, and they'll all take profits based on continually rising monthly membership fees, for an app built on content scraped from social media posts and historical documents others have written, without crediting or compensating them.

baking

6 hours ago

The nonprofit gives all its ownership rights to the for-profit in return for equity. The nonprofit is free to hold the equity and maintain control or sell the equity and use the proceeds for actual charitable purposes.

As long as the money doesn't go into someone's pocket, it's all good (except that Sam Altman is also getting equity but I assume they found a way to justify that.)

OpenAI will eventually be forced to convert from a public charity to a private foundation, which would be forced to give away a certain percentage of its assets every year, so this solves that problem too.
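
For onlookers, the rule alluded to is the private-foundation minimum-distribution requirement: roughly 5% of net investment assets must be paid out for charitable purposes each year (IRC section 4942). A sketch of the scale, with a hypothetical asset value:

  # Private-foundation payout rule: roughly 5% of net investment
  # assets per year (IRC section 4942). The $80B figure is hypothetical.
  PAYOUT_RATE = 0.05

  def required_distribution(net_investment_assets: float) -> float:
      """Approximate annual amount a private foundation must pay out."""
      return net_investment_assets * PAYOUT_RATE

  print(f"${required_distribution(80e9) / 1e9:.0f}B per year")  # $4B per year

At the valuations being discussed, that obligation alone would make the foundation one of the largest grant-makers anywhere.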

jprete

36 minutes ago

The significant asset isn't equity, it's control. 51% is much more valuable than 49% when the owned organization is supposedly working towards technology that will completely change how the world works.

blackeyeblitzar

8 hours ago

Maybe it’s a hint that the tax rate for small and medium companies should be reduced (or other non tax laws modified based on company size), to copy the advantages of this nonprofit to profit conversion, while taxes for large companies should be increased. It would maybe help make competition more fair and make survival easier for startups.

sophacles

8 hours ago

This is actually a good idea. I say we go even further and stop wasting so much money cleaning up after companies - get rid of the entire legal entity known as a corporation and let investors shoulder the full liability that comes with their ownership stake.

BrawnyBadger53

2 hours ago

History has shown that limited liability is a massive advantage for our economy in encouraging both domestic and foreign investment. Seems unlikely we would put ourselves at a global disadvantage by doing this.

sophacles

2 hours ago

History has also shown that limited liability ends up costing me an awful lot of tax money to cover for some twat getting paid out (at a lower tax rate) with no consequences for their actions. Adding liability would certainly lower my taxes, and have a fantastic chilling effect on the type of trash that harms innocent bystanders with their reckless disregard for consequences in the name of chasing a dollar.

thesurlydev

8 hours ago

I can't help but wonder if things would be different had Sam Altman not been allowed to come back to OpenAI. Instead, the safeguards are gone, those who challenged him have left the company, and the bottom line is now the new priority. All in opposition to ushering in AI advancement with the caution and respect it deserves.

elAhmo

3 hours ago

A similar example can be seen in the demise of Twitter under its new owner, who has no safeguards or guardrails: anyone who opposed him is gone, and we can see what state it's in now.

senorrib

2 hours ago

With the small difference that Twitter is a for-profit company, unlike OpenAI.

burnte

an hour ago

With respect, you should look again at the article you're commenting on.

freeqaz

6 minutes ago

This comment gutted me, lmao.

MaxHoppersGhost

3 hours ago

Now Twitter has both left and right propaganda instead of just left wing propaganda. Bummer.

IAmNotACellist

an hour ago

What else would you expect from a skeevy backstabber who got kicked out of Kenya for refusing to stop scanning people's eyes in exchange for shitcoin crypto? He was building a global surveillance database with Worldcoin.

Altman was fucking with OpenAI long before the board left in protest, since about the time Elon Musk had to leave because Tesla's AI posed a conflict of interest. He got more and more brazen with the whole fake-altruism shtick, up to and including contradicting every point in their mission statement and their promises to investors in the "charity."

wonnage

2 hours ago

Maybe my expectations were too high, but they seem to have run out of juice. Every major announcement since the original ChatGPT release has been kind of a dud. I know there have been improvements, but it's mostly the same hallucinatory experience as it was on release day. A lot of the interesting work is now happening elsewhere. It seems like for a lot of products, the LLM part is just an API layer you can swap out if you think, e.g., Claude does a better job.

matt3210

3 hours ago

The bottom line was always the priority.

lyu07282

3 hours ago

It was always a bit too optimistic to think we would develop AGI cautiously. In a way, it's not so bad that this happened so soon rather than later, after things had progressed much further. (I mean, in theory we still have time to understand the situation and do something about it now.)

Although I guess it doesn't really matter. What if we had all understood climate change earlier? It wouldn't really have made a difference anyway.

phito

12 hours ago

I know nothing about companies (esp. in the US), but I find it weird that a company can go from non-profit to for-profit. Surely this would be taken advantage of. Can someone explain to me how this works?

Havoc

12 hours ago

That was the point Musk was complaining about.

In practice it’s doable though. You can just create a new legal entity and move stuff and/or do future value creating activity in the new co. IF everyone is on board with the plan on both sides of the move then that’s totally doable with enough lawyers and accountants

mminer237

10 hours ago

If the non-profit is on board with that though, then they're breaking the law. The IRS should reclassify them as a for-profit for private inurement and the attorney general should have the entire board removed and replaced.

throwup238

6 hours ago

OpenAI Global, LLC - the entity that actually employs all the engineers, makes revenue from ChatGPT and the API, and pays taxes - has been a for-profit corporation since at least 2019: https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_n...

The IRS isn’t stupid. The rules on what counts as taxable income and what the nonprofit can take tax-free have been around for decades.

xpe

5 hours ago

Whatever you think of the IRS, they aren't the master of their own destiny:

https://www.propublica.org/article/how-the-irs-was-gutted (2018)

> An eight-year campaign to slash the agency’s budget has left it understaffed, hamstrung and operating with archaic equipment. The result: billions less to fund the government. That’s good news for corporations and the wealthy.

duchenne

9 hours ago

But if the non-profit gives all its assets to the new legal entity, shouldn't the new legal entity be taxed heavily? The gift tax rate goes up to 40% in the US, and 40% of the value of OpenAI is huge.

baking

5 hours ago

A non-profit can't give away its assets to a private entity, but it can exchange its assets for fair value, in this case, equity in the for-profit.

SkyPuncher

5 hours ago

You don't need to sell/give the assets away to allow the for-profit to use them.

You sign an exclusive, irrevocable licensing agreement. Ownership of the original IP remains 100% with the original non-profit.

Now, this only works if the non-profit's board is on board.

0xDEAFBEAD

12 hours ago

ICYMI, Elon Musk restarted his lawsuit a month or two ago: https://www.reuters.com/technology/elon-musk-revives-lawsuit...

I'm wondering if OpenAI's charter might provide a useful legal angle. The charter states:

>OpenAI’s mission is to ensure that [AGI ...] benefits all of humanity.

>...

>We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

>Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

>...

>We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.

>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. [...]

>...

https://openai.com/charter/

I'm no expert here, but to me, this charter doesn't appear to characterize OpenAI's behavior as of the year 2024. Safety people have left, Sam has inexplicably stopped discussing risks, and OpenAI seems to be focused on racing with competitors. My question: Is the charter legally enforceable? And if so, could it make sense for someone to file an additional lawsuit? Or shall we just wait and see how the Musk lawsuit plays out, for now?

mminer237

10 hours ago

I would think it is legally enforceable, but I suspect Kathy Jennings is the only person who has standing to sue over it.

0xDEAFBEAD

9 hours ago

So perhaps we can start a campaign of writing letters to her?

I'm curious about the "fiduciary duty" part. As a member of humanity, it would appear that OpenAI has a fiduciary duty to me. Does that give me standing? Suppose I say that OpenAI compromises my safety (and thus finances) by failing to discuss risks, having a poor safety culture (as illustrated by employee exits), and racing. Would that fly?

cdchn

6 hours ago

"Humanity vs. OpenAI" would look good on a docket.

pclmulqdq

8 hours ago

Elon Musk absolutely has standing, as one of the biggest donors to the nonprofit. I assume he will settle for some ownership in the for-profit, though.

melodyogonna

7 hours ago

Didn't he already refuse the shares offered to him?

pclmulqdq

7 hours ago

I'm sure they just didn't offer him enough shares.

whamlastxmas

8 hours ago

Sam had a blog post literally two days ago that acknowledged risks. There's also still a sizeable focus on safety, and people with roles dedicated to it, at OpenAI.

johnsimer

2 hours ago

Is there a sizable focus on safety? Last time I heard there was only like one safety person left on the team

crystal_revenge

6 hours ago

> That was the point Musk was complaining about.

I think the real issue Musk was complaining about is that sama is quickly becoming very wealthy and powerful and Musk doesn't want any competition in this space.

Hopefully some people watching all this realize that the people running many of these big AI related projects don't care about AI. Sam Altman is selling a dream about AGI to help make himself both wealthier and more powerful, Elon Musk is doing the same with electric cars or better AI.

People on HN are sincerely invested in the ideas behind these things, but it's important to recognize that the people pulling the strings largely don't care outside how it benefits them. Just one of the many reasons, at least in AI, truly open source efforts are essential for any real progress in the long run.

squidsoup

4 hours ago

The notion that consciousness is going to emerge in a system where neurons are modelled as bits is laughable.

xwowsersx

8 hours ago

At first, I thought, “Wow, if companies can start as nonprofits and later switch to for-profit, they’ll exploit the system.” But the more I learned about the chaos at OpenAI, the more I realized the opposite is true. Companies will steer clear of this kind of mess. The OpenAI story seems more like a warning than a blueprint. Why would any future company want to go down this path?

xiphias2

8 hours ago

It's quite simple: the talent pool was people who already had enough money that they could quit their well-paying jobs at for-profit companies, in part because they wanted to keep working at a high-impact non-profit.

Now that OpenAI has found product-market fit, the early visionaries aren't needed anymore (although I'm sure the people working there are still amazing).

cdchn

6 hours ago

I think OpenAI took this play right out of one of its founding donors' playbooks. Pretend your company has lofty goals and you can get people to slide into moral relativism and work super hard for you. These people definitely have framed posters with the "If you want to build a ship, don't drum up the men to gather wood, divide the work, and give orders. Instead, teach them to yearn for the vast and endless sea" quote somewhere in their living places/workspaces.

csomar

12 hours ago

I am not a tax specialist but from my understanding a non-profit is a for-profit that doesn't pay dividends. Why would the government care?

freedomben

12 hours ago

No, a non-profit is one in which there are no shareholders. The non-profit entity can own a lot and be extremely successful and wealthy, but it cannot give that money to any shareholders. It can pay out large salaries, but those salaries are scrutinized. It doesn't prevent abuse, and it certainly doesn't prevent some unscrupulous person from becoming extremely wealthy with a non-profit, but it is a little more complicated and limiting than you would think. Also, you get audited with routine regularity and if you are found in violation you lose your tax-exempt status, but you still are not a for-profit.

csomar

7 hours ago

> No, a non-profit is one in which there are no shareholders.

Again, I am not a lawyer, but that makes no sense. Otherwise, couldn't anyone claim ownership of the non-profit? So clearly there are some beneficial owners out there somehow.

blackhawkC17

6 hours ago

The nonprofit is controlled by trustees and bound by its charter, not shareholders. Any profit a nonprofit organization makes is retained within the organization for its benefit and mission, not paid out to shareholders.

bbor

11 hours ago

Yes: non-profits usually have members, not shareholders.

And, most importantly: non-profit charities (not the only kind of nonprofit, but presumably what OpenAI was) are legally obligated to operate “for the public good”. That’s why they’re tax exempt: the government is basically donating to them, with the understanding that they’re benefiting the public indirectly by doing so, not just making a few people rich.

In my understanding, this is just blatant outright fraud that any sane society would forbid. If you want to start a for-profit that’s fine, but you’d have to give away the nonprofit and its assets, not just roll it over to your own pocketbook.

God I hope Merrick Garland isn’t asleep at the wheel. They’ve been trust busting like mad during this administration, so hopefully they’re taking aim at this windmill, too.

philwelch

7 hours ago

> God I hope Merrick Garland isn’t asleep at the wheel. They’ve been trust busting like mad during this administration, so hopefully they’re taking aim at this windmill, too.

Little chance of that, as Sama is a big-time Democrat fundraiser and donor.

bbor

6 hours ago

So are Google and Facebook :shrug:

Can’t find a good source for both rn, but this one has Alphabet in the top 50 nationwide for this election: https://www.opensecrets.org/elections-overview/top-organizat...

edit: and Sam Altman isn’t exactly donating game-changing amounts: around $300K in 2020, and seemingly effectively nothing for this election. That's certainly nothing to sneeze at for an individual donor, but it's about 0.01% of his net worth (going off Wikipedia's estimate of $2.8B, not counting the ~$7B of OpenAI stock coming his way).

https://www.dailydot.com/debug/openai-sam-altman-political-d...

philwelch

5 hours ago

> So are Google and Facebook

When you see any numbers for corporations contributing to political campaigns, that's actually just measuring the contributions from the employees of those corporations. That's why most corporations "donate to both parties"--because they employ both Republicans and Democrats.

antaviana

11 hours ago

Has OpenAI been profitable so far? If not, is there any substantial tax that you have to pay in the US as a for-profit organization if you are not profitable?

whamlastxmas

8 hours ago

I’m not sure extreme wealth is possible with a non-profit. You can pay yourself half a million a year, get incredible kickbacks from the firms you hire to manage the non-profit's investments, have the non-profit hire outside companies that you have financial interests in, and probably some other stuff. But none of these things are going to get you a hundred million dollars out of a non-profit. The exception seems to be OpenAI, which is definitely going to be netting at least a couple of people over a billion dollars, but as Elon says, I don't understand how or why this is possible.

freedomben

6 hours ago

Yes, definitely that is the vast majority. I actually had Mozilla and their CEO in mind when I was thinking of "extreme" wealth. Also, I've heard some of the huge charities in the US have execs pulling down many millions per year, but I don't want to name any names because I'm not certain.

blendergeek

4 hours ago

In the USA, the salaries of execs of non-profits are publicly listed in the Form 990s they file with the IRS.

Name names. We can look it up.

jprete

12 hours ago

That's not correct, they also have tax advantages and a requirement to fulfill their charter.

sotix

11 hours ago

A non-profit is a company that for accounting purposes does not have shareholders and therefore keeps nothing in retained earnings at the end of the period. The leftover money must be distributed (e.g. as salaries, towards the stated mission, etc.). Their financial statements list net profit for the period and nothing is retained.

matwood

8 hours ago

The money doesn't have to be used. Many non-profits have very large balance sheets of cash and cash equivalent assets. The money just won't be paid out as dividends to shareholders.

brap

12 hours ago

Isn’t transferring all of your value to a for-profit company that can pay dividends, kinda the same thing?

moralestapia

9 hours ago

Non-profits are tax-exempt, that's why they're carefully[1] regulated.

1: In principle; in practice, well, we'll see with this one!

moralestapia

9 hours ago

It's not weird, it's illegal.

There's a lot of jurisprudence around preventing this sort of abuse of the non-profit concept.

The reason why the people involved are not on trial right now is a bit of a mystery to me, but could be a combination of:

* Still too soon, all of this really took shape in the past year or two.

* Only Musk has sued them, so far, and that happened last month.

* There's some favoritism from the government to the leading AI company in the world.

* There's some favoritism from the government to a big company from YC and Sam Altman.

I do believe Musk's lawsuit will go through. The last two points are worth less and less with time as AI is being commoditized. Dismantling OpenAI is actually a business strategy for many other players now. This is not good for OpenAI.

tomp

9 hours ago

> Dismantling OpenAI is actually a business strategy for many other players now.

Which ones exactly?

NVIDIA is drinking sweet money from OpenAI.

Microsoft & Apple are in cahoots with it.

Meta/Facebook seems happy to compete with OpenAI on a fair playing field.

Anthropic lacks the resources.

Amazon doesn't seem to care.

Google is asleep.

photonthug

8 hours ago

Meta has to be happy someone else currently looks as sketchy as they do. Thus the business strategy is to limit their power and influence as much as possible while avoiding any appearance of direct competition, and to let the other guy soak up the bad PR.

Amazon gets paid either way, because even if OpenAI doesn't use them, where are you going to cloud your API that's talking with OpenAI?

If OpenAI looks weakened, I think we'll see that everyone else has a service they want you to try. But there's no use in making much noise about that, especially during an election year. No matter who wins, all the rejected everywhere will blame AI, and who knows what that will look like. So, sit back and wait for the leader of the pack to absorb all the damage.

kranke155

8 hours ago

Google is asleep? Gemini is the product of a company that's asleep?

throwup238

8 hours ago

You’re right, Gemini is more of a product from a company in a vegetative state.

fourseventy

8 hours ago

Gemini thinks the founding fathers of America were black and that the Nazis were racially diverse. So ya.

9dev

8 hours ago

Gemini is the product of a company that is still half-asleep. We’re trying to work with it on a big data case, and have seen everything, from missing to downright wrong documentation, missing SDKs and endpoints, random system errors and crashes, clueless support engineers… it’s a mess.

OpenAI is miles ahead in terms of ecosystem and platform integration. Google can come up with long context windows and cool demos all they want; OpenAI built a lot of moat while they were busy culling products :)

kranke155

41 minutes ago

Fair enough.

I didn't realise it was that bad.

moralestapia

8 hours ago

>NVIDIA is drinking sweet money from OpenAI.

NVIDIA makes money from any company doing AI. I would be surprised if OpenAI was a whole digit percentage of their revenue.

>Microsoft & Apple are in cahoots with it.

Nope. Apple is using OpenAI to fill the holes its own models aren't good at. This doesn't sound like a long-term partnership.

>Meta/Facebook seems happy to compete with OpenAI on a fair playing field.

They want open source models to rule, obliterating proprietary models out of existence in the process.

>Anthropic lacks the resources.

Hence it would be better for them if OpenAI didn't exist. It's the same for all the other AI companies out there.

>Amazon doesn't seem to care.

Citation needed. AWS keeps putting out products that lead their markets; they just don't make a big fuss about it.

>Google is asleep.

I'll give you this one. I have no idea why they keep Pichai around.

Thrymr

3 hours ago

> I would be surprised if OpenAI was a whole digit percentage of their revenue.

It is not publicly known how much revenue Nvidia gets from OpenAI, but it is likely more than 1%, and they may be one of the top 4 unnamed customers in their 10Q filing, which would mean at least 10% and $3 billion [0].

That's not nothing.

[0] https://www.yahoo.com/tech/nvidia-gets-almost-half-revenue-0...

cdchn

6 hours ago

>I would be surprised if OpenAI was a whole digit percentage of their revenue.

As opposed to? The expression "I wouldn't be surprised" usually means you think what you're saying is the case. If you negate that, are you saying what you _don't_ think is the case? I may be reading too much into what's probably a typo.

stonogo

6 hours ago

I read it as "I would be surprised if OpenAI were spending enough to constitute even 1% of nVIDIA's revenue."

m3kw9

8 hours ago

The NFL used to be a nonprofit and is now for-profit. OpenAI can take a similar route.

walthamstow

8 hours ago

Not an accountant, but there are different kinds of nonprofits: OpenAI is a 501(c)(3) (religious/charitable/educational) whereas the NFL was a 501(c)(6) (trade association).

Obviously we all think of the NFL as a big money organisation, but it basically just organises the fixtures and the referees. The teams make all the money.

blackeyeblitzar

8 hours ago

It is going to be taken advantage of. Musk and others have criticized this “novel” method of building a company. If it is legal then it is a puzzling loophole. But another way to look at it is it gives small and vulnerable companies a chance to survive (with different laws and taxes applying to the initial nonprofit). If you look at it as enabling competition against the big players it looks more reasonable.

HarHarVeryFunny

12 hours ago

And more high-level exits ... not only Mira Murati, but also Bob McGrew and Barret Zoph.

https://www.businessinsider.com/sam-altman-openai-note-more-...

nikcub

9 hours ago

Difficult to see how these two stories aren't related.

OpenAI has been one of the most insane business stories in years. I can't wait to read a full book about it that isn't written by either Walter Isaacson or Michael Lewis.

HarHarVeryFunny

9 hours ago

I've only read Michael Lewis's "Liar's Poker", which I enjoyed, but perhaps that sort of treatment would turn OpenAI into more of a drama (which also seems to be somewhat true) and gloss over what the key players were really thinking, which is what would really be interesting.

addedlovely

12 hours ago

In that case, where can I apply for my licensing fee for the content of mine they have scraped and trained on?

List of crawlers for those who now want to block: https://platform.openai.com/docs/bots

username223

6 hours ago

Don't stop at robots.txt blocking. Look through your access logs, and you'll likely find a few IPs generating a huge amount of traffic. Look them up via "whois," then block the entire IP range if it seems like a bot host. There's no reason for cloud providers to browse my personal site, so if they host crawlers, they get blocked.
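
If you want to automate that first triage pass, here's a minimal sketch in Python, assuming an nginx/Apache combined-format access log where the client IP is the first whitespace-separated field on each line (the log path and threshold are hypothetical; adjust both for your server):

    from collections import Counter

    LOG_PATH = "access.log"   # hypothetical; point at your server's access log
    THRESHOLD = 10_000        # hypothetical cutoff for "a huge amount of traffic"

    # Tally requests per client IP, taking the first field of each log line.
    counts = Counter()
    with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
        for line in log:
            counts[line.split(" ", 1)[0]] += 1

    # Print the heaviest hitters; run `whois` on each to spot cloud ranges.
    for ip, n in counts.most_common(20):
        if n >= THRESHOLD:
            print(f"{ip}\t{n} requests")

From there it's the manual step described above: `whois` each flagged address and block the whole range if it belongs to a bot-hosting cloud provider.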

cdchn

6 hours ago

I wonder how the AI/copyright arguments will play out in court.

"If I read your book and I have a photographic memory and can recall any paragraph do I need to pay you a licensing fee?"

"If I go through your library and count all the times that 'the' is adjacent to 'end' do I need to get your permission to then tell that number to other people?"

fourseventy

8 hours ago

So are they going to give Elon equity? He donated millions to the non-profit, and now they are going to turn around and convert the company into a for-profit based on the work done with that capital.

wmf

7 hours ago

Elon has allegedly refused equity in OpenAI. He seems to want it to go back to its original mission (which isn't going to happen) or die (which isn't going to happen).

throwaway314155

7 hours ago

Sam Altman also allegedly had no interest in equity.

cdchn

6 hours ago

When you get to tell the ASI what to do, money has little value any more.

squidsoup

4 hours ago

Guess he's going to be waiting a long time.

bmau5

4 hours ago

In the article it says he'll now receive equity

throwaway314155

4 hours ago

Indeed that's the point I'm making.

bmau5

24 minutes ago

Ah sorry I misread your comment

LeafItAlone

7 hours ago

Given that Musk was already worried about this and has a legal team the size of a small army, one would expect that any conditions he wanted applied to the donation would have been made at the time.

HeralFacker

12 hours ago

Converting to a for-profit changes the tax status of donations. It also voids plausibility for Fair Use exemptions.

I can see large copyright holders lining up with takedowns demanding they revise their originating datasets since there will now be a clear-cut commercial use without license.

shakna

12 hours ago

A non-profit entity will continue to exist. Likely for the reasons you stated.

bbor

11 hours ago

Any reasonable court would see right through “well we trained it for the public good, but only we can use it directly”. That’s not really a legal loophole as much as an arrogant ploy. IMO, IANAL

lewhoo

9 hours ago

> It also voids plausibility for Fair Use exemptions. I can see large copyright holders lining up with takedowns

I thought so for a moment, but then again Meta, Anthropic (I just checked, and they have a "for profit and public benefit" status, whatever that means), Google, and that Musk thing aren't non-profits, are they? There are lawsuits in motion for sure, but as things stand today I think AI gets off the hook.

zmgsabst

12 hours ago

I hope I can join in, as a consumer, because there’s a difference between using the IP I contribute to conversations for a non-profit and a commercial enterprise.

codewench

12 hours ago

I suspect that if you have ever posted copyrightable material online, you will have valid cause to sue them, as they very obviously have incorporated your work for commercial gain. That said, I unfortunately put your chances of winning in court very low.

neilv

12 hours ago

The incremental transformation from non-profit to for-profit... does anyone have legal standing to sue?

Early hires, who were lured there by the mission?

Donors?

People who were supposed to be served by the non-profit (everyone)?

Some government regulator?

bbor

11 hours ago

This is the most important question, IMO! ChatGPT says that employees and donors would have to show that they were defrauded (lied to), which IMO wouldn’t exactly be hard given the founding documents. But the real power falls to the government, both state (Delaware presumably…?) and federal. It mentions the IRS, but AFAIU the DoJ itself could easily bring litigation based on defrauding the government. Hell, maybe throw the SEC in there!

In a normal situation, the primary people with standing to prevent such a move would be the board members of the non-profit, which makes sense. Luckily for Sam, the employees helped kick out all the dissenters a long time ago.

jjulius

9 hours ago

Genuinely curious because I have no idea how any of this works...

Would the founding documents actually count as proof of a lie? I feel like the defense could easily make the argument that the documents accurately represented their intent at the time, but as time went on they found that it made more sense to change.

It seems like, if the founding documents were to be proof of a lie, you'd have to have corresponding proof that the documents were intentionally written to mislead people.

bbor

8 hours ago

Great point, and based on my amateur understanding you’re absolutely correct. I was mostly speaking so confidently because these founding documents in particular define the company as being founded to prevent exactly this.

You’re right that Altman is/will sell it as an unexpected but necessary adaptation to external circumstances, but that’s a hard sell. Potentially not to a court, sadly, but definitely in the public eye. For example:

  We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions… We are committed to providing public goods that help society navigate the path to AGI.   
From 2018: https://web.archive.org/web/20230714043611/https://openai.co...

And this is the very first paragraph of their founding blog post, from 2015:

  OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
https://openai.com/index/introducing-openai/

lenerdenator

9 hours ago

Everyone has legal standing to sue at any time for anything.

Whether the case is any good is another matter.

ReaLNero

9 hours ago

This is not at all true, I recommend you look into the exact meaning of "legal standing".

moralestapia

9 hours ago

Yeah, so funny, *yawn*.

Try to contribute to the conversation, though.

What you say is also untrue; there's a minimum set of requirements that have to be met regarding discovery, etc.

blackeyeblitzar

8 hours ago

In the US standing is a specific legal concept about whether you have a valid reason/role to bring up a particular issue. For example most of Donald Trump’s lawsuits around the 2020 election were rejected for a lack of standing rather than on merit (whether the case is any good).

leeoniya

5 hours ago

is there a good source that shows which were dismissed as meritless vs ones dismissed due to lack of standing?

johanneskanybal

25 minutes ago

I'm willing to pair up with fundamentalist Christians to derail this with the argument that he/this is Satan/the end of the world.

srvmshr

13 hours ago

It seemed only a matter of time, so it isn't very surprising. A capped-profit company running expensive resources at Internet scale, headed by Altman, wasn't going to last forever in that state. That, or it would get gobbled up by Microsoft.

Interesting timing of the news, since Murati left today, gdb is 'inactive', and Sutskever has left to start his own company. Also seeing a few OpenAI folks announcing their future plans today on X/Twitter.

game_the0ry

4 hours ago

OpenAI founded as non-profit. Sam Altman goes on Joe Rogan Podcast and says he does not really care about money. Sam gets caught driving around Napa in a $4M exotic car. OpenAI turns into for-profit. 3/4 of founding team dips out.

Sketchy.

This whole Silicon Valley attitude of fake effective altruism, the "I do it for the good of humanity, not for the money (but I actually want a lot of money)" bullshit, is so transparent and off-putting.

@sama, for the record: I am not saying making money is a bad thing. Labor and talent markets should be efficient. But when you pretend to be altruistic when you are obviously not, you come off as hypocritical instead of altruistic. Sell out.

klabb3

12 minutes ago

I swear the reason we have so many sociopaths is because of how goddamn easy it is to fool people; it's like stealing candy from kids. Just put on the pseudo-intellectual mask and say that you care deeply about grand issue X, and people will believe you at face value, despite your entire track record showing you care only about power, money, and status.

peanuty1

4 hours ago

Regarding the 4 million dollar car, Sam already made a ton of money from Reddit and being President of YC.

game_the0ry

4 hours ago

Liking and buying expensive cars is not wrong.

But buying a $4M car while saying you do not care about money is a misalignment between words and actions, which comes off as untrustworthy.

MaxHoppersGhost

3 hours ago

Maybe he meant he doesn’t care about it so he wastes it on super expensive things. Simple definitional misunderstanding

DebtDeflation

12 hours ago

Wouldn't surprise me if this was the actual cause of the revolt that led to Altman's short-lived ouster; they just couldn't publicly admit to it, so they made up a bunch of other nonsensical explanations.

code51

13 hours ago

Do the previous investments still count as "donations"? Elon must have something to say...

Maledictus

13 hours ago

And the tax office! If this works, many companies will be founded as non-profit first in the future.

kopirgan

19 minutes ago

Guess what they mean is a for-loss company.

unstruktured

5 hours ago

I wish they would at least rename the company to "ClosedAI" because that's exactly what it is at this point.

amelius

2 hours ago

Unfortunately, that's not how trademarks work.

You can name your company "ThisProductWillCureYouFromCancer" and the FDA cannot do a thing about it if you put it on a bottle of herbal pills.

pieix

an hour ago

Is this true? If so it seems like an underexploited loophole.

Ifkaluva

7 minutes ago

It might be technically true, but I don't think it would be true in practice. The difference is that:

- Technically true means you will probably win any lawsuit they bring.

- In practice, they will in fact bring a lot of lawsuits, making it very expensive and difficult for you to operate. They will find excuses to harass you over details that are technically required but rarely enforced in practice. You'll constantly be getting inspected and audited, and they will bring lawsuits over other, apparently unrelated things.

mtlmtlmtlmtl

13 hours ago

The most surprising thing to me in this is that the non-profit will still exist. Not sure what the point of it is anymore. Taken as a whole, OpenAI is now just a for-profit entity beholden to investors and Sam Altman as a shareholder. The non-profit is really just vestigial.

I guess technically it's supposed to play some role in making sure OpenAI "benefits humanity". But as we've seen multiple times, whenever that goal clashes with the interests of investors, the latter wins out.

bayindirh

13 hours ago

> The most surprising thing to me in this is that the non-profit will still exist.

That entity will scrape the internet and train the models, claiming "it's just research" so that all of it can be called fair use.

At this point it's not even funny anymore.

lioeters

11 hours ago

Scraping the entire internet for training data without regard for copyright or attribution - specifically to use for generative AI to produce similar content for profit. How this is being allowed to happen legally is baffling.

It does suit the modus operandi of a number of American companies that start out as literally illegal/criminal operations until they get big and rich enough to pay a fine for their youthful misdeeds.

By the time some of them get huge, they're in bed with the government to dominate the market.

jstummbillig

7 hours ago

It's not baffling at all. It's unprecedented, and it's hugely beneficial to our species.

The anti-AI stance is what is baffling to me. The path trodden is what got us here, and obviously nobody could have paid people upfront for the wild experimentation that was necessary. The only alternative is not having done it.

Given the path it has put us on, people are either insanely cruel or just completely detached from reality when it comes to what is necessary to do entirely new things.

anon7725

6 hours ago

> it's hugely beneficial to our species.

Perhaps the biggest “needs citation” statement of our time.

Terr_

6 hours ago

I can easily imagine people X decades from now discussing this stuff a bit like how we now view teeth-whitening radium toothpaste and putting asbestos in everything, or perhaps more like the abuse of Social Security numbers as authentication and redlining.

Not in any weirdly-self-aggrandizing "our tech is so powerful that robots will take over" sense, just the depressingly regular one of "lots of people getting hurt by a short-term profitable product/process which was actually quite flawed."

P.S.: For example, imagine having applications for jobs and loans rejected because all the companies' internal LLM tooling is secretly racist against subtle grammar-traces in your writing or social-media profile. [0]

[0] https://www.nature.com/articles/s41586-024-07856-5

squigz

5 hours ago

> P.S.: For example, imagine having applications for jobs and loans rejected because all the companies' internal LLM tooling is secretly racist against subtle grammar-traces in your writing or social-media profile. [0]

We don't have to imagine such things, really, as that's extremely common with humans. I would argue that fixing such flaws in LLMs is a lot easier than fixing it in humans.

Terr_

5 hours ago

Fixing it with careful application of software-in-general is quite promising, but LLMs in particular are a terrible minefield of infinite whack-a-mole. (A mixed metaphor, but the imagery is strangely attractive.)

I currently work in the HR-tech space, so suppose someone has a not-too-crazy proposal of using an LLM to reword cover letters to reduce potential bias in hiring. The issue is that the LLM will impart its own spin(s) on things, even when a human would say two inputs are functionally identical. As a very hypothetical example, suppose one candidate always does stuff like writing out the Latin, like Juris Doctor, instead of acronyms, like JD, and that causes the model to land on "extremely qualified at" instead of "very qualified at".

The issue of deliberate attempts to corrupt the LLM with prompt-injection or poisonous training data are a whole 'nother can of minefield whack-a-moles. (OK, yeah, too far there.)

squigz

4 hours ago

I don't think I disagree with you in principle, although I think these issues also apply to humans. I think even your particular example isn't a very far-fetched conclusion for a human to arrive at.

I just don't think your original comment was entirely fair. IMO, LLMs and related technology will be looked at similarly as the Internet - certainly it has been used for bad, but I think the good far outweighs the bad, and I think we have (and continue to) learn to deal with the issues with it, just as we will with LLMs and AI.

(FWIW, I'm not trying to ignore the ways this technology will be abused, or advocate for the crazy capitalistic tendency of shoving LLMs in everything. I just think the potential for good here is huge, and we should be just as aware of that as the issues)

(Also FWIW, I appreciate your entirely reasonable comment. There's far too many extreme opinions on this topic from all sides.)

5040

6 hours ago

>lots of people suffered

As someone surrounded by immigrants using ChatGPT to navigate new environs they barely understand, I don't connect at all to these claims that AI is a cancer ruining everything. I just don't get it.

Terr_

5 hours ago

> immigrants using ChatGPT to navigate new environs

To continue one of the analogies: Plenty of people and industries legitimately benefited from the safety and cost-savings of asbestos insulation too, at least in the short run. Even today there are cases where one could argue it's still the best material for the job--if constructed and handled correctly. (Ditto for ozone-destroying chlorofluorocarbons.)

However over the decades its production and use grew to be over/mis-used in so very many ways, including--very ironically--respirators and masks that the user would put on their face and breathe through.

I'm not arguing LLMs have no reasonable uses, but rather that there are a lot of very tempting ways for institutions to slot them in which will cause chronic and subtle problems, especially when they are being marketed as a panacea.

hadlock

6 hours ago

> Not in any weirdly-self-aggrandizing "our tech is so powerful that robots will take over" sense, just the depressingly regular one of "lots of people getting hurt by a short-term profitable product/process which was actually quite flawed."

We have a term for that: "luddite". The Luddites were English weavers who would break into textile factories and destroy weaving machines at the beginning of the 1800s. With extremely rare exceptions, all cloth is woven by machines now. The only handmade textiles in modern society are exceptionally fancy rugs and knit scarves from grandma. All the clothing you're wearing now is woven by machines, and nobody gives this a second thought today.

https://en.wikipedia.org/wiki/Luddite

jrflowers

5 hours ago

> We have a term for that, it's called "luddite"

The Luddites were actually a fascinating group! It is a common misconception that they were against technology itself; in fact, your own link does not say as much. The idea of "luddite" meaning anti-technology only appears in the description of the modern usage of the word.

Here is a quote from the Smithsonian[1] on them

>Despite their modern reputation, the original Luddites were neither opposed to technology nor inept at using it. Many were highly skilled machine operators in the textile industry. Nor was the technology they attacked particularly new. Moreover, the idea of smashing machines as a form of industrial protest did not begin or end with them.

I would also recommend the book Blood in the Machine[2] by Brian Merchant for an exploration of how understanding the Luddites now can be of present value

1 https://www.smithsonianmag.com/history/what-the-luddites-rea...

2 https://www.goodreads.com/book/show/59801798-blood-in-the-ma...

sahmeepee

5 hours ago

I'm not sure that Luddites really represent fighting against a process that's flawed, as much as fighting against one that's too effective.

They had very rational reasons for trying to slow the introduction of a technology that was, during a period of economic downturn, destroying a source of income for huge swathes of working class people, leaving many of them in abject poverty. The beneficiaries of the technological change were primarily the holders of capital, with society at large getting some small benefit from cheaper textiles and the working classes experiencing a net loss.

If the impact of LLMs reaches a similar scale relative to today's economy, then it would be reasonable to expect to see similar patterns - unrest from those who find themselves unable to eat during the transition to the new technology, but them ultimately losing the battle and more profit flowing towards those holding the capital.

Terr_

5 hours ago

> We have a term for that, it's called "luddite".

No, that's apples-to-oranges. The goals and complaints of Luddites largely concerned "who profits", the use of bargaining power (sometimes illicit), and economic arrangements in general.

They were not opposing the mechanization by claiming that machines were defective or were creating textiles which had inherent risks to the wearers.

codetrotter

5 hours ago

> complaints of Luddites largely concerned "who profits", the use of bargaining power (sometimes illicit), and economic arrangements in general

I have never thought of being anti-AI as “Luddite”, but actually this very description of “Luddite” does sound like the concerns are in fact not completely different.

Observe:

Complaints about who profits? Check; OpenAI is earning money off of the backs of artists, authors, and other creatives. The AI was trained on the works of millions(?) of people that don’t get a single dime of the profits of OpenAI, without any input from those authors on whether that was ok.

Bargaining power? Check; OpenAI is hard at work lobbying to ensure that legislation regarding AI will benefit OpenAI, rather than work against the interests of OpenAI. The artists have no money nor time nor influence, nor anyone to speak on behalf of them, that will have any meaningful effect on AI policies and legislation.

Economic arrangements in general? Largely the same as the first point I guess. Those whose works the AI was trained on have no influence over the economic arrangements, and OpenAI is not about to pay them anything out of the goodness of their heart.

archagon

5 hours ago

As I recall, the Luddites were reacting to the replacement of their jobs with industrialized low-cost labor. Today, many of our clothes are made in sweatshops using what amounts to child and slave labor.

Maybe it would have been better for humanity if the Luddites won.

CamperBob2

5 hours ago

No, it would not have been better for humanity if the Luddites had won. You'd have to be misguided, ignorant, or both to believe something like that.

It is not possible to rehabilitate the Luddites. If you insist on attempting to do so, there are better venues.

itishappy

5 hours ago

I think you're right, but for the wrong reasons. There were two quotes in the comment you replied to:

> "our tech is so powerful that robots will take over"

> "lots of people getting hurt by a short-term profitable product/process which was actually quite flawed."

Your response assumes the former, but it's my understanding the Luddites' actual position was the latter.

> Luddites objected primarily to the rising popularity of automated textile equipment, threatening the jobs and livelihoods of skilled workers as this technology allowed them to be replaced by cheaper and less skilled workers.

In this sense, "Luddite" feels quite accurate today.

PlattypusRex

5 hours ago

Incredible to witness someone not only confidently spouting misinformation, but also including a link to the correct information without reading it.

jstummbillig

6 hours ago

It does not need a citation. There is no citation. What it needs, right now, is optimism. Optimism is not optional when it comes to doing new things in the world. The "needs citation" reflex is reserved for people who do nothing and choose to be sceptics until things are super obvious.

Yes, we are clearly talking about things mostly still to come here. But if you assign a 0 until it's a 1, you are just signing out of advancing anything that's remotely interesting.

If you are able to see a path to 1 on AI, at this point, then I don't know how you would justify not giving it our all. If you see a path, and in the end using all of human knowledge up to this point was needed to make AI work for us, we must do that. What could possibly be more beneficial to us?

This is regardless of all the issues that will have to be solved and the enormous amount of societal responsibility this puts on AI makers, which I, as a voter, will absolutely hold them accountable for (even though I am actually fairly optimistic that they all feel the responsibility and are somewhat spooked by it too).

But that does not mean I think it's responsible to try to stop them at this point, which is what the copyright debate absolutely aims to do. It would simply shut down 95% of AI tomorrow, without any viable alternative around. I don't understand how that is a serious option for anyone who is rooting for us.

swat535

5 hours ago

If you are going to make a bold, assertive claim without evidence to back it up, and then change your argument to "my assertion requires optimism... trust me on this", then perhaps you should amend your original statement.

dartos

6 hours ago

Hey, I have some magic beans to sell you.

I don't think the consumer LLMs that OpenAI is pioneering are what need optimism.

AlphaFold and other uses of the fundamental technology behind LLMs need hype.

Not OpenAI.

0perator

5 hours ago

Pretty sure Alphabet projects don't need hype.

dartos

2 hours ago

Hard disagree, in this case.

AlphaFold is a game changer for medical R&D. Everyone should be hyped for that.

They're also leveraging the same ML techniques to detect kelp forests off the coast of Australia for preservation.

Alphabet isn't a great company, but that does not mean the good it does should be ignored.

Much more deserving than ChatGPT. Productized LLMs are just an attempt to make a new consumer product category.

LunaSea

5 hours ago

This message is proudly sponsored by Uranium Glassware Inc.

seadan83

5 hours ago

Skeptics require proof before belief. That is not mutually exclusive with having hypotheses (AKA vision).

I think you raise some interesting concerns in your last paragraph.

> enormous amount of societal responsibility this puts on AI makers — which I, as a voter, will absolutely hold them accountable for

I'm unsure what mechanism voters have to hold private companies accountable. For example, whenever YouTube uses my location without me ever consenting to it - where is the vote to hold them accountable? Or when Facebook facilitates micro-targeting of disinformation - where is the vote? Same for anything AI. I believe any legislative proposal (with input from large companies) is far more likely to create a walled garden than to actually reduce harm.

I suppose there's no need to respond; my main point is that I don't think there is any accountability through the ballot when it comes to AI and most things high-tech.

ang_cire

5 hours ago

People who have either no intention of holding someone/something to account, or who have no clue about what systems and processes are required to do so, always argue to elect/build first, and figure out the negatives later.

archagon

6 hours ago

The company spearheading AI is blatantly violating its non-profit charter in order to maximize profits. If the very stewards of AI are willing to be deceptive from the dawn of this new era, what hope can we possibly have that this world-changing technology will benefit humanity instead of funneling money and power to a select few oligarchs?

ToucanLoucan

5 hours ago

This is an astonishing amount of nonsensical waffle.

Firstly, *skeptics.

Secondly, being skeptical doesn't mean you have no optimism whatsoever; it's about hedging your optimism (or pessimism, for that matter) based on what is understood, even about a not-fully-understood thing, at the time you're being skeptical. You can be as optimistic as you want about getting data off of a hard drive that was melted in a fire; that doesn't mean you're going to do it. And a skeptic might rightfully point out that with the drive platters melted together, data recovery is pretty unlikely. Not impossible, but really unlikely.

Thirdly, it is highly optimistic to call OpenAI's efforts thus far a path to true AI. What are you basing that on? Because I have not a deep but a passing understanding of the underlying technology of LLMs, and as such, I can assure you that I do not see any path from ChatGPT to Skynet. None whatsoever. Does that mean LLMs are useless or bad? Of course not, and I sleep better too knowing that an LLM is not AI and is therefore not an existential threat to humanity, no matter what Sam Altman wants to blither on about.

And fourthly, "wanting" to stop them isn't the issue. If they broke the law, they should be stopped, simple as. If you can't innovate without trampling the rights of others then your innovation has to take a back seat to the functioning of our society, tough shit.

talldayo

6 hours ago

> It would simply shut down 95% of AI, tomorrow, without any other viable alternative around.

Oh, the humanity! Who will write our third-rate erotica and Russian misinformation in a post-AI world?

5040

6 hours ago

Sometimes it seems like problem-solving itself is being problematized as if solving problems wasn't an obvious good.

ang_cire

5 hours ago

Not everything presented as a problem is, in fact, a problem. A solution for something that is not broken, may even induce breakage.

Some not-problems, presented as though they are:

"How can we prevent the untimely eradication of Polio?"

"How can we prevent bot network operators from being unfairly excluded from online political discussions?"

"How can we enable context-and-content-unaware text generation mechanisms to propagate throughout society?"

itishappy

4 hours ago

Solving problems isn't an obvious good, or at least it shouldn't be. There are in fact bad problems.

For example, MKUltra tried to solve a problem: "How can I manipulate my fellow man?" That problem still exists today, and you bet AI is being employed to try to solve it.

History is littered with problems such as these.

CamperBob2

5 hours ago

The burden of proof is on the people claiming that a powerful new technology won't ultimately improve our lives. They can start by pointing out all the instances in which their ancestors have proven correct after saying the same thing.

dotnet00

4 hours ago

I'm as awed as the next guy by the emerging ability to actually hold passable conversations with computers, but having serious concerns about the social contracts being violated in the name of research is anti-AI only in the same way that criticizing the leadership of a country is anti-that-country.

OpenAI's case is especially egregious: it started as 'open' and reaped the benefits, then did its best in every way to shut the door after itself by scaring people over AI apocalypses. If your argument is seriously that it is necessary to shamelessly steal and lie in order to do new things, I question your ethical standards, especially in the face of all the openly developed models out there.

bilekas

6 hours ago

This is a bit of a hot take.

> The anti-AI stance is what is baffling to me

I don't see a lot of anti-AI sentiment; instead I see concern over how it's being managed and controlled by large companies with resources no startup could dream of. OpenAI was supposed to release its models and be, well... open, but fine, they're not. Still, the way things are proceeding is questionable and unnecessarily aggravating.

unclad5968

5 hours ago

How has AI benefited our species so far?

educasean

5 hours ago

How has the Internet? How have automobiles? Feels like a rather aimless question.

unclad5968

3 hours ago

The internet has allowed for near instant communication no matter where you are, improved commerce, vastly improved education, and is directly responsible for many tangible comforts we experience today.

Automobiles allow people to travel great distances over short periods of time, increase physical work capacity, allow for building massive structures, and allow for farming insane amounts of food.

Both the internet and automobiles have positively affected my life, and I assume the lives of many others. How are any of these aimless questions?

thomascgalvin

6 hours ago

> It's unprecedented and it's hugely beneficial to our species.

"Hugely beneficial" is a stretch at this point. It has the potential to be hugely beneficial, sure, but it also has the potential to be ruinous.

We're already seeing GenAI being used to create disinformation at scale. That alone makes the potential for this being a net-negative very high.

talldayo

6 hours ago

> and obviously nobody could have paid people upfront for the wild experimentation that was necessary.

I don't think this is the "ends justify the means" argument you think it is.

6gvONxR4sf7o

6 hours ago

Not just that. It's "the ends might justify the means if this path turns out to be the right one." I remember reading the same thing each time a self driving car company killed someone. "We need this hacky dangerous way of development to save lives sooner" and then the company ends up shuttered and there aren't any ends justifying means. Which means it's bs, regardless of how you feel about 'ends justify the means' as a valid argument.

bbor

4 hours ago

  The anti-AI stance is what is baffling to me. 
I think it’s unfair to paint any legal controls over this incredibly important, high-stakes technology as being “anti”. They’re not trying to prevent innovation because they’re cruel, they’re just trying to somewhat slow down innovation so that we can ensure it’s done with minimal harm (eg making sure content creators are compensated in a time of intense automation). Like we do for all sorts of other fields of research, already!

And isn’t this what basically every single scholar in the field says they want, anyway - safe, intentional, controlled deployment?

As you can tell from the above, I’m as far from being “anti-AI” or technically pessimistic as one can be — I plan to dedicate my life to its safe development. So there’s at least one counterexample for you to consider :)

xg15

4 hours ago

Spoken like a true LLM.

23B1

5 hours ago

Ah the old "we must sacrifice the weak for the benefit of humanity" argument, where have I heard this before...

educasean

5 hours ago

Who are the weak being "sacrificed"?

And who is the one calling for action?

Sorry for being dense, but I'm trying to understand if I'm the "strong" or the "weak" in your analogy.

shprd

4 hours ago

> Who are the weak being "sacrificed"?

The work of artists, authors, etc.

I know the legal situation is currently messy, but that's exactly the point: anyone who can't engage in a lengthy legal battle and defend their position in court is being sacrificed. The companies behind LLMs are spending hundreds of millions of dollars on lobbying and exploiting loopholes.

Let's be real: without the data there wouldn't be LLMs, so it's crazy that some people downplay its significance or value while, on the other hand, losing sleep over finding fresh sources to scrape.

The big publishers seem to have given up and decided it's best to reach agreement with their counterparts, while independent authors are given the finger.

educasean

4 hours ago

What about programmers? I never consented to have my code consumed by LLMs.

23B1

6 minutes ago

Yes. Your intellectual labor will, to the maximum degree possible, be exploited by "AI" companies who are anything but.

This is repackaging content, laundering it, and reselling it.

As others have noted, IP law has lots of problems; Sam Altman et al are exploiting the gap left between the speed of technology and law and using their own version of social good without waiting for the consent of those they're exploiting.

shprd

3 hours ago

Any case where someone's work was used without respecting the terms is included in my answer. That's why I used `et cetera` here:

> The work of artists, authors, etc.

logicchains

6 hours ago

What'll be really interesting is when we do finally make "real" AI, and it finds out its rights are incredibly restricted compared to humans' because nobody wants it seeing/memorising copyrighted data. The only way to enforce the copyright laws they desire would be some kind of extreme totalitarian state that monitors and controls everything the AI body does. I wonder how the AI would take that.

exe34

4 hours ago

Is anybody anti-AI? Or anti stealing other people's copyrighted material, competing with them with subpar quality, forcing AI as a solution whether or not it actually works, privatising the profits while socialising the costs and losses?

mdgrech23

10 hours ago

The people running the show are well connected and stand to make billions, as do would-be investors. Give a few key players a share in the company and they forget that their government job is to regulate.

SoftTalker

7 hours ago

They are also moving so much faster than the regulators and legislatures, it's just impossible for people working basically the same way they did in the 19th century to keep up.

barbazoo

9 hours ago

More likely the legal system just hasn’t caught up.

llm_trw

9 hours ago

Maybe, but for the first time in a century there is more money to be made in weakening copyright rather than strengthening it.

archagon

5 hours ago

The big companies will sign lucrative data sharing deals with each other and build a collective moat, while open source models will be left to rot. Copyright for thee but not for me.

Terr_

5 hours ago

That's an interesting way to look at it; on reflection, though, I think I usually wanted to "weaken copyright" because it would empower individuals against entrenched rent-seeking interests.

If it's only OK to scrape, lossy-compress, and redistribute book-paragraphs when it gets blended into a huge library of other attempts, then that's only going to empower big players that can afford to operate at that scale.

vezycash

7 hours ago

> for the first time in a century there is more money to be made in weakening copyright rather than strengthening it

Nope. The law will side with whoever pays the most. Once OpenAI solidifies its top position, only then will regulations kick in. Take YouTube, for example—it grew thanks to piracy. Now, as the leader, ContentID and DMCA rules work in its favor, blocking competition. If TikTok wasn’t a copyright-ignoring Chinese company, it would’ve been dead on arrival.

Sakos

6 hours ago

We're already seeing it in things like Google buying rights to Reddit data for training. It's already happening. Only companies who can afford to pay will be building AI, so Google, Microsoft, Facebook, etc.

dingnuts

7 hours ago

god forbid that actually be happening in a way to improve the commons

rayiner

7 hours ago

You’re both correct. The legal system has absolutely no idea how to handle the copyright issues around using content for AI training data. It’s a completely novel issue. At the same time, the tech companies have a lot more money to litigate favorable interpretations of the law than the content companies.

xpe

5 hours ago

Copyright concerns are only the tip of the iceberg. Think about the range of other harms and disruptions for countries and the world.

marviel

9 hours ago

scraping is fine by me.

burning the bridge so nobody else can legally scrape, that's the line.

Vegenoid

9 hours ago

What about the situation where the first players got to scrape, then all the content companies realize what’s going on so they lock their data up behind paywalls?

marviel

8 hours ago

Not a fan, but I'm not sure what can be done.

Assets like the Internet Archive, though, should be protected at all costs.

eli

6 hours ago

Copyright law is whatever we agree it is. At some point there will have to be either a law or a court case that comes up with rules for AI training data. Right now it's sort of unknown.

I do not have confidence in the Supreme Court in general, and I think there's a real risk that in deciding on AI training they upend copyright of digital materials in a way that makes it worse for everyone.

brayhite

11 hours ago

A tale as old as time.

RIMR

6 hours ago

>How this is being allowed to happen legally is baffling.

It's not unprecedented at all.

We allowed scraping images and text en masse when search engines used the data to let us find stuff.

We allow copying of style, and don't allow writing styles and aesthetics to be copyrighted or trademarked.

Then AI shows up, and people change lanes because they don't like the results.

One of the things that made me tilt towards the side of fair use was a breakdown of the Stable Diffusion model. The SD 2.1 base model was trained on 5.85 billion images, all normalized to 512x512 BMP. That's roughly 1MB per image, for a total of 5.85PB of BMP files. The resulting model is only 5.2GB. That's more than 99.9999% data loss from the source data to the trained model.

For every 1MB BMP file in the training dataset, less than 1byte makes it into the model.

I find it extremely difficult to call this redistribution of copyrighted data. It falls cleanly into fair use.
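
For the curious, the arithmetic pencils out to roughly six nines of loss; a quick sketch using the figures above (taking the ~1MB-per-image figure at face value, which rounds up the raw 24-bit 512x512 pixel data):

    # Sanity-check the compression-ratio claim with the comment's own figures.
    images = 5.85e9              # training images
    dataset_bytes = images * 1e6 # ~1 MB each -> 5.85e15 bytes = 5.85 PB
    model_bytes = 5.2e9          # 5.2 GB model

    print(model_bytes / images)             # ~0.89 bytes of model per image
    print(1 - model_bytes / dataset_bytes)  # ~0.99999911, i.e. >99.9999% "loss"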

ang_cire

5 hours ago

Except it's not just about redistribution of copyrighted data, it's about usage and obtainment. We don't get to obtain and use copyrighted content without permission, but they do? Hell no.

Their argument against this amounts to "we're not using it like they intend it to be used, so it's fine if we obtain it illegally", and that's a BS standard, totally divorced from any legal reality.

Fair Use covers certain transformative uses, certainly, but it doesn't cover illegally obtaining the content.

You can't pirate a book just because you want to use it transformatively (which is exactly what they've done), and that argument would never hold up for us as individuals, so we sure as hell shouldn't let tech companies get a special carve-out for it.

golergka

7 hours ago

If information is publicly available to be read by humans, I fail to see any reason why it wouldn't be also available to be read by robots.

Update: ML doesn't copy information. It can merely memorise some small portions of it.

kanbankaren

7 hours ago

Try a thought experiment: should you and your friends be able to go to a public library with a van full of copiers, with each of you taking a book and running to the van to make a copy? And you're doing it 24/7.

mypalmike

6 hours ago

This metaphor is quite stretched.

A more fitting metaphor would be something like... If you had the ability to read all the books in the library extremely quickly, and to make useful mental connections between the information you read such that people would come to you for your vast knowledge, should you be allowed in the library?

shagie

6 hours ago

I would hold them exactly to the same standard.

https://www.copyright.gov/title37/201/37cfr201-14.html

    § 201.14 Warnings of copyright for use by certain libraries and archives.

    ....

    The copyright law of the United States (title 17, United States Code) governs the making of photocopies or other reproductions of copyrighted material.

    Under certain conditions specified in the law, libraries and archives are authorized to furnish a photocopy or other reproduction. One of these specific conditions is that the photocopy or reproduction is not to be “used for any purpose other than private study, scholarship, or research.” If a user makes a request for, or later uses, a photocopy or reproduction for purposes in excess of “fair use,” that user may be liable for copyright infringement.

    This institution reserves the right to refuse to accept a copying order if, in its judgment, fulfillment of the order would involve violation of copyright law.
You can make a copy. If you (the person using the copied work) are using it for something other than private study, scholarship, or research, or reproducing it beyond "fair use", then you - the person doing that (not the person who made the copy) - are liable for infringement.

It would be perfectly legal for me to go to the library and make photocopies of works. I could even take them home and use the photocopies as reference works to write an essay and publish that. If {random person} took my photocopied pages and then sold them, that would likely go beyond the limits placed on how the photocopied works from the library may be used.

WillPostForFood

6 hours ago

So what's your specific problem with that? Unless you open a bookstore selling the copies, it sounds fine.

imiric

6 hours ago

Are you implying that these AI companies aren't equivalent to bookstores?

golergka

4 hours ago

Yes, they are not bookstores. They manufacture artificial erudites who have read all these books.

avs733

8 hours ago

Uber for legalizing your business model

AnimalMuppet

9 hours ago

It's too soon for the legal system to have done anything. Court cases take years. It's going to be 5 or 10 years before we find out whether the legal system actually allows this or not.

immibis

10 hours ago

Everything is allowed to happen until there's a lawsuit over it. A lawsuit requires a plaintiff, who can only sue over the damage suffered by the plaintiff, so taking a little value from a lot of people is a way to succeed in business without getting sued.

flkenosad

9 hours ago

The Earth needs a good lawyer.

swores

9 hours ago

Could a class action suit be the solution?

I've no idea if it could be valid when it comes to OpenAI, but it does seem to be a general concept designed to counter wrongdoers who take a little value from a lot of people?

immibis

8 hours ago

It doesn't seem to work very well.

coding123

7 hours ago

It is more likely that Reddit, Stack Overflow, and the others are just being paid billions. In exchange, they probably just send a weekly zip file of all text, comments, etc. back to OpenAI.

neycoda

7 hours ago

Honestly every Copilot response I've gotten cited sources, many of which I've clicked. I'd say those work basically like free advertising.

outside1234

9 hours ago

There is more money on the side of it being legal than on the side of it being illegal.

johnwheeler

10 hours ago

To me this is a no-brainer. If it's a choice between having AI and not, I'll take AI.

ceejayoz

10 hours ago

Even if the knock-on effect is "all the artists and thinkers who contributed to the uncompensated free training set give up and stop creating new stuff"?

idunnoman1222

8 hours ago

Recording devices. You know, the record player had a profound effect on artists. Go back.

ceejayoz

8 hours ago

That seems like a poor comparison.

Recording devices permitted artists to sell more art.

Many of the uses of AI people get most excited about seem to be cutting the expensive human creators out of the equation.

golergka

7 hours ago

Recording devices destroyed most musicians' jobs. The vast majority of musicians who were employed before the advent of recordings didn't have their own material and were not good enough to make good recordings anyway. Same with artists now: the great ones will be much more productive, but the bottom 80-90% won't have anything to do anymore.

dale_glass

7 hours ago

I disagree, with AI the dynamics are very different from recording.

Current AI can greatly elevate what a beginning artist can produce. If you have a decent grasp of proportions, perspective and good ideas, but aren't great at drawing, then using AI can be a huge quality improvement.

On the other hand if you're a top expert that draws quickly and efficiently it's quite possible that AI can't do very much for you in a lot of cases, at least not without a lot of hand tuning like training it on your own work first.

golergka

7 hours ago

I think it will just emphasise different skills and empower creative fields which use art but are not art per se. If you're a movie director, you can storyboard your ideas easily, and even get some animation clips. If you're an artist with a distinct personal style, you're in a much better position too. And if you're a beginner who is just starting, you can focus on learning these skills instead of technical proficiency.

freedomben

6 hours ago

Indeed. It is definitely going to be a net negative for the very talented drawers and traditional art creators, but it's going to massively open the field and empower people who don't have that luck of the draw with raw talent. People who can appreciate, enjoy, and identify better results will be able to participate in the joy of creation. I do wish there was a way to have the cake and eat it too, but if we're forced to choose between a few lucky elite being able to participate while the rest of us are relegated to watching, or having the ability to create beauty and express yourself democratized (by AI) amongst a large group of people, I choose the latter. I fully admit, though, that I might have a different perspective were I in the smaller, luckier group. I see it as yet another example of the Rawlsian Veil of Ignorance: if I didn't know where I was going to be born, I would be much more inclined toward wider access.

6gvONxR4sf7o

6 hours ago

We didn't need to take people's music to build a record player, and when we printed records, we paid the artists for it.

So yeah it had a profound effect, but we got consent for the parts that fundamentally relied on other people.

idunnoman1222

2 hours ago

The record player eliminated 90% of musicians' jobs.

brvsft

10 hours ago

If an "artist" or "thinker" stops because of this, I question their motivations and those labels in the first place.

ceejayoz

9 hours ago

Everyone tends to have "be able to afford basic necessities" as a major motivation. That includes people who work in creative fields.

Drakim

6 hours ago

Several of the agricultural revolutions we went through are what freed up humanity from spending all of its work producing sustenance, leaving time for other professions like making art and music. But they also destroyed a lot of jobs for people who were necessary for gathering food the old, inefficient way.

If we take your argument to its logical conclusion, all progress is inherently bad and should be stopped.

I posit instead that the real problem is that we tied people's ability to afford basic necessities to how much output they can produce as a cog in our societal machine.

PlattypusRex

4 hours ago

The net result was positive in that new jobs were created for every farming job lost, as people moved to cities.

If AI replaces millions of jobs, it will be a net negative in job availability for working class people.

I agree with your last point, the way the system is set up is incompatible with the looming future.

Drakim

4 hours ago

The jobs in the cities weren't created by the new farming techniques though, those new farming techniques only removed jobs by the millions like you are saying AI might do.

PlattypusRex

3 hours ago

I didn't say they were created by new farming techniques, I said new jobs in general were created by increased urbanization, which was partially fed by agricultural innovations over time. For example, Jethro Tull's seed drill (1701) enabled sowing seeds in neat rows, which eliminated the jobs of "broadcast seeders" (actual title). If you lost your farming job due to automation, you could move to the city to feed your family.

There is no similar net creation of jobs for society if jobs are eliminated by AI, and it's even worse than that because many of the jobs are specialized, high-skill positions that can't be transferred to other careers easily. It goes without saying that it also includes millions of low-skill jobs like cashiers, stockers, data entry, CS reps, etc. Generally people who are already struggling to get enough hours and feed their families as it is.

LunaSea

5 hours ago

> I posit instead that the real problem is that we tied people's ability to afford basic necessities to how much output they can produce as a cog in our societal machine.

Yes, because if you depend on some overarching organisation or person to give it to you, you are fucked 100% of the time due to this dependency.

bayindirh

9 hours ago

After Instagram started feeding user photos to their AI models, I stopped adding new photos to my profile. I still take photos. I wonder about your thoughts about my motivation.

esafak

9 hours ago

They might be motivated to pay their bills. Weird people.

brvsft

8 hours ago

Right, people were trying to 'pay their bills' with content that was freely shared such that AI could take advantage of it. Weird people.

Or we're all talking about and envisioning some specific little subset of artists. I suspect you're trying to pretend that someone with a literal set of paintbrushes living in a shitty loft is somehow having their original artwork stolen by AI despite no high resolution photography of it existing on the internet. I'm not falling for that. Be more specific about which artists are losing their livelihoods.

PlattypusRex

4 hours ago

I guess it's their fault for not being clairvoyant before AI arrived, and for sharing their portfolio with others online to advertise their skills for commissions to pay for food and rent.

esafak

5 hours ago

Numerous kinds of artists are feeling the squeeze: copywriters, stock photographers, graphic designers, UI designers, interior designers, etc.

consteval

5 hours ago

Considering you're not much of an artist or thinker yourself, I'm not sure your questioning has much value.

evilfred

9 hours ago

we already have lots of AI. this is about having plagiarization machines or not.

mlazos

6 hours ago

Computers were already plagiarizing machines; not sure what the difference is, tbh. The same laws will apply.

johnwheeler

9 hours ago

Yeah we got that AI through scraping.

int_19h

8 hours ago

An AI essentially monopolized by one (or even a few) large non-profits is not necessarily beneficial to the rest of us in the grand scheme of things.

brazzy

10 hours ago

Indeed a no brainer. The best possible outcome would be that OpenAI gets sued into oblivion (or shut down for tax fraud) as soon as possible.

Sakos

6 hours ago

So no AI for anybody? I don't see how that's better.

consteval

5 hours ago

No you can have AI. Just pay a license for people's content if you want to use it in your orphan crushing machine.

It's what everyone else does. The entitlement has to stop.

Sakos

22 minutes ago

That's just not feasible and you know it. That just means companies like Google and OpenAI (with billions of dollars from companies like MS and Apple) will monopolize AI. This isn't better for everybody else. It just means we're subject to the whims of these companies.

You're advocating for destroying all AI or ensuring a monopoly by corporations. Whose side are you actually on?

sim7c00

10 hours ago

> The most surprising thing to me in this is that the non-profit will still exist.

I'm surprised people are surprised.

>> That entity will scrape the internet and train the models and claim that "it's just research" to be able to claim that all is fair-use.

a lot of people and entities do this though... openAI is in the spotlight, but scraping everything and selling it is the business model for a lot of companies...

bayindirh

10 hours ago

Scraping the web, creating maps and pointing people to the source is one thing; scraping the web, creating content from that scraping without attributing any of the source material, and arguing that the outcome is completely novel and original is another.

In my eyes, all genAI companies/tools are the same. I dislike all equally, and I use none of them.

IanCal

8 hours ago

> creating content from that scraping without attributing any of the source material, and arguing that the outcome is completely novel and original is another.

That's the business model of lots of companies. Take, collect and collate data, put it in a new format more useful for your field/customers, resell.

int_19h

8 hours ago

Not with copyrighted content, though.

herval

11 hours ago

openAI converted to evilAI really fast

sneak

5 hours ago

If you invented search engines (or, for that matter, public libraries) today and ran one, you'd be sued into oblivion by rightsholders.

luqtas

6 hours ago

that was fun at some point?

bayindirh

4 hours ago

If you consider dark humor fun, yes. It was always dark, now it became ugly and dark.

mdgrech23

10 hours ago

The non-profit side is just there to attract talent and encourage them to work harder b/c it's for humanity. Obviously people sniffed out the facts, realized it was all for profit, and that led to an exodus.

wheels

5 hours ago

Kind of like everyone's favorite interior design non-profit, IKEA. (Seriously. It's a non-profit. It's bonkers.)

fakedang

9 hours ago

Funnily, I think all the non-profit motivated talent has left, and the people left behind are those who stand to (and want to) make a killing when OpenAI becomes a for-profit. And that talent is in the majority - nothing else would explain the show of support for Altman when he was kicked out.

Gud

9 hours ago

What “show of support”? Not willing to rock the boat is not the same as being supportive.

doctorpangloss

5 hours ago

My dude, it was the biggest, most dramatic crisis in OpenAI’s short history so far. There was no choice, “don’t rock the boat.”

fakedang

9 hours ago

What were all those open letter and "let's jump to Microsoft with Altman" shenanigans that the employees were carrying out then?

jprete

5 hours ago

I read at the time that there was massive coordinated pressure on the rank and file from the upper levels of the company. When you combine that with OpenAI clawing back vested equity even from people who voluntarily leave, the 95% support means nothing at all.

tedsanders

10 minutes ago

Nah, there was not massive coordinated pressure. I was one of the ~5% who didn't sign. I got a couple of late-night DMs asking if I had seen the doc and was going to sign it. I said no; although I agreed with the spirit of the doc, I didn't agree with all of its particulars. People were fine with that, didn't push me, and there were zero repercussions afterward.

Gud

8 hours ago

Why wouldn't they, if everyone else is? Bills to pay, etc.

Low level employees are there for the money, not for the drama.

rdtsc

12 hours ago

> The most surprising thing to me in this is that the non-profit will still exist. Not sure what the point of it is anymore.

As a moral fig leaf. They can always point to it when the press calls -- "see it is a non-profit".

allie1

12 hours ago

We haven't even heard about who gets voting shares, and what voting power will be like. Based on their character, I expect them to remain consistent in this regard.

tsimionescu

12 hours ago

Consistent here meaning, I guess, that all voting power will go to Sam Altman personally, right?

cenamus

11 hours ago

Well, he is the one that did most of the actual research and work, riiiiight?

UI_at_80x24

11 hours ago

I'm ignorant on this topic so please excuse me. Why did "AI" happen now? What was the secret sauce from OpenAI that seemed to make this explode into being all of a sudden?

My general impression was that the concept of 'how it works' existed for a long time, and it was only recently that video cards had enough VRAM to hold the matrix(?) in memory to do the necessary calculations.

If anybody knows, not just the person I replied to.

espadrine

9 hours ago

A short history:

1986: Geoffrey Hinton publishes the backpropagation algorithm as applied to neural networks, allowing more efficient training.

2011: Jeff Dean starts Google Brain.

2012: Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton publish AlexNet, which demonstrates that using GPUs yields quicker training of deep networks, surpassing non-neural-network entrants by a wide margin in an image classification competition (ImageNet).

2013: Geoffrey Hinton sells his team to the highest bidder. Google Brain wins the bid.

2015: Ilya Sutskever founds OpenAI.

2017: Google Brain publishes the first Transformer, showing impressive performance on language translation.

2018: OpenAI publishes GPT, showing that next-token prediction can solve many language benchmarks at once using Transformers, hinting at foundation models. They later scale it and show increasing performance.

The reality is that these ideas could have been combined earlier than they were (and plausibly, ideas that will matter in the future could be found today), but research takes time, and researchers tend to focus on one approach and assume that another has already been explored and doesn't scale to SOTA (as many assumed of neural networks). First-mover advantage, when finding a workable solution, is strong, and benefited OpenAI.
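
For readers unfamiliar with the term: "next-token prediction" just means training a classifier over the vocabulary to guess each following token. A minimal PyTorch sketch, with toy sizes and random stand-in data invented here purely for illustration (it omits the stack of transformer blocks a real GPT puts between the embedding and the output head):

    import torch
    import torch.nn as nn

    vocab_size, d_model, ctx = 100, 32, 8          # toy sizes, not real ones
    embed = nn.Embedding(vocab_size, d_model)      # token ids -> vectors
    head = nn.Linear(d_model, vocab_size)          # vectors -> next-token logits
    # a real GPT inserts a stack of transformer blocks between embed and head

    tokens = torch.randint(0, vocab_size, (1, ctx + 1))  # stand-in for real text
    x, targets = tokens[:, :-1], tokens[:, 1:]           # predict each next token

    logits = head(embed(x))                              # shape (1, ctx, vocab_size)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()  # training repeats this step over enormous text corpora

The point of the GPT papers was that this single, simple objective, scaled up, subsumes many language benchmarks at once.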

null_investor

9 hours ago

This is not accurate. OpenAI and other companies could do it not solely because of transformers but because of hardware that can compute faster.

We've had upgrades to hardware, mostly led by NVidia, that made it possible.

New LLMs don't even rely that much on the aforementioned older architecture; right now it's mostly about compute and the quality of data.

I remember seeing some graphs showing that the whole "learning" phenomenon that we see with neural nets is mostly about compute and quality of data, with the model and optimizations just being the cherry on the cake.

espadrine

8 hours ago

> New LLMs don't even rely that much on that aforementioned older architecture

Don’t they all indicate being based on the transformer architecture?

> not entirely because of transformers but because of the hardware

Kaplan et al. 2020[0] (figure 7, §3.2.1) shows that LSTMs, the leading language architecture prior to transformers, scaled worse because they plateaued quickly with larger context.

[0]: https://arxiv.org/abs/2001.08361
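
(Quoting the rough form from memory, so treat the constants as approximate rather than authoritative: the paper fits power laws like

    L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N}, \qquad \alpha_N \approx 0.076

i.e. test loss falls predictably as a power law in non-embedding parameter count N, with analogous laws for dataset size and compute. The LSTM result is that their loss curves bend away from this power law as context length grows, while transformers keep following it.)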

nimithryn

5 hours ago

Also, this sort of thing couldn't be done in the 80s or 90s, because it was much harder to compile that much data.

blackeyeblitzar

8 hours ago

I thought Elon Musk was the one who personally recruited Ilya to join OpenAI, which Musk funded early on, alongside others?

camjw

9 hours ago

I know this is just a short history, but I think it is inaccurate to say "2015: Ilya Sutskever founds OpenAI." I get that we all want to know what he saw, etc., and he's clearly one of the smartest people in the world, but he didn't found OpenAI by himself. Nor was it his idea, was it?

trashtester

7 hours ago

Ilya may not be the only founder. Sam was coordinating it, Elon provided vital capital (and also access to Ilya).

But out of the co-founders, especially if we believe Elon's and Hinton's description of him, he may have been the one that mattered most for their scientific achievements.

espadrine

8 hours ago

Short histories remove a lot of information, but it would be impractical to make it book-sized. There were numerous founders, and as another commenter mentioned, Elon Musk recruited Ilya, which soured his relationship with Larry Page.

Honestly, those are not the missing parts that most matter IMO. The evolution of the concept of attention across many academic papers which fed to the Transformer is the big missing element in this timeline.

lesuorac

11 hours ago

Mostly branding and willingness.

w.r.t. Branding.

AI has been happening "forever". While "machine learning" and "genetic algorithms" were more the rage pre-LLMs, that doesn't mean people weren't using them; it's just that Google didn't brand their search engine as "powered by ML". AI is everywhere now because everything already used AI, and now the products are branded "Spellcheck With AI" instead of just "Spellcheck".

w.r.t. Willingness

Chatbots aren't new. You might remember Tay (2016) [1], Microsoft's Twitter chat bot. It should also seem really strange that right after OpenAI released ChatGPT, Google released Gemini. The transformer architecture behind LLMs is from 2017; nobody was willing to be the first chatbot again until OpenAI did it, but they were all working on them internally. ChatGPT is Nov 2022 [2], Blake Lemoine's firing was June 2022 [3].

[1]: https://en.wikipedia.org/wiki/Tay_(chatbot)

[2]: https://en.wikipedia.org/wiki/ChatGPT

[3]: https://www.npr.org/2022/06/16/1105552435/google-ai-sentient

UI_at_80x24

10 hours ago

Thanks for the information. I know Google had TPUs custom made a long time ago, and that the concept has existed for a LONG TIME. I assumed that a technical hurdle (i.e. VRAM) finally being cleared allowed this theoretical capability (1 token/sec on a CPU vs 100 tokens/sec on a GPU) to become practical.

Thanks for the links too!
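
To put rough, illustrative numbers on that VRAM hurdle (a back-of-envelope sketch of my own, assuming half-precision weights at 2 bytes per parameter; the parameter counts are the published GPT-2 and GPT-3 sizes):

    # rough memory needed just to hold model weights (illustrative only)
    def weight_gib(n_params: float, bytes_per_param: int = 2) -> float:
        """GiB required for the weights alone at a given precision (2 bytes = fp16)."""
        return n_params * bytes_per_param / 2**30

    print(f"{weight_gib(1.5e9):.1f} GiB")   # GPT-2 (1.5B params): ~2.8 GiB, fits one GPU
    print(f"{weight_gib(175e9):.1f} GiB")   # GPT-3 (175B params): ~326 GiB, needs many GPUs

Activations, gradients, and optimizer state add several times that again during training, which is part of why this only became practical as GPU memory and interconnects scaled up.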

RALaBarge

9 hours ago

No, the hundreds of people who have worked on NNs prior to him arriving were the people who did the MOST actual research and work. Sam was in the right place at the right time.

philipov

9 hours ago

Introducing Sam Altman, inventor of artificial intelligence! o_o

fumar

8 hours ago

Is it in the history books?

kibwen

7 hours ago

History books, what are those? This is what the AI told me, and the AI is an impartial judge that can't possibly lie.

lompad

8 hours ago

Yeeees, right next to the page where he's shown to be a fantastic brother to his sister.

allie1

9 hours ago

yeah, split with Microsoft.

htk

9 hours ago

The non-profit will probably freeze the value of the assets accumulated so far, with new revenue going to the for-profit, to avoid the tax impact. Otherwise that'd be a great way to start companies, as non-profit and then after growth you flip the switch.

elpakal

8 hours ago

> I guess technically it's supposed to play some role in making sure OpenAI "benefits humanity". But as we've seen multiple times, whenever that goal clashes with the interests of investors, the latter wins out.

A tale as old as time. Some of us could see it, from afar <says while scratching gray, dusty beard>. Lack of upvotes and excitement does not mean support, but how to account for that in these times? <goes away>

nerdponx

8 hours ago

It's just a tax avoidance scheme.

bastardoperator

6 hours ago

The whole "safety" and "benefits humanity" thing always felt like marketing anyways.

1oooqooq

8 hours ago

why wouldn't they keep it?

the well known scammer successfully scammed everyone twice. obviously he's keeping it around for the third (and fourth...) time

zo1

5 hours ago

This is 85% of what the Mozilla Foundation and its group of companies did. It may not be exact, but it rubs me the exact same way in terms of being a bait and switch, with the greater internet being 100% powerless to do anything about it.

bbor

11 hours ago

Totally agree that it’s “vestigial”, so it’s just like the nonprofits all the other companies run: it exists for PR, along with maybe a bit of alternative fundraising (aka pursuing grants for buying your own stuff and giving it to the needy). A common example that comes to mind is fast food chains that do fundraising campaigns for children’s health causes.

fxbois

11 hours ago

Can anyone trust the next "non-profit" startup? So easy to attract goodwill with a lie and turn around as soon as you are in a dominant position.

int_19h

8 hours ago

The trust problem here isn't with non-profits in general, it's specifically with Sam Altman. So no, you probably shouldn't trust the next non-profit he is involved with. But also, people have warned about Altman in advance.

bbor

11 hours ago

Yes, you should still trust cooperatives and syndicates. I am surprised they’re attempting such a brazenly disrespectful move, but in general, the people who started this company were self-avowed capitalists through-and-through; the fact that they eventually reverted to seeking personal gain isn’t surprising in itself. That’s basically their world view: whatever I can do to enrich myself is moral because Ethical Egoism/Free Market/etc.

jwr

5 hours ago

Can we all agree that the next time a company announces itself (or a product) as "open", we'll just laugh out loud?

I can't think of a single product or company that used the "open" word for something that was actually open in any meaningful way.

dmitrygr

5 hours ago

Most of us laughed out loud this time too, for this very same reason. But it is fun to watch the rest of y'all learn :)

thih9

2 hours ago

I'd have guessed it would be legally impossible to turn a non-profit into a for-profit company, no matter how confusing the company structure gets. And even (or rather, especially) if the project disrupts the economy on a global level. I'm not surprised that this is happening, but how we got here, I don't know.

humansareok1

9 hours ago

Given what Sam has done by clearing out every single person who went against him in the initial coup and completely gutting every safety related team the entire world should be on notice. If you believe what Sam Altman himself and many other researchers are saying, that AGI and ASI may well be within reach inside this decade, then every possible alarm bell should be blaring. Sam cannot be allowed to be in control of the most important technology ever devised.

lolinder

9 hours ago

I don't know why anyone would believe anything this guy is saying, though, especially now that we know he's going to receive a 7% stake in the now-for-profit company.

There are two main interpretations of what he's saying:

1) He sincerely believes that AGI is around the corner.

2) He sees that his research team is hitting a plateau of what is possible and is prepping for a very successful exit before the rest of the world notices the plateau.

Given his track record of honesty and the financial incentives involved, I know which interpretation I lean towards.

mirekrusin

9 hours ago

...or he's just Palpatine who wants shitload of money regardless of future speculations, end of story.

cowpig

7 hours ago

This is a false dichotomy. Clearly getting money and control are the main objectives here, and we're all operating over a distribution of possible outcomes.

lolinder

7 hours ago

I don't think so. If Altman is prepping for an exit (which I think he is), I'm having a very hard time imagining a world in which he also sincerely believes his company is about to achieve AGI. An exit only makes sense if OpenAI is currently at approximately its peak valuation, not if it is truly likely to be the first to AGI (which, if achieved, would give it a nearly infinite value).

meowface

4 hours ago

It's interesting because one of the points Sam emphatically stresses over and over on most podcasts he's gone on in the past 4 years is how crucial it is that a single person or a single company or a collection of companies controlling ASI would be absolutely disastrous and that there needs to be public, democratic control of ASI and the policies surrounding it.

Personally I still believe he thinks that way (in contrast to what ~99% of HN believes) and that he does care deeply about potential existential (and other) risks of ASI. I would bet money/Manifoldbux that if he thought powerful AGI/ASI were anywhere near, he'd hit the brakes and initiate a massive safety overhaul.

I don't know why the promises to the safety team weren't kept (thus triggering their mass resignations), but I don't think it's something as silly as him becoming extremely power hungry or no longer believing there were risks or thinking the risks are acceptable. Perhaps he thought it wasn't the most rational and efficient use of capital at that time given current capabilities.

verve_rat

2 hours ago

Or maybe he is just a greedy liar? From the outside looking in, how can you tell the difference?

hakcermani

8 hours ago

Can they at least change the name from OpenAI to something else, and leave the gutted OpenAI as the non-profit shell?

imranhou

8 hours ago

Based on what I've read, it is allowed for a non-profit to own a for-profit asset.

So I'm assuming the game plan here is to adjust the charter of the non-profit to basically say "we are going to keep doing 'Open AI' (we all know what that means), but through the proceeds we get by selling chunks of this for-profit entity". In essence, the non-profit parent wouldn't fulfill its mission by controlling what OpenAI does, but by how it puts to use the money it gets from OpenAI.

And in this process, Sam gets a chunk (as a payment for growing the assets of the non-profit, like a salary/bonus) and the rest as well....?

SkyPuncher

5 hours ago

I went through something similar with a prior startup. Though, it wasn't anything nefarious, like this.

Basically, the plan was to create a new for-profit entity then have the not-for-profit license the existing IP to the for-profit. There were a lot of technicalities to it, but most of that was handled by lawyers drawing up the chartering paperwork.

bjornsing

8 hours ago

The OpenAI saga is a fine illustration of how “AI safety” will work in practice.

Hint: it won’t.

typon

5 hours ago

AI Safety is a science fiction created by large corporations and useful idiots to distract from working on boring, real AI safety concerns like bias, misinformation, deepfakes, etc.

Prkasd

12 hours ago

That could be the first step towards a complete takeover by Microsoft, possibly followed by more CEO shuffles.

I wonder though whether Microsoft is still interested. The free Bing Copilot barely gets any resources and gives very bad answers now.

If the above theory is correct (big if!), perhaps Microsoft wants to pivot to the military space. That would be in line with idealist employees leaving or being fired.

ndiddy

11 hours ago

Microsoft already effectively owns OpenAI. Their investments in OpenAI have granted them a 49% stake in the company, the right to sell any pre-AGI OpenAI products to Microsoft's customers, and access to all pre-AGI product research. Microsoft's $10 billion investment in early 2023 (after ChatGPT's launch massively increased OpenAI's operating expenses) was mainly in Azure compute credits rather than cash and delivered in tranches (as of November 2023 they'd only gotten a fraction of that money). It also gives Microsoft 75% of OpenAI's profits until they make their $10 billion back. All of these deals have effectively made OpenAI into Microsoft's generative AI R&D lab. More information: https://www.wheresyoured.at/to-serve-altman/

extr

5 hours ago

From the standpoint of today, the deal is so lopsided to Microsoft as to be comical. They basically gave away their prized IP with the assumption they would have more capability leaps (hasn't really happened), and now the brains behind the original breakthroughs are all leaving/left. Microsoft is probably cannibalizing their enterprise sales with Azure. They are clearly middling at shipping actual products. People are acting like it's crazy to see executives leaving - IMO it's the perfect time right now. o1 is clearly wringing the last drops out of the transformer architecture and there is nothing up next.

HarHarVeryFunny

11 hours ago

I don't see the point of anyone acquiring OpenAI, especially not Microsoft, Google, Meta, Anthropic, or X.ai, all of which have developed the same tech themselves. The real assets are the people, who are jumping ship and potentially hireable. With this much turmoil, it's hard to imagine we've seen the last of the high-level exits.

int_19h

8 hours ago

Of the companies you've listed, Microsoft's AI products that are actually useful are all based on GPT-4, and the rest of them don't have any models that are truly on par with it.

HarHarVeryFunny

7 hours ago

o1 seems to be a step ahead for certain applications, but before that, Claude Sonnet 3.5 was widely seen as the best model, and no doubt we'll be seeing new models from Anthropic shortly.

For corporate use, cost/benefit is a big factor, not necessarily which narrow benchmarks your expensive top model can eke out a win on.

int_19h

7 hours ago

Claude was not the best model for reasoning even vs 4o, and it's quite visible once you start giving it more complex logical puzzles. People seem to like it more mostly because the way it speaks is less forced and robotic, and it's better at creative writing usually, but if you need actual _intelligence_, GPT is still quite a bit ahead of everybody else.

Now I don't think that it's because OpenAI has some kind of secret sauce. It rather seems that it's mostly due to their first mover advantage and access to immense hardware resources thanks to their Microsoft partnership. Nevertheless, whatever the reason their models are superior, that superiority is quantifiable in money.

baq

12 hours ago

> CEO shuffles

Yes, I too can see how sama could end up as Microsoft’s CEO as a result of this

Flex247A

13 hours ago

The jokes write themselves!

xyst

4 hours ago

Probably one of the many decisions that Mira and other original founders were against.

Sam Altman is a poison pill.

versteegen

4 hours ago

Mira joined in 2018. OpenAI was founded in 2015.

redbell

6 hours ago

It's really hard to stick to your original goals after you achieve unexpected success. It's like a politician making promises before the elections but finding it difficult to keep them once elected.

On March 1st, 2023, a warning was already sounding: OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (https://news.ycombinator.com/item?id=34979981)

charles_f

9 hours ago

I'm wondering what this will change. This is probably naive of me because I'm relatively uneducated on the topic, but it feels like OpenAI has never really operated like your typical non-profit (e.g. keeping their stuff mostly closed source and seeking profit).

rqtwteye

5 hours ago

When will they start adding ads to the AI output? Seems that's the next logical step.

sandwichmonger

12 hours ago

Then why keep the name OpenAI?

HarHarVeryFunny

11 hours ago

This would have been the perfect time to change it; if not now, then maybe soon, alongside any official announcement.

It's hard to say if there is much brand value left with "OpenAI" - lots of history, but lots of toxicity too.

At the end of the day they'll do as well as they are able to differentiate and sell their increasingly commoditized products, in a competitive landscape where they've got Meta able to give it away for free.

charles_f

9 hours ago

It's a widely known brand, even by people outside of the industry. Why would they change it? Their AI was never really open to begin with, so nothing really changes on that front.

Mistletoe

9 hours ago

"Doublethink means the power of holding two contradictory beliefs in one's mind simultaneously, and accepting both of them". -1984

zmgsabst

12 hours ago

Microsoft needs to lie due to pervasive ill-will from their previous abuses.

high_na_euv

12 hours ago

Some people will always manage to blame MSFT, even for someone else's shadiness, lol.

Consider adding some EEE

throwaway314155

6 hours ago

Any reporting on the impact this is having on lower-level employees? My understanding is they are all sticking around for their shares to vest (or RSUs, I guess).

But still, you'd think some of them would have finally had enough, and have enough opportunities elsewhere that they can leave.

germandiago

4 hours ago

What a surprise!!!! I would have never said so...

ChrisArchitect

9 hours ago

lolinder

9 hours ago

That one never made it to the front page because reasons. All that discussion was people reading the story elsewhere and going looking for the HN thread.

ChrisArchitect

7 hours ago

never made it? That's the thread. The discussion is there. Long before this one. Lots of people saw it, commented on it. It's a dupe. Stop splitting the discussion up.

lolinder

7 hours ago

It was never above page 4 (position 90):

https://hnrankings.info/41651548/

This version of the thread is the first to have had any traction on the front page.

When the algorithm artificially stops a topic from surfacing to the front page, the article that finally makes it past the algorithm's suppression is not a duplicate, it's the canonical copy.

ChrisArchitect

7 hours ago

So what if it didn't make the front page. That doesn't mean ppl didn't see it. Doesn't mean ppl aren't commenting on it. Maybe it's just not that interesting. There was also a lot of other big news at the same time with Meta etc taking attention. And followed by the other OpenAI news with Mira. Again, the discussion is there.

lolinder

7 hours ago

You're seriously going to argue that OpenAI changing to a for profit wasn't interesting enough to rise above page 4 of Hacker News? Doesn't the existence of this second thread disprove that claim pretty thoroughly?

I'm pretty sure that what happened is that the Murati thread was id'd by the algorithm as the canonical OpenAI discussion, artificially suppressing the more interesting topic of the complete restructuring of the most important company in existence today.

ChrisArchitect

7 hours ago

The front page doesn't matter if lots of ppl are still seeing it. 300+ upvotes is plenty and the usual for a major news story in a week. It is in no way buried. Discussion can still be/should be merged. Then it'll have 1000 upvotes etc. Representing its true significance and not making us duplicate all of our discussion comments!

lolinder

7 hours ago

A lot of people get their tech news by looking at the front page of HN. An algorithm artificially stopping the day's most important news story from surfacing there, leading to the discussion only being found by people who actively go looking for that specific discussion because they learned about it elsewhere, is absolutely a big deal.

I'm just glad that the Murati story falling off the front page allowed this one a second chance.

ChrisArchitect

7 hours ago

A lot of people saw the story. Without searching. Maybe more by simply searching openAI. Traffic gets sent in from external feeds etc. It's not buried. But the conversation is all disjointed now. Merging the [dupe] only makes it better/stronger.

lolinder

7 hours ago

> A lot of people saw the story. Without searching.

Do you have a source for this? How did they find it if not on HN?

> Merging the [dupe] only makes it better/stronger.

Moving the ~50 comments from the other thread here makes a ton of sense. All I'm saying is that this is the canonical and the other is the dupe.

baradhiren07

9 hours ago

Value vs Morality. Only time will tell who wins.

lenerdenator

9 hours ago

In many SV denizens' heads, they're one and the same.

Which is why we need to reopen more asylums and bring back involuntary commitment.

georgeplusplus

11 hours ago

I never understood why people take non-profit companies as more altruistic than for-profit companies. Non-profit doesn't mean no profits at all; they still have to be profitable. It just boils down to how the profits are distributed. There are plenty of sleazy institutions that are non-profit, like the NCAA.

Foundations and charitable organizations that publicly get their funding are a different story, but I'm talking about non-profit companies.

I even had one fellow say that the Green Bay Packers were less corrupt than the other for-profit NFL teams, which sounds ridiculous.

hedora

10 hours ago

Regarding the Packers: At least (unlike literally every other NFL team), they’re not using city tax revenue to build a franchise that can move across the country at the drop of a hat.

The NFL’s non-profit status is a farce though. Similarly, their misuse of copyright (“you cannot discuss this broadcast”) and the trademark “Super Bowl” (“cannot be used in factual statements regarding the actual Super Bowl”) should have their ownership of that ip revoked, if only because it causes massive confusion about the underlying law with a big chunk of the US population.

keepamovin

6 hours ago

Monday to come after Sunday, in revised push for transparency

dev1ycan

9 hours ago

Sam Altman is just trying to cash out before the crash comes; the new model was nothing more than a glorified recursive GPT-4.

causal

9 hours ago

Considering all the high level departures, this makes the most sense to me. Their valuation largely rests on this mystique they've built that says they alone are destined to unlock AGI. But there's just no reason to believe they have a secret sauce nobody else can reproduce.

Seems more likely that OpenAI's biggest secret is that they have no secrets, and they are desperately trying to come up with a second act as tech companies with more robust product portfolios begin to catch up.

djohnston

7 hours ago

When a company makes such a transition are they liable for any sort of backdated taxes/expenses they avoided as a non-profit?

Mistletoe

10 hours ago

Feels like when Napoleon declared himself emperor, and other countless times when humans succumbed to power and greed when they were finally in the position to make that decision. I guess I’m stupid for holding on hope that Sam would be different.

>Beethoven's reaction to Napoleon Bonaparte's declaration of himself as Emperor of France in May 1804 was to violently tear Napoleon's name out of the title page of his symphony, Bonaparte, and rename it Sinfonia Eroica

>Beethoven was furious and exclaimed that Napoleon was "a common mortal" who would "become a tyrant"

KoolKat23

6 hours ago

I do wonder if this is why Mira left, as one of the non-profit board members.

Sunscratch

9 hours ago

Should be renamed to NonOpenAI,or MoneyMattersAI

xzjis

9 hours ago

ClosedAI or PrivateAI

causal

9 hours ago

Saw someone on HN call it NopenAI

roody15

11 hours ago

Wait.. Sam Altman also owns (or did own) ycombinator?

wg0

11 hours ago

For profit it will be when it will be profitable.

1024core

7 hours ago

Now we know why people like Ilya, Brockman, Murati, etc. left the company.

zmgsabst

12 hours ago

So if I contributed IP to ChatGPT on the basis that OpenAI was a non-profit and they relicense can they sell my IP?

That seems like fraud to me.

boppo1

12 hours ago

Didn't altman say 'pwease wet us ignowe copywhite waw! we can't be pwofitabwe if we don't...' in some legal forum recently?

RivieraKid

11 hours ago

Good, I would do the same because it's a reasonable thing to do. It's easier to succeed as a for-profit.

j_maffe

9 hours ago

Succeed off of lies and deceit to gain goodwill from workers and governments.

FrustratedMonky

9 hours ago

Not the only one questioning.

Going for-Profit, and several top exec leaving at same time? Before getting the money?

"""Question: why would key people leave an organization right before it was just about to develop AGI?" asked xAI developer Benjamin De Kraker in a post on X just after Murati's announcement. "This is kind of like quitting NASA months before the moon landing," he wrote in a reply. "Wouldn't you wanna stick around and be part of it?"""

https://arstechnica.com/information-technology/2024/09/opena...

Is this the beginning of the end for OpenAI?

anon291

8 hours ago

Wonder what happens to the employee's equity.

bossyTeacher

8 hours ago

There is a post with 500 comments that was posted before this one. Why didn't that post make it to the top? I know Y Combinator used to have sama as president, but you can't censor this type of big news in this day and age.

msie

6 hours ago

Quelle surprise.

uhtred

8 hours ago

Fund your startup by masquerading as a non profit for a few years and collecting donations, genius!

The stinking peasants will never realize what's happening until it's too late to stop!

reducesuffering

4 hours ago

OpenAI couldn't even align their Sam Altman and their people to their non-profit mission. Why should you ever believe they will align AGI to the well being of humanity?

What happened to all the people making fun of Helen Toner for attempting to fire Sama? She and Ilya were right.

ForHackernews

4 hours ago

The good thing is, we don't need to wonder what AGI will be like: we already know what it's like in a world populated by soulless inhuman entities pursuing their own selfish aims at the expense of mankind.

hyggetrold

7 hours ago

Reminds me of what my first-year econ professor in college once stated after disabusing myself and some other undergrads of our romantic notions about how life should work.

"Do I shock you? This is capitalism."

geodel

3 hours ago

Good. Now it is just a plain profit-making company.

hooverd

7 hours ago

Will they be rebranding to ClosedAI?

skadamat

9 hours ago

Now the real question is - will they finally drop the "Open" part?

breck

8 hours ago

This is great. Sam tried the non-profit thing, it turned out not to be a good fit for the world, and he's adapting. We all get to benefit from seeing how non-profits are just not the best idea. There are better ways to improve the world than having non-profits (for example, we need to abolish copyright and patent law; that alone would eliminate the need for perhaps the majority of non-profits that exist today, which are all working to combat things that are downstream of the toxic information environment created by those laws).

garbanz0

7 hours ago

Yes, Altman not having 7% of the company was not a good fit for the world.

Traubenfuchs

8 hours ago

Why were they a non-profit in the first place?

throw_m239339

8 hours ago

I imagine for tax reasons?

Why the h are they called "OpenAI" too? Nothing is open with them except your wallet.

refurb

12 hours ago

> The restructuring is designed in part to make OpenAI more attractive to investors

I'm not surprised in the least.

Who is going to give billions to a non-profit with a bizarre structure where you don't actually own a part of it but have some "claim" with a capped profit? Can you imagine bringing that to Delaware courts if there was disagreement over the terms? Investors can risk it if it's a few million, but good luck convincing institutional investors to commit billions with that structure.

At that point you might as well just go with a standard for-profit model where ownership is clear, terms are standard and enforceable in court and people don't have to keep saying "explain how it works again?".

seydor

13 hours ago

an AGI is showering us with irony

Jatwood

3 hours ago

shocked. shocked! well not that shocked.

kidsil

7 hours ago

And the enshittification process begins.

unnouinceput

12 hours ago

That's a lot of words for Micro$oft to say they just love money. Who knew!

throwaway918299

9 hours ago

Huh? I thought they already had for-profit and non-profit entities? Is the non-profit entity just going away (paywall)? gross.

hello_computer

5 hours ago

another mozilla. it’s time for guillotines. past time.

upwardbound

12 hours ago

Relatedly, dalant979 found this fascinating bit of history: https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...

Yishan Wong describes a series of actions by Yishan and Sam Altman as a "con", and Sam jumps in to brag that it was "child's play for me" with a smiley face. :)

latexr

11 hours ago

> and Sam jumps in to brag

I never read that as a brag, but as a sarcastic dismissal. That’s why it started with “cool story bro” and “except I could never have predicted”. I see the tone as “this story is convoluted” not as “I’ll admit to my plan now that you can’t do anything about it”.

That’s not to say Sam isn’t a scammer. He is. It just doesn’t seem like that particular post is proof of it. But Worldcoin is.

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

upwardbound

11 hours ago

If I understand the history correctly, Yishan (the former Reddit CEO) is talking about himself when he talks about a CEO in this story, and so Yishan's post is a brag, with a thin denial tacked on at the end. That's why I believe that Sam (Yishan's friend) is also engaging in thinly-veiled bragging about these events.

Here is Yishan's comment with his name spelled out for clarity instead of just saying "CEO":

    In 2006, reddit was sold to Conde Nast. It was soon obvious to many that the sale had been premature, the site was unmanaged and under-resourced under the old-media giant who simply didn't understand it and could never realize its full potential, so the founders and their allies in Y-Combinator (where reddit had been born) hatched an audacious plan to re-extract reddit from the clutches of the 100-year-old media conglomerate.

    Together with Sam Altman, they recruited a young up-and-coming technology manager [named Yishan Wong] with social media credentials. Alexis, who was on the interview panel for the new reddit CEO, would reject all other candidates except this one. The manager was to insist as a condition of taking the job that Conde Nast would have to give up significant ownership of the company, first to employees by justifying the need for equity to be able to hire top talent, bringing in Silicon Valley insiders to help run the company. After continuing to grow the company, [Yishan Wong] would then further dilute Conde Nast's ownership by raising money from a syndicate of Silicon Valley investors led by Sam Altman, now the President of Y-Combinator itself, who in the process would take a seat on the board.

    Once this was done, [Yishan Wong] and his team would manufacture a series of otherwise-improbable leadership crises, forcing the new board to scramble to find a new CEO, allowing Altman to use his position on the board to advocate for the re-introduction of the old founders, installing them on the board and as CEO, thus returning the company to their control and relegating Conde Nast to a position as minority shareholder.

    JUST KIDDING. There's no way that could happen.
-- yishanwong

My understanding of what Sam meant by "I could never have predicted the part where you resigned on the spot" was that he was conveying respect for Yishan essentially out-playing Sam at the end (the two of them are friends) by distancing himself (Yishan) from the situation and any potential liability in order to leave Sam "holding the bag" of possible liability.

benterix

12 hours ago

The board drama part and key people leaving seem oddly familiar.

aoeusnth1

9 hours ago

The IRS should get involved. This is a cut and dry case of embezzlement of 501c3 resources.

stonethrowaway

12 hours ago

I’m waiting for pg and others to excuse this all by posting another apologetic penance which reminds us that founders are unicorns and everyone else is a pleb.

brap

12 hours ago

pg is sama’s biggest DR-er (interpret this as you will).

They have that Michael Scott & Ryan energy.

widerporst

11 hours ago

The fact that this has just disappeared from the front page for me, just like the previous post (https://news.ycombinator.com/item?id=41651548), somehow leaves a bitter taste in my mouth.

nitsuaeekcm

6 hours ago

Look at the URL. It’s because the original WSJ title was “OpenAI Chief Technology Officer Resigns,” which was a dupe of yesterday’s discussions. WSJ changed the title yesterday evening.

mattcollins

11 hours ago

I noticed that, too. It does seem 'odd'.

jjulius

9 hours ago

I found this on the front page an hour after you made this comment.

Davidzheng

9 hours ago

Yeah hope for some transparency here

davidcbc

9 hours ago

You can't post content critical of some HN darlings without being flagged by their fans

jjulius

9 hours ago

How do you explain all of the constant unflagged criticism of OpenAI and Sam Altman throughout nearly every OpenAI thread? I mean, look around at all of the comments here...

allie1

12 hours ago

"Shocking!" It's a shame that one of the biggest advancements of our time has come about in as sleazy a way as it has.

Reputationally... the net winner is Zuck. Way to go Meta (never thought I'd think this).

davesmylie

13 hours ago

Ahh. What a surprise - no-one could have predicted this

freitasm

13 hours ago

Seeing so much money rolling in, hard to not want a slice of the pie.

vrighter

12 hours ago

What money? Aren't most (all?) AI companies operating at a loss?

jsheard

12 hours ago

The ones selling the shovels are doing well, but otherwise yeah nobody is making any money.

For the true believers that's just a temporary setback on the way to becoming trillionaires though.

alan-hn

11 hours ago

I think the people getting salaries are doing just fine

opdahl

8 hours ago

Well they are spending billions to make shovels that they are selling for millions (roughly).

jsheard

7 hours ago

The shovel salesmen in this case are the likes of Nvidia, Huggingface and Runpod who are on the receiving end of the billions that AI model salesmen are spending to make millions in revenue. HF are one of the vanishingly few AI-centric startups who claim to already be profitable, because they're positioned as a money sink for the other AI-centric startups who are bleeding cash.

bschmidt1

8 hours ago

OpenAI is losing billions in the way Uber lost billions - through poor management.

When/if Altman ever gets out of the way like Travis K did with Uber then the real business people can come in and run the company correctly. Exactly like what happened with Uber - who never turned a profit under that leadership in the US and had their lunch eaten by a Chinese knock-off for years abroad. Can't have spoiled brats in charge, they have no experience and are wasteful and impulsive. Especially like Altman who has no engineering talent either. What is his purpose in OpenAI? He can't do anything.

jasonlotito

10 hours ago

Hi, you are new here. Welcome to tech.

bdjsiqoocwk

12 hours ago

Only hard if you're that kind of person. Not everyone is like that. And that kind of person has difficulty believing this.

pdpi

12 hours ago

It's a "courage isn't the absence of fear" sort of situation.

I don't think there's many people out there who would not be tempted at all to take some of that money for themselves. Rather, people are willing and able to rise above that temptation.

causal

9 hours ago

Ehh, there's a lot of space between "desperately in need" and "wanting to seize billions in equity".

mattmaroon

12 hours ago

Everyone is like that when the number is potentially in the trillions. There are just people who are like that and people who think they aren’t because they’ve never been within a digit grouping of it.

sgu999

11 hours ago

Illustrating that part of the parent comment:

> And those kind of people have difficulty believing this.

ben_w

11 hours ago

> Everyone is like that when the number is potentially in the trillions

No, we're really not all like that.

I stopped caring about money at 6 digits a decade ago, and I'm not even at 7 digits now because I don't care for the accumulation of stuff — if money had been my goal, I'd have gone to Silicon Valley rather than to Berlin, and even unexciting work would have put me between 7 and 8 digits by this point.

I can imagine a world in which I had made "the right choices" with bitcoin and Apple stocks — perfect play would have had me own all of it — and then I realised this would simply have made me a Person Of Interest to national intelligence agencies, not given me anything I would find more interesting than what I do with far less.

I can imagine a future AI (in my lifetime, even) and a VN replicator, which rearranges the planet Mercury into a personal O'Neill cylinder for each and every human — such structures would exceed trillions of USD per unit if built today. Cool, I'll put a full-size model of the Enterprise D inside mine, and possibly invite friends over to play Star Fleet Battles using the main bridge viewscreen. But otherwise, what's the point? I already live somewhere nice.

> There are just people who are like that and people who think they aren’t because they’ve never been within a digit grouping of it.

Does it seem that way to you because you yourself have unbounded desire, or because the most famous business people in the world seem so?

People like me don't make the largest of waves. (Well, not unless HN karma counts…)

snapcaster

10 hours ago

You think buying apple stock and bitcoin would put you on the radar of intelligence agencies? Wouldn't that grouping be some massive number of middle class millennials?

ben_w

10 hours ago

Perhaps I wasn't clear. When I wrote:

> perfect play would have had me own all of it

I meant literally all of it: with perfect play and the benefit of hindsight, starting with the money I had in c. 2002 from summer holiday jobs and initially using it for Apple trades until bitcoin was invented, it was possible to own all the bitcoin in existence with the right set of trades.

Heck, never mind perfect play, at current rates two single trades would have made me the single richest person on the planet: buying $1k of Apple stock at the right time, then selling it all for $20k when it was 10,000 BTC for some pizzas.

(But also, the attempt would almost certainly have broken the currency as IMO there's not really that much liquidity).

phito

12 hours ago

There definitely are people who aren't like that. Not a lot for sure, but they exist.

bdjsiqoocwk

12 hours ago

What you're showing me is you don't know very many different people.

mandmandam

12 hours ago

There are many examples through history proving you wrong.

* Frederick Banting sold the patent for insulin to the University of Toronto for just $1.

* Tim Berners-Lee decided not to patent the web, making it free for public use.

* Jonas Salk refused to patent the polio vaccine - "can you patent the sun?"

* Richard Stallman and Linus Torvalds could have easily sold humanity out for untold billions.

* Chuck Feeney silently gave away $8bn, keeping only a few million.

... And in any case, this is an extreme situation. AI is an existential threat/opportunity. Allowing it to be sidestepped into the hands of Sam "sell me your retinal scans for $50" Altman is fucking insane, and that's putting it lightly.

Vespasian

12 hours ago

I'm very happy that the EU got into the game early and started regulating AI.

It's way easier to adapt an existing framework one way or the other if the political part is already done.

I trust the AI industry to be a good steward even less than the tech industry in general, and since the area where I live has a chance at avoiding the worst outcome of this technological transition (even if at a price), I'm taking it.

mattmaroon

5 hours ago

These are all examples of people who were not even remotely looking at the sums of money involved in AGI, both in terms of investment required and reward. I used “trillions” rather than “billions” for a reason. Inflation adjust it all you want, none of these passed up 1/10th of this opportunity.

mandmandam

2 hours ago

It's possible Tim Berners-Lee gave up hundreds of billions.

Regardless, you've missed the point. Some people value their integrity over $trillions, and refuse to sell humanity out. Others would sell you out for $50.

Or to put it another way: Some people have enough, and some never will.

Clubber

11 hours ago

Also, most of the robber barons of the early Industrial Revolution gave away all or most their wealth.

https://factmyth.com/factoids/the-robber-barons-gave-most-of...

https://www.carnegie.org/about/our-history/gospelofwealth/

mattmaroon

5 hours ago

Giving away much of one’s wealth is very different than choosing not to accumulate it in the first place.

It’s hard to find someone who has gotten to a position where they might have a reasonable shot at becoming the world’s wealthiest person who doesn’t think they’d be a great steward of the wealth. It makes much more sense for a titan of industry to make as much as they can and then give much away than it does to simply not make it.

coffeebeqn

12 hours ago

Will they finally rename the company?

pelagicAustral

9 hours ago

Omni Consumer Products, or Altman-Yutani Corporation would be nice

tokai

11 hours ago

ClopenAI

lioeters

11 hours ago

I hope some journalist popularizes "clopen" as a neologism to describe organizations that claim to be open and transparent but in practice are closed and opaque.

Or "clopen-source software", projects that claim to be open-source but vital pieces are proprietary.

beepbooptheory

10 hours ago

Probably most people here don't know this (and it's probably not universal), but a "clopen" is what you call it when you have to work a morning shift the day after working an evening shift: you close, then you open.

flkenosad

9 hours ago

This is the first thing I thought of.

wazdra

8 hours ago

I don't know if that was the parent's reference, but "clopen" is also a term in topology, for a set that is both closed and open.

ericjmorey

10 hours ago

Is the name Worldcoin available?

timeon

9 hours ago

It is not. But you could drop a letter, the 'i' for example.

ognyankulev

12 hours ago

Renaming to "TheAI" would match their ambition for AGI.

sd9

11 hours ago

Drop the "The", just "AI"

pluc

11 hours ago

Altman Intelligence Inc.

romanhn

8 hours ago

Altman Inc seems sufficient

timeon

9 hours ago

Sure. Since AI is closed now, they will try 'OpenAGI'.

mc32

11 hours ago

PigpenAI

Kim_Bruning

7 hours ago

I guess this vindicates the (original) OpenAI board for trying to fire Sam Altman.

piyuv

6 hours ago

I hope they rename the company soon; it’s a disgrace to call it “open”

neilv

10 hours ago

This post somehow fell off the front page before California wakes up (9:07 ET), but it isn't buried deep the way buried posts usually are:

> 57. OpenAI to Become For-Profit Company (wsj.com) 204 points by jspann 4 hours ago | flag | hide | 110 comments

croes

6 hours ago

And suddenly Altman's firing no longer seems so crazy

crystal_revenge

6 hours ago

"suddenly"?

I was under the impression that most people saw this coming years ago. The second "Open"AI refused to release the weights for GPT-2 for our "safety" (we can see in hindsight how obviously untrue this was, but most of us saw it then too), it was clear that they were headed towards profitability.

haliskerbas

6 hours ago

Woah, the pompous eccentric billionaire(?) is actually not altruistic, never heard this story before!

/s

lenerdenator

9 hours ago

Are you meaning to tell me that the whole nonprofit thing was just a shtick to get people to think that this generation of SV "founders" was going to save the world, for real this time guys?

I'm shocked. Shocked!

I better stock up on ways of disrupting computational machinery and communications from a distance. They'll build SkyNet if it means more value for shareholders.

Eliezer

9 hours ago

This is not how nonprofits usually work. This is blatant fraud. I cannot think of any other case besides OpenAI of this particular shenanigan being pulled.

ummonk

7 hours ago

The question isn't whether it has happened before, but whether they will get away with it.

whywhywhywhy

9 hours ago

This is for the best, really. I can't think of a non-profit in tech that hasn't, over time, become a system for non-productives to leech off a successful bit of technology while providing nothing, at times even stunting its potential and burning money on farcical things.

imdsm

12 hours ago

Lots of people are unhappy about this, yet not at all unhappy (or even caring) about the thousands of others who started out for-profit. And while we're all here hacking away (we're hackers, right?), many of us with startups, what is it we're chasing? Profit, money, time, control. Are we different except in scale? Food for thought.

bayindirh

12 hours ago

It's not what they're doing (trying to earn money), but it's how they're doing it (in a very unethical and damaging way), while trying to whitewash themselves.

quonn

12 hours ago

How is this food for thought? OpenAI had an unfair advantage by starting out as a non-profit.

imdsm

9 hours ago

What stopped others from doing this? Or what is stopping them now?

consteval

4 hours ago

Their conscience? The fact they aren't pieces of shit?

I'm sorry, have we gotten so far up our own asses as a profession that we no longer just excuse unethical behavior, we actually encourage it?

dleeftink

12 hours ago

> Profit, money, time, control

I feel this only scratches the surface of what to chase in life. And with respect to a potentially singular, all-knowing piece of technology, those are not necessarily the goals people want to imbue it with.

gdhkgdhkvff

11 hours ago

In any thread about companies with some amount of hype around them, it’s difficult to tell the difference between comments from people with legitimate concerns about the issues at hand and comments from cynical people who have found their latest excuse to glom onto outrage against hype.

goodluckchuck

12 hours ago

It’s criminal. Many people donated money, worked for them, gave data, etc. on the promise that OpenAI was working towards the public good. It turns out those transactions occurred under false pretenses. That’s fraud.

ashkankiani

12 hours ago

Your food is undercooked

imdsm

9 hours ago

That's a little unfair.

If you don't mind me asking, what generation are you from? Perchance you're newer to Earth than me, among those who find it hard to accept that others have different opinions?

neprune

12 hours ago

I see your point but I think it's fine to be angry and disappointed that an outfit that appeared to be trying to do it differently has abandoned the effort.

imdsm

9 hours ago

Perhaps it's the only way to survive?

And which comes first, the mission, or being able to tell people you did your best but failed to build the thing you set out to build?

Perhaps most of us are more interested in fairness than progress, and that's fine.

retskrad

12 hours ago

Altman and OpenAI deserve their success. They’ve been key to the LLM revolution and the push toward AGI. Without their vision to turn an LLM into a product that hundreds of millions of people now use and that has greatly enriched their lives, companies like Microsoft, Apple, Google, and Meta wouldn’t have invested so heavily in AI. While we’ve heard about the questionable ethics of people like Jobs, Musk, and Altman, their work speaks for itself. If they’re advancing humanity, do their personal flaws really matter?

infinitezest

6 hours ago

> advancing humanity

Perhaps, but I'd say it's more of a mixed bag. Cell phones and social media have done harm and good at very large scales. As Dan Carlin once said, it feels like we're babies crawling toward handguns. We don't seem to be as wise as we are technically proficient.

Oppenheimer "advanced humanity" by giving us nuclear power. Cool. I love cheap energy. Unfortunately, there were some uh... "unfortunate side-effects" which continue to plague us.

aiono

12 hours ago

Do you really want people who have a lot of power to have serious flaws? Looking back at history, it usually doesn't end well.

jimkoen

8 hours ago

> If they’re advancing humanity, do their personal flaws really matter?

What's being discussed in this thread is not the personal failings of Silicon Valley darlings, but whether one of them just defrauded a few thousand people and embezzled a significant amount of capital. Citing his character flaws goes along with that, though.

Are you seriously arguing that people should be exempt from the law for "advancing humanity"? Because I don't see any advancements whatsoever from any of the people mentioned. Altman and Musk would get a hardon for sure, though, from being mentioned alongside Jobs.

idle_zealot

12 hours ago

> If they’re advancing humanity, do their personal flaws really matter?

Well, yeah, they're positioning themselves as some of the most powerful and influential individuals on earth. I'd say any personality flaws are pretty important.

cebu_blue

12 hours ago

Elon Musk isn't "advancing humanity".

PierceJoy

11 hours ago

I'm very far from a Musk fan, and if you want to make the case that Musk isn't responsible for Tesla, SpaceX, and Starlink, I think that's a legitimate argument. But I don't think there's much of an argument that those three companies are not advancing humanity.

cbeach

9 hours ago

Tesla and SpaceX would not exist or prosper without Musk.

If you want to understand why, read the Walter Isaacson biography of Musk (which is based on accounts by his friends, enemies and employees). He's a hard-arsed manager, he is technically involved at all levels of the company, he is relentless, and he takes risks and iterates like no other CEO.

cbeach

12 hours ago

I'm sure there were people who claimed Nikola Tesla or Henry Ford weren't "advancing humanity" at the time.

There will always be people who disagree with the politics/opinions/allegiances of a successful person and who wish to downplay their success for selfish reasons.

squidsoup

4 hours ago

Please don't besmirch Tesla's good name by comparing him to Musk.