Sam Altman's pants are on fire

175 points, posted 14 hours ago
by toomuchtodo

93 Comments

nostrebored

13 hours ago

No particular love for Sam Altman, but the article reads as “I was right about X, so my interpretation of Y is correct.”

I don’t see any particular contradiction. Moving fabs onshore is absolutely in the interest of both parties.

Jedd

13 hours ago

That wasn't my take - it was more broadly a 'given past statements and accompanying behaviour, we can't trust future statements to align with future behaviour'.

Can you more explicitly describe what the X and Y points you allude to are?

nostrebored

13 hours ago

Every “6 months ago I said this!” added little to the article. In a real investigative report it would lay out the facts and fairly objectively state “6 months ago, Sam Altman was accused of …”

The article as a whole just seems libelous? Almost personal?

chrisco255

12 hours ago

It's not a report, it's a blog post. Of course it's personal.

nostrebored

11 hours ago

You regularly make personal attacks against individuals on the internet?

Maybe I’m just old, but I don’t see the appeal?

If you’re trying to convince people, then you should probably have a convincing argument. Otherwise it feels like kiwifarms-posting with a megaphone.

chrisco255

8 hours ago

A blog is an opinion piece. The subject of interest, Sam Altman, is a public figure and CEO of one of the fastest-growing tech companies. He's testified in front of Congress on AI regulation and has a lot of pull and influence on regulators. Some of his past statements and actions are controversial; thus, thinkpieces get written. The AI industry is quickly en route to trillion-dollar-plus territory (already there if you count Nvidia as an AI company). There's a lot of money and emotion at stake in the AI gold rush. When someone is at the forefront of these types of things, like other public business figures with controversial tactics (Musk, Gates, Jobs, Kalanick, etc.), it draws attention.

113

2 hours ago

> You regularly make personal attacks against individuals on the internet?

Isn't that basically the history of the internet?

dvfjsdhgfv

3 hours ago

Well, sama managed to convince a lot of people to give his company billions, is making apocalyptic predictions that some CEOs take seriously, etc. Making sure people at large realize the guy has had a very loose relationship with the truth for many years seems like a public service. It's only libel if you spread false statements, which Marcus is careful not to do.

milowata

11 hours ago

Maybe you’re missing the context of who Gary Marcus is

samrus

13 hours ago

The crux of the article was Altman trying to get a bailout and, when people called him on his bullshit, lying that he never wanted a bailout. You can't trust the man.

ineedasername

12 hours ago

It looks like the actual crux was Altman's plea that the backstops be granted... not to OpenAI? The document linked from the article that had the actual ask, rather than cleanup over deliberate misattributions by others, was to... "extend eligibility" for the AMIC money already carved out to companies that are producers of things such as "grid components such as transformers and specialized steel".

So: Altman did not ask for OpenAI loans to be guaranteed, nor did the CFO. It was on behalf of others drawn into the needs of the industry the AMIC grant was supposed to support. Self-interested by OpenAI? Sure! And also not about to make the top 10k leaderboard for "sleazy things companies do".

samrus

16 minutes ago

You can't be this naive. You have to read between the lines. Altman has already shown he's willing to boil us frogs with his attempts to IPO a nonprofit. He's slimy; he doesn't just say things and stand by them, he tests the waters and lies about ever wanting to swim if it's too cold.

He called for general bailouts, which would benefit OAI far, far more than others since it's spending the most, and then, when he got backlash, he lied and said that OAI doesn't want bailouts at all.

You can't be so literalist as to just take the man at his exact word and not try to determine his motivations. Especially this guy, who's made a career out of dancing around Silicon Valley's "no assholes" policy.

mieses

7 hours ago

He might be fun to play a board game with, but not in real life.

hiddencost

13 hours ago

Gary Marcus has been writing a piece like this roughly every 2 months for 20 years, and he gets a lot of attention because he seems respectable (he is not), and many papers are looking for a respectable source that has this opinion.

an0malous

13 hours ago

Why is he not respectable?

ants_everywhere

12 hours ago

Gary Marcus has a very particular view of human psychology that was popular for a while, especially in the early 2000s. Major proponents include Steven Pinker, Noam Chomsky, and Jerry Fodor. This view was heavily influenced by symbolic computers when symbolic computers were new, and so it held considerable mindshare leading into the dotcom boom. For several reasons, one of which is the replication crisis, this view is no longer nearly as popular.

One of the major beliefs of this view is that LLMs are essentially impossible because there's not enough information in language to learn it unless you have a special purpose language-learning module built into the brain by evolution. This is Chomsky's "poverty of the stimulus" argument.

Marcus still defends this view and because of this bias is committed to trying to prove to everyone that LLMs are not possible or at least that they're some kind of illusion. There's a sense in which they threaten his fundamental concept of how the brain works.

In proposing and defending these views he appears to me and others to be having a sort of internal crisis that's playing out publicly where his need to be right about this is coloring his judgment and objectivity.

pinnochio

12 hours ago

> LLMs are essentially impossible

> trying to prove to everyone that LLMs are not possible or at least that they're some kind of illusion

This is such poor phrasing I can't help but wonder if it was intentional. The argument is over whether LLMs are capable of AGI, not whether "LLMs are possible".

You also 100% do not have to buy into Chomsky's theory of the brain to believe LLMs won't achieve AGI.

ants_everywhere

12 hours ago

We weren't talking about AGI. It sounds like that's a topic you are interested in. It's a fine topic just not the one we're discussing.

And my phrasing was wonderful and perfect.

dvfjsdhgfv

3 hours ago

> And my phrasing was wonderful and perfect.

Do you use LLMs to post on HN? I'm asking seriously.

ants_everywhere

2 hours ago

Nope.

I would appreciate it if you and the GP did not personally insult me when you have a question, though. You may feel that you know Marcus to be into one particular thing, but some of us have been familiar with his work long before he pivoted to AI.

dvfjsdhgfv

3 hours ago

> LLMs are essentially impossible

This is the worst misrepresentation of Marcus' argument I've seen so far.

ants_everywhere

2 hours ago

Nothing in my comment is a misrepresentation. It may help to look up some of the words I mention in the comment if you're unsure what they mean.

Also Gary Marcus doesn't make an "argument" in the singular. He's a researcher with decades of public work and private work.

foldr

5 hours ago

> One of the major beliefs of this view is that LLMs are essentially impossible because there's not enough information in language to learn it unless you have a special purpose language-learning module built into the brain by evolution. This is Chomsky's "poverty of the stimulus" argument

The argument is that there is not enough information available to a child to do this. So even if we grant the dubious premise that LLMs have learned to speak languages in a manner analogous to humans, they are not a counterexample to Chomsky’s poverty of the stimulus argument because they have been trained on a vast array of linguistic data that is not available within a single human childhood.

If you want to better understand Chomsky's position, it's easiest to do so in relation to other animals. Why are other intelligent animals not able to learn human languages? The rather unsurprising answer, in Chomsky's view, is that humans have a built-in linguistic capacity, rather in the way that e.g. bats have a built-in capacity for echolocation. The claim that bats have a built-in capacity for echolocation is not refuted by the existence of sonar. Likewise, our ability to construct machines that mimic some aspects of human linguistic capacity does not automatically refute the hypothesis that this is a specific human capacity absent in other animals.

Imagine if sonar engineers were constantly shitting on chiropterologists because their silly theory of bats having evolved a capacity for echolocation has now been refuted by human-constructed sonar arrays. The argument makes so little sense that it’s difficult to even imagine the scenario. But the argument against Chomsky from LLMs doesn’t really make any more sense, on reflection.

Chomsky hasn't helped his case in recent years by tacking his name on some dumb articles about LLMs that he didn't actually write. (A warning to us all that retirement is a good idea.) So I don't blame people who are excited about LLMs for seeing him as a bit of a rube, but the supposed conflict between Chomsky and LLMs is entirely artificial. Chomsky is (was) trying to do cognitive science. People experimenting with LLMs are not, on the other hand, making any serious effort to study how humans acquire language, and so have very little of substance to argue with Chomsky about. They are just taking opportunistic pot shots at a Big Name because it's a good way to get attention.

For the record, Chomsky himself has never made any very specific claims about a dedicated module in the brain or about the evolutionary origins of human linguistic capacity (except for some skeptical comments on it being a gradual adaptation).

ants_everywhere

2 hours ago

There was a large literature on language acquisition prior to the invention of LLMs that showed that Chomsky's argument likely wasn't correct. This is in addition to the fact that he significantly underestimated the amount of linguistic input children receive.

There's too much to hash out here on HN. You can try to save the LAD argument by a strategic retreat, but it's been in retreat for decades now and keeps losing ground. It's clear that neural networks can learn the rules of grammar without specifically baking grammatical hierarchies into the network. You can retreat to saying it's about setting hyperparameters or priors, but even the evidence for that is marginal.

There are certainly features of the brain that make language learning easier (such as size) but POS doesn't really provide anything to guide research and is mostly of historical interest now. It's a claim that something is impossible, which is a strong claim. And the evidence for it is poor. It's not clear it would have any adherents if it were proposed anew today. And this is all before LLMs enter the picture.

The research from neuroscience and learning theory and machine learning etc have all pointed toward a view of the brain as significantly different from the psychological nativism view. When many prominent results in the nativist camp failed to replicate during the replicability crisis, most big name researchers pivoted to other fields. Marcus is one of the remaining vocal holdouts for nativism. And his beliefs about AI align very closely with all the old debates about statistical learning models vs symbolic manipulation etc.

> Why are other intelligent animals not able to learn human languages?

Animals and plants do communicate with each other in structured ways. Animals can learn to communicate with humans. This is one of those areas where you can choose to try to see the continuities with communication or you can try to define a vision of language that isolates human language as completely set apart. I think human language is more like an outlier in complexity to the communication animals do rather than a fundamentally different thing. In that sense there's not much of a mystery given brain size, number of neurons, sociality etc.

> The argument is that there is not enough information available to a child to do this

Yes, but children are the humans who learn language in the typical case, so you can replace "child" with "human", especially with all the hedging I did in my first post (e.g. "essentially"). As I said above, Chomsky is known to have underestimated the amount of input babies receive. Babies hear language from the moment they're born until they learn to speak. Also, as a parent, I often correct grammatical and other mistakes as toddlers learn to talk. Other parents do the same. Part of the POS argument is based on the premise that children don't get their grammar corrected often.

foldr

an hour ago

Yes, lots of people have argued that Chomsky is wrong about various things for various reasons and at various times. The point of my post was not to get into all of those historical arguments, but to point out that recent developments in LLMs are largely irrelevant. But I'll briefly respond to some of your broader points.

You mention 'neural networks' learning rules of grammar. Again, this is relevant to Chomsky's argument only to the extent that such devices do so on the basis of the kind of data available to a child. Here you implicitly reference a body of research that's largely non-existent. Where are the papers showing that neural networks can learn, say, ECP effects, ACD, restrictions on possible scope interpretations, etc. etc., on the basis of a realistic child linguistic corpus?

Your 'continuities' argument cuts both ways. There are continuities between human perception and bat perception and between bat communication and human communication; but we still can't echolocate, and bats still can't hold conversations. The specifics matter here. Is bat echolocation just a more complex variant of my very slight ability to sense whether I'm in an enclosed location or an outdoor space when I have my eyes closed? And is the explanation for why bats but not humans have this ability that bat cognition is just more sophisticated than human cognition? I'm sure neural networks can be trained to do echolocation too. Humans can train an artificial network to do echolocation, therefore it can't be a species-specific capacity of bats. << This seems like a terrible argument, no?

Poverty of the stimulus arguments don't really depend at all on the assumption that parents don't correct children, or that children ignore such corrections. If you look at specific examples of the kind of grammatical rules that tend to interest generative linguists (e.g. ACD, ECP effects, ...) then parents don't even know about any of these, and certainly aren't correcting their children on them.

Chomsky has never made any specific estimate of the 'amount' of input that babies receive, so he certainly can't be known to have underestimated it. Poverty of the stimulus arguments are at heart not quantitative but rather are based on the assumption that certain specific kinds of data are not likely to be available in a child's input. This assumption has been validated by experimental and corpus studies (e.g. https://sites.socsci.uci.edu/~lpearl/courses/readings/LidzWa...)

> Babies hear language from the moment they're born until they learn to speak

I can assure you that this insight is not lost on anyone who works on child language acquisition :)

ants_everywhere

5 minutes ago

A realistic child linguistic corpus for a 2 year old starting to form sentences would be about 15 million words over the course of their lifetime. Converted to LLM units that's maybe about 20 million tokens. There are small language models trained on sets that small.
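For what it's worth, a rough back-of-envelope behind that estimate (the words-per-day rate and the tokens-per-word ratio are my own ballpark assumptions, not measurements from any study):

    # Rough estimate of the linguistic input a 2-year-old has heard.
    words_per_day = 20_000                       # assumed exposure rate (ballpark)
    days = 2 * 365
    words_heard = words_per_day * days           # ~14.6 million words
    tokens_per_word = 1.3                        # rough BPE tokens-per-word ratio
    tokens = int(words_heard * tokens_per_word)  # ~19 million tokens
    print(f"{words_heard:,} words, roughly {tokens:,} tokens")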

Some LLMs are specifically trained on child-focused small corpora in the 10 million range, e.g. BabyLM: babylm.github.io.

Keep in mind that before age 2, children are using individual words and getting much richer feedback than LLMs are.

Humans can and do echolocate: https://en.wikipedia.org/wiki/Human_echolocation. There are also anatomical differences, not cognitive ones, that affect abilities like echolocation. For example, the positioning and frequency response of sensors (e.g. ears) can affect echolocation performance.

pols45

12 hours ago

Cause he hasn't built anything or found a job since he sold his last company ages ago.

Supermancho

12 hours ago

Respect can be earned through many means, not just through goalposts set by financiers.

pols45

12 hours ago

Gary Marcus is not changing the story financiers are telling each other about AI. He has been telling the same story without making a dent in their stories. And that is because he is not in the arena. What he says, does and thinks doesn't matter.

anothernewdude

12 hours ago

So upon finding success, he has escaped the struggle that people are forced into against their will. How unreasonable of him.

nextworddev

13 hours ago

He’s playing the Nouriel Roubini game (another NYU adj prof).

Last time I checked Nouriel was partying up in the Hamptons so being a permabear is lucrative.

Neywiny

13 hours ago

There was also some woman (maybe a high-up exec, I don't know their full roster) in one of those live-audience interviews saying they were looking towards a government backstop on the loans. They know they have no way of ponying up $1.4T. That's an insane amount of money. Honestly, if more investing and startups were "we have X front-runners; if he dies, he dies", I think we'd be in a better spot. Maybe don't invest in capturing-helium-in-a-colander.ai just because it ends in "AI".

mitthrowaway2

13 hours ago

I think you must mean Sarah Friar, CFO of OpenAI.

Neywiny

13 hours ago

Most likely. Who better to know their financial status is fried than the CFO?

bpodgursky

12 hours ago

You can criticize OpenAI for many things but this was blatantly misquoted.

They were talking about backstops for chips, to get fabs constructed in the US. If anyone else had said this, it would have been considered a great idea; both Republicans and Democrats talked about the same thing to get TSMC production into the US. But everyone pretended that OpenAI was talking about data centers in this context. They weren't.

Neywiny

3 hours ago

My goodness, is there a huge difference between TSMC and OpenAI. TSMC moving to the US is a benefit to all US fabless companies, and to fabless companies that want to do business in any sensitive US industry, and they have an incredible track record as an industry leader. OpenAI's chip manufacturing is much riskier, with much less payoff. I wouldn't want the government to pay a penny towards it. Especially because the estimated cost for a leading-edge chip is maybe $500 million: if they can commit to $1.4T, they can do it on their own. It's almost an omen that they don't feel comfortable spending that without a guarantor.

siliconc0w

12 hours ago

That aggro response to a perfectly reasonable question, "if you don't want your shares, we'll find you a buyer", instantly reminded me of that Bernie Madoff movie.

chaosprint

13 hours ago

OpenAI's Bailout Blunder: How a CFO's Words Ignited a Firestorm

https://entropytown.com/articles/2025-11-06-openai-cfo/

"If you want to sell your shares, I'll find you a buyer." OpenAI and Microsoft Detail Landmark Partnership, Navigate Future of AI and Compute

https://founderboat.com/interviews/2025-11-01-openai-sam-sat...

Crazy sequence of events in a week...

lumost

13 hours ago

The "I don't need to answer your simple questions on profit and loss statements" sentiment was odd. Likely odd enough to spur institutional investors to dig deeper or attract short interest.

superconduct123

13 hours ago

I don't understand: what prompted OpenAI recently to need this $1.4T investment?

an0malous

12 hours ago

All of Sam's shenanigans go back to that one popular post on here many months ago about how AI has no moat. LLMs are a commodity; Chinese companies are just releasing them for free. Sam has gone all in on OpenAI and needs to secure his company. He knows it could be another decade until they discover an innovation on par with transformers, and there's absolutely no way they can go from burning tens of billions of dollars a year to profitable by selling a rapidly commoditized technology.

sumedh

12 hours ago

LLMs are a commodity, but ChatGPT is a brand; for most people, AI means ChatGPT. They are not going to use Chinese models.

ChatGPT is also building other products/brands like Sora to capture more mindshare.

Linux is free and yet people use Windows.

an0malous

11 hours ago

All this transformer tech is a commodity, no one will care about the brand if an alternative is free so it’s a race to the bottom.

Yes people use Windows. Go look up the history of how that came to be, it had nothing to do with their brand. Sam is looking for his IBM.

woooooo

11 hours ago

An operating system is a lot stickier than a website URL.

Google has stayed on top for 25 years because they're better and free. LLM providers will have to compete on price doing expensive inference.

jhanschoo

6 hours ago

If a foundation model provider allows fine-tuning on confidential data but the result is locked into the platform, wouldn't that be extremely sticky?

sumedh

10 hours ago

Google stayed on top because of business deals with Apple and Firefox, and then they got the biggest marketing tool of all: Chrome.

ChatGPT can become an ad company just like Google, probably bigger than Google.

tarsinge

8 hours ago

I don't buy this argument about brand; tech history has shown customers will switch overnight if a better option appears and there are no network effects or ecosystem lock-in. BlackBerry had a brand too.

Gigachad

13 hours ago

They are going all in, promising so many deals to so many companies that if OpenAI fails, the entire US economy will explode. Expecting the government will bail them out to prevent such a disaster.

therein

12 hours ago

That would do nothing to prevent such a disaster. It would guarantee it.

omnimus

6 hours ago

That's not really true. It may be a legit strategy. If they realize they overhyped the whole industry, and everyone is tied to them, and they have strong political allies… if you realize it's all going to crumble, you might just see a bailout as the only viable option, so you make yourself even bigger, too big to fail.

If it turns out like this, it's extremely cynical, and one would wonder how Altman could stay out of prison (I am sure he will not go to prison).

goatlover

13 hours ago

Billionaires and corporations too big to fail are bad for democracy. They have an outsized impact on society and the government.

JumpCrisscross

13 hours ago

> what prompted OpenAI recently to need this 1.4T investment?

Capital denial to competitors.

hoppp

13 hours ago

I think every company would like to have that. OpenAI is in a hyped up position that it can get it, so they go for it.

I don't buy that they can create AGI by investing trillions in training models and infrastructure.

If you ask me, this is just more money spent on pollution.

The need to replace humans to be profitable just sounds like the end goal is to destroy the planet with datacenters and hurt people generally.

Sounds like a net negative for the planet.

ineedasername

12 hours ago

Absolutely nothing. They don't, they didn't, it's a poorly stitched together attack job.

The $1.4T amounts to a broad, nearly decade-long capex plan, not liabilities.

The loans and backstops etc. were a request, not for OpenAI, but on behalf of manufacturers of grid equipment, manufacturers that OpenAI would like the government to consider eligible for money already carved out by the AMIC national investment in chips and AI, and probably for more money as well. It's a separate group of tangential industries that weren't initially considered, so why not ask? Sure, it would help keep the wheels moving at OpenAI and in the broader AI and US semiconductor industry, but it's far away from an ask by Altman for a bailout of his company.

techblueberry

11 hours ago

Narrator: it was in fact a bailout they were asking for.

Spooky23

12 hours ago

The need to say “fuck you” to Elon and one-up him.

neuvarius

13 hours ago

For the sake of HN, can we get a rundown that isn't Gary Marcus flinging shit at people? I'd like something a bit more objective.

samrus

13 hours ago

You could just read the primary sources: the things he's shown people to have said.

The gist is Altman was trying to be slick about requesting a government bailout, got called out on his bullshit, and then decided to lie and say he never wanted a bailout.

The implication being that the bubble is getting closer and closer to popping, since even Altman is thinking about how to survive the pop.

ungreased0675

13 hours ago

Somehow people like him seem to survive just fine. Adam Neumann still has infinite money and Billy McFarland is still getting investors for various schemes.

this_user

13 hours ago

Altman definitely has enough wealth of his own to survive no matter what. But if you look at how he operates, it is pretty clear that what he really wants is power, and he is usually extremely good at getting it. But OpenAI is rapidly losing their leadership in the space, and it turns out that their product has no moat. Meanwhile, new Chinese open source SOTA models just keep coming on an almost daily basis.

NBJack

13 hours ago

Yep; it will just screw the employees, the investors (or at least those that didn't suddenly get clairvoyant and find an exit right before it hits the fan), and many, many apps relying on their APIs.

neuvarius

11 hours ago

Unfortunately the shit-flinging is recursive. I'm not looking to read more conjecture. And people are pulling an Elon v2 with Altman, with the "any day now he'll make a fatal mistake!" nonsense.

I’m not holding my breath for hot takes, but I got what I came for: sama said some stuff, the thread.

BluSyn

12 hours ago

I'm confused about the language, as "loans" to me do not equal "bailout". Equating the two seems odd, as many government incentives use loans that pay back with high interest, so governments MAKE money on those kinds of deals.

It's also clear that the 1.4T figure includes some accounting for spend that does not come directly from OpenAI (grid/power/data infra, for example). Obviously some government involvement is needed, but more at the EPA/state/local level to fast-track construction permits, more so than financial help from the Treasury.

I'm confused why this generates such sensational headlines.

tim333

6 hours ago

I'm with you on that - people use the wrong terms. Bailouts are supporting things like GM or failing banks because the government is worried about GM workers losing jobs or bank depositors losing money.

Altman's 1.4T isn't like that - it's a proposed new investment in stuff that doesn't exist yet and there would be no job losses or the like if it fails to exist. They have been talking about potential government support for the new ventures, partly to keep up with China which uses similar government support. I'm not sure if it's a good idea but it would not be a bailout, more a subsidy.

octoberfranklin

12 hours ago

This is the same kind of bullshit rationalization they used to say that the bank bailouts of 2009 weren't really bank bailouts.

They were bank bailouts.

Unsecured government loans are either bailouts, entitlements in disguise, or (usually misguided) attempts at broad economic stimulus. This definitely isn't either of the latter two.

BluSyn

11 hours ago

An unsecured government loan to a successful company to fund acceleration and growth is a "handout".

A "bailout" is what happened in 2009, in the sense the banks would literally have collapsed without it (and they probably should have).

OpenAI is not going to collapse without these loans. Huge difference.

Also for the record, not rationalizing, because I'm not in favor of either handouts or bailouts.

option

13 hours ago

This isn't a comment about Sam Altman, but given Gary's track record, why would anyone listen to him?

tim333

6 hours ago

Gary is a bit of a stopped clock that always says the same time and is right occasionally. His basic position is these neural network things are not much good, they won't do whatever the current claim is.

spiderice

13 hours ago

Care to enlighten those of us that know nothing about Gary's track record?

hiddencost

13 hours ago

He's posted some variant of this kind of anti-AI piece roughly every two months for over 20 years. He's been wrong so far. Eventually he'll get lucky, but his track record is abysmal.

goatlover

13 hours ago

Has he been wrong about everything? I doubt that. It's also a logical fallacy to dismiss his current argument because he's made wrong arguments before. What he's saying about Altman lying is true or false independent of anything else he's said prior.

prodigycorp

12 hours ago

The point here is that his signal-to-noise ratio is very poor, and the reason people upvote him is that they agree with the headlines.

palmotea

8 hours ago

> Gary Marcus was blocked on X by Kara Swisher in November 2022 for saying that the board did not trust Sam Altman. Marcus remains blocked by Ms. Swisher to this day.

There was an Ezra Klein podcast a while back where Swisher (a journalist) made it sound like she'd been buddy-buddy with Elon Musk for some time. What is/was her relationship to Altman?

JumpCrisscross

13 hours ago

“Altman, sensing that he had massively blundered…”

Altman is in bed with Trump. He isn't all in. But Tuesday's evidence of an electoral shift makes Friar's comments exceptionally ill-timed.

rhetocj23

14 hours ago

I personally called it within my circle that when Friar was hired, she was going to commit a tremendous blunder. It may have been innocent, but the trend is now reversing in regard to its hold on consumer mindshare.

Someone I know who has been an absolutely staunch advocate of ChatGPT for the best part of the past 1.5 years suddenly changed their tune earlier this week. That is my signal that things are turning south.

There will be a reprieve for some time, with increasing revenue. But eventually that will jam to a halt and all the doubts will intensify.

an0malous

13 hours ago

The VCs and bloggers have also shifted tone and are hedging to save themselves from embarrassment. It used to be "AGI is just around the corner" and now it's "Just because we haven't figured it out yet doesn't mean we never will" (Andreessen) or "Actually bubbles are good" (Stratechery).

Gary called it a long time ago but I think the Dwarkesh and Karpathy podcast is when the shift started.

rhetocj23

13 hours ago

Something to note about this craze and Crypto is that the world is full of bozos who do not think deeply whatsoever.

an0malous

13 hours ago

I’d also add these lessons I’ve personally learned:

- Greed is blinding even to intelligent people, especially greedy people in groups

- Society is incredibly vulnerable to lying; we mostly rely on the disincentive that people usually feel bad about it, but the ones who don't can get away with anything.

- There’s really only a subtle difference between many successful startups and Ponzi schemes. Sam Altman’s grift is only one level more sophisticated than SBF or Elizabeth Holmes

goatlover

13 hours ago

- Thinking the present is somehow special and unlike similar past events or trends.

pinnochio

13 hours ago

So many goons on here who keep flagging submissions like these, or trying to discredit them with superficial criticism ("he keeps saying the same thing!"). It's sad this behavior doesn't get modded.

Edit: lol at the downvotes. You're just proving my point, goons.

rhetocj23

13 hours ago

I pity them. They are either not as far ahead in life and understanding as they want to be, or they are supremely deluded.

I just wish the bozos actually replied to the post instead of hiding behind a button.

Spooky23

12 hours ago

The Ben Thompson take, attempting to find lemonade in the lemons, is the most terrifying: the bubble will pop and leave us with… excess electric generation capacity? (And bankrupt power producers?)

We truly live in the dumbest timeline.

Animats

12 hours ago

Loan guarantees? Where did that come from?

The AI bubble must really be about to pop.

octoberfranklin

12 hours ago

s/pants/hair/

Somebody please tell me how to short this. I'm going all in.

rsync

12 hours ago

You buy puts on TQQQ.
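A minimal sketch of the put math, with made-up numbers (the strike, expiry price, and premium below are hypothetical; this is not advice):

    # P&L at expiry of a long put: max(strike - spot, 0) minus the premium paid.
    def put_pnl(strike: float, spot_at_expiry: float, premium: float) -> float:
        return max(strike - spot_at_expiry, 0.0) - premium

    print(put_pnl(strike=100, spot_at_expiry=60, premium=5))   # 35.0/share if it pops
    print(put_pnl(strike=100, spot_at_expiry=110, premium=5))  # -5.0: you lose the premium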