Is Sora the beginning of the end for OpenAI?

158 points, posted 13 hours ago
by warrenm

175 Comments

zerosizedweasle

13 hours ago

"Whether Sora lasts or not, however, is somewhat beside the point. What catches my attention most is that OpenAI released this app in the first place.

It wasn’t that long ago that Sam Altman was still comparing the release of GPT-5 to the testing of the first atomic bomb, and many commentators took Dario Amodei at his word when he proclaimed 50% of white collar jobs might soon be automated by LLM-based tools."

That's the thing, this has all been predicated on the notion that AGI is next. That's what the money is chasing, and why it has sucked in astronomical investments. It's cool, but that's not why Nvidia is a multi-trillion-dollar company. It has that value because it was promised to be the brainpower behind AGI.

Karrot_Kream

12 hours ago

What signals have you seen that point to investment being predicated around AGI? Boosting Nvidia stock prices could also be explained by an expectation of increased inference usage by office workers which increases demands for GPUs and justifies datacenter buildouts. That's a much more "sober" outlook than AGI.

In fact a fun thing to think about is what signals we could observe in markets that specifically call out AGI as the expectation as opposed to simple bullish outlook on inference usage.

port3000

12 hours ago

"Boosting Nvidia stock prices could also be explained by an expectation of increased inference usage by office workers which increases demands for GPUs and justifies datacenter buildouts"

AI is already integrated into every single Google search, as well as Slack, Notion, Teams, Microsoft Office, Google Docs, Zoom, Google Meet, Figma, Hubspot, Zendesk, Freshdesk, Intercom, Basecamp, Evernote, Dropbox, Salesforce, Canva, Photoshop, Airtable, Gmail, LinkedIn, Shopify, Asana, Trello, Monday.com, ClickUp, Miro, Confluence, Jira, GitHub, Linear, Docusign, Workday

.....so where is this 100X increase in inference demand going to come from?

Oh and the ChatGPT consumer app is seeing slowing growth: https://techcrunch.com/2025/10/17/chatgpts-mobile-app-is-see...

Karrot_Kream

10 hours ago

Integrations and inference costs aren't necessarily 1:1. Integrations can use more AI, reasoning models can cause token explosion, Jevons Paradox can drive more inference tokens, big businesses and government agencies (around the world, not just the US) can begin using more LLMs. I'm not sure integrations are that simple. A lot of integrations that I know of are very basic integrations.

> Oh and the ChatGPT consumer app is seeing slowing growth: https://techcrunch.com/2025/10/17/chatgpts-mobile-app-is-see...

While I haven't read the article yet, if this is true then yes, this could be an indication that consumer-app-style inference (ChatGPT, Claude, etc.) is waning, which would put more pressure on industrial/tool inference uses to shoulder the costs.

hyperpape

10 hours ago

My experience suffering with JIRA daily is that the AI is useless and fairly easy to ignore. If it were actually helpful, I could imagine using it more, and the costs would increase proportionately.

bfLives

6 hours ago

On the other hand, I’ve found the integration in Confluence quite helpful, particularly for making sense of acronyms.

mola

12 hours ago

I think the motivation for someone like Altman is not AGI, it's power and influence. And when he wields billions he has power, it doesn't really matter if there's AGI coming.

hu3

6 hours ago

Yep, he just wants to become too big to fail at this point.

I view OpenAI like a pyramid scheme: taking in increasing amounts of money to pursue ever-growing promises that can be dangled like a carrot to the next investor.

If you owe investors $100 million, that's your problem. If you owe investors $100 billion, that's their problem.

tmaly

11 hours ago

We were promised AGI and all we are getting is Bob Ross coloring on the walls of a Target store.

The app is fun to use for about 10 minutes, then that's it.

Same goes for Grok imagine. All people want to do is generate NSFW content.

What happened to improving the world?

qwery

10 hours ago

I apologise for talking past the point you're making, but, Bob Ross was a human being, you know, with thoughts and stuff. How could any of these AI toys possibly compare?

I would love to have Bob Ross, wielding a crayon, add some happy little trees to the walls of a Target.

cratermoon

13 hours ago

What was predicted to be next: AGI

What we got next: porn

quantified

12 hours ago

Porn has driven everyday tech: online payment systems, broadband adoption.

Porn (visual and written erotic expression) has been a normal part of the human experience for thousands of years, across different religions, cultures, and technological capabilities. We're humans.

There will always be a market for it, wherever there is a mismatch between desire for and access to sexual activity.

Generate your own porn is definitely a huge market. Sharing it with others, and then the follow-on concern of what's in that shared content, could lead to problems.

noir_lord

12 hours ago

> There will always be a market for it, wherever there is a mismatch between desire for and access to sexual activity.

Attractive people in sexually fulfilling relationships still look at porn.

It's just human.

bossyTeacher

10 hours ago

>Attractive people in sexually fulfilling relationships still look at porn.

How do you know those relationships are "sexually fulfilling"?

skylurk

9 hours ago

I, uh, have a friend, who tells me watching porn with their partner once in a while, can be pretty hot.

jrflowers

8 hours ago

> How do you know those relationships are "sexually fulfilling"?

You can either believe what people report and the clearly-stated position on erotic material of the American Association of Sexuality Educators, Counselors and Therapists (1), or you can just imagine in your head what you think other people's sex lives are like and believe whatever you come up with.

1. https://www.aasect.org/our-mission.html

quantified

9 hours ago

Works for me. You end up travelling for a week away from your partner, they're sick for a while, etc.

alganet

an hour ago

I don't see how that explanation helps.

Promise: AI will change the world.

Delivery: 1000 year old vice.

rchaud

11 hours ago

This is a meme I see online often (and in the show Silicon Valley), but I don't think it holds up in practice.

Re: payment systems, Visa and MC are notoriously unfriendly to porn vendors, sending them into the arms of crooked payment processors like Wirecard. Paypal grew to prominence because it was once the only way to buy and sell on Ebay. Crypto went from nerd hobby to speculative asset, skipping the "medium of exchange for porn purchases" phase entirely.

As for broadband adoption, it's as likely to have occurred for MP3 piracy and being 200X faster than dialup as it was for porn.

c0balt

12 hours ago

To be very fair here, long before GPT-5, porn was already being produced with Stable Diffusion (and other open models). Civitai in particular was an open playground for this, with everything from NSFW LoRAs and prompts to fine-tuned models.

I had to work for a bit with SDXL models from there, and the amount of porn on the site, before the recent cleanse, was astonishing.

droptablemain

12 hours ago

to be fair we also got Stephen Hawking bungee jumping | snowboarding | wrestling | drag racing | ice skating | bull-fighting | half-pipe

blibble

12 hours ago

I can't imagine the republican party is going to be particularly happy about AI being used for mass porn generation

overfeed

5 hours ago

The party of grindr-crashing sexual repression[1] outwardly denounces such depravity, but inwardly rejoices at all the shameful images they intend to generate.

1. Red states are way ahead on porn consumption, based on past annual reports by Aylo.

neonnoodle

12 hours ago

prompt records = mass blackmail generation

layer8

11 hours ago

At least the valuations make sense now. ;)

standardUser

10 hours ago

AGI is like L5 automated driving - academic concepts that have no bearing on the ability of these technologies to transform the economy.

hollerith

10 hours ago

And no bearing on the ability of these technologies to thoroughly screw us.

xwowsersx

12 hours ago

This take feels like classic Cal Newport pattern-matching: something looks vaguely "consumerish," so it must signal decline. It's a huge overreach.

Whether OpenAI becomes a truly massive, world-defining company is an open question, but it's not going to be decided by Sora. Treating a research-facing video generator as if it's OpenAI's attempt at the next TikTok is just missing the forest for the trees. Sora isn't a product bet, it's a technology demo or a testbed for video and image modeling. They threw a basic interface on top so people could actually use it. If they shut that interface down tomorrow, it wouldn't change a thing about the underlying progress in generative modeling.

You can argue that OpenAI lacks focus, or that they waste energy on these experiments. That's a reasonable discussion. But calling it "the beginning of the end" because of one side project is just unserious. Tech companies at the frontier run hundreds of little prototypes like this... most get abandoned, and that's fine.

The real question about OpenAI's future has nothing to do with Sora. It's whether large language and multimodal models eventually become a zero-margin commodity. If that happens, OpenAI's valuation problem isn't about branding or app strategy, it's about economics. Can they build a moat beyond "we have the biggest model"? Because that won't hold once open-source and fine-tuned domain models catch up.

So sure, Sora might be a distraction. But pretending that a minor interface launch is some great unraveling of OpenAI's trajectory is just lazy narrative-hunting.

BurpIntruder

2 hours ago

It seems they are going to try to maximize their installed base, build the infrastructure, and try to own everything in between, whether it’s LLM or some other architecture that arises. Owning data centers and an installed base sounds great in theory, but it assumes you can outbuild hyperscalers on infrastructure and that your users will stick around. Data centers are a low margin grind and the installed base in AI isn’t locked in like iPhones. Apple and Google still control the endpoints, and I think they’ll ultimately decide who wins by what they integrate at the OS level.

impossiblefork

9 hours ago

There are also interesting things one could do with models like Sora, depending on how it actually performs in practice: prompting to segment, for example. And, if it's fast enough, the thing could very possibly become a foundation for robotics.

softwaredoug

6 hours ago

I don't think that's fair.

ChatGPT clearly is "for consumers". Whereas Sora is a kind of enshittification to monetize engagement. It's right to question the latter.

bossyTeacher

10 hours ago

I agree. My bet is that OpenAI will not fulfill its mission of developing AGI by 2035, and I would be surprised if they ever did. As much as they might want to, there are only so many dreams you can whisper into rich people's ears before they tell you to go away. And without rich people's money, OpenAI will fall like a house of cards. The wealthy won't have infinite patience.

schnable

12 hours ago

OpenAI is making a wild number of product plays at once, trying to leverage the value of the frontier model, brand value, and massive number of eyeballs they own. Sora is just one of many. Some will fail and maybe some will succeed.

It seems true that no company has used frontier models to create a product with business value commensurate with the cost it takes to train and run them. That's what OpenAI is trying to do with Sora, and with Codex, Apps, "Agent" flows, etc. I don't think there's more to read into it than that.

FloorEgg

12 hours ago

On some level they know that LLMs alone won't lead to AGI so they have to take a shotgun approach to diversify, and also because integrating some parts of all these paths is more likely to lead to the outcome they want than going all in on one.

Also because they have the funding to do it.

Reminds me a bit of the early days of Google, Microsoft, Xerox, etc.

This is just what the teenage stage of the top tech startup/company in an important new category looks like.

mortsnort

8 hours ago

The massive cost of this product is unique though (not even counting the copyright lawsuits/settlements coming). I can't think of any side projects that require this level of investment.

furyofantares

12 hours ago

> It seems true that no company has used frontier models to create a product with business value commensurate with the cost it takes to train and run them.

Anthropic has said that every model they've trained has been profitable. Just not profitable enough to pay to train the next model.

I bet that's true for OpenAI's LLMs too, or would be if they reduced their free tier limits.

truelson

12 hours ago

It's to their benefit to try everything right now. And quickly.

xnx

10 hours ago

> OpenAI is making a wild number of product plays at once

It's similar to the process of electrification. Every existing machine/process needed to be evaluated to see if electricity would improve it: dish washing, clothes drying, food mixing, etc.

OpenAI is not alone. Every one of their products has an (sometimes superior) equivalent from Google (e.g. Veo for Sora) and other competitors.

bossyTeacher

10 hours ago

>It seems true that no company has used frontier models to create a product with business value commensurate with the cost it takes to train and run them. That's what OpenAI is trying to do with Sora, and with Codex, Apps, "Agent" flows, etc. I don't think there's more to read into it than that.

It makes them look desperate though. Nothing like starting tons of services at once to show you have a vision

kbos87

12 hours ago

I could see Sora having a significant negative impact on short form video products like TikTok if they don’t quickly and accurately find a way to categorize its use. A steady stream of AI generated video content hurts the value prop of short form video in more than one way… It quickly desensitizes you and takes the surprise out that drives consumption of a lot of content. It also of course leaves you feeling like you can’t trust anything you see.

kulahan

12 hours ago

Do people on the dopamine drip really care how real their content is? Tons and tons of it is staged or modified anyways. I'm not sure there's anything Real™ on TikTok anyways.

kbos87

3 hours ago

I think a lot of them actually do. It's easy to see TikTok users as mindless consumers, but the more you consume the more you develop a taste for unique content. Over the past few years the content that seems to truly do well at a global scale very often has markers of authenticity. Once something becomes easy to produce it becomes commonplace and you become sick of it quickly.

bemmu

11 hours ago

I find Sora refreshing in that I don't have to worry about being tricked by something fake. It's just a fun multiplayer slopfest.

duxup

9 hours ago

It certainly seems there are some who don't care.

You always get the "who cares if it is fake" folks; even on reddit, people will point out something is AI and inevitably others reply "who cares".

But I'm not sure how many people that is or what kind of content they care or don't care about.

janwl

9 hours ago

I mean, it's entertainment content. It's like saying a movie is fake, they are actors playing roles. Of course. Who cares?

duxup

9 hours ago

Depends on if the content is expected to be real or not.

techblueberry

3 hours ago

I mean, I mostly prefer documentaries to fiction.

wobfan

12 hours ago

Thought the same. The human-generated content is just as brainless as the AI-generated slop. People who watched the former will also watch the latter. This won't change a lot, I think.

huevosabio

12 hours ago

Didn't explicitly think about this, but you're right. I already dismiss off the bat a lot of surprising video content because I don't trust it.

ToucanLoucan

12 hours ago

I mean, this is basically already status quo for YouTube Shorts. Tons and tons of shorts are AI-voice over either AI video or stock video covering some pithy thing in no actual depth, just piggybacking off of trending topics. And TikTok has had the same sort of content for even longer.

The "value" of short video content is already somewhat of a poor value proposition for this and other reasons. It lets you just obliterate time which can be handy in certain situations, but it also ruins your attention span.

techblueberry

3 hours ago

When fast take-off starts to be evident later next year, it’s the fact that OpenAI has built its hoard on a diverse set of product lines with a broad surface area to operate on that will be the differentiator allowing them to lead on the exponential before anyone else.

ilickpoolalgae

12 hours ago

> It’s unclear whether this app will last. One major issue is the back-end expense of producing these videos. For now, OpenAI requires a paid ChatGPT Plus account to generate your own content. At the $20 tier, you can pump out up to 50 low-resolution videos per month. For a whopping $200 a month, you can generate more videos at higher resolutions. None of this compares favorably to competitors like TikTok, which are exponentially cheaper to operate and can therefore not only remain truly free for all users, but actually pay their creators.

fwiw, there's no requirement to have a subscription to create content.

bilekas

12 hours ago

I got the feeling when this was released that it was just another metric to justify further investment. They were guaranteed to have a lot of users, so they can turn around and say "well, we have 2 huge applications and we're just getting started." As we've seen, investors don't care too much about product quality, just large numbers.

softwaredoug

6 hours ago

The counter-argument is that OpenAI has to make fairly bold moves.

Social was _already_ becoming the domain of AI-generated content. In the benign sense, there's been social content of people sharing their silly AI creations since early DALL-E. It's a good idea to make a social app that's actually _about that_, because you can remix and play with the content in a novel way.

The first Sora was sort of already going in this direction.

1vuio0pswjnm7

9 hours ago

"It wasn't that long ago that Sam Altman was still comparing the release of GPT-5 to the testing of the first atomic bomb, and many commentators took Dario Amodei at his word when he proclaimed 50% of white collar jobs might soon be automated by LLM-based tools.

A company that still believed that its technology was imminently going to run large swathes of the economy, and would be so powerful as to reconfigure our experience of the world as we know it, wouldn't be seeking to make a quick buck selling ads against deep fake videos of historical figures wrestling. They also wouldn't be entertaining the idea, as Altman did last week, that they might soon start offering an age-gated version of ChatGPT so that adults could enjoy"

freefaler

9 hours ago

They might be forced to do so because the current inference costs aren't really covered by the $20 monthly fee. Who knows what they have promised to investors, and the real cashflow is hard to be certain about given the circular nature of cross-investing between the biggest players.

f33d5173

8 hours ago

> wouldn’t be seeking to make a quick buck selling ads against deep fake videos

This isn't a money-making venture for them, and you basically admitted as much. They no doubt poured massive amounts of money into developing this and have little hope of earning it back soon. This is an attempt to keep up with other AI companies also developing video models, in order not to look behind to investors. Making it available to users is similarly about increasing active user counts in order to look more successful. If people incidentally get off to it, that's not their concern.

qustrolabe

an hour ago

That's a funny title to give to OpenAI's side gig that went to the top of the download charts in days.

mentalgear

12 hours ago

The fact that OpenAI is pushing Sora, and Altman now even hinting at introducing "erotic roleplay"[0], makes it obvious: OpenAI has stopped being a real AI research lab. Now they're just another desperate player in a no-moat market, scrambling to become the primary platform of this hype era and lock users into their platform, just like Microsoft and Facebook did before in the PC and social eras.

[0] https://www.404media.co/openai-sam-altman-interview-chatgpt-...

gilfoy

12 hours ago

Why is it one or the other? They have enough money to do both.

mentalgear

12 hours ago

But if you've followed them, they have been focusing only on product for the last 2 years. The grand GPT-5, and the scaling laws from which all their LLM AGI hopes originated, turned out to be a dud.

sixothree

12 hours ago

The amount of animal abuse videos I've seen is a bit disturbing. It only demonstrates how careless they have been, possibly intentionally. I know people on HN have been describing the various reasons why OpenAI has not been a good player, but seeing it first-hand is visceral in a way that makes me concerned about them as a company.

Sohcahtoa82

12 hours ago

I'm more concerned about Sora (and video-generating AI in general) being the final pour that cements us into our post-truth world.

People will be swayed by AI-generated videos while also being convinced real videos are AI.

I'm kinda terrified of the future of politics.

hombre_fatal

12 hours ago

The problem is that we're already post truth.

Just consider how a screenshot of a tweet or made-up headline already spreads like a wildfire: https://x.com/elonmusk/status/1980221072512635117

Sora involves far more work than what is required to spread misinfo.

Finally, people don't really care about the truth. They care about things that confirm their world view, or comfortable things. Or they dismiss things that are inconvenient for their tribe and focus on things that are inconvenient for other tribes.

nothrabannosir

12 hours ago

> Finally, people don't really care about the truth.

That same link has two “reader notes” about truth.

The lie is halfway around the world etc., but that can also be explained by people's short-term instincts and reaction to outrage. It's not mutually exclusive with caring about truth.

Maybe I’m being uncharitable — did you mean something like “people don’t care about truth enough to let it stop them from giving into outrage”? Or..?

mat_b

12 hours ago

> Finally, people don't really care about the truth. They care about things that confirm their world view. Or they dismiss things that are inconvenient for their tribe and focus on things that are inconvenient for other tribes.

People have always been this way though. The tribes are just organized differently in the internet age.

LexiMax

12 hours ago

I strongly suspect future generations are going to look back on the age of trying to cram the entire world into one of several shared social spaces and say "What were those idiots thinking?"

mcphage

9 hours ago

In our defense (slightly), it was never really possible before, so we didn't previously have an opportunity to learn what a civilization-shatteringly bad idea it was.

raw_anon_1111

11 hours ago

Oh well, I’ll put it out there. If people cared about verified provable truths, religion of any kind wouldn’t exist.

philipallstar

8 hours ago

> If people cared about verified provable truths, religion of any kind wouldn’t exist.

Can you provide a verified proof of this statement please?

raw_anon_1111

8 hours ago

Really? Everything about religion is fantasy and unprovable. Unless you believe that the earth is only 6,500 years old, created in 7 days, and that a few centuries later someone built a boat that took two of each animal in the entire world to save them.

Then fast forward to a man being born from a virgin that rose from the dead three days after being crucified.

marshfarm

11 hours ago

We're probably post-narrative and post-lexical (words) but haven't become aware of what to possibly update these tools with. Post-truth is an abstraction rooted in the arbitrary.

Reality is specific: actions, materials. Words and language are arbitrary; they're processes, and they're simulations. They don't reference things, they represent them in metaphors. So sure, they have "meaning", but those meanings reduce the specifics of reality, which carry many times the possibility of meaning, to linearity, to cause and effect. That's not conforming to the reality that exists; that's severely reducing, even dumbing down, reality.

AnimalMuppet

11 hours ago

There is a reality which exists. Words have meaning. Words are more or less true as the meaning they convey conforms more or less well to the reality that exists. So no, truth is not rooted in the arbitrary. Quite the opposite.

Or at least, words had meaning. As we become post-lexical, it becomes harder to tell how well any sequence of words corresponds to reality. This is post truth - not that there is no reality, but that we no longer can judge the truth content of a statement. And that's a huge problem, both for our own thought life, and for society.

FloorEgg

12 hours ago

Assuming this is all true, what's the most optimistic view you can take looking ~20 years out?

How could all of this wind up leading to a much more fair, kind, sustainable and prosperous future?

Acknowledging risks is important, but where do YOU want all this to go?

Eisenstein

12 hours ago

As adults already, we grew up with things that are either no longer relevant or trigger the wrong responses from our heuristics.

But the kids who grow up with this stuff will just integrate it into their lives and proceed. The society which results from that will be something we cannot predict, as it will be alien to us. Whether it will be better or not -- probably not.

Humans evolved to spend most of their time with a small group of trusted people. By removing ourselves from that we have created all sorts of problems that we just aren't really that equipped to deal with. If this is solvable or not has yet to be seen.

FloorEgg

10 hours ago

I don't disagree with any points you made, but I do find it interesting you refused a prompt to imagine a better future, articulate your wants, and practice optimism. That's your choice, but a telling one.

Eisenstein

4 hours ago

Treating people like children is patronizing and I implore you to stop doing it.

FloorEgg

3 hours ago

You’re right, and I’m sorry.

The way I phrased that was patronizing. It wasn't my intention, but I see now how it comes across.

It seems to me like the attention economy's bias towards threatening, novel news is pushing everyone into a negative, cynical feedback loop, and I am trying, clumsily, to resist that. There are many real problems and many things seem to be going in the wrong direction, but I don't see how we all get ourselves out of this mess if we can't start talking about what the other side (of the despair) looks like.

I suspect that another mistake I made was the timing/context. For some reason, in the moment, I thought redirecting the cynicism at its source (a Sora thread) was a good idea. It probably wasn't. I guess there is a time and place to try and inspire hope, and this wasn't it. And judging you for not engaging with it deserves a facepalm in hindsight.

Please accept my apology, and if you think my stance itself is misguided (not just my tone and timing), I would like to understand why.

danaris

10 hours ago

It's really hard to imagine how "truth is harder to find, more people lie with impunity, and convince others that their lies are true" could have a positive outcome.

Moreover, I think it's really hard overall to imagine a better future as long as all of this technology and power is in the hands of massively wealthy people who have shown their willingness to abuse it to maintain that wealth at our expense.

The optimistic future effectively requires some means of reclaiming some of that power and wealth for the rest of us.

FloorEgg

8 hours ago

Yes, but, overwhelmingly we go where we look. Usually what is meaningful and worthwhile is hard. Also we get better at hard things when we practice doing them.

There is a concept in racing when taking a corner to "keep your eyes off the wall", and instead look where you want the car to go.

Imo the most scary part of the problems we face isn't what you or GP are talking about, it's everyone else's reactions to them. The staring at the wall while screaming or giving up, and refusing to look where you want to go.

It's harder to satisfy our wants if we can't articulate them.

danaris

8 hours ago

Well, sure—and I am, in general, a very positive person.

But there's a huge difference between (a) "given that this thing exists that seems very bad, can you imagine a way to a better future?" and (b) "can you imagine ways that this thing that seems very bad could actually be very good?"

The ways to a better future are in spite of these developments, not because of them, and I don't think it's at all helpful to act like that's not the case or be all disappointed (and, frankly, a bit condescending) at people who refuse to play along with attempts to do so.

And it's possible that (a) above is what you meant, but your wording very much sounded like (b).

FloorEgg

5 hours ago

When I read your comment the first time I felt uneasy about your (a) vs (b) framing, but didn't know how to address it head on. A while later I remembered this story told by Alan Watts. It seems relevant...

The Chinese Farmer Story

Once upon a time there was a Chinese farmer whose horse ran away. That evening, all of his neighbors came around to commiserate. They said, “We are so sorry to hear your horse has run away. This is most unfortunate.” The farmer said, “Maybe.”

The next day the horse came back bringing seven wild horses with it, and in the evening everybody came back and said, “Oh, isn’t that lucky. What a great turn of events. You now have eight horses!” The farmer again said, “Maybe.”

The following day his son tried to break one of the horses, and while riding it, he was thrown and broke his leg. The neighbors then said, “Oh dear, that’s too bad,” and the farmer responded, “Maybe.”

The next day the conscription officers came around to conscript people into the army, and they rejected his son because he had a broken leg. Again all the neighbors came around and said, “Isn’t that great!” Again, he said, “Maybe.”

The whole process of nature is an integrated process of immense complexity, and it’s really impossible to tell whether anything that happens in it is good or bad — because you never know what will be the consequence of the misfortune; or, you never know what will be the consequences of good fortune.

— Alan Watts

On a personal level, I have experienced some pretty catastrophic failures that taught me important lessons which I was able to leverage into even greater future success.

So honestly, I am fine with (a) or (b) and I think either are reasonable questions. Really all I am trying to do is encourage you to aim up and articulate that aim. I am not doing a great job, but I am trying.

FloorEgg

8 hours ago

I asked three questions. Two were about the kind of future we want. One was about how we might get there. I know the “how” question can feel overwhelming. It often does for me too, and I think about it a lot.

What I find curious is that no one has really engaged with any of these questions yet. Not even to reflect personally on why. That’s not a criticism, it’s an observation. I think it’s worth asking what makes this kind of conversation so difficult.

When I said that declining to imagine a better future was telling, I didn’t mean it as a put-down. I meant it as a challenge. Because when we stop trying to define what better looks like, we give up our power to those who will define it for us. History shows where that leads. That’s how authoritarianism takes root; not only through force, but through the quiet surrender of imagination and personal responsibility.

If my earlier tone came across as condescending, that wasn’t my intent. My intention is tough love. I believe that acknowledging problems matters, but it’s not enough. If we stop there, we trade agency for frustration. I’d rather see us wrestle with what we want, even if it’s hard, than resign ourselves to cynicism.

So I’ll ask again: what kind of future would you actually want?

EDIT: I just realized that I missed part of an answer in your earlier comment, which I commend you for now. I apologize for not recognizing it before.

You said:

The optimistic future effectively requires some means of reclaiming some of that power and wealth for the rest of us.

Kudos. That's a start.

code4life

11 hours ago

> Finally, people don't really care about the truth.

"What is truth?" - Pontius Pilate

highwaylights

12 hours ago

I'm surprised this isn't a bigger concern given that:

For over a year now we've been at the point whereby a video of anyone saying or doing anything can be generated by anyone and put on the Internet, and it's only becoming more convincing (and rapidly)

We've been living in a post-truth world for almost ten years, so it's now become normalized

Almost half of the population has been conditioned to believe anything that supports their political alignment

People will actually believe incredibly far-fetched things, and when the original video has been debunked, will still hold the belief because by that point the Internet has filled up with more garbage to support something they really want to believe

It's a weird time to be alive

cruffle_duffle

12 hours ago

Absolutely! And don’t kid yourself into thinking you are immune from this either. You can find support of basically anything you want to believe. And your friendly LLM will be more than happy to confirm it too!

Honestly it goes right back to philosophy and what truth even means. Is there even such a thing?

Sohcahtoa82

10 hours ago

> Honestly it goes right back to philosophy and what truth even means. Is there even such a thing?

Truth absolutely is a thing. But sometimes, it's nuanced, and people don't like nuance. They want something they can say in a 280-character tweet that they can use to "destroy" someone online.

Eisenstein

12 hours ago

People forget that critical thinking means thinking critically about everything, even things you already think are true because they fit into your worldview.

antod

9 hours ago

Let alone the hordes who think "critical thinking" just means disagreeing with things.

bonoboTP

12 hours ago

We will adjust. And guess what, before photography, people managed somehow. People gossiped all sorts of stuff, spread malicious rumors, and you had to guess what was a lie and what wasn't. The way people dealt with it was witness testimony and physical evidence.

afavour

12 hours ago

We'll have to adjust, certainly. But that doesn't mean nothing bad will happen.

> People gossiped all sorts of stuff, spread malicious rumors and you had to guess what's a lie and what's not.

And there were things like witch trials where people were burnt at the stake!

The resolution was a shared faith in central authority. Witness testimony and physical evidence don't scale to populations of millions; you have to trust the person gathering that evidence. And that trust is what's rapidly eroding these days: in politics, in police, in the courts.

mola

12 hours ago

Yes, that adjustment could well be monarchy.

I can't see how functioning democracy can survive without truth as shared grounds of discussion.

safety1st

11 hours ago

The media's been lying to us for as long as it has existed.

Prior to the Internet the range of opinions which you could gain access to was far more limited. If the media were all in agreement on something it was really hard to find a counter-argument.

We're so far down the rabbit hole already of bots and astroturfing online, I doubt that AI deepfake videos are going to be the nail in the coffin for democracy.

The majority of the bot, deepfake and AI lies are going to be created by the people who have the most capital.

Just like they owned the traditional media and created the lies there.

bonoboTP

12 hours ago

I don't think the US was a monarchy for its first hundred years.

JadeNB

12 hours ago

> > I can't see how functioning democracy can survive without truth as shared grounds of discussion.

> I don't think the US was a monarchy for its first hundred years.

Did the US not have truth as shared grounds of discussion for its first hundred years?

timschmidt

10 hours ago

https://en.wikipedia.org/wiki/Yellow_journalism has been a thing for a very long time.

JadeNB

10 hours ago

Right, by which standard truth has never been a shared grounds of discussion. I think that there's a big difference between "some people lie" and "there's no agreement on shared truth."

bonoboTP

9 hours ago

That has nothing to do with generated videos though.

Teever

12 hours ago

Of course we will adjust. That is a truism that is beside the point.

What matters is how many people will suffer during this adjustment period.

How many Rwandan genocides will happen because of this technology? How many lynchings or witch burnings?

bonoboTP

12 hours ago

It's not beside the point. You can lie with words, you can lie with cartoons and drawings and paintings. You can lie with movies.

We will collectively understand that pixels on a screen are like cartoons, or Photoshop on steroids.

Teever

9 hours ago

Marconi demonstrated radio in 1895, and the first broadcast radio station started in 1920. By the 1930s Adolf Hitler was routinely using the medium to broadcast vile propaganda about Jews and others, which led to the Holocaust in the 1940s.

About fifty years later the Rwandan genocide took place, and many scholars attribute a key role in increasing the ethnic violence in the area to a preceding radio-based propaganda campaign.

Since then the link between radio and genocide seems to have weakened, but that's likely not so much because humans gained a better understanding of the medium as because propaganda moved on to more effective mediums like the internet.

Given that we didn't actually solve the problems with radio before moving on to the next medium, it isn't likely that we'll figure out the problems with these new mediums before millions die.

bonoboTP

9 hours ago

It was easy to spread lies through print and through good old fashioned word of mouth too. No radio needed.

And apropos radio, the War of the Worlds radio drama in 1938 is known to have made quite a few people afraid that it was real. And plenty of people collected money in communist Hungary for the sake of the enslaved Isaura (protagonist of a Brazilian soap opera). But most people adjusted and now understand that radio dramas are a thing and movies are a thing, and they will likewise adjust to the fact that pixels on a screen are just that.

Teever

7 hours ago

You seem to be suggesting that there's no noteworthy difference in the speed and effectiveness of different communication mediums like spoken, written, or radio and as such there's no noteworthy difference in the outcome of their deployment.

Is that a fair assessment of your comment? Is there a way to test your assertion?

bonoboTP

6 hours ago

No, I'm saying that people adapt, society adapts. Most people today don't shit themselves in the cinema thinking that the monster will appear among them; they understand that characters in TV series are not real, and only the mentally ill will berate an actor in the street for yesterday's episode.

It will take some time but it's in fact quite easy to explain it to older relatives if you make a few custom examples.

The bigger point is that realism is a red herring. You can spread propaganda with hand-drawn caricatures just as well, or even better. It's a panic over nothing. The real lever of control is what news to report on and how to frame it, what quotes to use, which experts to ask and which ones not to. The bottleneck was never HD realism.

Teever

6 hours ago

> they understand that characters in TV series are not real

They do not which is why a reality TV star who is 'good at business' is the current US President.

Reality TV is the old media and people are still falling for it and the consequences of them falling for it will be felt for decades. It will be the same with newer technologies but worse.

The novel threat that something like Sora poses isn't just from realism, it's also from the fast turn around and customized messaging. It will enable the exact things you caution about but at an unprecedented scale.

This idea that all new media is going to be just another case of "meet the new boss, same as the old boss" is ahistorical and shortsighted.

bonoboTP

5 hours ago

If that's how you view the majority, you simply can't simultaneously be for democracy. It takes some impressive mental gymnastics to redefine democracy as a system where people can somehow vote but all truth-production is centralized to the Expert Consensus narrative. I mean, maybe that's right. In fact basically no society until the 20th century had absolute, full one-person-one-vote electoral democracy. It's an odd development. Most historic societies restricted political affairs to certain "intellectually qualified" classes. Of course we see that as deeply unjust and exclusionary. But I'm not sure how else to interpret your type of complaint than as a wish for some kind of restriction on who can have decision power in political matters. But at the same time this is also called defending democracy. It's weird.

sofixa

12 hours ago

> The way people dealt with it was witness testimony and physical evidence.

Which are inapplicable today.

> We will adjust

Will we? Maybe years later... per event. It's only now dawning on the majority of Britons that Brexit was a mistake they were lied into.

wongarsu

12 hours ago

Brexit is a great example of how you can just lie by writing stuff on the side of a bus; no fake photos or videos required.

sofixa

11 hours ago

Exactly, it proves how easy it is to influence people. Which would be even easier with fake photos and videos.

schnable

12 hours ago

> Maybe years later...

It is a concern... it took a few centuries for the printing press to spur the Catholic/Protestant wars and then finally resolve them.

bonoboTP

12 hours ago

That has nothing to do with GenAI.

sofixa

11 hours ago

Yep, it's only made worse by it.

pessimizer

12 hours ago

> Which are inapplicable today.

No, they are not.

roadside_picnic

11 hours ago

> the final pour that cements us into our post-truth world.

I find it a bit more concerning that anyone would not already understand how deeply we exist in a "post-truth" world. Every piece of information we've consumed for the last few decades has increasingly been shaped by algorithms optimizing someone else's target.

But the real danger of post-truth is when there is a still enough of a veneer of truth that you can use distortions to effectively manipulate the public. Losing that veneer is essentially a collapse of the whole system, which will have consequences I don't think we can really understand.

The pre and early days of social media were riddled with various "leaks" of private photos and video. But what does it mean to leak a nude photo of a celebrity when you can just as easily generate a photo that is indistinguishable? The entire reason leaks like that were so popular is precisely because people wanted a glimpse into something real about the private life of these screen personalities (otherwise 'leaks' and 'nude scenes' would have the same value). As image generation reaches the limit, it will be impossible to ever really distinguish between voyeurism and imagination.

Similarly we live in an age of mass surveillance, but what does surveillance footage mean when it can be trivially faked. Think of how radicalizing surveillance footage has been over the past few decades. Consider for example the video of the Rodney King beating. Increasingly such a video could not be trusted.

> I'm kinda terrified of the future of politics.

If you aren't already terrified enough of the present of politics, then I wouldn't be worried about what Sora brings us tomorrow. I honestly think what we'll see soon is not increasingly more powerful authoritarian systems, but the break down of systems of control everywhere. As these systems over-extend themselves they will collapse. The peak of social media power was to not let it go further than it was a few years ago, Sora represents a larger breakdown of these systems of control.

uvaursi

11 hours ago

Agreed, but this is mostly coming from people who would normally discredit you bashing MSM as a kook/conspiracy theorist.

People forget, or didn’t see, all the staged catastrophes in the 90s that were pulled off the channel shortly after someone pointed out something obvious (e.g. dolls instead of human victims, wrong-location footage, and so on).

But if you were there, and if you saw that, and then saw them pull it off the air and pretend like it didn’t happen for the rest of the day, then this AI thing is a nothing burger.

pmontra

12 hours ago

Maybe they'll have to tour and meet people in person because videos will be devoid of trust.

On the other side we want to believe in something, so we'll believe in the video that will suit our beliefs.

It's an interesting struggle.

Sohcahtoa82

12 hours ago

> Maybe they'll have to tour and meet people in person

That doesn't scale.

During campaign season, they're already running as many rallies as they can. Outside the campaign train, smaller Town Hall events only reach what, a couple hundred people, tops? And at best, they might change the minds of a couple dozen people.

EDIT: It's also worth mentioning that people generally don't seek to have their mind changed. Someone who is planning on voting for one candidate is extremely unlikely to go to a rally for the opposition.

tinfoilhatter

12 hours ago

Most members of the US Congress, and the current presidential administration, are already devoid of trust. I can't speak for other countries' governments, but it seems to be a fairly common situation.

bilekas

12 hours ago

Yeah, I'm just as annoyed with the AI slop that's coming out as anyone, but the next generation of voters won't believe a thing, so they will be pushed towards believing what they see in real life, like campaigners who go door to door, etc. It could be a great thing and would, ironically, give meaning to the electoral system again!

kulahan

12 hours ago

Honestly I can't see a solution beyond concentrating power to highly localized regions. Lots more mayors, city councils, etc. so there is a real chance you can meet someone who represents you.

I don't fully believe anything I see on the internet that isn't backed up by at least two independent sources. Even then, I've probably been tricked at least once.

bilekas

12 hours ago

It may come to that, where the federal power is less influential and is there mainly to manage the overall services of the nation, and the states manage themselves. If I'm not wrong, that was kind of the original idea of the great experiment. It doesn't sound inherently wrong until you add tribalism into the mix, where people are not working with each other, but that seems to be the major push these days; at least that's the sentiment I get.

Would that change, maybe not, but maybe it would lessen the power grabs that some small few seem to gravitate towards.

I know if I wanted to influence the major elections, OpenAI, Google and Meta would be the first places I would go. That's a very small group of points of failure. Elections recently seem to be quite narrow, maybe they were before too though, but that kind of power is a silent soft power that really goes unchecked.

If people become more alert to being misled, that power can slowly degrade.

mentalgear

12 hours ago

Well, maybe it's less about Sora, but how they push the world towards making their next product essential: WorldCoin [0], Altman's blockchain token system (the one with the alien orb) to scan everybody's biometric fingerprint and serve as the only Source of Truth for the World - controlled by one private company.

It's like the old saying: They create their own ecosystem. Circular stock market deals being the most obvious, but the WorldCoin has been for years in the making and Altman often described it as the only alternative in a post-truth world (the one he himself is making of course).

[0] https://www.forbes.com.au/news/innovation/worldcoin-crypto-p...

shnp

11 hours ago

Flawless AI generated videos will result in video footage not being trusted.

This will simply take us back about 150 years to the time before the camera was common.

The transition period may be painful though.

OJFord

12 hours ago

It just makes trusted/verified sources more important, and will make more people care about them. I wouldn't be terrified for politics so much as for the raised barrier to entry (and concentration) of the press: people will pay attention to the BBC, Guardian, Times, but not (even less so than now) independentjourno.com; those sources will be more sceptical of whistleblowers and freelance investigative contributions, etc.

jmkni

11 hours ago

For sure.

I consider myself pretty on the ball when it comes to following this stuff, and even I've been caught off guard by some videos, I've seen videos on Reddit I thought were real until I realised what subreddit I was on

daxfohl

12 hours ago

Most people's minds are already made up. All this does is add some confirmation bias so they can feel better about what they were already certain of. I don't think it fundamentally changes anyone's opinions.

robofanatic

12 hours ago

If you don’t like anything digital (image/video/text), then it's definitely AI generated. I guess AI has kind of killed the "democratization of news" introduced by social media.

ozgrakkurt

11 hours ago

Or people will just stop believing random things they see online. You are underestimating people imo.

kazinator

12 hours ago

Being convinced that real videos are AI is arguably a better position than being convinced that real videos convey the iron-clad truth.

Everything is manipulated or generated until proven otherwise.

csallen

12 hours ago

People have been tricked by counterfeits ever since the invention of writing (or even drawing) first made it possible for a person to communicate without being physically present.

At that moment, it simultaneously became possible to create "deep fakes" by simply forging a signature and tricking readers as to who authored the information.

And even before that, just with speaking, it was already possible to spread lies and misinformation, and such things happened frequently, often with disastrous consequences. Just think of all the witch hunts, false religions, and false rumors that have been spread through the history of mankind.

All of this is to say that mankind is quite used to dealing with information that has questionable authorship, authenticity, or accuracy. And mankind is quite used to suffering as a result of these challenges. It's nothing particularly new that it's moving into a new media format (video), especially considering that this is a relatively new format in the history of mankind to begin with.

(FWIW, the best defense against deep fakes has always been to pay attention to the source of information rather than just the content. A video about XYZ coming from XYZ's social media account is more likely to be accurate than if it comes from elsewhere. An article in the NYTimes that you read in the NYTimes is more likely to be authentic than a screenshot of an article you read from some social media account. Etc. It's not a perfect measure -- nothing is -- but I'd say it's the main reason we can have trust despite thousands of years of deep fakes.)

IMO the fact that social media -- and the internet in general -- have decentralized media while also decoupling it from geography is less precedented and more worrisome.

kazinator

12 hours ago

Here is the thing: we should never have trusted photographs and motion pictures.

Fakery isn't new, only the product of scale and quality at which it is becoming possible.

pdntspa

12 hours ago

Just yesterday I saw a Sora-generated video that purported to show a HIMARS missile failing, falling on stopped traffic, and exploding on the 5 at Camp Pendleton on Saturday. (IRL they were doing some kind of live-fire drill and it did actually involve projectiles flying over the freeway.)

While there was some debris IRL, the freeway was completely shut down per the governor's orders and nobody was harmed. (Had he not done this, that same debris might have hit motorists, so this was a good call on his part.)

You could see the "Sora" watermark in the video, but it was still popular enough to make it in my reels feed that is normally always a different kind of content.

In this case whoever made that was sloppy enough to use a turnkey service like Sora. I can easily generate videos suitable for reels using my GPU and those programs don't (visibly) watermark.

We are in for dark times. Who knows how many AI-generated propaganda videos are slipping under the radar because the operator is actually half-skilled.

Sohcahtoa82

10 hours ago

> I can easily generate videos suitable for reels using my GPU and those programs don't (visibly) watermark.

Curious what you used. I have an RTX 5090 and I've tried using some local video generators and the results are absolute garbage unless I'm asking for something extremely simple and uncreative like "woman dancing in a field".

truelson

12 hours ago

We are having a huge amount of technological change (we haven't even learned how to handle social media as a society yet...). We're experiencing a global loss in trust, and things may fall apart for a bit until our society develops a better immune system to such ills. It is scary.

I think we may revert back to trusting only smaller groups of people, being skeptical of anything outside that group, becoming a bit more tribal. I hope without too many deleterious effects, but a lot could happen.

But humans, as a species, are survivors. And we, with our thinking machines will figure out ultimately how to deal with it all. I just hope the pain of this transition is not catastrophic.

overvale

12 hours ago

Other people have said this, but I don’t think it’s going to be any different than living in a world where people can spread rumors orally or print lies with a printing press. We’ve been dealing with those challenges for a long time.

Our ways of thinking and our courts understand that you can’t trust what people say and you can’t trust what you read. We’ve internalized that as a society.

Looking back, there seems to have been a brief period of time when you could actually trust photographs and videos. I think in the long run, this period of time will be seen as a historical anomaly, and video will be no more trusted than the printed or spoken word is today.

renewiltord

12 hours ago

This is a non-concern. You can see videos where a specific thing happens and people will describe a different thing happening. Not some eyewitness going off memory: you can look at a video, and there will be people on Reddit saying stuff that didn't happen in it.

Then you can see any conversation about the video will be even more divorced from reality.

None of this requires video manipulation.

The majority of people are idiots on a grand scale. Just search any social media for PEMDAS and you will find hordes of people debating the value of 2 + 3 / 5 on all sorts of grounds. "It's definitely 1. 2+3=5, then divided by 5 is 1", stuff like that.
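For what it's worth, the arithmetic itself is unambiguous under standard operator precedence; a quick sketch (in Python, added here for illustration, not part of the original comment):

```python
# Division binds tighter than addition, so 2 + 3 / 5
# parses as 2 + (3 / 5), not (2 + 3) / 5.
standard = 2 + 3 / 5    # 2.6
mistaken = (2 + 3) / 5  # 1.0, the "it's definitely 1" reading

print(standard)  # 2.6
print(mistaken)  # 1.0
```

Programming languages and calculators with standard precedence agree on 2.6 here; only strict left-to-right evaluation with no precedence yields 1.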

CaptainOfCoit

12 hours ago

"See it to believe it" will once again be more important.

anarticle

12 hours ago

I think their game-theoretic aim was to completely discredit video online. Just as we don't accept text or images in general as truth when we see them, we are being flooded with completely fake videos so people can shake the idea that video is truth.

It smells of e/acc and effective-altruist ethics, which are not my favorite, but I don't work at OpenAI, so I don't have a say; I can only interpret.

I agree, but we will likely continue down this road...

ajuc

12 hours ago

Every breakthrough in information technology caused disruption in the historical sense (i.e. millions of deaths).

From the writing, through organized religion, printing press, radio and tv, internet and now ai.

The printing press and the wars of the Reformation are the obvious example; radio and totalitarianism is less well known; the internet and the new populism is only just starting to be recognized for what it is.

Eventually we'll get it regulated and adjust to it. But in the meantime it's going to be a wild ride.

sixothree

12 hours ago

The willingness of people to believe misinformation today is astounding. They are already choosing to surround themselves with voices of hate and anger. I don't want to see what's next for them.

Razengan

12 hours ago

Well, usually when there's a mass problem, some technology eventually eliminates it.

Like cars making horse manure in cities a non-issue (https://www.youtube.com/watch?v=w61d-NBqafM)

Maybe the solution to everybody lying would be some way to directly access a person's actual memories from their brains..

rpjt

12 hours ago

I have no FOMO when it comes to Sora. None whatsoever. Authenticity is becoming more important day by day.

bdcravens

9 hours ago

More likely it's the beginning of the end for TikTok, since the amount of posts that use it seems to be flooding the platform, lowering trust and credibility in each video.

fishmicrowaver

7 hours ago

Hmm, I have my doubts. I don't really understand the appeal of hyper-consumerism, brand focus, and big SM platform engagement. But I look over at my wife and she is clearly plugged in to a giant worldwide cultural movement of women's interests, product recommendations, and politics. It also appears to be the case that women are less likely to engage with AI. It's one of the big reasons I think it's a massive blunder to pursue AI erotica. I think a big part of the TikTok user base (women) will have to be persuaded to jump on the AI bandwagon, and I'm not sure the industry is pursuing any products or features to woo that market.

JohnMakin

12 hours ago

It's made my non-sora feeds nearly inconsumable, which I admitted to myself was probably a good thing.

truelson

12 hours ago

I love Cal. I have really, really taken his thoughts on many things to heart for well over a decade now.

I think he's being a bit harsh here. And there are some confounding factors why.

Yes, we have an AI bubble. Yes, there's been a ton of hype that can't be met with reality in the short term. That's normal for large changes (and this is a large technological change). OpenAI may have some rough days ahead of it soon, but just like the internet, there's still a lot of signal here and a lot of work still to be done. Going through Suna+Sora videos just last night was still absolutely magical. There's still so much here.

But, OpenAI is also becoming, to use a Ben Thompson term, an aggregator. If it's where you go to solve many problems, advertising and more is a natural fit. It's not certain who comes out on top of the space (or if it can be shared), but there are huge rewards coming in future years, even after a bubble has popped.

Cal is having a very strong reaction here. I value it, but I wish it was more nuanced.

kulahan

12 hours ago

LLMs are only valuable to me at the moment explicitly because ads are not part of the scene. Maybe for a while others will use it, but it will be on the exact same treadmill as anything else ad-based: you will become the product, any recommendations are worthless trash, and it is oriented to show as many ads as possible, rather than providing useful content.

Ads destroy... pretty much everything they touch. It's a natural fit, but a terrible one.

marshfarm

12 hours ago

It was a mistake from the beginning to use language as the basis for tokens, and embedding spaces between them, to generate semantics. It wasn't thought out; it was snowballing trial and error that went out of control.

bongodongobob

12 hours ago

Lol ok. We'll wait for your game changing technology, keep us posted.

marshfarm

8 hours ago

Action patterns in syntax? They already exist; the field chose to forgo that level of emulation for arbitrary words, in arbitrary symbols, predicted and geometrically arranged in space as "meaning", as tokens.

I'd suggest comp sci caught the low fruit; whatever comes out of the keyboard as a basis is none too smart.

kick_in_the_dor

12 hours ago

OP has a point. Are these type of embeddings the best way to model thought?

mentalgear

12 hours ago

Certainly not the best, just the most practical / commercially sellable. And once a pattern like LLM text embedding is established as the way to "AGI", it takes years for other, more realistic approaches to gain funding again. Gary Marcus has written extensively about how legitimate AGI research is actually being set back years by the superficial LLM-as-AGI hype.

marshfarm

12 hours ago

I think the best way would have been to assume thought is wordless (as the science now tells us), and that images and probability (as symbols) are still arbitrary. That was the threshold to cross. Neither the neurosymbolic nor the neuromorphic approach gets there. Nor will any "world model" achieve anything, as models are arbitrary.

Using the cybernetic to information theory to cog science to comp sci lineage was an increasingly limited set of tools to employ for intelligence.

Cybernetics should have been ported expansively to neuroscience, then neurobiology, then something more expansive like ecological psychology or coordination dynamics. Instead of expanding, comp sci became too reductive.

The idea that a reductive system anyone with a little math training could A/B test against vast swaths of information gleaned from existing forms would unlock highly evolved processes like thinking, reasoning, and action, and that this defines a path to intelligence, is quite strange. It defies scientific analysis. Intelligence is incredibly dense in biology, a vastly hidden, parallel process in which, with one affinity removed (like the emotions), the intelligence vanishes into zombiehood.

Had we looked at that evidence, we'd have understood that language/tokens/embedding space couldn't possibly be a composite for all that parallelism.

brokensegue

12 hours ago

it's the best way we've found so far?

bongodongobob

8 hours ago

No one knows, especially not parent with his one liner quip.

marshfarm

an hour ago

Thought is wordless. It's made in action-spatial syntax. As these are the defined states of intelligence, this would have been a far better approach to emulate. Words and images are the equivalent of junk code; semantics can't be specifically extracted from them.

FrustratedMonky

10 hours ago

When you have this much money, you can afford to chase AGI and assign a few people to make an app. The app might be frivolous, but it keeps OpenAI in the public view. It's money well spent on marketing.

micromacrofoot

12 hours ago

OpenAI is steering significant amounts of traffic away from Google, and ChatGPT is a fairly common name that extends beyond awareness of the company (or even of what the word specifically means).

Not nearly on the level of "Kleenex" or "Google" as a term, but impressive given that other companies have spent decades trying to make a similar dent.

rchaud

11 hours ago

So what though? At best they will stuff it with ads like Google has done and become what is basically Bing w/ Copilot. That isn't going to pay back the $500 billion or however much they're trying to raise.

micromacrofoot

11 hours ago

it means I highly doubt it's the "beginning of the end" as postulated... unless it's a very very long end

as another example: tesla has strung along a known overvaluation for a long time now and there's no end in sight despite a number of blunders

Nimitz14

8 hours ago

Sora is about data gathering. This take is too simplistic.

righthand

12 hours ago

Any LLM generation tools are spam generation tools. AI-slop === Spam. How big is the market for spam again?

kulahan

12 hours ago

This isn't even remotely true. In its current iteration, it's one of the best jumping-off points for getting up to speed on the basics of a new topic.

righthand

11 hours ago

So is spam if you only need a summarized version of the financial difficulties of princes in Nigeria. Or the kinds of people doctors can’t stand.

Your jumping off point is a cliff into a pile of leaves. It looks correct and comfy but will hurt your butt taking it for granted. You’re telling people to jump and saying “it’ll get better eventually just keep jumping and ignore the pain!”

kulahan

11 hours ago

Nope, spam is specifically unwanted. Also, I'm saying "jump in the leaves, it's fun if you don't try leaping in from a mile up" and you're saying "NO LEAVES KILL PEOPLE ALL THE TIME" lol.

righthand

6 hours ago

What if I don’t want to use LLMs, but people keep telling me it’s fine and I should? Isn’t that spam?

What if my reason isn’t “I like typing code” but instead “we don’t need more Googles doing Google things and abusing privacy”?

Then personally the whole thing is spam.

Regardless of the reasons for and against LLMs, none of that obscures the fact that the primary use case for generated content has been to scam and spam people.

That spam scam can be as simple as getting people to pour their personal private info into an LLM, or it can be ads or a generated lie. Regardless, it’s unwanted by a lot of people. And the history of that tech, and the attempts at normalizing it, are founded in spam techniques.

Even the gratuitous search for AGI is spammed on us as these companies take taxpayer money and build out infrastructure that’s actually available to 0% of the public for use.

Like I discredit in my mind anyone who cites ChatGPT as a source.

mentalgear

12 hours ago

Unfortunately, due to the generation/verification ratio, spam and misinformation are indeed the lowest-hanging fruit. It's so much easier to generate LLM output than to verify it, which is probably why Google held back the Transformer architecture.

sosodev

12 hours ago

This article makes the claim that OpenAI, and AI in general, is massively overhyped because OpenAI is looking to sell slop. I'm not sure I can agree with that basic premise.

Whether AGI does or does not materialize sometime soon doesn't matter. OpenAI, like every company that wants to raise massive amounts of money, needs to show huge growth numbers now. The unfortunate, simple truth seems to be that slop is a growth hack.

gdulli

12 hours ago

> A company that still believes that its technology was imminently going to run large swathes of the economy, and would be so powerful as to reconfigure our experience of the world as we know it, wouldn’t be seeking to make a quick buck selling ads against deep fake videos of historical figures wrestling.

But also, a company that earnestly believes that it's about to disrupt most labor is going to want to grab as many of those bucks as possible before people no longer have income.