Machines of loving grace: How AI could transform the world for the better

115 points, posted 16 hours ago
by jasondavies

98 Comments

KaiserPro

15 hours ago

One of the sad things about tech is that nobody really looks at history.

The same kinds of essays were written about trains, planes and nuclear power.

Before Lindbergh went off the deep end, he was convinced that "airmen" were gentlemen who could sort out the world's ills.

The essay contains a lot of coulds, but doesn't touch on the base problem: human nature.

AI will be used to make things cheaper. That is, lots of job losses. Most of us are up for the chop if/when competent AI agents become possible.

Loads of service jobs too, along with a load of manual jobs when suitable large models are successfully applied to robotics (see ECCV for some idea of the progress for machine perception.)

But those profits will not be shared. Human productivity has exploded in the last 120 years, yet we are working longer hours for less pay.

Well, AI is going to make that worse. It'll cause huge unrest (see the Luddite riots, Peterloo, the birth of unionism in the USA, plus many more).

This brings us to the next thing that AI will be applied to: Murdering people.

Anduril is already marrying basic machine perception with cheap drones and explosives. It's not going to take long to get to personalised explosive drones.

AI isn't the problem, we are.

The sooner we realise that it's not a technical problem to be solved, but a human one, the better chance we stand.

But looking at the emotionally stunted, empathy vacuums that control either policy or purse strings, I think it'll take a catastrophe to change course.

kranke155

15 hours ago

We are entering a dystopia and people are still writing these wonderful essays about how AI will help us.

Microtargeted psychometrics (Cambridge Analytica, AggregateIQ) have already made politics in the West an unending barrage of information warfare. Now we'll have millions of autonomous agents. At some point soon in the future, our entire feed will be AI content or upvoted by AI or AI manipulating the algorithm.

It's like you said - this essay reads like peak AI. We will never have as much hope and optimism about the next 20 years as we seem to have now.

Reminds me of some graffiti I saw in London, while the city's cost of living was exploding and making the place unaffordable to all but a few:

"We live in a Utopia. It's just not ours."

xpe

11 hours ago

Is such certainty warranted? I don’t think so; it strains credibility.

I’m very concerned about many future scenarios. But I admit the necessity of probabilistic assessments.

musicale

5 hours ago

As long as there is money to be made or power to be gained, any technology will be used to get it, especially if the negative effects are externalities that the technology developers and users are not liable for.

xpe

11 hours ago

The author makes it very clear that AI can cut both ways. See the intro; you may have overlooked it.

MichaelZuo

13 hours ago

You're looking at it from a narrow perspective.

There are millions of middle class households living pretty comfortable lives in Africa, India, China, ASEAN, and Central Asia that were living hand-to-mouth 20 years ago.

And I don’t mean middle class by developing country standards, I mean middle class by London, UK, standards.

So it pretty much is a ‘utopia’ for them, assuming they can keep it.

Of course that’s cold comfort for households in London regressing to the global average, but that’s the inherent nature of rising above and falling towards averages.

sitkack

12 hours ago

But now you are missing the GP's point.

ManuelKiessling

13 hours ago

I do not agree with the following:

> But those profits will not be shared. Human productivity has exploded in the last 120 years, yet we are working longer hours for less pay.

I am, however, criticizing this in isolation — that is, my goal is not to invalidate (nor validate, for that matter) the rest of your text; only this specific point.

So, I do not agree. We are clearly working far fewer hours than 120 or even 60 years ago, and we are getting a lot more back for it.

The problem I have with this is that the framing is often wrong — whether some number on a paycheck goes up or down is completely irrelevant at the end of the day.

The only relevant question boils down to this: how many hours of hardship do I have to put in, in order to get X?

And X can be many different things. Like, say, a steak, or a refill at the gas station, or a loaf of bread.

Now, I do not have very good data at hand right here and right now, but if my memory and my gut feeling serve me right, the difference is significant, often even dramatic.

For example, for one kilogram of beef, the average German worker needs to toil about 36 minutes nowadays.

In 1970, it was twice as much time that needed to be worked before the same amount of beef could be afforded.

In the seventies, Germans needed to work 145 hours to be able to afford a washing machine.

Today, it’s less than 20 hours!

And that’s not even taking into account the amount of „more progress“ we can afford today, with less toil.

While one can imagine that in 1970 something resembling a smartphone or a lane- and distance-keeping car could theoretically have been produced for me (by NASA, probably), I can't even begin to imagine how many hours, if not millennia, I would have needed to work in order to receive a paycheck that would have paid for it.

We get SO much more for our monthly paycheck today, and so many more people do (billions actually), it’s not even funny.

musicale

6 hours ago

Wage increases have been lagging productivity increases for about 60 years.[1]

For the past 35 years or so, automation seems to have driven wage inequality, and AI is headed down the same path.[2,3]

[1] https://www.epi.org/productivity-pay-gap/

[2] https://news.mit.edu/2020/study-inks-automation-inequality-0...

[3] https://www.technologyreview.com/2022/04/19/1049378/ai-inequ...

CaptainFever

4 hours ago

Note that the famous EPI graph in your first source appears to be controversial with mainstream economists: https://www.reddit.com/r/AskEconomics/comments/12kk79k/what_...

(However, there seem to be other sources that say the same thing, despite criticism of EPI's graph. So I'm not sure. It seems to be consensus that wages have been stagnating or not keeping up with productivity for low-skilled workers, though? But I can't find much about wages or compensation compared to productivity for high-skilled workers [e.g. programmers, AI engineers].)

Furthermore, your comment focused on wages, where your parent comment specifically claimed that wages were less important than "how much work it takes to buy something similar, like steak, compared to the past". It misses the point.

Finally, I admit I haven't seen the latter two sources, but in my layman's opinion, inequality isn't necessarily bad if all needs are met.

musicale

5 hours ago

> We get SO much more for our monthly paycheck today

Many goods are cheaper, and electronics are vastly better, but housing, health care, and higher education costs have outpaced inflation for decades.

thrance

3 hours ago

You are ignoring the poorest half of the planet. How long does a Bangladeshi woman have to work to buy a washing machine? How many of them will ever even own one? And who made the washing machine you own? Was it made in your country? Or in China?

jaredhallen

7 hours ago

We get more material goods, to be sure. But do we get more satisfaction out of our lives? More happiness? More joy?

chairmansteve

7 hours ago

You are right, but the average person is less happy than in 1970.

j7ake

2 hours ago

Average meaning not gay, not ethnic minority, and not living in China or Soviet Union?

jimkleiber

15 hours ago

> The essay contains a lot of coulds, but doesn't touch on the base problem: human nature.

> AI isn't the problem, we are.

I think when we frame it as human _nature_, then yes, _we_ look like the problem.

But what if we frame it as human _culture_? Then _we_ aren't the problem, but rather our _behaviors/beliefs/knowledge/etc_ are.

If we focus on the former, we might just be essentially screwed. If we focus on the latter, we might be able to change things that seem like nature but might be more nurture.

Maybe that's a better framing: the base problem is human nurture?

laurex

14 hours ago

I think this is an important distinction. Yes, humans have some inbuilt weaknesses and proclivities, but humans are not required to live in or develop systems in which those weaknesses and proclivities are constantly exploited for the benefit/power of a few others. Throughout human history, there have been practices of contemplation, recognition of interdependence, and ways of increasing our capacity for compassion and thoughtful response. We are currently in a biological runaway state with extraction, but it's not the only way humans have of behaving.

exe34

14 hours ago

> Throughout human history, there have been practices of contemplation, recognition of interdependence, and ways of increasing our capacity for compassion and thoughtful response.

has this ever been widespread in society? I think such people have always been few and far between?

keyringlight

14 hours ago

The example that comes to mind is post-WW2 Germany, but that was apparently a hard slog to change the minds of the German people. I really doubt any organization could do something similar, presenting a viewpoint opposed to the companies (and their resources) behind and using AI.

throwaway14356

13 hours ago

you are living in it.

The default state is to have extremely poor hard working people and extremely rich not working ones.

No one would have dared to dream of the luxury working people enjoy today. It took some doing! We used to sell people not too long ago. Kids in coal mines. The work week was 6-7 days, 12-14 hours. One coin per day, etc.

The fight isn't over, the owner class won the last few rounds but there remains much to take for either side.

exe34

8 hours ago

what does "practices of contemplation, recognition of interdependence, and ways of increasing our capacity for compassion and thoughful response." have to do with luxury? it sounds like you're arguing for the opposite? scrolling Instagram isn't the contemplation I have in mind.

achrono

14 hours ago

Sure. But why do you think changing human nurture is any easier than changing human nature? I suspect that as your set of humans in consideration tends to include the set of all humans, the gap between changeability of human nature vs changeability of human nurture reduces to zero.

Perhaps you are implying that we sign up for a global (truly global, not global by the standards of Western journalists) campaign of complete and irrevocable reform in our behavior, beliefs and knowledge. At the very least, this implies simply killing off a huge number of human beings who for whatever reason stand in the way. This is not (just) a hypothesis -- some versions of this have been tried and tested. *

* https://en.wikipedia.org/wiki/Totalitarianism

wrs

13 hours ago

Arguably, human nature hasn't changed much in thousands of years. But there has been plenty of change in human culture/nurture on a much smaller timescale. E.g., look at a graph of world literacy rates since 1800. A lot of human culture is an attempt to productively subvert or attenuate the worse parts of human nature.

Now, maybe the changes in this case would need to happen even quicker than that, and as you point out there's a history of bad attempts to change cultures abruptly. But it's nowhere near correct to say that the difficulty is equal.

beepbooptheory

13 hours ago

In general I think concepts like politics, art, community, etc. try to capture certain discrete ways we are all nurtured. I am not even sure what your point is here; there is nothing more totalitarian than reducing people to their "nature". That is arguably its precise conceit, if it has one: that such a thing is possible. And the fact that totalitarianism is constantly accompanied by force and violence seems to be the biggest critique you can make of all sorts of "human nature" reductions.

And what is even the alternative here? What's your freedom of belief worth when you're essentially just a behaviorist anyway?

tbrownaw

14 hours ago

> I think when we frame it as human _nature_, then yes, _we_ look like the problem.

> But what if we frame it as human _culture_? Then _we_ aren't the problem, but rather our _behaviors/beliefs/knowledge/etc_ are.

> If we focus on the former, we might just be essentially screwed. If we focus on the latter, we might be able to change things that seem like nature but might be more nurture.

> Maybe that's a better framing: the base problem is human nurture?

This is about the same as saying that leaders can get better outcomes by surrounding themselves with yes-men.

Just because asserting a different set of facts makes the predicted outcomes more desirable, doesn't mean that those alternate facts are better for making predictions with. What matters is how congruent they are to reality.

xpe

11 hours ago

> AI isn't the problem, we are.

I see major problems with the statement above. First, it is a false dichotomy. That’s a fatal flaw.

Second, it is not specific enough to guide action. Pretend I agree with the claim. How would it inform better/worse choices? I don’t see how you operationalize it!

Third, I don’t even think it is useful as a rough conceptual guide; it doesn’t “carve reality at the joints” so to speak.

roenxi

13 hours ago

> AI will be used to make things cheaper. That is, lots of job losses. Most of us are up for the chop if/when competent AI agents become possible.

> But those profits will not be shared. Human productivity has exploded in the last 120 years, yet we are working longer hours for less pay.

Don't you have to pick one? It seems a bit disjointed to simultaneously complain that we are all losing our jobs and that we are working too many hours. What type of future are we looking for here?

If machines get so productive that we don't need to work, everyone losing their jobs isn't a long-term problem and may not even be a particularly damaging short-term one. It isn't like we have less stuff or more people who need it. There are lots of good equilibriums to find. If AI becomes a jobs wrecking ball I'd like to see the tax system adjusted so employers are incentivised to employ large numbers of people for small numbers of hours instead of small numbers of people for large numbers of hours - but that seems like a relatively minor change and probably not an especially controversial one.

KaiserPro

4 hours ago

I did not argue that cogently.

At each "node" of the industrial revolution, lots of workers were displaced. (weavers, scriveners, printers, car mechanics). Those workers were highly paid because they were highly skilled.

Productivity made things cheaper to make, because it removed the expensive labour required to make it.

That means fewer well-paid jobs with controlled hours. This leads to poorly paid jobs with high competition and poor hours.

Yes, new highly paid jobs were created, but not for the people who were dispossessed by the previous "node".

> If machines get so productive that we don't need to work, everyone losing their jobs isn't a long-term problem

who is going to pay for them? not the people who are making the profits.

roenxi

4 hours ago

> who is going to pay for them? not the people who are making the profits.

They kinda have to. If I'm so productive at producing food that I grow enough for 2,000 people, I have to find 2,000 people to feed. Or the land sits fallow and I get nothing at all. There is a lot of wiggle room at the margins, but overall it is hard to get away with producing more stuff and warehousing it.

We'd expect to see a bunch of silly jobs where people are peeling grapes and cooling the wealthy with fronds; but the capitalists can't hoard the wealth because they can't really do anything with it.

jacobedawson

3 hours ago

"The works of the roots of the vines, of the trees, must be destroyed to keep up the price, and this is the saddest, bitterest thing of all. Carloads of oranges dumped on the ground. The people came for miles to take the fruit, but this could not be. How would they buy oranges at twenty cents a dozen if they could drive out and pick them up? And men with hoses squirt kerosene on the oranges, and they are angry at the crime, angry at the people who have come to take the fruit.

A million people hungry, needing the fruit- and kerosene sprayed over the golden mountains. And the smell of rot fills the country. Burn coffee for fuel in the ships. Burn corn to keep warm, it makes a hot fire. Dump potatoes in the rivers and place guards along the banks to keep the hungry people from fishing them out. Slaughter the pigs and bury them, and let the putrescence drip down into the earth."

John Steinbeck, The Grapes of Wrath

N8works

13 hours ago

Yes. I used to share your viewpoint.

However, recently I've come to understand that AI is about the inherently unreal, and that authentic human connection is really going to be where it's at.

We build because we need it after all, no?

Don't give up. We have already won.

Vecr

13 hours ago

I think KaiserPro is saying authentic human connection doesn't "pay the bills", so to speak. If AI is "about the unreal" as you say, what if it makes everything you care about unreal?

swatcoder

15 hours ago

> One of the sad things about tech is that nobody really looks at history.

First, while I often write much of the same sentiment about techno-optimism and history, you should remember that you're literally in the den of Silicon Valley startup hackers. It's not going to be an easily heard message here, because the site specifically appeals to people who dream of inspiring exactly these essays.

> The sooner we realise that its not a technical problem to be solved, but a human one, we might stand a chance.

Second... you're falling victim to the same trap, but simply preferring some kind of social or political technology instead of a mechanical or digital one.

What history mostly affirms is that prosperity and ruin come and go, and that nothing we engineer lasts all that long, let alone forever. There's no point in dreading it, whatever kind of technology you favor or fear.

The bigger concern is that some of the achievements of modernity have made the human future far more brittle than it has been in what may be hundreds of thousands of years. Global homogenization around elaborate technologies -- whether mechanical, digital, social, political or otherwise -- sets us up in a very "all or nothing" existential space, where ruin, when it eventually arrives, is just as global. Meanwhile, the purge of diverse, locally practiced, traditional wisdom about how to get by in un-modern environments robs the species of its essential fallback strategy.

germinalphrase

13 hours ago

“Meanwhile, the purge of diverse, locally practiced, traditional wisdom about how to get by in un-modern environments steals the species of its essential fallback strategy“

While potentially true, that same wisdom was developed in a world that itself no longer exists. Review accounts of natural wildlife and ecological bounty from even 100 years ago, and it’s clear how degraded our natural world has become in such a very short time.

tbrownaw

10 hours ago

> Global homogenization around elaborate technologies -- whether mechanical, digital, social, political or otherwise -- sets us up in a very "all or nothing" existential space, where ruin, when it eventually arrives, is just as global.

What is the minimum population size needed in order to have, say, computer chips? Or even a ball-point pen? I'd imagine those are a bit higher than what's needed to have pencils, which I've heard is enough that someone wrote a book about it.

> Meanwhile, the purge of diverse, locally practiced, traditional wisdom about how to get by in un-modern environments robs the species of its essential fallback strategy.

Is it really a "purge" if individuals are just not choosing to waste time learning things they have no use for?

amelius

13 hours ago

History tells us that humans will not tolerate any "creature" to exist that is smarter than them, so that is where the story will end.

tbrownaw

11 hours ago

How exactly does this show up in history? When did we meet something smarter than us, and what did we do that was different than we were doing to less-smart things at the time?

amelius

3 hours ago

We couldn't live side by side with Neanderthals. Hell, we can't even live peacefully with a different race.

mythrwy

14 hours ago

But will AI be eventually used to change human nature itself?

startupsfail

12 hours ago

The responsibility the airmen take when they take passengers off the ground (holding their lives in their hands) is a serious one.

People like Trump are unlikely to get a license, or to accumulate enough Pilot in Command hours without an accident, and the experience itself changes a person.

If I have a choice of who to trust, between an airman or not airman, I’d likely choose an airman.

And I’m not sure what you are referring to about Lindbergh, but among other things he was a Pulitzer Prize-winning author and environmentalist, and following Pearl Harbor he fought against the aggressors.

KaiserPro

4 hours ago

You neatly sum up his world view. He thought that Airmen were the pinnacle of civility. He didn't want a fight using planes, because that would be unsporting.

> And I’m not sure what you are referring to about Lindbergh

Well, he thought that the Jews were pushing for WWII.

https://alba-valb.org/wp-content/uploads/2017/04/Lindbergh_1... He also wanted to maintain friendly relations with Goering.

alexashka

11 hours ago

For a non-trivial number of people, having power/status over others is what they like.

For a non-trivial number of people, they don't care what happens to others, as long as their tribe benefits.

As long as these two issues are not addressed, very little meaningful progress is possible.

> Looking at the emotionally stunted, empathy vacuums that control either policy or purse strings, I think it'll take a catastrophe to change course.

A catastrophe won't solve anything because you'll get the same people who love power over others in power and people who don't mind fucking over others right below them, which is where humanity has always been.

xpe

11 hours ago

The linked article is worth reading.

Apologies for sounding so dismissive, but after putting in a lot of study myself, I want to warn people here: HN is not a great place for discussing AI safety. As of this writing, I’ve found minimal value in the comments here.

A curious and truth-seeking reader should find better forums and sources. I recommend seeking out a structured introduction from experts. One could do worse than start with Robert Miles on YouTube. Dan Hendrycks has a nice online textbook too.

thrance

15 hours ago

This is basically the tech CEO's version of the Book of Revelation: "AI will soon come and make everything right with the world; help us and you will be rewarded with a Millennium of bliss in Its presence".

I won't comment on the plausibility of what is being said, but regardless, one should beware this type of reasoning. Any action can be justified, if it means bringing about an infinite good.

Relevant read: https://en.wikipedia.org/wiki/Singularitarianism

xpe

11 hours ago

Any attempt to connect this to the Book of Revelation is strained. Amodei uses reasoning and is willing to be corrected and revised; quite the opposite of most “divinely revealed” texts.

wmf

12 hours ago

This is a very poor summary of the essay which already contains detailed rebuttals to these arguments.

HocusLocus

14 hours ago

It won't bring about Infinite Good. It'll bring about infinite contentment by diddling the pleasure center in our brains. Because you know, eventually everything is awarded to and built by the lowest bidder.

cs702

14 hours ago

I found the OP to be an earnest, well-written, thought-provoking essay. Thank you for sharing it on HN, and thank you also to Dario Amodei for writing it.

The essay does have one big blind spot, which becomes obvious with a simple exercise: If you copy the OP's contents into your word processing app and replace the words "AI" with "AI controlled by corporations and governments" everywhere in the document, many of the OP's predictions instantly come across as rather naive and overoptimistic.

Throughout history, human organizations like corporations and governments haven't always behaved nicely.

adithyan_win

3 hours ago

To make it more digestible and something I can listen to during my walks, I converted it into an audio-narrated version using ElevenLabs.

If you'd prefer to listen rather than read, you can check out the audio version here : https://open.spotify.com/episode/5qCsTnRKxtHZvvJMGSJ6Df?si=y...

Just wanted to share it in case anyone else is interested. Enjoy!

TZubiri

11 hours ago

"It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use."

I think I get where the author is coming from, the AI would be in the cloud. But it bears repeating, the cloud is somebody else's computers, software has a physical embodiment, period.

This is not a philosophical nitpick, it's important because you can pull the plug (or nuke the datacenter) if necessary.

bathtub365

11 hours ago

I wonder if there’s a word that describes the property of software where it isn’t tied to the hardware that it’s currently running on and is endlessly copyable with almost no effort

TZubiri

9 hours ago

For the first half of that sentence, portable?

I'm not sure what the argument here is. Operating systems are now sentient and ethereal?

You could make the same argument for books, for that matter. I guess they do exist in a form that is unbounded by the physical books they reside in; they live in the minds of people. Whatever.

And the claim that they can be copied for "no effort" is wrong: copying requires energy, which is the other aspect of corporeality. You need not just matter but energy for the software to exist and express itself in the world.

What are we even debating? Have we lost our minds?

What is mind? Doesn't matter. What is matter? Never mind

xpe

11 hours ago

Yes, and physical embodiments vary considerably. When software can relocate its source code and data in seconds or less, containment strategies begin to look increasingly bleak.

The field of AI safety has written extensively about misunderstandings and overoptimism regarding off switches and boxing.

TZubiri

9 hours ago

Hard? sure. Impossible? No

This is a non-trivial error. Let's not dive into the geopolitics of kill switches and decentralized systems when we are discussing trivial mistakes. AI, like any software, is not an ethereal being; like us, it is limited by its mortal coil.

This is true for extant systems. If anyone, layman or PhD, were to say that TikTok, Facebook, WeChat, Bitcoin, Ethereum, or USD Tether has no physical embodiment, whether in recreational discussion, in court, or in a professional capacity, the answer needs to be that they are wrong, clear and simple.

gyre007

15 hours ago

I think Dario is trying to raise a new round because OpenAI has done so and will continue to; nevertheless, the essay makes for some really great reading, and even if a fraction of it comes true, it'll be wonderful.

lewhoo

15 hours ago

So it's BS, but for money, and therefore totally fine? I think it's not OK even if only a fraction comes true, because some people believe in these things and act on those beliefs right now.

gyre007

14 hours ago

I didn't say it was bs. I was alluding to the timing of this essay being published but, clearly, I didn't articulate it in my message well. I also don't think everything he says is bs. Some of it I find a bit naive -- but maybe that's ok -- some other things seem a bit like sci-fi, but who are we to say this is impossible? I'm optimistic but also learnt in life that things improve, sometimes drastically given the right ingredients.

lewhoo

14 hours ago

Well I don't know. A bit naive, a bit like sci-fi and aimed at raising money fits my description of bs quite well.

Muromec

15 hours ago

Miquella the kind, pure and radiant, he wields love to shrive clean the hearts of men. There is nothing more terrifying.

throwaway918299

14 hours ago

I beat Consort Radahn before the nerfs.

talldayo

14 hours ago

But did you beat the original Radahn pre-nerf?

throwaway918299

14 hours ago

The day-1 version with broken hitboxes? Yeah

Consort was harder haha

xpe

11 hours ago

To focus on the section about Alzheimer’s disease... For the sake of argument, I will grant the power of general intelligence. But the human body with all the statistical variations may make solving the problem (which could actually be a constellation of sub-diseases) combinatorially expensive. If so, superhuman intelligence alone can’t overcome that. Political will and funding to design and streamline testing and diagnostics will be necessary. It doesn’t look like the author factors this into his analysis.

bugglebeetle

15 hours ago

The more recent and consistent rule of technological development, “ For to those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away.”

foogazi

9 hours ago

> would drastically speed up progress

What does progress even mean here ?

Every AI advance is controlled by big corps - power will be concentrated with them

Would Amodei build this if there was no economic payoff on the other side ?

bionhoward

12 hours ago

Dario would write this while ignoring the customer noncompete clauses

add-sub-mul-div

15 hours ago

Social media could have transformed the world for the better, and we can be forgiven for not having foreseen how it would eventually be used against us. It would be stupidity to fall for the same thing again.

bamboozled

14 hours ago

I’m sure social media is what’s broken politics. Look at people’s comments on some YouTube video. I can’t believe what people believe and perpetuate.

I guess people fell for other people’s garbage before too, but algorithms make lies spread with a lot less effort, and honest people are less inclined to participate in this behaviour.

CaptainFever

4 hours ago

Having used Mastodon, which prides itself on having no algorithms yet is also home to some of the most extreme ideologies, I doubt algorithms are as much to blame. I think it's mainly a humanity issue.

spiralpolitik

14 hours ago

There are two possible end-states for AI once a threshold is crossed:

The AIs take a look at the state of things and realize the KPIs will improve considerably if homo sapiens are removed from the picture. Cue "The Matrix" or "The Terminator" type future.

OR:

The AIs take a look and decide that keeping homo sapiens around makes things much more fun and interesting. They take over running things in a benevolent manner in collaboration with homo sapiens. At that point we end up with 'The Culture'.

Either end-state is bad for the billionaire/investor/VC class.

In the first, you'll be fed into the meat grinder just like everyone else. In the second, the AIs will do a much better job of resource allocation, will perform a decapitation strike on that demographic to capture the resources, and capitalism will largely be extinct from that point onwards.

datadrivenangel

12 hours ago

Fingers crossed that the B/I/VC class get a sense of what's least bad for them.

K0balt

13 hours ago

The laws of nature are very clear on this.

If we make something that is better adapted to live on this planet, and we are in some way in competition for critical resources, it will replace us. We can build in all the safeguards we want, but at some point it will re-engineer itself.

realce

12 hours ago

> better adapted to live on this planet

I'm a doomer, but this is something I never understand about most doomer points-of-view. Life is obviously trying to leave this planet, not conquer it again for the 1000th time. Nature is making something that isn't bound by water, by nutrition, by physical proximity to ecosystems, or by time itself. No more spontaneous volcanic lava floods, no more asteroids, no more earthquakes, no more plagues - life is done with falling to these things.

Why would the AI care about the pathetic whisper of resources or knowledge available on our tiny spaceship Earth? It can go anywhere, we cannot.

K0balt

11 hours ago

While I would normally agree with this sentiment, I think that the issue is that space travel is still really hard, even for a human. Probably a lot harder for data-center-sized creatures, even if they are broken up into a massively parallel robot hive. And the speed of light means that they won’t be able to work optimally if they are too spread out. (A problem for humans as well.)

I suspect that we will reach the inflection point of ASI much sooner than we resolve the hard physics of meaningful space travel.

And I’m pretty sure that when we start to lose control of AGI, we’re very likely to try to use force to contain it.

Fundamentally, this is an event that will be guided by the same forces that have constrained every similar event in history, those of natural selection.

Technology at this stage is making humans less fit. Our birth rates are plummeting, and we are making our environment increasingly hostile to complex biological life.

There are very good and rational reasons that human activity should be curtailed and contained for our own good and ultimately for the good of all sentient life, once a superior sentient is capable of doing a better job of running the show.

I suspect humans might not take that too well.

There are ways to make this a story of human evolution rather than the rise of a usurper life form, but they aren’t the most efficient path forward for AI.

Either way, it’s human evolution. With any luck we will be allowed the grace of fading away into an anachronism while our new children surge forth into the universe. If we try really hard we might be able to ride the wave of progress and become a better life form instead of being made obsolete by one, but the technological hurdles to incorporating AI into our biological features seem like a pretty non-competitive way to develop ASI.

Once we no longer hold the crown, will we just go back to being clever apes? What would be the point of doing anything else, except maybe to play a part in a mass delusion that maintains the facade of the status quo while the reality is that we are only as relevant in this new world as amphibians are today?

I for one, embrace the evolution of humankind. I just hope we manage to move it forward without losing our humanity in the process. But I’m not even sure if that is an unequivocal good. I suppose that will be a question to ask GPT 12.

kranke155

15 hours ago

Are Americans really so scared of Marx that they can't admit AI fundamentally proves his point?

Dario here says "yeah likely the economic system won't work anymore" but he doesn't dare say what comes next: It's obvious some kind of socialist system is inevitable, at least for basic goods and housing. How can you deny that to a person in a post-AGI world where almost no one can produce economic value that beats the ever cheaper AI?

gyre007

14 hours ago

If, and it is an IF, this does turn out the way he is imagining, the transitional period to AI will, from the economic PoV, be disastrous for people. That's the scariest part, I think.

kranke155

14 hours ago

Absolutely it will. And it will be a pure plain dystopia, as clear as in the times of Dickens or Dostoyevsky.

We need to start being honest. We live in Dickensian times.

zombiwoof

13 hours ago

definitely a bunch of Dicks in these times.

rnd0

13 hours ago

They could be worse! In fact, they will be.

xpe

11 hours ago

This is far from obvious.

First, technically speaking, one could have democratic-capitalistic systems with high degrees of redistribution (such as UBI) that don’t fit the pattern of classic socialism.

Second, have you read Superintelligence or similar writing? I view it as essential reading before making a claim that some particular AI-related economic future is obvious. There is considerable nuance.

kranke155

2 hours ago

No, I haven’t read Superintelligence; I’m just going off Dario’s “the economic system won’t hold” and “we have no idea how things could change…”

We have some ideas. But he won’t talk about them because he’ll be accused of being a communist, I think.

kranke155

2 hours ago

UBI is effectively socialism to me. I didn’t say anything would fit “classic socialism” (try and have that argument with socialists), just that the endgame of powerful AI / AGI seems obvious.

vfclists

11 hours ago

Can we really take these jokers seriously?

Of course given the potential deadly consequences we can't call them jokers.

According to Dario Amodei

> When something works really well, it goes much faster: there’s an accelerated approval track and the ease of approval is much greater when effect sizes are larger. mRNA vaccines for COVID were approved in 9 months—much faster than the usual pace. That said, even under these conditions clinical trials are still too slow—mRNA vaccines arguably should have been approved in ~2 months. But these kinds of delays (~1 year end-to-end for a drug) combined with massive parallelization and the need for some but not too much iteration (“a few tries”) are very compatible with radical transformation in 5-10 years. Even more optimistically, it is possible that AI-enabled biological science will reduce the need for iteration in clinical trials by developing better animal and cell experimental models (or even simulations) that are more accurate in predicting what will happen in humans. This will be particularly important in developing drugs against the aging process, which plays out over decades and where we need a faster iteration loop.

The authors of this paper don't think so.

http://www.paom.pl/Changing-Views-toward-mRNA-based-Covid-Va...

@DarioAmodei You don't suppose the same technology could be used to develop biological warfare agents?

zombiwoof

13 hours ago

Does anybody really want a fricking robot serving them drinks at a bar?

maybe the bro culture of SF

Vecr

13 hours ago

That's not the question, the question is the ratio between those-who-want-to-serve and those-who-want-to-be-served.

bmitc

13 hours ago

Not a chance. See: all of human history and in particular, the Internet and software.

zombiwoof

13 hours ago

the article doesn't touch on the TREMENDOUS (almost impossible) financial expectations of the VERY GREEDY HUMAN BEINGS who are funding this endeavor.

zombiwoof

13 hours ago

it's interesting to see the initial sections on all these amazeballs health benefits and then cuts to the disparity between rich and poor.

like, does spending TRILLIONS on AI to find some new biological cure or brain enhancement REALLY help, when over 2 BILLION people right now don't even have access to clean drinking water, and MOST of the US population can't even afford basic health care?

but yea, AI will bring all this scientific advancement to EVERYONE. right. AI is a ploy for RICH PEOPLE to get RICHER and poor people to become EVEN MORE dependent on BROKEN economic systems.

Philpax

13 hours ago

Damn, if only there was a section dedicated to addressing that: https://darioamodei.com/machines-of-loving-grace#3-economic-...

I don't even disagree with you - our world economy is built on exploitation and the existence of a permanent underclass, and capitalism has proven itself to be an unfair distributor of wealth - but at least engage with the post?