A slow guide to confronting doom

16 points, posted 7 days ago
by namanyayg

8 Comments

michaelhoney

7 days ago

people winding themselves up about artificial superintelligence, when it’s organic stupidity that’s actually the problem

look around you. the world is burning. fix that

csense

7 days ago

Consider the following two statements (a) and (b):

- (a) If we don't contain AI, it will go FOOM and kill all the humans. All intelligent life on Earth will be dead forever, except precisely one survivor -- the AI.

- (b) If we don't fix our political system, the world will suffer terrible poverty and tyranny for the next thousand years.

If you truly believe (a) and (b) are both true, there's a good argument to be made that it is better to dedicate your life to fixing problem (a). Even if we're doomed for poverty and tyranny, even if the survivors have to eke out a miserable post-apocalyptic existence for hundreds of years until we again produce enough resources that the light of civilization and freedom returns -- having some hope that eventually mankind will flourish again is better than a complete extermination of humanity with no possibility of coming back.

Personally I'm skeptical (a) is true, but I can't definitively refute it. Some people might say that implies I ought to assign a non-zero probability to (a), which in turn implies I ought to dedicate my life to stopping AI. But that borders on Pascal's Wager-type logic, which I don't accept. (Why Pascal's Wager is irrational is quite an interesting question; I'll leave it as an exercise to the reader.)

xg15

6 days ago

There is also a small but nonzero chance that some asteroid will crash into the planet. Or that the current political madness will escalate into nuclear war. Or that climate change will become runaway and exponential and turn the earth into Venus. All of those also have the potential to end all life on the planet. Why only dedicate your life to AI?

Also, what does "dedicating your life to stopping AI" actually look like in practice? All I've seen so far has been either:

1) Writing grand pamphlets and sprawling (but entirely abstract) frameworks for how you'll separate the good AI from the bad.

or:

2) Actually being the driving force behind the AI revolution you're ostensibly trying to stop (while at the same time telling yourselves that it cannot possibly be stopped, like in this article). Weirdly enough, the "AI doomer" narrative seems to be most readily shared by exactly the same tech tycoons who are right now busy building massive datacenters, DDoSing the internet for training data, and training ever more gargantuan models.

The latter part is why the whole discourse always seems so "pyromaniac" to me: these people are weirdly obsessed with "defending" against a relatively improbable scenario and at the same time seem to do all they can to make that scenario a reality.

urbandw311er

7 days ago

What is the doom that the author refers to? It’s very unclear.

whoisthemachine

7 days ago

AGI, I believe, given the prior article the author cited in the beginning.

qgin

7 days ago

I probably have a higher P(doom) than most, but I also feel that it’s important to remember that we don’t even have close to enough information to predict what’s going to happen with any level of certainty. The scenarios I can imagine are not the best, but I also acknowledge that the real future will most likely be a scenario that I can’t possibly imagine from here.

user

7 days ago

[deleted]

xg15

6 days ago

Aaand, full-on "Rationalist" cult territory right ahead. This particular one seems like it's inspired by Longtermism, but I'm not quite sure.

I also don't quite understand the mathematical certainty with which they assert that "everything you know and love will be gone in the next decades". Yes, the world is a scary place right now, with some fundamental changes going on, and a lot of the trajectory seems to be distinctly in the wrong direction. But to see a definitive, unavoidable end of all things in a few decades seems more like a classical doomsday cult to me.

(...ah, OK, and of course the certain doom isn't about trivialities like the reemergence of fascism, impending WWIII or unchecked climate change but about the mathematically inevitable rise of godlike super-AIs that will wipe out humanity. Got it.)

But on the chance of catching an actual Longtermist here, I'd like to ask a question that I've never understood so far:

> One's approach to living, deep down if not at the surface level algorithms, should cash out to trying to accumulate as much value as you can. That doesn't change just because doom is likely.

> We can split the value one pursues into the value one is accruing right now (I like to call this "harvesting") and the value one is preparing to harvest in the future (I call this "sowing").

> Most of the value, or expected value, would likely be in the future (because there's so much of it!) but for two reasons it makes sense to harvest now and not just sow for the future.

... followed by three relatively circumstantial and mostly psychological reasons that feel like they are more "exceptions to the rule" for practical reasons and shouldn't exist in an ideal world.

But just approaching this from a theoretical perspective: if we ignored our messy, imperfect human desires for a moment and assumed we were all perfect AIs or whatever, wouldn't the logical conclusion be that we should "accumulate" all our value in the future (since, in the limit, the present vanishes compared to the future, I guess)? So 100% sowing, 0% harvesting? But the problem is of course that there will always be a future and it will never be the present. So wouldn't this actually lead to a world full of suffering, because no value at all is ever realized, as all of it would be stashed away for an eternal future that by definition can never arrive?
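One way to make that worry concrete (my own back-of-the-envelope sketch, not anything from the article): write v_t for the value actually realized at step t and S_t for the potential that has been sown but not yet harvested. A pure "100% sowing" policy sets v_t = 0 at every step, so

$$\text{total realized value} \;=\; \sum_{t=0}^{\infty} v_t \;=\; 0, \qquad \text{even as } S_t \to \infty.$$

The stock of potential diverges while nothing is ever cashed out, which is exactly the "eternal future that never arrives" problem.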