michaelhoney
7 days ago
people winding themselves up about artificial superintelligence, when it’s organic stupidity that’s actually the problem
look around you. the world is burning. fix that
csense
7 days ago
Consider the following two statements (a) and (b):
- (a) If we don't contain AI, it will go FOOM and kill all the humans. All intelligent life on Earth will be dead forever, except precisely one survivor -- the AI.
- (b) If we don't fix our political system, the world will suffer terrible poverty and tyranny for the next thousand years.
If you truly believe (a) and (b) are both true, there's a good argument to be made that it is better to dedicate your life to fixing problem (a). Even if we're doomed for poverty and tyranny, even if the survivors have to eke out a miserable post-apocalyptic existence for hundreds of years until we again produce enough resources that the light of civilization and freedom returns -- having some hope that eventually mankind will flourish again is better than a complete extermination of humanity with no possibility of coming back.
Personally I'm skeptical (a) is true, but I can't definitively refute it. Some people might say that implies I ought to assign a non-zero probability to (a), which in turn implies I ought to dedicate my life to stopping AI. But that borders on Pascal's Wager-type logic, which I don't accept. (Why Pascal's Wager is irrational is quite an interesting question; I'll leave it as an exercise to the reader.)
xg15
6 days ago
There is also a small but nonzero chance that some asteroid will crash into the planet. Or that the current political madness will escalate into nuclear war. Or that climate change will become runaway and exponential and turn the earth into Venus. All of those also have the potential to end all life on the planet. Why only dedicate your life to AI?
Also, how does "dedicate your life to stopping AI" actually look in practice? All I've seen so far was either:
1) Writing grand pamphlets and sprawling (but entirely abstract) frameworks for how you'll separate the good AI from the bad.
or:
2) Actually being the driving force behind the very AI revolution you're ostensibly trying to stop (while at the same time telling yourself that it cannot possibly be stopped, like in this article). Weirdly enough, the "AI doomer" narrative seems to be most readily shared by exactly the same tech tycoons who are right now busy building massive datacenters, DDoSing the internet for training data, and training ever more gargantuan models.
The latter part is why the whole discourse always seems so "pyromaniac" to me: those people are weirdly obsessed with "defending" against a relatively improbable scenario while at the same time seeming to do all they can to make that scenario a reality.