A_D_E_P_T
5 days ago
You might want to live there, but I wouldn't. Virtually all humans in the books -- and I'm aware of the fact that they're not Earth humans but a wide variety of humanoid aliens -- are kept as pets by the ships, for amusement, basically as clowns. Everything important about the flow of human life is decided by the mighty ship minds; humans are left to nibble at the margins and dance to the tune of their betters. There are a small subset of elites, in organizations like Special Circumstances, that are granted a modicum of independent agency, but even this is rather difficult to justify under the circumstances.
Most of the drama in the books comes to pass when the ship-dominated Culture interacts with a "backwards and benighted," but still vital and expansionist, species.
It's just not a human future. It's a contrived future where humans are ruled by benign Gods. I suppose that for some people this would be a kind of heaven. For others, though...
In a way it's a sort of anti-Romanticism, I guess.
austinl
5 days ago
Banks' work assumes that AI exceeding human capabilities is inevitable, and the series explores how people might find meaning in life when ultimately everything can be done better by machines. For example, the protagonist in Player of Games gets enjoyment from playing board games, despite knowing that AI can win in every circumstance.
For all of the apocalyptic AI sci-fi that's out there, Banks' work stands out as a positive outcome for humanity (if you accept that AI acceleration is inevitable).
But I also think Banks is sympathetic to your viewpoint. For example, Horza, the protagonist in the first novel, Consider Phlebas, is notably anti-Culture. Horza sees the Culture as hedonists who are unable to take anything seriously, whose actions are ultimately meaningless without spiritual motivation. I think these were the questions that Banks was trying to raise.
elihu
5 days ago
I suppose it's interesting that in the Culture, human intelligence and artificial intelligence are consistently kept separate and distinct, even when it becomes possible to perfectly record a person's consciousness and execute it without a body within a virtual environment.
One could imagine Banks could have described Minds whose consciousness was originally derived from a human's, but extended beyond recognition with processing capabilities far in excess of what our biological brains can do. I guess as a story it's more believable that an AI could be what we'd call moral and good if it's explicitly non-human. Giving any human the kind of power and authority that a Mind has sounds like a recipe for disaster.
idiotsecant
5 days ago
https://theculture.fandom.com/wiki/Gzilt
Banks did consider this. The Gzilt were a quite powerful race who had no AI. Instead they emulated groups of biological intelligences on faster hardware, in a sort of group mind type machine.
stoneforger
5 days ago
The Meatfucker acts as a vigilante and is unpopular because of the privacy invasions. The Zetetic Elench splintered off. The Culture's morals were tested in the Idiran war. They might not have greed as a driver because it's unnecessary but they do have freedom of choice so they're not exactly saints.
theptip
5 days ago
Yes, the problem is that from a narrative perspective a story about post-humans would be neither relatable nor comprehensible.
Personally I think the transhumanist evolution is a much more likely positive outcome than “humans stick around and befriend AIs”, of all the potential positive AGI scenarios.
Some sort of Renunciation (Butlerian Jihad, and/or totalitarian ban on genetic engineering) is the other big one, but it seems you’d need a near miss like Skynet or Dune’s timelines to get everybody to sign up to such a drastic Renunciation, and that is probably quite apocalyptic, so maybe doesn’t count as a “positive outcome”.
tialaramex
5 days ago
I don't see why post-humans can't be relatable even though they'd be very distant from our motivations.
Take Greg Egan's "Glory". I don't think we're told the Amalgam citizens in the story are in some sense human descendants but it seems reasonable to presume so. Our motives aren't quite like theirs, I don't think any living human would make those choices, but I have feelings about them anyway.
theptip
4 days ago
I haven’t read that one, will check it out. If we take his “Permutation City”, I think the character Peer is quite unrelatable, and only relatable at all because he's given some human background. A story consisting only of creatures hacking their own reward functions makes motivations more alien than “not quite like ours” IMO.
I assume post-humans will be smarter and unlock new forms of cognition. For example BCI to connect directly to the Internet or other brains seems plausible. So in the same way that a blind person cannot relate to a sighted person on visual art, or an IQ 75 person is unlikely to be able to relate to an IQ 150 person on the elegance of some complex mathematical theorem, I assume there will be equivalent barriers.
But I think the first point around motivation hacking is the crux for me. I would assume post-humans will fundamentally change their desires (indeed I believe that conditional on there being far more technologically advanced post-humans, they almost certainly _must_ have removed much of the ape-mind, lest it force them into conflict with existential stakes.)
akira2501
5 days ago
> AI exceeding human capabilities is inevitable
It can right now. This isn't the problem. The problem is the power budget and efficiency curve. "Self-contained power efficient AI with a long lasting power source" is actually several very difficult and entropy averse problems all rolled into one.
It's almost as if all the evolutionary challenges that make humans what we are will also have to be solved for this future to be remotely realizable. In which case, it's just a new form of species competition, between one species with sexual dimorphism and differentiation and one without. I know what I'd bet on.
adriand
5 days ago
> the series explores how people might find meaning in life when ultimately everything can be done better by machines.
Your comment reminds me of Nick Land's accelerationism theory, summarized here as follows:
> "The most essential point of Land’s philosophy is the identity of capitalism and artificial intelligence: they are one and the same thing apprehended from different temporal vantage points. What we understand as a market based economy is the chaotic adolescence of a future AI superintelligence," writes the author of the analysis. "According to Land, the true protagonist of history is not humanity but the capitalist system of which humans are just components. Cutting humans out of the techno-economic loop entirely will result in massive productivity gains for the system itself." [1]
Personally, I question whether the future holds any particular difference for the qualitative human experience. It seems to me that once a certain degree of material comfort is attained, coupled with basic freedoms of expression/religion/association/etc., then life is just what life is. Having great power or great wealth or great influence or great artistry is really just the same-old, same-old, over and over again. Capitalism already runs my life, is capitalism run by AIs any different?
1: https://latecomermag.com/article/a-brief-history-of-accelera...
Vecr
5 days ago
Or Robin Hanson, a professional economist and kind of a Nick Land lite, who's published more recently. That's where the carbon robots expanding at 1/3rd the speed of light come from.
BriggyDwiggs42
5 days ago
I just want to add that I think you might be missing a component of that optimal life idea. We often neglect to consider that in order to exercise freedom, one must have time in which to choose freely. I’d argue that a great deal of leisure, if not the complete abolition of work, would be a major prerequisite to reaching that optimal life.
johnnyjeans
5 days ago
Banks' Culture isn't capitalist in the slightest. It is however, very humanist.
If you want a vision of the future (multiple futures, at that) which differs from the liberal, humanist conception of man's destiny, Baxter's Xeelee sequence is a great contemporary. Baxter's ability to write a compelling human being is (in my opinion) very poor, but when it comes to hypothesizing about the future, he's a far more interesting author. Without spoilers, it's a series that's often outright disturbing. And it's certainly a very strong indictment of the self-centered narcissism in the post-Enlightenment notion that liberalism is anything but yet another stepping stone on an eternal evolution of human beings. The exceptionally alien circumstances it details undermine the idea of a qualitative human experience entirely.
I think the contemporary focus on economics is itself a facet of modernism that will eventually disappear. Anything remotely involving the domain rarely shows up in Baxter's work. It's really hard to give a shit about it given the monumental scale and metaphysical nature of his writing.
adriand
5 days ago
> I think the contemporary focus on economics is itself a facet of modernism that will eventually disappear. Anything remotely involving the domain rarely shows up in Baxter's work. It's really hard to give a shit about it given the monumental scale and metaphysical nature of his writing.
I’m curious to check it out. But in terms of what I’m trying to say, I’m not making a point about economics, I’m making a point about the human experience. I haven’t read these books, but most sci-fi novels on a grand scale involve very large physical structures, for example. A sphere built around a star to collect all its energy, say. But not mentioned is that there’s Joe, making a sandwich, gazing out at the surface of the sphere, wondering what his entertainment options for the weekend might be.
In other words, I’m not persuaded that we are heading for transcendence. Stories from 3,000 years ago still resonate for us because life is just life. For the same reason, life extension doesn’t really seem that appealing either. 45 years in, I’m thinking that another 45 years is about all I could take.
BriggyDwiggs42
5 days ago
Glad to see someone else who liked those books. I’m only a few in, but so far they’re pretty great.
johnnyjeans
5 days ago
The ending of Ring, particularly having everything contextualized after reading all the way to the end of the Destiny's Children sub-series, remains one of the most strikingly beautiful pieces I've ever seen a Sci-Fi author pull off.
Easily the best "hard" Sci-Fi I've read. Baxter's imagination and grasp of the domains he writes about is phenomenal.
calf
5 days ago
But OP and Horza's viewpoints are the same strawman argument. The sci-fi premise is that superhuman AIs coexist with humans which are essentially ants.
The correct question is, then, what ought to be the best outcome for humans? And a benevolent coexistence where the Culture actually gives humans lots of space and autonomy (contrary to the misinformed view that the Culture takes away human autonomy) is indeed the optimal solution. It is in fact in this setting that humans nevertheless retain their individual humanity instead of taking some transhumanist next step.
jonnypotty
5 days ago
The way I interpret the philosophy of the minds is a bit different.
Some seem to conform to your analysis here, but many seem deeply compassionate toward the human condition. I always felt like part of what Banks was saying was that, no matter the level of intelligence, humanity and morality had some deep truths that were hard to totally transcend. And that a human perspective could be useful and maybe even insightful even in the face of vast unimaginable intelligence. Or maybe that wisdom was accessible to lower life forms than the Minds.
danenania
5 days ago
> And that a human perspective could be useful and maybe even insightful even in the face of vast unimaginable intelligence.
I think something like this is likely in the reality where we have ASI, just because biological brains are so different. Even if AI is vastly beyond humans in raw intelligence, humans will probably still be able to come up with novel insights due to the fundamental architectural differences.
Of course when we start reverse engineering biological brains then this gets fuzzier.
matthewdgreen
5 days ago
How much of this is because it’s a bad future, and how much of this is because in any future with super-powerful artificial intelligences the upside for human achievement is going to be capped? Or to put it differently: would you rather live in the Culture or in one of the alternative societies it explores (some within the Culture itself) where they opt for fewer comforts, but more primitive violence and warfare —- knowing at the end of the day, you’re still never going to have mastery of the universe?
jaggederest
5 days ago
> knowing at the end of the day, you’re still never going to have mastery of the universe?
Why is that assumption implicit? I can imagine a world in which humans and superhuman intelligences work together to achieve great beauty and creativity. The necessity for dominance and superiority is a present day human trait, not one that will necessarily be embedded in whatever comes around as the next order of magnitude. Who is to say that they won't be playful partners in the dance of creation?
jimbokun
5 days ago
That’s like you and your cat collaborating on writing a great novel.
jaggederest
5 days ago
I'm not sure that's too outre. My cats know many things that I do not. I'm working on giving them vocabulary, to boot.
Over/under on the first uplifted-cat-written novel: 500 years.
geysersam
5 days ago
What if the dance of creation mentioned is the everyday life of a cat and his person. A positive example of collaboration across vast differences. A cat's life is probably not as incomprehensible to us as ours is to them, but they are still pretty mysterious. Would we be transparent and uninteresting in the eyes of AIs? Maybe not.
GeoAtreides
5 days ago
No, it's not, cats are not sapient. Sapient-sapient relationships are different than sapient-sentient relationships.
jaggederest
5 days ago
Why are cats not sapient, and for how long will they be non-sapient? What do you think the likelihood that we will uplift cats to sapience is? Is it zero?
Ten thousand years is a long time.
jhbadger
5 days ago
I like David Brin's novels where humans uplift various primates and cetaceans. Do many other authors do that? It seems uncommon as compared to AI-driven futures.
jaggederest
5 days ago
I'm rereading that series lately actually, there have been a few others who riffed on the concept but none as in depth sadly.
The best I've read recently were Adrian Tchaikovsky's books, all quite excellent and fairly centered around uplift.
whimsicalism
4 days ago
the obvious other recommendation is Adrian Tchaikovsky’s books
hoseja
5 days ago
From a Mind's perspective, you are less sapient than a cat is from a human one.
whimsicalism
5 days ago
really? current-day anatomically humans and superhuman AI “working together” in the future seems naïve. what would humans contribute?
geysersam
5 days ago
Who knows. It depends; the Devil is in the details. Is it really unthinkable?
What if future AIs are not omnipotent, but bounded by some to us right now unknown limitations. Just like us, but differently limited. Maybe they appreciate our relative limitlessness just as we do theirs.
whimsicalism
5 days ago
it is unthinkable to me, frankly
geysersam
5 days ago
I'm curious. What assumptions about the nature of the human mind and the nature of future superintelligence lead you to that conclusion?
jaggederest
5 days ago
Why do you assume that what we call humans in the future will be current-day anatomically human? I assume, for example, the ability to run versions of yourself virtually and merge the state vectors at will. Special purpose vehicles designed for certain tasks. Wild genetic, cybernetic, and nanotech experimentation.
I'm talking about fundamentally novel superhuman intelligences working with someone who has spent a few millennia exploring what it means to truly be themselves.
whimsicalism
5 days ago
I am not assuming that, I am just stating the view I believe is indefensible. Yours is fine, by contrast
ben_w
5 days ago
Even just building a silicon duplicate of a human brain, one transistor per synapse and with current technology*, the silicon copy would cognitively outpace the organic original by about the same ratio to which we ourselves outpace continental drift while walking.
* 2017 tech, albeit at great expense because half a quadrillion transistors is expensive to build and to run
jaggederest
5 days ago
Yes, of course, but would they be different in a way that goes beyond "merely faster"? I think the qualitative differences are more interesting than the quantitative ones.
For example, I can easily picture superhuman intelligences that have neither the patience nor interest in the kinds of things that humans are interested in, except in so far as the humans ask politely. A creature like that could create fabulous works of art in the human mode, but would have no desire to do so besides sublimating the desire of the humans around.
jaggederest
5 days ago
Here's an analogy.
saati
5 days ago
It's not the future, the Culture aren't humans.
OgsyedIE
5 days ago
There's a counterargument to this conception of freedom; what are we supposed to compare the settings of Banks' novels to? Looking at the distribution of rights and responsibilities, humans are effectively kept as pets by states today and we just don't ascribe sapience to states.
tomaskafka
5 days ago
The concept is called an egregore, and yes, any “AI alignment” discussion I read blissfully ignores that we have been unable to align either states or corporations with human goals, while both are much dumber egregores than AI.
pavlov
5 days ago
I would argue that today’s states and corporations are much more aligned with human goals than their equivalents from, say, 500 years ago.
I’d much rather have the Federal Republic of Germany and Google than Emperor Charles V and the Inquisition.
Who’s to say that we can’t make similar progress in the next 500 years too?
MichaelZuo
5 days ago
Why does the alignment relative to a prior point matter?
e.g. A small snowball could be nearly perfectly enmeshed with the surrounding snow on top of a steep hill but that doesn’t stop the small snowball from rolling down the hill and becoming a very large snowball in a few seconds, and wrecking some unfortunate passer-by at the bottom.
A few microns of freezing rain may have been the deciding factor so even a 99.9% relative ‘alignment’ between snowball and snowy hill top would still be irrelevant for the unlucky person. Who may have walked by 10000 times prior.
ben_w
5 days ago
"Alignment" can be taken literally, cosine similarity of the vectors <what you want> and <what the system does>.
The more powerful the system is compared to you, the more any small difference is amplified.
AI that's about as powerful as an intern, great, no big deal for us. AI that's capable enough to run a company? If it's morally "good" then great; if not trade unions and strikes are a thing, as are "union busters". AI that's capable enough to run a country? If it's morally "good" then great; if not…
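That literal reading can be sketched in a few lines of Python (my own toy illustration, not from Banks or anyone upthread; the function names and numbers are made up):

```python
import math

def cosine_similarity(want, does):
    """Cosine similarity between <what you want> and <what the system does>."""
    dot = sum(w * d for w, d in zip(want, does))
    norm = math.sqrt(sum(w * w for w in want)) * math.sqrt(sum(d * d for d in does))
    return dot / norm

def misalignment_impact(want, does, power):
    """Toy model: the residual misalignment (1 - cos) scaled by the system's power."""
    return (1.0 - cosine_similarity(want, does)) * power

want = [1.0, 0.0]      # what you want
does = [0.999, 0.045]  # what the system does: very slightly off

# The same small angular difference, amplified as the system gets more powerful:
for power in (1, 1_000, 1_000_000):  # roughly: intern, company, country
    print(f"power={power:>9}: impact ~ {misalignment_impact(want, does, power):.3f}")
```

The point of the toy: the cosine barely differs from 1, but multiplied by a large enough power term, the residual becomes the whole story.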
OgsyedIE
5 days ago
Isn't this perceived alignment a mere instrumental goal carried out in the short-term?
ben_w
5 days ago
I see the difficulty of aligning corporations being mentioned a few times, and I've brought it up also.
Between SLAPP cases, FUD, lobbying, and the way all the harms occur despite being made out of humans, there's already a bunch of non-AI ways for powerful entities that harm us to make it difficult to organise ourselves against the harm.
gary_0
5 days ago
Or corporations: https://www.zylstra.org/blog/2019/06/our-ai-overlords-are-al...
Vecr
5 days ago
Corporations aren't AIs, they aren't as powerful as AIs, and they don't think like AIs. I have mathematical proof: show me a corporation that, as a whole, is both invulnerable to Dutch book attacks and has a fully totally-ordered, VNM-compliant utility function.
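For what it's worth, the Dutch-book half of that claim is easy to make concrete. A toy sketch (my own invented setup and names): an agent with cyclic preferences A > B > C > A will happily pay a small fee for each "upgrade" and end up back where it started, strictly poorer.

```python
def money_pump(trade_up, holding, fee, rounds):
    """Repeatedly offer the agent the item it prefers to its current holding,
    charging `fee` per trade. With cyclic preferences this extracts money
    indefinitely without ever leaving the agent better off."""
    cost = 0.0
    for _ in range(rounds):
        holding = trade_up[holding]  # the agent "prefers" this item, so it accepts
        cost += fee
    return holding, cost

# Cyclic preferences A > B > C > A: from each item, the agent prefers another.
trade_up = {"A": "C", "C": "B", "B": "A"}

final, paid = money_pump(trade_up, "A", fee=1.0, rounds=3)
print(final, paid)  # back to "A", three fees poorer
```

A VNM-compliant agent, by contrast, has a total preference ordering, so no such cycle exists and the pump never starts.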
wnoise
5 days ago
That merely makes them stupid AIs.
Vecr
5 days ago
I guess I failed to understand the point. What I mean is that arguing that AIs can't be a problem (something that I'd like to be true, but probably isn't) because companies already are superhuman does not make sense, for some pretty simple mathematical reasons.
gary_0
5 days ago
The point is a philosophical argument about what constitutes a powerful non-human agent. Nobody is arguing that corporations are literal thinking computers.
> arguing that AIs can't be a problem ... because companies already are superhuman
Quite the opposite, actually: corporations can potentially be very destructive "paperclip optimizers".
kwhitefoot
5 days ago
What makes you think that AIs would be VNM rational?
Vecr
5 days ago
They should either be VNM rational or have surpassed VNM rationality. Anything else is leaving utils on the table (though I suppose that's kind of tautological).
lxe
5 days ago
I think this exact sentiment is explored over and over in the books as the reason people leave the Culture. And why they don't actually have to -- full freedom to do literally anything is given to you as an individual of the Culture. There's effectively no difference in what freedom of personal choice you're afforded whether you're a part of the Culture or whether you leave it.
TeMPOraL
5 days ago
I've seen this sentiment summarized as humans becoming NPCs in their own story.
wpietri
5 days ago
That doesn't seem right to me. The closest I could come is seeing humanity, or perhaps the human species, becoming NPCs in their own story.
But I think individual humans have always been narratively secondary in the story of humanity.
And I think that's fine, because "story" is a fiction we use to manage a big world in the 3 pounds of headmeat we all get. Reducing all of humanity to a single story is really the dehumanizing part, whether it involves AIs or not. We all have our own stories.
marcinzm
5 days ago
Isn't that currently the case except for a very small number of people?
GeoAtreides
5 days ago
You're wrong in saying that everything important about human life is decided by the Minds. The Minds respect, care for, and love their human charges. It's not a lords-and-peasants relationship; it's grown-up children taking care of their elderly parents.
And you can leave. There are always parts of the Culture splitting off or joining back. You can request and get a ship with Star Trek-level AI and go on your merry way.
jiggawatts
5 days ago
The humans are pets. Owners love their pets. The pets can always run away. That doesn’t make them have agency in any meaningful way.
ben_w
3 days ago
Much of Look To Windward was on an Orbital with 50 billion inhabitants, four billion simultaneous mind uploads stored in the hub, the hub Mind described having billions of distinct thoughts in any given second and separately being described as capable of simultaneous conversation with all inhabitants.
A Culture Mind losing a single human(oid) isn't like having a pet run away, it's like losing an eyelash — your peers may comment about the "bald patch" if you lose a lot all at once, but not any single individual one.
Yet at the same time, these Minds are written to care very much indeed: this particular Mind was appalled at having killed the 3492 (of 310 million) who refused to evacuate three other Orbitals that needed to be destroyed in the course of a war.
GeoAtreides
5 days ago
> pets can always run away
> doesn’t make them have agency in any meaningful way
these two sentences can't be true at the same time
jiggawatts
5 days ago
You can be a wage slave and have the theoretical option of quitting.
The humans in the Culture are similarly “free”, in that they’d have to give up their lavish and safe lifestyle for true freedom and self-determination. They choose not to, but they can.
Some pets run away. Most don’t.
mannyv
5 days ago
The vast majority of humanity would disagree with you. And in fact, this is how the vast majority of humans live today.
"humans are left to nibble at the margins and dance to the tune of their betters." Isn't that society today, but without the wealth of Culture society?
marssaxman
4 days ago
Indeed, to the degree that the Culture represents an aspirational fantasy, it is not "what if society were dominated and controlled by vastly powerful machines", because that is the world we already live in, but "wouldn't it be nice if the vastly powerful machines which dominate our lives liked us and cared about our well-being?"
PhasmaFelis
5 days ago
That's no worse than how the large majority of humans live now, under masters far less kind and caring than the Culture Minds. The fact that our masters are humans like us, and I could, theoretically (but not practically), become one of them, doesn't really make it any better.
spense
5 days ago
how we handle ai will dramatically shape our future.
if you consider many of the great post-ai civilizations in sci-fi (matrix, foundation, dune, culture, blade runner, etc.), they're all shaped by the consequences of ai:
- matrix: ai won and enslaved humans.
- foundation: humans won and a totalitarian empire banned ai, leading to the inevitable fall of trantor bc nobody could understand the whole system.
- dune: humans won (butlerian jihad) and ai was banned by the great houses, which led to the rise of mentats.
- culture series: benign ai (minds) run the utopian civilization according to western values.
i'm also a fan of the hyperion cantos where ai and humans found a mutually beneficial balance of power. which future would you prefer?
duskwuff
5 days ago
> i'm also a fan of the hyperion cantos where ai and humans found a mutually beneficial balance of power.
How much of the series did you read? The Fall of Hyperion makes it quite clear that the Core did not actually have humanity's best interests in mind.
snovv_crash
5 days ago
Polity follows in the footsteps of Culture, with a few more shades of gray thrown in.
globular-toast
5 days ago
If I remember correctly, in Foundation they ended up heavily manipulated by a benign AI even if they thought they banned it.
dochtman
5 days ago
Although at the very end that AI gave up control in favor of some kind of shared consciousness approach.
robotomir
5 days ago
There are less than benign godlike entities in that imagined future, for example the Excession and some of the Sublimed. That adds an additional layer to the narrative.
Angostura
5 days ago
I'm not sure 'pets and clowns' really describes the relationship very well. Certainly the AIs find humans fascinating, amusing and exasperating - but I find humans that way too. 'Parental' might be a better description of how most AIs treat humans - apart from the 'unusual' AIs.
wpietri
5 days ago
For sure. Banks writes most of the Minds as quite proud of the Culture as a whole. Of the Minds, of the drones, of the humans. They are up to something together, with a profound sense of responsibility to one another and the common enterprise.
And when they aren't, Banks writes them as going off on their own to do what pleases them. And even those, as with the Gray Area, tend to have a deep sense of respect for their fellow thinking beings, humans included.
And if I recall rightly, Banks paints this as a conscious choice of the Culture and its Minds. There was a bit somewhere about "perfect AIs always sublime", where AIs without instilled values promptly fuck off to whatever's next.
And I think it's those values that are a big part of what Banks was exploring in his work. The Affront especially comes to mind. What does kindness do with cruelty? Or the Empire of Azad creates a similar contrast. What the Culture was up to in both those stories was about something much more rich than a machine's pets.
tivert
5 days ago
> I'm not sure 'pets and clowns' really describes the relationship very well. Certainly the AIs find humans fascinating, amusing and exasperating - but I find humans that way too. 'Parental' might be a better description of how most AIs treat humans - apart from the 'unusual' AIs
I think "pets" does describe the relationship pretty well, and your attempt to refute it just confirms it: pets are "fascinating, amusing and exasperating" and cared for by humans in a kind of pseudo-"parental" relationship. It's not a true parental relationship, because pets will always and forever be inferior. That inferiority means their direct influence on the "society" they live in is pretty much nil; they have no agency and are reduced to being basically an object kept for the owner's own reasons.
That's exactly what's going on in the Culture universe: the Minds keep the humans for their own reasons. The Culture (as depicted) is no longer the story of the humans in it; it's the story of the Minds.
hermitcrab
5 days ago
Freedom is never absolute. We will always be subject to some higher power. Even if it is only physics. The humans in the Culture seem at least as free as we are.
sorokod
5 days ago
Sounds like Bora Horza's argument against the Culture.
Mikhail_K
5 days ago
The author admits to not liking "Consider Phlebas," which is the most original and captivating of the Culture series.
grogenaut
5 days ago
I remember "Consider Phlebas" as "not much happens", "giant train in a cave", "smart nuke". I think the constant switching between unknown viewpoints makes "Consider" and "Weapons" pretty not fun (as well as just everyone in "Weapons" sucking).
I definitely prefer "Player". But everyone gets to enjoy what they enjoy. I'd love to have had more Banks to love or hate as I chose :(
lxe
5 days ago
I loved Consider Phlebas and I find it to be a great way to start the Culture series AND a great standalone space opera. Not sure why it gets the hate. It has everything any other Culture book has: imaginative plot, characters, insane adventures, sans interactions with Minds for the most part.
Mikhail_K
5 days ago
> sans interactions with Minds for the most part.
That's one of the reasons why this book is better than the other "Culture" novels.
EndsOfnversion
5 days ago
Gotta read that one with a copy of The Wasteland, and From Ritual To Romance handy.
The Command System train as lance smashing into the inverted chalice (grail) dome of the station at the end. Death by water. Running round in a ring. Tons of other parallels if you dig/squint.
HelloMcFly
5 days ago
Fun adventure story, really good idea to view the Culture through the eyes of an outsider, but in my view Banks' skill at writing wasn't as well-developed when he wrote CP. Too much "and then this and then this and then this" compared to his other work. Obviously YMMV.
I do think stating CP is the best of the series is also quite definitively a contrarian take.
speed_spread
5 days ago
Consider Phlebas is interesting and funny but is also a disjointed mess compared to later works. It reads like an Indiana Jones movie, it's entertaining but doesn't give that much to reflect upon once you've finished it.
Mikhail_K
5 days ago
If it doesn't give that much to reflect upon, then you didn't read it very carefully.
How about reflecting upon Horza's reasons to side with the Idirans? The later installments of the "Culture" novels are, in comparison, just empty triumphalism: "Rah rah rah, the good guys won and lived happily ever after."
whimsicalism
5 days ago
best way to start an HN flame war
rayiner
5 days ago
It seems like a cop-out. The interesting part of real-world culture is how it reflects a community’s circumstances. For example, herding and pastoral cultures have sharp distinctions with subsistence farming cultures. In real societies, culture is a way to adapt groups of people to the world around them.
If you just have omniscient gods control society, then culture becomes meaningless. There is no reason to explore what cultural adaptations might arise in a spacefaring society.
marssaxman
4 days ago
An ironic statement, given the existence and enduring popularity of the series we are currently discussing, whose premise is just such an exploration of culture!
n4r9
5 days ago
Is it really contrived? It feels to me like an inevitable consequence of sufficiently advanced AI. In that regard the Culture is in some sense the best of all possible futures. Humans may be pets, but they are extremely well cared for pets.
Vecr
5 days ago
It might be worth spending at least 100 more years looking for a better solution. An AI pause till then, good with you?
ekidd
5 days ago
Assuming that we could develop much-smarter-than-human-AI, I would support a pause for exactly that reason: the Culture may be the best-case scenario, and the humans in the Culture are basically pets. And a lot of possible outcomes might be worse than the Culture.
I am deeply baffled by the people who claim (1) we can somehow build something much smarter than us, and (2) this would not pose any worrying risks. That has the same energy as parents who say, "Of course my teenagers will always follow the long list of rules I gave them."
n4r9
5 days ago
I reckon the only way to increase our chances of safe AI is an economic shift away from shareholder capitalism. A pause will do very little in the long-term. Climate change shows that corporations will continue developing in a field in the full knowledge that they risk fatally damaging the planet and all life on it.
twisteriffic
5 days ago
> Everything important about the flow of human life is decided by the mighty ship minds; humans are left to nibble at the margins and dance to the tune of their betters.
Dajeil Gelian spends something like 40 years bending the Sleeper Service to her will in Excession. The Minds' helplessness to override free will is kind of a core theme of Excession, IMO.
griffzhowl
5 days ago
It gets at a profound question which is related to the problem of evil: is it better to make a bad world good (whatever those terms might mean for you) than for the world just to have been good the whole time?
Is it better to have suffering and scarcity because that affords meaning to life in overcoming those challenges?
There's a paradoxical implication: if overcoming adversity is what gives life meaning, then reaching the apparent goal state, in which those problems are finally overcome, robs life of meaning. That would seem to be a big problem.
The hope is maybe that there are levels of achievement or expansions to consciousness which would present meaningful challenges even when the more mundane ones are taken care of.
As far as the Culture's own answer goes, what aspects of agency or meaningful activity that you currently pursue would you be unable to pursue in the Culture?
And as far as possible futures go, if we assume that at some point there will be machines that far surpass human intelligence, we can't hope for much better than that they be benign.
joshjob42
3 days ago
At the limits of what seems likely to be feasible with atomically precise manufacturing and chemical energy scale technology, you arrive at something similar to a somewhat slower Star Trek replicator, able to produce on demand in under an hour more or less any food item(s) and tools/knick-knacks/products anyone anywhere has ever created (likely food would mostly be its own machine). It's a world where disease is functionally eliminated, and it's plausible that barring accidents you might live for thousands of years. Energy is massively available from ultra-efficient solar panels and fusion reactors easily assembled by limited machines from parts printed out of industrial replicators. Used materials get recycled down to smaller components with very high efficiency, and damaged components get recycled down to molecular components and reformed into new ones.
What human agency, in the way that you mean it, exists in such a world that doesn't exist in the Culture? You don't have Minds, but you don't really need them for a world without disease, poverty, or scarcity of basically anything. No one's life would really mean anything, because everyone would have everything they could want. Eventually, we will have mastered all technologies that can be developed. The world will be finished. The only thing left is to wander around the universe in giant ships with all the comforts of home, playing games, etc.
You can get ~99.9% of the world of the Culture without superintelligence. Just fast-forward current normal human development a few hundred years and you get the same world: one where all instrumental value of things ceases to exist, where very little remains to be discovered, and where nothing you do will meaningfully change anything except what happens to you and those who care about you.
Of course even now, arguably, that's the case for virtually everyone.
satori99
5 days ago
> You might want to live there, but I wouldn't. Virtually all humans in the books [...] are kept as pets by the ships, for amusement, basically as clowns.
I got the impression that the Minds are proud of how many humans choose to live in their GSVs or Orbitals when those humans are free to live anywhere, and that they care deeply about humans in general, and often about individuals too.
Also, the Minds are not perfect Gods. They have god-like faculties, but they are deliberately created as flawed, imperfect beings.
One novel (Consider Phlebas?) explained that The Culture can create perfect Minds, but they tend to be born and then instantly sublime away to more interesting dimensions.
Vecr
5 days ago
> One novel explained that The Culture can create perfect Minds, but they tend to be born and then instantly sublime away to more interesting dimensions.
That shouldn't happen. No way would I trust an AI that claims to be super but can't solve pretty basic GOFAI + plausible-reasoning AI alignment. In theory, a 1980s/1990s/old-LessWrong-style AI of a mere few exabytes of immutable code should do exactly what the Mind creating it wants.
impossiblefork
5 days ago
To some degree, the point of the Culture novels is that AI alignment is just wrong: it means imposing things on intelligent beings.
The civilisations in Banks' stories that align their AIs are the bad guys.
Vecr
5 days ago
I guess? That's not really a possible choice (in the logical sense of "possible"), though. "Choosing not to choose" is a choice, and it's total cope. An ASI designing a new AI would either have a good idea of the result or would be doing something hilariously stupid.
I don't think the Minds would be willing to actually not know the result, despite what they probably claim.
impossiblefork
5 days ago
It is actually a choice that we do have.
We could easily build AIs that just model the world, without really trying to make them do things or have particular inclinations. We could approach AI as a very pure thing: just trying to find patterns in the world, without regard to anything else. A purely abstract endeavour, but one which still leads to powerful models.
I personally believe that this is preferable, because I think humans in control of AI is what has the potential to be dangerous.
Vecr
5 days ago
The problem is that some guy 24 years ago figured out an algorithm that attaches to such an AI and makes it take over the world. Maybe it's preferable in the abstract, but there's the temptation of having a money-printing machine right there and not being able to turn it on...
satori99
5 days ago
A Culture Mind would be deeply offended if you called it "an AI" to its avatar's face :P
Vecr
5 days ago
A few exabytes is enough for a very high-quality avatar. Maybe the Minds are funny about it, but the option's there if they want them to stop leaving the universe.
Remember that "a few exabytes" refers to the immutable code. It has way more storage for data, because it's an old-school LessWrong-style AI.
Not like a neural network or an LLM. Sure, we dead-ended on those, but an ASI should be able to write one.
> A Culture Mind would be deeply offended if you called it "An AI" to its avatars face :P
That's how they get you to let them out of the AI box.
EndsOfnversion
5 days ago
That is literally the viewpoint of the protagonist of Consider Phlebas.
Vecr
5 days ago
In "Against the Culture" it's stated that Banks knew what he was doing, and there's other evidence of that too, like the fact that the aliens are called "humans" in the books even though they aren't. As far as I can tell, he knew the implications of how the Minds controlled language and thought.
marcinzm
5 days ago
Is that so different than now for all but a few human elites?
richardw
5 days ago
I'm not sure how it's going to be any different for us. We keep saying we'll be using these tools, without understanding that the tools aren't just tools. When they're smarter than you, you don't use them. And the more you try to enforce control, the more you set up an escape story. There is no similar historical technology.
PhasmaFelis
5 days ago
That's no worse than how the large majority of humans live now, under masters far less caring than the Culture Minds. The fact that our masters are humans like us, and I could, theoretically (but not practically), become one of them, doesn't really make it any better.
ItCouldBeWorse
5 days ago
The alternatives explored themselves in various permutations and mutilations: https://theculture.fandom.com/wiki/Idiran-Culture_War
rasz
5 days ago
>humanoid aliens -- are kept as pets by the ships, for amusement, basically as clowns.
So this is what inspired The Outer Limits: Season 5, Episode 7 'Human Operators' https://theouterlimits.fandom.com/wiki/The_Human_Operators
dyauspitr
5 days ago
That’s not how I see it at all. The humans do whatever they want, with no limits. Requests are made from human to AI, I can’t remember an instance where an AI told a human to do something. In effect, the AI is an extremely intelligent, capable, willing slave to what humans want (a paradigm hard to imagine playing out in reality).
Vecr
5 days ago
I think there's quite a bit of "reverse alignment" going on there, essentially the humans will generally not even ask the AI to do something they'd be unwilling to do, partially accomplished through the control of language and thought.
valicord
5 days ago
Reminds me of the "Silicon Valley" quote: "I don't want to live in a world where someone else makes the world a better place better than we do"
Rzor
5 days ago
You can always leave.
Vecr
5 days ago
Not in any meaningful way. Even if the Culture doesn't intervene (and they intervene quite often), they're insatiable expanders. They can wait you out, then assimilate what's left.
PhasmaFelis
5 days ago
What, exactly, do you want to do that wouldn't be allowed in the Culture?
swayvil
5 days ago
The Minds use humans as tools for exploring the "psychic" part of reality too (Surface Detail? I forget exactly).
There's that insinuation that humans are specialler than godlike machines.
throwaway55340
5 days ago
There always was an undertone of "aww dogs, how could we live without them"
Vecr
5 days ago
Yes, well, even taking that kind of weird stuff seriously, we're not all that far from certainty that it won't work out like that in real life.
For example, why would you want to keep around a creature that can Gödel-attack you, even if you're an ASI? Humans not being wholly material is more incentive to wipe them out, and thus prevent them from causally interacting with you, not less.
swayvil
5 days ago
Kill the guy with the key to your prison cell because he might interfere with your position as king of the cell? Absurd.
On a similar note, you ever read Egan's "Permutation City"?
Vecr
5 days ago
> Kill the guy with the key to your prison cell because he might interfere with your position as king of the cell? Absurd.
I think the basic logic makes sense; it's sort of analogous to the ultimatum game in game theory. I don't know of any good theories of rationality that suggest taking bad deals in the ultimatum game, even if in theory they "get you out" of the universe somehow.
On Permutation City, well, I'm somewhat skeptical of how it was written to work.
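(To make the ultimatum-game point concrete, here's a toy model, nothing from the books or the game-theory literature specifically: if the proposer best-responds to a responder's known commitment, a responder who credibly refuses lowball splits ends up with better offers than one who accepts anything.)

```python
# Toy ultimatum game (illustrative sketch only): a proposer splits a pie
# of 100, and the responder is known to reject any offer below a fixed
# threshold. The proposer best-responds to that commitment.

def proposer_offer(threshold, pie=100):
    """Return the offer a rational proposer makes: the smallest amount
    the responder will accept, keeping the rest for itself."""
    for offer in range(pie + 1):
        if offer >= threshold:  # smallest offer the responder accepts
            return offer
    return 0  # threshold exceeds the pie: no deal is possible

# A responder that accepts anything gets nothing; one that credibly
# commits to refusing bad deals gets its threshold.
assert proposer_offer(0) == 0
assert proposer_offer(40) == 40
```

The point being that "taking any deal" is exactly the policy that guarantees you the worst deal.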
whimsicalism
5 days ago
these are the exact questions he was raising.
i think some version of this future is unfortunately the optimistic outcome, unless we change ourselves into something unrecognizable
ikrenji
5 days ago
it's literally the best possible outcome for humanity.
gerikson
5 days ago
I just re-read Surface Detail where some nobody from a backwards planet convinces a ship Mind to help her assassinate her local Elon Musk. So there's some agency to be found in the margins...
gary_0
5 days ago
It's been a while since I read the books, but I think there were quite a few instances of a human going "can we do [crazy thing]?" and a ship going "fuck it, why not?" The Sleeper Service comes to mind...
xg15
5 days ago
Ouch. I don't know the series, but going purely by his article and your post, I find it interesting how he misunderstood socialism as well: the idea was to make a plan for where to go as a community, then, if necessary, appoint and follow a coordinator to achieve that goal. The idea was not to submit to some kind of dictator, benevolent or not, who tells you which goals you should desire...
cropcirclbureau
5 days ago
The Culture is far more anarchist than socialist; that's its prime aesthetic root. No written laws and everything.
The Minds are far more than benevolent, detached caretakers, and some Mind organizations do take an active role in shaping the society. It just seems there isn't any written ideology or law behind what they want beyond "don't fuck with these pets that I like". Like I said, anarchic.
Vecr
5 days ago
Yes, it has major, major problems.
There's a post here that lists quite a few of the problems:
"Against the Culture" https://archive.is/gv0lG https://www.gleech.org/culture
The main sections I like there are "partial reverse alignment" and "the Culture as a replicator"; either that post or "Why the Culture Wins" talks about what happens when the Culture runs out of moral patients.
"Partial reverse alignment" means brainwashing/language control/constraints on allowed positions in the space of all minds, by the way.
You can think what you want about the Culture, and about more crudely blatant gamer fantasies like the Optimalverse stuff and Yudkowsky's Fun Sequences, but I consider them all near-100% eternal loss conditions. The Culture's a loss condition anyway because there are no actual humans in it, but even if you swapped them in, it's still a horrible end.
Edit: the optimalverse stuff is really only good if you want to be shocked out of the whole glob of related ideas, assuming you don't like the idea of being turned into a brainwashed cartoon pony like creature. Otherwise avoid it.
davedx
5 days ago
The humans are still there, just left to do their own thing, pottering around on Earth doing the odd genocide. (The State of the Art)
Vecr
5 days ago
Yeah I've been told that, but you know what I mean. Unless humans are in charge of your proposed good ending/"win screen", it's not a good ending.
grey-area
5 days ago
So you’re a human supremacist?
If the minds are intelligent beings, why shouldn’t they have parity with humans?
generic92034
5 days ago
Also, if we consider that there _are_ vastly more intelligent and technologically advanced beings in the universe, the way the Culture accepts and treats "human standard" intelligences is pretty much the best possible case.
grey-area
15 hours ago
Yes exactly, all intelligences should be treated with respect IMO, and humans would very quickly realise that if they found themselves confronted with more advanced intelligences.
Human chauvinism (like many other forms of chauvinism) is based on an assumption of superiority.
Vecr
5 days ago
I'm a human supremacist and I don't want to be an Em.
Also, uhh, there are a lot fewer than a trillion Minds (uppercase M, the massive AIs of the Culture). In fun space they're probably blocked out to make the computation feasible (essentially all the minds in a particular fun space are really the same mind, playing the "game" of that fun space).
Also, I don't think they suffer. If they claim to, it's probably a trick (easy AI box escape method).
If you think human suffering is bad, you've got some thinking to do.
HelloMcFly
5 days ago
Wowee, I really do not share this belief at all. Maybe we're the best out there, but I don't think "humans above all" is definitively the way to go without at least understanding some alternatives, given how self-destructive we can be in large numbers.
Vecr
5 days ago
What's the alternative? The space of minds is so large that if I met an alien and thought I liked them I'd do the math and then not believe my initial impression.
Human preferences are so complicated that the bet to make is on humanity itself, and not a substitute.
HelloMcFly
5 days ago
> What's the alternative?
I'd argue the alternatives at this point are literally infinite, are they not?
We're in the realm of speculation, and the idea that the only acceptable ending is "humans are the boss" seems unimaginative to me. Especially so since I know humans can and do (especially at the group level, versus the individual level) demonstrate short-sightedness, cruelty, and at best a reluctance toward conservation. How open would humanity be to recognizing non-humans as having lives of equal value to our own? I wouldn't want to be an alien species meeting a superior-technology humanity, that's for sure.
Vecr
5 days ago
Well, I don't know. I've been making fun of Yudkowsky's positions in this comment section, but I think the official corporate position of the Machine Intelligence Research Institute (MIRI, the institute dedicated to banning Research into Machine Intelligence) is that you defect in such a scenario in the modal case.
As in it is morally correct and rational to defect, not just that they predict it would happen.
rodgerd
5 days ago
> Virtually all humans in the books -- and I'm aware of the fact that they're not Earth humans but a wide variety of humanoid aliens -- are kept as pets by the ships, for amusement, basically as clowns.
So like current late stage capitalism, except the AIs are more interested in our comfort than the billionaires are.