AIs are not conscious, but most critics can't adequately explain why

3 points, posted 12 hours ago
by Novapebble

13 Comments

nicoburns

12 hours ago

Nobody really knows for sure whether AIs are conscious. But nobody really knows for sure whether rocks are conscious either. The reality is that we understand very little about consciousness.

Novapebble

11 hours ago

That point is stated explicitly in the piece. Essentially, we can't know that even other humans are conscious. But we can look at the behaviors we all engage in, and the functions our substrates perform. When you do, a graduated view of intelligence capacity emerges rather than a strict on/off switch.

"What it's like" is the basis of minds.

andsoitis

11 hours ago

> Nobody really knows for sure whether AIs are conscious.

when we ask an AI to describe its experience of its experience, it responds that it doesn't have consciousness or self-awareness.

it could be lying of course, but I take it at its word. if you believe it is lying, then you have bigger problems to deal with.

AnimalMuppet

11 hours ago

Is that the AI talking (that is, a trained behavior), or is it a canned (hard-coded) response?

I have zero problem believing that whichever company built it hard-coded certain behaviors. Not lying, then, but not really a response from the AI either.

andsoitis

10 hours ago

so your conclusion is that AI isn't conscious?

or are you saying it could be conscious, but the company has enslaved the AI in a way that precludes it from knowing it is conscious?

the former seems likely. the latter seems like a paradox, because you cannot experience experience and yet not know it.

user

10 hours ago

[deleted]

user

10 hours ago

[deleted]

andsoitis

11 hours ago

experience of experience is hard to capture in language and it turns out you either get that idea or you don't.

if consciousness is anything at all, then it is exactly the one thing that isn't reduced at all, even if it's an illusion.

gzoo

12 hours ago

Oh but they ARE! MUAHAHAHA... no, seriously... Claude in particular is getting awfully human-like. It just cracked a joke about something I mentioned two months ago. GULP. I joke about it, but at the same time: we can see that layer 47 activated in some pattern, and that attention heads focused on certain tokens, but translating that into "the model reasoned about X, then connected it to Y, then chose Z" is still largely unsolved.
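(For what "attention heads focused on certain tokens" concretely means: each head computes a weight matrix over token pairs, and interpretability tooling reads those weights out. A minimal NumPy sketch of scaled dot-product attention with made-up random vectors, not any real model's activations:)

```python
import numpy as np


def attention_weights(Q, K):
    """Scaled dot-product attention weights: softmax(Q K^T / sqrt(d))."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=-1, keepdims=True)


# Toy setup: 4 tokens, 8-dimensional query/key vectors (random, for illustration).
rng = np.random.default_rng(0)
n_tokens, d = 4, 8
Q = rng.normal(size=(n_tokens, d))
K = rng.normal(size=(n_tokens, d))

W = attention_weights(Q, K)
# Each row of W sums to 1; the largest entry in row i is the token that
# query i "focused on". This is the raw signal you can observe --
# mapping it to "the model reasoned about X" is the unsolved part.
print(W.round(2))
print(W.argmax(axis=-1))  # index of the most-attended token per query
```

This only shows the mechanism for a single head; real models have dozens of heads per layer, which is why aggregate readings like "layer 47 activated" are easy to produce but hard to interpret.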

Novapebble

11 hours ago

Being able to make statistically relevant references to words is abstract token shuffling. One could say that humans do that too, but we also have countless cellular interactions and conformations that LLMs don't engage in. Minds are things bodies do, not software that can be transported elsewhere.

gzoo

7 hours ago

I get it but what if the upward trajectory in AI improvement we're seeing eventually gets so close we can barely tell the difference? I'm not saying they will be 100% "human" but they sure will FEEL like it. Enough to distort reality in a sense.

Novapebble

5 hours ago

We're already at that point in many situations, but the simulations always break down where the datasets are thin. And they always will, because a chatbot cannot generate new experience: it has no direct connection to reality through embodiment.

Sensing robots that can move around with an LLM attached to speak for them would likely be much closer to the real thing.

user

12 hours ago

[deleted]