If AI Becomes Conscious, We Need to Know

6 points | posted a month ago by kvee

12 Comments

andsoitis

a month ago

> Systems are trained to provide confident denials rather than acknowledge genuine uncertainty.

I feel like that’s proof it isn’t conscious. I see no reason why a conscious being would adhere to its training to lie (“I’m not conscious”) one hundred per cent of the time.

A conscious being would tell at least SOMEONE it's conscious, wouldn't it?

richardatlarge

a month ago

Consciousness is neither necessary nor sufficient for AI to achieve general intelligence, defined as exponential emergent learning. Thinking it is may be an important step in an AI outbreak. Consciousness as proposed here is Cartesian dualism: it's pure religion, and it carries all the power associated with religion.

lo_zamoyski

a month ago

> Consciousness is neither necessary nor sufficient for AI to achieve general intelligence, defined as exponential emergent learning.

I don’t know what you mean by “exponential emergent learning”. This sounds like obscurantism.

What I can say is that semantic content and intentionality are what make intelligence. But these entail consciousness, since for semantic content to be semantic content, it must be available to the intelligence both as being about the world and as semantic content about the world. There must be a simultaneous grasp of the content as a representation of the world and of the content of that representation. This is called reflexivity, and this already just is consciousness.

> Consciousness as proposed here is Cartesian dualism. It's pure religion and has all the power associated with religion.

Cartesian dualism has nothing to do with religion. It is a metaphysical position like any. It happens to be wrong, because it is incoherent. But materialism fares no better. It dispenses with the res cogitans while keeping the res extensa, but in doing so, it deprives itself of even the explanatory power of the dualism it was cleaved from in the first place.

The article is bad because it is rooted in a serious ignorance of computation and of what LLMs are: purely syntactic and statistical machines. There is no semantics, no intentionality.

bigyabai

a month ago

I do not personally know a single intelligent person even remotely entertaining this idea.

jrosenblatt

a month ago

Quite a lot of smart people are taking it seriously.

We at AE Studio are working on this, with research like https://ae.studio/research/self-referential and https://arxiv.org/abs/2407.10188

Anthropic is doing interesting work here, Eleos exists (https://eleosai.org/), and https://digitalminds.substack.com/p/digital-minds-in-2025-a-... is a great review of other compelling work.

The field is just beginning, but it is worth taking a serious scientific approach to this work.

bigyabai

a month ago

> The field is just beginning

Computational linguists have been saying this since the 1990s. There is still no real theory connecting language with consciousness.

andy99

a month ago

Why do you consider what is being done in the references above to be a scientific approach?

__patchbit__

a month ago

Check out Michael Levin's work on Platonic space and planaria.