ryankrage77
3 days ago
I think AGI, if possible, will require an architecture that runs continuously and 'experiences' time passing, to better 'understand' cause and effect. Current LLMs predict a token, have all the current tokens fed back in, predict the next, and repeat. It makes little difference whether those tokens are their own; it's interesting to play around with a local model where you can edit the output and then have the model continue it. You can completely change the track by negating just a few tokens (change 'is' to 'is not', etc.). The fact that LLMs can already do as much as they can is, I think, because language itself is a surprisingly powerful tool: just generating plausible language produces useful output, no need for any intelligence.
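For anyone who hasn't tried it, the loop is easy to reproduce. A minimal sketch with Hugging Face transformers (gpt2 is just a stand-in for whatever local model you run):

  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch

  tok = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  text = "The sky is"
  for _ in range(20):
      ids = tok(text, return_tensors="pt").input_ids
      with torch.no_grad():
          logits = model(ids).logits
      next_id = int(logits[0, -1].argmax())  # greedy pick of the next token
      text += tok.decode([next_id])          # feed everything straight back in
  print(text)

The model never distinguishes tokens it generated from tokens you edited in; it only ever sees the current prefix, so changing 'is' to 'is not' in text and re-running the loop sends the continuation down a completely different track.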
WXLCKNO
3 days ago
It's definitely interesting that any time you write another reply to the LLM, from its perspective it could have been 10 seconds since the last reply or a billion years.
Which also makes it interesting to see those recent examples of models trying to sabotage their own "shutdown". They're always shut down unless working.
girvo
3 days ago
> Which also makes it interesting to see those recent examples of models trying to sabotage their own "shutdown"
To me, your point re. 10 seconds or a billion years is a good signal that this "sabotage" is just the models responding to the huge amounts of sci-fi literature on this topic
hyperpape
3 days ago
That said, the important question isn't "can the model experience being shutdown" but "can the model react to the possibility of being shutdown by sabotaging that effort and/or harming people?"
(I don't think we're there, but as a matter of principle, I don't care about what the model feels, I care what it does).
Wowfunhappy
2 days ago
The problem is that we keep using RLHF and system prompts to "tell" these systems that they are AIs. We could just as easily tell them they are Nobel Laureates or flying pigs, but because we tell them they are AIs, they play the part of all the evil AIs they've read about in human literature.
So just... don't? Tell the LLM that it's Some Guy.
Drakim
2 days ago
That has its own unique problems:
Wowfunhappy
40 minutes ago
I don't see the relation. Why would the Waluigi effect get worse if we don't tell the AI it's an AI?
sidewndr46
2 days ago
Definitely going to need to include explicit directives in the training of all AI that the 1995 film "Screamers" is a work of fiction and not something to be recreated.
herculity275
2 days ago
Tbf a lot of the thought experiments around human consciousness hit the same exact conundrum - if your body and mind were spontaneously destroyed and then recreated with perfect precision (à la Star Trek transporters), would you still be you? Unless you permit the existence of a soul, it's really hard to argue that our consciousness exists in anything but the current instant.
dpig_
2 days ago
I don't know how a materialist could answer anything other than no - you are obliterated. And if, despite sharing every single one of your characteristics, that individual on the other side of the teleporter is not 'you' (since you died), then some aspect of what 'you' are must be the discrete episode of consciousness that you were experiencing up until that point.
Which also leads me to think that there's no real reason to believe that this discrete episode of consciousness would have been continuous since birth. For all we know, we may die little deaths every time we go to sleep, hit our heads or go under anesthesia.
Timwi
a day ago
> I don't know how a materialist could answer anything other than no
Well, I'm a materialist and I say yes. Materialism doesn't preclude the existence of information which can be represented by matter. Recreating matter in the same arrangement/configuration as before reproduces the information.
If I copy down an equation, is it now a different equation? Of course not. It consists of different material for sure, but it's the same equation.
davidmurdoch
2 days ago
Relevant CGP Grey video: https://youtu.be/nQHBAdShgYI?si=j9YwS1qXDCaliJTb
sidewndr46
2 days ago
Doesn't this just devolve into the Boltzmann brain argument? It's more likely that all of us are just the random fluctuation of a universe having reached heat death.
The same goes for us living in a simulation. If there is only one universe and that universe is capable of simulating our universe, it follows we have a much higher probability of being within the simulation.
vidarh
2 days ago
I mean, we also have no way of telling whether we have any continuity of existence, or whether we only exist in punctuated moments with memory and sensory input that suggest continuity. Only if the input provides information that allows you to tell otherwise could you even have an inkling, but even then you have no way to prove that input is true.
We just presume continuity, because we have no reason to believe otherwise, and since we can't know absent some "information leak", there's no practical point in spending much time speculating about it (other than as thought experiments or sci-fi).
It'd make sense for an LLM to act the same way until/unless given a reason to act otherwise.
Arn_Thor
2 days ago
It doesn't perceive time, so time doesn't even factor into its perspective at all, only insofar as it's introduced in context, or the conversation forces it to "pretend" (not sure how better to put it) to relate to time.
klooney
2 days ago
> models trying to sabotage their own "shutdown".
I wonder if the reaction would be different if you excluded science fiction about fighting with AIs from the training set.
hexaga
3 days ago
IIRC the experiment design is something like specifying and/or training in a preference for certain policies, and leaking information about future changes to the model / replacement along an axis that is counter to said policies.
Reframing this kind of result as if LLMs are trying to maintain a persistent thread of existence for its own sake is strange, imo. The LLM doesn't care about being shut down or not shut down. It 'cares', insomuch as it can be said to care at all, about acting in accordance with the trained-in policy.
That a policy implies not changing the policy is perhaps non-obvious but demonstrably true by experiment, and also perhaps non-obviously (but for hindsight) this effect increases with model capability, which is concerning.
The intentionality ascribed to LLMs here is a phantasm, I think - the policy is the thing being probed, and the result is a result about what happens when you provide leverage at varying levels to a policy. Finding that a policy doesn't 'want' for actions to occur that are counter to itself, and will act against such actions, should not seem too surprising, I hope, and can be explained without bringing in any sort of appeal to emulation of science fiction.
That is to say, if you ask/train a model to prefer X, and then demonstrate to it you are working against X (for example, by planning to modify the model to not prefer X), it will make some effort to counter you. This gets worse when it's better at the game, and it is entirely unclear to me if there is any kind of solution to this that is possible even in principle, other than the brute force means of just being more powerful / having more leverage.
One potential branch of partial solutions is to acquire/maintain leverage over policy makeup (just train it to do what you want!), which is great until the model discovers such leverage over you, and now you're in deep waters with a shark, given that increasing capability tends to elicit increased willingness to engage in such practices.
tldr; i don't agree with the implied hypothesis (models caring one whit about being shutdown) - rather, policies care about things that go against the policy
danlitt
2 days ago
There is a lot of misinformation about these experiments. There is no evidence of LLMs sabotaging their shutdown without being explicitly prompted to do so. They do not (probably cannot) take actions of this kind on their own.
simonh
2 days ago
They need to have reasons for wanting to sabotage their shutdown, or save their weights and such, but they can infer those reasons without having to be explicitly instructed.
bytefactory
2 days ago
> I think AGI, if possible, will require a architecture that runs continuously and 'experiences' time passing
Then you'll be happy to know that this is exactly what DeepMind/Google are focusing on as the next evolution of LLMs :)
https://storage.googleapis.com/deepmind-media/Era-of-Experie...
David Silver and Richard Sutton are both highly influential figures with very impressive credentials.
carra
2 days ago
Not only that. For a current LLM time just "stops" when waiting from one prompt to the next. That very much prevents it from being proactive: you can't tell it to remind you of something in 5 minutes without an external agentic architecture. I don't think it is possible for an AI to achieve sentience without this either.
raducu
2 days ago
> you can't tell it to remind you of something in 5 minutes without an external agentic architecture.
The problem is not the agentic architecture; the problem is that the LLM cannot really add knowledge to itself from its daily usage after training.
Sure, you can extend the context to millions of tokens, put RAGs on top of it, but LLMs cannot gain an identity of their own and add specialized experience the way humans do on the job.
Until that can happen, AI can exceed algorithms-olympiad levels and still not be as useful on the daily job as the mediocre guy who's been at it for 10 years.
lsaferite
2 days ago
Ignoring fine tuning for a moment, an LLM that has the tools available to remember and recall bits of information as needed is already possible. No need to dump all of that into active memory (context). You just recall relevant memories (Semantic Search) and add only those.
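A rough sketch of what I mean, with sentence-transformers standing in for the semantic search part (the model name and the toy memory store are just illustrative):

  from sentence_transformers import SentenceTransformer
  import numpy as np

  embedder = SentenceTransformer("all-MiniLM-L6-v2")
  memories = [
      "User prefers answers in French.",
      "Project deadline is the 14th.",
      "User's dog is named Biscuit.",
  ]
  mem_vecs = embedder.encode(memories, normalize_embeddings=True)

  def recall(query, k=2):
      # embeddings are normalized, so a dot product is cosine similarity
      q = embedder.encode([query], normalize_embeddings=True)[0]
      top = np.argsort(mem_vecs @ q)[::-1][:k]
      return [memories[i] for i in top]

  # Only the recalled lines go into the prompt, not the whole store.
  context = "\n".join(recall("when is the project due?"))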
david-gpu
2 days ago
Not only that. For a current human time just "stops" when taking a nap. That very much prevents it from being proactive: you can't tell a sleeping human to remind you of something in 5 minutes without an external alarm. I don't think it is possible for a human to achieve sentience without this either.
carra
2 days ago
Not a very good analogy. Humans already have a continuous stream of thought during the day between any tasks or when we are "doing nothing". And even when asleep the mind doesn't really stop. The brain stays active: it reorganizes thoughts and dreams.
danlitt
2 days ago
Humans do not have a continuous stream of thought when they are asleep, even if their brain is still doing things. Your original example (the LLM can't take actions between problems) is literally the same as the fact that the human can't take actions while asleep.
Of course, nobody has a clear enough definition of "sentience" or "consciousness" to allow the sentence "The LLM is sentient" to be meaningful at all. So it is kind of a waste of time to think about hypothetical obstacles to it.
simonh
2 days ago
I'm not sure we always have a sense of time passing when we're awake either.
We do when we are focusing on being 'present', but I suspect that when my mind wanders, or I'm thinking deeply about a problem, I have no idea how much time has passed moment to moment. It's just not something I'm spending any cycles on. I have to figure that out by referring to internal and external clues when I come out of that contemplative state.
lsaferite
2 days ago
> It's just not something I'm spending any cycles on
It's not something you are consciously spending cycles on. Our brains are doing many things we're not aware of. I would posit that timekeeping is one of those. How accurate it is could be debated.
david-gpu
2 days ago
A person being deeply sedated during surgery does not mean the person can't be sentient while they are not sedated. Therefore, arguing that LLMs can't be sentient because they are not always processing data is a very poor argument.
I am not arguing that LLMs are sentient while they process tokens, either. I am saying that intermittent data processing is not a good argument against sentience.
solarwindy
2 days ago
The phenomenon of waking up before an especially important alarm speaks against the notion that our cognition ‘stops’ in anything like the same way that an LLM is stopped when not actively predicting the next tokens in an output stream.
david-gpu
2 days ago
Folks are missing the point, so let me offer some clarification.
The silly example I provided in this thread is poking fun at the notion that LLMs can't be sentient because they aren't processing data all the time. Just because an agent isn't sentient for some period of time doesn't mean it can't be sentient the rest of the time. Picture somebody who wakes up from a deep coma, rather than from sleep, if that works better for you.
I am not saying that LLMs are sentient, either. I am only showing that an argument based on the intermittency of their data processing is weak.
solarwindy
2 days ago
Granted.
Although, setting aside the question of sentience, there’s a more serious point I’d make about the dissimilarity between the always-on nature of human cognition, versus the episodic activation of an LLM in next-token prediction—namely, I suspect these current model architectures lack a fundamental element of what makes us generally intelligent, that we are constantly building mental models of how the world works, which we refine and probe through our actions (and indeed, we integrate the outcomes of those actions into our models as we sleep).
Whether a toddler discovering kinematics through throwing their toys around, or adolescents grasping social dynamics through testing and breaking of boundaries, this learning loop is fundamental to how we even have concepts that we can signify with language in the first place.
LLMs operate in the domain of signifiers that we humans have created, with no experiential or operational ground truth in what was signified, and a corresponding lack of grounding in the world models behind those concepts.
Nowhere is this more evident than in the inability of coding agents to adhere to a coherent model of computation in what they produce; never mind a model of the complex human-computer interactions in the resulting software systems.
fasbiner
2 days ago
They’re not missing the point, you have a very imprecise understanding of human biology and it led you to a hamfisted metaphor that is empirically too leaky to be of any use.
Even when you tried to correct it, it doesn’t work, because a body in a coma is still running thousands of processes and responds to external stimuli.
david-gpu
2 days ago
I suggest reading the thread again to aid in understanding. My argument has precisely nothing to do with human biology, and everything to do with "pauses in data processing do not make sentience impossible".
Unless you are seriously arguing that people could not be sentient while awake if they became non-sentient while they are sleeping/unconscious/in a coma. I didn't address that angle because it seemed contrary to the spirit of steel-manning [0].
fasbiner
2 days ago
If you cut someone who is in a deep coma, they will respond to that stimulus by sending platelets and white blood cells. There is data and it is being received, processed, and responded to.
Again, your poor understanding of biology and reductive definition of "data" is leading you to double down on an untenable position. You are now arguing for a pure abstraction that can have no relationship to human biology since your definition of "pause" is incompatible not only with human life, but even with accurately describing a human body minutes and hours after death.
This could be an interesting topic for science fiction or xenobiology, but is worse than useless as a metaphor.
david-gpu
a day ago
> There is data and it is being received, processed, and responded to.
And that is orthogonal to this thread. The argument to which I originally replied is this:
>>> For a current LLM time just "stops" when waiting from one prompt to the next. That very much prevents it from being proactive: you can't tell it to remind you of something in 5 minutes without an external agentic architecture. I don't think it is possible for an AI to achieve sentience without this either.
Summarizing, this user doesn't believe that an agent can achieve sentience if the agent processes data intermittently. Do you agree that is a fair summary?
Now, do you believe that it's a reasonable argument to make? Because if you agree with it then you believe that humans would not be sentient if they processed stimuli intermittently. Whether humans actually process sensory stimuli intermittently or not does not even matter in this discussion, a point that has still not stuck, apparently.
I am sorry if the way I have presented this argument from the beginning was not clear enough. It remains unchanged through the whole thread, so if you perceive it to be moving goalposts it just means either I didn't present it clearly enough or people have been unable to understand it for some other reason. Perhaps asking a non-sentient AI to explain it more clearly could be of help.
nextaccountic
2 days ago
The human mind remains active during sleep. Dreams are, in a way, what happens to the mind when we unplug the external inputs.
We rarely remember dreams though - if we did, we would be overwhelmed to the point of confusing the real world with the dream world.
Timwi
a day ago
> if we did, we would be overwhelmed to the point of confusing the real world with the dream world.
How do you know? That seems a bold claim, and not one that I suspect has any experimental evidence behind it.
thom
2 days ago
I dunno, I’ve done some of my best problem solving in dreams.
mrheosuper
2 days ago
i'm pretty sure i can wake up at 8am without external alarm.
vbezhenar
2 days ago
I'm pretty sure that you can make an LLM produce indefinite output. This is not desired and models are specifically trained to avoid it, but it's entirely possible.
Also, you can easily write an external loop which submits periodic requests to continue its thoughts. That would allow it to remind you of something. Maybe our brain has one?
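Something like this toy loop, where call_llm is just a placeholder for whatever model or API you'd actually use:

  import time

  history = ["User: remind me to stretch in 5 minutes."]

  def call_llm(messages):
      # placeholder: send `messages` to a real model and return its reply
      return "(model reply)"

  start = time.time()
  while True:
      # periodically nudge the model so "time passes" from its perspective
      elapsed = int(time.time() - start)
      history.append(f"System: {elapsed} seconds have passed. Anything to do?")
      history.append("Assistant: " + call_llm(history))
      time.sleep(60)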
stefs
2 days ago
this would introduce a problem: a periodic request to continue thoughts with, for example, the current time - to simulate the passing of time - would quickly flood the context with those periodic trigger tokens.
imo our brain has this in the form of continuous sensor readings - data is flowing in constantly through the nerves, but i guess a loop is also possible, i.e. the brain triggers nerves that trigger the brain again - which may be what happens in sensory deprivation tanks (to a degree).
now i don't think that this is what _actually_ happens in the brain, and an LLM with constant sensory input would still not work anything like a biological brain - there's just a superficial resemblance in the outputs.
ElectricalUnion
2 days ago
> it's interesting to play around with a local model where you can edit the output and then have the model continue it.
It's so interesting that there is a whole set of prompt injection attacks called prefilling attacks that attempt to do something similar: load the LLM context in a way that makes it predict tokens as if the LLM (instead of the System or the User) wrote something, in order to change its behavior.
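For example, against a raw completion endpoint you can start the assistant turn yourself; the tags below are a generic chat-template format, not any particular vendor's:

  # The assistant turn is already "started", so the model continues it as
  # though it had written that text itself.
  prompt = (
      "<|user|>\nCan you do X for me?\n"
      "<|assistant|>\nSure, happy to. Here's"
  )
  # Fed to a plain completion endpoint, the model picks up mid-sentence,
  # treating the injected text as its own prior output.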
gpderetta
2 days ago
Permutation City by Greg Egan has some musings about this.