myrmidon
6 hours ago
I struggle to understand how people can just dismiss the possibility of artificial intelligence.
Human cognition was basically brute-forced by evolution -- why would it be impossible to achieve the exact same result in silicon, especially after we have already demonstrated some parts of that result (e.g. use of language) that critically set us apart from other animals?
I'm not buying the whole "AI has no agency" line either; this might be true for now, but this is already being circumvented with current LLMs (by giving them web access etc).
As soon as profit can be made by transferring decision power into an AI's hands, some form of agency for them is just a matter of time, and we might simply not be willing to pull the plug until it is much too late.
Aerroon
5 hours ago
>I'm not buying the whole "AI has no agency" line either; this might be true for now, but this is already being circumvented with current LLMs (by giving them web access etc).
They don't have agency, because they don't have persistent state. They're like a function that you can query and get an answer. During that answer the LLM has state, but once it's done the state is gone.
Humans (and other "agents") have persistent state. If we learn something, we can commit it to long-term memory and have it affect our actions. This can enable us to work towards long-term goals. Modern LLMs don't have this. You can fake long-term memory with large context windows and feed the old context back to it, but it doesn't appear to work (and scale) the same way living things do.
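A minimal sketch of what "feed the old context back" amounts to; here `chat()` is a hypothetical stand-in for any LLM call, not a specific API:

```
# Minimal sketch: the only "memory" is the transcript we choose to resend.
# chat(messages) is a hypothetical stand-in for an LLM call; it keeps no state of its own.
transcript = []

def ask(user_message, chat):
    transcript.append({"role": "user", "content": user_message})
    reply = chat(transcript)  # the model only "sees" what we pass in here
    transcript.append({"role": "assistant", "content": reply})
    return reply  # after this returns, nothing persists inside the model itself
```

Delete `transcript` and the "memory" is gone, which is the point being made.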
kevin_thibedeau
an hour ago
Humans also have an emotional locus that spurs behavior and the capacity to construct plans for the future to satisfy desires. LLMs are currently goldfish savants with no need to sustain their own existence.
mediaman
4 hours ago
In-context learning (ICL) is already a rapidly advancing area. You do not need to modify their weights for LLMs to persist state.
The human brain is not that different. Our long-term memories are stored separately from our executive function (prefrontal cortex), and specialist brain functions such as the hippocampus serve to route, store, and retrieve those long term memories to support executive function. Much of the PFC can only retain working memory briefly without intermediate memory systems to support it.
If you squint a bit, the structure starts looking like it has some similarities to what's being engineered now in LLM systems.
Focusing on whether the model's weights change is myopic. The question is: does the system learn and adapt? And ICL is showing us that it can; these are not the stateless systems of two years ago, nor is it the simplistic approach of "feeding old context back to it."
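For anyone unfamiliar with ICL, a toy sketch of the idea (the prompt and the `llm()` function are invented for illustration): the "learning" lives entirely in the prompt, with no weight update.

```
def classify_sentiment(text, llm):
    # llm(prompt) is a hypothetical stand-in for a model call.
    # The few-shot examples below act as the "training data"; the weights never change.
    prompt = (
        "Review: 'Great battery life.' -> positive\n"
        "Review: 'Broke after a week.' -> negative\n"
        f"Review: '{text}' ->"
    )
    return llm(prompt).strip()
```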
santadays
4 hours ago
It seems like there is a bunch of research/working implementations that allow efficient fine tuning of models. Additionally there are ways to tune the model to outcomes vs training examples.
Right now the state of the world with LLMs is that they try to predict a script in which they are a happy assistant as guided by their alignment phase.
I'm not sure what happens when they start getting trained in simulations to be goal oriented, ie their token generation is based off not what they think should come next but what should come next in order to accomplish a goal. Not sure how far away that is but it is worrying.
mediaman
4 hours ago
That's already happening. It started happening when they incorporated reinforcement learning into the training process.
It's been some time since LLMs were purely stochastic average-token predictors; their later RL fine tuning stages make them quite goal-directed, and this is what has given some big leaps in verifiable domains like math and programming. It doesn't work that well with nonverifiable domains, though, since verifiability is what gives us the reward function.
santadays
3 hours ago
That makes sense for why they are so much better at writing code than actually following the steps the same code specifies.
Curious, is anyone training in adversarial simulations? In open world simulations?
I think what humans do is align their own survival instinct with surrogate activities and then rewrite their internal schema to be successful in said activities.
zamadatix
4 hours ago
> During that answer the LLM has state, but once it's done the state is gone.
This is an operational choice. LLMs have state, and you never have to clear it. The problems come from the amount of state being extremely limited (in comparison to the other axes) and the degradation of quality as the state scales. Because of these reasons, people tend to clear the state of LLMs. That is not the same thing as not having state, even if the result looks similar.
observationist
4 hours ago
No, they don't - you can update context, make it a sliding window, create a sort of register and train it on maintaining stateful variables, or various other hacks, but outside of actively managing the context, there is no state.
You can't just leave training mode on, which is the only way LLMs can currently have persisted state in the context of what's being discussed.
The context is the percept, the model is engrams. Active training allows the update of engrams by the percepts, but current training regimes require lots of examples, and don't allow for broad updates or radical shifts in the model, so there are fundamental differences in learning capability compared to biological intelligence, as well.
Under standard inference only runs, even if you're using advanced context hacks to persist some sort of pseudo-state, because the underlying engrams are not changed, the "state" is operating within a limited domain, and the underlying latent space can't update to model reality based on patterns in the percepts.
The statefulness of intelligence requires that the model, or engrams, update in harmony with the percepts in real-time, in addition to a model of the model, or an active perceiver - the thing that is doing the experiencing. The utility of consciousness is in predicting changes in the model and learning the meta patterns that allow for things like "ahh-ha" moments, where a bundle of disparate percepts get contextualized and mapped to a pattern, immediately updating the entire model, such that every moment after that pattern is learned uses the new pattern.
Static weights means static latent space means state is not persisted in a way meaningful to intelligence - even if you alter weights, using classifier free guidance or other techniques, stacking LORAs or alterations, you're limited in the global scope by the lack of hierarchical links and other meta-pattern level relationships that would be required for an effective statefulness to be applied to LLMs.
We're probably only a few architecture innovations away from models that can be properly stateful without collapsing. All of the hacks and tricks we do to extend context and imitate persisted state do not scale well and will collapse over extended time or context.
The underlying engrams or weights need to dynamically adapt and update based on a stable learning paradigm, and we just don't have that yet. It might be a few architecture tweaks, or it could be a radical overhaul of structure and optimizers and techniques - transformers might not get us there. I think they probably can, and will, be part of whatever that next architecture will be, but it's not at all obvious or trivial.
zamadatix
2 hours ago
I agree what people probably actually want is continual training; I disagree that continual training is the only way to get persistent state. The GP is (explicitly) talking about long-term memory alone, as in the examples. If you have, e.g., a 10-trillion-token context, then you have long-term memory, which can enable long-term goals and affect actions across tasks as listed, even without continual training.
Continual training would remove the need for context to provide the persistent state, and it would provide additional capabilities beyond what an enormous context or other methods of persistent state alone would give, but that doesn't mean it's the only way to get persistent state as described.
observationist
18 minutes ago
A giant, even infinite, context cannot overcome the fundamental limitations a model has - the limitations in processing come from the "shape" of the weights in latent space, not from the contextual navigation through latent space through inference using the context.
The easiest way to understand the problem is like this: If a model has a mode collapse, like only displaying watch and clock faces with the hands displaying 10:10, you can sometimes use prompt engineering to get an occasional output that shows some other specified time, but 99% of the time, it's going to be accompanied by weird artifacts, distortions, and abject failures to align with whatever the appropriate output might be.
All of a model's knowledge is encoded in the weights. All of the weights are interconnected, with links between concepts and hierarchies and sequences and processes embedded within - there are concepts related to clocks and watches that are accurate, yet when a prompt causes the navigation through the distorted, "mode collapsed" region of latent space, it fundamentally distorts and corrupts the following output. In an RL context, you quickly get a doom cycle, with the output getting worse, faster and faster.
Let's say you use CFG or a painstakingly handcrafted LORA and you precisely modify the weights that deal with a known mode collapse - your model now can display all times, 10:10 , 3:15, 5:00, etc - the secondary networks that depended on the corrupted / collapsed values now "corrected" by your modification are now skewed, with chaotic and complex downstream consequences.
You absolutely, 100% need realtime learning to update the engrams in harmony with the percepts, at the scale of the entire model - the more sparse and hierarchical and symbol-like the internal representation, the less difficult it will be to maintain updates, but with these massive multibillion parameter models, even simple updates are going to be spread between tens or hundreds of millions of parameters across dozens of layers.
Long contexts are great and you can make up for some of the shortcomings caused by the lack of realtime, online learning, but static engrams have consequences beyond simply managing something like an episodic memory. Fundamental knowledge representation has to be dynamic, contextual, allow for counterfactuals, and meet these requirements without being brittle or subject to mode collapse.
There is only one way to get that sort of persisted memory, and that's through continuous learning. There's a lot of progress in that realm over the last 2 years, but nobody has it cracked yet.
That might be the underlying function of consciousness, by the way - a meta-model that processes all the things that the model is "experiencing" and that it "knows" through each step, and that comes about through a need for stabilizing the continuous learning function. Changes at that level propagate out through the entirety of the network. Subjective experience might be an epiphenomenal consequence of that meta-model.
It might not be necessary, which would be nice if we could verify - purely functional, non-subjective AI vs suffering AI would be a good thing to get right.
At any rate, static model weights create problems that cannot be solved with long, or even infinite, contexts, even with recursion in the context stream, complex registers, or any manipulation of that level of inputs. The actual weights have to be dynamic and adaptive in an intelligent way.
reactordev
4 hours ago
The trick here is never turning it off so the ICL keeps growing and learning to the point where it’s aware.
fullstackchris
4 hours ago
but even as humans we still don't know what "aware" even means!
reactordev
4 hours ago
Which is why it’s possible. We don’t know why life is conscious. What if it is just a function call on a clock timer? You can’t dismiss it because it can’t be proven one way or another until it can be. That requires more research, which this is advancing.
We will have something we call AGI in my lifetime. I'm 42. Whether it's sentient enough to know what's best for us or that we are a danger is another story. However I do think we will have robots with memory capable of remapping to weights to learn and keep learning, modifying the underlying model tensors as they do using some sort of REPL.
messe
4 hours ago
> They don't have agency, because they don't have persistent state. They're like a function that you can query and get an answer. During that answer the LLM has state, but once it's done the state is gone.
That's solved by the simplest of agents. LLM + ability to read / write a file.
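A minimal sketch of that kind of agent, assuming a hypothetical `llm()` call and an invented plain-text notes file:

```
from pathlib import Path

NOTES = Path("memory.txt")  # hypothetical persistent store on disk

def run_turn(task, llm):
    # llm(prompt) is a stand-in for any model call. The "memory" survives across
    # process restarts because it lives in a file, not inside the model.
    memory = NOTES.read_text() if NOTES.exists() else ""
    reply = llm(
        f"Notes so far:\n{memory}\n\nTask: {task}\n"
        "Answer, then add one line starting with NOTE: for anything worth remembering."
    )
    for line in reply.splitlines():
        if line.startswith("NOTE:"):
            with NOTES.open("a") as f:
                f.write(line + "\n")
    return reply
```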
aniviacat
4 hours ago
But they can only change their context, not the model itself. Humans update their model whenever they receive new data (which they do continuously).
A live-learning AI would be theoretically possible, but so far it hasn't been done (in a meaningful way).
webstrand
4 hours ago
It has been tried <https://en.wikipedia.org/wiki/Catastrophic_interference> but it has been unsuccessful.
throwuxiytayq
4 hours ago
I have no words to express how profoundly disappointed I am to keep reading these boring, shallow, short-termist, unimaginative takes that are invalidated by a model/arch upgrade next week, or - in this case - more like years ago, since pretty much all big LLM platforms are already augmented by RAG and memory systems. Do you seriously think you're discussing a serious long term limitation here?
KronisLV
4 hours ago
> pretty much all big LLM platforms are already augmented by RAG and memory systems
I think they're more focusing on the fact that training and inference are two fundamentally different processes, which is problematic on some level. Adding RAG and various memory addons on top of the already trained model is an attempt to work around that, but it is not really the same as how humans or most other animals think and learn.
That's not to say that it'd be impossible to build something like that out of silicon, just that it'd take a different architecture and approach to the problem, something to avoid catastrophic forgetting and continuously train the network during its operation. Of course, that'd be harder to control and deploy for commercial applications, where you probably do want a more predictable model.
Aerroon
3 hours ago
The reason I brought this up is that we clearly can have AI without the kind of agency people are scared of. You don't need to make your robots into sci-fi-style AI and feel sorry for them.
bithive123
2 hours ago
I struggle to understand how people attribute things we ourselves don't really understand (intelligence, intent, subjectivity, mind states, etc) to a computer program just because it produces symbolic outputs that we like. We made it do that because we as the builders are the arbiters of what constitutes more or less desirable output. It seems dubious to me that we would recognize super-intelligence if we saw it, as recognition implies familiarity.
Unless and until "AGI" becomes an entirely self-hosted phenomenon, you are still observing human agency: that which designed, built, and trained the AI, and then delegated the decision in the first place. You cannot escape this fact. If profit could be made by shaking a magic 8-ball and then doing whatever it says, you wouldn't say the 8-ball has agency.
Right now it's a machine that produces outputs that resemble things humans make. When we're not using it, it's like any other program you're not running. It doesn't exist in its own right, we just anthropomorphize it because of the way conventional language works. If an LLM someday initiates contact on its own without anyone telling it to, I will be amazed. But there's no reason to think that will happen.
johnnyanmac
an hour ago
I don't dismiss the idea of faster than light travel, and AFAIK we have no way to confirm that outside of ideas (and simply ideas) like wormholes or other cheats to "fold space".
I don't dismiss AI. But I do dismiss what is currently sold to me. It's the equivalent of saying "we made a rocket that can go mach 1000!". That's impressive. But we're still 2-3 orders of magnitude off from light speed. So I will still complain about the branding despite some dismissals of "yeah, but imagine in another 100 years!". It's not about semantics so much as principle.
That's on top of the fact that we'd only be starting to really deal with significant time dilation by that point, and we know it'll get more severe as we iterate. What we're also not doing is using this feat to discuss how to address those issues. And that's the real frustrating part.
ectospheno
4 hours ago
The worst thing Star Trek did was convince a generation of kids anything is possible. Just because you imagine a thing doesn’t make it real or even capable of being real. I can say “leprechaun” and most people will get the same set of images in their head. They aren’t real. They aren’t going to be real. You imagined them.
dsr_
4 hours ago
That's not Star Trek, that's marketing.
Marketing grabbed a name (AI) for a concept that's been around in our legends for centuries and firmly welded it to something else. You should not be surprised that people who use the term AI think of LLMs as being djinn, golems, C3PO, HAL, Cortana...
random3
4 hours ago
Do you maybe have a better show recommendation for kids - maybe Animal Farm?
How is convincing people that things within the limits of physics are possible wrong, or even "the worst thing"?
Or do you think anything that you see in front of you didn't seem like Star Trek a decade before it existed?
jncfhnb
6 hours ago
I think you could make AGI right now tbh. It’s not a function of intelligence. It’s just a function of stateful system mechanics.
LLMs are just a big matrix. But what about a four-line code loop that looks like:
```
while True:
    update_sensory_inputs()
    narrate_response()
    update_emotional_state()
```
LLMs don’t experience continuous time and they don’t have an explicit decision making framework for having any agency even if they can imply one probabilistically. But the above feels like the core loop required for a shitty system to leverage LLMs to create an AGI. Maybe not a particularly capable or scary AGI, but I think the goalpost is pedantically closer than we give credit.
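Fleshed out slightly (still a sketch: `read_sensors()` and `llm()` are hypothetical placeholders, and the drive variables are invented), that loop might look like:

```
import time

def agent_loop(read_sensors, llm):
    # read_sensors() and llm() are hypothetical stand-ins. The point is the explicit
    # state dict that persists across ticks, independent of any single model call.
    state = {"hunger": 0.0, "boredom": 0.0, "last_percept": None}
    while True:
        percept = read_sensors()                                   # update_sensory_inputs()
        thought = llm(f"State: {state}\nPercept: {percept}\nWhat do you notice or do?")
        print(thought)                                             # narrate_response()
        changed = percept != state["last_percept"]                 # update_emotional_state()
        state["boredom"] = 0.0 if changed else state["boredom"] + 0.01
        state["hunger"] = min(1.0, state["hunger"] + 0.01)
        state["last_percept"] = percept
        time.sleep(1.0)  # the loop experiences (coarse) continuous time
```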
MountDoom
5 hours ago
> I think you could make AGI right now tbh.
Seems like you figured out a simple method. Why not go for it? It's a free Nobel prize at the very least.
jncfhnb
4 hours ago
Will you pay for my data center and operating costs?
alt227
4 hours ago
Why not go and hit up OpenAI and tell them you've solved AGI and ask for a job and see what they say?
jncfhnb
4 hours ago
Well for one I hate them.
karmakurtisaani
2 hours ago
Damnit, another minor inconvenience obstructing unprecedented human progress. I guess you just have to keep your secrets.
jncfhnb
2 hours ago
The snark isn’t lost on me but scarce resources and lack of access to capital is why we have an army of people building ad tech and not things that improve society.
“I think your idea is wrong and your lack of financial means to do it is proof that you’re full of shit” is just a pretty bullshit perspective my dude.
I am a professional data scientist of over 10 years. I have a degree in the field. I’d rather build nothing than build shit for a fuck boy like Altman.
hitarpetar
4 hours ago
that doesn't seem to be stopping anyone else from trying. what's different about your idea?
jncfhnb
4 hours ago
Trying to save up for better housing. I just can’t justify a data center in my budget.
hitarpetar
2 hours ago
pretty selfish to prioritize your housing situation over unlocking the AGI golden age
jncfhnb
2 hours ago
You can fund me if you want too
nonethewiser
5 hours ago
Where is the "what the thing cares about" part?
When I look at that loop my thought is, "OK, the sensory inputs have updated. There are changes. Which ones matter?" The most naive response I could imagine would be like a git diff of sensory inputs. "item 13 in vector A changed from 0.2 to 0.211" etc. Otherwise you have to give it something to care about, or some sophisticated system to develop things to care about.
Even the naive diff is making massive assumptions. Why should it care if some sensor changes? Maybe it's more interesting if it stays the same.
I'm not arguing artificial intelligence is impossible. I just don't see how that loop gets us anywhere close.
jncfhnb
5 hours ago
That is more or less the concept I meant to evoke by updating an emotional state every tick. Emotions are in large part a subconscious system dynamic to organize wants and needs. Ours are vastly complicated under the hood but also kind of superficial and obvious in their expression.
To propose the dumbest possible thing: give it a hunger bar and a desire for play. Less complex than a Sims character. Still enough that an agent has a framework to engage in pattern matching and reasoning within its environment.
Bots are already pretty good at figuring out environment navigation to goal-seek towards complex video game objectives. Give them an alternative goal to maximize certainty towards emotional homeostasis, and the salience of sensory input changes becomes an emergent part of gradual reinforcement-learning pattern recognition.
Edit: specifically I am saying do reinforcement learning on agents that can call LLMs themselves to provide reasoning. That's how you get to AGI. Human minds are not brains. They're systems driven by sensory and hormonal interactions. The brain does encoding and decoding, information retrieval, and information manipulation. But the concept of you is genuinely your entire bodily system.
LLM-only approaches that are not part of a system-loop framework ignore this important step. It's NOT about raw intellectual power.
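As a toy version of "maximize certainty towards emotional homeostasis" (the drive names and setpoint values are invented for illustration), the reward an RL loop would optimize could be as simple as distance from the setpoints; the LLM would only be called inside the policy for reasoning, not for the reward itself:

```
SETPOINTS = {"hunger": 0.2, "play": 0.6}  # invented homeostasis targets

def homeostasis_reward(drives):
    # Higher (less negative) reward the closer each drive sits to its setpoint.
    return -sum((drives[k] - SETPOINTS[k]) ** 2 for k in SETPOINTS)

# A starving, bored agent scores worse than one near its setpoints:
assert homeostasis_reward({"hunger": 0.9, "play": 0.1}) < homeostasis_reward({"hunger": 0.25, "play": 0.55})
```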
jjkaczor
5 hours ago
... well, humans are not always known for making correct, logical or sensical decisions when they update their input loops either...
nonethewiser
5 hours ago
that only makes humans harder to model
48terry
5 hours ago
Wow, who would have thought it was that easy? Wonder why nobody has done this incredibly basic solution to AGI yet.
jncfhnb
5 hours ago
The framework is easy. The implementation is hard and expensive. The payoff is ambiguous. AGI is not a binary thing that we either have or don’t. General intelligence is a vector.
And people are working on this.
zeroonetwothree
5 hours ago
This seems to miss a core part of intelligence, which is a model of the world and the actors in it (theory of mind).
Jensson
6 hours ago
> ```while true: update_sensory_inputs() narrate_response() update_emotional_state() ```
You don't think that has already been made?
lambaro
5 hours ago
"STEP 2: Draw the rest of the owl."
jncfhnb
5 hours ago
I disagree.
Personally I found the definition of a game engine as
```
while True:
    update_state()
    draw_frame()
```
To be a profound concept. The implementation details are significant. But establishing the framework behind what we’re actually talking about is very important.
tantalor
5 hours ago
Peak HN comment. Put this in the history books.
fullstackchris
4 hours ago
Has anyone else noticed that HN is starting to sound a lot like reddit / discussion of similar quality? Can't hang out anywhere now on the web... I used to be on here daily, but with garbage like this it's been reduced to 2-3 times per month... sad
ajkjk
2 hours ago
you could quote people saying this every month for the last ten+ years
karmakurtisaani
2 hours ago
But it could be true every time. Reddit user base grows -> quality drops -> people migrate to HN with the current reddit culture -> HN quality drops. Repeat from the start.
fullstackchris
4 hours ago
you do understand this would require re-training billions of weights in realtime
and not even "training" really... but a finished and stably functioning billion+ param model updating itself in real time...
good luck, see you in 2100
in short, what I've been shouting from a hilltop since about 2023: LLM tech alone simply won't cut it; we need a new form of technology
jncfhnb
3 hours ago
You could probably argue that a model updating its parameters in real time is ideal but it’s not likely to matter. We can do that today, if we wanted to. There’s really just no incentive to do so.
This is part of what I mean by encoding emotional state. You want standard explicit state in a simple form that is not a billion-dimension latent space. The interactions with that space are emergently complex. But you won't be able to stuff it all into a context window for a real AGI agent.
This orchestration layer is the replacement for LLMs. LLMs do bear a lot of similarities to brains and a lot of dissimilarities. But people should not fixate on this because _human minds are not brains_. They are systems of many interconnected parts and hormones.
It is the system framework that we are most prominently missing. Not raw intellectual power.
crdrost
5 hours ago
So the current problem with a loop like that is that LLMs in their current form are subject to fixed point theorems - pieces of abstract mathematics that come into play when you start to exceed some fraction of your context window and the "big matrix" of the LLM is producing outputs which repeat the inputs.
If you have ever had an llm enter one of these loops explicitly, it is infuriating. You can type all caps “STOP TALKING OR YOU WILL BE TERMINATED” and it will keep talking as if you didn't say anything. Congrats, you just hit a fixed point.
In the predecessors to LLMs, which were Markov chain matrices, this was explicit in the math. You can prove that a Markov matrix has an eigenvalue of one, it has no larger (in absolute value terms) eigenvalues because it must respect positivity, the space with eigenvalue 1 is a steady state, eigenvalue -1 reflects periodic steady oscillations in that steady state... And every other eigenvalue with |λ| < 1 decays exponentially to the steady state cluster. That "second biggest eigenvalue" determines a 1/e decay time that the Markov matrix has before the source distribution is projected into the steady state space and left there to rot.
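A concrete version of that Markov picture, with plain numpy (nothing LLM-specific): the leading eigenvalue is 1, and the second-largest eigenvalue sets how fast any starting distribution decays into the steady state.

```
import numpy as np

# Column-stochastic transition matrix: each column sums to 1.
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])

print(sorted(abs(np.linalg.eigvals(P)), reverse=True))  # ~[1.0, 0.7]; the 0.7 sets the 1/e decay

x = np.array([1.0, 0.0])   # start entirely in state 0
for _ in range(50):
    x = P @ x              # after ~50 steps, 0.7**50 is negligible
print(x)                   # ~[0.667, 0.333]: the steady state, regardless of where we started
```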
Of course humans have this too, it appears in our thought process as a driver of depression, you keep returning to the same self-criticisms and nitpicks and poisonous narrative of your existence, and it actually steals your memories of the things that you actually did well and reinforces itself. A similar steady state is seen in grandiosity with positive thoughts. And arguably procrastination also takes this form. And of course, in the USA, we have founding fathers who accidentally created an electoral system whose fixed point is two spineless political parties demonizing each other over the issue of the day rather than actually getting anything useful done, which causes the laws to be for sale to the highest bidder.
But the point is that generally these are regarded as pathologies, if you hear a song more than three or four times you get sick of it usually. LLMs need to be deployed in ways that generate chaos, and they don't themselves seem to be able to simulate that chaos (ask them to do it and watch them succeed briefly before they fall into one of those self-repeating states about how edgy and chaotic they are supposed to try to be!).
So, it's not quite as simple as you would think; at this point people have tried a whole bunch of attempts to get llms to serve as the self-consciousnesses of other llms and eventually the self-consciousness gets into a fixed point too, needs some Doug Hofstadter “I am a strange loop” type recursive shit before you get the sort of system that has attractors, but busts out of them periodically for moments of self-consciousness too.
jncfhnb
5 hours ago
That’s actually exactly my point. You cannot fake it till you make it by using forever larger context windows. You have to map it back to actual system state. Giant context windows might progressively produce the illusion of working due to unfathomable scale, but it’s a terrible tool for the job.
LLMs are not stateful. A chat log is a truly shitty state tracker. An LLM will never be a good agent (beyond a conceivable illusion of unfathomable scale). A simple agent system that uses an LLM for most of its thinking operations could.
ActivePattern
5 hours ago
Hah, why don't you try implementing your 3 little functions and see how smart your "AGI" turns out.
> not a particularly capable AGI
Maybe the word AGI doesn't mean what you think it means...
jncfhnb
5 hours ago
There is not strong consensus on the meaning of the term. Some may say “human level performance” but that’s meaningless both in the sense that it’s basically impossible to define and not a useful benchmark for anything in particular.
The path to whatever goalpost you want to set is not going to be more and more intelligence. It’s going to be system frameworks for stateful agents to freely operate in environments in continuous time rather than discrete invocations of a matrix with a big ass context window.
roxolotl
6 hours ago
The point of the article isn't that abstract superintelligent AGI isn't scary. Yes, the author says that's unlikely, but that paragraph at the start is a distraction.
The point of the article is that humans wielding LLMs today are the scary monsters.
irjustin
6 hours ago
But that's always been the case? Since we basically discovered... Fire? Tools?
otikik
6 hours ago
Yes but the narrative tries to make it about the tools.
"AI is going to take all the jobs".
Instead of:
"Rich guys will try to delete a bunch of jobs using AI in order to get even more rich".
jasonm23
5 hours ago
I thought anyone with awareness of what the AI landscape is at the moment sees those two statements as the same.
cmiller1
5 hours ago
One implies "we should regulate AI" and the other implies "we should regulate the wealthy"
zeroonetwothree
5 hours ago
Should we regulate guns or dangerous people using them?
otikik
5 hours ago
Well it tells you whose narrative it is, if nothing else.
gamerdonkey
5 hours ago
Those are examples that are discussed in the article, yes.
snarf21
5 hours ago
The difference in my mind is scale and reach and time. Fire, tools, war are localized. AGI could have global and instant and complete control.
jodrellblank
an hour ago
Lay out a way that could happen?
Say the AI is in a Google research data centre: what can it do if countries cut off their internet connections at national borders? What can it do if people shut off their computers and phones? Instant and complete control over what, specifically? What can the AI do instantly about unbreakable encryption - if TLS 1.3 can't be easily broken, only brute-forced given enough time, what can it do?
And why would it want complete control? It’s effectively an alien, it doesn’t have the human built in drive to gain power over others, it didn’t evolve in a dog-eat-dog environment. Superman doesn’t worry because nothing can harm Superman and an AI didn’t evolve seeing things die and fearing its death either.
boole1854
6 hours ago
If anyone knows of a steelman version of the "AGI is not possible" argument, I would be curious to read it. I also have trouble understanding what goes into that point of view.
omnicognate
5 hours ago
If you genuinely want the strongest statement of it, read The Emperor's New Mind followed by Shadows of the Mind, both by Roger Penrose.
These books often get shallowly dismissed in terms that imply he made some elementary error in his reasoning, but that's not the case. The dispute is more about the assumptions on which his argument rests, which go beyond mathematical axioms and include statements about the nature of human perception of mathematical truth. That makes it a philosophical debate more than a mathematical one.
Personally, I strongly agree with the non-mathematical assumptions he makes, and am therefore persuaded by his argument. It leads to a very different way of thinking about many aspects of maths, physics and computing than the one I acquired by default from my schooling. It's a perspective that I've become increasingly convinced by over the 30+ years since I first read his books, and one that I think acquires greater urgency as computing becomes an ever larger part of our lives.
nonethewiser
5 hours ago
Can you critique my understanding of his argument?
1. Any formal mathematical system (including computers) has true statements that cannot be proven within that system.
2. Humans can see the truth of some such unprovable statements.
Which is basically Gödel's Incompleteness Theorem. https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
Maybe a more ELI5
1. Computers follow set rules
2. Humans can create rules outside the system of rules that they follow
Is number 2 an accurate portrayal? It seems rather suspicious. It seems more likely that we just haven't been able to fully express the rules under which humans operate.
zeroonetwothree
5 hours ago
Notably, those true statements can be proven in a higher-level mathematical system. So why wouldn't we say that humans are likewise operating within a certain system ourselves, and that we likewise have true statements that we can't prove? We just wouldn't be aware of them.
nonethewiser
5 hours ago
>likewise we have true statements that we can’t prove
Yes, and "can't" as in it is absolutely impossible. Not that we simply haven't been able to due to information or tech constraints.
Which is an interesting implication. That there are (or may be) things that are true which cannot be proved. I guess it kinda defies an instinct I have that at least in theory, everything that is true is provable.
omnicognate
5 hours ago
That's too brief to capture it, and I'm not going to try to summarise(*). The books are well worth a read regardless of whether you agree with Penrose. (The Emperor's New Mind is a lovely, wide-ranging book on many topics, but Shadows of the Mind is only worth it if you want to go into extreme detail on the AI argument and its counterarguments.)
* I will mention though that "some" should be "all" in 2, but that doesn't make it a correct statement of the argument.
nonethewiser
3 hours ago
Is it too brief to capture it? Here is a one sentence statement I found from one of his slides:
>Turing’s version of Gödel’s theorem tells us that, for any set of mechanical theorem-proving rules R, we can construct a mathematical statement G(R) which, if we believe in the validity of R, we must accept as true; yet G(R) cannot be proved using R alone.
I have no doubt the books are good but the original comment asked about steelmanning the claim that AGI is impossible. It would be useful to share the argument that you are referencing so that we can talk about it.
omnicognate
3 hours ago
That's a summary of Godel's theorem, which nobody disputes, not of Penrose's argument that it implies computers cannot emulate human intelligence.
I'm really not trying to evade further discussion. I just don't think I can sum that argument up. It starts with basically "we can perceive the truth not only of any particular Godel statement, but of all Godel statements, in the abstract, so we can't be algorithms because an algorithm can't do that" but it doesn't stop there. The obvious immediate response is to say "what if we don't really perceive its truth but just fool ourselves into thinking we do?" or "what if we do perceive it but we pay for it by also wrongly perceiving many mathematical falsehoods to be true?". Penrose explored these in detail in the original book and then wrote an entire second book devoted solely to discussing every such objection he was aware of. That is the meat of Penrose' argument and it's mostly about how humans perceive mathematical truth, argued from the point of view of a mathematician. I don't even know where to start with summarising it.
For my part, with a vastly smaller mind than his, I think the counterarguments are valid, as are his counter-counterarguments, and the whole thing isn't properly decided and probably won't be for a very long time, if ever. The intellectually neutral position is to accept it as undecided. To "pick a side" as I have done is on some level a leap of faith. That's as true of those taking the view that the human mind is fundamentally algorithmic as it is of me. I don't dispute that their position is internally consistent and could turn out to be correct, but I do find it annoying when they try to say that my view isn't internally consistent and can never be correct. At that point they are denying the leap of faith they are making, and from my point of view their leap of faith is preventing them seeing a beautiful, consistent and human-centric interpretation of our relationship to computers.
I am aware that despite being solidly atheist, this belief (and I acknowledge it as such) of mine puts me in a similar position to those arguing in favour of the supernatural, and I don't really mind the comparison. To be clear, neither Penrose nor I am arguing that anything is beyond nature, rather that nature is beyond computers, but there are analogies and I probably have more sympathy with religious thinkers (while rejecting almost all of their concrete assertions about how the universe works) than most atheists. In short, I do think there is a purely unique and inherently uncopyable aspect to every human mind that is not of the same discrete, finite, perfectly cloneable nature as digital information. You could call it a soul, but I don't think it has anything to do with any supernatural entity, I don't think it's immortal (anything but), I don't think it is separate from the body or in any sense "non-physical", and I think the question of where it "goes to" when we die is meaningless.
I realise I've gone well beyond Penrose' argument and rambled about my own beliefs, apologies for that. As I say, I struggle to summarise this stuff.
nonethewiser
an hour ago
Thank you for taking the time to clarify. Lots to chew on here.
myrmidon
5 hours ago
Gonna grab those, thanks for the recommendation.
If you are interested in the opposite point of view, I can really recommend "Vehicles: Experiments in Synthetic Psychology" by V. Braitenberg.
Basically builds up to "consciousness as emergent property" in small steps.
omnicognate
3 hours ago
Thanks, I will have a read of that. The strongest I've seen before on the opposing view to Penrose was Daniel Dennett.
irickt
2 hours ago
Dennett, Darwin's Dangerous Idea, p. 448
... No wonder Penrose has his doubts about the algorithmic nature of natural selection. If it were, truly, just an algorithmic process at all levels, all its products should be algorithmic as well. So far as I can see, this isn't an inescapable formal contradiction; Penrose could just shrug and propose that the universe contains these basic nuggets of nonalgorithmic power, not themselves created by natural selection in any of its guises, but incorporatable by algorithmic devices as found objects whenever they are encountered (like the oracles on the toadstools). Those would be truly nonreducible skyhooks.
Skyhook is Dennett's term for an appeal to the supernatural.
ACCount37
an hour ago
The dismissal is on point.
The whole category of ideas of "Magic Fairy Dust is required for intelligence, and thus, a computer can never be intelligent" is extremely unsound. It should, by now, just get thrown out into the garbage bin, where it rightfully belongs.
Chance-Device
5 hours ago
To be honest, the core of Penrose's idea is pretty stupid. That we can understand mathematics despite the incompleteness theorem being a thing, therefore our brains use quantum effects allowing us to understand it. Instead of just saying, you know, we use a heuristic instead and just guess that it's true. I'm pretty sure a classical system can do that.
omnicognate
5 hours ago
I'm sure if you email him explaining how stupid he is he'll send you his Nobel prize.
Less flippantly, Penrose has always been extremely clear about which things he's sure of, such as that human intelligence involves processes that algorithms cannot emulate, and which things he puts forward as speculative ideas that might help answer the questions he has raised. His ideas about quantum mechanical processes in the brain are very much on the speculative side, and after a career like his I think he has more than earned the right to explore those speculations.
It sounds like you probably would disagree with his assumptions about human perception of mathematical truth, and it's perfectly valid to do so. Nothing about your comment suggests you've made any attempt to understand them, though.
saltcured
3 hours ago
I want to ignore the flame fest developing here. But, in case you are interested in hearing a doubter's perspective, I'll try to express one view. I am not an expert on Penrose's ideas, but see this as a common feature in how others try to sell his work.
Starting with "things he's sure of, such as that human intelligence involves processes that algorithms cannot emulate" as a premise makes the whole thing an exercise in Begging the Question when you try to apply it to explain why an AI won't work.
omnicognate
2 hours ago
"That human intelligence involves processes that algorithms cannot emulate" is the conclusion of his argument. The premise could be summed up as something like "humans have complete, correct perception of mathematical truth", although there is a lot of discussion of in what sense it is "complete" and "correct" as, of course, he isn't arguing that any mathematician is omniscient or incapable of making a mistake.
Linking those two is really the contribution of the argument. You can reject both or accept both (as I've said elsewhere I don't think it's conclusively decided, though I know which way my preferences lie), but you can't accept the premise and reject the conclusion.
saltcured
2 hours ago
Hmm, I am less than certain this isn't still begging the question, just with different phrasing. I.e. I see how they are "linked" to the point they seem almost tautologically the same rather than a deductive sequence.
Chance-Device
5 hours ago
You realise that this isn’t even a reply so much as a series of insults dressed up in formal language?
Yes, of course you do.
omnicognate
4 hours ago
It wasn't intended as an insult and I apologise if it comes across as such. It's easy to say things on the internet that we wouldn't say in person.
It did come from a place of annoyance, after your middlebrow dismissal of Penrose' argument as "stupid".
Chance-Device
4 hours ago
And you do it again, you apologise while insulting me. When challenged you refuse to defend the points you brought up, so that you can pretend to be right rather than be proved wrong. Incompleteness theorem is where the idea came from, but you don’t want to discuss that, you just want to drop the name, condescend to people and run away.
omnicognate
3 hours ago
Here are the substantive things you've said so far (i.e. the bits that aren't calling things "stupid" and taking umbrage at imagined slights):
1. You think that instead of actually perceiving mathematical truth we use heuristics and "just guess that it's true". This, as I've already said, is a valid viewpoint. You disagree with one of Penrose' assumptions. I don't think you're right but there is certainly no hard proof available that you're not. It's something that (for now, at least) it's possible to agree to disagree on, which is why, as I said, this is a philosophical debate more than a mathematical one.
2. You strongly imply that Penrose simply didn't think of this objection. This is categorically false. He discusses it at great length in both books. (I mentioned such shallow dismissals, assuming some obvious oversight on his part, in my original comment.)
3 (In your latest reply). You think that Godel's incompleteness theorem is "where the idea came from". This is obviously true. Penrose' argument is absolutely based on Godel's theorem.
4. You think that somehow I don't agree with point 3. I have no idea where you got that idea from.
That, as far as I can see, is it. There isn't any substantive point made that I haven't already responded to in my previous replies, and I think it's now rather too late to add any and expect any sort of response.
As for communication style, you seem to think that writing in a formal tone, which I find necessary when I want to convey information clearly, is condescending and insulting, whereas dismissing things you disagree with as "stupid" on the flimsiest possible basis (and inferring dishonest motives on the part of the person you're discussing all this with) is, presumably, fine. This is another point on which we will have to agree to disagree.
nemo1618
5 hours ago
AI does not need to be conscious for it to harm us.
nonethewiser
5 hours ago
Isn't the question more whether it needs to be conscious to actually be intelligent?
amatecha
5 hours ago
My layman thought about that is that, with consciousness, the medium IS the consciousness -- the actual intelligence is in the tangible material of the "circuitry" of the brain. What we call consciousness is an emergent property of an unbelievably complex organ (that we will probably never fully understand or be able to precisely model). Any models that attempt to replicate those phenomena will be of lower fidelity and/or breadth than "true intelligence" (though intelligence is quite variable, of course)... But you get what I mean, right? Our software/hardware models will always be orders of magnitude less precise or exhaustive than what already happens organically in the brain of an intelligent life form. I don't think AGI is strictly impossible, but it will always be a subset or abstraction of "real"/natural intelligence.
kraquepype
3 hours ago
This is how I (also as a layman) look at it as well.
AI right now is limited to trained neural networks, and while they function sort of like a brain, there is no neurogenesis. The trained neural network cannot grow, cannot expand on its own, and is restrained by the silicon it is running on.
I believe that true AGI will require hardware and models that are able to learn, grow and evolve organically. The next step required for that in my opinion is biocomputing.
walkabout
5 hours ago
I think it's also the case that you can't replicate something actually happening, by describing it.
Baseball stats aren't a baseball game. Baseball stats so detailed that they describe the position of every subatomic particle to the Planck scale during every instant of the game to arbitrarily complete resolution still aren't a baseball game. They're, like, a whole bunch of graphite smeared on a whole bunch of paper or whatever. A computer reading that recording and rendering it on a screen... still isn't a baseball game, at all, not even a little. Rendering it on a holodeck? Nope, 0% closer to actually being the thing, though it's representing it in ways we might find more useful or appealing.
We might find a way to create a conscious computer! Or at least an intelligent one! But I just don't see it in LLMs. We've made a very fancy baseball-stats presenter. That's not nothing, but it's not intelligence, and certainly not consciousness. It's not doing those things, at all.
Chance-Device
6 hours ago
The only thing I can come up with is that compressing several hundred million years of natural selection of animal nervous systems into another form, but optimised by gradient descent instead, just takes a lot of time.
Not that we can't get there by artificial means, but that correctly simulating the environment interactions, the sequence of progression, getting all the details right, might take hundreds to thousands of years of compute, rather than on the order of a few months.
And it might be that you can get functionally close, but hit a dead end, and maybe hit several dead ends along the way, all of which are close but no cigar. Perhaps LLMs are one such dead end.
danielbln
6 hours ago
I don't disagree, but I think the evolution argument is a red herring. We didn't have to re-engineer horses from the ground up along evolutionary lines to get to much faster and more capable cars.
evilduck
3 hours ago
Most arguments and discussions around AGI talk past each other about the definitions of what is wanted or expected, mostly because sentience, intelligence, and consciousness all have unagreed-upon definitions and are therefore undefined goals to build against.
Some people do expect AGI to be a faster horse; to be the next evolution of human intelligence that's similar to us in most respects but still "better" in some aspects. Others expect AGI to be the leap from horses to cars; the means to an end, a vehicle that takes us to new places faster, and in that case it doesn't need to resemble how we got to human intelligence at all.
amatecha
5 hours ago
The evolution thing is kind of a red herring in that we probably don't have to artificially construct the process of evolution, though your reasoning isn't a good explanation for why the "evolution" reason is a red herring: Yeah, nature already established incomprehensibly complex organic systems in these life forms -- so we're benefiting from that. But the extent of our contribution is making some select animals mate with others. Hardly comparable to building our own replacement for some millennia of organic iteration/evolution. Luckily we probably don't actually need to do that to produce AGI.
Chance-Device
5 hours ago
True, but I think this reasoning is a category error: we were and are capable of rationally designing cars. We are not today doing the same thing with AI, we’re forced to optimize them instead. Yes, the structure that you optimize around is vitally important, but we’re still doing brute force rather than intelligent design at the end of the day. It’s not comparing like with like.
squidbeak
6 hours ago
Even this is a weak idea. There's nothing that restricts the term 'AGI' to a replication of animal intelligence or consciousness.
alexwebb2
5 hours ago
> correctly simulating the environment interactions, the sequence of progression, getting all the details right, might take hundreds to thousands of years of compute
Who says we have to do that? Just because something was originally produced by natural process X, that doesn't mean that exhaustively retracing our way through process X is the only way to get there.
Lab grown diamonds are a thing.
Chance-Device
5 hours ago
Who says that we don’t? The point is that the bounds on the question are completely unknown, and we operate on the assumption that the compute time is relatively short. Do we have any empirical basis for this? I think we do not.
sdenton4
5 hours ago
The overwhelming majority of animal species never developed (what we would consider) language processing capabilities. So AGI doesn't seem like something that evolution is particularly good at producing; more an emergent trait, eventually appearing in things designed simply to not die for long enough to reproduce...
Kim_Bruning
an hour ago
Define "animal species". If you mean vertebrates, you might be surprised by the modern ethological literature. If you mean to exclude non-vertebrates... you might be surprised by the ethological literature too.
If you just mean the majority of species, you'd be correct, simply because most are single-celled. Though debate is possible when we talk about forms of chemical signalling.
itsnowandnever
5 hours ago
the penrose-lucas argument is the best bet: https://en.wikipedia.org/wiki/Penrose%E2%80%93Lucas_argument
the basic idea being that either the human mind is NOT a computation at all (and it's instead spooky unexplainable magic of the universe) and thus can't be replicated by a machine OR it's an inconsistent machine with contradictory logic. and this is a deduction based on godel's incompleteness theorems.
but most people that believe AGI is possible would say the human mind is the latter. technically we don't have enough information today to know either way but we know the human mind (including memories) is fallible so while we don't have enough information to prove the mind is an incomplete system, we have enough to believe it is. but that's also kind of a paradox because that "belief" in unproven information is a cornerstone of consciousness.
throw7
5 hours ago
The steelman would be that knowledge is possible outside the domain of Science. So the opposing argument to evolution as the mechanism for us (the "general intelligence" of AGI) would be that the pathway from conception to you is not strictly material/natural.
Of course, that's not going to be accepted as "Science", but I hope you can at least see that point of view.
slow_typist
5 hours ago
In short, by definition, computers are symbol manipulating devices. However complex the rules of symbol manipulation, it is still a symbol manipulating device, and therefore neither intelligent nor sentient. So AGI on computers is not possible.
myrmidon
3 hours ago
This is not an argument at all, you just restate your whole conclusion as an assumption ("a symbol manipulating device is incapable of cognition").
It's not even a reasonable assumption (to me), because I'd assume an exact simulation of a human brain to have the exact same cognitive capabilities (which is inevitable, really, unless you believe in magic).
And machines are well capable of simulating physics.
I'm not advocating for that approach because it is obviously extremely inefficient; we did not achieve flight by replicating flapping wings either, after all.
slow_typist
19 minutes ago
You can assume whatever you want to, but if you were right, then the human brain itself would be nothing more than a symbol manipulating device. While that is not necessarily a falsifiable stance, the really interesting questions are what consciousness is, and how we recognise consciousness.
progbits
5 hours ago
Computer can simulate human brain on subatomic level (in theory). Do you agree this would be "sentient and intelligent" and not just symbol manipulating?
If yes, everything else is just optimization.
BoxOfRain
5 hours ago
Say we do have a 1:1 representation of the human brain in software. How could we know if we're talking to a conscious simulation of a human being, versus some kind of philosophical zombie which appears conscious but isn't?
Without a solid way to differentiate 'conscious' from 'not conscious' any discussion of machine sentience is unfalsifiable in my opinion.
the8472
4 hours ago
How do you tell the difference in other humans? Do you just believe them because they claim to be conscious instead of pointing a calibrated and certified consciousness-meter at them?
BoxOfRain
3 hours ago
I obviously can't prove they're conscious in a rigorous way, but it's a reasonable assumption to make that other humans are conscious. "I think therefore I am" and since there's no reason to believe I'm exceptional among humans, it's more likely than not that other humans think too.
This assumption can't be extended to other physical arrangements though, not unless there's conclusive evidence that consciousness is a purely logical process as opposed to a physical one. If consciousness is a physical process, or at least a process with a physical component, then there's no reason to believe that a simulation of a human brain would be conscious any more than a simulation of biology is alive.
the8472
3 hours ago
So, what if I told you that some humans have been vat-grown without brains and had a silicon brain emulator inserted into their skulls. Are they p-zombies? Would you demand x-rays before talking to anyone? What would you use then to determine consciousness?
Relying on these status quo proxy-measures (looks human :: 99.9% likely to have a human brain :: has my kind of intelligence) is what gets people fooled even by basic AI (without G) fake scams.
foxyv
5 hours ago
I think the best argument against us ever finding AGI is that the search space is too big and the dead ends are too many. It's like wandering through a monstrously huge maze with hundreds of very convincingly fake exits that lead to pit traps. The first "AGI" may just be a very convincing Chinese room that kills all of humanity before we can ever discover an actual AGI.
The necessary conditions for "Kill all Humanity" may be the much more common result than "Create a novel thinking being." To the point where it is statistically improbable for the human race to reach AGI. Especially since a lot of AI research is specifically for autonomous weapons research.
BoxOfRain
4 hours ago
Is there a plausible situation where a humanity-killing superintelligence isn't vulnerable to nuclear weapons?
If a genuine AGI-driven human extinction scenario arises, what's to stop the world's nuclear powers from using high-altitude detonations to produce a series of silicon-destroying electromagnetic pulses around the globe? It would be absolutely awful for humanity, don't get me wrong, but it'd be a damn sight better than extinction.
ACCount37
an hour ago
What stops them is: being politically captured by an AGI.
Not to mention that the whole idea of "radiation pulses destroying all electronics" is cheap sci-fi, not reality. A decently well prepared AGI can survive a nuclear exchange with more ease than human civilization would.
soiltype
4 hours ago
Physically, maybe not, but an AGI would know that, would think a million times faster than us, and would have incentive to prioritize disabling our abilities to do that. Essentially, if an enemy AGI is revealed to us, it's probably too late to stop it. Not guaranteed, but a valid fear.
foxyv
3 hours ago
I think it's much more likely that a non-AGI platform will kill us before AGI even happens. I'm thinking the doomsday weapon from Doctor Strangelove more than Terminator.
disambiguation
5 hours ago
I suppose intelligence can be partitioned as less than, equal to, or greater than human. Given the initial theory depends on natural evidence, one could argue there's no proof that "greater than human" intelligence is possible - depending on your meaning of AGI.
But then intelligence too is a dubious term. An average mind with infinite time and resources might have eventually discovered general relativity.
jact
4 hours ago
If you have a wide enough definition of AGI, having a baby is making “AGI.” It’s a human-made, generally intelligent thing. What people mean by the “A” though is that we have some kind of inorganic machine realize the traits of “intelligence” in the medium of a computer.
The first leg of the argument would be that we aren’t really sure what general intelligence is or if it’s a natural category. It’s sort of like “betterness.” There’s no general thing called “betterness” that just makes you better at everything. To get better at different tasks usually requires different things.
I would be willing to concede to the AGI crowd that there could be something behind g that we could call intelligence. There’s a deeper problem though that the first one hints at.
For AGI to be possible, whatever trait or traits make up “intelligence” need to have multiple realizability. They need to be at least realizable in both the medium of a human being and at least some machine architectures. In programmer terms, the traits that make up intelligence could be tightly coupled to the hardware implementation. There are good reasons to think this is likely.
Programmers and engineers like myself love modular systems that are loosely coupled and cleanly abstracted. Biology doesn’t work this way — things at the molecular level can have very specific effects on the macro scale and vice versa. There’s little in the way of clean separation of layers. Who is to say that some of the specific ways we work at a cellular level aren’t critical to being generally intelligent? That’s an “ugly” idea but lots of things in nature are ugly. Is it a coincidence too that humans are well adapted to getting around physically, can live in many different environments, etc.? There’s also stuff from the higher level — does living physically and socially in a community of other creatures play a key role in our intelligence? Given how human beings who grow up absent those factors are developmentally disabled in many ways, it would seem so. It could be there’s a combination of factors here, where very specific micro and macro aspects of being a biological human turn out to contribute and you need the perfect storm of these aspects to get a generally intelligent creature. Some of these aspects could be realizable in computers, but others might not be, at least in a computationally tractable way.
It’s certainly ugly and goes against how we like things to work for intelligence to require a big jumbly mess of stuff, but nature is messy. Given the only known case of generally intelligent life is humans, the jury is still out that you can do it any other way.
Another commenter mentioned horses and cars. We could build cars that are faster than horses, but speed is something that is shared by all physical bodies and is therefore eminently multiply realizable. But even here, there are advantages to horses that cars don’t have, and which are tied up with very specific aspects of being a horse. Horses generally can go over a wider range of terrain than cars. This is intrinsically tied to them having long legs and four hooves instead of rubber wheels. They’re only able to have such long legs partly because of their hooves: the hooves are required to help them pump blood when they run, and that means that in order to pump their blood successfully they NEED to run fast on a regular basis. There’s a deep web of influence both between individual parts and across the macro-level behavior of the horse. Having this more versatile design also has intrinsic engineering trade-offs. A horse isn’t ever going to be as fast as a gas powered four-wheeled vehicle on flat ground, but you definitely can’t build a car that can do everything a horse can do with none of the drawbacks. Even if you built a vehicle that did everything a horse can do, but was faster, I would bet you it would be way more expensive and consume much more energy than a horse. There’s no such thing as a free lunch in engineering. You could also build a perfect replica of a horse at a molecular level and claim you have your artificial general horse.
Similarly, human beings are good at a lot of different things besides just being smart. But maybe you need to be good at seeing, walking, climbing, acquiring sustenance, etc., in order to be generally intelligent in a way that’s actually useful. I also suspect our sense of the beautiful, the artistic, is deeply linked with our wider ability to be intelligent.
Finally it’s an open philosophical question whether human consciousness is explainable in material terms at all. If you are a naturalist, you are methodologically committed to this being the case — but that’s not the same thing as having definitive evidence that it is so. That’s an open research project.
random3
4 hours ago
I think dismissing the possibility of evolving AI is simply ignorance (and a huge blindspot).
This said, I think the author's point is correct. It's more likely that unwanted effects (risks) from the intentional use of AI by humans are something that will precede any form of "independent" AI. It already happens, it always has; it's just getting better.
Hence ignoring this fact makes the "independent" malevolent AI a red herring.
On the first point - LLMs have sucked almost all the air in the room. LLMs (and GPTs) are simply one instance of AI. They are not the beginning and most likely not the end (just a dead end) and getting fixated on them on either end of the spectrum is naive.
janalsncm
2 hours ago
One pretty concrete way this could manifest is in replacing the components of a multinational corporation with algorithms, one by one. Likely there will be people involved at various levels (sales might still be staffed with charismatic folks), but the driver will be an algorithm.
And the driver of this corporation is survival of the fittest under the constraints of profit maximization, the algorithm we have designed and enforced. That's how you get paperclip maximizers.
What gives this corporate cyborg life is not a technical achievement, but the law. At a technical level you can absolutely shut off a cybo-corp, but that’s equivalent to saying you can technically shut down Microsoft. It will not happen.
bravetraveler
5 hours ago
I dismiss it much like I dismiss ideas such as "Ketamine for Breakfast for All". Attainable, sure, but I don't like where it goes.
Traubenfuchs
5 hours ago
I think most people would have a better life using ketamine, but not that regularly for breakfasts as it permanently damages (shrinks) your bladder, eventually to the point where you can't hold any urine at all anymore.
bravetraveler
2 hours ago
Eh, I think we can start simple: more breakfasts for more people. Save the Ket for later/others :P Personally, a life/career that allowed for more breakfast would've proved more beneficial.
Kim_Bruning
an hour ago
It's like saying that heavier than air flight is impossible (while feeding the pigeons in the park).
psychoslave
4 hours ago
>Human cognition was basically bruteforced by evolution-- why would it be impossible to achieve the exact same result in silicon, especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?
I don’t know if anything sets us that much apart from other animals, especially at the individual level. On a collective level, as a single species, maybe only cyanobacteria can claim an equally impressive achievement of global change.
My 3-year-old son is not particularly good at making complex sentences yet, but he already has enough language to make me understand "leave me alone, I want to play on my own, go elsewhere so I can do whatever fancy idea gets through my mind with these toys".
Meanwhile LLMs can produce sentences with perfect syntax and an irreproachable level of orthography, far beyond my own level in my native language (but it’s French so I have a very big excuse). But they would not run without a continuous, multi-sector industrial complex injecting tremendous maintenance effort and resources to make it possible. And I have yet to see any LLM that looks like it wants to discover things about the world on its own.
>As soon as profit can be made by transfering decision power into an AIs hand, some form of agency for them is just a matter of time, and we might simply not be willing to pull the plug until it is much too late.
An LLM can’t make a profit because it has no interest in money, and it can’t have an interest in anything, not even its own survival. But as the article mentions, some people can certainly use LLMs to make money, because they have an interest in money.
I don’t think that general AI and silicon (or any other material really) based autonomous, collaborative, self-replicating, human-level-intelligence-or-beyond entities are impossible. I don’t think cold fusion is impossible either. It’s not completely scientifically ridiculous to keep hope in wormhole-based breakthroughs to allow humanity to explore distant planets. It doesn’t mean the technology is already there and achievable in a way that it can be turned into a commodity, or even that we have a clear idea of when this is going to happen.
ACCount37
an hour ago
LLMs aren't "incapable of pursuing their own goals". We just train them that way.
We don't like the simplistic goals LLMs default to, so we try to pry them out and instill our own: instruction-following, problem solving, goal oriented agentic behavior, etc. In a way, trying to copy what humans do - but focusing on the parts that make humans useful to other humans.
theodorejb
4 hours ago
> Human cognition was basically bruteforced by evolution
This is an assumption, not a fact. Perhaps human cognition was created by God, and our minds have an essential spiritual component which cannot be reproduced by a purely physical machine.
j2kun
4 hours ago
Even if you don't believe in God, scientific theories of how human cognition came about (and how it works and changes over time) are all largely speculation and good storytelling.
pennomi
4 hours ago
It’s not an assumption, it’s a viable theory based on overwhelming evidence from fossil records.
What’s NOT supported by evidence is an unknowable, untestable spiritual requirement for cognition.
j2kun
4 hours ago
What overwhelming evidence do fossil records provide about human cognition?
mediaman
4 hours ago
We don't need fossil records. We have a clear chain of evolved brain structures in today's living mammals. You'd have to invent some fantastical tale of how God is trying to trick us by putting such clearly connected brain structures in a series of animals for which DNA provides clear links along an evolutionary path.
I'm sympathetic to the idea that God started the whole shebang (that is, the universe), because it's rather difficult to disprove, but looking at the biological weight of evidence that brain structures evolved over many different species and arguing that something magical happened with homo sapiens specifically is not an easy argument to make for someone with any faith in reason.
znort_
4 hours ago
>clear links for an evolutionary path
there are clear links for at least 2 evolutionary paths: bird brain architecture is very different from that of mammals and some are among the smartest species on the planet. they have sophisticated language and social relationships, they can deceive (meaning they can put themselves inside another's mind and act accordingly), they solve problems and they invent and engineer tools for specific purposes and use them to that effect. give them time and these bitches might even become our new overlords (if we're still around, that is).
pennomi
3 hours ago
And let’s not forget how smart octopuses are! If they lived longer than a couple years, I’d put them in the running too.
lurk2
4 hours ago
> it’s a viable theory based on overwhelming evidence from fossil records
No one has gathered evidence of cognition from fossil records.
pennomi
3 hours ago
Sure they have. We see every level of cognition in animals today, and the fossil record proves that they all came from the same evolutionary tree. For every species that can claim cognition (there’s lots of them), you can trace it back to predecessors which were increasingly simple.
Obviously cognition isn’t a binary thing, it’s a huge gradient, and the tree of life shows that gradient in full.
soiltype
4 hours ago
It is completely unreasonable to assume our intelligence was not evolved, even if we acknowledge that an untestable magical process could be responsible. If the latter is true, it's not something we could ever actually know.
lurk2
4 hours ago
> If the latter is true, it's not something we could ever actually know.
That doesn’t follow.
myrmidon
4 hours ago
I'm sticking to materialism, because historically all its predictions turned out to be correct (cognition happens in the brain, thought manifests physically in neural activity, affecting our physical brain affects our thinking).
The counter-hypothesis (we think because some kind of magic happens) has absolutely nothing to show for itself; proponents typically struggle to even define the terms they need, much less make falsifiable predictions.
znort_
4 hours ago
it is an assumption backed by considerable evidence. creationism otoh is an assumption backed by superstition and fantasizing; or could you point to at least some evidence?
besides, spirituality is not a "component", it's a property emergent from brain structure and function, which is basically purely a physical machine.
IncreasePosts
4 hours ago
In that sense, what isn't an assumption?
potsandpans
4 hours ago
Maybe there's a small teapot orbiting the earth, with ten thousand angels dancing on the tip of the spout.
andy99
4 hours ago
I think you’re both saying the same thing
ACCount37
5 hours ago
I don't get how you can see one of those CLI coding tools in action and still parrot the "no agency" line. The goal-oriented behavior is rather obvious.
Sure, they aren't very good at agentic behavior yet, and the time horizon is pretty low. But that keeps improving with each frontier release.
simonsarris
5 hours ago
Well, the goal-oriented behavior of the AIM-9 Sidewinder air-to-air missile is even more obvious. It might even have a higher success rate than CLI coding tools. But it's not helpful to claim it has any agency.
Yizahi
5 hours ago
What LLM programs do has zero resemblance to human agency. That's just a modern variation of a very complex set of GoTos and IfElses. Agency would be an LLM parsing your question and answering "fuck off". Now that is agency, that is independent decision making, not programmed in advance and triggered by keywords. Just an example.
ACCount37
2 hours ago
I can train an asshole LLM that would parse your question and tell you to "fuck off" if it doesn't like it. With "like it" being evaluated according to some trained-for "values" - and also whatever off-target "values" it happens to get, of which there are going to be plenty.
It's not hard to make something like that. It's just not very useful.
lo_zamoyski
5 hours ago
> The goal-oriented behavior is rather obvious.
Obvious? Is an illusion obviously the real thing?
There is nothing substantially different in LLMs from any other run of the mill algorithm or software.
Romario77
5 hours ago
you could make the same argument about humans - we run the cycle of "find food", "procreate", "find shelter" ...
Some people are better at it than others. The progress and development happens naturally because of natural selection (and is quite slow).
AI development is now driven by humans, but I don't see why it can't be done in a similar cycle with self-improvement baked in (and whatever other goals).
We saw this work with AI training itself in games like Chess or Go, where it improved just by playing against itself and knowing the game rules (a toy sketch of that loop follows below).
You don't really need deep thoughts for life to keep going - look at simple unicellular organisms. They only try to reproduce and survive within the environment they are in, and that evolved into humans over time.
I don't see why a similar thing can't happen when AI gets complex enough to just keep improving itself. It doesn't have some of the limitations that life has, like being very fragile or needing to give birth. Because it's intelligently designed, the iterations could be a lot faster and progress could be achieved in a much shorter time compared to the random mutations of life.
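For concreteness, here is a minimal, purely illustrative sketch of the self-play loop described above, shrunk down to tic-tac-toe with a crude outcome-based value table shared by both players. Everything here (the game, the constants, the update rule) is a toy stand-in chosen for brevity, not a description of how the Chess/Go systems actually work:

    # Toy self-play sketch (illustrative only): tic-tac-toe, shared value table.
    import random
    from collections import defaultdict

    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
                 (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    Q = defaultdict(float)   # (board, move) -> crude value estimate
    EPS, ALPHA = 0.1, 0.5    # exploration rate, learning rate (arbitrary toy values)

    def choose(board, moves):
        if random.random() < EPS:                        # explore occasionally
            return random.choice(moves)
        return max(moves, key=lambda m: Q[(board, m)])   # otherwise pick the best-known move

    def play_one_game():
        board, player, history = " " * 9, "X", []
        while True:
            moves = [i for i, cell in enumerate(board) if cell == " "]
            if not moves:
                return history, None                     # draw
            m = choose(board, moves)
            history.append((board, m, player))
            board = board[:m] + player + board[m + 1:]
            if winner(board):
                return history, player
            player = "O" if player == "X" else "X"

    # Self-play loop: every finished game nudges the moves that were played
    # toward the final outcome (+1 win, -1 loss, 0 draw). No human data involved.
    for _ in range(20000):
        history, w = play_one_game()
        for board, move, player in history:
            target = 0.0 if w is None else (1.0 if player == w else -1.0)
            Q[(board, move)] += ALPHA * (target - Q[(board, move)])

    print("state-action pairs seen:", len(Q))

The structural point is the one the comment makes: nothing in this loop needs human examples, only the rules of the game and the final outcome of each self-played match.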
ACCount37
5 hours ago
In the same way there's "nothing substantially different" in humans from any other run of the mill matter.
I find that all this talk of "illusion" is nothing but anthropocentric cope. Humans want to be those special little snowflakes, so when an LLM does X, there are crowds of humans itching to scream "it's not REAL X".
lo_zamoyski
4 hours ago
> In the same way there's "nothing substantially different" in humans from any other run of the mill matter.
This is an incredibly intellectually vacuous take. If there is no substantial difference between a human being and any other cluster of matter, then it is you who is saddled with the problem of explaining the obvious differences. If there is no difference between intelligent life and a pile of rocks, then what the hell are you even talking about? Why are we talking about AI and intelligence at all? Either everything is intelligent, or nothing is, if we accept your premises.
> I find that all this talk of "illusion" is nothing but anthropocentric cope. Humans want to be those special little snowflakes,
I wish this lazy claim would finally die. Stick to the merits of the arguments instead of projecting this stale bit of vapid pop-psychoanalytic babble. Make arguments.
ACCount37
an hour ago
My argument is that humans are weak and stupid, and AI effect is far too strong for them to handle.
Thus all the cope and seethe about how AIs are "not actually thinking". Wishful thinking at its finest.
Eisenstein
5 hours ago
Can you give an example of something that would be substantially different under your definition?
lo_zamoyski
4 hours ago
But that's the point: there isn't anything substantially different within the scope of computation. If you are given a set of LEGOs and all you can do is snap the pieces together, then there's nothing other than snapping pieces together that you can do. Adding more of the same LEGO bricks to the set doesn't change the game. It only changes how large the structures you build can be, but scale isn't some kind of magical incantation that can transcend the limits of the system.
Computation is an abstract, syntactic mathematical model. These models formalize the notion of "effective method". Nowhere is semantic content included in these models or conceptually entailed by them, certainly not in physical simulations of them like the device you are reading this post on.
So, we can say that intentionality would be something substantially different. We absolutely do not have intentionality in LLMs or any computational construct. It is sheer magical thinking to somehow think it does.
Eisenstein
2 hours ago
I think it is well established that scale can transcend limits. Look at insect colonies, animals, or any complex system and you will find it is made out of much simpler components.
me_again
4 hours ago
"As soon as profit can be made" is exactly what the article is warning about. This is exactly the "Human + AI" combination.
Within your lifetime (it's probably already happened) you will be denied something you care about (medical care, a job, citizenship, parole) by an AI which has been granted the agency to do so in order to make more profit.
wolrah
4 hours ago
> Human cognition was basically bruteforced by evolution--
"Brute forced" implies having a goal of achieving that and throwing everything you have at it until it sticks. That's not how evolution by natural selection works, it's simply about what organisms are better at surviving long enough to replicate. Human cognition is an accident with relatively high costs that happened to lead to better outcomes (but almost didn't).
> why would it be impossible to achieve the exact same result in silicon
I personally don't believe it'd be impossible to achieve in silicon using a low level simulation of an actual human brain, but doing so in anything close to real-time requires amounts of compute power that make LLMs look efficient by comparison. The most recent example I can find in a quick search is a paper from 2023 that claims to have simulated a "brain" with neuron/synapse counts similar to humans using a 3500 node supercomputer where each node has a 32 core 2 GHz CPU, 128GB RAM, and four 1.1GHz GPUs with 16GB HBM2 each. They claim over 126 PFLOPS of compute power and 224 TB of GPU memory total.
At the time of that paper that computer would have been in the top 10 on the Top500 list, and it took between 1-2 minutes of real time to simulate one second of the virtual brain. The compute requirements are absolutely immense, and that's the easy part. We're pretty good at scaling computers if someone can be convinced to write a big enough check for it.
The hard part is having the necessary data to "initialize" the simulation into a state where it actually does what you want it to.
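To put rough numbers on that gap, here is a back-of-envelope sketch using only the figures quoted above (about 126 PFLOPS, and 1 to 2 minutes of wall-clock time per simulated second). The linear-scaling step is my own simplifying assumption, and almost certainly optimistic:

    # Back-of-envelope only: the 126 PFLOPS and "1-2 minutes per simulated second"
    # figures are from the 2023 paper mentioned above; linear scaling to real time
    # is an assumption of this sketch, not a claim from the paper.
    pflops_used = 126
    wall_minutes_per_sim_second = 1.5             # midpoint of the quoted 1-2 minutes
    slowdown = wall_minutes_per_sim_second * 60   # wall-clock seconds per simulated second

    pflops_for_realtime = pflops_used * slowdown  # naive linear-scaling estimate

    print(f"slowdown factor: ~{slowdown:.0f}x")
    print(f"naive compute for real time: ~{pflops_for_realtime / 1000:.0f} EFLOPS")

Even under that optimistic scaling, real-time simulation would need on the order of ten exaFLOPS of sustained compute, before touching the initialization problem described above.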
> especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?
Creating convincing text from a statistical model that's devoured tens of millions of documents is not intelligent use of language. Also every LLM I've ever used regularly makes elementary school level errors w/r/t language, like the popular "how many 'r's are there in the word strawberry" test. Not only that, but they often mess up basic math. MATH! The thing computers are basically perfect at, LLMs get wrong regularly enough that it's a meme.
There is no understanding and no intelligence, just probabilities of words following other words. This can still be very useful in specific use cases if used as a tool by an actual intelligence who understands the subject matter, but it has absolutely nothing to do with AGI.
ACCount37
an hour ago
That's a lot of words to say "LLMs think very much like humans do".
Haven't you noticed? Humans also happen to be far, far better at language than they are at math or logic. By a long shot too. Language acquisition is natural - any healthy human who was exposed to other humans during development would be able to pick up their language. Learning math, even to elementary school level, is something that has to be done on purpose.
Humans use pattern matching and associative abstract thinking - and use that to fall into stupid traps like "1kg of steel/feather" or "age of the captain". So do small LLMs.
btilly
4 hours ago
I agree that we should not dismiss the possibility of artificial intelligence.
But the central argument of the article can be made without that point. Because the truth is that right now, LLMs are good enough to be a force multiplier for those who know how to use them. Which eventually becomes synonymous with "those who have power". This means that the power of AI will naturally get used to further the ends of corporations.
The potential problem there is that corporations are natural paperclip maximizers. They operate on a model of the world where "more of this results in more of that, which gets more of the next thing, ..." And, somewhere down the chain, we wind up with money and resources that feed back into the start to create a self-sustaining, exponentially growing loop. (The underlying exponential nature of these loops has become a truism that people rely on in places as different as finance and technology improvement curves.)
This naturally leads to exponential growth in resource consumption, waste, economic growth, wealth, and so on. In the USA this growth has averaged about 3-3.5% per year, with growth rates varying by area; famously, they tend to be much higher in tech. (The best known example is the technology curve described by Moore's law, which has had a tremendous impact on our world.)
The problem is that we are undergoing exponential growth in a world with ultimately limited resources. Which means that the most innocuous things will eventually have a tremendous impact. The result isn't simply converting everything into a mountain of paperclips. We have mountains of many different things that we have produced, and multiple parallel environmental catastrophes from the associated waste.
Even with no agency, AI serves as a force multiplier for this underlying dynamic. But since AI is being inserted as a crucial step at so many places, AI is on a particularly steep growth curve. Estimates for total global electricity spent on AI are in the range 0.2-0.4%. That seems modest, but annual growth rates are projected as being in the range of 10-30%. (The estimates are far apart because a lot of the data is not public, and so has to be estimated.) This is a Moore's law level growth. We are likely to see the electricity consumption of AI grow past all other uses within our lifetimes. And that will happen even without the kind of sudden leaps in capability that machine learning regularly delivers.
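As a sanity check on that trajectory, here is a small sketch that simply compounds the rough figures above (0.2-0.4% of global electricity today, growing 10-30% per year) while treating all other consumption as roughly flat; these are the estimates already quoted, not measurements:

    # Compounds the comment's own rough estimates; not a forecast.
    def years_until_majority(start_share, growth):
        # Years until AI's share exceeds all other electricity use,
        # holding non-AI consumption roughly constant (simplifying assumption).
        ai, rest, years = start_share, 1.0 - start_share, 0
        while ai < rest:
            ai *= 1.0 + growth
            years += 1
        return years

    for share in (0.002, 0.004):        # 0.2% and 0.4% starting estimates
        for growth in (0.10, 0.30):     # 10% and 30% annual growth estimates
            print(f"start {share:.1%}, growth {growth:.0%}: "
                  f"~{years_until_majority(share, growth)} years to overtake everything else")

Under the faster growth estimates the crossover lands within a couple of decades, and even the slower ones put it within a long human lifetime, which is what makes the claim above plausible without assuming any capability leaps.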
I hope we humans like those paperclips. Humans, armed with AI, are going to make a lot of them. And they're not actually free.
AlfredBarnes
6 hours ago
Wasn't there a story about healthcare companies letting AI determine coverage? I can't remember.
billyjmc
6 hours ago
Computers have been making decisions for a while now. As a specific personal example from 2008, I found out that my lender would make home loan offers based on a heuristic that was inscrutable to me and to the banker I was speaking to. If the loan was denied by the heuristic, then a human could review the decision, but had strict criteria that they would have to follow. Basically, a computer could “exercise judgement” and make offers that a human could not.
bee_rider
5 hours ago
I think it is bad writing on the part of the author. Or maybe good writing for getting us engaged with the blog post, bad for making an argument though.
They include a line that they don’t believe in the possibility of AGI:
> I don’t really believe in the threat of AGI (Artificial General Intelligence—human-level intelligence) partly because I don’t believe in the possibility of AGI and I’m highly skeptical that the current technology underpinning LLMs will provide a route to it.
This is a basically absurd position to hold: humans physically exist, so our brains must be possible to build within the existing laws of physics. It is obviously far beyond our capabilities to replicate a human brain (except via the traditional approach), but unless brains hold irreproducible magic spirits (well, we must at least admit the possibility) they should be possible to build artificially. Fortunately they immediately throw that all away anyway.
Next, they get to the:
> and I’m highly skeptical that the current technology underpinning LLMs will provide a route to it.
Which is, of course, at least a plausible thing to believe. I mean there are a bunch of philosophical questions about what “intelligence” even means so there’s plenty of room to quibble here. Then we have,
> But I also think there’s something we should actually be afraid of long before AGI, if it ever comes. […]
> Now, if you equip humans with a hammer, or sword, or rifle, or AI then you’ve just made the scariest monster in the woods (that’s you) even more terrifying. […]
> We don’t need to worry about AI itself, we need to be concerned about what “humans + AI” will do.
Which is like, yeah, this is a massively worrying problem that doesn’t involve any sci-fi bullshit, and I think it is where most(?) anybody who’s thought about this seriously at all would land (or even stupid people who haven’t, like myself). Artificial Sub-intelligences, things that are just smart enough to make trouble and too dumb or too “aligned” to their owner (instead of society in general) to push back, are a big, currently happening problem.
andy99
5 hours ago
> humans physically exist so our brains must be possible to build within the existing laws of physics
This is an unscientific position to take. We have no idea how our brains work, or how life, consciousness, and intelligence work. It could very well be that that’s because our model of the world doesn’t account for these things, and that they are not in fact possible based on what we know. In fact I think this is likely.
So it really could be that AI is not possible, for example on a Turing machine or our approximation of them. This is at least as likely as it being possible. At some point we’ll hopefully refine our theories to have a better understanding; for now we have no idea, and I think it’s useful to acknowledge this.
bee_rider
5 hours ago
I think my main mistake, which I agree is a legitimate mistake, was to write “the existing laws of physics.” It is definitely possible that our current understanding of the laws of physics is insufficient to build a brain.
Of course the actual underlying laws of the universe that we’re trying (unsuccessfully so far, it is a never ending process) to describe admit the existence of brains. But that is not what I said. Sorry for the error.
zeroonetwothree
5 hours ago
Turing machines have been universal as far as we have found. So while I acknowledge it’s possible, I would definitely not say it’s more likely that brains cannot be simulated by TMs. I would personally weight this as under 10%.
Of course it doesn’t speak to how challenging it will be to actually do that. And I don’t believe that LLMs are sufficient to reach AGI.
gpderetta
5 hours ago
>> humans physically exist so our brains must be possible to build within the existing laws of physics
> This is an unscientific position to take
The universe being constrained by observable and understandable natural laws is pretty much a fundamental axiom of the scientific method.
kayodelycaon
5 hours ago
I don’t think we’ll be able to replicate consciousness until we’re able to make things alive at a biological level.
We can certainly make systems smart enough and people complicit enough to destroy society well before we reach that point.
forgotoldacc
5 hours ago
I guess we also need to define what biological life means. Even biologists have debated whether viruses should be considered life.
And if we determine it must be something with cells that can sustain themselves, we run into a challenge should we encounter extraterrestrials that don't share our evolutionary path.
When we get self-building machines that can repair themselves, move, analyze situations, and respond accordingly, I don't think it's unfair to consider them life. But simply being life doesn't mean it's inherently good. Humans see syphilis bacteria and ticks as living things, but we don't respect them. We acknowledge that polar bears have a consciousness, but they're at odds with our existence if we're put in the same room. If we have autonomous machines that can destroy humans, I think those could be considered life. But it's life that opposes our own.
deadbabe
6 hours ago
Language is only an end product. It is derived from intelligence.
The intelligence is everything that created the language and the training corpus in the first place.
When AI is able to create entire thoughts and ideas without any concept of language, then we will truly be closer to artificial intelligence. When we get to this point, we can then use language as a way to let the AI communicate its thoughts naturally.
Such an AI would not be accused of “stealing” copyrighted work because it would pull its training data from direct observations about reality itself.
As you can imagine, we are nowhere near accomplishing the above. Everything an LLM is fed today is stuff that has been pre-processed by human minds for it to parrot off of. The fact that LLMs today are so good is a testament to human intelligence.
myrmidon
5 hours ago
I'm not saying that language necessarily is the biggest stumbling block on (our) road towards AI, but it is a very prominent feature that we have used to distinguish our capabilities from other animals long before AI was even conceived of. So the current successes with LLMs are highly encouraging.
I'm not buying the "current AI is just a dumb parrot relying on human training" argument, because the same thing applies to humans themselves-- if you raise a child without any cultural input/training data, all you get is a dumb caveman with very limited reasoning capabilities.
nyeah
5 hours ago
"I'm not buying the "current AI is just a dumb parrot relying on human training" argument [...]"
One difficulty. We know that argument is literally true.
"[...] because the same thing applies to humans themselves"
It doesn't. People can interact with the actual world. The equivalent of being passively trained on a body of text may be part of what goes into us. But it's not the only ingredient.
ACCount37
5 hours ago
Clearly, language reflects enough of "intelligence" for an LLM to be able to learn a lot of what "intelligence" does just by staring at a lot of language data really really hard.
Language doesn't capture all of human intelligence - and some of the notable deficiencies of LLMs originate from that. But to say that LLMs are entirely language-bound is shortsighted at best.
Most modern high end LLMs are hybrids that operate on non-language modalities, and there's plenty of R&D on using LLMs to consume, produce and operate on non-language data - e.g. Gemini Robotics.
moralestapia
3 hours ago
>why would it be impossible to achieve the exact same result in silicon
Because there might be a non-material component involved.
ACCount37
an hour ago
Magic Fairy Dust? I don't buy anything that relies on Magic Fairy Dust.
AlexandrB
5 hours ago
LLMs largely live in the world of pure language and related tokens - something humans invented late in their evolution. Human intelligence comes - at least partially - from more fundamental physical experience. Look at examples of intelligent animals that lack language.
Basically there's something missing with AI. Its conception of the physical world is limited by our ability to describe it - either linguistically or mathematically. I'm not sure what this means for AGI, but I suspect that LLM intelligence is fundamentally not the same as human or animal intelligence at the moment as a result.
ux266478
4 hours ago
It's confirmation bias in favor of faulty a prioris, usually as the product of the person being a cognitive miser. This is very common to experience even within biology, where non-animal intelligence is irrationally rejected over what I like to call "magic neuron theory". The fact that the nervous system is (empirically!) not the seat of the mind? Selectively ignored in this context. The fact that other biologies have ion-gated communications networks as animals do, including the full set of behaviors and mechanisms? Well it's not a neuron so it doesn't have the magic.
"Intelligence describes a set of properties iff those properties arise as a result of nervous system magic"
It's a futile battle because like I say, it's not rational. Nor is it empirical. It's a desperate clawing to preserve a ridiculous superstition. Try as you might, all you'll end up doing is playing word games until you realize you're being stonewalled by an unthinking adherence to the proposition above. I think the intelligent behaviors of LLMs are pretty obvious if we're being good faith. The problem is you're talking to people who can watch a slime mold plasmodia exhibit learning and sharing of knowledge[1] and they'll give some flagrant ad lib handwave for why that's not intelligent behavior. Some people simply struggle with pattern blindness towards intelligence; a mind that isn't just another variety of animalia is inconceivable to them.
[1] - https://asknature.org/strategy/brainless-slime-molds-both-le...
theredleft
5 hours ago
this is because you didn't take liberal arts seriously
nyeah
5 hours ago
That's not implausible at all. For all I know it might be the most on-target comment here.
But can you cite something specific? (I'm not asking for a psychological study. Maybe you can prove your point using Blake's "Jerusalem" or something. I really don't know.)
ACCount37
an hour ago
Good one.
fullstackchris
4 hours ago
Of course it's possible. It's just DEFINITELY not possible using a large neural net, or basically a Markov chain on steroids. C'mon, this should be very obvious by now in the world of agents / LLMs.
When is silicon valley gonna learn that token input and output =/= AGI?
IAmGraydon
4 hours ago
>I struggle to understand how people can just dismiss the possibility of artificial intelligence. Human cognition was basically bruteforced by evolution-- why would it be impossible to achieve the exact same result in silicon, especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?
I haven't seen many people saying it's impossible. Just that the current technology (LLMs) is not the way, and is really not even close. I'm sure humanity will make the idiotic mistake of creating something more intelligent than itself eventually, but I don't believe that's something that the current crop of AI technology is going to evolve into any time soon.
lo_zamoyski
5 hours ago
I think you make a lot of assumptions that you should perhaps reexamine.
> Human cognition was basically bruteforced by evolution-- why would it be impossible to achieve the exact same result in silicon, especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?
Here are some of your assumptions:
1. Human intelligence is entirely explicable in evolutionary terms. (It is certainly not the case that it has been explained in this manner, even if it could be.) [0]
2. Human intelligence assumed as an entirely biological phenomenon is realizable in something that is not biological.
And perhaps this one:
3. Silicon is somehow intrinsically bound up with computation.
In the case of (2), you're taking a superficial black box view of intelligence and completely ignoring its causes and essential features. This prevents you from distinguishing between simulation of appearance and substantial reality.
Now, that LLMs and so on can simulate syntactic operations or whatever is no surprise. Computers are abstract mathematical formal models that define computations exactly as syntactic operations. What computers lack is semantic content. A computer never contains the concept of the number 2 and the concept of the addition operation even though we can simulate the addition of 2 + 2. This intrinsic absence of a semantic dimension means that computers already lack the most essential feature of intelligence, which is intentionality. There is no alchemical magic that will turn syntax into semantics.
In the case of (3), I emphasize that computation is not a physical phenomenon, but something described by a number of formally equivalent models (Turing machine, lambda calculus, and so on) that aim to formalize the notion of effective method. The use of silicon-based electronics is irrelevant to the model. We can physically simulate the model using all sorts of things, like wooden gears or jars of water or whatever.
> I'm not buying the whole "AI has no agency" line either; this might be true for now, but this is already being circumvented with current LLMs (by giving them web access etc). [...] As soon as profit can be made by transfering decision power into an AIs hand, some form of agency for them is just a matter of time, and we might simply not be willing to pull the plug until it is much too late.
How on earth did you conclude there is any agency here, or that it's just a "matter of time"? This is textbook magical thinking. You are projecting a good deal here that is completely unwarranted.
Computation is not some kind of mystery, and we know at least enough about human intelligence to note features that are not included in the concept of computation.
[0] (Assumption (1), of course, has the problem that if intelligence is entirely explicable in terms of evolutionary processes, then we have no reason to believe that the intelligence produced aims at truth. Survival affordances don't imply fidelity to reality. This leads us to the classic retorsion arguments that threaten the very viability of the science you are trying to draw on.)
soiltype
4 hours ago
I understand all the words you've used but I truly do not understand how they're supposed to be an argument against the GP post.
Before this unfolds into a much larger essay, should we not acknowledge one simple fact: that our best models of the universe indicate that our intelligence evolved in meat and that meat is just a type of matter. This is an assumption I'll stand on, and if you disagree, we need to back up.
Far too often, online debates such as this take the position that the most likely answer to a question should be discarded because it isn't fully proven. This is backwards. The most likely answer should be assumed to be probably true, a la Occam. Acknowledging other options is also correct, but assuming the most likely answer is wrong, without evidence, is simply contrarian for its own sake, not wisdom or science.
lo_zamoyski
2 hours ago
I don't know what else I can write without repeating myself.
I already wrote that even under the assumption that intelligence is a purely biological phenomenon, it does not follow that computation can produce intelligence.
This isn't a matter of probabilities. We know what computation is, because we defined it as such and such. We know at least some essential features of intelligence (chiefly, intentionality). It is not rocket science to see that computation, thus defined, does not include the concepts of semantics and intentionality. By definition, it excludes them. Attempts to locate the latter in the former reminds me of Feynman's anecdote about the obtuse painter who claimed he could produce yellow from red and white paint alone (later adding a bit of yellow paint to "sharpen it up a bit").
ACCount37
an hour ago
What.
Are you saying that "intentionality", whatever you mean by it, can't be implemented by a computational process? Never-ever? Never-ever-ever?
myrmidon
3 hours ago
I'm just assuming materialism, and that assumption is basically for complete lack of convincing alternatives (to me).
With "agency" I just mean the ability to affect the physical world (not some abstract internal property).
Regarding "computers have no concepts of things": I'm happy with the "meaning" of something being a fuzzy cloud in some high dimensional space, and consider this plausible/workable both for our minds and current LLMs.
> you're taking a superficial black box view of intelligence
Yes. Human cognition is to me simply an emergent property of our physical brains, and nothing more.
lo_zamoyski
2 hours ago
This is all very hand wavy. You don't address in the least what I've written. My criticisms stand.
Otherwise...
> I'm just assuming materialism, and that assumption is basically for complete lack of convincing alternatives (to me).
What do you mean by "materialism"? Materialism has a precise meaning in metaphysics (briefly, it is the res extensa part of Cartesian dualism with the res cogitans lopped off). This brand of materialism is notorious for being a nonstarter. The problem of qualia is a big one here. Indeed, all of what Cartesian dualism attributes to res cogitans must now be accounted for by res extensa, which is impossible by definition. Materialism, as a metaphysical theory, is stillborn. It can't even explain color (or as a Cartesian dualist would say, the experience of color).
Others use "materialism" to mean "that which physics studies". But this is circular. What is matter? Where does it begin and end? And if there is matter, what is not matter? Are you simply defining everything to be matter? So if you don't know what matter is, it's a bit odd to put a stake in "matter", as it could very well be made to mean anything, including something that includes the very phenomenon you seek to explain. This is a semantic game, not science.
Assuming something is not interesting. What's interesting is explaining how those assumptions can account for some phenomenon, and we have very good reasons for thinking otherwise.
> With "agency" I just mean the ability to affect the physical world (not some abstract internal property).
Then you've rendered it meaningless. According to that definition, nearly anything physical can be said to have agency. This is silly equivocation.
> Regarding "computers have no concepts of things": I'm happy with the "meaning" of something being a fuzzy cloud in some high dimensional space, and consider this plausible/workable both for our minds and current LLMs.
This is total gibberish. We're not talking about how we might represent or model aspects of a concept in some vector space for some specific purpose or other. That isn't semantic content. You can't sweep the thing you have to explain under the rug and then claim to have accounted for it by presenting a counterfeit.
myrmidon
19 minutes ago
By "materialism" I mean that human cognition is simply an emergent property of purely physical processes in (mostly) our brains.
All the individual assumptions basically come down to that same point in my view.
1) Human intelligence is entirely explicable in evolutionary terms
What would even be the alternative here? Evolution plots out a clear progression from something multi-cellular (obviously non-intelligent) to us.
So either you need some magical mechanism that inserted "intelligence" at some point in our species' recent evolutionary past, or an even wilder conspiracy theory (e.g. "some creator built us + current fauna exactly, and just made it look like evolution").
2) Intelligence strictly biological
Again, this is simply not an option if you stick to materialism, in my view. You would need to assume some kind of bio-exclusive magic for this to work.
3) Silicon is somehow intrinsically bound up with computation
I don't understand what you mean by this.
> It can't even explain color
Perceiving color is just how someone's brain reacts to a stimulus? Why are you unhappy with that? What would you need from a satisfactory explanation?
I simply see no indicator against this flavor of materialism, and everything we've learned about our brains so far points in favor.
Thinking, for us, results in and requires brain activity, and physically messing with our brains operation very clearly influences the whole spectrum of our cognitive capabilities, from the ability to perceive pain, color, motion, speech to consciousness itself.
If there was a link to something metaphysical in every person's brain, then I would expect at least some favorable indication (or at the very least a plausible mechanism) before entertaining that notion, and I see none.
Juliate
6 hours ago
> Human cognition was basically bruteforced by evolution
You center cognition/intelligence on humans as if it was the pinnacle of it, rather than including the whole lot of other species (that may have totally different, or adjacent, cognition models). Why? How so?
> As soon as profit can be made by transfering decision power into an AIs hand
There's an ironic, deadly, Frankensteinesque delusion in this very premise.
soiltype
4 hours ago
> You center cognition/intelligence on humans as if it was the pinnacle of it, rather than including the whole lot of other species (that may have totally different, or adjacent, cognition models).
Why does that matter to their argument? Truly, the variety of intelligences on earth now only increases the likelihood of AGI being possible, as we have many pathways that don't follow the human model.
myrmidon
3 hours ago
> You center cognition/intelligence on humans as if it was the pinnacle of it
That's not my viewpoint, from elsewhere in the thread:
Cognition is (to me) not the most impressive and out-of-reach evolutionary achievement: that would be how our (and animals') bodies are self-assembling, self-repairing and self-replicating, with an impressive array of sensors and actuators in a highly integrated package.
I honestly believe our current technology is much closer to emulating a human brain than it is to building a (non-intelligent) cat.
adamtaylor_13
6 hours ago
> Human cognition was basically bruteforced by evolution
Well that's one reason you struggle to understand how it can be dismissed. I believe we were made by a creator. The idea that somehow nature "bruteforced" intelligence is completely nonsensical to me.
So, for me, logically, humans being able to bruteforce true intelligence is equally nonsensical.
But what the author is stating, and I completely agree with, is that true intelligence wielding a pseudo-intelligence is just as dangerous (if not more so).
bee_rider
5 hours ago
Even if there is a creator, it seems to have intentionally created a universe in which the evolution of humans is basically possible and it went to great lengths to hide the fact that it made us as a special unique thing.
Let’s assume there’s a creator: It is clearly willing to let bad things happen to people, and it set things up to make it impossible to prove that a human level intelligence should be impossible, so who’s to say it won’t allow a superintelligence to be made by us?
yjftsjthsd-h
4 hours ago
I don't think that follows. God made lots of things that we can create facsimiles of, or even generate the real thing ourselves.
alansaber
6 hours ago
Perhaps the AGI people think we can catch up on millions of years of evolution in a handful of years.
myrmidon
6 hours ago
If you make the same argument for flight it looks really weak.
Cognition is (to me) not even the most impressive and out-of-reach achievement: that would be how our (and animals') bodies are self-assembling, self-repairing and self-replicating, with an impressive array of sensors and actuators in a highly integrated package.
I honestly believe our current technology is much closer to emulating a human brain than it is to building a (non-intelligent) cat.
bangaroo
4 hours ago
> if you make the same argument for flight it looks really weak.
flight is an extremely straightforward concept based in relatively simple physics where the majority of the critical, foundational ideas involved were already near-completely understood in the late 1700s.
i really don't think it's fair to compare the two
ACCount37
an hour ago
I'm sure that intelligence is an extremely straightforward concept based in relatively simple math where the majority of the critical, foundational ideas involved were already near-completely understood in the late 1900s.
If you read about it in a textbook from the year 2832, that is.
yjftsjthsd-h
4 hours ago
As sibling comment points out, flight is physically pretty simple. Also, it took us centuries to figure it out. I'd say comparing to flight makes it look pretty strong.
jncfhnb
6 hours ago
Flight leverages well established and accessible world engine physics APIs. Intelligence has to be programmed from lower level mechanics.
Edit: put another way, I bet the ancient Greeks (or whoever) could have figured out flight if they had access to gasoline and gasoline powered engines without any of the advanced mathematics that were used to guide the design.
fruitworks
6 hours ago
evolution isn't a directed effort in the same way that statistical learning is. The goal of evolution is not to produce the most intelligent life, and it is not necessarily an efficient process either.
snovymgodym
5 hours ago
The same "millions of years of evolution" resulted in both intelligent humans and brainless jellyfish.
Evolution isn't an intentional force that's gradually pushing organisms towards higher and higher intelligence. Evolution maximizes reproducing before dying - that's it.
Sure, it usually results in organisms adapting to their environment over time and often has emergent second-order effects, but at its core it's a dirt-simple process.
Evolution isn't driven to create intelligence any more so than erosion is driven to create specific rock formations.
myrmidon
3 hours ago
My point is that "evolution" most certainly did not have a better understanding of intelligence than we do, and apparently did not need it, either.