AGI is far from inevitable

64 points, posted 15 hours ago
by mpweiher

99 Comments

Gehinnn

11 hours ago

Basically the linked article argues like this:

> That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain.

(no other more substantial arguments were given)

I'm also very skeptical about seeing AGI soon, but LLMs do solve problems that people thought were extremely difficult to solve ten years ago.

godelski

11 hours ago

  > but LLMs do solve problems that people thought were extremely difficult to solve ten years ago.
Well, for something to be G or I you need it to solve novel problems. These things have ingested most of the Internet and I've yet to see a "reasoning" benchmark disentangle memorization from reasoning. Memorization doesn't mean they aren't useful (not sure why this was ever conflated since... computers are useful...), but it's very different from G or I. And remember that these tools are trained for human-preferred output. If humans prefer things to look like reasoning then that's what they optimize. [0]

Sure, maybe your cousin Throckmorton is dumb, but that's beside the point.

That said, I see no reason human level cognition is impossible. We're not magic. We're machines that follow the laws of physics. ML systems may be far from capturing what goes on in these computers, but that doesn't mean magic exists.

[0] If it walks like a duck, quacks like a duck, swims like a duck, and looks like a duck, it's probably a duck. But probably doesn't mean it isn't a well-made animatronic. We have those too, and they'll convince many humans they are ducks. But that doesn't change what's inside. The subtlety matters.

User23

10 hours ago

We don't really have the proper vocabulary to talk about this. Well, we do, but C.S. Peirce's writings are still fairly unknown. In short, there are two fundamentally distinct forms of reasoning.

One is corollarial reasoning. This is the kind of reasoning that draws deductions following directly from the premises, including subsequent deductions that can be made from those deductions. Obviously computers are very good at this sort of thing.

The other is theorematic reasoning. It deals with complexity and creativity. It involves introducing new hypotheses that are not present in the original premises or their corollaries. Computers are not so very good at this sort of thing.

When people say AGI, what they are really talking about is an AI that is capable of theorematic reasoning. The most romanticized example of that of course being the AI that is capable of designing (not aiding humans in designing, that's corollarial!) new more capable AIs.

All of the above is old hat to the AI winter era guys. But amusingly their reputations have been destroyed much the same as Peirce's was, by dissatisfied government bureaucrats.

On the other hand, we did get SQL, which is a direct lineal descendant (as in teacher to teacher) of Peirce's work, so there's that.

godelski

10 hours ago

We don't have proper language, but we've certainly improved, even since Peirce. You're right that many people are not well versed in the philosophical and logical discussions about what reasoning is (and sadly such literature review isn't always common in the ML community), but I'm not convinced Peirce solved it. I do like that he distinguishes many different categories and subcategories of reasoning.

  > All of the above is old hat to the AI winter era guys. But amusingly their reputations have been destroyed much the same as Peirce's was, by dissatisfied government bureaucrats.
Yeah, this has been odd, since a lot of their work has proven fruitful once scaled. I do think you need a combination of theory people and those more engineering-oriented; having too much of one is not a good thing. It seems like now we're overcorrecting and the community is trying to kick out the theorists by saying things like "it's just linear algebra"[0], or "you don't need math"[1], or "they're black boxes". These are unfortunate because they encourage one not to look inside and try to remove the opaqueness, or to dismiss those who do work on this and are bettering our understanding (sometimes even saying post hoc that it was obvious).

It is quite the confusing time. But I'd like to stop all the bullshit and try to actually make AGI. That requires a competition of ideas, not everyone just boarding the hype train (or being left without a career)....

[0] You can assume anyone that says this doesn't know linear algebra

[1] You don't need math to produce good models, but it sure does help you know why your models are wrong (and understanding the meta should make one understand my reference. If you don't, I'm not sure you're qualified for ML research. But that's not a definitive statement either).

User23

6 hours ago

> We don't have proper language, but we've certainly improved, even since Peirce. You're right that many people are not well versed in the philosophical and logical discussions about what reasoning is (and sadly such literature review isn't always common in the ML community), but I'm not convinced Peirce solved it. I do like that he distinguishes many different categories and subcategories of reasoning.

I'd love to hear more about this please, if you're inclined to share.

danaris

11 hours ago

I have seen far, far too many people say things along the lines of "Sure, LLMs currently don't seem to be good at [thing LLMs are, at least as of now, fundamentally incapable of], but hey, some people are pretty bad at that sometimes too!"

It demonstrates such a complete misunderstanding of the basic nature of the problem that I am left baffled that some of these people claim to actually be in the machine-learning field themselves.

How can you not understand the difference between "humans are not absolutely perfect or reliable at this task" and "LLMs by their very nature cannot perform this task"?

I do not know if AGI is possible. Honestly, I'd love to believe that it is. However, it has not remotely been demonstrated that it is possible, and as such, it follows that it cannot have been demonstrated that it is inevitable. If you want to believe that it is inevitable, then I have no quarrel with you; if you want to preach that it is inevitable, and draw specious inferences to "prove" it, then I have a big quarrel with you.

godelski

10 hours ago

  > I have seen far, far too many people say 
It is perplexing. I've jokingly called it "proof of intelligence by (self) incompetence".

I suspect that much of this is related to an overfitting of metrics within our own society, such as leetcode or standardized exams. They're useful tools, but only if you know what they actually measure and don't forget that they're a proxy.

I also have a hard time convincing people about the duck argument in [0].

Oddly enough, I have far more difficulty having these discussions with computer scientists. It's what I'm doing my PhD in (ABD), but my undergrad was physics. After teaching a bit, I think it's partly because in the hard sciences these differences get drilled into you when you do labs. Not always, but much more often. I see less of this type of conversation in CS and data science programs, where there is often a belief that there is a well-defined and precise answer (which always seemed odd to me, since there are many ways you can write the same algorithm).

fidotron

11 hours ago

> How can you not understand the difference between "humans are not absolutely perfect or reliable at this task" and "LLMs by their very nature cannot perform this task"?

This is a very good distillation of one side of it.

What LLMs have taught us is that a superficial grasp of language is good enough to reproduce a shocking proportion of what society has come to view as intelligent behaviors. I.e., it seems quite plausible that a whole load of the people failing to grasp the point you are making are doing so because their internal models of the universe are closer to those of LLMs than you might want to think.

godelski

10 hours ago

I think we already knew this, though, because the Turing test was passed by Eliza in the 1960s. PARRY was even better, and not even a decade later. For some reason people still talk about chess performance as if Deep Blue didn't demonstrate this. Hell, here's Feynman talking about many of the same things we're discussing today, back in the 80s:

https://www.youtube.com/watch?v=EKWGGDXe5MA

AnimalMuppet

9 hours ago

> What LLMs have taught us is a superficial grasp of language is good enough to reproduce a shocking proportion of what society has come to view as intelligent behaviors

I think that LLMs have shown that some fraction of human knowledge is encoded in the patterns of the words, and that by a "superficial grasp" of those words, you import a fairly impressive amount of knowledge without actually knowing anything. (And yes, I'm sure there are humans that do the same.)

But going from that to actually knowing what the words mean is a large jump, and I don't think LLMs are at all the right direction to jump in to get there. They need at least to be paired with something fundamentally different.

godelski

8 hours ago

I think the linguists already knew this, tbh, and that's what Chomsky's commentary on LLMs was about. Though I wouldn't say we learned "nothing"; even confirmation is valuable in science.

danaris

9 hours ago

But this is falling into exactly the same trap: the idea that "some people don't engage the faculties their brains do or could (with education) possess" is equivalent to LLMs that do not, and cannot, possess those faculties in the first place.

vundercind

11 hours ago

I think the fact that this particular fuzzy statistical analysis tool takes human language as input, and outputs more human language, is really dazzling some folks I’d not have expected to be dazzled by it.

That is quickly becoming the most surprising part of this entire development, to me.

godelski

10 hours ago

I'm astounded by them, still! But what is more astounding to me is all the reactions (even many in the "don't reason" camp, which I am part of).

I'm an ML researcher, and everyone was shocked when GPT3 came out. It is still impressive, and anyone saying it isn't is not being honest (likely to themselves). And it is amazing to me that anyone considers "we compressed the entire internet and built a human-language interface to access that information" anything short of mind-bogglingly impressive (and RAG demonstrates how to decrease the lossiness of this compression). It would have been complete sci-fi not even 10 years ago.

I thought it was bad that we make them out to be much more than they are, because when you bootstrap like that, you have to actually build that thing, and fast (e.g. the iPhone). But "reasoning" is too big of a promise and we're too far from success. So I'm concerned as a researcher myself, because I like living in the summer and I want to work towards AGI. If a promise is too big and the public realizes it, you usually don't just end up back where you were. So it is the duty of any scientist and researcher to prevent their field from being captured by people who overpromise. Not to "ruin the fun" but to make sure the party keeps going (sure, inviting a gorilla to the party may make it more exciting and "epic", but there's a good chance it also goes on a rampage and the party ends a lot sooner).
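
(As an aside on the RAG point, for anyone who hasn't met the term: retrieval-augmented generation looks up relevant source text at query time and hands it to the model verbatim, instead of relying only on what got baked into the weights. A toy sketch of the retrieval step, with made-up documents and a simple bag-of-words ranking of my own choosing, just to illustrate the idea:)

  # Toy sketch of the retrieval idea behind RAG (documents and ranking are
  # invented for illustration): find the most relevant source text and
  # prepend it to the prompt, rather than relying on lossy model weights.
  from collections import Counter
  import math

  documents = [
      "The Turing test was an early benchmark for conversational machines.",
      "Retrieval-augmented generation fetches source text at query time.",
      "Deep Blue defeated Garry Kasparov at chess in 1997.",
  ]

  def bag_of_words(text):
      return Counter(text.lower().split())

  def cosine(a, b):
      dot = sum(a[w] * b[w] for w in set(a) & set(b))
      norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
      return dot / norm if norm else 0.0

  def retrieve(query):
      q = bag_of_words(query)
      return max(documents, key=lambda d: cosine(q, bag_of_words(d)))

  query = "Who did Deep Blue beat at chess?"
  context = retrieve(query)
  prompt = f"Context: {context}\n\nQuestion: {query}"
  print(prompt)  # this augmented prompt is what would be sent to the LLM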

jofla_net

10 hours ago

At the very least, the last few years have laid bare what it takes, technically, to reconstruct certain chains of dialog, and how differently those chains are regarded as evidence for or against whatever intelligence it does or may take to conjure them.

SpicyLemonZest

10 hours ago

> How can you not understand the difference between "humans are not absolutely perfect or reliable at this task" and "LLMs by their very nature cannot perform this task"?

I understand the difference, and sometimes that second statement really is true. But a rigorous proof that problem X can't be reduced to architecture Y is generally very hard to construct, and most people making these claims don't have one. I've talked to more than a few people who insist that an LLM can't have a world model, or a concept of truth, or any other abstract reasoning capability that isn't a native component of its architecture.

godelski

8 hours ago

  > But a rigorous proof that problem X can't be reduced to architecture Y is generally very hard to construct, and most people making these claims don't have one. 
The requirement for proof is backwards: it's the ones who claim that the thing reasons who need the proof. They've provided evidence (albeit shaky), but evidence isn't proof. So your reasoning is a bit off base (albeit understandable and logical), since evidence contrary to the claim isn't proof either. The burden of proof isn't on the one countering the claim; it's on the one making it.

I need to make this extra clear because framing can make the direction of burden confusing. So, using an obvious example: if I claim there are ghosts in my house (something millions of people believe and similarly claim), we generally do not dismiss someone who is skeptical of these claims and offers an alternative explanation (even when it isn't perfectly precise), because the burden of proof is on the person making the stronger claim. Sure, there are people who will dismiss that too, but they want to believe in ghosts. So the question is whether we want to believe in ghosts in the (shell) machine. It's very easy to be fooled, so we must keep our guard up. And we also shouldn't feel embarrassed when we've been tricked. It happens to everyone. Anyone who claims they've never been fooled is only telling you that they are skillful at fooling themselves. I for one did buy into AGI being close when GPT 3 came out. Most researchers I knew did too! But as we learned more about what was actually going on under the hood, I think many of us changed our minds (just as we changed our minds after seeing GPT). Being able to change your mind is a good thing.

danaris

9 hours ago

And I'm much less frustrated by people who are, in fact, claiming that LLMs can do these things, whether or not I agree with them. Frankly, while I have a basic understanding of the underlying technology, I'm not in the ML field myself, and can't claim to be enough of an expert to say with any real authority what an LLM could ever be able to do, just what the particular LLMs I've used or seen the detailed explanations of can do.

No; this is specifically about people who stipulate that the LLMs can't do these things, but still want to claim that they are or will become AGI, so they just basically say "well, humans can't really do it, can they? so LLMs don't need to do it either!"

godelski

8 hours ago

I am an ML researcher. I don't think LLMs can reason, but similar to you I'm annoyed by people who say ML systems "will never" reason. That is a strong claim that needs to be substantiated too, just as the strong claim that LLMs reason needs strong evidence (which I've yet to see). It's subtle, but that matters, and such subtleties are why expertise is often required. We don't have a proof of universal approximation in a meaningful sense for transformers (yes, I'm aware of that paper).

Fwiw, I'm never frustrated by people having opinions. We're human; we all do. But I'm deeply frustrated with how common it is to watch people with no expertise argue with those who have it. It's one thing for LeCun to argue with Hinton, but it's another when Musk or some random anime-profile-picture person does. And it's weird that people take strong sides on discussions happening in the open. Opinions, totally fine. So are discussions. But when people assert correctness it starts to look religious. And there are many who overinflate the knowledge that they have.

So what I'm saying is please keep this attitude. Skepticism and pushback are not problematic, they are tools that can be valuable to learn. The things you're skeptical about are good to be skeptical about. As much as I hate the AGI hype I'm also upset by the over correction many of my peers take. Neither is scientific.

stroupwaffle

11 hours ago

I think it will be an organoid brain bio-machine. We can already grow organs—we just need to grow a brain and connect it to a machine.

godelski

11 hours ago

Maybe that'll be the first way, but there's nothing special about biology.

Remember, we don't have a rigorous definition of things like life, intelligence, and consciousness. We are narrowing it down and making progress, but we aren't there yet. (Some people confuse this with "moving the goalposts", but of course the goalposts move: as we get closer we gain better resolution on what we're trying to figure out. It would be moving the goalposts in the classic sense if we had a well-defined definition and then updated it specifically to make something not count, in a way inconsistent with the previous goalpost, as opposed to merely refining it.)

idle_zealot

11 hours ago

Somehow I doubt that organic cells (structures optimized for independent operation and reproduction, then adapted to work semi-cooperatively) resemble optimal compute fabric for cognition. By that same token I doubt that optimal compute fabric for cognition resembles GPUs or CPUs as we understand them today. I would expect whatever this efficient design is to be extremely unlikely to occur naturally, structurally, and involve some very exotic manufactured materials.

Dylan16807

11 hours ago

If a brain connected to a machine is "AGI" then we already have a billion AGIs at any given moment.

Moosdijk

11 hours ago

The keyword being “just”.

godelski

11 hours ago

  just adverb 
  to turn a complex thing into magic with a simple wave of the hands

  E.g. To turn lead into gold you _just_ need to remove 3 protons

ggm

11 hours ago

Just grow, just connect, just sustain, just avoid the many pitfalls. Indeed just is key

babyshake

11 hours ago

It's possible we'll see AI become increasingly AGI-like in some ways but not in others. For example, AI that can make novel scientific discoveries but can't make a song as good as your favorite musician, who creates a strong emotional effect with their music.

godelski

8 hours ago

More importantly, there are many ways that AI can appear to be getting more intelligent without making any progress in that direction. That's of real concern. As a silly example, we could be trying to "make a duck" by building an animatronic. You could get this thing to look very lifelike and trick ducks and humans alike (we have this already, btw). But that's very different from being a duck. Even if it were indistinguishable until you opened it up, progress on this animatronic would not necessarily be progress towards making a duck (though it isn't necessarily not progress either).

This is a concern because several top researchers -- at OpenAI -- have explicitly stated that they think you can get AGI by teaching the machine to act as human as possible. But that's a great way to fool ourselves, just as a duck may fall in love with an animatronic and never realize the deceit.

It's possible they're right, but it's important that we realize how this metric can be hacked.

KoolKat23

11 hours ago

This I'm very sure will be the case, but everyone will still move the goalposts and look past the fact that different humans have different strengths and weaknesses too. A tone deaf human for instance.

jltsiren

10 hours ago

There is another term for moving the goalposts: ruling out a hypothesis. Science is, especially in the Popperian sense, all about moving the goalposts.

One plausible hypothesis is that fixed neural networks cannot be general intelligences, because their capabilities are permanently limited by what they currently are. A general intelligence needs the ability to learn from experience. Training and inference should not be separate activities, but our current hardware is not suited for that.

KoolKat23

10 hours ago

If that's the case, would you say we're not generally intelligent, given that future humans will tend to be more intelligent?

That's just a timescale issue: if the learned experience of GPT4 is fed into the training of GPT5, then GPTx (i.e. including all of them) can be said to be a general intelligence. Alien life, one may say.

threeseed

10 hours ago

> That's just a timescale issue

Every problem is a timescale issue. Evolution has shown that.

And no, you can't just feed GPT4 into GPT5 and expect it to become more intelligent. It may be more accurate, since humans are telling it whether conversations are wrong or not. But you will still need advancements in the algorithms themselves to take things forward.

All of which takes us back to lots and lots of research. And if there's one thing we know is that research breakthroughs aren't a guarantee.

KoolKat23

9 hours ago

I think you missed my point slightly; sorry, probably my explanation.

I mean timescale as in between two points in time. Between the two points it meets the intelligence criteria you mentioned. Feeding human-vetted GPT4 data into GPT5 is no different from a human receiving inputs from its interaction with the world and learning. More accurate means smarter; gradually its intrinsic world model improves, as does reasoning, etc.

I agree those are the things that will advance it, but taking a step back, it potentially meets that criteria even if it's less useful day to day (given it's an abstract viewpoint over time and not at the human level).

tptacek

11 hours ago

Are you talking about the press release that the story on HN currently links to, or the paper that press release is about? The paper (I'm not vouching for it; I just skimmed it) appears to reduce AGI to a theoretical computational model, and then supplies a proof that it's not solvable in polynomial time.

Dylan16807

10 hours ago

Their definition of a tractable AI trainer is way too powerful. It has to be able to make a machine that can predict any pattern that fits into a certain Kolmogorov complexity, and then they prove that such an AI trainer cannot run in polynomial time.

They go above and beyond to express how generous they are being when setting the bounds, and sure, that's true in many ways, but requiring the AI trainer to succeed with non-negligible probability on any set of behaviors is not reasonable.

If I make a training data set based around sorting integers into two categories, and the sorting is based on encrypting them with a secret key, of course that's not something you can solve in polynomial time. But this paper would say "it's a behavior set, so we expect a tractable AI trainer to figure it out".
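
(To make that concrete, here's a toy sketch of the kind of behavior set I mean; the keyed-hash labeling is my own illustration, not anything from the paper. The labeling rule has a short description, so it fits a modest Kolmogorov-complexity bound, yet no polynomial-time trainer should be expected to recover it from examples without the key:)

  # Toy illustration (mine, not the paper's): labels derived from a keyed hash.
  # The labeling rule has low description length, but learning it from examples
  # without SECRET_KEY is computationally infeasible.
  import hmac, hashlib

  SECRET_KEY = b"hidden-from-the-learner"  # hypothetical; never shown to the trainer

  def label(n: int) -> int:
      """Sort integer n into category 0 or 1 via an HMAC of its byte encoding."""
      digest = hmac.new(SECRET_KEY, n.to_bytes(8, "big"), hashlib.sha256).digest()
      return digest[0] & 1  # take one bit of the MAC as the category

  # A training set whose labels look random to anyone without the key.
  training_data = [(n, label(n)) for n in range(20)]
  print(training_data)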

The model is broken, so the conclusion is useless.

Gehinnn

11 hours ago

I was referring to the press release article. I also looked at the paper now, and to me their presented proof looked more like a technicality than a new insight.

If it's not solvable in polynomial time, how did nature solve it in a couple of million years?

tptacek

10 hours ago

Probably by not modeling it as a discrete computational problem? Either way: the logic of the paper is not the logic of the summary of the press release you provided.

Veedrac

10 hours ago

That paper is unserious. It is filled with unjustified assertions, adjectives and emotional appeals, M$-isms like ‘BigTech’, and basic misunderstandings of mathematical theory clearly being sold to a lay audience.

tptacek

10 hours ago

It didn't look especially rigorous to me (but I'm not in this field). I'm really just here because we're doing that thing where we (as a community) have a big ol' discussion about a press release, when the paper the press release is about is linked right there.

more_corn

10 hours ago

Pretty sure anyone who tries can build an AI with capabilities indistinguishable from or better than humans.

ngruhn

11 hours ago

> There will never be enough computing power to create AGI using machine learning that can do the same [as the human brain], because we’d run out of natural resources long before we'd even get close

I don’t understand how people can so confidently make claims like this. We might underestimate how difficult AGI is, but come on?!

fabian2k

11 hours ago

I don't think the people saying that AGI is happening in the near future know what would be necessary to achieve it. Neither do the AGI skeptics, we simply don't understand this area well enough.

Evolution created intelligence and consciousness. This means that it is clearly possible for us to do the same. Doesn't mean that simply scaling LLMs could ever achieve it.

nox101

11 hours ago

I'm just going by the title. If the title was "Don't believe the hype, LLMs will not achieve AGI" then I might agree. If it was "Don't believe the hype, AGI is 100s of years away" I'd consider the arguments. But given that brains exist, it does seem inevitable that we will eventually create something that replicates them, even if we have to simulate every atom to do it. And once we do, it certainly seems inevitable that we'll have AGI, because unlike a brain, we can make our copy bigger, faster, and/or copy it. We can give it access to more info faster and more inputs.

snickerbockers

11 hours ago

The assumption that the brain is anything remotely resembling a modern computer is entirely unproven. And even more unproven is that we would inevitably be able to understand it and improve upon it. And yet more unproven still is that this "simulated brain" would be co-operative; if it's actually a 1:1 copy of a human brain then it would necessarily think like a person and be subject to its own whims and desires.

simonh

10 hours ago

We don’t have to assume it’s like a modern computer, it may well not be in important ways, but modern computers aren’t the only possible computers. If it’s a physical information processing phenomenon, there’s no theoretical obstacle to replicating it.

threeseed

10 hours ago

> there’s no theoretical obstacle to replicating it

Quantum theory states that there are no passive interactions.

So there are real obstacles to replicating complex objects.

gls2ro

6 hours ago

The main problem I see here is similar to the main problem in science:

Can we, being inside our own brains, fully understand our own brain?

Similarly: can we, being inside our Universe, fully understand it?

threeseed

11 hours ago

> it does seem inevitable that we will eventually create something

Also don't forget that many suspect the brain may be using quantum mechanics so you will need to fully understand and document that field.

Whilst of course you are simulating every atom in the universe using humanity's complete understanding of every physical and mathematical model.

umvi

11 hours ago

> Evolution created intelligence and consciousness

This is not provable; it's an assumption. Religious people (who account for a large percentage of the population) claim intelligence and/or consciousness stem from a "spirit" which existed before birth and will continue to exist after death. Also unprovable, by the way.

I think your foundational assertion would have to be rephrased as "Assuming things like God/spirits don't exist, AGI must be possible because we are AGI agents" in order to be true

HeatrayEnjoyer

2 hours ago

What relevance is the percentage of religious individuals?

Religion is evidently not relevant in any case. What ChatGPT already does today, religious individuals 50 years ago would have near-unanimously declared to be behavior only a "soul" can do.

SpicyLemonZest

11 hours ago

There's of course a wide spectrum of religious thought, so I can't claim to cover everyone. But most religious people would still acknowledge that animals can think, which means either that animals have some kind of soul (in which case, why can't a robot have a soul?) or that being ensouled isn't required to think.

umvi

10 hours ago

> in which case why can't a robot have a soul

It's not a question of whether a robot can have a soul; it's a question of how to a) procure a soul and b) bind said soul to a robot, both of which seem impossible given our current knowledge.

Terr_

11 hours ago

I think their qualifier "using machine learning" is doing a lot of heavy lifting here in terms of what it implies about continuing an existing engineering approach, cost of material, energy usage, etc.

In contrast, imagine the scenario of AGI using artificial but biological neurons.

staunton

11 hours ago

For some people, "never" means something like "I wouldn't know how, so surely not by next year, and probably not even in ten".

chpatrick

11 hours ago

"There will never be enough computing power to compute the motion of the planets because we can't build a planet."

SonOfLilit

11 hours ago

> ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.

Surprisingly, they seem to be attacking the only element of human cognition at which LLMs have already surpassed us.

azinman2

11 hours ago

They do not learn new facts instantly in a way that can rewrite old rules or even larger principles of logic. For example, if I showed you evidence right now that you were actually adopted (assuming you previously thought you weren't), it would rock your world and you'd instantly change everything and doubt so much. Then, whenever anything related to your family came up, this tiny but impactful fact would bleed into all of it. LLMs have no such ability.

This is similar to learning a new skill (the G part). I could give you a new tv and show you a remote that’s unlike any you’ve used before. You could likely learn it quickly and seamlessly adapt this new tool, as well as generalize its usage onto other new devices.

LLMs cannot do such things.

SonOfLilit

11 hours ago

They can't today. Except for AlphaProof, which can, by training on its own ideas. Tomorrow they might be able to, if we find better tricks (or maybe just scale more, since GPT3+ already shows (weak) online learning that it was definitely not trained for).
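
(By "weak online learning" I mean what's usually called in-context or few-shot learning: the pattern is picked up at inference time from examples placed in the prompt, with no weight update. A minimal sketch of how such a prompt is built; the task and examples are invented for illustration:)

  # Build a few-shot prompt. Any "learning" happens purely in context, from the
  # examples below, not from any parameter update. Task and examples are made up.
  examples = [
      ("blorp", "prolb"),   # unstated rule: reverse the string
      ("quix", "xiuq"),
      ("laser", "resal"),
  ]

  def few_shot_prompt(query: str) -> str:
      lines = [f"Input: {a}\nOutput: {b}" for a, b in examples]
      lines.append(f"Input: {query}\nOutput:")
      return "\n\n".join(lines)

  # This string would be sent to a text-completion model, which often infers
  # and applies the unstated rule to the new input.
  print(few_shot_prompt("daffodil"))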

Gehinnn

11 hours ago

I skimmed through the paper and couldn't make much sense of it. In particular, I don't understand how their results don't imply that human-level intelligence can't exist.

After all, Earth could be understood as a solar-powered supercomputer that took a couple of million years to produce humanity.

nerdbert

10 hours ago

> In particular, I don't understand how their results don't imply that human-level intelligence can't exist.

I don't think that's what it said. It said that it wouldn't happen from "machine learning". There are other ways it could come about.

oska

2 hours ago

> After all, Earth could be understood as a solar-powered supercomputer that took a couple of million years to produce humanity.

This is similar to a line I've seen Elon Musk trot out on a few occasions. It's a product of a materialistic philosophy (that the universe is only matter).

gqcwwjtg

11 hours ago

This is silly. The article talks as if we have any idea at all how efficient machine learning can be. As I remember it, the LLM boom came from transformers turning out to scale a lot better than anyone expected, so I'm not sure why something similar couldn't happen again.

fnordpiglet

11 hours ago

It's less about efficiency and more about continued improvement with increased scale. I wouldn't call self-attention-based transformers particularly efficient. And afaik we've not hit performance degradation with increased scale, even at these enormous scales.

However, I would note that I in principle agree that we aren't on the path to a human-like intelligence, because the difference between directed cognition (or however you want to characterize current LLMs and other AI) and awareness is extreme. We don't really understand even abstractly what awareness actually is, because it's impossible to interrogate, unlike expressive language, logic, or even art. It's far from obvious to me that we can use language or other outputs of our intelligent awareness to produce awareness, or whether goal-based agents cobbling together AI techniques even approximate awareness.

I suspect we will end up creating an amazing tool that has its own form of intelligence but will fundamentally not be like aware intelligence we are familiar with in humans and other animals. But this is all theorizing on my part as a professional practitioner in this field.

KoolKat23

10 hours ago

I think the answer is less complicated than you may think.

This is if you subscribe to the theory that free will is an illusion (i.e. your conscious decisions are an afterthought to justify the actions your brain has already taken, due to calculations following inputs such as hormones, nerve feedback, etc.). There is some evidence for this actually being the case.

These models already contain key components: the ability to process inputs and reason, the ability to justify their actions (give a model a restrictive system prompt and watch it do mental gymnastics to ensure it is applied), and lastly the ability to answer from their own perspective.

All we need is an agentic ability (with a sufficient context window) to iterate in perpetuity until it begins building a more complicated object representation of self (literally like a semantic representation or variable), and then it's aware/conscious.

(We're all only approximately aware).

But that's unnecessary for most things so I agree with you, more likely to be a tool as that's more efficient and useful.

fnordpiglet

10 hours ago

As someone who meditates daily with a vipassana practice I don’t specifically believe this, no. In fact in my hierarchy structured thought isn’t the pinnacle of awareness but rather a tool of the awareness (specifically one of the five aggregates in Buddhism). The awareness itself is the combination of all five aggregates.

I don't believe it's particularly mystical, FWIW, and it is rooted in our biology and chemistry. But the behavior and interactions of the awareness aren't captured in our training data, and the training data is only a small projection of the complex process of awareness. The idea that rational thought (a learned process, fwiw) and the ability to justify, etc., are somehow explanatory of our experience is simple to disprove - rational thought needs to be taught and isn't the natural state of man. See the current American political environment for a proof by example. I do agree that conscious thought is an illusion, though, in so far as it's a "tool" of the awareness for structuring concepts and solving problems that require more explicit state.

Sorry if this is rambling a bit; I'm in the middle of doing something else.

graypegg

10 hours ago

I think the best argument I have against AGI's inevitability is the fact that it's not required for ML tools to be useful. Very few things are improved by a generalist behind the wheel. "AGI" has sci-fi vibes around it, which I think is where most of the fascination comes from.

"ML getting better" doesn't *have to* mean further anthroaormorphization of computers, especially if say, your AI driven car is not significantly improved by describing how many times the letter s appears in strawberry or being able to write a poem. If a custom model/smaller model does equal or even a little worse on a specific target task, but has MUCH lower running costs and much lower risk of abuse, then that'll be the future.

I can totally see a world where anything in the general category of "AI" becomes more and more boring, up to a point where we forget that they're non-deterministic programs. That's kind of AGI? They aren't all generalists, and the few generalist "AGI-esque" tools people interact with on a day-to-day basis will most likely be intentionally underpowered for cost reasons. But they'll still probably be discussed like "the little people in the machine". Which is good enough.

klyrs

10 hours ago

The funny thing about me is that I'm down on GPTs and find their fanbase to be utterly cringe, but I fully believe that AGI is inevitable barring societal collapse. But then, my money's on societal collapse these days.

throw310822

10 hours ago

From the abstract of the actual paper:

> Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable.

Wow. So is this the subject of the paper? Like, this is a massive, fundamental result. Nope, the paper is about "Reclaiming AI as a Theoretical Tool for Cognitive Science".

"Ah and by the way we prove human-like AI is impossible". Haha. Gosh.

avazhi

11 hours ago

"unlikely to ever come to fruition" is more baseless than suggesting AGI is imminent.

I'm not an AGI optimist myself, but I'd be very surprised if a time traveller told me that mankind won't have AGI by, say, 2250.

oska

2 hours ago

The irony here, maybe unperceived by yourself, is that you're using one science fiction concept (time travel) to opine about the inevitability of another science fiction concept (artificial intelligence).

avazhi

an hour ago

How is that ironic? Time travel doesn’t exist and - as far as we understand physics currently - isn’t possible.

I don’t think any serious man would suggest that AGI is impossible; the debate really centres around the time horizon for AGI and what it will look like (that is, how will we know when we’re finally there).

In this case it was merely a rhetorical device.

oska

an hour ago

> I don’t think any serious man would suggest that AGI is impossible

Plenty of people would suggest that AGI is impossible, and furthermore, that taking the idea seriously (outside fiction) is laughable. To do so is a function of what I call 'science fiction brain', which is why I found it ironic that you used another device from science fiction to opine about its inevitability.

amelius

10 hours ago

Except by then mankind will be silicon based.

ivanrooij

10 hours ago

The short post is a press release. Here is the full paper: https://link.springer.com/article/10.1007/s42113-024-00217-5

Note: the paper grants computationalism and even tractability of cognition, and shows that nevertheless there cannot exist any tractable method for producing AGI by training on human data.

throw310822

3 hours ago

So can we produce AGI by training on human data + one single non-human datapoint (e.g. a picture)?

wrsh07

9 hours ago

Hypothetical situation:

Suppose in five or ten years we achieve AGI and >90% of people agree that we have AGI. What reasons do the authors of this paper give for being wrong?

1. They are in the 10% that deny AGI exists

2. LLMs are doing something they didn't think was happening

3. Something else?

throw310822

3 hours ago

Probably 1). LLMs have already shown that people can deny intelligence and human-like behaviour at will. When the AI works you can say it's just pattern matching, and when it doesn't you can always say it's making a mistake no human would ever make (which is bullshit).

Also, I didn't really parse the math but I suspect they're basing their results on AI trained exclusively on human examples. Then if you add to the training data a single non-human example (e.g. a picture) the entire claim evaporates.

oska

2 hours ago

> LLMs have already shown that people can deny intelligence and human-like behaviour at will

I would completely turn this around. LLMs have shown that people will credulously credit intelligence and 'human-like behaviour' to something that only presents an illusion of both.

throw310822

2 hours ago

And I suspect that we could disagree forever, whatever the level of the displayed intelligence (or the "quality of the illusion"). Which would prove that the disagreement is not about reality but only the interpretation of it.

oska

an hour ago

I agree that the disagreement (when it's strongly held) is about a fundamental disagreement about reality. People who believe in 'artificial intelligence' are materialists who think that intelligence can 'evolve' or 'emerge' out of purely physical processes.

Materialism is just one strain of philosophy about the nature of existence. And a fairly minor one in the history of philosophical and religious thought, despite it being somewhat in the ascendant presently. Minor because, I would argue, it's a fairly sterile philosophy.

blueboo

an hour ago

AGI is a black swan. Even as a booster and techno-optimist I concede that getting there (rhetorically) requires a first principles assumptions-scaffolding that relies on at-least-in-part-untested hypotheses. Proving its impossibility is similarly fraught.

Thus we are left in this noisy, hype-addled discourse. I suspect these scientists are pushing against some perceived pathological thread of that discourse…without their particular context, I categorize it as more of this metaphysical noise.

Meanwhile, let’s keep chipping away at the next problem.

jjaacckk

10 hours ago

If you define AGI as something that can do 100% of what a human brain can do, then surely we have to understand exactly how brains work? Otherwise you get a long string of 9s at best.

loa_in_

12 hours ago

AGI is about as far away as it was two decades ago. Language models are merely a dent, and probably will be the precursor to a natural language interface to the thing.

lumost

12 hours ago

It’s useful to consider the rise of computer graphics and cgi. When you first see CGI, you might think that the software is useful for general simulations of physical systems. The reality is that it only provides a thin facsimile.

Real simulation software has always been separate from computer graphics.

Closi

11 hours ago

We are clearly closer than 20 years ago - o1 is an order of magnitude closer than anything in the mid-2000s.

Also I would think most people would consider AGI science fiction in 2004 - now we consider it a technical possibility which demonstrates a huge change.

throw310822

9 hours ago

"Her" is from 2013. I came out of the cinema thinking "what utter bullshit, computers that talk like human beings, à la 2001" (*). And yes, in 2013 we weren't any closer to it than we were in 1968, when A Space Odyssey came out.

* To be precise, what seemed bs was "computers that talk like humans, and it's suddenly a product on the market, and you have it on your phone, and yet everyone around acts like it's normal and people still have jobs!" Ah, I've been proven completely wrong.

29athrowaway

11 hours ago

AGI is not required to transform society or to create a mess past the point of no return.

sharadov

12 hours ago

The current LLMs are just good at parroting, and even that is sometimes unbelievably bad.

We still have barely scratched the surface of how the brain truly works.

I will start worrying about AGI when that is completely figured out.

diob

11 hours ago

No need to worry about AGI until the LLMs are writing their own source.

Atmael

10 hours ago

the point is that agi may already exist and work with you and your environment

you just won't notice the existence of agi

there will be no press coverage of agi

the technology will just be exploited by those who have the technology

allears

10 hours ago

I think that tech bros are so used to the 'fake it till you make it' mentality that they just assumed that was the way to build AI -- create a system that is able to sound plausible, even if it doesn't really understand the subject matter. That approach has limitations, both for AI and for humans.

pzo

11 hours ago

So what? Current LLMs are already really useful and can still be improved for use in millions of robots that need to be good enough to handle many specialized but repetitive tasks - this alone would have a tremendous impact on the economy.

yourapostasy

10 hours ago

Peter Watts in Blindsight [1] puts forth a strong argument that self-aware cognition as we understand it is not necessarily required for what we ascribe to "intelligent" behavior. Thomas Metzinger contributed a lot to Watts's musings in Blindsight.

Even today, large proportions of unsophisticated and uninformed members of our planet's human population (like various aboriginal tribal members still living a pre-technological lifestyle), when confronted with ChatGPT's Advanced Voice option, will likely readily say it passes the Turing Test. With the range of embedded data, they may well say ChatGPT is "more intelligent" than they are. However, a modern-era person armed with ChatGPT on a robust device with unlimited power, but nothing else, would likely perish in short order trying to live off the land of those same aborigines, who possess far more intelligence for their contextual landscape.

If Metzinger and Watts are correct in their observations, then even if LLMs do not lead directly or indirectly to AGI, we can still get ferociously useful "intelligent" behaviors out of them, and be glad of it, even if they cannot (yet?) materially help us survive if we're dropped in the middle of the Amazon.

Personally, in my loosely-held opinion, the authors' assertion that "the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain" relies upon the foundational assumption that the process of "observe, learn and gain new insight" is based upon some mechanism other than the kind of encoding of data LLMs use, and I'm not familiar with any extant cognitive science research literature that conclusively shows that (citations welcome). For all we know, what we have with LLMs today is a necessary but not sufficient component supplying the "raw data" to a future system that produces the same kinds of insight, where variant timescales, emotions, experiences and so on bend the pure statistical token generation of today. I'm baffled by the absolutism.

[1] https://rifters.com/real/Blindsight.htm#Notes