southernplaces7
3 days ago
>Generative AI doesn't have a coherent understanding of the world
The very headline is based on a completely faulty assumption: that AI has any capacity for understanding at all, which it doesn't. That would require self-directed reasoning and self-awareness, both of which it lacks based on any available evidence (though there's no shortage of irrational defenders here who somehow leap to the claim that there's no difference between human consciousness and the pattern matching of today's AI technology, because they happened to have a "conversation" with ChatGPT, etc.).
_heimdall
2 days ago
> The very headline is based on a completely faulty assumption, that AI has any capacity for understanding at all, which it doesn't.
And this right here is why it's so frustrating to me that the term "AI" is used for LLMs. They are impressive for certain tasks, but they are nothing close to artificial intelligence and were never designed to be.
lemoncookiechip
2 days ago
> And this right here is why it's so frustrating to me that the term "AI" is used for LLMs. They are impressive for certain tasks, but they are nothing close to artificial intelligence and were never designed to be.
We've long debated whether animals are self-aware in the first place; based on studies, some seem to be and some don't. But all animals have intelligence and the ability to problem-solve to different degrees, and they most certainly learn based on experience and evolution.
Artificial intelligence means intelligence that was created artificially. How intelligent it is, and whether or not it is self-aware (it's not), is a different debate from "it's not intelligence".
ChatGPT put it best itself:
> Yes, I possess artificial intelligence, which enables me to process information, answer questions, solve problems, and engage in conversations. However, my intelligence is based on patterns and data, not personal experience or consciousness. Let me know how I can assist you!
AI isn't Skynet, Wintermute, or HAL 9000; those are fictional, and they're what we dub ASI (Super), not AGI (General) or ANI (Narrow). ASI is self-aware; the others are simple intelligence that can do one thing, or multiple things, to different degrees of quality compared to humans (not always better, not always worse).
So yes, LLMs have intelligence, and you could claim the same for other software. Whether or not they're equal or superior to humans is irrelevant, since animals are inferior to us in intelligence to different degrees.
At the end of the day, intelligence can mean different things to different people, so I can't just outright tell you that you're right or wrong; I can only express my viewpoint and why I think LLMs, as well as some other software, are AI.
EDIT: Should also note that words take on different meanings, and AI is an easy buzzword to use for the general public, so the word took on extra meaning, and it did so decades ago. We've been calling NPCs in video games AI for decades. There's likely other examples that don't come to mind where AI has been used far longer than LLMs.
mdp2021
a day ago
> There's likely other examples that don't come to mind where AI has been used far longer than LLMs
We started in 1956. https://en.wikipedia.org/wiki/Dartmouth_workshop
furyofantares
2 days ago
I love that "AI" is suddenly a high bar that the best AI technologies we have don't pass, when the field is 70 years old and includes, you know, straightforward search algorithms.
I mean, I get it. These things are mainstream and close enough in some ways to what we see in science fiction that the term evokes the sci-fi meaning of the term rather than the computer science meaning.
_heimdall
2 days ago
I can't speak to 70 years ago, but in my 30-ish years I don't think I've raised the bar of what I'd consider AI. I never thought of text prediction or autocomplete as AI, for example, though out of context I could see many people 70 years ago considering it AI.
I've always considered AI as a much more complex concept than simply appearing to be human in speech or text. Extremely complex prediction algorithms that appear on the surface to be creative wouldn't have met my bar either.
An LLM can predict how a human might write a white paper, for example. It will sound impressive, but the model and related algorithms seem to have absolutely no way of logically working through a problem, or of modelling the world to know whether the ideas proposed in the paper might work. It's a really impressive-sounding impersonator, but it is nothing like intelligence.
furyofantares
2 days ago
Have you been balking at video game AI this whole time? At chess engines when they were pure search? Or at chess engines now?
It's actually long been noted that the bar keeps rising: when we figure something out, it stops being called AI by a lot of people. I don't think that's necessarily individuals raising their bar, though, but stuff becoming more mainstream, or younger people being introduced to it.
_heimdall
2 days ago
I've never considered chess engines or video games to have AI, I guess in that sense you could say I balked at it.
> It's actually long been noted that the bar keeps rising, that when we figure something out it stops being called AI by a lot of people.
That's interesting; I've seen it the other way around. In the 90s I don't remember anything related to AI depicting something as simple as fancy text prediction or human-like speech. Going back further, HAL from 2001: A Space Odyssey didn't seem like an AI that was just predictive text.
It's always possible my expectations were born from science fiction and not academic research, but that's a pretty big gap if academics considered AI to require only natural language processing and text prediction.
MiguelX413
2 days ago
I also never considered chess engines or video games to have AI.
armada651
2 days ago
The bar keeps rising because the term used for it promises human-level intelligence from a machine.
The same thing could be said about Virtual Reality, where the bar rises each time the tech gets closer to making VR indistinguishable from reality.
Melonotromo
2 days ago
Explain to me how a normal human is intelligent then?
How often have you talked to someone, shown them evidence through logic or other means, and they just don't get it? Or just don't want to accept it?
Are people who do not properly self reflect, non-intelligent?
AI only needs to be as good as a human and cheaper
NBJack
2 days ago
Strangely enough, that is actually solid proof that they have a mental model; they just reject your attempts to amend it. This is where many assert that several fundamental decisions of that model are driven by the heart rather than the brain.
An LLM has no such concept. It is not truly capable of accepting your rational arguments; it may nod emphatically in its response after a few sentences, but once your input leaves its context window, all bets are off. We can retrain it on your particular conversation, and we will increase the likelihood that it will more readily agree with you, but that is likely a drop in the bucket of the larger corpus.
But this is well past technology and into philosophy, of which I can safely say my mental model for the topic would likely be rather frustrating to others.
_heimdall
2 days ago
> AI only needs to be as good as a human and cheaper
Setting aside the current limitations of LLMs for a moment, this is just a really unfortunate view of a new tech, in my opinion. It's nothing unique here, so I don't mean to single you out; I've seen it plenty of other places. The idea, though, that we effectively imply that most humans are idiots, and that we just need to invent something as good at being a functional idiot but for much less money, leaves humanity in a terrible place. What does the world look like if we succeed at that?
To your main question, though: I think differences in human opinions and ideas can easily be misconstrued as a lack of intelligence. I would also consider the environmental factors affecting the average person.
I'll stick with the US, as it's what I'm most familiar with. The average American has what I'd consider a high level of stress, chronic health conditions, one or more pharmaceutical prescriptions, and our food system is sorely lacking and borderline poisonous. With all those factors in play, I can't put too much weight behind my view of the average person's intelligence.
mdp2021
a day ago
> The average ... has what I'd consider a high level of stress, chronic health conditions, one or more pharmaceutical prescriptions, and our food system is sorely lacking and borderline poisonous. With all those factors in play I can't put too much weight behind my view of the average person's intelligence
Affluent societies that abandon an ideal of personal development are bound to forces that bring about a catastrophic configuration of society, with devastating effects: the absurdity, the risks and impact, and the diversion from propriety all merit focus.
I shall add: the Great Scarcity today is... Societies.
southernplaces7
2 days ago
For one thing, people often seem stupid to others not so much because they really are so, but because they don't happen to share another's specific context, views and opinions of certain things. Agreement is a terrible way to measure intelligence but so many of us do it.
Secondly, the very fact that you can't convince some people to "get it" (whatever the hell that thing may be, and regardless of whether you're the one who's actually failing to get something) is evidence of their self-directed reasoning. For better or worse, it's there, along with a sense of self that even the ostensibly dumbest person has, excepting maybe those with truly profound mental disabilities.
None of these things apply to LLMs, so why not drop the semantics about similarity, much less equivalence? Self-awareness is a sensation that you yourself feel every day as a conscious human being in your own right, and no evidence whatsoever indicates that any LLM has even a smidge of it.
mdp2021
a day ago
"Intelligence" is an ability: some have it, and in large amount and complexity, some don't; some use it regularly, intensively and/or extensively, some don't.
AI is a set of engineered tools and they need to be adequate for the job.
When we speak about general Intelligence, it will have to be acute and reliable. Surely the metric will have to be well above low bars for humans, already because for that we already have humans available, and because, as noted, that low bar level is easily not «adequate for the job».
mdp2021
2 days ago
"Understanding" in the article just means "building a true and coherent world model, a faithful representation". The LLM in question does not: e.g. it has some "belief" in entities (e.g. streets on a map) which are not there.
That kind of understanding has no relation with «self-directed reasoning and self-awareness».
sdwr
2 days ago
"Understanding" and "self-awareness" is as much a political tool as a practical definition. Pigs are self-aware, and understand, but we don't give them the space to express it.
AI is currently missing pieces it needs to be a "complete being", but saying there's nothing there is falling into the same reductive, "scientific" nitpicking that brought us "fish can't feel pain, they just look like it and act like it".
Glemitii
2 days ago
50% of people voted in a rapist.
Why? For sure not out of logic.
The assumption is that if you just read a lot of information, more than a normal human can ever read, you get something similar to what a human builds in their brain through living.
An LLM as the end result of an adult human's state.
It's not that far fetched.
Especially when you see how hard it is to discuss things purely logically, rather than through feelings, opinions, and the behavior patterns of others.