entropyneur
6 hours ago
This article seems to fall straight into the trap it aims to warn us about. All this talk about "true" understanding, embodiment, etc. is needless anthropomorphizing.
A much better framework for thinking about intelligence is simply as the ability to make predictions about the world (including conditional ones like "what will happen if we take this action"). Whether it's achieved through "true understanding" (however you define it; I personally doubt you can) or "mimicking" has no bearing on most of the questions about the impact of AI we are trying to answer.
keiferski
5 hours ago
It matters if your civilizational system is built on assigning rights or responsibilities to things because they have consciousness or "interiority." Intelligence fits here just as well.
Currently many of our legal systems are set up this way, if in a fairly arbitrary fashion. Consider for example how sentience is used as a metric for whether an animal ought to receive additional rights. Or how murder (which requires deliberate, conscious thought) is punished more harshly than manslaughter (which can be accidental or careless).
If we just treat intelligence as a descriptive quality and apply it to LLMs, we quickly realize the absurdity of saying a chatbot is somehow equivalent, consciously, to a human being. At least, to me it seems absurd. And it indicates the flaws of grafting human consciousness onto machines without analyzing why.
AIPedant
5 hours ago
"Making predictions about the world" is a reductive and childish way to describe intelligence in humans. Did David Lynch make Mulholland Drive because he predicted it would be a good movie?
The most depressing thing about AI summers is watching tech people cynically try to define intelligence downwards to excuse failures in current AI.
throwawayqqq11
3 hours ago
Well yes, any creation tries to anticipate some reaction, be it the audience's, the environment's, or only the creator's own.
A prediction is just a reaction to a present state, which is the simplest definition of intelligence: the ability to (sense and) react to something. I like to use this definition instead of "being able to predict" because it's more generic.
The more sophisticated (and directed) the reaction is, the more intelligent the system must be. Following this logic, even a traffic light is intelligent, at least more intelligent than a simple rock.
From that perspective, the question of why a creator produced a piece of art becomes unimportant for determining intelligence, since the simple fact that they did is already a sign of intelligence.
entropyneur
5 hours ago
> Did David Lynch make Mulholland Drive because he predicted it would be a good movie?
He made it because he predicted that it would have some effects enjoyable to him. Without knowing David Lynch personally, I can assume that he made it because he predicted other people would like it. Although of course, it might have been some other goal. But unless he was completely unlike anyone I've ever met, it's safe to assume that before he started he had a picture of a world with Mulholland Drive existing in it that is somehow better than the current world without it. He might or might not have been aware of it, though.
Anyway, that's too much analysis of Mr. Lynch. The implicit question is how soon an AI will be able to make a movie that you, AIPedant, will enjoy as much as you've enjoyed Mulholland Drive. And I maintain that how similar AI is to human intelligence, or how much "true understanding" it has, is completely irrelevant to answering that question.
whilenot-dev
2 hours ago
> how soon an AI will be able to make a movie that you, AIPedant, will enjoy as much as you've enjoyed Mulholland Drive
As it stands, AI is a tool and requires artists/individuals to initiate a process. How many AI-made artifacts do you know that enjoy the same cultural relevance as their human-made counterparts? Novels, music, movies, shows, games... anything?
You're arguing that the type of film camera plays some part in the significant identity that makes Mulholland Drive a work of art, and I'd disagree. While artists/individuals might gain cultural recognition, the tool on its own rarely will. A tool of choice can be an inspiration for a work and gain a certain significance (e.g. the Honda CB77 Super Hawk[0]), but it seems that people always strive to look for the human individual behind any process, as it is generally accepted that the complete body of works tells a different story than any one artifact ever can.
Marcel Duchamp's Readymade[1] (and the mere choice of the artist) gave momentum to this cultural shift more than a century ago, and I see similarities in economic and scientific efforts as well. Apple isn't Apple without the influence of a "Steve Jobs" or a "Jony Ive" - people are interested in the individuals behind companies and institutions, while at the same time tending to underestimate the number of individuals it takes to make any work an artifact - but that's a different topic.
If some future form of AI transcends into a sentient entity that isn't a plain tool anymore, I'd guess (in stark contrast to popular perception) we'll all lose interest rather quickly.
[0]: https://en.wikipedia.org/wiki/Honda_CB77#Zen_and_the_Art_of_...
gilleain
4 hours ago
> unless he was completely unlike anyone I've ever met,
I mean ... he is David Lynch.
We seem to be defining "predicted" to mean "any vision or idea I have of the future". Hopefully film directors have _some_ idea of what their film should look like, but that seems distinct from what they expect it will actually end up being.
MrScruff
5 hours ago
It may be reductive, but that doesn't make it incorrect. I would certainly agree that creating and appreciating art are highly emergent phenomena in humans (as is, for example, humour), but that doesn't mean I don't think they're rooted in fitness functions and our evolved brains' desire for approval from our tribal peer group.
Reductive arguments may not give us an immediate forward path to reproducing these emergent phenomena in artificial brains, but it's also the case that emergent phenomena are by definition impossible to predict - I don't think anyone predicted the current behaviours of LLMs, for example.
simianwords
5 hours ago
"David Lynch made Mullholland Drive because he was intelligent" is also absurd.
peterashford
5 hours ago
But "An intelligent creature made Mullholland Drive" is not
pu_pe
5 hours ago
How would you define intelligence? Surely not by the ability to make a critically acclaimed movie, right?
koonsolo
5 hours ago
I look at it the complete opposite way: humans are defining intelligence upwards to make sure they can perceive themselves as better than a computer.
It's clear that humans consider humans as intelligent. Is a monkey intelligent? A dolphin? A crow? An ant?
So I ask you, what is the lowest form of intelligence to you?
(I'm also a huge David Lynch fan by the way :D)
AIPedant
4 hours ago
If you look at my comment history you will see that I don't think LLMs are nearly as intelligent as rats or pigeons. Rats and pigeons have an intuitive understanding of quantity and LLMs do not.
I don't know what "the lowest form of intelligence" is; nobody has a clue what cognition means in lampreys and hagfish.
peterashford
5 hours ago
I'm not sure what that gets you. I think most people would suggest that it appears to be a sliding scale: humans, dolphins/crows, ants, etc. What does that get us?
koonsolo
5 hours ago
Well, is an LLM more intelligent than an ant?
DonaldFisk
2 hours ago
I think that intelligence requires, or rather, is the development and use of a model of the problem while the problem is being solved, i.e. it involves understanding the problem. Accurate predictions, based on extrapolations made by systems trained using huge quantities of data, are not enough.
ACCount37
4 hours ago
From a practical standpoint, all the talk of "true understanding", "sentience" and the likes is pointless.
The only real and measurable thing is performance. And the performance of AI systems only goes up.
vrighter
3 hours ago
But it only goes up in the sense that it's getting closer to a horizontal asymptote, which is not really that good.
ACCount37
3 hours ago
It does, but the limit isn't "human performance". AI isn't bounded by human performance. The limit is the saturation of the benchmark in question.
Which is solvable with better benchmarks.
cantor_S_drug
6 hours ago
Imagine an LLM is conscious (as Anthropic wants us to believe). Imagine the LLM is made to train on far more data than its parameter count allows for. Am I hurting the LLM by causing it intensive cognitive strain?
entropyneur
5 hours ago
I agree that whether AI is conscious is an important question. In fact, I think it's the most important question, above even our own existential crisis. Unfortunately, it's also completely hopeless at our current level of knowledge.
adastra22
6 hours ago
Why would that hurt?
cantor_S_drug
an hour ago
You are made to memorize an entire encyclopedia, but you have a biological limit of only 1000 facts.
wagwang
6 hours ago
Predict and create, that's all that matters.