slibhb
10 months ago
LLMs are statistical models trained on human-generated text. They aren't the perfectly logical "machine brains" that Asimov and others imagined.
The upshot of this is that LLMs are quite good at the stuff he thought only humans would be able to do. What they aren't so good at (yet) is really rigorous reasoning, exactly the opposite of what 20th-century people assumed.
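To make "statistical model" concrete: the crudest version of the idea is a bigram model, which just samples whichever word tended to follow the previous word in its training text. A toy sketch in Python (real LLMs are transformers conditioning on the whole context, not word-pair counts, but the "predict the next token from observed statistics" framing is the same; the corpus here is obviously made up):

    import random
    from collections import Counter, defaultdict

    corpus = ("the robot washed the dishes and the robot "
              "folded the laundry and the human made art").split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev):
        options = follows[prev]
        if not options:  # `prev` only ever appeared at the end of the corpus
            return None
        # Sample in proportion to observed frequency: that's the whole "model".
        return random.choices(list(options), weights=list(options.values()))[0]

    word, out = "the", ["the"]
    for _ in range(10):
        word = next_word(word)
        if word is None:
            break
        out.append(word)
    print(" ".join(out))

Everything this toy can ever say is a recombination of its training text. Scale the same idea up by many orders of magnitude and you get something far stranger and more capable, but still nothing like Asimov's perfectly logical machine brains.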
beloch
10 months ago
What we used to think of as "AI" at one point in time becomes a mere "algorithm" or "automation" by another point in time. A lot of what Asimov predicted has come to pass, very much in the way he saw it. We just no longer think of it as "AI".
LLMs are just the latest form of "AI" that, for a change, doesn't quite fit Asimov's mold. Perhaps it's because they're being designed to replace humans in creative tasks rather than liberate humans to pursue them.
israrkhan
10 months ago
Exactly... as someone said, "I need AI to do my laundry and dishes, while I focus on art and creative stuff." But AI is doing the exact opposite, i.e. creative stuff (drawing, poetry, coding, document creation, etc.), while we are left to do the dishes and laundry.
TheOtherHobbes
10 months ago
As someone else said - maybe you haven't noticed but there's a machine washing your clothes, and there's a good chance it has at least some very basic AI in it.
It's been quite a while since anyone in the developed world has had to wash clothes by slapping them against a rock while standing in a river.
Obviously this is really wishing for domestic robots, not AI, and robots are at least a couple of levels of complexity beyond today's text/image/video GenAI.
There were already huge issues with corporatisation of creativity as "content" long before AI arrived. In fact one of our biggest problems is the complete collapse of the public's ability to imagine anything at all outside of corporate content channels.
AI can reinforce that. But - ironically - it can also be very good at subverting it.
hn_throwaway_99
10 months ago
> As someone else said - maybe you haven't noticed but there's a machine washing your clothes, and there's a good chance it has at least some very basic AI in it.
This really seems like an "akshually" argument to me...
Nobody is denying that there are dishwashers and washing machines, and that they are big time savers. But is it really a mystery what people are referring to when they say "I want AI to wash my dishes and do my laundry"? That is, I still spend hours doing the dishes and laundry every week, and I have a dishwasher and washing machine. But I still want something to fold my laundry, something that lets me just dump my dishes in the sink and have them come out clean, ideally put away in the cabinets.
> Obviously this is really wishing for domestic robots, not AI
I don't mean this to be an "every Internet argument is over semantics" example, but literally every company and team I know that's working on autonomous robots refers to them heavily as AI. And there is a fundamental difference between "old school" robotics, i.e. robots following procedural instructions, and robots that use AI-based models, e.g. https://deepmind.google/discover/blog/gemini-robotics-brings... . I think it's doubly weird that you say today's washing machines have "at least some very basic AI" in them (I think "very basic" is doing a lot of heavy lifting there...), but don't think AI refers to autonomous robots.
lannisterstark
10 months ago
> I still spend hours doing the dishes and laundry every week, and I have a dishwasher and washing machine.
I don't mean to sound insensitive, but, how? Literal hours?
Qworg
10 months ago
The wits in robotics would say we already have domestic robots - we just call them dishwashers and washing machines. Once something becomes good enough to take the job completely, it gets the name and drops "robotic" - that's why we still have robotic vacuums.
tshaddox
10 months ago
I think that’s a bit silly. The reason we don’t commonly refer to a dishwasher as a robot isn’t because dishwashers exist and we only use “robot” for things that don’t exist.
(This should already be clear given that robots do exist, and we do call them robots, as you yourself noted, but never mind that for now.)
It’s not even about the level of mechanical or computational complexity. Automobiles have a lot of mechanical and computational complexity, but also aren’t called robots (ignoring of course self-driving cars).
Qworg
10 months ago
What is or isn't a robot is a point of debate; there are many different definitions.
Generally, it has to automate a task with some intelligence, so dishwashers qualify. It isn't an existence proof (nor did I state that).
tshaddox
10 months ago
I'm more interested in how we regularly use the term, rather than how we might attempt to come up with a rigorous definition (particularly when that rigorous definition conflicts awkwardly with regular usage).
My point is simply that we absolutely do not refer to a home dishwasher as a robot. Nor an old thermostat with a bimetallic strip and a mercury switch. Nor even a normal home PC.
mylittlebrain
10 months ago
Similarly, we already have AI, which is really MI (Machine Intelligence). Long before the current hype cycle, the defense industry and others were using the same tools being applied now. Of course, there are differences, such as scale and architecture.
j_bum
10 months ago
Oh that’s an interesting idea.
I know I could google it, but I wonder if washing machines were originally called "automatic clothes washers" or something similar before they became widely adopted.
tshaddox
10 months ago
> maybe you haven't noticed but there's a machine washing your clothes
Well sure, there’s also a computer recording, storing, and manipulating the songs I record and the books I write. But that’s not what we mean by “AI that composes music and writes books.”
This isn’t a quibble about the term “AI.” It’s simply clear from context that we’re talking about full automation of these tasks initiated by nothing more than a short prompt from the human.
bdhcuidbebe
10 months ago
> there's a machine washing your clothes, and there's a good chance it has at least some very basic AI in it.
The term AI clearly has lost all its meaning, so thank you for making it so apparent.
GeoAtreides
10 months ago
>there's a good chance it has at least some very basic AI in it.
lol no, what it has is a finite state machine; you don't want undefined or novel behaviour in consumer appliances
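For what it's worth, the controller in question is tiny and entirely enumerable; something like this sketch (state and event names invented for illustration):

    # A wash cycle as a plain finite state machine: every state and
    # transition is fixed at design time, so the appliance can never
    # do anything its designers didn't write down.
    TRANSITIONS = {
        ("idle",  "start"):      "fill",
        ("fill",  "tub_full"):   "wash",
        ("wash",  "timer_done"): "drain",
        ("drain", "tub_empty"):  "spin",
        ("spin",  "timer_done"): "idle",
    }

    def step(state, event):
        # Unknown (state, event) pairs are simply ignored: no undefined behaviour.
        return TRANSITIONS.get((state, event), state)

    state = "idle"
    for event in ["start", "tub_full", "timer_done", "tub_empty", "timer_done"]:
        state = step(state, event)
        print(event, "->", state)

No learning, nothing statistical, which is exactly the point: you can verify every behaviour before it ships.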
bad_user
10 months ago
I have yet to enjoy any of the "creative" slop coming out of LLMs.
Maybe some day I will, but I find it hard to believe, given that an LLM just copies its training material. All the creativity comes from the human input, and even though people can now cheaply copy the style of actual artists, that doesn't mean they can make it work.
Art is interesting because it is created by humans, not despite it. For example, poetry is interesting because it makes you think about what the author meant. With LLMs there is no author, which makes those generated poems garbage.
I'm not saying that it can't work at all; it can, but not in the way people think. I subscribe to George Orwell's dystopian view from 1984, which already imagined the "versificator".
ChrisMarshallNY
10 months ago
> I have yet to enjoy any of the "creative" slop coming out of LLMs.
Oh, come on. Who can't love the "classic" song, I Glued My Balls to My Butthole Again[0]?
I mean, that's AI "creativity," at its peak!
[0] https://www.youtube.com/watch?v=wPlOYPGMRws (Probably NSFW)
bad_user
10 months ago
I don't find that very funny. It's interesting to see what AI can do, but wait a month or two and watch it again.
Compare that to the parodies made by someone like "Weird Al" Yankovic. And I get that these tools will get better, but the best parodies work due to the human performer. They are funny because they aren't fake.
This goes for other art forms. People mention photography a lot, comparing it with painting. Photography works because it captures a real moment in time and space; it works because it's not fake. Painting also works because it shows what human imagination and skill with brushes can do. When it's fake (e.g., not made by a human painting with brushes on canvas, but by a Photoshop filter), it's meaningless.
ChrisMarshallNY
10 months ago
Seems that you may have a point. As noted in another comment[0], the [rather puerile] lyrics were completely bro-sourced. They used Suno to mimic an old-style band.
ninkendo
10 months ago
I haven’t cried from laughing like this in a good while, thanks!
codethief
10 months ago
Apparently, the lyrics were not AI-generated, see https://www.reddit.com/r/Music/comments/1byjm7m/comment/l0wm...
ChrisMarshallNY
10 months ago
Good find!
A friend demoed Suno to me, a couple of days ago, and it did generate lyrics (but not NSFW ones).
schwartzworld
10 months ago
We thought machines were gonna do the work so we could pursue art and music. Instead, the machines get to make the art and music, while humans work in the Amazon warehouses.
aaronbaugher
10 months ago
It was kind of funny to see the shift in the media reaction when they realized the new batch of machines are better at replacing writers than at replacing truckers.
__MatrixMan__
10 months ago
We ended up here because we have a propensity to share our creative outputs, and keep our laundry habits to ourselves. If we instead went around bragging about how efficiently we can fold a shirt, complete with mocap datasets of how it's done, we'd have gotten the other kind of AI first.
hn_throwaway_99
10 months ago
> We ended up here because we have a propensity to share our creative outputs, and keep our laundry habits to ourselves
Somehow I doubt that the reason gen AI is way ahead of laundry-folding robots is that how to fold a shirt is some kind of big secret, or that there aren't enough examples of shirt folding.
Manipulating a physical object like a shirt (especially a shirt or other piece of cloth, as opposed to a rigid object) is orders of magnitude more complex than completing a text string.
__MatrixMan__
10 months ago
If you wanted finger-positioning data for how millions of different people fold thousands of different shirts, where would you go looking for that dataset?
My point is just that the availability of training data is vastly different between these cases. If we want better AI we're probably going to have to generate some huge curated datasets for mundane things that we've never considered worth capturing before.
It's an unfortunate quirk of what we decide to share with each other that has positioned AI to do art and not laundry.
protocolture
10 months ago
The bottom line from Kasparov's book on AI was that AI researchers want to pursue AGI, but every decade they are forced to release something to generate revenue, and it gets branded as AI until the next time.
And often they get so caught up supporting the latest fake-AI craze that they don't get to research AGI.
Lerc
10 months ago
"LLMs are statistical models"
I see this referenced over and over again to trivialise AI, as if it were a fait accompli.
I'm not entirely sure why invoking statistics is supposed to be a rebuttal. Putting aside the fact that LLMs are not purely statistics: even if they were, what proof is there that you cannot make a statistical intelligent machine? It would not at all surprise me to learn that someone has made a purely statistical Turing-complete model. To then argue that it couldn't think is to say that computers can never think, and given that we do think, that amounts to invoking a soul, God, or Penrose.
lelandbatey
10 months ago
In this one case it's not meant to trivialise; it's meant to point out that LLMs don't behave the way we thought AI would behave. We thought we'd have 100% logically sound thinking machines, because we built them on top of digital logic. We thought they'd be obtuse, "book smart but not wise". LLMs are just different from that: hallucinations, the whole "fancy words and great sentences but no substance to a paragraph" thing, all of that is different from the rigid but perfect brains we thought AI would bring. That's what "statistical machine" seems to be trying to point out.
It was assumed that if you asked the same AI the same question, you'd get the same answer every time. But that's not how LLMs work (I know you can seed them identically every time and get the same output, but we don't do that, so how we experience them is different).
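Roughly, the knob being described works like this (illustrative probabilities, not from any real model):

    import random

    # Hypothetical next-token distribution a model might output for some prompt.
    probs = {"logical": 0.42, "statistical": 0.35, "magical": 0.23}

    def greedy(probs):
        # "Temperature 0": always take the most likely token,
        # so the same prompt yields the same answer every run.
        return max(probs, key=probs.get)

    def sample(probs, rng):
        # What chat products typically do: draw from the distribution,
        # so answers vary run to run unless the RNG seed is pinned.
        return rng.choices(list(probs), weights=list(probs.values()))[0]

    print(greedy(probs))                     # deterministic
    print(sample(probs, random.Random()))    # varies across runs
    print(sample(probs, random.Random(42)))  # reproducible with a fixed seed

Since the deployed products keep sampling turned on, the nondeterminism is what everyone actually experiences.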
Lerc
10 months ago
That's a very archaic view of AI, like '70s-era symbolic AI.
vacuity
10 months ago
Personally, I have a negative opinion of LLMs, but I agree completely. Many people are motivated to reject LLMs solely because they see them as "soulless machines". Judge based on the facts of the matter, and make your values clear if you must bring them into it, but don't pretend you're not applying values when you are. You can do worse: kneejerk emotional reactions are just pointless.
slibhb
10 months ago
I did not in any way "trivialise AI". LLMs are amazing and a massive accomplishment. I just wanted to contrast them to Asimov's conception of AI.
> I'm not entirely sure why invoking statistics is supposed to be a rebuttal. Putting aside the fact that LLMs are not purely statistics: even if they were, what proof is there that you cannot make a statistical intelligent machine? It would not at all surprise me to learn that someone has made a purely statistical Turing-complete model. To then argue that it couldn't think is to say that computers can never think, and given that we do think, that amounts to invoking a soul, God, or Penrose.
I don't follow this. I don't believe that LLMs are capable of thinking. I don't believe that computers, as they exist now, are capable of thinking (regardless of the program they run). I do believe that it is possible to build machines that can think -- we just don't know how.
To me, the strange move you're making is assuming that we will "accidentally" create thinking machines while doing AI research. On the contrary, I think we'll build thinking, conscious machines after understanding our own consciousness, or at least the consciousness of other animals, and not before.
Lerc
10 months ago
>I did not in any way "trivialise AI". LLMs are amazing and a massive accomplishment. I just wanted to contrast them to Asimov's conception of AI.
Point taken. As lelandbatey said, your comment seems to be the one case where it's not meant to trivialise.
>I don't believe that LLMs are capable of thinking. I don't believe that computers, as they exist now, are capable of thinking (regardless of the program they run). I do believe that it is possible to build machines that can think -- we just don't know how.
The "(regardless of the program they run)" suggests you think that AI cannot be achieved by algorithmic means. That runs a little counter to the belief that it is possible to build thinking machines, unless you think those future machines will have some non-algorithmic enhancement that takes them beyond machines.
I do not assume we will "accidentally" create thinking machines, but I certainly think it's not impossible.
On the other hand I suspect the best chance we have of understanding consciousness will be by attempting to build one.
BeetleB
10 months ago
Reminds me of an old math professor I had. Before word processors, he'd write up the exam on paper, and the department secretary would type it up.
Then, when word processors came around, it was expected that faculty members would type it up themselves.
I don't know if there were fewer secretaries as a result, but professors' lives got much worse.
He misses the old days.
zusammen
10 months ago
To be truthful, though, that’s only like 0.01 percent of the “academia was stolen from us and being a professor (if you ever get there at all) is worse” problem.
jhbadger
10 months ago
This wasn't just an "academia" thing, though. All business executives (even low-level ones) had secretaries in the 1980s and earlier, too. Typing wasn't something most people could do, and it was seen as a waste of their time to learn. So people dictated letters to secretaries, who typed them. After personal computers became popular, it simply became part of everyone's job to type their own correspondence, and secretaries (greatly reduced in number and rebranded as "assistants" who deal more with planning meetings and the like) became limited to upper management.
Balgair
10 months ago
[flagged]
n4r9
10 months ago
I've only read the first Foundation novel by Asimov. But what you write applies equally well to many other Golden Age authors e.g. Heinlein and Bradbury, plus slightly later writers like Clarke. I doubt there was much in the way of autism awareness or diagnosis at the time, but it wouldn't be surprising if any of these landed somewhere on the spectrum.
Alfred Bester's "The Stars My Destination" stands out as a shining counterpoint in this era. You don't get much character development like that in other works until the sixties imo.
throwanem
10 months ago
Heinlein doesn't develop his characters? Oh, come on. You can't have read him at all!
n4r9
10 months ago
[The italics and punctuation suggest your comment is sarcastic, but I'm going to treat it as serious just in case.]
Yeah, I'd say characterisation is a weakness of his. I've read Stranger in a Strange Land, The Moon is a Harsh Mistress, Starship Troopers, and Double Star. Heinlein does explore characters more than, say, Clark, but he doesn't go much for internal change or emotional growth. His male characters typically fall into one of two cartoonish camps: either supremely confident, talented, intelligent and independent (e.g. Jubal, Bernardo, Mannie, Bonforte...) or vaguely pathetic and stupid (e.g. moon men). His female characters are submissive, clumsily sexualised objects who contribute very little to the plot. There are a few partial exceptions - e.g. Lorenzo in Double Star and female pilots in Starship Troopers - but the general atmosphere is one of teenage boy wish fulfilment.
throwanem
10 months ago
Excuse me for giving the impression of a pedant, but do you mean Clarke, as in Arthur C., there? I've been trying since I first read your comment to puzzle out to whom by that name you could possibly be referring in this context, and it's only just dawned on me to wonder if you simply have not bothered to learn the spelling of the name you intended to mention.
n4r9
10 months ago
Yes, that Clarke. Sorry for putting you to the extra effort. I spelled it correctly in the initial post you replied to. Guess I assumed that people would spot the back-reference.
throwanem
10 months ago
> Yes, that Clarke. Sorry for putting you to the extra effort. I spelled it correctly in the initial post you replied to. Guess I assumed that people would spot the back-reference.
In entire fairness, I was distracted by you having said he and his contemporaries must all have been autistic, as if either you yourself were remotely competent to embark upon any such determination, or as though it would in some way indict their work if they were.
I'm sure you would never in a million years dare utter "the R-slur" in public, though I would guess that in private the violation of taboo is thrilling. That's fine as far as it goes, but you really should not expect to get away with pretending you can just say "autistic" to mean the same thing and have no one notice, you blatantly obvious bigot.
throwanem
10 months ago
Thank you for confirming, especially at such effort, when a simple "No, I haven't; I just spend too much time uncritically reading feminism Twitter," would have amply sufficed. There's an honesty to this response in spite of itself, and in spite of itself I respect that.
Balgair
10 months ago
I sincerely have no idea if any of your comments in this thread are sarcastic or not. (This comment is also not sarcastic FYI).
Generally, I also agree that Heinlein's characters are one-dimensional and could benefit from greater character growth, though that was a bit of a hallmark of Golden Age sci-fi.
throwanem
10 months ago
"Teenage boy wish fulfillment" is well beneath any reasonable standard of criticism, and I've addressed that with about as much respect as it deserves.
There is much worthy of critique in Heinlein, especially in his depiction of women. I've spent about a quarter century off and on both reading and formulating such critiques, much more recently than I've spent meaningful time with his fiction. I've also read what he had to say for himself before he died, and what Mrs. Heinlein - she kept the name - said about him after. If we want to talk about, for example, how the themes of maternal incest and specifically feminine embodiment of weakly superhuman AGI in his later work reflect a degree of senescence and the wish for a supercompetent maternal figure to whom to surrender the burden of responsibility, or if we want to talk about how Heinlein seems to spend an enormous amount of time just generally exploring stuff from female characters' perspectives that an honest modern inquiry would recognize as fumbling badly but earnestly in the direction of something like a contemporary understanding of gender, then we could talk about that.
No one wants to, though. You can't use anything like that as a stick to beat people with, so it never gets a look in, and those as here who care nothing for anything of the subject save if it looks serviceable as a weapon claim to be the only ones in the talk who are honest. They don't know the man's work well enough to talk about the years he spent selling stories that absolutely revolve around character development, which exist solely to exemplify it! Of course these are universally dismissed as his 'juveniles' - a few letters shy of 'juvenilia' - because science fiction superfans are all children and so are science fiction superhaters, neither of whom knows how to respond in any way better than a tantrum on the rare occasion of being told bluntly it's well past time they grew up.
But they're the honest ones. Why not? So it goes. It's a conversation I know better than to try to have, especially on Hacker News; if I don't care for how it's proceeding, I've no one but myself to blame.
n4r9
10 months ago
Not sure if it will help me saying this, but that's a disappointingly dismissive and avoidant response well below HN standards. I'm very willing to engage with any counter-arguments in good faith. I don't use Twitter (or Mastodon, or BlueSky, or TikTok, or Facebook, or Threads etc...), but I do enjoy discussing sci fi of different periods on Goodreads groups.
throwanem
10 months ago
It seems filthy rich of you to claim good faith at this time, but I have recently begun to gather that in some quarters lately, it is considered offensively unreasonable to expect working knowledge of any material as a prerequisite for participating competently in discussion thereof. So though your claim is facially false, I ironically can't fairly consider that it is other than honestly made. Your precepts are in any case your problem. Good luck with it, you Hacker News expert.
n4r9
10 months ago
I've now gone through two well-known pieces of critical analysis of Heinlein's work, and found that they broadly (and in places exactly) agree with my initial sentiment. Far from "feminist Twitter", they were written by serious science fiction critics during Heinlein's lifetime. Below are some quotations and references. I have yet to find compelling evidence to the contrary in my research.
------------------------------------------------------------------------
> Heinlein's male characters may be divided into two categories: the competent and the incompetent. The incompetents are of little use in the practical world. They function mainly as caricatures for purposes of contrast, satire, and humor, and include such types as the spoiled brat, the jellyfish father, the pompous blowhard, and the bungling meddler. The competent male characters are divided into two types: the stock competent and the Heinlein hero.
> There are a goodly share of failures due to Heinlein's discomfort, and his subsequent exclusion of emotions and arm's-length distancing of the intimate.
> The Heinlein heroine ... stands as a—pardon the expression—male chauvinist tribute to the hero, implying that women—even such as the heroine—enjoy being dominated. ... Heinlein himself could not break away from his own emotional attachment to the obedient female.
Ronald Sarti. “Variations on a Theme: Human Sexuality in the Work of Robert A. Heinlein.” (1978)
https://www.enotes.com/topics/robert-heinlein-61736/criticis...
------------------------------------------------------------------------
> The Moon Is a Harsh Mistress is totally a story of process rather than character. Heinlein has always been more interested in how machines and societies work than in why people act, and this is probably more true of this novel than any of his others. And it is the center of what is wrong with it as a story ... Heinlein has always had a weakness for forcing emotion, possibly because his characters themselves are unemotional. When Heinlein wants us to approve a character or a position, or to feel moved, instead of giving us a natural emotional reason growing out of the story or, alternatively, underplaying, he is all too likely to try to find a button in us to push.
> Heinlein's concern with his religion [in Stranger In A Strange Land] is so great, unfortunately, that he lets all character development go hang. Mike Smith is lessened by his super powers. ... Jubal Harshaw, too, is lessened by his super powers -- doctor, lawyer, etc.; his multiple training seems a gratuitous gift from Heinlein without reason or explanation. He redeems himself somewhat by his crusty nature, but I find him suspect. He is too pat. Some of the minor characters have life at the beginning of the story and then lose it, overcome by the flood of talk that engulfs the last half of the novel. Which secretary sleeps with Mike his first time out? They are so lacking in definition that it is impossible to tell. Jill Boardman supposedly loves Ben Caxton, but won't sleep with him. She will, however, go off around the country with Mike on a sleep-in basis. Why? I can't say. At any time it would not surprise me for her to unscrew her foot and stick it in her ear -- she is capable of anything. Ben Caxton's motivations are equally unclear.
> Basically, Heinlein has used the same general characters in story after story, and has kept these characters limited ones. ... There is one unique and vivid human Heinlein character, but he is a composite of Joe-Jim Gregory, Harriman, Waldo, Lazarus Long, Mr. Kiku and many others, rather than any one individual. I call the composite the Heinlein Individual. ... Outside of this Heinlein Individual, there is usually a small supporting cast of side men in any one book. Their most striking feature is their competence, reflecting that of the Heinlein Individual. Beyond that, however, hardly any attempt is made to individualize them, for, after all, they are no more than supporting characters, and if lead characters are not described, what can be expected for less important players? After this small circle, Heinlein ordinarily relies on caricature, and he has a number of set pieces which he produces as needed.
Alexei Panshin. "Heinlein In Dimension" (1968)
https://www.panshin.com/critics/Dimension/hdcontents.html#Co...
wubrr
10 months ago
> LLMs are statistical models trained on human-generated text.
I mean, not only human-generated text. Also, human brains are arguably statistical models trained on human-generated/collected data as well...
slibhb
10 months ago
> Also, human brains are arguably statistical models trained on human-generated/collected data as well...
I'd say no, human brains are "trained" on billions of years of sensory data. A very small amount of that is human-generated.
wubrr
10 months ago
Almost everything we learn in schools, universities, most jobs, history, news, Hacker News, etc. is literally human-generated text. Our brains have an efficient structure for learning language, which has evolved over time, but the process of actually learning a language happens after you are born, based on human-generated text and speech. Things like balance and walking, motion control, and speaking (physical voice control) are trained on sensory data, but there's no reason LLMs/AIs can't be trained on similar data (and in many cases they already are).
skydhash
10 months ago
What we generate is probably a function of our sensory data plus what we call creativity. At least humans still have access to the sensory data, so we can separate the two (with varying success).
LLMs have access to what we generate, but not to the source. So they embed how we may use words, but not why we use this word and not others.
wubrr
10 months ago
> At least humans still have access to the sensory data
I don't understand this point - we can obviously collect sensory data and use that for training. Many AI/LLM/robotics projects do this today...
> So they embed how we may use words, but not why we use this word and not others.
Humans learn language by observing other humans use language, not by being taught explicit rules about when to use which word and why.
skydhash
10 months ago
> I don't understand this point - we can obviously collect sensory data and use that for training.
Sensory data is not the main issue; how we interpret it is.
In Jacob Bronowski's The Origins of Knowledge and Imagination, IIRC, there's an argument that our eyes are very coarse sensors: they do basic analysis, from which the brain infers the real world around us, together with data from other organs. Like Plato's cave, but with many more dimensions.
But we humans all come with the same mechanisms, which roughly interpret things the same way. So there's some commonality in the final interpretation.
> Humans learn language by observing other humans use language, not by being taught explicit rules about when to use which word and why.
Words are symbols that refer to things and the relations between them. In the same book, there's a rough explanation of language which describes the three elements that define it: symbols (or terms), the grammar (the rules for using the symbols), and a dictionary which maps the symbols to things, and the rules to interactions, in another domain that we already accept as truth.
Maybe we are not taught the rules explicitly, but there's a lot of training done with corrections when we say a sentence incorrectly. We also learn the symbols and the dictionary as we grow and explore.
So LLMs learn the symbols and the rules, but not the whole dictionary. They can use the rules to create correct sentences, and relate some symbols to others, but ultimately there's no dictionary behind it.
jstanley
10 months ago
> there's an argument that our eyes are very coarse sensors: they do basic analysis, from which the brain infers the real world around us, together with data from other organs
I don't buy it. I think our eyes are approximately as fine as we perceive them to be.
When you look through a pair of binoculars at a boat and some trees on the other side of a lake, the only organ that's getting a magnified view is the eyes, so any information you derive comes from the eyes and your imagination, it can't have been secretly inferred from other senses.
wubrr
10 months ago
> In the same book, there's a rough explanation of language which describes the three elements that define it: symbols (or terms), the grammar (the rules for using the symbols), and a dictionary which maps the symbols to things, and the rules to interactions, in another domain that we already accept as truth.
There are two types of grammar for natural language: descriptive (how the language actually works and is used) and prescriptive (a set of rules about how the language should be used). There is no known complete and consistent rule-based grammar for any natural human language; all such grammars are based on some person or people, in a particular period of time, selecting a subset of the real descriptive grammar of the language and saying "this is the better way". Prescriptive, rule-based grammar is not at all how humans learn their first language, nor is prescriptive grammar generally complete or consistent. Babies can easily learn any language, even ones that have no prescriptive grammar rules, just by observing; many studies have confirmed this.
> there's a lot of training done with corrections when we say a sentence incorrectly.
There's a lot of the same training for LLMs.
> So LLMs learn the symbols and the rules, but not the whole dictionary. They can use the rules to create correct sentences, and relate some symbols to others, but ultimately there's no dictionary behind it.
LLMs definitely learn "the dictionary" (more accurately, a set of relations/associations between words and other types of data), and much better than humans do; not that such a "dictionary" is an actual, discrete part of the human brain anyway.
throwaway7783
10 months ago
One can look at creativity as the discovery of a hitherto unknown pattern in a very large space of patterns.
There's no reason to think an LLM (a few generations down the line, if not now) cannot do that.
skydhash
10 months ago
Not really; sometimes it's just plausible lies. We distort the world while respecting some basic rules, which makes it believable. Another difference from LLMs is that we can store this distortion and rely on it as $TRUTH.
And we can distort quite far (see cartoons in drawing, dubstep in music,...)
throwaway7783
10 months ago
What you are saying does not seem to contradict what I'm saying. Any distortion would be another hitherto unknown pattern.
827a
10 months ago
Maybe; at some level, are dogs' brains also simple sensory-collecting statistical models? A human baby and a dog are born on the same day; the dog never leaves the baby's side for 20 years. It sees everything the baby sees, it hears everything the baby hears, and it is given the opportunity to interact with its environment in roughly the same way, to the degree to which they are both physically capable. The intelligence differential after that time will still be extraordinary.
My point in bringing up that metaphor is to focus the analogy: when people say "we're just statistical models trained on sensory data", we tend to focus way too much on the "sensory data" part, which has led, for example, to AI manufacturers investing billions of dollars into slurping up as much human intellectual output as possible to train "smarter" models.
The focus on the sensory input inherently devalues our quality of being; it implies that who we are is predominantly explicable by the world around us.
However, we should be focusing on the "statistical model" part: even if it is accurate to holistically describe the human brain as a statistical model trained on sensory data (which I have doubts about, but those are fine to leave to the side), it's very clear that the fundamental statistical model itself is simply so far superior in human brains that comparing it to an LLM is like comparing us to a dog.
It should also be a focal point for AI manufacturers and researchers. If you are on the hunt for something along the spectrum of human-level intelligence, and during this hunt you provide it ten thousand lifetimes of sensory data to produce something that, maybe, if you ask it right, behaves similarly to a human who has trained in the domain for only a few years, you're barking up the wrong tree. What you're producing isn't even on the same spectrum; that doesn't mean it isn't useful, but it's not human-like intelligence.
wubrr
10 months ago
Well the dog brain and human brain are very different statistical models, and I don't think we have any objective way of comparing/quantifying LLMs (as an architecture) vs human brains at this point. I think it's likely LLMs are currently not as good as human brains for human tasks, but I also think we can't say with any confidence that LLMs/NNs can't be better than human brains.
827a
10 months ago
For sure; we don't have a way of comparing the architectural substrate of human intelligence versus LLM intelligence. We don't even have a way of comparing the architectural substrate of one human brain with another.
Here's my broad concern: on the one hand, we have an AI thought leader (Sam Altman) who defines super-intelligence as surpassing human intelligence at all measurable tasks. I don't believe it is controversial to say that the goal of LLM intelligence is something along these lines: it exists on the spectrum of human intelligence, it's trained on human intelligence, and we want it to surpass human intelligence, on that spectrum.
On the other hand: we don't know how the statistical model of human intelligence works, at any level that would enable reproduction or comparison, and there's really good reason to believe that the human statistical model is vastly superior to the LLM model. The argument for this lies in my previous comment: the vast majority of the advances in LLM intelligence have come from increasing the volume of training data. Some intelligence likely comes from statistical-modeling breakthroughs since the transformer, but by and large it comes from training data. By contrast, the most intelligent humans are not more intelligent because they've been alive longer and thus had access to more sensory data. Some minor level of intelligence comes from the quality of your sensory data (studying, reading, education). But the vast majority of intelligence difference between humans is inexplicable; Einstein was just Born Smarter; God granted him a unique and better statistical model.
This points to the undeniable reality that, at the very least, the statistical model of the human brain and that of an LLM are very different, which should make you raise an eyebrow at Sam Altman's claim that superintelligence will evolve along the spectrum of human intelligence. It might, but it's like arguing that the app you're building is going to be the highest-quality and fastest macOS app ever built, while you're building it in WPF, compiling it for x86, and running it on WINE and Rosetta. GPT isn't human intelligence; at best, it might be emulating, extremely poorly and inefficiently, some parts of human intelligence. But they didn't get the statistical model right, and without that it's like forcing a square peg into a round hole.
matheusd
10 months ago
Attempting to summarize your argument (please let me know if I succeeded):
Because we can't compare human and LLM architectural substrates, LLMs will never surpass human-level performance on _all_ tasks that require applying intelligence?
If my summary is correct, is there any hypothetical successor to LLMs (for example, LLMs plus robotics, LLMs with CoT, multi-modal LLMs, multi-modal generative AI systems, etc.) that would make you consider the argument invalid, i.e. that could someday replace humans at all tasks?
827a
10 months ago
Well, my argument is directed more at the people who say "well, the human brain is just a statistical model with training data". If I say both birds and airplanes are just a fuselage with wings, then proceed to dump billions of dollars into developing better wings, I'm missing the bigger picture of how birds and airplanes are different.
LLM luddites often call LLMs stochastic parrots or advanced text-prediction engines. They're right, in my view, and I feel that LLM evangelists often don't understand why: because LLMs have a vastly different statistical model, even when they showcase signs of human-like intelligence, what we're seeing cannot possibly be human-like intelligence, because human intelligence is inseparable from its statistical model.
But it might still be intelligence. It might still be economically productive and useful and cool. It might also be scarier than most give it credit for being; we're building something that clearly has some kind of intelligence, crudely forcing a mask of human skin over it, oblivious to what's underneath.