I've read many posts and comments at this point that describe LLMs in very reductionist language. For example, from the article:
> They’re a trillion numbers in a trenchcoat; not logical, in either a machine or a mental sense, but stochastic.
Many of these posts and comments claim that human minds are substantially different ("better" is implied). The evidence is usually a broad gesture at how LLMs are implemented ("math") and how they work ("guess the next word"), and the conclusion is that, because of these facts, we should treat them in a particular way, or that certain things will never happen.
I've been trying to look past the obvious straw man here and think critically about this tech, comparing it to my own experience and to my (admittedly very limited) understanding of the human brain.
In more ways than feels comfortable, it seems entirely possible to me that these things actually are, or could become, really close to the way our own minds work.
Our own minds and consciousness are ultimately based on physical processes; I don't think anyone would dispute that. At some point, the physical phenomena in our brains presumably result in the emergent behavior of thinking and consciousness. We have no idea how it works, but it's our lived experience. Why can't the same be true for silicon-based rather than carbon-based processes? How can we say with any certainty that it's not happening elsewhere if we don't know how it works here?
Reducing their function to "guessing the next word" sounds an awful lot like what happens when I start talking to someone: I have an idea of what I want to say, but I almost never have the full sentence planned out when I start it.
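For what it's worth, here's a minimal sketch of what "guessing the next word" means mechanically: an autoregressive loop that keeps sampling the next token from a probability distribution conditioned on what came before. The toy bigram table is made up for illustration; a real LLM replaces it with a neural network that scores every token in its vocabulary given the whole context.

```python
# Minimal sketch of next-word sampling (toy bigram table, not a real model).
import random

# Hypothetical next-word probabilities, keyed by the previous word.
NEXT_WORD_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "a": {"cat": 0.4, "dog": 0.4, "thought": 0.2},
    "cat": {"sat": 0.7, "slept": 0.3},
    "dog": {"barked": 0.6, "sat": 0.4},
    "idea": {"emerged": 1.0},
    "thought": {"emerged": 1.0},
}

def generate(max_words: int = 5) -> str:
    """Repeatedly sample the next word given the previous one."""
    words = []
    prev = "<start>"
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(prev)
        if not dist:
            break  # no known continuation for this word
        choices, weights = zip(*dist.items())
        prev = random.choices(choices, weights=weights)[0]
        words.append(prev)
    return " ".join(words)

print(generate())  # e.g. "the cat sat"
```

The point of the sketch isn't fidelity to any real model; it's that the "next word" framing only describes the outer sampling loop, not whatever is going on inside the thing that produces the probabilities.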
The article puts "thinking" and "hallucination" in scare quotes. But the way these models appear to think, working through problems with language, mirrors my own "thinking" very closely.
It says "They’re not thinking. They’re not hallucinating"; the exercise of figuring out why is left to the reader. If you've ever talked to a 3 or 4 year old, or to someone who's exhausted, you may have had similar experiences with hallucinations: confident, fluent statements with no grounding in reality.
These are all pretty surface-level examples, but as I use the tools more and learn more about how they work, I'm not seeing significant evidence that contradicts them.
I do think it's probably dangerous and unhealthy to really anthropomorphize AI/LLMs. They're obviously not human, even if they are thinking in some sense, and they're being made and shaped by companies (and training sets) that exist in a predominantly capitalist world (but then again, so are we).
I assume similar lines of thinking are being discussed somewhere, but I haven't found much (and I feel like I'm reading about AI all day). Curious to hear others' thoughts and/or to be pointed to wherever this stuff is being discussed.