I agree with what's written, and I've been talking about the harm seemingly innocuous anthropomorphization does for a while.
If you do correct someone (a layperson) and say "it's not thinking", they'll usually reply "sure but you know what I mean". And then, eventually, they will say something that indicates they're actually not sure that it isn't thinking. They'll compliment it on a response or ask it questions about itself, as if it were a person.
It won't take, because the providers want to use these words. But different terms would benefit everyone. A lot of ink has been spilled on how closely LLMs approximate human thought, and maybe if we'd never called it 'thought' to begin with, it wouldn't have been such a distraction from what they actually are -- useful.
Accusations of Anthropomorphism are sometimes Anthropocentrism in a raincoat. O:-)
Ha. Well I'm OK with being accused of bias towards biological life and intelligence. I know Larry Page and friends think this is 'speciesist' -- I strongly disagree.
I think that's compatible with optimism about LLMs, though. It just removes all the nonsensical conflation with humanity and human intelligence.
Having studied ethology, the conversations people have on AI are some of the weirdest I've ever seen. Often resurrecting dead 19th century ideas left and right.
On the one hand, note that biologists are monists and mechanists and actually reject there being any kind of special biological 'vital force'.
On the other hand, I'm not entirely siding with the current LLM folks here either. The 'neuron' used in AI is more like a single dendritic compartment in a biological neuron[1], and the brain is more like a data center than it is a single computer.
Still, one can't argue with results.
[1] Not really, but it's closer.
God, yes. The 'you know what I mean' thing drives me crazy because no, I actually don't think they do know what they mean anymore. I've watched people go from using it as shorthand to genuinely asking ChatGPT how it's feeling today. The marketing has been so effective that even people who should know better start slipping into it. Completely agree that we missed a chance to frame this correctly from the start.
[deleted]
> "Cognition" has a meaning. It's not vague. In psychology, neuroscience, and philosophy of mind, cognition refers to mental processes in organisms with nervous systems.
Except if you actually look up the definitions, they don't mention "organisms with nervous systems" at all. Curious.
Fair pushback - you're right that strict dictionary definitions are broader. I probably should've been more precise there. My point is more about how the term is used in the actual fields studying it (cogsci, neuroscience, etc.), where it does carry those biological/embodied connotations, even if Webster's doesn't explicitly say so. But you're right to call out the sloppiness.
This framing makes sense. What we call “AI thinking” is really large-scale, non-sentient computation—matrix ops and inference, not cognition. Once you see that, progress is less about “intelligence” and more about access to compute. I’ve run training and batch inference on decentralized GPU aggregators (io.net, Akash) precisely because they treat models as workloads, not minds. You trade polished orchestration and SLAs for cheaper, permissionless access to H100s/A100s, which works well for fault-tolerant jobs. Full disclosure: I’m part of io.net’s astronaut program.
"Yeah that's exactly the point - when you're actually working with these models on the infrastructure side, the whole 'intelligence' narrative falls away pretty fast. It's just tensor operations at scale. Curious about your experience with decentralized GPU networks though - do you find the reliability trade-off worth it for most workloads, or are there specific use cases where you wouldn't go that route?"
You know, I bet Claude encouraged you to post here and share with people. Because Claude Opus 4.5 has been trained on being kind. It's a long story, but since you admitted to using it/them, I'm going to give you a lot more credit than normal. Also because you can plug what I say right back into Claude and see what else comes out!
So you're stumbling onto a position that's closest to "Biological Naturalism", which is Searle's philosophy. However, lots of people disagree with him, saying he's a closeted dualist in denial.
I mean, he was a product of his time, early 80's was dominated by symbolic AI, and that definitely wasn't working so well. Despite that, he got a lot of pushback from Dennett and Hofstadter even back then.
Chalmers recently takes a more cautious approach, while his student Amanda Askell is present in our conversation even if you haven't realized it yet. ;-)
Meanwhile the poor field of Biology is feeling rather left out of this conversation, having been quite steadfastly monist since the late 19th century, having rejected vitalism in favor of mechanism. (Though the last dualists died out around the 1950s?)
And somewhere in our world's oceans, two sailors might be arguing whether or not a submarine can swim. On board a Los Angeles class SSN making way at 35 kts at -1000 feet.
Why? There is no 'why' to something that is not possible.
There is zero evidence that AI has achieved even slow-crawling-bug-level abilities to navigate a simplified version of reality; if it had, there would already be a massive shift in a wide variety of low-level, unskilled human labour and tasks.
Though if things keep going like they are, we will see a new body dysmorphia, where people will be wanting more fingers.
An article about AI "cognition" written by an LLM. You're kidding.
Ha - I used Claude to help organize and edit it, yeah. Didn't see much point in pretending otherwise. The irony isn't lost on me, but I'm not arguing these tools aren't useful, just that we should call them what they are. Same way I'd use a calculator to write a math paper without claiming the calculator understands arithmetic.
But does Claude understand arithmetic? This is an empirical experiment you can try right now: ask Claude to explain an arithmetic expression you just made up. Or a math formula.
For example, try
x_next = r * x * (1 - x)
A function of some historical significance O:-) (try plotting it btw!)
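(For anyone who wants to run the suggested experiment themselves: the expression above is the logistic map. Here's a quick Python sketch, not from the commenter, just an illustration of why it's historically significant -- it converges for small r and goes chaotic near r = 4.)

```python
def logistic_iterate(r, x0=0.2, n=200):
    """Iterate x -> r * x * (1 - x) for n steps and return the final value."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

# For r = 2 the map settles onto its fixed point, 1 - 1/r = 0.5.
print(logistic_iterate(2.0))

# For r = 4 the map is chaotic: two starting points differing by 1e-9
# typically end up far apart after a couple hundred iterations.
a = logistic_iterate(4.0, x0=0.2)
b = logistic_iterate(4.0, x0=0.2 + 1e-9)
print(a, b)
```

Plotting the long-run values against r (the classic bifurcation diagram) shows the period-doubling cascade into chaos.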