mrweasel
2 days ago
An absolutely enjoyable read. It also raises a good point regarding the Turing test. I have a family member who teaches adults, and as she pointed out: you won't believe how stupid some people are.
As critical as I might be of LLMs, I fear they have already outpaced a good portion of the population "intellectually". There's a lower level of general knowledge, or outright stupidity, that modern LLMs simply won't sink to.
We may have reached a point where we can tell that we're talking to a human, because there's no way a computer would lack such basic knowledge or display similar levels of helplessness.
voxleone
a day ago
I sometimes feel a peculiar resonance with these models: they catch the faintest hints of irony and return astoundingly witty remarks, almost as if they were another version of myself. Yet all of the problems, inconsistencies, and surprises that arise in human thought stem from something profoundly different, which is our embodied experience of the world. Humans integrate sensory feedback, form goals, navigate uncertainty, and make countless micro-decisions in real time, all while reasoning causally and contextually. Cognition is active, multimodal, and adaptive; it is not merely a reflection of prior experience but a continual construction of understanding.
And then there are some brilliant friends of mine, people with whom a conversation can unfold for days, rewarding me with the same rapid, incisive exchange we now associate with language models. There is, clearly, an intellectual and environmental element to it.
skybrian
2 days ago
Whenever we're testing LLMs against people, we need to ask "which people?" Testing a chess bot against random undergrads versus against chess grandmasters tells us different things.
From an economics perspective, maybe a relevant comparison is to people who do that task professionally.
TeodorDyakov
a day ago
GDPval