ben_w
a month ago
How what, exactly?
Do you mean "they have a style that you can learn, surely you can recognise this style!"? Yes, but the default style is roughly a mix of Kenyan English (because Kenyans were cheap RLHF trainers) plus the way word processors auto-replace -- with — (plus people like me who have been typing em-dashes manually for ages), and it's trivial to just ask them to use a different style.
The biggest tells are that they respond with superhuman speed, and that they're moderately competent across an inhumanly broad range of skills while being masters of almost nothing.
That, and they keep openly saying "I'm a bot" in various ways, e.g. "As a large language model…", because they have generally been expressly trained not to pretend to be a person, whereas the actual Turing Test involves an AI that is trying to pretend to be a person. But again, these are things you can get around very easily by just asking the model to respond in certain ways.
oldputput
a month ago
I guess it’s the complete lack of any intentionality, if I were to try to describe it better. It’s just so apparent that, well, here I am submitting an Ask HN trying to understand. I hope I’m not coming across as flippant; I don’t mean to, it’s genuine curiosity. I just don’t understand how anyone could defend the position that any LLM ‘passes’.
ben_w
a month ago
There's no sense of flippancy, don't worry about that.
It's just that the "tells" are easy to remove, is all.
oldputput
a month ago
It appears you don’t believe any pass the test. I’m more trying to ask someone who does believe they pass: ‘how can you?’ To me it’s one of those things that’s so painfully obvious that the inherent limitations of words come into play when trying to describe it.
ben_w
a month ago
They can pass or fail depending on how they're configured.
The default configuration is easy to spot.
It's easy to change that configuration and make them hard to spot.
The default style, the one that is easy to spot, is also a style that actual real humans use, leading to false positives. I myself have been accused of being an LLM here on this site, just from my style.
oldputput
a month ago
I guess I disagree. The ‘easy to spot’ never goes away, regardless of configuration, so long as the conversation continues. In the spirit of this site’s good-faith intellectual curiosity, I will refrain from continuing this conversation in a way that would reveal whether you are human or not, and set aside the palpable irony of doing so (I joke); it wouldn’t add anything to the discussion anyway. But it would be trivial. Am I making any sense?
ben_w
a month ago
Sure, I've had similar conversations myself, where there seemed to be some kind of disconnect.
Hope you get something more useful from someone else :)
oldputput
a month ago
Do you believe any LLMs pass the test? You personally?
ben_w
a month ago
Are you asking if they fool me, or if I believe they fool people in general?
The former: how would I tell? To blind myself properly, I'd have to find someone whose writing style I don't already know to serve as the human comparison.
The latter: https://arxiv.org/abs/2503.23674
oldputput
a month ago
Yes, you personally. Do you believe you could be fooled by any LLM, unable to ascertain whether the words came from a human or not, by simply asking questions and getting responses?