LLMs show a "highly unreliable" capacity to describe their own internal processes

3 points, posted 17 hours ago
by pseudolus

1 comment

robocat

13 hours ago

Perhaps it's the training material! People confabulate their reasoning all the time (rationalisation, etc.).

I regularly laugh at the double standards we want to apply to AI. So often we seem to demand that AI beat a standard most people don't meet.