LLMs show a "highly unreliable" capacity to describe own internal processes

3 points | posted 3 months ago
by pseudolus

1 comment

robocat

3 months ago

Perhaps it's the training material! People confabulate their reasoning all the time (rationalisation, etc.).

I regularly laugh at the double standards we want to apply to AI. So often we seem to demand that AI beat a standard that most people don't meet.