ChatGPT, at Age Two

3 points, posted 11 hours ago
by mdp2021

2 Comments

mdp2021

11 hours ago

Recently I wanted to find references for a piece of news I had heard. Alongside traditional search engines, I used an LLM+RAG system: the latter narrated a plausible story that mirrored the content of my query (which opened with "Provide references about the news according to which...") and added detail of its own. But no references. Further queries made it plain that the stance amounted to "Oh, I thought you wanted to hear a story".
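
To make the missing guardrail concrete, here is a minimal sketch of a retrieval-gated answer path: the system may only answer from passages that retrieval actually returned, and it refuses when nothing qualifies instead of inventing a story. The corpus, function names, and the 0.3 threshold are all hypothetical illustrations, not any real product's API.

    # Sketch of a retrieval-gated answer path: answer only from
    # retrieved sources; refuse when no passage clears the bar.
    from dataclasses import dataclass

    @dataclass
    class Passage:
        text: str
        source_url: str
        score: float  # crude relevance score in [0, 1]

    # Hypothetical one-document corpus, for illustration only.
    CORPUS = [
        ("The city council approved the new transit budget on Monday.",
         "https://example.org/transit-budget"),
    ]

    def search_index(query: str) -> list[Passage]:
        """Toy retriever: scores passages by word overlap with the query."""
        q_words = set(query.lower().split())
        return [
            Passage(text, url,
                    len(q_words & set(text.lower().split())) / max(len(q_words), 1))
            for text, url in CORPUS
        ]

    def answer_with_references(query: str, min_score: float = 0.3) -> str:
        passages = [p for p in search_index(query) if p.score >= min_score]
        if not passages:
            # The path missing from the episode above: no sources, no story.
            return "No supporting references found for this claim."
        body = " ".join(p.text for p in passages)  # stands in for the LLM call
        refs = "\n".join(f"- {p.source_url}" for p in passages)
        return f"{body}\n\nReferences:\n{refs}"

    print(answer_with_references("transit budget approved by city council"))
    print(answer_with_references("aliens landed in the town square"))

The point is the gate, not the toy scoring: an assistant wired this way returns "No supporting references found" for the second query rather than narrating a plausible account of it.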

I think that reveals particularly well the underlying design fault, the one that constitutes an intrinsic, huge risk and can compromise the whole endeavour.

No, we do not want to hear stories. Not when we ask the Finance Officer to report on the situation; not when we have to compare current legislation with future actions; not when we try to establish the trajectories of planets or the details of cellular metabolism... When we assess, we assess reality; when we plan and act, it must be on the basis of reality.

So there remains that base tendency which (among other issues) is especially sinister given the intended purposes - speakers who are not allergic to "inventing nice stories" are precisely the ones we place near the bottom of our trust, with a "liability!" red light flashing behind them. It is a fundamental, critical issue that must be brought under full control.

user

7 hours ago

[deleted]