GPTs are language models, not "fact and truth" models. They don't even know what facts are; they just know that "if I use this word in this place, it won't sound unusual". They get rewarded for saying things that users find compelling, not necessarily what's true (and again, they have no reference to ground truth).
LLMs are like car salesmen. They learn to say things they think you want to hear in order to get you to buy a car (upvote a response). Sometimes that's useful and truthful information, other times it isn't. (In LLMs' defense, car salesmen lie more intentionally.)
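To make the "plausible, not true" point concrete, here's a minimal sketch of the autocomplete idea. It's a toy bigram counter, not an actual transformer, and the corpus and function names are made up purely for illustration:

```python
# Toy bigram "autocomplete" (not a real GPT): it picks whichever next word
# most often followed the current word in its training text. It has no
# notion of truth -- only of what sounds like the training data.
from collections import Counter

corpus = (
    "the moon is made of cheese "
    "the moon is made of cheese "
    "the moon is made of rock"
).split()

# Count how often each word follows each other word.
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word(context: str) -> str:
    """Return the most frequent continuation of `context` in the corpus."""
    candidates = {w2: n for (w1, w2), n in bigrams.items() if w1 == context}
    return max(candidates, key=candidates.get)

# "cheese" wins over "rock" purely because it was seen more often.
print(next_word("of"))  # -> cheese
```

A real model is vastly more sophisticated, but the optimization target is analogous: produce the continuation that looks most like what it was rewarded for, whether or not it happens to be true.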
This is good info. Too many products make hyperbolic promises but ultimately fail operationally in the real world because the underlying capability simply isn't there.
It is important that this be repeated ad nauseam with AI, since there seem to be so many "true believers" who are willing to distort the material reality of AI products.
At this point, I am not convinced that it can ever "get better". These problems seem inherent to the technology, and even if they could be mitigated to an acceptable level, we shouldn't bother: traditional algorithms are far easier on compute and the environment, and far more reliable. There really isn't any advantage or benefit.
I'm puzzled by this -- what are you hoping the reader takes away from your post?
Are GPTs perfect? - No.
Do GPTs make mistakes? - Yes.
Are they a tool that enable certain tasks to be done much quicker? - Absolutely.
Is there an incredible amount of hype around them? - Also yes.
So you're saying we shouldn't expect intelligence from an advanced auto-complete algorithm?
Wow, what a surprise!