efitz
5 hours ago
I would trust AI a lot more if it gave answers more like:
“Source A on date 1 said XYZ”
“Source B …”
“Synthesizing these, it seems that the majority opinion is X but Y is also a commonly held opinion.”
Instead of what it does now, which is make extremely confident, unsourced statements.
It looks like the copyright lawsuits are rent-seeking as much as anything else; another reason I hate copyright in its current form.
akira2501
an hour ago
> which is make extremely confident,
One of the results the LLM has available to itself is a confidence value. It should, at the very least, provide this along with its answer. Perhaps if it did, people would stop calling it 'AI'.
pavon
41 minutes ago
My understanding is that this confidence value is not a measure of how likely something is to be correct/true, but more along the lines of how likely that sequence of words is. Including it could be more misleading than helpful, for example if it is repeating commonly misunderstood information.
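As a toy illustration (made-up numbers, not any particular vendor's API): all the model really exposes is a log-probability per generated token, and any "confidence" you derive from that is a measure of how probable the wording is, not how true it is.

    import math

    def sequence_confidence(token_logprobs):
        # Length-normalized probability of the generated wording.
        avg = sum(token_logprobs) / len(token_logprobs)
        return math.exp(avg)

    # A fluent but false claim can easily outscore a true but awkwardly
    # phrased one, because the score tracks phrasing, not truth.
    fluent_but_false = [-0.1, -0.2, -0.1, -0.3]
    true_but_awkward = [-1.2, -0.9, -1.5, -1.1]
    print(sequence_confidence(fluent_but_false))  # ~0.84
    print(sequence_confidence(true_but_awkward))  # ~0.31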
ethernot
35 minutes ago
I'm not sure that it's possible to produce anything reasonable in that space. It would need to know how far away from correct it is to provide a usable confidence value; otherwise it'd just be hallucinating a number in the same way as the result.
An analogy. Take a former commuter friend of mine, Mr Skol (named after his favourite breakfast drink). I met him on a minibus I had to take to work years ago, and we shared many interesting conversations. Now, he was a confident expert on everything. If asked to rate his confidence in a subject, it would be a good 95% at least. However, he spoke absolute garbage because his brain had been rotted away by drinking Skol for breakfast, plus the odd crack chaser. I suspect his model was still better than GPT-4o. But an average person could determine the veracity of his arguments.
Thus confidence should be rated externally, as an entity with knowledge cannot necessarily rate itself; it has bias. Which then raises the question of how you do that. Well, you'd have to do the research you were going to do anyway and compare (rough sketch at the end of this comment). So now you've used the AI and also done the research you would have had to do if the AI didn't exist. At that point the AI is a cost without a benefit if you need something with any level of confidence and accuracy.
Thus the value is zero unless you need crap information, which is, at least here, never, unless I'm generating a picture of a goat driving a train or something. And I'm not sure that has any commercial value. But it's fun at least.
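To be concrete about the "compare" step above, a toy sketch with made-up numbers; the point is that you need independently checked answers before the stated confidence tells you anything:

    def calibration_gap(samples):
        # samples: (stated_confidence, was_actually_correct) pairs,
        # where correctness comes from the research you did yourself.
        stated = sum(conf for conf, _ in samples) / len(samples)
        actual = sum(1 for _, ok in samples if ok) / len(samples)
        return stated - actual  # positive means overconfident

    # Mr Skol mode: claims 95% on everything, right about a quarter of the time.
    checked = [(0.95, False), (0.95, True), (0.95, False), (0.95, False)]
    print(calibration_gap(checked))  # ~0.7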
CaptainFever
5 hours ago
ChatGPT Search provides this, by the way, though it relies a lot on the quality of Bing search results. Consensus.app does this but for research papers, and has been very useful to me.
maronato
5 hours ago
More often than not, in my experience, clicking these sources takes me to pages that either don't exist, don't contain the information ChatGPT is quoting, or whose content ChatGPT has completely misinterpreted.