> Hallucination and creativity feel very related to me.
Why? I see them as just sampling errors.
Sure a mistake can spark inspiration sometimes, but creativity is much more than mistakes.
> I understand hallucinating as simply being misaligned with the space humans feel appropriate to interpolate between
These language models are next-token predictors. The next token is chosen by sampling from a probability distribution the model outputs over its vocabulary.
That sampling process can be non-deterministic.
Hallucinations are when that sampling produces tokens that come together into a false or otherwise unintended statement.
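For the curious, here’s a toy version of that sampling step (made-up logits, not any real model’s decoding code):

```python
# Toy temperature sampling over next-token logits (illustrative only).
import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits, temperature=1.0):
    # Lower temperature -> closer to greedy (deterministic) decoding;
    # higher temperature -> more random picks from the distribution.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

logits = np.array([2.0, 1.5, 0.1, -1.0])  # model scores for 4 candidate tokens
print(sample_next_token(logits, temperature=0.8))
```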
You can just as well think of everything a model outputs as a hallucination, but we train the model so that the space of outputs we want it to hallucinate becomes more likely. Otherwise it would just output meaningless noise.
“Hallucinate” is really an awful word for what it’s trying to describe.
> You can just as well think of everything a model outputs as a hallucination
Exactly. Don't forget that an important factor in the success of the GPT-3 line of models (InstructGPT, and later ChatGPT) was RLHF, which is essentially training the model to produce "hallucinations" that are more acceptable on average to human raters.
> Sure a mistake can spark inspiration sometimes, but creativity is much more than mistakes.
It looks like creativity has many steps, but being able to come up with novel, unprompted stuff is important, as long as you are able to discard the bullshit early.
"Hallucination" is only a problem if later layers (or additional networks) can't detect and remove it
> "Hallucination" is only a problem if later layers (or additional networks) can't detect and remove it
Yeah I mean sure. Anything is only a problem if it goes undetected.
The issue is that if you rely on a statistical model, you’ll always have hallucinations, so you can’t filter statistical output with another statistical model if you need real guarantees.
Many products don’t need those guarantees though.
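To put made-up numbers on that: even a very good statistical filter only shrinks the error rate, it never zeroes it.

```python
# Hypothetical rates, just to show the residual error never hits zero.
p_hallucinate = 0.05   # generator emits a hallucination
p_detect = 0.99        # filter catches a given hallucination
residual = p_hallucinate * (1 - p_detect)
print(f"residual hallucination rate: {residual:.4%}")  # -> 0.0500%
```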
LLMs are too unpredictable for many practical uses, so I’d guess more predictability is a win. Hopefully the change the paper proposes will help!
But here’s a case for the other side: sure, most mistakes are just errors, but evolution happens via “mistakes.” Also, LLMs often deliberately add randomness at inference time.
> evolution happens via “mistakes.”
That’s a nice slogan, but it’s a gross oversimplification.
In the natural world, you can say that mistakes in DNA replication lead to evolution, but that discounts the entire process of natural selection.
Same with creativity.
Look at Picasso. He was a technically brilliant realist painter at 15, but his work later in life evolved to be more abstract and weird.
I don’t think that was the result of mistakes, but rather intentionally breaking patterns he learned in his youth.
To oversimplify, evolution is a generate-and-test process and the evaluation step is critical. Something needs to decide which variations are better. Often, with generative AI, it’s people who judge the results. Still, generating interesting examples (the brainstorming phase) plays some role in that.
I don’t know a whole lot about Picasso’s art, but I imagine the way he evaluated his own work played an important role in his ability to see when creative accidents are interesting.
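Here’s that generate-and-test loop in toy form; mutate() and score() are stand-ins I made up, but they show why the evaluation half matters:

```python
# Toy generate-and-test loop: random "mistakes" plus a selection step.
# score() stands in for whatever judges the variations (often a human).
import random

def mutate(x):
    return x + random.gauss(0, 1)      # the "mistake" / variation step

def score(x):
    return -abs(x - 10)                # pretend the target is 10

best = 0.0
for _ in range(1000):
    candidate = mutate(best)
    if score(candidate) > score(best): # selection: keep only improvements
        best = candidate
print(best)  # lands near 10; variation alone is useless without selection
```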
> “Hallucinate” is really an awful word for what it’s trying to describe.
Hallucination describes the same feature you just called "non-deterministic sampling", but exclusively the cases that we don’t like. It would be really convenient if we could actually draw that line, but we can’t. If non-determinism is a core feature, then it will be present in every case, including the ones we find desirable and the ones we find undesirable.
> Surely there's a trade-off...
For one, speed and memory. They have twice as many Q and K weights in the attention blocks, leading to a ~10% reduction in throughput on their H100 (Table 7 in Appendix A).
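If I’m reading the paper right, the doubled Q/K comes from computing two attention maps and subtracting one from the other. A rough numpy sketch (the weight names and the fixed lam are my simplifications; the paper learns λ and adds normalization):

```python
# Sketch of differential attention as I understand it: two softmax maps,
# the second subtracted from the first, hence the doubled Q and K weights.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def diff_attention(X, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.5):
    d = Wq1.shape[1]
    A1 = softmax((X @ Wq1) @ (X @ Wk1).T / np.sqrt(d))  # first attention map
    A2 = softmax((X @ Wq2) @ (X @ Wk2).T / np.sqrt(d))  # second map: the extra Q/K cost
    return (A1 - lam * A2) @ (X @ Wv)                   # subtraction cancels common noise

n, dm, d = 4, 8, 8
X = np.random.randn(n, dm)
W = lambda: np.random.randn(dm, d) / np.sqrt(dm)
out = diff_attention(X, W(), W(), W(), W(), W())  # two Q/K pairs instead of one
```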
They mention similar performance to a vanilla transformer with a significantly reduced param count, though.
I mean, it doesn’t necessarily need the 2x Q/K to match the accuracy of a regular transformer, right?
Not all hallucinations are creativity.
Imagine a RAG application where the model is supposed to follow the given documents; a hallucination there isn’t a spark of creativity, it’s just an error.
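e.g. a (hypothetical) grounded prompt for that case, where any made-up answer is just a bug:

```python
# Hypothetical RAG prompt assembly: the instruction pins the model to the
# retrieved documents, so a hallucination here is an error, not creativity.
docs = [
    "Doc 1: The warranty period is 12 months.",
    "Doc 2: Returns are accepted within 30 days.",
]
question = "How long is the warranty?"

prompt = (
    "Answer using ONLY the documents below. "
    "If the answer is not in them, say you don't know.\n\n"
    + "\n".join(docs)
    + f"\n\nQuestion: {question}"
)
print(prompt)
```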