LLMs don't hallucinate – they hit a structural boundary (RCC theory)

5 points, posted 13 hours ago
by formerOpenAI

4 Comments

formerOpenAI

13 hours ago

I’ve been investigating a pattern in LLM failures that isn’t well explained by data quality or model scale.

Hallucinations, planning drift after ~8–12 steps, and long-chain self-consistency collapse all show the same signature: they behave like boundary effects, not “errors.”

This led me to formalize something I call RCC — Recursive Collapse Constraints. I didn’t “invent” it. The structure was already there in how embedded inference systems operate without access to their container or global frame. I simply articulated the geometry behind the failures.

Key idea: When an LLM predicts from a non-central observer position, its inference pushes against a boundary it cannot see. The further it moves from its local frame, the more its inference collapses into hallucination-like drift. Architecture can reduce the noise, but it cannot remove the boundary.

I’m sharing this here because I’d like technically minded people to challenge (or refine) the framework. If you work on reasoning, planning, or model stability, I’m especially interested in counterexamples.

Happy to answer questions directly. I’m the author of the RCC write-up.

formerOpenAI

13 hours ago

OP here — adding a bit more color.

RCC isn’t a new model or training method. It’s basically a boundary effect you get when a predictor has no access to its own internal state or to the “container” it’s running inside.

What stood out to me is that when the model steps too far outside its grounded reference frame, the probability space it’s sampling from starts to warp: directions it implicitly treats as independent stop being orthogonal. What we call “hallucination” looks more like geometric drift than random noise.
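
To make the “drift vs. noise” distinction concrete, here’s a toy sketch (synthetic data only, not the RCC formalism and not tied to any real model): if per-step deviations were independent noise, consecutive deviations would be roughly uncorrelated and the accumulated error would grow slowly; under drift, consecutive deviations point the same way and the error compounds.

    import numpy as np

    rng = np.random.default_rng(0)
    steps, dim = 64, 16

    # Two synthetic per-step deviation processes over a reasoning chain:
    # "noise" = independent deviations (errors in the usual sense)
    # "drift" = each deviation is biased toward the previous one
    def deviations(kind):
        out, prev = [], np.zeros(dim)
        for _ in range(steps):
            eps = rng.normal(scale=0.1, size=dim)
            prev = eps if kind == "noise" else 0.9 * prev + eps
            out.append(prev)
        return np.array(out)

    for kind in ("noise", "drift"):
        d = deviations(kind)
        # lag-1 cosine similarity between consecutive deviations
        ac = np.mean([np.dot(d[t], d[t + 1]) /
                      (np.linalg.norm(d[t]) * np.linalg.norm(d[t + 1]) + 1e-9)
                      for t in range(steps - 1)])
        total = np.linalg.norm(d.sum(axis=0))  # accumulated displacement
        print(f"{kind}: lag-1 similarity ~{ac:.2f}, displacement ~{total:.2f}")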

I’m not pitching this as some grand unifying theory — just a lens that helped me understand why scaling cleans up certain failure modes but leaves others untouched.

If anyone has examples of models that maintain long-chain consistency without external grounding, I’d genuinely like to hear about them.
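
If you want to poke at this empirically, here’s the rough shape of the measurement I have in mind. Everything below is a placeholder sketch: `generate` stands in for whatever model call you use, and agreement is exact-match only to keep it short; nothing here is part of RCC itself.

    from collections import Counter

    def generate(prompt: str) -> str:
        """Placeholder: return the model's next reasoning step for `prompt`."""
        raise NotImplementedError("wire up your model of choice here")

    def consistency_by_depth(seed_prompt: str, depth: int = 12, samples: int = 5):
        """Resample each step several times; falling agreement = drift."""
        rates, prompt = [], seed_prompt
        for _ in range(depth):
            outs = [generate(prompt) for _ in range(samples)]
            step, count = Counter(outs).most_common(1)[0]
            rates.append(count / samples)
            prompt += "\n" + step  # continue the chain with the majority step
        return rates  # per-step agreement; I'd expect a drop around step 8-12

A model that keeps that agreement curve flat past a dozen steps, without retrieval or tool use, would be exactly the kind of counterexample I’m asking for.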

iFire

10 hours ago

Huh, this is a kind of impossibility theorem.

I think both formal logic and consensus algorithms flipped the problem: instead of defining what is possible, they tried to define what is impossible.