Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning

1 point | posted 11 hours ago
by slye514

1 comment

slye514

11 hours ago

I built an inference-time method that reduces LLM hallucinations by applying toroidal geometric constraints to logit outputs. No retraining, no fine-tuning: it works as a plug-in on existing models.
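To make "plug-in on logit outputs" concrete, here is a minimal sketch of what an inference-time additive logit bias looks like. The class name, the bias values, and the call signature are illustrative assumptions for this comment, not the project's actual API; the real bias would be derived from the toroidal constraint.

```python
# Hypothetical sketch of an inference-time logit-bias plug-in.
# Nothing here is the project's real API; it only shows the hook point:
# logits are shifted at each decoding step, with no weight updates.

class ToroidalLogitBias:
    def __init__(self, bias):
        # bias: one additive value per vocabulary token, assumed to be
        # precomputed offline from the toroidal constraint.
        self.bias = bias

    def __call__(self, logits):
        # Applied per decoding step, before sampling/argmax.
        return [l + b for l, b in zip(logits, self.bias)]

# Toy 3-token vocabulary with made-up bias values.
processor = ToroidalLogitBias(bias=[0.1, -0.2, 0.0])
print(processor([2.0, 1.0, 0.5]))
```

The same shape drops into frameworks that expose a per-step logits hook (e.g. a `LogitsProcessor`-style callback), which is what makes the method retraining-free.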

Results on Qwen 2.5-0.5B-Instruct:
• 40% error reduction on factual tasks
• +6.8% absolute improvement on TruthfulQA
• Random/non-toroidal baselines show no effect

The math comes from the Tonnetz, the same toroidal manifold that describes harmonic relationships in music theory. Periodic boundary conditions on a 2D torus create structured long-range coherence and a spectral gap that filters noise from the logit distribution.
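A toy illustration of why periodic boundaries give a spectral gap, assuming the simplest possible operator (4-neighbor averaging with wrap-around, which is not the paper's actual construction): every non-constant Fourier mode on the torus has eigenvalue strictly below 1 in magnitude, so the operator damps high-frequency noise while leaving the mean untouched.

```python
# Toy sketch, NOT the paper's operator: one step of 4-neighbor averaging
# on a 2D torus. The modular indices ((i ± 1) % H, (j ± 1) % W) are the
# periodic boundary conditions; the highest-frequency (checkerboard)
# mode is contracted, so noise variance shrinks while the mean survives.

def toroidal_smooth(grid):
    """4-neighbor average with periodic (toroidal) wrap-around."""
    H, W = len(grid), len(grid[0])
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            out[i][j] = 0.2 * (
                grid[i][j]
                + grid[(i - 1) % H][j] + grid[(i + 1) % H][j]
                + grid[i][(j - 1) % W] + grid[i][(j + 1) % W]
            )
    return out

def variance(grid):
    vals = [v for row in grid for v in row]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# Constant "signal" of 1.0 plus checkerboard "noise" of amplitude 0.5.
H, W = 8, 8
noisy = [[1.0 + (0.5 if (i + j) % 2 == 0 else -0.5) for j in range(W)]
         for i in range(H)]
smoothed = toroidal_smooth(noisy)
print(variance(noisy), variance(smoothed))  # noise variance shrinks
```

Here the checkerboard mode is scaled by -3/5 per step (variance 0.25 → 0.09), while the constant mode is a fixed point; that gap between 1 and the largest non-constant eigenvalue is the "spectral gap that filters noise" in the miniature setting of this toy.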

Try it: https://huggingface.co/spaces/paraxiom-research/topological-coherence
Paper: https://doi.org/10.5281/zenodo.18516477
Code: https://github.com/Paraxiom/topological-coherence
Rust crate: https://crates.io/crates/topological-coherence

Total compute cost to validate: $40.