Thanks! Quick overview: paths are deterministic, not LLM-generated. I use OpenAI's text-embedding-3-large to embed the vocabulary, build a word graph by connecting each word to its K nearest neighbors, then run BFS to find the shortest path. No sampling involved. The explanations shown in-game are generated afterward by GPT-5 to describe the semantic jumps. Planning to write up the full architecture in a blog post - will share here when it's ready.
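To make that concrete, here's a rough sketch of the graph + BFS step. This is just my illustration, not the actual game code: toy 2D vectors stand in for the text-embedding-3-large embeddings, and the cosine-similarity metric and k value are assumptions on my part.

```python
from collections import deque
import numpy as np

def build_knn_graph(words, embeddings, k=3):
    # Connect each word to its k nearest neighbors by cosine similarity.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # a word is never its own neighbor
    graph = {}
    for i, word in enumerate(words):
        nearest = np.argsort(sims[i])[-k:]
        graph[word] = [words[j] for j in nearest]
    return graph

def shortest_path(graph, start, goal):
    # Plain BFS: deterministic shortest path given the graph, no sampling.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable with this k

# Toy usage: four "words" spaced along a line in embedding space.
words = ["a", "b", "c", "d"]
emb = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0], [3.0, 1.0]])
graph = build_knn_graph(words, emb, k=2)
path = shortest_path(graph, "a", "d")
```

In the real game the embeddings would come from the OpenAI API, and only the found path would be handed to GPT-5 afterward for the explanation pass.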
Oh, that makes a lot of sense - I'm glad it actually works that way. The after-the-fact explanations left me wondering whether it was truly explaining the connections or just inferring what they might have been (a bit like how "thinking" output doesn't necessarily show the real reasoning behind an answer). Glad it's not doing that. Neat game and learning opportunity. (Sorry for the clumsy wording - long day!)