Here's an example. You can try it yourself: pick any topic that has what you consider a definitive FAQ, and attach your router to something with a brain.
I asked my router: "WTF is AI Horde?" and I’m posting the full outputs, because they show (a) how it behaves in different modes, and (b) how and when it’s wrong in a loud, readable way.
1) Default / YOLO mode (no KB attached). Cold question. Model argues from its own internal priors.
AI-horde is a decentralized platform where users rent or sell computational power (GPU time) to run AI models. It connects AI model developers who need processing with individuals or organizations who have idle GPU hardware. The platform uses blockchain-like tokens to manage payments and incentives. It’s not a single AI system but a marketplace for GPU resources.
Missing details: Specific token mechanics, current model library, or user onboarding flow could affect accuracy. For example, if token economics changed recently, the explanation might not reflect current conditions.
Confidence: Medium | Source: Model
My take: Meh. Sort of. Wrong specifics. Fine for a cheap guess, not fine for “I’m betting money on this.” Note: it clearly flags the "meh, dunno" parts. Call it a 4/10 for factual correctness, with a clear shoulder shrug.
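For reference, "YOLO mode" is just the bare question with no context attached. A minimal sketch of that call, where `build_yolo_prompt` and the commented-out `chat()` client are hypothetical stand-ins for whatever router you actually run:

```python
# Sketch of "YOLO mode": the bare question, no retrieved context, so the
# model answers purely from its training priors. Everything named here
# is illustrative, not a real API.

def build_yolo_prompt(question: str) -> list[dict]:
    return [
        {"role": "system",
         "content": "Answer from your own knowledge. Flag details you are "
                    "unsure of, then end with "
                    "'Confidence: <level> | Source: Model'."},
        {"role": "user", "content": question},
    ]

messages = build_yolo_prompt("WTF is AI Horde?")
# answer = chat(model="local-model", messages=messages)  # hypothetical client
```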
2) Web-enabled mode
Same question, but allowed to pull from the web. Now it sez -
An AI-horde is a free, open-source community service that allows anyone to contribute computing power to generate AI-generated images and text. It functions as a crowdsourced distributed cluster of workers focused on image and text generation tasks [1]. This model resembles collaborative projects like Folding@home, where users share computational resources for a common goal [2].
My take: Better. Still missing details, but broadly right. Let's call it ~7/10.
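Under the hood, web mode is the same call plus retrieved snippets. A sketch, assuming a hypothetical `web_search()` helper that returns text snippets to cite:

```python
# Sketch of web mode: numbered snippets get prepended so the model can
# cite them as [1], [2], ... web_search() is hypothetical; plug in your
# own search tool.

def build_web_prompt(question: str, snippets: list[str]) -> list[dict]:
    sources = "\n".join(f"[{i}] {s}" for i, s in enumerate(snippets, 1))
    return [
        {"role": "system",
         "content": "Answer using only the numbered sources below and "
                    "cite them inline.\n\n" + sources},
        {"role": "user", "content": question},
    ]

# snippets = web_search("WTF is AI Horde?")  # hypothetical search helper
# messages = build_web_prompt("WTF is AI Horde?", snippets)
```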
3) KB mode (FAQ scraped → summarized → queried)
Here the model is constrained to just what’s in the attached KB summaries.
AI-horde is a decentralized platform for distributed AI computing. It allows users to rent out their idle GPU hardware to run AI models, connecting developers needing processing power with volunteers who provide it through open APIs. The system uses a non-cryptocurrency-based incentive called Kudos to prioritize contributions, where workers earn Kudos for completing tasks, and these are not tradable assets but rather internal metrics of service quality and project loyalty.
Access to AI-horde is managed via API keys; anonymous usage is allowed but receives the lowest priority. The project emphasizes volunteerism and discourages commercialization of Kudos, treating its sale or exchange for money or cryptocurrency as contrary to its ethos. Participation as a worker involves installing the AI-horde client, selecting task types (e.g., image or text generation), configuring supported models, and running the worker to begin earning Kudos.
Confidence: High | Source: Docs
My take: That seems pretty solid and well source-grounded. Call it 8/10 based on the docs.
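The "constrained to the KB" part is prompt-level discipline, nothing exotic. A minimal sketch of the scrape → summarize → query pipeline, with `scrape_faq()` and `summarize()` as hypothetical stand-ins for your own tooling:

```python
# Sketch of the KB pipeline: scrape the FAQ once, summarize it into
# chunks, then answer strictly from those summaries. scrape_faq() and
# summarize() are hypothetical; swap in whatever you use.

def build_kb_prompt(question: str, kb_summaries: list[str]) -> list[dict]:
    kb = "\n\n".join(kb_summaries)
    return [
        {"role": "system",
         "content": "Answer ONLY from the KB below. If the KB does not "
                    "cover something, say so explicitly. End with "
                    "'Confidence: <level> | Source: Docs'.\n\n" + kb},
        {"role": "user", "content": question},
    ]

# kb_summaries = [summarize(c) for c in scrape_faq(FAQ_URL)]  # hypothetical
# messages = build_kb_prompt("WTF is AI Horde?", kb_summaries)
```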
Finally -
4) For fun, I then asked it to rank the three prior responses against the KB facts. Here's what it said -
[3] is most correct, scoring 9.5/10. It accurately reflects the core model: decentralized GPU renting via open APIs, Kudos as a non-crypto, non-tradable incentive tied to volunteerism, and the crowdsourced nature of contributions. It also correctly notes access via API keys and lower priority for anonymous use.
[2] is accurate but less complete, scoring 7.5/10. It correctly identifies the crowdsourced, open-source, volunteer-based nature and analogy to Folding@home. However, it omits key specifics like Kudos, API key access, and the explicit non-commercialization stance.
[1] is partially correct but misleading, scoring 6/10. It correctly describes GPU […]
Confidence: medium | Source: Docs
Point being: ungrounded self-eval is vibes. Grounded evaluation becomes boring and useful. Also, Qwen is an easy marker.
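If you want to replicate the adjudication step, it's the same constraint pointed at the candidates: hand the model the KB summaries plus the three answers and ask for scores against the KB only. A sketch, hypothetical names as before:

```python
# Sketch of KB-grounded ranking: the model sees the KB summaries plus
# the candidate answers and scores each against the KB alone. The
# scoring format is illustrative, not prescriptive.

def build_ranking_prompt(kb_summaries: list[str],
                         answers: list[str]) -> list[dict]:
    kb = "\n\n".join(kb_summaries)
    numbered = "\n\n".join(f"[{i}] {a}" for i, a in enumerate(answers, 1))
    return [
        {"role": "system",
         "content": "Score each numbered answer below against the KB "
                    "only. Give each a /10 with a one-line justification, "
                    "best first.\n\nKB:\n" + kb},
        {"role": "user", "content": numbered},
    ]
```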
TL;DR
YOLO mode: plausible guesses, sometimes confidently wrong. Same old, same old.
Web mode: better, still imperfect.
KB mode: source-based answers with explicit limits/refusals.
KB-grounded ranking: reproducible, source-anchored adjudication.
Repeatable demo. Attach a good model and try it yourself.