13 hours ago
Nice. Routing is the hard part. Do you have numbers on false accepts vs false escalations? (i.e., how often you keep a bad cheap answer vs unnecessarily jump to the expensive model). Benchmarks are good, but those two rates are what will make or break it in prod.
13 hours ago
Good question; these are the two metrics we obsess over. False accepts (bad response passed as good): <1% on benchmarks, ~2-3% in production pilots. This is the one that matters, so we tune aggressively to keep it low. Every validator errs on the side of escalation.
False escalations (good response unnecessarily escalated): ~7-10% depending on domain. Costs you tokens, but doesn't hurt quality. The self-learning engine reduces this over time as it learns your traffic patterns.
The tradeoff is intentional: we'd rather waste some spend than serve bad answers. In practice, even with the conservative tuning, customers still see 30-60% cost reduction because the baseline (routing everything through flagship models) is so expensive.
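If you want to track the same two rates on your own traffic, the bookkeeping is straightforward. A minimal sketch (my illustration here, not cascadeflow internals), assuming you label cascade decisions after the fact with human review or a stronger judge model:

```python
# Minimal sketch: compute false-accept and false-escalation rates from
# post-hoc labeled cascade decisions. Denominator convention here is
# per branch (per accepted / per escalated); pick one and stay consistent.

def cascade_rates(records):
    """records: list of (escalated: bool, cheap_draft_was_good: bool)."""
    accepted = [good for esc, good in records if not esc]
    escalated = [good for esc, good in records if esc]

    # False accept: the cheap draft was served, but it was actually bad.
    false_accept = sum(not g for g in accepted) / max(len(accepted), 1)
    # False escalation: we paid for the big model although the draft was fine.
    false_escalation = sum(escalated) / max(len(escalated), 1)
    return false_accept, false_escalation

decisions = [(False, True), (False, True), (False, False), (True, False), (True, True)]
fa, fe = cascade_rates(decisions)
print(f"false accepts: {fa:.0%}, false escalations: {fe:.0%}")
```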
14 hours ago
Very real problem, and the focus on validation (not just routing) is the right direction.
How do you handle cases where validation is uncertain or the domain detector is wrong: do you default conservatively, and what false-negative rates are you seeing?
13 hours ago
Yes, we default conservatively: when in doubt, escalate. A few specifics:
Uncertain validation: We combine multiple signals (confidence scores, semantic similarity, format checks...). If any signal is borderline, we escalate. Better to overpay occasionally than return a bad response.
Wrong domain detection: The domain classifier isn't a gate; it selects which validator to apply. If the validator then fails, it escalates regardless, so a misclassified query still gets caught at the validation layer.
False-negative rates (good responses wrongly escalated): ~7-10% at the beginning, depending on domain. We're okay with this: it means slightly higher cost but never compromised quality. The self-learning engine tightens this over time as it sees your actual traffic patterns.
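To make "any borderline signal escalates" concrete, here's a stripped-down sketch of the decision shape. Signal names and thresholds are illustrative, not our production config:

```python
# Stripped-down sketch of the conservative escalation decision.
# Signal names and thresholds are illustrative, not the production config.

DOMAIN_VALIDATORS = {
    "code": lambda r: r["has_code_block"] and r["confidence"] > 0.75,
    "general": lambda r: r["confidence"] > 0.70,
}

def should_escalate(response: dict, domain: str) -> bool:
    # A misclassified domain only picks a different validator; if that
    # validator fails, we escalate anyway, so the query is still caught here.
    validator = DOMAIN_VALIDATORS.get(domain, DOMAIN_VALIDATORS["general"])

    signals = [
        validator(response),                        # domain-specific checks
        response["confidence"] > 0.70,              # logprob-based confidence
        response["semantic_similarity"] > 0.80,     # query/answer alignment
        len(response["text"]) > 20,                 # basic length/format sanity
    ]
    # Conservative default: any borderline signal means escalate.
    return not all(signals)
```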
16 hours ago
The benchmark numbers look strong but MT-Bench/GSM8K are pretty narrow. Have you tested on more open-ended tasks?
15 hours ago
For open-ended tasks we use embedding similarity + confidence scoring, not just format matching. If the draft response is semantically thin, it escalates. The system also learns from your actual traffic patterns: after a few hundred queries, it knows which query shapes work on which models for your specific use case.
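Concretely, the semantic side is an embedding-similarity gate between query and draft. Rough sketch of the idea, using sentence-transformers as a stand-in encoder (not necessarily the exact model we ship):

```python
# Rough sketch of the semantic-thinness check. Model choice and threshold
# are illustrative stand-ins, not the bundled validator.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly encoder

def is_semantically_thin(query: str, draft: str, threshold: float = 0.45) -> bool:
    q_emb, d_emb = model.encode([query, draft], convert_to_tensor=True)
    similarity = util.cos_sim(q_emb, d_emb).item()
    # Low query/answer similarity = thin or off-topic draft -> escalate.
    return similarity < threshold
```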
16 hours ago
Hey HN! I'm Sascha, a technical founder who started coding at 9 and spent the last 2 years obsessing over Small Language Models, specifically how to squeeze every drop of performance from fast, cheap, domain-specific models before touching slow, expensive flagships.
What it does: cascadeflow is an optimization layer that sits between your app/agent and LLM providers, intelligently cascading queries between cheap and expensive models—so you stop paying Opus 4.5 prices for "What's 2+2?"
Why this matters: Most companies I've talked to are running all their AI traffic through flagship models. They're burning 40-70% of their budget on queries that a $0.15/M token model handles just fine, including reasoning tasks and tool calls. But building intelligent routing is genuinely hard. You need quality validation, confidence scoring, format checking, graceful escalation, and ideally domain understanding. Most teams don't have the bandwidth to build this infrastructure.
Backstory: After working on developer-tool projects with JetBrains and IBM, I kept seeing the same pattern: teams scaling AI features or agents hit a cost wall. I started prototyping cascading, initially just for my own projects. When I saw consistent 60-80% cost reductions without quality loss, I realized this needed to be a proper cost optimization framework.
How it works: Speculative execution with quality validation. We try the cheap or domain-specific model first (auto-detects 15 domains), validate response quality across multiple dimensions (length, confidence via logprobs, format, semantic alignment), and only escalate to expensive models when validation fails. Framework overhead: <2ms.
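In simplified Python, the core loop has roughly this shape (the provider calls, domain detection, and validators are passed in as placeholders here):

```python
# Simplified shape of the cascade loop. call_model, validate, and detect_domain
# are injected placeholders for the real provider and validation layers.

def cascade(query, call_model, validate, detect_domain,
            pipeline=("cheap-model", "expensive-model")):
    domain = detect_domain(query)               # picks which validator to apply
    for model in pipeline[:-1]:
        draft = call_model(model, query)        # cheap / domain-specific draft
        if validate(draft, query, domain):      # length, logprobs, format, semantics
            return draft                        # accepted: the flagship is never called
    return call_model(pipeline[-1], query)      # escalate to the expensive model
```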
First integrations: n8n and LangChain. Both connect any two AI Chat Model nodes (cheap drafter + powerful verifier) with domain-aware routing across code, medical, legal, finance, and 11 more domains. Mix Ollama locally with GPT-5 for verification. In n8n, you can watch cascade decisions live in the Logs tab.
Benchmarks: 69% savings on MT-Bench, 93% on GSM8K, 52% on MMLU—retaining 96% of GPT-5 quality. All reproducible in `/tests/benchmarks`.
What makes it different:
- Understands 15 domains out of the box (auto-detection, domain-specific quality validation, domain-aware routing)
- User-tier and budget-based cascading with configurable model pipelines (see the config sketch after this list)
- Learns and optimizes from your usage patterns
- Auto-benchmarks against your available models
- Works with YOUR models across 7+ providers (no infrastructure lock-in)
- Python + TypeScript with identical APIs
- Optional ML-based semantic validation (~80MB model, CPU-only)
- Production-ready: streaming, batch processing, tool calling, multi-step reasoning, cost tracking with optional OpenTelemetry export
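For the tier/budget cascading, think of it as a per-tier pipeline definition along these lines (hypothetical sketch; the key names and model names are mine, not the actual schema):

```python
# Hypothetical sketch of tier/budget-based pipelines; key names and model
# choices are illustrative, not cascadeflow's actual configuration schema.
PIPELINES = {
    "free":       {"models": ["small-local-model"],              "daily_budget_usd": 5},
    "pro":        {"models": ["cheap-hosted-model", "flagship"], "daily_budget_usd": 50},
    "enterprise": {"models": ["cheap-hosted-model", "mid-tier", "flagship"],
                   "daily_budget_usd": None},  # no hard cap; cascading keeps spend down
}
```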
n8n package: `npm install @cascadeflow/n8n-nodes-cascadeflow`
Would love technical feedback, especially from anyone running AI at scale who's solved routing differently, or n8n power users who can stress-test the integration. What's broken? What's missing?
14 hours ago
this is very cool.