While building AI features that rely on real-time streaming responses, I kept running into failures that were hard to reason about once things went async.
Requests would partially stream, providers would throttle or fail mid-stream, and retry logic ended up scattered across background jobs, webhooks, and request handlers.
I built ModelRiver as a thin API layer that sits between an app and AI providers and centralizes streaming, retries, failover, and request-level debugging in one place.
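Roughly the shape of the integration I'm aiming for (a sketch with illustrative names, not the actual SDK): the app asks one layer for a stream, and retries, failover, and logging live behind that single call.

    // Illustrative only -- not ModelRiver's real API, just the shape of the idea.
    interface StreamChunk { delta: string }

    interface StreamingLayer {
      stream(req: {
        model: string;
        fallbacks?: string[];                          // hypothetical failover order
        messages: { role: string; content: string }[];
      }): AsyncIterable<StreamChunk>;
    }

    // The handler only forwards tokens; retry/failover decisions stay in the layer.
    async function handle(layer: StreamingLayer, write: (text: string) => void) {
      const stream = layer.stream({
        model: "gpt-4o",
        fallbacks: ["claude-3-5-sonnet"],
        messages: [{ role: "user", content: "Summarize this document" }],
      });
      for await (const chunk of stream) {
        write(chunk.delta);                            // stream tokens to the client as they arrive
      }
    }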
It’s early and opinionated, and there are tradeoffs. Happy to answer technical questions or hear how others are handling streaming reliability in production AI apps.
At what point does adding this layer become more complex than just handling streaming failures directly in the app?
If streaming behavior is still product-specific and changing fast, this adds friction. It only pays off once failure handling stabilizes and starts repeating across the system.
Why not just handle this in the application with queues and background jobs?
Queues work well before or after a request, but they’re awkward once a response is already streaming. This layer exists mainly to handle failures during a stream without spreading that logic across handlers, workers, and client code.
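To make that concrete, here's a rough sketch of the kind of mid-stream failover logic I mean. The provider callables and the "continue from partial output" strategy are assumptions for illustration, not how any specific provider behaves:

    // The logic that otherwise ends up duplicated across handlers and workers.
    async function* streamWithFailover(
      providers: Array<(prompt: string) => AsyncIterable<string>>,
      prompt: string,
    ): AsyncIterable<string> {
      let emitted = "";
      for (const callProvider of providers) {
        try {
          for await (const token of callProvider(prompt)) {
            emitted += token;
            yield token;                 // tokens already sent to the client can't be un-sent
          }
          return;                        // stream completed, no failover needed
        } catch {
          // Mid-stream failure: a queue can't "retry" a half-sent response.
          // One possible strategy is to ask the next provider to continue
          // from whatever was already emitted.
          prompt = `${prompt}\n\nContinue from: ${emitted}`;
        }
      }
      throw new Error("all providers failed mid-stream");
    }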
The pain of failures that are "hard to reason about once things went async" is real. Centralizing the retry/failover logic makes sense.
One pattern I've found useful: having a read-only view of what's actually hitting the wire before any retry logic kicks in. When you can see the raw request/response as it happens, you can tell whether the issue is your payload, the provider throttling, or something in between.
We built toran.sh for this - it's a transparent proxy that shows exactly what goes out and comes back in real-time. Different layer than what you're doing (you handle the orchestration, we just show the traffic), but they complement each other.
Curious how you handle visibility into what's actually being sent during partial stream failures?
Totally agree: that “what actually hit the wire?” view is critical once things go async.
ModelRiver already has this covered via request logs. Every request log captures the full lifecycle: the exact payload sent to the provider, streaming chunks as they arrive, partial responses, errors, retries, and the final outcome. Even if a stream fails midway, you can still inspect what was sent and what came back before the failure.
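Roughly the shape of a log record (field names here are illustrative, not the exact schema):

    // Illustrative per-request log record, not ModelRiver's actual schema.
    interface RequestLog {
      requestId: string;
      provider: string;                                        // upstream that served the attempt
      sentPayload: unknown;                                    // exact body sent to the provider
      chunks: { at: string; delta: string }[];                 // streaming chunks as they arrived
      partialResponse: string;                                 // what was assembled before a failure
      error?: { code: string; message: string };
      retries: { attempt: number; provider: string; outcome: string }[];
      outcome: "completed" | "failed_midstream" | "failed_before_stream";
    }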
So you can clearly tell whether the issue is payload shape, provider throttling, or a mid-stream failure, before any retry or failover logic kicks in. That wire-level visibility is core to how we approach debugging async AI requests.