helain
an hour ago
If you don’t want to reinvent all of this yourself, this is exactly the problem we’re solving at Ailog.
Most local LLM setups break down because people try to use the model as both the reasoning engine and the memory store. That doesn’t scale. What works in production is a layered approach: external long-term memory (vector DB + metadata), short-term working state, aggressive summarization, and strict retrieval and evaluation loops.
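Roughly, the layering looks like this (illustrative Python only, with hypothetical names for the store and summarizer, not our actual code): long-term memories live in an external vector store, short-term state is a bounded buffer, and anything evicted from the buffer gets summarized before it's persisted.

```python
from dataclasses import dataclass

@dataclass
class MemoryRecord:
    text: str
    metadata: dict  # e.g. source, timestamp, task id

class LayeredMemory:
    """Long-term store + bounded short-term buffer, with summarization on eviction."""

    def __init__(self, long_term_store, summarize, max_working_items=10):
        self.long_term = long_term_store        # any vector DB wrapper exposing add() / search()
        self.summarize = summarize              # callable: list[str] -> str (an LLM call in practice)
        self.working: list[MemoryRecord] = []   # short-term working state
        self.max_working_items = max_working_items

    def observe(self, text: str, **metadata):
        """Record a new observation; compress and persist the oldest items when the buffer fills."""
        self.working.append(MemoryRecord(text, metadata))
        if len(self.working) > self.max_working_items:
            evicted = self.working[: len(self.working) // 2]
            self.working = self.working[len(evicted):]
            summary = self.summarize([r.text for r in evicted])
            self.long_term.add(summary, {"kind": "summary", "n_items": len(evicted)})

    def context_for(self, query: str, k: int = 5) -> str:
        """Build prompt context: retrieved long-term memories plus the current working state."""
        retrieved = self.long_term.search(query, k=k)
        parts = ["Relevant memory:"] + retrieved
        parts += ["Working state:"] + [r.text for r in self.working]
        return "\n".join(parts)
```

The key point is that the context window only ever sees retrieved and summarized material, never the full history.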
That’s what we built at https://www.ailog.fr. We provide a production-ready RAG stack with persistent memory, retrieval controls, grounding checks, and evaluation tooling so models can handle long-horizon, multi-step tasks without blowing up the context window. It works with local or hosted models and keeps memory editable, auditable, and observable over time.
You can still build this yourself with Ollama, Chroma/Qdrant, and a custom orchestrator, but if you want something already wired, tested, and scalable, that’s the niche we’re filling.
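If you do go the DIY route, the core loop is small. A minimal sketch using the official Python clients for Chroma and Ollama (the model name, collection name, and prompt wording are just placeholders):

```python
import chromadb
import ollama

client = chromadb.PersistentClient(path="./memory")           # local, persistent vector store
collection = client.get_or_create_collection("agent_memory")  # Chroma embeds documents by default

def remember(doc_id: str, text: str, metadata: dict):
    """Persist a memory with metadata so it stays editable and auditable later."""
    collection.upsert(ids=[doc_id], documents=[text], metadatas=[metadata])

def answer(question: str, k: int = 5) -> str:
    """Retrieve the top-k memories, then ask a local model to answer grounded in them."""
    hits = collection.query(query_texts=[question], n_results=k)
    context = "\n".join(hits["documents"][0])
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    response = ollama.chat(
        model="llama3.1",  # any local model you've pulled into Ollama
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]
```

Getting from that to something reliable (grounding checks, eval loops, memory hygiene over time) is where most of the real work ends up.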
Happy to answer questions or share architecture details if useful.