Bibinprathap
16 hours ago
I wanted to share a bit about where the project is headed. The goal is to make this the go-to framework for any serious enterprise AI application where trust and depth of reasoning are non-negotiable.
Our immediate roadmap is focused on:
Expanded LLM Support: We're adding support for more open-source models (such as the latest Mistral and Llama releases) and improving integration with closed-source APIs for teams running hybrid environments.
More Data Connectors: We're prioritizing connectors for common enterprise knowledge sources like Confluence, SharePoint, and Salesforce to make ingestion seamless.
Performance Benchmarking: We're building a public benchmark to quantitatively measure the performance lift of Graph RAG over traditional vector-search RAG on complex, multi-hop questions, i.e., questions whose answers require chaining facts across multiple documents rather than retrieving a single passage.
Simplified Deployment: We're finalizing a one-click Docker Compose setup and a Helm chart for easier deployment on Kubernetes.
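For anyone curious what the Compose setup might look like, here is a minimal sketch. This is purely illustrative, not the project's actual configuration: the image names, service names, graph store (Neo4j is assumed here), ports, and environment variables are all placeholders.

```yaml
# Hypothetical docker-compose.yml sketch. Assumes a Neo4j graph store and a
# single API service; every name and value below is a placeholder.
services:
  api:
    image: example/graph-rag-api:latest   # placeholder image
    ports:
      - "8000:8000"
    environment:
      - GRAPH_DB_URI=bolt://graph-db:7687  # hypothetical variable name
    depends_on:
      - graph-db
  graph-db:
    image: neo4j:5
    environment:
      - NEO4J_AUTH=neo4j/changeme          # replace before any real use
    volumes:
      - graph-data:/data                   # persist the graph between restarts
volumes:
  graph-data:
```

A setup along these lines would come up with a single `docker compose up`; the Helm chart would cover the equivalent topology on Kubernetes.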
We're building this in the open and would love to hear what the community thinks we should prioritize. What features would make this most useful for you?