storystarling
14 days ago
Does the generated llms.txt optimize for token density? I've been building ingestion pipelines with LangGraph and found that verbose text files get expensive quickly when scraped for RAG. Curious whether you're stripping unnecessary tokens here to keep input costs down.
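For concreteness, here's a minimal sketch of the kind of stripping I mean — collapsing whitespace runs and dropping blank lines with just the stdlib. (Exact token savings depend on the tokenizer your model uses; I'm only comparing character counts here as a rough proxy.)

```python
import re

def strip_filler(text: str) -> str:
    """Collapse runs of spaces/tabs and drop blank lines to shrink token count."""
    lines = (re.sub(r"[ \t]+", " ", ln).strip() for ln in text.splitlines())
    return "\n".join(ln for ln in lines if ln)

doc = "# My Project   \n\n\nSome   description   with   extra   spaces.\n\n- item one\n- item two\n"
slim = strip_filler(doc)
print(len(doc), "->", len(slim))  # character counts before/after stripping
```

Obviously real pipelines would go further (dedup, boilerplate removal), but even this cheap pass adds up across a large scrape.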