Show HN: Linggen – A local-first memory layer for your AI (Cursor, Zed, Claude)

25 points, posted 7 hours ago
by linggen

10 Comments

linggen

7 hours ago

Hi HN, I’m the author.

Linggen is a local-first memory layer that gives AI persistent context across repos, docs, and time. It integrates with Cursor / Zed via MCP and keeps everything on-device.

I built this because I kept re-explaining the same context to AI across multiple projects. Happy to answer any questions.

Y_Y

6 hours ago

How can it stay on your device if you use Claude?

linggen

5 hours ago

Good question. Linggen itself always runs locally.

When using Claude Desktop, it connects to Linggen via a local MCP server (localhost), so indexing and memory stay on-device. The LLM can query that local context, but Linggen doesn’t push your data to the cloud.

Claude’s web UI doesn’t support local MCP today — if it ever does, it would just be a localhost URL.
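To make the flow concrete, here is a minimal sketch of a local MCP memory server using FastMCP from the official Python MCP SDK. The tool name, the in-memory MEMORY list, and the keyword matching are illustrative stand-ins, not Linggen's actual code; the real server uses a proper on-disk index and may use a localhost HTTP transport instead of stdio.

    # Illustrative local MCP memory server (not Linggen's actual code).
    # Requires the official MCP Python SDK: pip install "mcp[cli]"
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("local-memory")

    # Stand-in for a real on-disk index; everything stays on this machine.
    MEMORY = [
        "project-b: messages are sent over NATS, subject 'events.outbound'",
        "project-a: auth tokens are rotated by the cron job in ops/rotate.sh",
    ]

    @mcp.tool()
    def search_memory(query: str, max_results: int = 3) -> list[str]:
        """Return the top matching memory slices for a query.

        Retrieval runs entirely locally; only the strings returned here
        are ever visible to the calling LLM client.
        """
        words = query.lower().split()
        hits = [m for m in MEMORY if any(w in m.lower() for w in words)]
        return hits[:max_results]

    if __name__ == "__main__":
        # stdio transport: the MCP client (Claude Desktop, Cursor, Zed)
        # launches this process locally and talks to it over stdin/stdout.
        mcp.run(transport="stdio")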

ithkuil

5 hours ago

Of course, parts of the context (as decided by the MCP server, based on the context, no pun intended) are returned to Claude, which processes them on its servers.

linggen

4 hours ago

Yes, that’s correct — the model only sees the retrieved slices that the MCP server explicitly returns, similar to pasting selected context into a prompt.

The distinction I’m trying to make is that Linggen itself doesn’t sync or store project data in the cloud; retrieval and indexing stay local, and exposure to the LLM is scoped and intentional.
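As a concrete illustration of "scoped and intentional", the boundary can be an explicit filter over what the local index is allowed to hand back; anything filtered out never reaches the model. A sketch under assumed names (the allow-list, the caps, and the Chunk type are all hypothetical, not Linggen's API):

    # Illustrative retrieval boundary (hypothetical, not Linggen's API):
    # only chunks that pass these checks can ever be exposed to the LLM.
    from dataclasses import dataclass

    @dataclass
    class Chunk:
        project: str
        path: str
        text: str
        score: float

    ALLOWED_PROJECTS = {"project-a", "project-b"}  # hypothetical allow-list
    MAX_CHUNKS = 5                # cap on how many slices leave the index
    MAX_CHARS_PER_CHUNK = 1200    # cap on how large each slice can be

    def apply_boundary(candidates: list[Chunk]) -> list[str]:
        """Filter locally retrieved candidates before exposing them to the model."""
        allowed = [c for c in candidates if c.project in ALLOWED_PROJECTS]
        allowed.sort(key=lambda c: c.score, reverse=True)
        # If this returns an empty list, nothing at all is sent to the cloud LLM.
        return [c.text[:MAX_CHARS_PER_CHUNK] for c in allowed[:MAX_CHUNKS]]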

Y_Y

4 hours ago

That's fine, but it's a very different claim to the one you made at first.

In particular, I don't know which parts of my data might get sent to Claude, so even if I hope it's only a small fraction, anything could in principle be transmitted.

linggen

an hour ago

I do have a local model path (Qwen3-4B) for testing.

The tradeoff is simply model quality vs locality, which is why Linggen focuses on controlling retrieval rather than claiming zero data ever leaves the device. Using a local LLM is straightforward if that’s the requirement.
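If the requirement really is that nothing leaves the machine, one common setup is to serve a small model such as Qwen3-4B behind an OpenAI-compatible endpoint on localhost (for example with Ollama or llama.cpp) and send the retrieved slices there instead of to Claude. A rough sketch assuming Ollama's default port and model tag; Linggen's actual local-model wiring may differ:

    # Rough sketch: ask a locally served model, using locally retrieved context.
    # Assumes an OpenAI-compatible server on localhost, e.g. `ollama serve`
    # with a Qwen3 4B model pulled as "qwen3:4b"; adjust names for your setup.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    def ask_local(question: str, retrieved_slices: list[str]) -> str:
        context = "\n\n".join(retrieved_slices)
        resp = client.chat.completions.create(
            model="qwen3:4b",
            messages=[
                {"role": "system", "content": "Answer using only the provided context."},
                {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
            ],
        )
        # Both retrieval and generation happened on this machine.
        return resp.choices[0].message.content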

linggen

2 hours ago

That’s true — Linggen can’t control the behavior of Claude or any other cloud LLM.

What it can control is the retrieval boundary: what gets selected locally and exposed to the model. If nothing is returned, nothing is sent.

If a strict zero-exfiltration setup is required, then a fully local model would indeed be the right option.

gostsamo

6 hours ago

How is it better than keeping project documentation and telling the agent to load the necessary parts? Does it compress the info somehow, or help with context management?

linggen

6 hours ago

Compared to plain docs, Linggen indexes project knowledge into a vector store that the LLM can query directly.

The key difference is that it works across projects. While working on project A, I can ask: “How does project B send messages?” and have that context retrieved and applied, without manually opening or loading docs.
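The retrieval itself is standard local vector search. A generic sketch of the idea with a local ChromaDB store (Linggen's actual index format, collection layout, and metadata are not public, so everything here is illustrative):

    # Generic local vector-store sketch (illustrative, not Linggen's schema).
    # pip install chromadb  -- the index lives entirely on disk on this machine.
    import chromadb

    client = chromadb.PersistentClient(path="./memory-index")
    memory = client.get_or_create_collection("project_memory")

    # Indexing: chunks from several repos/docs go into one local collection,
    # tagged with the project they came from.
    memory.add(
        ids=["b-msg-1", "a-auth-1"],
        documents=[
            "Project B sends messages over NATS using the events.outbound subject.",
            "Project A rotates auth tokens via the ops/rotate.sh cron job.",
        ],
        metadatas=[{"project": "project-b"}, {"project": "project-a"}],
    )

    # Querying while working in project A, but asking about project B:
    hits = memory.query(
        query_texts=["How does project B send messages?"],
        n_results=3,
    )
    print(hits["documents"][0])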