It’s a local code analysis engine that builds a semantic index of your repo (symbols, imports, comments, etc.), adds vector embeddings, and exposes search + context tools you can plug into LLM workflows. Think smarter code search + context assembly, but offline and fully under your control.
My devs like that it doesn’t try to be too “magical”: it’s predictable, hackable, and works well with existing workflows (like Aider or LiteLLM).
As a PM, I like that it respects privacy, is extensible, and helps us experiment with AI-assisted workflows without vendor lock-in.
If you’re exploring custom LLM agents or just want better context for local dev tooling, it’s worth a look.
Feels like a great foundation for building your own Copilot-like tools without sending code to the cloud. Very modular, easy to extend, and works surprisingly well out of the box. Bonus: comes with CLI tools and supports OpenAI function-calling + MCP.
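To make the "semantic index + vector search" idea concrete, here is a minimal sketch of how such a tool might rank code chunks against a query. This is purely illustrative, not the project's actual implementation: a real engine would use a trained embedding model and a proper vector store, whereas here a toy bag-of-tokens vector and cosine similarity stand in for both.

```python
# Hypothetical sketch of embedding-based code search (not the tool's real code).
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": token-frequency vector, a stand-in for a real model.
    return Counter(re.findall(r"[a-zA-Z_]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse token-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(index, query, k=2):
    # Rank indexed code chunks by similarity to the query embedding.
    q = embed(query)
    ranked = sorted(index, key=lambda c: cosine(embed(c["code"]), q), reverse=True)
    return [c["path"] for c in ranked[:k]]

# Example index: one entry per code chunk (paths/snippets are made up).
chunks = [
    {"path": "auth.py", "code": "def login(user, password): ..."},
    {"path": "db.py", "code": "def connect(host, port): ..."},
    {"path": "search.py", "code": "def query_index(terms): ..."},
]
print(search(chunks, "user login password", k=1))  # -> ['auth.py']
```

The appeal of this architecture is that everything above runs locally: the index, the embeddings, and the retrieval step, so the retrieved context can be handed to an LLM without the repo ever leaving your machine.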