## TL;DR

`kit` gives any LLM-powered app IDE-grade knowledge of a repo. Search, extract symbols, build dependency graphs, or ask for AI summaries in two lines of Python. Or expose everything as JSON-schema tools the model can call by itself.
```python
from kit import Repository

repo = Repository("./")
print(repo.search_text("TODO"))
print(repo.get_summarizer().summarize_file("api/app.py"))
```
Prefer MCP? Run `python -m kit.mcp` and you're all set.
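If your MCP client launches servers itself, hooking `kit` up is a single config entry. The sketch below follows the common `mcpServers` stdio convention; the exact file location and key names depend on your client, and a client connecting to an already-running server would use a URL entry instead:

```json
{
  "mcpServers": {
    "kit": {
      "command": "python",
      "args": ["-m", "kit.mcp"]
    }
  }
}
```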
## Why we built it

We use `kit` to power Cased's DevOps agent. We think the building blocks for developer tools should be free and open source.
## Quick tour

| Capability | One-liner | Use-case |
|---|---|---|
| File tree | `repo.get_file_tree()` | tree-view in chat |
| Text search | `repo.search_text("AuthError")` | "grep inside ChatGPT" |
| Symbols | `repo.extract_symbols("services/payments.py")` | jump-to-definition w/o LSP |
| Dependencies | `repo.get_dependency_analyzer("python").build_dependency_graph()` | import spaghetti viz |
| Summaries | `repo.get_code_summary(file_path="api/app.py")` | PR descriptions |
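Strung together, the one-liners above look like this (the paths are illustrative):

```python
from kit import Repository

repo = Repository("./")

# Structure: a tree view the model can render in chat.
tree = repo.get_file_tree()

# Search: grep-style matches, ready to paste into a conversation.
hits = repo.search_text("AuthError")

# Symbols: jump-to-definition data without running an LSP.
symbols = repo.extract_symbols("services/payments.py")

# Dependencies: an import graph for visualization.
graph = repo.get_dependency_analyzer("python").build_dependency_graph()

# Summaries: an LLM-written description, e.g. for a PR body.
summary = repo.get_code_summary(file_path="api/app.py")
```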
## Deep dive – a multi-turn session

- Open repo:

  ```json
  {
    "name": "open_repository",
    "arguments": { "path_or_url": "/workspace/my-app" }
  }
  ```

- User: “Where is `calculate_price` defined?”
- Model ➜ `search_code` → finds `services/pricing.py:120`.
- Model ➜ `get_file_content` for that file, responds with snippet.
- User: “Why does it return NaN?”
- Model ➜ `extract_symbols` → isolates the function → `get_code_summary` → explains in plain English.
No if/else orchestration in your backend—just register the schema.
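Concretely, “register the schema” means handing the model tool definitions like the one below. This is a hedged sketch in the common JSON-schema function-calling shape, not kit's shipped MCP definition; only the tool name and `path_or_url` argument come from the session above, and the description is illustrative:

```json
{
  "name": "open_repository",
  "description": "Open a local path or remote URL so later tools can query it.",
  "parameters": {
    "type": "object",
    "properties": {
      "path_or_url": { "type": "string" }
    },
    "required": ["path_or_url"]
  }
}
```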
## Running in-process vs. server

- Server mode – `python -m kit.mcp`; great for polyglot stacks, and isolates heavy indexing.
- In-process – import `Repository`, wrap what you need, and register those wrappers as tools; zero IPC overhead (sketched below).
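Here is a minimal in-process sketch, assuming an OpenAI-style function-calling client; the schemas and dispatch table are my own illustration built on the `Repository` methods from the quick tour, not kit's own tool definitions:

```python
import json

from kit import Repository
from openai import OpenAI  # assumption: any function-calling client works similarly

repo = Repository("/workspace/my-app")
client = OpenAI()

# Illustrative schemas wrapping two Repository methods from the quick tour.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "search_text",
            "description": "Search the repo for a string or pattern.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "extract_symbols",
            "description": "List the symbols defined in one file.",
            "parameters": {
                "type": "object",
                "properties": {"file_path": {"type": "string"}},
                "required": ["file_path"],
            },
        },
    },
]

# One dispatch entry per tool; no per-tool if/else in the chat loop below.
DISPATCH = {
    "search_text": lambda args: repo.search_text(args["query"]),
    "extract_symbols": lambda args: repo.extract_symbols(args["file_path"]),
}

messages = [{"role": "user", "content": "Where is calculate_price defined?"}]
while True:
    resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)
    msg = resp.choices[0].message
    if not msg.tool_calls:   # model answered in plain text; we're done
        print(msg.content)
        break
    messages.append(msg)     # keep the tool-call turn in the transcript
    for call in msg.tool_calls:
        result = DISPATCH[call.function.name](json.loads(call.function.arguments))
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(result, default=str),
        })
```

The loop stays generic: adding a capability is one schema plus one dispatch entry, with no per-tool branching in the backend.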
## Get started

```bash
pip install cased-kit   # core + FastAPI server
python -m kit.mcp       # starts on localhost:8000
```
Docs: https://kit.cased.com | GitHub: https://github.com/cased/kit
Happy hacking!