⭐ Star us · memtrace.io · npm · Discord
Memtrace turns your codebase into a live knowledge graph that AI coding agents can query in milliseconds — every function, class, call edge, and version, across every session, without re-reading files or breaking things they can't see.
Get your fleet on shared structural memory in under 90 seconds.
Structural · zero LLM calls · Bi-temporal · time-travel queries · Replay-aware · zero blind refactors
Three things, every release.
🧭 Run a fleet of coding agents on the same repo without merge hell. Each agent reads the same call graph, sees the same blast radius, inherits the same temporal history. No collisions. No stale context.
🔁 Replay any refactor with full causal awareness. Agents see exactly what depends on what, and what changed when. No more "I refactored a function and 14 tests broke that nobody saw."
⚡ Index a 50k-file repo in under 90 seconds. Rust + Tree-sitter, $0 in API costs, 16 languages, fully local. Your code never leaves your machine.
memtrace-demo.mp4
| Operation | Memtrace | Best alternative | Δ |
|---|---|---|---|
| Index 1,500 files | 1.5s · $0 | Mem0: 31 min · $10–50 | ~1,200× faster |
| Exact symbol query (acc@1, lat) | 96.6% · 0.07 ms | GitNexus: 97.0% · 8.95 ms | 128× lower latency |
| Graph callers recall (Django) | 81.6% | GitNexus: 5.3% | 15.4× |
| Incremental re-index p95 | 42.5 ms | CodeGrapher: 613.7 ms | 14.4× |
| Hybrid acc@1 (Django, 3K cases) | 73.9% | GitNexus: 38.6% | 1.91× |
| RSS / process | 26 MB | ChromaDB: 1,060 MB | 41× tighter |
| Languages | 16+ (Tree-sitter) | varies | — |
Reproducible benchmark suite: benchmarks/. Same machine, same corpora, same adapter contract. Ground truth from Python's ast and pyright LSP — never from any tool's own index. No system gets a home-field advantage in the dataset.
Detailed breakdowns: BENCHMARKS-v0.3.22.md · BENCHMARKS-v0.3.29.md
Memtrace is in private beta. We're rolling out access in batches to keep the feedback loop tight — every cohort lands in a Discord channel where we ship fixes from real bug reports inside a week.
→ Join the waitlist at memtrace.io.
Already have access? npm install -g memtrace and you're indexing in 90 seconds. Full setup below.
🔒 Privacy. Memtrace runs entirely on your machine. Source code never leaves it. The only network traffic is license validation, aggregate node/edge counts, and opt-out crash telemetry — no source, no file paths, no symbol names. Full breakdown: PRIVACY.md, TELEMETRY.md. Disable telemetry with `MEMTRACE_TELEMETRY=off`.
Good code-intelligence tools already exist. GitNexus and CodeGrapherContext build AST-based graphs that work for "what's in my repo right now."
Memtrace is a bi-temporal episodic structural knowledge graph. It builds on the same AST foundation and adds two dimensions:
- Temporal memory — every symbol carries its full version history. Six scoring algorithms (impact, novelty, recency, directional, compound, overview) let agents ask different temporal questions: "what changed?", "what's unexpected?", "what'll break?".
- Cross-service API topology — Memtrace maps HTTP call graphs between repositories, detecting which services call which endpoints across your architecture.
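The cross-service mapping can be pictured as route matching: declared endpoints on one side, outbound HTTP calls on the other. The toy below is purely illustrative (all service and route names are invented; Memtrace's actual resolver lives in the closed-source Rust indexer):

```python
# Illustrative only: a toy version of cross-service endpoint matching.
# Every name here is invented for the example.

def match_endpoints(declared, outbound_calls):
    """Map each outbound HTTP call to the service that declares the route."""
    routes = {(method, path): service
              for service, method, path in declared}
    edges = []
    for caller, method, path in outbound_calls:
        callee = routes.get((method, path))
        if callee is not None:
            edges.append((caller, callee, f"{method} {path}"))
    return edges

declared = [("billing", "POST", "/invoices"), ("users", "GET", "/users/{id}")]
calls = [("frontend", "POST", "/invoices"), ("billing", "GET", "/users/{id}")]

print(match_endpoints(declared, calls))
# frontend → billing, billing → users
```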
On top of that, the structural layer is comprehensive:
| | |
|---|---|
| Symbols are nodes | functions, classes, interfaces, types, endpoints |
| Relationships are edges | CALLS, IMPLEMENTS, IMPORTS, EXPORTS, CONTAINS |
| Community detection | Louvain algorithm identifies architectural modules automatically |
| Hybrid retrieval | Tantivy BM25 + vector embeddings + Reciprocal Rank Fusion + cross-encoder rerank |
| Rust-native | compiled binary, no Python/JS runtime overhead, sub-8 ms p95 query latency |
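Reciprocal Rank Fusion, the fusion step in the hybrid retrieval stack, is a standard algorithm: each result's score is the sum of `1 / (k + rank)` across the ranked lists it appears in. This Python toy only illustrates the merge; Memtrace's actual pipeline is Rust, and the symbol names below are invented:

```python
# Minimal sketch of Reciprocal Rank Fusion (RRF). Example data is invented;
# this is not Memtrace's implementation, just the algorithm it names.

def rrf(rankings, k=60):
    """Fuse ranked lists: score(doc) = sum over lists of 1 / (k + rank)."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["parse_file", "index_repo", "walk_ast"]        # lexical ranking
vectors = ["walk_ast", "parse_file", "resolve_symbol"]  # semantic ranking

print(rrf([bm25, vectors]))
# "parse_file" wins: ranked near the top of both lists
```

The constant `k` (conventionally 60) damps the influence of any single list, which is why RRF is robust without score normalisation.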
The agent doesn't just search your code. It remembers it.
Mem0 and Graphiti are strong conversational memory engines designed for tracking entity knowledge (e.g. User -> Likes -> Apples). They excel at that. For code intelligence specifically, the tradeoff is that they rely on LLM inference to build their graphs — which adds cost and time when processing thousands of source files.
Graphiti processes data through add_episode(), which triggers multiple LLM calls per episode — entity extraction, relationship resolution, deduplication. At ~50 episodes/minute (source), ingesting 1,500 code files takes 1–2 hours.
Mem0 processes data through client.add(), which queues async LLM extraction and conflict resolution per memory item (source). Bulk ingestion with infer=True (default) means every file passes through an LLM pipeline. Throughput is bounded by your LLM provider's rate limits.
Both accumulate $10–50+ in API costs for large codebases because every relationship is inferred rather than parsed.
Memtrace takes a different approach: it indexes 1,500 files in 1.2–1.8 seconds for $0.00 — no LLM calls, no API costs, no rate limits. Native Tree-sitter AST parsers resolve deterministic symbol references (CALLS, IMPLEMENTS, IMPORTS) locally. The tradeoff is that Memtrace is purpose-built for code — it doesn't handle conversational entity memory the way Mem0 and Graphiti do.
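The "parsed, not inferred" idea is easy to see with Python's stdlib `ast` module (the same module the benchmark ground truth uses). The snippet below extracts CALLS edges deterministically, with no model and no network; Memtrace itself does this with Tree-sitter parsers in Rust:

```python
# Sketch of LLM-free edge extraction: call edges read straight off the AST.
# Uses Python's stdlib `ast`; Memtrace uses Tree-sitter in Rust instead.
import ast

source = """
def load(path):
    return open(path).read()

def main():
    data = load("config.json")
"""

tree = ast.parse(source)
edges = []
for fn in ast.walk(tree):
    if isinstance(fn, ast.FunctionDef):
        for node in ast.walk(fn):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                edges.append((fn.name, node.func.id))  # caller CALLS callee

print(edges)  # deterministic output, $0, no rate limits
```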
Memtrace exposes a full structural toolkit via the Model Context Protocol.
Tool categories:

- Search & Discovery
- Relationships
- Impact Analysis
- Code Quality
- Temporal Analysis
- Graph Algorithms
- API Topology
- Indexing & Watch
Memtrace ships skills/guidance that teach agents how to use the graph. They fire automatically based on what you ask — no prompt engineering required.
| Skill | You say… |
|---|---|
| `memtrace-search` | "find this function", "where is X defined" |
| `memtrace-relationships` | "who calls this", "show class hierarchy" |
| `memtrace-evolution` | "what changed this week", "how did this evolve" |
| `memtrace-impact` | "what breaks if I change this", "blast radius" |
| `memtrace-quality` | "find dead code", "complexity hotspots" |
| `memtrace-graph` | "show me the architecture", "find bottlenecks" |
| `memtrace-api-topology` | "list API endpoints", "service dependencies" |
| `memtrace-index` | "index this project", "parse this codebase" |
| `memtrace-cochange` | "what else changes with this", "hidden coupling" |
Plus 8 workflow skills that chain multiple tools with decision logic: memtrace-first, codebase-exploration, change-impact-analysis, incident-investigation, refactoring-guide, continuous-memory, episode-replay, and session-continuity.
Six scoring algorithms for different temporal questions:
| Mode | Best for |
|---|---|
| `compound` | General-purpose "what changed?" — weighted blend of impact, novelty, recency |
| `impact` | "What broke?" — ranks by blast radius (`in_degree^0.7 × (1 + out_degree)^0.3`) |
| `novel` | "What's unexpected?" — anomaly detection via surprise scoring |
| `recent` | "What changed near the incident?" — exponential time decay |
| `directional` | "What was added vs removed?" — asymmetric scoring |
| `overview` | Quick module-level summary |
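The `impact` formula rewards symbols that many things call (high fan-in) more than symbols that call many things (fan-out enters only through the damped second factor). A toy ranking over an invented call graph, using the formula as printed:

```python
# The `impact` mode's blast-radius formula: in_degree^0.7 * (1 + out_degree)^0.3
# Graph and symbol names below are invented for illustration.

def impact_score(in_degree, out_degree):
    return in_degree ** 0.7 * (1 + out_degree) ** 0.3

graph = {
    # symbol: (callers, callees)
    "parse_config": (14, 2),
    "main": (0, 9),
    "helpers.log": (40, 1),
}

ranked = sorted(graph, key=lambda s: impact_score(*graph[s]), reverse=True)
print(ranked)  # widely-called utilities dominate; pure entry points rank last
```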
Uses Structural Significance Budgeting to surface the minimum set of changes covering ≥80% of total significance.
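A hedged sketch of that budgeting idea: greedily take the highest-significance changes until they cover at least 80% of total significance. Memtrace's actual budgeting may differ; this only shows the principle, with invented change names:

```python
# Greedy significance budgeting: smallest prefix of changes (by score)
# covering >= `coverage` of total significance. Illustrative only.

def significance_budget(changes, coverage=0.8):
    total = sum(changes.values())
    picked, covered = [], 0.0
    for name, score in sorted(changes.items(), key=lambda kv: -kv[1]):
        if covered >= coverage * total:
            break
        picked.append(name)
        covered += score
    return picked

changes = {"auth.verify": 9.0, "db.migrate": 5.0, "utils.fmt": 1.0, "docs": 0.5}
print(significance_budget(changes))  # two changes cover > 80% of 15.5
```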
| Editor / Agent | MCP Tools (25+) | Skills / Guidance | Install |
|---|---|---|---|
| Claude Code | ✅ | ✅ | npm install -g memtrace — fully automatic |
| Claude Desktop | ✅ | ✅ | Automatic — shared with Claude Code |
| Cursor (v2.4+) | ✅ | ✅ | npm install -g memtrace — fully automatic |
| Codex CLI | ✅ | ✅ | npm install -g memtrace — fully automatic |
| Windsurf | ✅ | ✅ | npm install -g memtrace — fully automatic |
| VS Code (Copilot) | ✅ | ✅ | npm install -g memtrace — fully automatic |
| Hermes | ✅ | ✅ | npm install -g memtrace — fully automatic |
| OpenCode | ✅ | ✅ | npm install -g memtrace — fully automatic |
| Kiro | ✅ | Steering | npm install -g memtrace — fully automatic |
| Cline / Roo Code | ✅ | — | Add MCP server manually |
| Any MCP client | ✅ | — | Add MCP server manually |
Skills are workflow prompts that teach the agent how to chain tools. Kiro does not use SKILL.md, so Memtrace writes equivalent steering files automatically instead.
```bash
npm install -g memtrace
```

Handles everything — binary, 17 skills, MCP server, plugin, marketplace. One command, both editors.
For manual setup:
```bash
claude plugin marketplace add https://github.com/syncable-dev/memtrace-public.git
claude plugin install memtrace-skills@memtrace --scope user
claude mcp add memtrace -- memtrace mcp -e MEMTRACE_ARCADEDB_BOLT_URL=bolt://localhost:7687
```

`npm install -g memtrace` handles everything automatically. Cursor v2.4+ reads the same SKILL.md format as Claude.
For project-local install (skills travel with your repo):
```bash
npx memtrace-skills install --only cursor --local
```

The installer also writes skills/guidance and MCP configuration for the newer agent surfaces:
| Agent | Global skills / guidance | Global MCP config | Project-local support |
|---|---|---|---|
| Codex | `~/.agents/skills/` | `~/.codex/config.toml` | `.agents/skills/`, `.codex/config.toml` |
| Windsurf | `~/.codeium/windsurf/skills/` | `~/.codeium/windsurf/mcp_config.json` | `.windsurf/skills/`; MCP remains user-level |
| VS Code / Copilot | `~/.copilot/skills/` | VS Code user `mcp.json` | `.github/skills/`, `.vscode/mcp.json` |
| Hermes | `~/.hermes/skills/` | `~/.hermes/config.yaml` | user-level only |
| OpenCode | `~/.config/opencode/skills/` | `~/.config/opencode/opencode.json` | `.opencode/skills/`, `opencode.json` |
| Kiro | `~/.kiro/steering/` | `~/.kiro/settings/mcp.json` | `.kiro/steering/`, `.kiro/settings/mcp.json` |
Install only selected integrations:
```bash
npx memtrace-skills install --only codex,windsurf,vscode,hermes,opencode,kiro
```

Install project-local config where supported:

```bash
npx memtrace-skills install --only codex,vscode,opencode,kiro --local
```

For Cline, Roo Code, or any client that only needs MCP tools, add this server manually:
```json
{
  "mcpServers": {
    "memtrace": {
      "command": "memtrace",
      "args": ["mcp"],
      "env": {
        "MEMTRACE_ARCADEDB_BOLT_URL": "bolt://localhost:7687"
      }
    }
  }
}
```

| Editor | Config file |
|---|---|
| Windsurf | ~/.codeium/windsurf/mcp_config.json |
| VS Code (Copilot) | .vscode/mcp.json in your project root |
| Codex | ~/.codex/config.toml or .codex/config.toml |
| Hermes | ~/.hermes/config.yaml |
| OpenCode | ~/.config/opencode/opencode.json or project opencode.json |
| Kiro | ~/.kiro/settings/mcp.json or .kiro/settings/mcp.json |
| Cline | Cline MCP settings in the extension panel |
```bash
memtrace uninstall        # removes skills, MCP server, plugin, settings
npm uninstall -g memtrace
```

Already ran `npm uninstall` first? The cleanup script is at `~/.memtrace/uninstall.js`:

```bash
node ~/.memtrace/uninstall.js
```

`npm install -g memtrace` ships a small main package + a platform-specific binary (one of `@memtrace/darwin-arm64`, `@memtrace/linux-x64`, `@memtrace/win32-x64`). If `memtrace start` ever says "Could not find binary for your platform":
```bash
# Re-run install, asking npm to keep optional deps
npm install -g memtrace --include=optional

# Or refresh from latest
memtrace install                        # built-in self-update
npm install -g memtrace@latest --force

# Or install the platform binary directly (Apple Silicon shown — swap for your platform)
npm install -g @memtrace/darwin-arm64
```

This typically only happens on machines where npm is configured to skip optional dependencies (corporate npmrc, certain CI caches).
Rust · Go · TypeScript · JavaScript · Python · Java · C · C++ · C# · Swift · Kotlin · Ruby · PHP · Dart · Scala · Perl — and more via Tree-sitter.
Memtrace runs locally — first index is CPU/RAM intensive, subsequent queries and incremental indexing are much lighter.
| | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 8+ cores for large monorepos |
| Memory | 8 GB RAM | 16–32 GB RAM |
| Disk | 5 GB free | 10–20 GB free |
| GPU | Not required | Not required |
| Node.js | ≥ 18 | Current LTS |
| Git | Required for temporal analysis | Full repo history for best results |
Since v0.3.17 Memtrace ships with opt-out telemetry that helps us catch crashes, regressions, and performance issues before someone files an issue.
- Collected: app-start events, indexing/embedding durations, panic reports, WARN/ERROR log lines from Memtrace's own crates.
- NOT collected: source code, file contents, symbol names, embeddings, repository names or paths, branch names, commit data.
- Sanitisation: every payload is run through a sanitiser that strips home-dir paths, token-shaped strings, and email addresses before it touches disk.
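The sanitisation pass can be sketched with a few regexes. These patterns are invented approximations for illustration; the real sanitiser is Memtrace-internal and may behave differently:

```python
# Illustrative sketch of the sanitisation described above. The regexes below
# are assumptions, not Memtrace's actual rules.
import re

HOME = re.compile(r"/(?:home|Users)/[^\s/]+")          # home-dir paths
TOKEN = re.compile(r"\b[A-Za-z0-9_\-]{32,}\b")         # token-shaped strings
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")         # email addresses

def sanitise(line: str) -> str:
    line = HOME.sub("<home>", line)
    line = TOKEN.sub("<token>", line)
    line = EMAIL.sub("<email>", line)
    return line

print(sanitise("panic at /Users/alice/repo: user bob@example.com"))
```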
Disable with one env var:
```bash
MEMTRACE_TELEMETRY=off memtrace start   # per-run
export MEMTRACE_TELEMETRY=off           # permanent (~/.zshrc, ~/.bashrc)
```

Or in your editor's MCP config: `"env": { "MEMTRACE_TELEMETRY": "off" }`.
Full breakdown — including the on-disk queue layout, where data is stored on the receiving end, and how to inspect what would have shipped — is in TELEMETRY.md.
Proprietary EULA. Free to use during private beta and after general availability for individual developers. Indexer + database (MemDB) are closed-source.
Benchmark suite under MIT in benchmarks/ — fully reproducible, no proprietary code required to run them.
memtrace.io · Discord · npm · Issues
Built by Syncable · Copenhagen 🇩🇰