Architecture Overview
Synapses is written in Go and designed as a local-first, graph-based context manager for AI coding agents. It parses a codebase into a relational graph, persists it in SQLite, and serves context slices over the Model Context Protocol (MCP).
Entry Point
The main binary lives at cmd/synapses/main.go. It parses flags, loads configuration, and boots one of two runtime modes.
Runtime Modes
Stdio mode — A single MCP connection over stdin/stdout. This is the default when launched by an AI agent (Claude Code, Cursor, etc.). One process per project.
Daemon mode — An HTTP server that manages multiple projects simultaneously. Useful for federation and multi-repo setups. Listens on a configurable port and exposes the same MCP tool surface over HTTP transport.
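The mode split above can be sketched as a small dispatch in `main`. This is an illustrative sketch only: the actual flag names (`-daemon`, `-port`) and function layout are assumptions, not the real CLI surface.

```go
package main

import (
	"flag"
	"fmt"
)

// selectMode picks the runtime mode from a hypothetical -daemon flag.
// Stdio is the default, matching the agent-launched case.
func selectMode(daemon bool) string {
	if daemon {
		return "daemon" // HTTP server managing multiple projects
	}
	return "stdio" // single MCP connection, one process per project
}

func main() {
	daemon := flag.Bool("daemon", false, "run as multi-project HTTP daemon")
	port := flag.Int("port", 8080, "daemon listen port (illustrative default)")
	flag.Parse()

	mode := selectMode(*daemon)
	if mode == "daemon" {
		fmt.Printf("daemon mode: MCP over HTTP on :%d\n", *port)
	} else {
		fmt.Println("stdio mode: MCP over stdin/stdout")
	}
}
```

Keeping the mode decision in a pure function makes it trivial to test without spinning up either transport.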
Key Packages
| Package | Purpose |
|---|---|
| internal/graph | In-memory graph engine — nodes, edges, BFS ego-graph carving, PageRank |
| internal/mcp | MCP server implementation and tool handlers (get_context, search, validate, etc.) |
| internal/store | SQLite persistence — graph DB and knowledge DB |
| internal/watcher | Filesystem watcher with debounced incremental re-parse |
| internal/brain | LLM enrichment pipeline — summarization, rule enforcement, context building |
| internal/federation | Cross-project graph linking and multi-repo support |
| internal/config | Configuration loading (synapses.json) and defaults |
| internal/parser | AST parsers — 49+ languages via tree-sitter grammars |
Dual Database Design
Synapses maintains two separate SQLite databases:
- Graph DB — Stores the code graph: nodes, edges, call sites, file hashes, embeddings, and FTS indexes. This is the structural representation of the codebase.
- Knowledge DB — Stores agent-facing state: plans, tasks, session state, dynamic rules, memories, annotations, and ADRs. This is the intelligence layer.
Separating them keeps the graph DB fast for frequent rebuilds while the knowledge DB accumulates long-lived state.
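A minimal sketch of the split, assuming the two databases live as separate files under a project data directory (the file names and directory layout here are hypothetical, not the real on-disk layout):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// Store holds the two database locations. Keeping them as separate
// SQLite files means the graph DB can be dropped and rebuilt cheaply
// while the knowledge DB's long-lived state is untouched.
type Store struct {
	GraphDBPath     string // nodes, edges, call sites, hashes, embeddings, FTS
	KnowledgeDBPath string // plans, tasks, rules, memories, annotations, ADRs
}

// NewStore derives both paths from a single data directory.
func NewStore(dataDir string) Store {
	return Store{
		GraphDBPath:     filepath.Join(dataDir, "graph.db"),
		KnowledgeDBPath: filepath.Join(dataDir, "knowledge.db"),
	}
}

func main() {
	s := NewStore(".synapses")
	fmt.Println("graph:", s.GraphDBPath)
	fmt.Println("knowledge:", s.KnowledgeDBPath)
}
```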
File Watcher
The watcher monitors the project directory for changes and triggers incremental re-parsing:
- Filesystem events are collected with a 150ms debounce window to batch rapid saves.
- Content hashing determines whether a file actually changed (avoids re-parsing on metadata-only changes like touches).
- Only changed files are re-parsed, and their nodes/edges are replaced in the graph.
- The graph index is rebuilt asynchronously after parse completes.
Request Flow
A typical get_context request flows through:
- MCP handler receives the tool call
- Graph engine performs BFS ego-graph carving from the target entity
- Token budget constrains the response size
- Brain pipeline optionally enriches with summaries and architectural insight
- Formatted context packet is returned to the agent
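The BFS carving and budget steps can be sketched together: expand outward from the target entity, stopping when the token budget is spent. The flat per-node cost here is an illustrative stand-in; the real engine would price each node by its content.

```go
package main

import "fmt"

// Graph is a minimal adjacency-list model of the in-memory engine.
type Graph map[string][]string

// egoGraph returns nodes reachable from start in BFS order, stopping
// once the remaining budget cannot cover another node.
func egoGraph(g Graph, start string, budget, costPerNode int) []string {
	visited := map[string]bool{start: true}
	queue := []string{start}
	var out []string
	for len(queue) > 0 && budget >= costPerNode {
		n := queue[0]
		queue = queue[1:]
		out = append(out, n)
		budget -= costPerNode
		for _, nb := range g[n] {
			if !visited[nb] {
				visited[nb] = true
				queue = append(queue, nb)
			}
		}
	}
	return out
}

func main() {
	// Hypothetical call graph: handler -> service -> {repo, cache}.
	g := Graph{
		"handler": {"service"},
		"service": {"repo", "cache"},
	}
	// Budget of 300 at 100 tokens/node admits three nodes.
	fmt.Println(egoGraph(g, "handler", 300, 100)) // [handler service repo]
}
```

Because BFS visits by distance, the nodes closest to the target entity survive the budget cut first, which is exactly the locality an ego graph is meant to preserve.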