SynapsesOS
Internals

Architecture Overview

Synapses is written in Go and designed as a local-first, graph-based context manager for AI coding agents. It parses a codebase into a relational graph, persists that graph in SQLite, and serves context slices to agents over the MCP protocol.

Entry Point

The main binary lives at cmd/synapses/main.go. It parses flags, loads configuration, and boots one of two runtime modes.

Runtime Modes

Stdio mode — A single MCP connection over stdin/stdout. This is the default when launched by an AI agent (Claude Code, Cursor, etc.). One process per project.

Daemon mode — An HTTP server that manages multiple projects simultaneously. Useful for federation and multi-repo setups. Listens on a configurable port and exposes the same MCP tool surface over HTTP transport.

Key Packages

internal/graph — In-memory graph engine: nodes, edges, BFS ego-graph carving, PageRank
internal/mcp — MCP server implementation and tool handlers (get_context, search, validate, etc.)
internal/store — SQLite persistence: graph DB and knowledge DB
internal/watcher — Filesystem watcher with debounced incremental re-parse
internal/brain — LLM enrichment pipeline: summarization, rule enforcement, context building
internal/federation — Cross-project graph linking and multi-repo support
internal/config — Configuration loading (synapses.json) and defaults
internal/parser — AST parsers: 49+ languages via tree-sitter grammars

Dual Database Design

Synapses maintains two separate SQLite databases:

  • Graph DB — Stores the code graph: nodes, edges, call sites, file hashes, embeddings, and FTS indexes. This is the structural representation of the codebase.
  • Knowledge DB — Stores agent-facing state: plans, tasks, session state, dynamic rules, memories, annotations, and ADRs. This is the intelligence layer.

Separating them keeps the graph DB fast for frequent rebuilds while the knowledge DB accumulates long-lived state.

File Watcher

The watcher monitors the project directory for changes and triggers incremental re-parsing:

  1. Filesystem events are collected with a 150ms debounce window to batch rapid saves.
  2. Content hashing determines whether a file actually changed (avoids re-parsing on metadata-only changes like touches).
  3. Only changed files are re-parsed, and their nodes/edges are replaced in the graph.
  4. The graph index is rebuilt asynchronously after parsing completes.

Request Flow

A typical get_context request flows through:

  1. MCP handler receives the tool call
  2. Graph engine performs BFS ego-graph carving from the target entity
  3. Token budget constrains the response size
  4. Brain pipeline optionally enriches with summaries and architectural insight
  5. Formatted context packet is returned to the agent