49+ Language Parsers  ·  45+ MCP Tools  ·  Local-first. Always.

The nervous system
for AI agents.

Synapses carries the signals your agents need to act precisely — structured world model, persistent memory, and coordination, running entirely on your machine.

$ brew install SynapsesOS/tap/synapses
<50ms
Context in milliseconds. Not megabytes.
49+ langs
Typed graph, exact answers. Not guesses.
Sovereignty
Your data, your storage. We never see it.

What we stand for

Three values.
Non-negotiable.

A nervous system has three non-negotiable properties. Ours too.

Speed

Context delivered in milliseconds — not seconds. BFS ego-graph slices give agents exactly what they need for the current task. No full-file dumps. No round trip to a remote index. The graph runs on your machine, one hop away.

Context in <50ms  ·  Local graph  ·  No cloud round-trip
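A BFS ego-graph slice is simply a bounded breadth-first walk out from a seed symbol: everything within a few hops, nothing else. A minimal Python sketch of the idea (not the SynapsesOS implementation; the call graph here is invented):

```python
from collections import deque

def ego_slice(graph, seed, max_depth=2):
    """Collect every node within max_depth hops of seed.

    graph is a plain adjacency mapping: node -> list of neighbors.
    The agent receives this bounded neighborhood instead of
    whole-file dumps or the full graph.
    """
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue  # stop expanding past the hop limit
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen

# Invented example call graph
calls = {
    "NewWalker":  ["parseFile"],
    "parseFile":  ["readSource"],
    "readSource": ["openFile"],
}
print(sorted(ego_slice(calls, "NewWalker", max_depth=2)))
# depth 2 reaches parseFile and readSource, but not openFile
```

Because the slice is computed against a local graph, the cost is a handful of dictionary lookups, which is why sub-50ms answers are plausible without any remote index.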

Accuracy

Typed graph edges — CALLS, IMPLEMENTS, IMPORTS, DATA_FLOW — not fuzzy embeddings. When your agent asks "what breaks if I change this?", the answer is precise and complete. No hallucinated relationships. No missing callers. Exact.

Typed graph  ·  Exact impact analysis  ·  No embeddings
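To see why typed edges make impact analysis exact, consider a toy sketch: edges are (source, type, target) triples, and "what breaks" is a plain edge lookup rather than a similarity search. The entity names below are invented for illustration:

```python
# Invented edge list; the edge types mirror the ones named above.
EDGES = [
    ("cmd/main.go:main",    "CALLS",      "parser.go:NewWalker"),
    ("watcher.go:rebuild",  "CALLS",      "parser.go:NewWalker"),
    ("parser.go:NewWalker", "CALLS",      "parser.go:parseFile"),
    ("parser.go:Walker",    "IMPLEMENTS", "ast.go:Visitor"),
]

def impacted_by(entity, edge_type="CALLS"):
    """Who breaks if entity's signature changes? Exactly the
    sources of typed edges pointing at it -- nothing fuzzy."""
    return {src for src, kind, dst in EDGES
            if kind == edge_type and dst == entity}

print(sorted(impacted_by("parser.go:NewWalker")))
```

Every caller is either in the edge set or it is not; there is no ranking threshold below which a real caller silently disappears, which is the failure mode of embedding retrieval.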

Sovereignty

Your data, your storage. Always. SynapsesOS is the engine — you choose where it runs and where the data lives. We never see it, store it, or touch it.

Solo: Local machine. SQLite at ~/.cache/synapses/
Cloud: Your own VPS or cloud VM. Your infra.
Team: Shared storage on GitHub, S3, or anywhere you choose.
Synapses never stores your data  ·  Bring your own storage

The Missing Piece

A brain without
a nervous system.

Without a nervous system, a brain can't sense its environment, can't remember between states, and can't coordinate action. It's brilliant and completely isolated. That's the state of every AI agent running on your codebase today.

No sensory input

Your agent sees raw text, not structure. It doesn't know what calls what, what implements what, or what breaks when something changes. It's operating without nerves.

Memory resets every session

The nervous system doesn't reset every time you go to sleep. But your agent's does. Decisions made, patterns learned, architectural lessons — gone. Same context rebuilt every time. Same mistakes made twice.

Multiple agents without coordination

Two signals firing without coordination cause damage. Two agents editing the same file, making conflicting decisions, duplicating work — with nothing to connect them. The more agents, the more noise.

Synapses is the nervous system that was missing.

How It Works

Connect the nervous system

01

Install & Index

One binary. Run synapses init and SynapsesOS parses your entire codebase into a typed graph — functions, structs, interfaces, call edges, data flows. Stored locally in SQLite. No cloud, no data leaving your machine.

$ synapses init
Parsed 4,150 functions
Indexed 15,382 edges
Watching for changes
02

Connect your AI agent

SynapsesOS speaks MCP — the Model Context Protocol. Add it to Claude Code, Cursor, Windsurf, or any MCP-compatible agent in seconds. Your agent now has 45+ tools for graph queries, memory, and coordination.

// .claude/mcp.json
{
  "synapses": {
    "command": "synapses",
    "args": ["serve"]
  }
}
03

Your agent acts precisely

Instead of grep noise, your agent gets precise BFS ego-graph slices — exactly what's needed, nothing more. Impact analysis, call chains, session memory, multi-agent coordination. The signals fire.

get_context("NewWalker")
70 callers found
3 breaking changes detected
Memory: last seen 2 sessions ago

What a nervous system does

Everything an agent needs to actually understand its world.

45+ MCP tools. Context, memory, coordination, and governance — in one local binary.

World Model

Sensory System

AST-based typed code graph across 49+ languages. Agents sense structure, not text — exact relationships, not similarity guesses. Precise BFS slices of exactly what's relevant to the current task. Not file dumps.

Memory

Persistent Memory

Three-tier memory: live session state, anchored code insights that auto-invalidate when code changes, and durable project knowledge. What was learned last Tuesday is available next Monday.
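One way an anchored insight can auto-invalidate is by fingerprinting the code span it is attached to and dropping the insight when the fingerprint changes. A toy Python sketch of that mechanism (the real SynapsesOS invalidation logic may differ):

```python
import hashlib

def fingerprint(code: str) -> str:
    """Content hash of the code span an insight is anchored to."""
    return hashlib.sha256(code.encode()).hexdigest()

class AnchoredInsight:
    def __init__(self, note: str, anchored_code: str):
        self.note = note
        self.anchor = fingerprint(anchored_code)

    def still_valid(self, current_code: str) -> bool:
        # The insight survives only while the anchored code is unchanged.
        return fingerprint(current_code) == self.anchor

# Invented example: a note attached to a function body
insight = AnchoredInsight(
    "walker swallows parse errors",
    "func NewWalker() *Walker { ... }",
)
print(insight.still_valid("func NewWalker() *Walker { ... }"))      # True
print(insight.still_valid("func NewWalker(l Lang) *Walker { ... }"))  # False
```

This is what keeps next Monday's recall trustworthy: stale insights about rewritten code expire instead of misleading the agent.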

Coordination

Coordination Without Collision

Work claims, conflict detection, message bus, agent registry. Multiple agents share one graph without conflict. Built for the world where you run more than one.
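Work claims are the simplest of these primitives: an agent asks to own a path before editing it, and a second claim on the same path is refused. A toy sketch of the idea, not the real claim API:

```python
class ClaimBoard:
    """Toy work-claim registry: first agent to claim a path owns it."""

    def __init__(self):
        self.claims = {}  # path -> owning agent id

    def claim(self, agent: str, path: str) -> bool:
        # setdefault grants the claim only if the path is unowned;
        # a False return is a collision caught before any edit happens.
        owner = self.claims.setdefault(path, agent)
        return owner == agent

board = ClaimBoard()
print(board.claim("agent-a", "parser/walker.go"))  # True: claim granted
print(board.claim("agent-b", "parser/walker.go"))  # False: already owned
print(board.claim("agent-b", "cmd/main.go"))       # True: different file
```

Backed by the shared graph, the same check works across processes and machines, which is what keeps two agents from rewriting the same file in opposite directions.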

Analysis

Signal Propagation

"What breaks if I change this?" Answered precisely. Full typed relationship graph — not similarity matching. The difference between approximation and truth. Agents act with confidence instead of guessing.

Sovereignty

Local-First by Conviction

No latency penalty. No trust requirement. No dependency. No conflict of interest. The nervous system runs where the brain lives. Your world model belongs to you, not a vendor.

Governance

Architectural Rules Enforcement

Define constraints between modules — "graph layer cannot call store layer." Agents validate plans before writing code. Violations caught before they're committed. Your architecture stays intentional.
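Rule enforcement reduces to checking call edges against forbidden module pairs. A minimal sketch with invented rule and module names (SynapsesOS's actual rule syntax may differ):

```python
# "graph layer cannot call store layer", as a forbidden module pair.
FORBIDDEN = {("graph", "store")}

# Invented mapping from entity to owning module.
MODULE_OF = {
    "graph/query.go:Slice": "graph",
    "store/sqlite.go:Put":  "store",
}

def violations(call_edges):
    """Flag CALLS edges that cross a forbidden module boundary."""
    return [(src, dst) for src, dst in call_edges
            if (MODULE_OF[src], MODULE_OF[dst]) in FORBIDDEN]

planned = [("graph/query.go:Slice", "store/sqlite.go:Put")]
print(violations(planned))
# the graph -> store call is reported before any code is committed
```

Because the agent can run this check against its *planned* edges, a violation surfaces at plan time rather than in code review.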

Generalization

Code now. Every domain soon.

The graph engine is domain-abstract by design. Code-specific logic lives in a pluggable parser layer. Infrastructure, APIs, docs next. The same nervous system connects every part of your system.
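The pluggable parser layer can be pictured as one contract: a parser turns any artifact into typed nodes and edges, and the engine only ever sees the graph, never the domain. A hypothetical Python sketch with an invented Terraform parser (the real parser interface is not shown here):

```python
from typing import Iterable, Protocol, Tuple

Edge = Tuple[str, str, str]  # (source, edge type, target)

class Parser(Protocol):
    """Hypothetical pluggable-parser contract: anything that can
    emit typed edges from a source artifact fits."""
    def parse(self, text: str) -> Iterable[Edge]: ...

class TerraformParser:
    def parse(self, text: str) -> Iterable[Edge]:
        # Real parsing elided; only the shape of the output matters.
        return [("module.vpc", "DEPENDS_ON", "provider.aws")]

def index(parser: Parser, text: str) -> list:
    # The domain-abstract engine consumes edges, nothing else.
    return list(parser.parse(text))

print(index(TerraformParser(), 'module "vpc" { ... }'))
```

Swap `TerraformParser` for an OpenAPI or docs parser and the context, memory, and coordination layers above it are unchanged.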

Protocol-Level Neutrality

One MCP server. Any agent.

SynapsesOS speaks the Model Context Protocol — the open standard for agent tools. If your agent supports MCP, it works. We don't care if the agent is Claude, Cursor, or something that doesn't exist yet.

Claude Code
Cursor
Windsurf
Zed
Gemini
Any MCP Client

Built on the open  Model Context Protocol  — not locked to any vendor.

See the difference

Graph queries vs grep

Ask the same question two ways. One gives you files. The other gives you understanding.

without synapses
grep
$ grep -r "NewWalker" .
./parser/parser.go:34:func NewWalker(
./parser/walker_test.go:12:    w := NewWalker(
./cmd/main.go:89:    walker := NewWalker(
... 67 more matches

✗ No call hierarchy
✗ No impact analysis
✗ No cross-file context
✗ No memory of past sessions
with synapses
get_context
 get_context("NewWalker")

 NewWalker — parser.go:34
  70 callers across 12 packages
  Implements: Walker interface
  Calls: tree-sitter, LuaParser×66

 Impact: changing signature breaks
  cmd/main.go, watcher.go + 4 more

 Memory: last modified 2 sessions ago
  "refactored error handling in walker"

FAQ

Common questions

What is Synapses?
Synapses is the nervous system for AI agents. It gives agents a structured world model of your codebase, persistent memory that survives across sessions, multi-agent coordination, and architectural rules enforcement — in one local binary. Not an LLM. Not a plugin. The complete set of systems that turn isolated agents into ones that sense, remember, and coordinate.

Is Synapses an LLM?
No. We do not reason. We make reasoning possible. SynapsesOS sits between your AI agent and the world it operates in, providing the structured world model, persistent memory, and coordination that LLMs cannot provide for themselves. The LLM is the brain. Synapses is the nervous system — the part that senses the environment, carries signals, and remembers what happened.

How is this different from RAG or embeddings?
RAG retrieves similar text using embeddings. SynapsesOS maintains a typed relational graph of actual code structure — CALLS, IMPLEMENTS, IMPORTS, DATA_FLOW edges between real entities. It can tell your agent exactly which functions call a given function, what an interface implements, or precisely what breaks when a signature changes. The difference is the difference between approximation and truth.

Who owns my data?
You do, entirely. SynapsesOS is the engine — you choose where it runs and where the data lives. Local machine? Data stays in ~/.cache/synapses/. Your own cloud VM? Data stays on your infrastructure. Shared team storage? Wherever you choose. We never store your data, never see it, and have no cloud storage of any kind. Cloud indexing services need your code to leave your machine. We don't.

How does it connect to my agent?
SynapsesOS implements the Model Context Protocol (MCP) — the open standard supported by Claude Code, Cursor, Windsurf, Zed, and other AI coding assistants. Add one entry to your MCP config pointing at the synapses binary, and your agent immediately gains 45+ tools: structured context queries, session memory, impact analysis, multi-agent coordination, and architectural rule enforcement.

Can my team share one graph?
Yes. Run a central SynapsesOS instance with shared storage — on GitHub, S3, or any storage your team controls. Multiple agents coordinate through the shared graph: work claims prevent collisions, conflict detection surfaces overlapping edits, and the agent message bus lets them share decisions.

Which languages are supported?
SynapsesOS supports 49+ languages across all major ecosystems — Go, TypeScript, Python, Java, Rust, C, C++, C#, Swift, Ruby, PHP, Kotlin, Scala, Lua, Elixir, Haskell, OCaml, Dart, R, Julia, Perl, and many more, plus a generic fallback parser for any unlisted language.

Is it only for code?
The graph engine is domain-abstract by design. Code-specific logic lives in a pluggable parser layer. Infrastructure (Terraform, k8s), APIs (OpenAPI, GraphQL), documentation, and data pipelines are next. Swap the parser, and the same architecture — context, memory, coordination, rules — works for any structured domain.

How do I get in touch?
Email us at hello@synapsesos.com — we read everything. For bugs and feature requests, GitHub Issues is the best place: github.com/SynapsesOS/synapses.

Download

Ready to fire your first synapse?

One binary. Zero telemetry. Your code stays on your machine.
Give your agents the nervous system they're missing. Free and open source.

Homebrew

brew install SynapsesOS/tap/synapses

Go install

go install github.com/SynapsesOS/synapses@latest

View all releases on GitHub  ·  MIT License  ·  Free forever