
Core Concepts

This page explains the key ideas that make Synapses work. Understanding these concepts will help you get the most out of Synapses and configure it effectively for your projects.

Code Graph

At the heart of Synapses is a directed graph that models your codebase’s structure.

Nodes

Every meaningful code entity becomes a node in the graph:

  • Functions and Methods — individual callable units, including their signatures and receiver types
  • Structs and Classes — data structures and their fields
  • Interfaces and Traits — behavioral contracts
  • Files — source files as container nodes
  • Packages and Modules — groupings of related code

Each node stores metadata: its name, file location, line range, language, visibility (exported or internal), and an optional summary.
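
As a rough sketch, that metadata can be modeled as a small record type (the field names here are illustrative, not Synapses' actual schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Node:
    """One code entity in the graph (illustrative fields only)."""
    name: str                      # e.g. "ParseFile"
    kind: str                      # "function", "struct", "interface", "file", "package"
    file: str                      # source file path
    line_start: int                # first line of the entity
    line_end: int                  # last line of the entity
    language: str                  # e.g. "go"
    exported: bool                 # visibility: exported (True) or internal (False)
    summary: Optional[str] = None  # optional one-line summary

n = Node("ParseFile", "function", "parser/parse.go", 10, 42, "go", True)
```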

Edges

Relationships between nodes are represented as typed, directed edges:

Edge Type    Meaning
CALLS        Function A calls function B
IMPLEMENTS   Type A satisfies interface B
IMPORTS      Package A imports package B
EMBEDS       Struct A embeds struct B
CONTAINS     File or package contains a child entity
DATA_FLOWS   Data passes from entity A to entity B
RETURNS      Function returns a specific type
RECEIVES     Method has a specific receiver type

These edges are extracted from AST analysis, not from text pattern matching. This means the graph captures actual structural relationships, not guesses based on string similarity.
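
One minimal way to picture these edges is as typed (source, type, target) triples; the storage format below is an assumption for illustration, not Synapses' internal representation:

```python
# The eight edge types from the table above.
EDGE_TYPES = {
    "CALLS", "IMPLEMENTS", "IMPORTS", "EMBEDS",
    "CONTAINS", "DATA_FLOWS", "RETURNS", "RECEIVES",
}

# Typed, directed edges as (source, edge_type, target) triples.
edges = [
    ("main.Run", "CALLS", "parser.ParseFile"),
    ("parser.goParser", "IMPLEMENTS", "parser.Parser"),
    ("main", "IMPORTS", "parser"),
]

def neighbors(edges, node):
    """Outgoing neighbors of a node, paired with the edge type."""
    return [(etype, dst) for src, etype, dst in edges if src == node]
```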

49+ Language Parsers

Synapses ships with AST-based parsers for 49+ languages, organized into tiers by depth of analysis:

  • Tier 0 — Full structural extraction (Go, TypeScript, Python, Rust, Java, C#, and more)
  • Tier 1 — Functions, types, and basic relationships
  • Tier 2 — File-level entities and import tracking

Even for languages in lower tiers, Synapses extracts enough structure to provide useful context. The graph is always more useful than raw text search.
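
A hypothetical lookup sketches how a tier assignment might gate analysis depth. Only the Tier 0 examples are named above; defaulting unknown languages to Tier 2 is an assumption of this sketch:

```python
# Tier 0 languages named in the docs; the full per-language assignment
# is not listed, so this sketch defaults everything else to Tier 2.
TIER0 = {"go", "typescript", "python", "rust", "java", "csharp"}

DEPTH = {
    0: "full structural extraction",
    1: "functions, types, and basic relationships",
    2: "file-level entities and import tracking",
}

def parser_tier(language):
    return 0 if language in TIER0 else 2
```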

Context Carving

Context carving is how Synapses answers the question: “Given this entity, what does the agent need to know?”

The algorithm works by performing a breadth-first search (BFS) starting from a root node — the entity the agent is asking about. As the search walks outward through the graph, it collects related nodes and edges.

Two parameters control the carving:

  • Decay factor — Each hop away from the root reduces the relevance score. Close neighbors get high scores; distant relatives get lower ones. This ensures the most structurally relevant context is prioritized.
  • Token budget — The total amount of context (measured in tokens) that will be returned. Synapses fills the budget by including nodes in order of relevance score, stopping when the budget is exhausted.

The result is a focused subgraph — an “ego-graph” centered on the entity you care about, containing exactly enough context for the agent to work effectively without overwhelming its context window.
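
The carve described above can be sketched as a plain BFS with a per-hop decay and a greedy budget fill. Parameter names, default values, and the per-node token cost are assumptions of this sketch:

```python
from collections import deque

def carve(graph, root, decay=0.5, token_budget=200, node_tokens=None):
    """Score each reachable node by decay ** hops from the root, then
    fill the token budget in descending-score order. Illustrative only."""
    scores = {root: 1.0}
    queue = deque([root])
    while queue:                       # breadth-first walk outward from the root
        node = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr not in scores:      # first visit uses the shortest hop count
                scores[nbr] = scores[node] * decay
                queue.append(nbr)
    ego, used = [], 0
    for node in sorted(scores, key=scores.get, reverse=True):
        cost = (node_tokens or {}).get(node, 50)   # assume ~50 tokens per node
        if used + cost > token_budget:
            break
        ego.append(node)
        used += cost
    return ego

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"]}
# carve(graph, "A") → ["A", "B", "C", "D"]: root, 1-hop, then 2-hop until the budget fills.
```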

Intents

Not all tasks need the same kind of context. When you’re debugging, you care about callers and data flow. When you’re reviewing, you care about interfaces and contracts. Synapses models this through intents.

Each intent adjusts the edge weights used during context carving:

Intent       Prioritizes
modify       Direct callers and callees, sibling methods, field access patterns
debug        Call chains, data flow, error propagation paths
review       Interface implementations, public API surface, test coverage
add          Package structure, existing patterns, naming conventions
plan         High-level architecture, package dependencies, module boundaries
understand   Balanced exploration across all edge types

When an agent calls get_context(mode="intent") with an intent, Synapses uses these weights to shape the BFS traversal. The same root node can produce different context slices depending on what the agent is trying to do.
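
A sketch of how intent-specific weights could bias a hop's score during the carve. The numeric values and the fallback weight are invented for illustration; Synapses' real weights are internal:

```python
# Invented weights — not Synapses' actual values.
INTENT_WEIGHTS = {
    "debug":  {"CALLS": 1.0, "DATA_FLOWS": 1.0, "IMPLEMENTS": 0.3},
    "review": {"IMPLEMENTS": 1.0, "CALLS": 0.4, "DATA_FLOWS": 0.3},
}

def edge_weight(intent, edge_type, base=0.2):
    """Weight multiplied into a neighbor's score during the carve;
    edge types the intent doesn't emphasize fall back to a base weight."""
    return INTENT_WEIGHTS.get(intent, {}).get(edge_type, base)
```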

Brain Tiers

Synapses includes an optional 4-tier brain system that uses a local LLM (via Ollama) to enrich the code graph with higher-level understanding.

Tier 0: Reflex

Automatic summarization. When new nodes are indexed, Tier 0 generates concise one-line summaries. These summaries appear in search results and context slices, giving agents quick orientation without reading full source code.

Tier 1: Sensory

Rule explanation. Tier 1 processes architectural rules and annotations, generating human-readable explanations of why a rule exists and what it enforces. This helps agents understand constraints, not just follow them.

Tier 2: Specialist

Deep enrichment. Tier 2 analyzes complex functions and types, producing detailed behavioral descriptions: what a function does, what edge cases it handles, what assumptions it makes. This is particularly valuable for legacy code or dense algorithms.

Tier 3: Architect

Coordination and high-level reasoning. Tier 3 operates at the package and module level, analyzing architectural patterns, identifying potential design issues, and providing strategic context for planning tasks.

Brain tiers are entirely optional. Synapses works fully without them — they add richness but are not required for core functionality. When enabled, enrichment runs asynchronously and does not block indexing or queries.

Sessions

Synapses maintains persistent sessions for each connected agent. A session tracks:

  • Identity — which agent is connected and what project it’s working on
  • Task memory — plans created with tasks(action="create_plan"), individual tasks tracked with tasks(action="pending") and tasks(action="update")
  • Working state — current branch, recent changes, modified files

Sessions survive agent restarts. When an agent calls session_init, Synapses restores the session state, including any pending tasks from previous sessions. This gives agents cross-session continuity — they can pick up complex multi-step work exactly where they left off.
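
Restore-on-init can be pictured with simple file-backed persistence. The JSON layout and file naming are assumptions of this sketch, not Synapses' storage format:

```python
import json
from pathlib import Path

def session_init(store: Path, agent_id: str) -> dict:
    """Return the persisted session for this agent, or create a fresh one."""
    path = store / f"{agent_id}.json"
    if path.exists():                      # state survives agent restarts
        return json.loads(path.read_text())
    fresh = {"agent": agent_id, "pending_tasks": [], "working_state": {}}
    path.write_text(json.dumps(fresh))
    return fresh
```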

Task Memory

Agents can create structured plans and track task completion:

  1. tasks(action="create_plan") — Define a multi-step plan with individual tasks
  2. tasks(action="pending") — Retrieve incomplete tasks from the current or previous sessions
  3. tasks(action="update") — Mark tasks as complete, blocked, or in progress

This task system means agents don’t need to re-derive what they were working on. The context manager remembers.
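
The three actions map naturally onto a tiny in-memory store. This sketch is illustrative, not Synapses' implementation; only the action names and statuses come from the docs above:

```python
class TaskMemory:
    """Minimal sketch of plan and task tracking."""
    STATUSES = {"pending", "in_progress", "blocked", "complete"}

    def __init__(self):
        self.tasks = {}  # task name -> status

    def create_plan(self, names):          # tasks(action="create_plan")
        for name in names:
            self.tasks[name] = "pending"

    def pending(self):                     # tasks(action="pending")
        return [n for n, s in self.tasks.items() if s != "complete"]

    def update(self, name, status):        # tasks(action="update")
        assert status in self.STATUSES
        self.tasks[name] = status

mem = TaskMemory()
mem.create_plan(["add backoff helper", "wire into client", "add tests"])
mem.update("add backoff helper", "complete")
```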

Scale System

Synapses adapts its behavior based on the size of your codebase. The scale is determined by the number of nodes in the code graph:

Scale    Node Count   Characteristics
Micro    < 100        Small scripts or libraries. Agents can often hold the entire codebase in context.
Small    100–499      Typical small projects. Targeted context carving starts providing value.
Medium   500–1,999    Most production applications. Context carving is essential — agents can’t read everything.
Large    2,000+       Large codebases and monorepos. Agents should rely on Synapses tools exclusively and avoid raw file scanning.

The scale classification appears in the session_init response and is used to generate agent instructions. For large projects, Synapses explicitly tells agents to use graph-based tools rather than filesystem scanning, since grep and glob produce too much noise at that scale.
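
The thresholds in the table reduce to a straightforward classification (a sketch; the tier names mirror the table):

```python
def classify_scale(node_count: int) -> str:
    """Map a graph's node count to its scale tier, per the table above."""
    if node_count < 100:
        return "micro"
    if node_count < 500:
        return "small"
    if node_count < 2000:
        return "medium"
    return "large"
```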

Putting It Together

Here’s how these concepts work in a typical interaction:

  1. An agent starts a session and calls session_init. Synapses returns the project identity (including scale), any pending tasks, and the current working state.
  2. The agent needs to modify a function. It calls get_context(mode="intent") with the function name and the modify intent.
  3. Synapses runs a BFS context carve from that function node, using modify-weighted edges. It collects callers, callees, related types, and sibling methods until the token budget is filled.
  4. The result includes node summaries from the brain system (if enabled), giving the agent quick descriptions of each related entity.
  5. The agent makes the change and calls tasks(action="update") to record progress. The next session will pick up where this one left off.

This cycle — session, context, work, record — is the core loop that makes AI agents more effective with Synapses.