SynapsesOS
Reference

Brain Configuration

The Brain is Synapses’ in-process LLM enrichment system. It uses local Ollama models to add semantic understanding to the code graph — auto-summarization, rule explanations, architectural analysis, and multi-agent coordination.

```json
{
  "brain": {
    "enabled": true,
    "intelligence_mode": "optimal",
    "ollama_url": "http://localhost:11434",
    "model": "qwen3.5:2b",
    "ingest": true,
    "enrich": true
  }
}
```

Fields

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Activate LLM enrichment |
| `intelligence_mode` | string | `auto` | `"optimal"` (8 GB), `"standard"` (16 GB), `"full"` (32 GB+) |
| `ollama_url` | string | `localhost:11434` | Ollama API endpoint |
| `model` | string | `qwen3.5:2b` | Primary model for all tiers |
| `fast_model` | string | `qwen3.5:2b` | Bulk ingestion model |
| `model_ingest` | string | | Override model for Tier 0 (Reflex) |
| `model_guardian` | string | | Override model for Tier 1 (Sensory) |
| `model_enrich` | string | | Override model for Tier 2 (Specialist) |
| `model_orchestrate` | string | | Override model for Tier 3 (Architect) |
| `model_archivist` | string | | Override model for archival tasks |
| `db_path` | string | `~/.synapses/brain.sqlite` | Brain database location |
| `ingest` | bool | | Auto-summarize entities on file save |
| `enrich` | bool | | LLM enrichment on `get_context` calls |
| `context_builder` | bool\|null | `null` | Context packet building; `null` inherits from `enabled` |
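For example, a configuration that keeps the small primary model for bulk work but routes Tier 3 coordination to a larger model could combine the override fields like this (the specific model names are illustrative):

```json
{
  "brain": {
    "enabled": true,
    "model": "qwen3.5:2b",
    "fast_model": "qwen3.5:2b",
    "model_orchestrate": "qwen3.5:4b"
  }
}
```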

Intelligence Modes

| Mode | RAM Required | Model | Notes |
| --- | --- | --- | --- |
| `optimal` | 8 GB | qwen3.5:2b (~1.5 GB) | Shares identity across tiers. Best for laptops. |
| `standard` | 16 GB | qwen3.5:4b (~2.7 GB) | Separate guardian identity. Better analysis. |
| `full` | 32 GB+ | qwen3.5:4b | Models pinned in RAM. Fastest responses. |
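To opt into a specific mode rather than the `auto` default, set `intelligence_mode` explicitly:

```json
{
  "brain": {
    "enabled": true,
    "intelligence_mode": "standard"
  }
}
```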

Brain Tiers

The brain operates as a 4-tier pipeline:

Tier 0 — Reflex (Ingestor + Pruner)

  • Auto-summarizes code entities on file save
  • Generates 1-sentence summary + 1-3 domain tags per entity
  • ~500ms latency per entity
  • Strips boilerplate from web content
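As an illustration, the per-entity output described above (one-sentence summary plus 1-3 domain tags) could take a shape like the following; the field names and values here are hypothetical, not Synapses' actual schema:

```json
{
  "entity": "auth/session.ts#refreshToken",
  "summary": "Rotates the session token before expiry and persists the new token.",
  "tags": ["auth", "sessions"]
}
```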

Tier 1 — Sensory (Guardian)

  • Generates plain-English explanations for architectural rule violations
  • Provides actionable fix suggestions
  • ~800ms per violation (cached after first hit)
  • Circuit breaker falls back if Ollama is slow
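The circuit-breaker behavior can be sketched generically: after a few consecutive slow calls, the brain stops calling Ollama for a while and serves the deterministic fallback instead. A minimal sketch of the pattern, not Synapses' actual implementation (the threshold, cooldown, and fallback text are assumptions):

```python
class CircuitBreaker:
    """Skip the LLM after repeated timeouts; retry after a cooldown."""

    def __init__(self, max_failures=3, cooldown_s=30.0):
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker opened

    def call(self, llm_fn, fallback, now):
        # While open, serve the fallback until the cooldown elapses.
        if self.opened_at is not None:
            if now - self.opened_at < self.cooldown_s:
                return fallback
            self.opened_at = None  # half-open: try the LLM again
            self.failures = 0
        try:
            return llm_fn()
        except TimeoutError:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = now  # open the breaker
            return fallback


def slow_ollama():
    raise TimeoutError("Ollama did not respond in time")


breaker = CircuitBreaker(max_failures=2)
results = [breaker.call(slow_ollama, "rule violated (no LLM explanation)", now=t)
           for t in (0.0, 1.0, 2.0)]
# After two timeouts the breaker opens; every call falls back.
```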

Tier 2 — Specialist (Enricher + Context Builder)

  • Two-pass design: deterministic (~5ms) + optional LLM (~2-3s)
  • Deterministic pass: SDLC phase detection, complexity scoring
  • LLM pass: 2-sentence architectural insight + concerns
  • Builds structured Context Packets with 7 sections
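The two-pass split can be sketched as a cheap deterministic pass that always runs, with the LLM pass layered on top only when available. The heuristics below are illustrative, not Synapses' actual scoring:

```python
def deterministic_pass(source: str) -> dict:
    """Fast, always-on analysis: rough complexity from branching keywords."""
    branches = sum(source.count(kw) for kw in ("if ", "for ", "while ", "case "))
    return {"complexity": 1 + branches, "loc": len(source.splitlines())}

def enrich(source: str, llm=None) -> dict:
    """Deterministic pass first; the optional LLM pass adds insight on top."""
    packet = deterministic_pass(source)
    if llm is not None:  # skipped when the brain is disabled or unreachable
        packet["insight"] = llm(source)
    return packet

code = "def f(x):\n    if x > 0:\n        return x\n    return -x\n"
print(enrich(code))  # deterministic fields only
print(enrich(code, llm=lambda s: "Simple absolute-value helper."))
```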

Tier 3 — Architect (Orchestrator)

  • Multi-agent conflict resolution
  • Suggests non-conflicting work scope for parallel agents
  • ~1-2s per coordination request
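At its core, suggesting non-conflicting scope means keeping agents' write sets disjoint. A toy sketch of the idea (the file paths and function are hypothetical, not Synapses' API):

```python
def suggest_scope(requested: set[str], claimed: set[str]) -> dict:
    """Split a requested file set into a safe scope and the conflicting rest."""
    conflicts = requested & claimed
    return {"granted": sorted(requested - claimed), "conflicts": sorted(conflicts)}

agent_a_claims = {"src/auth.ts", "src/session.ts"}   # files agent A is editing
agent_b_request = {"src/session.ts", "src/api.ts"}   # files agent B wants
print(suggest_scope(agent_b_request, agent_a_claims))
# {'granted': ['src/api.ts'], 'conflicts': ['src/session.ts']}
```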

Prerequisites

Brain requires Ollama running locally:

```sh
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Pull the default model
ollama pull qwen3.5:2b
# Verify
ollama list
```
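Beyond `ollama list`, you can confirm that the HTTP endpoint Synapses will use is reachable; `/api/tags` is Ollama's model-listing endpoint. A small check (the timeout value is an arbitrary choice):

```python
import json
import urllib.request
import urllib.error

def ollama_reachable(url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if the Ollama API answers on /api/tags with valid JSON."""
    try:
        with urllib.request.urlopen(f"{url}/api/tags", timeout=timeout) as resp:
            json.load(resp)  # response body should be {"models": [...]}
            return True
    except (urllib.error.URLError, OSError, ValueError):
        return False

print(ollama_reachable())  # False until Ollama is running
```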

Disabling Brain

If you don’t want LLM enrichment (e.g., air-gapped environment):

```json
{
  "brain": {
    "enabled": false
  }
}
```

All brain features degrade gracefully — deterministic analysis still works, LLM insights are simply omitted.