Brain Configuration
The Brain is Synapses’ in-process LLM enrichment system. It uses local Ollama models to add semantic understanding to the code graph — auto-summarization, rule explanations, architectural analysis, and multi-agent coordination.
{ "brain": { "enabled": true, "intelligence_mode": "optimal", "ollama_url": "http://localhost:11434", "model": "qwen3.5:2b", "ingest": true, "enrich": true }}Fields
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | false | Activate LLM enrichment |
| intelligence_mode | string | auto | "optimal" (8 GB), "standard" (16 GB), "full" (32 GB+) |
| ollama_url | string | localhost:11434 | Ollama API endpoint |
| model | string | qwen3.5:2b | Primary model for all tiers |
| fast_model | string | qwen3.5:2b | Bulk ingestion model |
| model_ingest | string | — | Override model for Tier 0 (Reflex) |
| model_guardian | string | — | Override model for Tier 1 (Sensory) |
| model_enrich | string | — | Override model for Tier 2 (Specialist) |
| model_orchestrate | string | — | Override model for Tier 3 (Architect) |
| model_archivist | string | — | Override model for archival tasks |
| db_path | string | ~/.synapses/brain.sqlite | Brain database location |
| ingest | bool | — | Auto-summarize entities on file save |
| enrich | bool | — | LLM enrichment on get_context calls |
| context_builder | bool\|null | null | Context packet building; null inherits from enabled |
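For example, the per-tier overrides can be combined to keep a small model for bulk ingestion while routing the higher tiers to a larger one. A sketch using the model names mentioned elsewhere in this page (adjust to whatever you have pulled locally):

```json
{
  "brain": {
    "enabled": true,
    "model": "qwen3.5:2b",
    "fast_model": "qwen3.5:2b",
    "model_enrich": "qwen3.5:4b",
    "model_orchestrate": "qwen3.5:4b"
  }
}
```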
Intelligence Modes
| Mode | RAM Required | Model | Notes |
|---|---|---|---|
| optimal | 8 GB | qwen3.5:2b (~1.5 GB) | Shares identity across tiers. Best for laptops. |
| standard | 16 GB | qwen3.5:4b (~2.7 GB) | Separate guardian identity. Better analysis. |
| full | 32 GB+ | qwen3.5:4b | Models pinned in RAM. Fastest responses. |
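To switch modes, set intelligence_mode and let it pick model sizing and placement. A minimal example for a 16 GB machine:

```json
{
  "brain": {
    "enabled": true,
    "intelligence_mode": "standard"
  }
}
```

You will also need the matching model available locally (for standard mode, something like `ollama pull qwen3.5:4b`).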
Brain Tiers
The brain operates as a 4-tier pipeline:
Tier 0 — Reflex (Ingestor + Pruner)
- Auto-summarizes code entities on file save
- Generates 1-sentence summary + 1-3 domain tags per entity
- ~500ms latency per entity
- Strips boilerplate from web content
Tier 1 — Sensory (Guardian)
- Generates plain-English explanations for architectural rule violations
- Provides actionable fix suggestions
- ~800ms per violation (cached after first hit)
- Circuit breaker falls back if Ollama is slow
Tier 2 — Specialist (Enricher + Context Builder)
- Two-pass design: deterministic (~5ms) + optional LLM (~2-3s)
- Deterministic pass: SDLC phase detection, complexity scoring
- LLM pass: 2-sentence architectural insight + concerns
- Builds structured Context Packets with 7 sections
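Packet building maps to the context_builder field in the table above. For example, to keep LLM enrichment on while skipping packet building, a configuration like this should work:

```json
{
  "brain": {
    "enabled": true,
    "enrich": true,
    "context_builder": false
  }
}
```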
Tier 3 — Architect (Orchestrator)
- Multi-agent conflict resolution
- Suggests non-conflicting work scope for parallel agents
- ~1-2s per coordination request
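The latencies quoted above assume a responsive local Ollama endpoint. To sanity-check yours, you can time a single generation request directly against the API. A minimal sketch assuming the default endpoint and model from the configuration above:

```bash
# Time one generation request against the local Ollama API
time curl -s http://localhost:11434/api/generate \
  -d '{"model": "qwen3.5:2b", "prompt": "Summarize this function in one sentence.", "stream": false}'
```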
Prerequisites
Brain requires Ollama running locally:
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull the default model
ollama pull qwen3.5:2b

# Verify
ollama list
```
Disabling Brain
If you don’t want LLM enrichment (e.g., in an air-gapped environment):
{ "brain": { "enabled": false }}All brain features degrade gracefully — deterministic analysis still works, LLM insights are simply omitted.