SynapsesOS
Reference

Scale System

Overview

Synapses classifies projects by scale based on the number of semantic nodes (functions, methods, structs, interfaces — not files). Scale affects tool guidance, default behaviors, and performance optimizations.

Scale Thresholds

Scale     Node Count     Examples
micro     < 100          Small scripts, single-package tools
small     100 – 499      Typical CLI tools, small web apps
medium    500 – 1,999    Medium web services, libraries
large     2,000+         Large monoliths, frameworks, monorepos
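The thresholds above can be sketched as a simple classifier. This is an illustrative sketch only; `classify_scale` is a hypothetical name, not part of the Synapses API:

```python
def classify_scale(node_count: int) -> str:
    """Map a semantic node count to a Synapses scale tier.

    Node counts follow the thresholds in the table above:
    functions, methods, structs, and interfaces -- not files.
    """
    if node_count < 100:
        return "micro"
    if node_count < 500:
        return "small"
    if node_count < 2000:
        return "medium"
    return "large"
```

For example, `classify_scale(3997)` returns `"large"`, matching the session_init example later in this document.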

Behavior Per Scale

Micro (< 100 nodes)

  • Direct file reading (Read/Grep) is often faster than Synapses tools
  • Minimal tool guidance in session_init
  • Context carving is fast — default settings work well

Small (100–499 nodes)

  • Synapses tools preferred for exploration
  • Moderate tool guidance
  • Default context carve settings are optimal

Medium (500–1,999 nodes)

  • Strongly prefer Synapses tools over direct file scanning
  • Context budgets become important — tune token_budget
  • Impact analysis (get_impact) becomes very useful
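As a rough illustration of what impact analysis computes, here is a minimal reverse-dependency traversal over a call graph. The graph shape and the `impact_set` name are assumptions for this sketch, not the actual get_impact implementation:

```python
from collections import deque

def impact_set(callers: dict[str, set[str]], changed: str) -> set[str]:
    """BFS over reverse call edges: every node that (transitively)
    calls `changed` is potentially affected by a change to it."""
    affected: set[str] = set()
    queue = deque([changed])
    while queue:
        node = queue.popleft()
        for caller in callers.get(node, set()):
            if caller not in affected:
                affected.add(caller)
                queue.append(caller)
    return affected

# callers[x] = the set of functions that call x (hypothetical data)
callers = {
    "parse": {"load_config", "reindex"},
    "load_config": {"main"},
    "reindex": {"main", "watcher"},
}
# impact_set(callers, "parse") -> {"load_config", "reindex", "main", "watcher"}
```

At medium scale and above, this transitive set grows quickly, which is why computing it from the graph beats grepping for call sites.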

Large (2,000+ nodes)

  • Always use Synapses tools — direct scanning produces too much noise
  • Use get_context(mode="intent") with a task intent description for best results
  • Consider lowering decay_factor for tighter context focus
  • FlatGraph (SoA layout) provides performance benefits
  • PPR (Personalized PageRank) may be used for more accurate relevance
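To see why lowering decay_factor tightens context focus, consider a sketch in which each hop away from the seed node multiplies relevance by decay_factor. The exact weighting Synapses uses is not specified here; simple geometric decay is an illustrative assumption:

```python
def hop_weight(decay_factor: float, depth: int) -> float:
    """Relevance contribution of a node `depth` hops from the seed,
    assuming simple geometric decay per hop (illustrative model only)."""
    return decay_factor ** depth

# At depth 3, decay 0.8 keeps ~51% of the seed's weight,
# while decay 0.5 keeps only 12.5% -- a much tighter context.
```

Distant nodes fall below any inclusion threshold sooner with a lower decay_factor, so the carved context stays close to the seed.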

Scale in session_init

When you call session_init, the response includes:

{
  "identity": {
    "scale": "large",
    "file_count": 530,
    "package_count": 240,
    "function_count": 3997,
    "edge_count": 16743,
    "tool_guidance": "For large projects (500+ functions), use get_context with task_id..."
  }
}

The tool_guidance field provides scale-appropriate recommendations that AI agents can follow.
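A minimal sketch of how an agent might consume that response. The JSON shape is taken from the example above (abbreviated); how the response is transported is omitted:

```python
import json

# Abbreviated session_init response, as shown above.
response = json.loads("""
{
  "identity": {
    "scale": "large",
    "function_count": 3997,
    "tool_guidance": "For large projects (500+ functions), use get_context with task_id..."
  }
}
""")

identity = response["identity"]
# Agents can branch on scale, or simply surface the guidance string.
prefer_synapses_tools = identity["scale"] in ("medium", "large")
guidance = identity["tool_guidance"]
```

Here `prefer_synapses_tools` is True, matching the "large" behavior described above.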

Automatic Optimizations

Synapses applies these optimizations automatically based on scale:

  • Graph index rebuilding: Deferred for large graphs (async rebuild after parse)
  • FlatGraph cache: Enabled when use_flat_graph: true — SoA layout for cache-friendly BFS
  • Watcher backlog: If > 500 files pending, triggers full re-index instead of incremental
  • Content hashing: SHA-256 content hash to detect real changes (avoids re-parsing on touch-only updates)
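The content-hashing optimization can be sketched with Python's standard hashlib; the per-path cache structure here is an assumption, not the actual Synapses cache:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """SHA-256 of file contents, used to detect real changes."""
    return hashlib.sha256(data).hexdigest()

def needs_reparse(path: str, data: bytes, cache: dict[str, str]) -> bool:
    """Return True only if the file's content actually changed.

    A touch-only update (mtime changed, bytes identical) hashes to
    the same value, so re-parsing is skipped.
    """
    h = content_hash(data)
    if cache.get(path) == h:
        return False
    cache[path] = h
    return True
```

A touch-only save thus costs one hash computation instead of a full re-parse, which matters most at large scale where parsing dominates.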