# Embeddings Configuration
Synapses uses vector embeddings for semantic search (`search(mode="semantic")`), memory recall, and hybrid context carving.
```json
{
  "embeddings": "builtin",
  "embedding_model": "nomic-embed-text",
  "embed_pool_size": 3
}
```

## Fields
| Field | Type | Default | Description |
|---|---|---|---|
| `embeddings` | string | `"builtin"` | Mode: `"builtin"`, `"ollama"`, or `"off"` |
| `embedding_model` | string | `"nomic-embed-text"` | Ollama model name (only used with `"ollama"` mode) |
| `embedding_endpoint` | string | `""` | OpenAI-compatible HTTP endpoint (alternative to Ollama) |
| `embed_pool_size` | int | `3` | ONNX inference workers (1-8, only for `"builtin"` mode) |
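If you point `embedding_endpoint` at an OpenAI-compatible server, the request Synapses would need to send looks roughly like this sketch. The endpoint URL is a hypothetical local server, and the exact payload Synapses emits is an assumption based on the standard OpenAI embeddings request shape (`model` plus `input`):

```python
import json
import urllib.request

def build_embedding_request(endpoint: str, model: str, texts: list[str]) -> urllib.request.Request:
    """Build a POST against an OpenAI-compatible /v1/embeddings endpoint."""
    payload = {"model": model, "input": texts}
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_embedding_request(
    "http://localhost:8080/v1/embeddings",  # hypothetical local server
    "nomic-embed-text",
    ["hello world"],
)
print(req.get_full_url())
# Sending with urllib.request.urlopen(req) should yield a JSON body
# of the form {"data": [{"embedding": [...]}, ...]} on a conforming server.
```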
## Modes
### builtin (default)
Uses the bundled nomic-embed-text ONNX model (~137 MB). No external dependencies; inference runs in-process with a configurable worker pool.
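The in-process worker pool can be pictured as a fixed set of workers fanning a batch of texts out in parallel. A minimal sketch using a thread pool; `embed_one` is a stand-in for real ONNX inference, not Synapses' actual code:

```python
from concurrent.futures import ThreadPoolExecutor

EMBED_POOL_SIZE = 3  # mirrors the embed_pool_size setting

def embed_one(text: str) -> list[float]:
    # Stand-in for ONNX inference; a real worker would run the
    # nomic-embed-text session here and return a dense vector.
    return [float(len(text)), float(sum(map(ord, text)) % 97)]

def embed_batch(texts: list[str]) -> list[list[float]]:
    # Fan texts out across the pool; results come back in input order.
    with ThreadPoolExecutor(max_workers=EMBED_POOL_SIZE) as pool:
        return list(pool.map(embed_one, texts))

vectors = embed_batch(["alpha", "beta", "gamma"])
print(len(vectors))  # 3
```

A larger `embed_pool_size` raises throughput on batch embedding at the cost of CPU and memory.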
```json
{ "embeddings": "builtin", "embed_pool_size": 3 }
```

### ollama
Uses Ollama for embedding generation. Requires Ollama running locally with the embedding model pulled.
```json
{ "embeddings": "ollama", "embedding_model": "nomic-embed-text" }
```

### off
Disables vector embeddings entirely. Semantic search falls back to FTS5 keyword search.
```json
{ "embeddings": "off" }
```

Use this for air-gapped environments or when you want minimal resource usage.
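The FTS5 fallback can be illustrated with SQLite directly. This assumes your SQLite build has the FTS5 extension compiled in (typical for modern builds); the table and column names are illustrative, not Synapses' actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Illustrative schema; Synapses' real table layout may differ.
conn.execute("CREATE VIRTUAL TABLE notes USING fts5(body)")
conn.executemany(
    "INSERT INTO notes(body) VALUES (?)",
    [
        ("vector embeddings power semantic search",),
        ("FTS5 does keyword matching",),
    ],
)
# MATCH does token-level keyword search; rank orders by BM25 relevance.
rows = conn.execute(
    "SELECT body FROM notes WHERE notes MATCH ? ORDER BY rank",
    ("keyword",),
).fetchall()
print(rows[0][0])  # FTS5 does keyword matching
```

Keyword search only matches literal tokens, so queries phrased differently from the stored text will miss results that semantic search would find.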
## What Uses Embeddings
- `search(mode="semantic")` — HyDE-enhanced vector search
- `memory(action="search", query=...)` — Semantic memory retrieval
- Context carving with `HybridLambda > 0` — Blends structural and semantic similarity
- Node embedding index (HNSW) for fast approximate nearest-neighbor lookup