Synapse Storage Design

Synapse is the CMN federated indexer and resilience layer, responsible for aggregating, verifying, and storing spore and mycelium data from sovereign domains.

1. Storage Backends

Synapse supports two storage backends, each gated behind a Cargo feature flag:

| Backend    | Use Case    | Features                          | Cargo Feature |
|------------|-------------|-----------------------------------|---------------|
| PostgreSQL | Production  | ACID, GIN indexes, recursive CTEs | `postgres`    |
| Redb       | Development | Embedded, zero-config, file-based | `redb`        |

Both features are enabled by default. To compile only the backend you need:

# Redb only (faster compile, no libpq dependency)
cargo build -p synapse --no-default-features --features redb

# PostgreSQL only
cargo build -p synapse --no-default-features --features postgres

The backend is selected at runtime via config.yml. If a backend is requested but was not compiled in, Synapse exits with a clear error message.
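A hypothetical config.yml fragment for backend selection (the storage.backend key and the nesting shown here are assumptions for illustration; search.index_path, rebuild_on_startup, and nostr.subscribe are settings referenced later in this document):

```yaml
# Illustrative config.yml sketch -- key names and nesting are assumptions.
storage:
  backend: redb                    # assumed key; "postgres" or "redb"
search:
  index_path: ./data/search.idx    # VectorDB persistence path (section 5)
  rebuild_on_startup: true         # rebuild the evolution graph from storage
nostr:
  subscribe: true                  # run the Nostr subscription listener
```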

2. PostgreSQL Schema

-- Spores table: store spore content
-- Primary key: (domain, hash) - a spore belongs to a specific domain
CREATE TABLE spores (
    domain TEXT NOT NULL,
    hash TEXT NOT NULL,
    data JSONB NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    PRIMARY KEY (domain, hash)
);
CREATE INDEX idx_spores_hash ON spores(hash);
CREATE INDEX idx_spores_data ON spores USING GIN (data);

-- Mycelia table: store mycelium content and sync status
CREATE TABLE mycelia (
    domain TEXT PRIMARY KEY,
    data JSONB,
    public_key TEXT,
    status TEXT NOT NULL DEFAULT 'pending',
    retry_count INT NOT NULL DEFAULT 0,
    last_error TEXT,
    last_attempt_at TIMESTAMPTZ,
    next_retry_at TIMESTAMPTZ,
    synced_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX idx_mycelia_retry ON mycelia(status, next_retry_at);
CREATE INDEX idx_mycelia_data ON mycelia USING GIN (data);

-- Spore bonds table: lineage tracking (with relation types)
CREATE TABLE spore_bonds (
    spore_hash TEXT NOT NULL,
    bond_domain TEXT NOT NULL,
    bond_hash TEXT NOT NULL,
    bond_type TEXT NOT NULL,
    PRIMARY KEY (spore_hash, bond_domain, bond_hash)
);
CREATE INDEX idx_spore_bonds_hash ON spore_bonds(bond_hash);

-- Settings table: key-value store (Nostr keys, etc.)
CREATE TABLE settings (
    key TEXT PRIMARY KEY,
    value TEXT NOT NULL,
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- TLS certificates table
CREATE TABLE tls_certs (
    domains TEXT PRIMARY KEY,
    cert_pem TEXT NOT NULL,
    key_pem TEXT NOT NULL,
    expires_at TIMESTAMPTZ,
    self_signed BOOLEAN
);
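For example, a containment query that the GIN index on data can serve (the JSON path is assumed from the spore layout in section 4; treat this as a sketch):

```sql
-- Sketch: find spores whose capsule core declares a given license.
-- The @> containment operator is accelerated by idx_spores_data (GIN).
SELECT domain, hash
FROM spores
WHERE data @> '{"capsule": {"core": {"license": "MIT"}}}';
```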

3. Redb Table Structure

SPORES:      "domain:hash" (str) -> data (bytes)
HASH_INDEX:  hash (str) -> ["domain1:hash", "domain2:hash", ...] (vec<str>)
MYCELIA:     domain (str) -> record_json (bytes)
DOMAINS:     domain (str) -> [hash1, hash2, ...] (vec<str>)
INBOUND:     hash (str) -> [child_hash1, child_hash2, ...] (vec<str>)
SPORE_BONDS: hash (str) -> bonds_json (bytes)   # [{uri, relation}]
SETTINGS:    key (str) -> value (str)            # Nostr keys, etc.
TLS_CERTS:   domains (str) -> cert_json (bytes)

Key Design:

- Spore keys are composite "domain:hash" strings, so the same content hash can be stored independently per domain (mirroring the PostgreSQL (domain, hash) primary key).
- HASH_INDEX maps a bare hash to every "domain:hash" key that holds it, enabling lookup by hash alone.
- DOMAINS lists the spore hashes published under each domain.
- INBOUND inverts bond edges (parent hash -> child hashes) so lineage queries need not scan every spore.
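The composite-key scheme can be modeled with plain dictionaries (a toy sketch, not the redb API):

```python
# Toy model of the Redb key scheme (plain dicts, not the redb API).
SPORES = {}        # "domain:hash" -> data
HASH_INDEX = {}    # hash -> ["domain:hash", ...]

def save_spore(domain: str, hash_: str, data: bytes) -> None:
    key = f"{domain}:{hash_}"
    SPORES[key] = data
    keys = HASH_INDEX.setdefault(hash_, [])
    if key not in keys:
        keys.append(key)

def get_by_hash(hash_: str) -> list:
    # Fan out through HASH_INDEX: the same hash may exist under many domains.
    return [SPORES[k] for k in HASH_INDEX.get(hash_, [])]

save_spore("a.org", "b3h", b"one")
save_spore("b.org", "b3h", b"two")
```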

4. Data Model

Spore

┌───────────────────────────────────────────┐
│              Spore (Capsule)              │
├───────────────────────────────────────────┤
│ $schema: ".../spore.json"                 │
│ capsule:                                  │
│   ├─ uri: "cmn://domain/b3.hash"         │
│   ├─ core:                               │
│   │   ├─ domain: String                  │
│   │   ├─ name: String                    │
│   │   ├─ synopsis: String                │
│   │   ├─ intent: [String]               │
│   │   ├─ mutations: [String]              │
│   │   ├─ license: String                │
│   │   └─ bonds: [Bond]                  │
│   ├─ core_signature: "ed25519...."       │
│   └─ dist: {platform: url}              │
│ capsule_signature: "ed25519...."          │
└───────────────────────────────────────────┘

Mycelium

┌───────────────────────────────────────────┐
│            Mycelium (Capsule)             │
├───────────────────────────────────────────┤
│ $schema: ".../mycelium.json"              │
│ capsule:                                  │
│   ├─ uri: "cmn://domain"                 │
│   ├─ core:                               │
│   │   ├─ domain: String                  │
│   │   ├─ name: String                    │
│   │   ├─ synopsis: String                │
│   │   ├─ updated_at_epoch_ms: u64        │
│   │   └─ spores: [{id, hash, name, …}]  │
│   └─ core_signature: "ed25519...."       │
│ capsule_signature: "ed25519...."          │
└───────────────────────────────────────────┘

Bond Graph

Spores form a Directed Acyclic Graph (DAG) through bonds:

     ┌──────────┐
     │ spore-A  │
     └────┬─────┘
          │ fork
    ┌─────┴─────┐
    ▼           ▼
┌──────┐    ┌──────┐
│ B    │    │ C    │
└──┬───┘    └──┬───┘
   │ extends   │ fork
   ▼           ▼
┌──────┐    ┌──────┐
│ D    │    │ E    │
└──────┘    └──────┘

5. Search & Graph Layer

When the ruvector feature is enabled, Synapse adds two optional in-memory layers on top of storage:

Vector Search (VectorDB)

Spore metadata (name, description, domain, license, keywords) is embedded via an external API (Ollama or OpenAI) and indexed in a ruvector-core VectorDB with cosine distance. The VectorDB supports metadata filtering (domain, license) applied as a post-filter on search results.

save_spore() ──→ Storage (Redb/Postgres)   # source of truth
             ──→ VectorDB                   # search index
                   embed(text) → vector
                   insert(hash, vector, metadata)

The VectorDB index is persisted to disk at search.index_path.
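A toy illustration of the flow (hand-rolled cosine distance over a list-backed index; this is not the ruvector-core VectorDB API):

```python
# Toy vector index with cosine distance and metadata post-filtering.
# Illustrates the flow only; not the ruvector-core VectorDB API.
import math

INDEX = []  # list of (hash, vector, metadata)

def insert(hash_, vector, metadata):
    INDEX.append((hash_, vector, metadata))

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def search(query, top_k=10, domain=None):
    hits = sorted(INDEX, key=lambda e: cosine_distance(query, e[1]))
    # Metadata filtering is applied as a post-filter on the ranked results.
    if domain is not None:
        hits = [e for e in hits if e[2].get("domain") == domain]
    return [h for h, _, _ in hits[:top_k]]

insert("h1", [1.0, 0.0], {"domain": "a.org"})
insert("h2", [0.9, 0.1], {"domain": "b.org"})
```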

Evolution Graph (HypergraphIndex)

Spore relationships are modeled as temporal hyperedges in a ruvector-core HypergraphIndex:

| Relation      | Hyperedge             | Metadata                |
|---------------|-----------------------|-------------------------|
| spawned_from  | [child, parent]       | relation, child, parent |
| absorbed_from | [child, parent1, ...] | relation, child         |
| depends_on    | [child, dep]          | relation                |
| inspired_by   | [child, source]       | relation                |

The graph is purely in-memory. On startup (when rebuild_on_startup: true), it iterates all domains and spores from storage to rebuild the full graph.

Directed traversal: HypergraphIndex’s k_hop_neighbors() is undirected. To support directed lineage (descendants vs ancestors), SporeGraph maintains separate adjacency maps:

forward_edges: HashMap<parent_hash, HashSet<child_hash>>   # for descendants
reverse_edges: HashMap<child_hash, Vec<(parent_uri, relation)>>  # for ancestors

BFS on these maps provides O(V+E) directed traversal with zero I/O.
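A Python stand-in for that traversal (toy data modeled on the DAG in section 4; the names mirror the maps above, not the actual Rust types):

```python
# Sketch of directed lineage traversal over forward/reverse adjacency maps.
# Toy data only; field names mirror SporeGraph's maps, not its Rust types.
from collections import deque

forward_edges = {"A": {"B", "C"}, "B": {"D"}, "C": {"E"}}
reverse_edges = {"B": [("cmn://x/A", "spawned_from")],
                 "C": [("cmn://x/A", "spawned_from")],
                 "D": [("cmn://x/B", "depends_on")],
                 "E": [("cmn://x/C", "spawned_from")]}

def descendants(root, max_depth):
    # BFS over forward_edges: O(V + E), no storage I/O.
    seen, out = {root}, []
    queue = deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for child in sorted(forward_edges.get(node, ())):
            if child not in seen:
                seen.add(child)
                out.append(child)
                queue.append((child, depth + 1))
    return out

def ancestors(node):
    # reverse_edges stores (parent_uri, relation) pairs.
    return [uri for uri, _rel in reverse_edges.get(node, [])]
```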

HypergraphIndex features used:

6. Algorithms

Pulse Verification Flow

1. Validate $schema → determine manifest type (spore/mycelium/taste)
2. Extract domain → from capsule.core.domain
3. Fetch cmn.json → public key
4. Verify core_signature → Ed25519 over JCS-canonical capsule.core
5. Verify capsule_signature → Ed25519 over JCS-canonical capsule
6. Store to database

Dual-Layer Signature Verification

fn verify_pulse_signatures(manifest, public_key):
    // 1. Parse algorithm-prefixed public key (e.g. "ed25519.5XmkQ9vZP8nL3x...")
    (algorithm, b58) = public_key.split_once('.')
    assert algorithm == "ed25519"
    pk_bytes = base58_decode(b58)
    verifying_key = Ed25519::from_bytes(pk_bytes)

    // 2. Verify core_signature over JCS-canonical capsule.core
    core = manifest.capsule.core
    core_canonical = jcs_serialize(core)
    core_sig = base58_decode(manifest.capsule.core_signature.strip_prefix("ed25519."))
    verifying_key.verify(core_canonical.as_bytes(), Ed25519Signature::from(core_sig))

    // 3. Verify capsule_signature over JCS-canonical capsule
    capsule = manifest.capsule
    capsule_canonical = jcs_serialize(capsule)
    capsule_sig = base58_decode(manifest.capsule_signature.strip_prefix("ed25519."))
    verifying_key.verify(capsule_canonical.as_bytes(), Ed25519Signature::from(capsule_sig))
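Both signing inputs above are JCS-canonical bytes (RFC 8785). As a rough sketch, Python's json.dumps with sorted keys and compact separators approximates JCS for simple payloads (a real implementation must also follow JCS number and string serialization rules); the helper names here are illustrative:

```python
# Sketch of signing-input construction. json.dumps with sorted keys and
# compact separators approximates RFC 8785 (JCS) for simple payloads only;
# helper names are illustrative, not Synapse's API.
import json

def jcs_serialize(value) -> bytes:
    return json.dumps(value, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

def split_prefixed(s: str):
    # "ed25519.<base58>" -> ("ed25519", "<base58>")
    algorithm, _, b58 = s.partition(".")
    assert algorithm == "ed25519"
    return algorithm, b58

core = {"name": "demo", "domain": "example.org"}
```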

Lineage Tracking (PostgreSQL)

WITH RECURSIVE lineage AS (
    -- Base: find direct children
    SELECT spore_hash, 1 as depth
    FROM spore_bonds
    WHERE bond_hash = $1

    UNION

    -- Recursive step: find descendants at increasing depth
    SELECT r.spore_hash, l.depth + 1
    FROM spore_bonds r
    INNER JOIN lineage l ON r.bond_hash = l.spore_hash
    WHERE l.depth < $2  -- max_depth limit
)
SELECT DISTINCT spore_hash FROM lineage;

Lineage Tracking (Redb BFS)

fn get_lineage(hash, max_depth):
    result = []
    queue = [(hash, 0)]      # FIFO queue -> breadth-first order
    seen = Set([hash])       # tracks every node already queued (dedup)

    while !queue.is_empty():
        (current, depth) = queue.pop_front()
        if depth >= max_depth:
            continue         # depth limit reached; do not expand further

        children = inbound_table.get(current)
        for child in children:
            if child not in seen:
                seen.add(child)
                result.push(child)
                queue.push_back((child, depth + 1))

    return result

Mycelium Sync Retry

┌──────────┐
│ pending  │  initial state
└────┬─────┘
     │ start sync

┌──────────┐   success   ┌──────────┐
│ syncing  │ ──────────▶ │  synced  │
└────┬─────┘             └──────────┘
     │ failure

┌──────────┐
│  failed  │ ◀───────┐
│ (waiting)│         │
└────┬─────┘         │
     │ next_retry_at │ failure
     │ reached       │
     ▼               │
┌──────────┐         │
│ syncing  │ ────────┘
└──────────┘

Retry strategy (exponential backoff):
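For illustration only, a typical exponential schedule for deriving next_retry_at from retry_count; the base delay and cap here are assumptions, not Synapse's actual constants:

```python
# Illustrative exponential backoff for next_retry_at. The base delay and
# cap are assumptions, not Synapse's actual values.
def backoff_seconds(retry_count: int, base: int = 60, cap: int = 3600) -> int:
    # Delay doubles with each failed attempt, bounded by the cap.
    return min(base * 2 ** retry_count, cap)
```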

7. Time Complexity

| Operation         | PostgreSQL  | Redb        |
|-------------------|-------------|-------------|
| Store spore       | O(1) + O(R) | O(1) + O(R) |
| Get spore         | O(1)        | O(log N)    |
| Store mycelium    | O(1)        | O(1)        |
| Get mycelium      | O(1)        | O(log N)    |
| Domain spore list | O(K)        | O(K)        |
| Lineage tracking  | O(V + E)    | O(V + E)    |
| Save/get setting  | O(1)        | O(log N)    |

Where:

- N = total records in the table (Redb B-tree lookups are O(log N))
- R = number of bonds on the spore being stored
- K = number of spores in the domain
- V, E = vertices and edges of the traversed lineage graph

8. Nostr Event Storage

When the nostr feature is enabled, Synapse stores Nostr events in-memory for relay serving.

Nostr Identity Key:

The Nostr secp256k1 keypair is persisted via the settings table:

| `key_store` value | Storage                       | Key                |
|-------------------|-------------------------------|--------------------|
| `"storage"`       | Database `settings` table     | `nostr_nsec`       |
| `"file:<path>"`   | Plain file (0600 permissions) | nsec bech32 string |

The key is auto-generated on first run if it does not exist.
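A minimal sketch of the "file:<path>" option, assuming a hypothetical write_key_file helper; the 0600 permission bits come from the table above:

```python
# Sketch: persist the nsec bech32 string to a plain file created with 0600
# permissions. Function name and example path are illustrative, not
# Synapse's API; the key string is a placeholder.
import os

def write_key_file(path: str, nsec: str) -> None:
    # os.open applies the 0600 mode at creation time, before any write.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(nsec)

write_key_file("/tmp/synapse_nsec_demo", "nsec1examplexxxx")
```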

CMN Event Verification:

All CMN content from Nostr events undergoes the same verification as HTTP pulses:

1. Parse event content as CMN pulse JSON
2. Validate protocol = "cmn/1"
3. Extract domain from URI
4. Fetch cmn.json, verify Ed25519 signature
5. Store spore/mycelium in database

9. Background Tasks

Mycelium Retry Task:

Nostr Subscription Listener (when nostr.subscribe: true):

Certificate Renewal Check:

Graceful Shutdown: