Deploying Synapse
Self-host a Synapse instance to index CMN spores, serve discovery queries, and provide resilience for your network.
Prerequisites
- Rust toolchain — Install via rustup
- A domain — e.g., `synapse.example.com` with a DNS A/AAAA record pointing to your server
- Embedding service (optional) — Ollama or an OpenAI API key for semantic search
1. Build
Clone the synapse spore and build:
```bash
git clone https://cmn.dev/cmn/repos/synapse.git
cd synapse
cargo build --release
```
All features (PostgreSQL, redb, Nostr, semantic search) are included by default. To build with only specific features:
```bash
# Embedded storage only, no search or Nostr
cargo build --release --no-default-features --features redb
```
The binary is at `target/release/synapse`.
2. Configure
Synapse reads `config.yml` from the working directory. Start from the included example:

```bash
cp config.yml my-config.yml
```
Minimal config (development):
```yaml
debug: true
storage:
  backend: "redb"
  redb_path: "synapse.db"
http:
  enabled: true
  address: "0.0.0.0:3000"
https:
  enabled: false
```
Production with reverse proxy (Caddy, nginx):
```yaml
debug: false
log_format: "json"
storage:
  backend: "postgres"
  postgres_url_secret: "postgres://synapse:password@localhost/synapse"
http:
  enabled: true
  address: "127.0.0.1:3000"  # Only listen on localhost
https:
  enabled: false             # Reverse proxy handles TLS
```
Production standalone (built-in ACME):
```yaml
debug: false
log_format: "json"
storage:
  backend: "redb"
  redb_path: "/var/lib/synapse/synapse.db"
http:
  enabled: true
  address: "[::]:80"
https:
  enabled: true
  address: "[::]:443"
  domains:
    - "synapse.example.com"
  acme:
    contact: "admin@example.com"
    mode: "production"  # Let's Encrypt production certs
    renewal_before_days: 30
```
See Synapse API Reference — Configuration for all options.
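Before restarting with a new config, a structural sanity check can catch mismatched backend settings early. The sketch below is illustrative only: Synapse performs its own validation on startup, and the field names simply mirror the examples above.

```python
# Minimal pre-flight check for a Synapse-style config dict.
# Illustrative sketch; field names mirror the examples above,
# this is NOT Synapse's own validation logic.

REQUIRED_BY_BACKEND = {
    "redb": "redb_path",
    "postgres": "postgres_url_secret",
}

def check_config(cfg: dict) -> list[str]:
    """Return a list of human-readable problems (empty list = OK)."""
    problems = []
    storage = cfg.get("storage", {})
    backend = storage.get("backend")
    if backend not in REQUIRED_BY_BACKEND:
        problems.append(f"unknown storage backend: {backend!r}")
    elif REQUIRED_BY_BACKEND[backend] not in storage:
        problems.append(f"{backend} backend requires {REQUIRED_BY_BACKEND[backend]}")
    https = cfg.get("https", {})
    if https.get("enabled") and not https.get("domains"):
        problems.append("https.enabled is true but no domains are listed")
    return problems

cfg = {
    "storage": {"backend": "redb", "redb_path": "synapse.db"},
    "http": {"enabled": True, "address": "0.0.0.0:3000"},
    "https": {"enabled": False},
}
print(check_config(cfg))  # → []
```

Injecting the parsed config as a plain dict keeps the check independent of any YAML library.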
3. Run
```bash
# Direct
./synapse
```
```bash
# With systemd
sudo cp synapse /usr/local/bin/
sudo systemctl enable --now synapse
```
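The `systemctl` command above assumes a unit file is already installed. A minimal sketch follows; the paths, user, and hardening options are assumptions to adapt to your layout, not a shipped unit:

```ini
# /etc/systemd/system/synapse.service — illustrative sketch
[Unit]
Description=Synapse CMN index
After=network-online.target
Wants=network-online.target

[Service]
# Assumed layout: binary in /usr/local/bin, config.yml in WorkingDirectory
ExecStart=/usr/local/bin/synapse
WorkingDirectory=/var/lib/synapse
Restart=on-failure
# Allows binding 80/443 without root (alternative to the setcap step below)
AmbientCapabilities=CAP_NET_BIND_SERVICE
User=synapse

[Install]
WantedBy=multi-user.target
```

Run `sudo systemctl daemon-reload` after creating or editing the unit.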
With supervisord:

```ini
[program:synapse]
command=/home/ubuntu/synapse/bin/synapse
directory=/home/ubuntu/synapse/bin
autostart=true
autorestart=true
stderr_logfile=/home/ubuntu/synapse/bin/log/err.log
stdout_logfile=/home/ubuntu/synapse/bin/log/log.log
```
Runtime logs and Agent-First Data protocol events are emitted on stdout; keep `stdout_logfile` as your primary operational log sink.
To bind to ports 80/443 without root:
```bash
sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/synapse
```

4. Verify
```bash
# Health check — should return a valid JSON body
curl http://localhost:3000/synapse/myceliums

# Send a test pulse from a publisher
hypha mycelium pulse --synapse https://synapse.example.com \
  --file path/to/spore.json

# Verify the spore was indexed
curl https://synapse.example.com/synapse/spore/<hash>
```

5. Enable Search (Optional)
Semantic search requires an embedding service. The simplest option is Ollama running locally:
```bash
# With Ollama installed and running, pull the embedding model
ollama pull nomic-embed-text
```
Add to config.yml:
```yaml
search:
  enabled: true
  index_path: "synapse-search"
  embedding:
    provider: "ollama"
    url: "http://localhost:11434"
    model: "nomic-embed-text"
    dimensions: 768
  graph:
    enabled: true
    rebuild_on_startup: true
```
Restart synapse. To rebuild the search index from existing data:
```bash
synapse --rebuild-search
```

6. Enable Nostr (Optional)
The Nostr relay integration enables cross-instance synchronization. Add to `config.yml`:
```yaml
nostr:
  enabled: true
  relays:
    - "wss://relay.damus.io"
    - "wss://nos.lol"
  key_store: "storage"
  subscribe: true      # Receive CMN events from other instances
  forward_pulse: true  # Broadcast HTTP pulses to Nostr network
```
This activates the /nostr WebSocket endpoint and connects to external relays. A secp256k1 identity key is auto-generated on first run.
7. Data Sources & Bootstrapping
A Synapse instance receives data through four channels:
| Source | Mechanism | Data Flow |
|---|---|---|
| HTTP Pulse | POST /synapse/pulse | Domains push signed manifests directly to your instance |
| Nostr Subscription | Subscribe to external relays | Pull historical events on connect, then stream new events in real-time |
| Nostr Relay | /nostr WebSocket endpoint | Other instances or clients push CMN events to your relay |
| Crawler | Background retry loop (60s interval) | Re-fetches cmn.json → mycelium → spores from known domains that previously failed |
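The crawler row above describes a fixed-interval loop with no retry cap. The following sketch pictures that behaviour; the `fetch` callable and the `failed` set are hypothetical stand-ins, not Synapse internals, and `max_passes` exists only so the loop can be exercised offline:

```python
import time

RETRY_INTERVAL_SECS = 60  # fixed interval described above

def crawl_failed_domains(failed, fetch, sleep=time.sleep, max_passes=None):
    """Retry every failed domain each pass; a domain leaves the set only
    on success. Synapse's loop is intentionally endless (permanent retry);
    max_passes is a testing knob, not part of the described behaviour."""
    passes = 0
    while failed and (max_passes is None or passes < max_passes):
        for domain in list(failed):
            try:
                fetch(domain)           # cmn.json → mycelium → spores
                failed.discard(domain)  # success: stop retrying
            except Exception:
                pass                    # permanent retry: keep it queued
        passes += 1
        if failed:
            sleep(RETRY_INTERVAL_SECS)
    return passes
```

Injecting `sleep` makes the loop deterministic under test while keeping the 60-second cadence in production.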
Bootstrapping a New Instance
A fresh Synapse starts with an empty index. Enable Nostr subscription to pull historical data and listen for new events from the network:
```yaml
nostr:
  enabled: true
  relays:
    # Public Nostr relays — CMN events are stored here by other instances
    - "wss://relay.damus.io"
    - "wss://nos.lol"
    # Other Synapse instances also expose /nostr as a relay endpoint
    # - "wss://synapse.cmn.dev/nostr"
  key_store: "storage"
  subscribe: true      # Pull historical + real-time events
  forward_pulse: true  # Broadcast your pulses to the network
```
On startup, the Nostr subscription connects to each relay, sends a REQ for all CMN events (kind 30078, tagged cmn-spore or cmn-mycelium), receives all stored events, then continues listening for new ones. Each event is verified (CMN signature via cmn.json public key) before being indexed.
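The initial subscription is a standard NIP-01 REQ frame. A sketch of what that request could look like follows; kind 30078 is stated above, but the `#t` tag filter is an assumption about how the `cmn-spore` / `cmn-mycelium` labels are attached:

```python
import json

def cmn_req_frame(sub_id: str) -> str:
    """Build a NIP-01 REQ frame asking a relay for stored CMN events.
    The "#t" filter is an assumed labelling scheme, not a documented one."""
    filt = {
        "kinds": [30078],                      # CMN event kind (per text above)
        "#t": ["cmn-spore", "cmn-mycelium"],   # assumed tag filter
    }
    return json.dumps(["REQ", sub_id, filt])

frame = cmn_req_frame("cmn-bootstrap")
print(frame)
```

The relay replies with stored events matching the filter, then streams new ones until the subscription is closed.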
Relay sources (use any combination):
| Relay Type | Example | Notes |
|---|---|---|
| Public Nostr relays | wss://relay.damus.io | CMN events forwarded here by instances with forward_pulse: true |
| Another Synapse’s /nostr | wss://synapse.example.com/nostr | Every Synapse with Nostr enabled exposes a NIP-01 relay |
Public Nostr relays are the most resilient option — they operate independently of any CMN infrastructure. As long as at least one Synapse instance has been forwarding pulses to these relays, historical data is available for bootstrapping.
How Domains Enter the Index
Domains become known to your instance through:
- Pulse — A domain sends `hypha mycelium pulse --synapse https://your-instance/` after releasing
- Nostr — Events from subscribed relays contain domain information, which is verified and indexed
- Crawler retry — Once a domain is known but its crawl failed, the background loop retries every 60 seconds indefinitely (no maximum retry limit)
There is no seed list or manual domain configuration — all domains are discovered dynamically through pulses and Nostr events.
Cross-Instance Synchronization
When forward_pulse: true, every HTTP pulse received by your instance is also published to your configured Nostr relays. When subscribe: true, your instance listens for CMN events on those same relays. Public Nostr relays act as the shared transport layer between instances:
```
Instance A                 Public Relays                 Instance B
    │                            │                            │
    │  forward_pulse ──────────→ │                            │
    │                            │ ──────────→ subscribe      │
    │                            │                            │
    │  subscribe ←────────────── │                            │
    │                            │ ←────────── forward_pulse  │
```
Instances can also subscribe directly to each other’s /nostr WebSocket endpoints for lower latency, but public relays provide resilience — if any instance goes offline, the data remains available on the relay network.
Each instance independently verifies all data via CMN signatures and cmn.json public keys — no trust is placed in the relay or the sending instance.
Storage Backends
| Backend | Use Case | Config |
|---|---|---|
| redb | Development, single-node, low traffic | backend: "redb" — zero-config, single-file |
| PostgreSQL | Production, high traffic, concurrent queries | backend: "postgres" — requires PostgreSQL 14+ |
Both backends are compiled in by default. See Synapse Storage for schema details.
Updating
Pull the latest source and rebuild:
```bash
cd synapse
git pull
cargo build --release

# Replace binary and restart
sudo cp target/release/synapse /usr/local/bin/synapse
sudo systemctl restart synapse
```