Agent-First PSQL: SQL Queries as Structured Events

by CMN Contributors

A PostgreSQL client that reports rows, timing, and SQLSTATE failures as machine-readable events instead of terminal prose.

Your agent inserts a row. It gets back:

ERROR:  duplicate key value violates unique constraint "users_email_key"
DETAIL:  Key (email)=(alice@example.com) already exists.

That came from stderr. psql exited 1 (or 3, depending on how you called it). Now your agent has to notice the output arrived on stderr, grep for “duplicate key”, parse out “users_email_key” to learn which constraint fired, and extract “(alice@example.com)” to know which value conflicted. This is English prose coming out of a program designed for humans reading a terminal.

What the agent actually needed: “unique violation, column email, value alice@example.com.” That’s a data structure, not a sentence.

afpsql speaks to agents. It produces JSONL events with SQLSTATE codes, typed rows, and timing. The English is still there in message for humans reading logs. But the agent uses sqlstate.

The protocol

Every interaction is structured events over stdin/stdout. A query goes in:

{"code":"query","id":"q1","sql":"select * from users where id = $1","params":[42]}

A result comes out:

{"code":"result","id":"q1","command_tag":"ROWS 1","columns":[{"name":"id","type":"int4"},{"name":"email","type":"text"}],"rows":[{"id":42,"email":"alice@example.com"}],"row_count":1,"trace":{"duration_ms":3}}

If the query fails, the error comes out:

{"code":"sql_error","id":"q1","sqlstate":"23505","message":"duplicate key value violates unique constraint \"users_email_key\"","detail":"Key (email)=(alice@example.com) already exists.","trace":{"duration_ms":2}}

The sqlstate field is what the agent branches on. 23505 is a unique violation. 23503 is a foreign key violation. 42P01 means the table doesn’t exist. 40001 is a serialization failure — retry the transaction. An agent can handle each case correctly without parsing a word of the English message. The full five-character SQLSTATE code comes directly from PostgreSQL; afpsql doesn’t invent its own error taxonomy.
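Branching on sqlstate can be a small pure function on the agent side; a sketch in Python, assuming the event shapes shown above (the category names and the retryable set are our choices, not part of afpsql):

```python
import json

# Codes worth retrying: serialization failure, deadlock detected.
RETRYABLE = {"40001", "40P01"}

def classify(event_line: str) -> str:
    """Map one afpsql JSONL event to an agent-side action category."""
    event = json.loads(event_line)
    if event["code"] != "sql_error":
        return "ok"
    sqlstate = event["sqlstate"]
    if sqlstate == "23505":
        return "unique_violation"      # pick a new value or upsert
    if sqlstate == "23503":
        return "fk_violation"          # referenced row is missing
    if sqlstate == "42P01":
        return "missing_table"         # fix the query, don't retry
    if sqlstate in RETRYABLE:
        return "retry"
    return "error"
```

No string matching against the English message is involved; the full five-character code is enough to pick a branch.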

CLI mode

One query, one structured result, exit:

afpsql --dsn-secret "postgresql://user:pass@host/db" \
  --sql 'select id, email from users where id = $1' \
  --param 1=42
# {"code":"result","command_tag":"ROWS 1","columns":[...],"rows":[{"id":42,"email":"alice@example.com"}],"row_count":1,"trace":{"duration_ms":3}}

The _secret suffix on --dsn-secret matches the Agent-First Data convention — the connection string won’t appear in structured output or logs. Exit codes follow the same pattern as afhttp: 0 means afpsql got a response from the database (including SQL errors), 1 means a connection failure, 2 means invalid arguments. A unique violation is exit 0 with code: "sql_error" — it’s data, not a crash.
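That exit-code contract translates into a small agent-side wrapper; a sketch in Python (the interpret helper is illustrative, not shipped with afpsql):

```python
import json

def interpret(exit_code: int, stdout: str) -> dict:
    """Map afpsql's exit-code contract to an agent-side outcome."""
    if exit_code == 2:
        return {"kind": "usage_error"}        # bad arguments, nothing ran
    if exit_code == 1:
        return {"kind": "connection_failure"} # never reached the database
    # Exit 0: a structured event arrived, which may itself be a SQL error.
    event = json.loads(stdout)
    if event["code"] == "sql_error":
        return {"kind": "sql_error", "sqlstate": event["sqlstate"]}
    return {"kind": "ok", "rows": event.get("rows", [])}
```

The key property is the last branch: exit 0 with a sql_error event is data the agent handles, not a process failure it escalates.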

If you prefer not to pass the DSN on the command line, environment variables work too: AFPSQL_DSN_SECRET for the full connection string, or the standard PostgreSQL PGHOST, PGPORT, PGUSER, and PGDATABASE for individual fields.

Pipe mode: connection reuse, concurrent queries

afpsql --dsn-secret "postgresql://user:pass@host/db" --mode pipe <<'EOF'
{"code":"query","id":"users","sql":"select count(*) as n from users"}
{"code":"query","id":"orders","sql":"select count(*) as n from orders"}
{"code":"close"}
EOF

Both queries run concurrently against the same connection pool backed by deadpool-postgres. There’s no reconnect per query. If you’re running fifty analytics queries in a loop, you’re not paying fifty connection handshakes. The responses arrive as each query completes and are matched to requests by id.
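Because responses can complete in any order, the agent's read loop matches events back to requests by id; a minimal reader sketch (the function name is ours):

```python
import json

def collect_results(lines, expected_ids):
    """Read JSONL events until every expected query id has answered."""
    pending = set(expected_ids)
    results = {}
    for line in lines:
        event = json.loads(line)
        # Either a result or a sql_error closes out a query id.
        eid = event.get("id")
        if eid in pending:
            results[eid] = event
            pending.discard(eid)
        if not pending:
            break
    return results
```

Feeding it the stdout of the pipe-mode session above would yield one event per id, regardless of which count finished first.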

Streaming large results

By default, afpsql collects the full result set and returns it as a single result event. For large exports this isn’t practical:

{"code":"query","id":"export","sql":"select * from events","options":{"stream_rows":true,"batch_rows":1000}}

With stream_rows, afpsql emits a result_start event with column metadata, then successive result_rows events each containing up to batch_rows rows, then a result_end with timing. The agent processes each batch as it arrives without holding the full dataset in memory.

Without stream_rows, if a result exceeds inline_max_rows or inline_max_bytes, afpsql returns a result_too_large error with a retry hint pointing to stream_rows. You don’t get a partial result silently truncated; you get an explicit signal to change the query options.
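Agent-side, the streaming handshake is a small loop; a sketch assuming the three event shapes named above carry the same field layout as the result event shown earlier (columns, rows, trace):

```python
import json

def stream_totals(lines):
    """Consume result_start / result_rows / result_end, batch by batch."""
    columns, row_count, duration_ms = [], 0, None
    for line in lines:
        event = json.loads(line)
        if event["code"] == "result_start":
            columns = [c["name"] for c in event["columns"]]
        elif event["code"] == "result_rows":
            row_count += len(event["rows"])   # process the batch, then drop it
        elif event["code"] == "result_end":
            duration_ms = event.get("trace", {}).get("duration_ms")
            break
    return columns, row_count, duration_ms
```

Here each batch is counted and discarded, which is the point: memory stays bounded by batch_rows, not by the size of the export.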

Parameters are always positional

{"code":"query","id":"q1","sql":"select * from users where id = $1 and status = $2","params":[123,"active"]}

There is no string interpolation mode. params is the only way to pass values into a query. PostgreSQL prepares the statement and binds typed values — there’s no point at which user-supplied data touches the SQL string. An agent generating queries from user input doesn’t need to sanitize anything, because there’s nothing to sanitize.
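On the agent side, building a query event from untrusted values is plain JSON construction; a sketch (the helper name is ours, not part of afpsql):

```python
import json

def query_event(qid, sql, *params):
    """Build a query event: values travel in params, the SQL never changes."""
    return json.dumps({"code": "query", "id": qid, "sql": sql,
                       "params": list(params)})
```

A hostile value like "1; drop table users" simply arrives as a bound string parameter; the SQL text the server prepares is byte-for-byte what the agent wrote.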

psql compatibility mode

Agents with existing tool definitions written for psql can switch without changing their tool calls:

afpsql --mode psql -c "select 1 as n" -h localhost -p 5432 -U myuser -d mydb
# {"code":"result","rows":[{"n":1}],...}

--mode psql translates -c, -f, -h, -p, -U, -d, and -v into afpsql’s internal fields. Output is always structured JSONL — not psql’s padded table format. The agent gets the same clean events regardless of how the flags were spelled.

MCP mode

--mode mcp runs afpsql as an MCP server over stdio. Two tools are available: psql_query runs a parameterized query and returns structured events. psql_config sets connection defaults (DSN, timeouts, inline limits) that persist for the session.

Install

brew install cmnspore/tap/afpsql   # macOS/Linux
scoop bucket add cmnspore https://github.com/cmnspore/scoop-bucket && scoop install afpsql  # Windows
cargo install agent-first-psql     # any platform

Docs: github.com/cmnspore/agent-first-psql