Node is the environment the SDK is tuned for by default. process.beforeExit drains the queue when the event loop empties, and the HTTP transport reuses TCP connections via native fetch.

Requirements

  • Node 18+ for native fetch. On Node < 18 you must supply a fetch polyfill via the fetch option.
  • Node 22 / npm 11.x matters only for OIDC-related workflows (unrelated to the SDK itself); the SDK runs on any Node 18+.
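On Node < 18 the wiring looks like this — a sketch only, where node-fetch stands in for any WHATWG-compatible polyfill:

```typescript
// Node < 18 has no global fetch, so pass an implementation via the
// SDK's `fetch` option. `node-fetch` here is illustrative; any
// WHATWG-compatible fetch works.
import fetch from "node-fetch";
import { createBoundaryLogger } from "@withboundary/sdk";

const logger = createBoundaryLogger({
  apiKey: process.env.BOUNDARY_API_KEY,
  fetch, // used in place of globalThis.fetch by the HTTP transport
});
```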

Long-running server — full pattern

import express from "express";
import { defineContract } from "@withboundary/contract";
import { createBoundaryLogger } from "@withboundary/sdk";

const app = express();

const logger = createBoundaryLogger({
  apiKey: process.env.BOUNDARY_API_KEY,
  environment: process.env.NODE_ENV === "production" ? "production" : "staging",
});

const contract = defineContract({ name: "lead-scoring", schema, rules, logger });

app.post("/score", async (req, res) => {
  // run is this request's unit of work, e.g. () => score(req.body)
  const result = await contract.accept(run);
  if (!result.ok) return res.status(422).json({ error: result.error.message });
  res.json(result.data);
});

const server = app.listen(3000);

// Install YOUR signal handlers — the SDK does not.
for (const signal of ["SIGTERM", "SIGINT"] as const) {
  process.once(signal, async () => {
    server.close();
    await logger?.shutdown(2000);
    process.exit(0);
  });
}
Two things to notice:
  1. The server’s close() stops accepting new requests.
  2. logger?.shutdown(2000) drains with a 2s cap. Without the cap, a bad network could block Ctrl+C indefinitely.
The optional chaining (?.) handles the dev-safe fallback: createBoundaryLogger returns null when neither apiKey nor write is configured.
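The cap semantics can be illustrated with a small self-contained helper. This is not the SDK's actual internals — just a sketch of how a time-capped drain like shutdown(2000) behaves: the flush races a timer, so a bad network can never block shutdown past the cap.

```typescript
// Sketch of a time-capped drain (hypothetical helper, not the SDK's code).
// Resolves true if the flush finished inside the cap, false if the cap hit.
async function drainWithCap(
  flush: () => Promise<void>,
  capMs: number,
): Promise<boolean> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const capHit = new Promise<boolean>((resolve) => {
    timer = setTimeout(() => resolve(false), capMs);
  });
  const drained = flush().then(
    () => true,
    () => false, // a failed flush also ends the wait
  );
  const fullyDrained = await Promise.race([drained, capHit]);
  clearTimeout(timer); // don't keep the event loop alive for the timer
  return fullyDrained; // false means events may have been dropped
}
```

A false return is the trade-off the cap buys you: bounded shutdown time in exchange for possibly dropping the tail of the queue.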

Why the SDK doesn’t attach to SIGTERM / SIGINT

Node processes often have several owners of these signals — HTTP servers, database pools, queue workers. A silent SDK listener would either:
  • Race with yours — the SDK’s beforeExit fires, but your own handler is still draining connections. The event loop restarts, beforeExit doesn’t fire a second time, and events get dropped.
  • Delay Ctrl+C — a stuck flush keeps the process alive past what you’d expect from a keyboard interrupt.
The SDK attaches to beforeExit (which is nonblocking and fires when everything else is done) but leaves signals to you.

pm2 / systemd

Both send SIGTERM before SIGKILL. Your handler gets some grace period (default 1.6s on pm2, TimeoutStopSec=90 on systemd) — pass that same budget to logger.shutdown():
// pm2 default is 1600ms before SIGKILL
await logger.shutdown(1500);
For systemd, set KillSignal=SIGTERM (the default) and TimeoutStopSec to at least a few seconds:
# boundary.service
[Service]
KillSignal=SIGTERM
TimeoutStopSec=5
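Whatever supervisor you run under, keep the drain budget strictly inside its stop window. A sketch, where STOP_WINDOW_MS is a hypothetical env var you would set to mirror TimeoutStopSec (systemd) or kill_timeout (pm2):

```typescript
// Hypothetical env var mirroring the supervisor's stop window (ms).
const STOP_WINDOW_MS = Number(process.env.STOP_WINDOW_MS ?? "5000");
// Leave ~500ms of headroom for server.close() and process teardown,
// so the drain finishes before SIGKILL arrives.
const DRAIN_BUDGET_MS = Math.max(0, STOP_WINDOW_MS - 500);
// later, in your SIGTERM handler: await logger?.shutdown(DRAIN_BUDGET_MS);
```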

Clustering

In Node’s cluster module, each worker is its own process. Each worker needs its own logger instance — there’s no automatic sharing. A single apiKey across workers is fine; the backend deduplicates by event ID and timestamp.
import cluster from "node:cluster";
import { availableParallelism } from "node:os";
import { createBoundaryLogger } from "@withboundary/sdk";

const cpuCount = availableParallelism();

if (cluster.isPrimary) {
  for (let i = 0; i < cpuCount; i++) cluster.fork();
} else {
  const logger = createBoundaryLogger({ apiKey: process.env.BOUNDARY_API_KEY });
  // ... normal server setup
  process.once("SIGTERM", async () => {
    await logger?.shutdown(2000);
    process.exit(0);
  });
}

Background jobs

For scripts / cron jobs / BullMQ workers, beforeExit is enough — the script finishes, Node drains, the SDK flushes.
async function main() {
  for (const row of rows) {
    await contract.accept(() => score(row));
  }
}

main().catch((err) => {
  console.error(err);
  process.exitCode = 1; // fail the job without skipping the beforeExit flush
});
// No explicit flush needed. beforeExit handles it.
Only add await logger?.flush() if the script is likely to be killed mid-run (e.g. by a CI timeout).
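For kill-prone runs, a periodic flush bounds the loss window. This is an illustrative pattern, not an SDK feature — FLUSH_EVERY and processAll are names invented for the sketch:

```typescript
// Flush every FLUSH_EVERY rows so a hard kill (CI timeout, OOM) loses at
// most one batch of events. Illustrative only; not part of the SDK.
const FLUSH_EVERY = 500;

async function processAll<T>(
  rows: T[],
  handle: (row: T) => Promise<void>,
  flush: () => Promise<void>,
): Promise<void> {
  for (let i = 0; i < rows.length; i++) {
    await handle(rows[i]);
    if ((i + 1) % FLUSH_EVERY === 0) await flush();
  }
  await flush(); // final drain; redundant with beforeExit but harmless
}
```

In the happy path this just adds a few extra flushes; its value shows only when the process dies before beforeExit can fire.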

See also

  • Shutdown: flush vs shutdown, timeout semantics
  • Resilience: retry + breaker + timeouts