`process.beforeExit` drains the queue when the event loop empties, and the HTTP transport reuses TCP connections via native `fetch`.
Requirements
- Node 18+ for native `fetch`. On Node < 18 you must pass a `fetch` polyfill via the `fetch` option.
- For OIDC-related workflows (unrelated to the SDK itself), Node 22 / npm 11.x matters; the SDK itself runs on any Node 18+.
Long-running server — full pattern
- The server’s `close()` stops accepting new requests.
- `logger?.shutdown(2000)` drains with a 2 s cap. Without the cap, a bad network could block Ctrl+C indefinitely.
- `?.` handles the dev-safe fallback: `createBoundaryLogger` returns `null` when neither `apiKey` nor `write` is configured.
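Putting those pieces together, a long-running server might look like the sketch below. The `createBoundaryLogger` stub only mimics the null fallback described above, and `BOUNDARY_API_KEY` is an illustrative environment variable name, not an SDK convention:

```typescript
import { createServer } from "node:http";

interface BoundaryLogger {
  info(msg: string): void;
  shutdown(timeoutMs: number): Promise<void>;
}

// Stub mimicking the dev-safe fallback: null when no apiKey is configured,
// so every call site must use optional chaining.
function createBoundaryLogger(opts: { apiKey?: string }): BoundaryLogger | null {
  if (!opts.apiKey) return null;
  return {
    info(_msg: string) { /* enqueue for the HTTP transport */ },
    async shutdown(_timeoutMs: number) { /* drain, capped at the timeout */ },
  };
}

const logger = createBoundaryLogger({ apiKey: process.env.BOUNDARY_API_KEY });

const server = createServer((req, res) => {
  logger?.info(`handled ${req.url}`);
  res.end("ok");
});
server.listen(0);

async function onSignal(signal: NodeJS.Signals): Promise<void> {
  server.close();               // stop accepting new requests; in-flight finish
  await logger?.shutdown(2000); // drain queued events, 2 s cap
  process.exit(signal === "SIGINT" ? 130 : 0);
}

process.on("SIGTERM", () => void onSignal("SIGTERM"));
process.on("SIGINT", () => void onSignal("SIGINT"));
```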
Why the SDK doesn’t attach to SIGTERM / SIGINT
Node processes often have several owners of these signals: HTTP servers, database pools, queue workers. A silent SDK listener would either:
- Race with yours: the SDK’s `beforeExit` fires, but your own handler is still draining connections. The event loop restarts, `beforeExit` doesn’t fire a second time, and events get dropped.
- Delay Ctrl+C: a stuck flush keeps the process alive past what you’d expect from a keyboard interrupt.
Instead, the SDK hooks `beforeExit` (which is non-blocking and fires when everything else is done) but leaves signals to you.
pm2 / systemd
Both send `SIGTERM` before `SIGKILL`. Your handler gets a grace period (default 1.6 s on pm2, `TimeoutStopSec=90` on systemd); pass that same budget to `logger.shutdown()`:
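A sketch of wiring that budget through; the environment variable name and the 500 ms teardown margin are arbitrary choices for illustration, not SDK conventions:

```typescript
// Derive the flush cap from the supervisor's kill timeout, leaving some
// margin for the rest of teardown. 1600 ms mirrors pm2's default kill_timeout.
const GRACE_MS = Number(process.env.SHUTDOWN_GRACE_MS ?? 1600);
const FLUSH_BUDGET_MS = Math.max(0, GRACE_MS - 500);

async function handleSigterm(
  logger: { shutdown(timeoutMs: number): Promise<void> } | null,
): Promise<void> {
  await logger?.shutdown(FLUSH_BUDGET_MS); // never exceeds the grace period
  process.exit(0);
}

// null stands in for the dev fallback where no logger was configured.
process.once("SIGTERM", () => void handleSigterm(null));
```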
For systemd, keep `KillSignal=SIGTERM` (the default) and set `TimeoutStopSec` to at least a few seconds:
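An illustrative unit fragment; the paths and the 10 s budget are examples, not recommendations from the SDK:

```ini
[Service]
ExecStart=/usr/bin/node /srv/app/server.js
# SIGTERM first (systemd's default), SIGKILL only after TimeoutStopSec.
KillSignal=SIGTERM
TimeoutStopSec=10
```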
Clustering
In Node’s `cluster` module, each worker is its own process. Each worker needs its own logger instance; there’s no automatic sharing. A single `apiKey` across workers is fine: the backend deduplicates by event ID and timestamp.
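A minimal sketch of the per-worker pattern, with a hypothetical `createBoundaryLogger` stand-in (the real SDK ships its own factory):

```typescript
import cluster from "node:cluster";

// Hypothetical factory mimicking the dev-safe fallback:
// null when no apiKey is configured.
function createBoundaryLogger(opts: { apiKey?: string }) {
  if (!opts.apiKey) return null;
  return {
    info(msg: string) { console.log(msg); },
    async shutdown(_timeoutMs: number) { /* drain */ },
  };
}

if (cluster.isPrimary) {
  // Two workers for the demo; each worker re-runs this file as its own
  // process, so each one constructs its own logger instance.
  cluster.fork();
  cluster.fork();
} else {
  // The same apiKey in every worker is fine: the backend deduplicates
  // by event ID and timestamp.
  const logger = createBoundaryLogger({ apiKey: process.env.BOUNDARY_API_KEY });
  logger?.info(`worker ${process.pid} up`);
  process.exit(0);
}
```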
Background jobs
For scripts, cron jobs, and BullMQ workers, `beforeExit` is enough: the script finishes, Node drains, the SDK flushes. Call `await logger?.flush()` explicitly if the script is likely to be killed mid-run (e.g. by a CI timeout).
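For example; the logger stub below is illustrative, and a real instance would come from the SDK factory:

```typescript
// Minimal stand-in with the flush() shape assumed above.
const logger = {
  queued: 0,
  info(_msg: string) { this.queued += 1; },
  async flush(): Promise<number> {
    const sent = this.queued;
    this.queued = 0; // pretend the batch went out over HTTP
    return sent;
  },
};

async function main(): Promise<void> {
  logger.info("job started");
  // ... the actual cron / BullMQ work ...
  logger.info("job finished");
  // On a clean exit beforeExit would flush this anyway; flushing here
  // protects the tail of the run against a hard kill (e.g. a CI timeout).
  await logger.flush();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```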
See also
- Shutdown: `flush` vs `shutdown`, timeout semantics
- Resilience: retry + breaker + timeouts