The write option lets you fan events out to a non-Boundary destination — or replace the Boundary transport entirely. write is called with the full batch every time the batcher flushes.

Signature

type CustomWrite = (events: BoundaryLogEvent[]) => void | Promise<void>;
Return a promise if your sink is async (most are). The batcher awaits it before the next flush runs.

Mirror mode — apiKey + write

With both configured, the HTTP transport and your custom sink fire on every flush. Useful for a local mirror during rollout:
createBoundaryLogger({
  apiKey: process.env.BOUNDARY_API_KEY,
  write(events) {
    for (const e of events) {
      console.log(JSON.stringify(e));
    }
  },
});
The custom sink is not wrapped in the circuit breaker — it’s your code, you own its failure semantics. The HTTP transport to Boundary is always breaker-wrapped.
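If you want breaker semantics around your own sink, a minimal sketch of the pattern — the thresholds, `withBreaker` helper, and `send` callback are illustrative, not part of the Boundary API:

```typescript
// Wrap a sink so repeated failures open a "breaker" that drops batches
// instead of blocking flushes. Purely illustrative; tune to taste.
function withBreaker(
  send: (events: unknown[]) => Promise<void>,
  { maxFailures = 5, resetMs = 30_000 } = {},
) {
  let failures = 0;
  let openedAt = 0;
  return async (events: unknown[]) => {
    if (failures >= maxFailures && Date.now() - openedAt < resetMs) {
      return; // breaker open: drop the batch rather than stall the flush
    }
    try {
      await send(events);
      failures = 0; // success closes the breaker
    } catch (err) {
      failures++;
      openedAt = Date.now();
      throw err; // still surfaces through onError
    }
  };
}
```

Pass the wrapped function as `write` and failed batches are dropped once the breaker opens, until `resetMs` elapses.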

Replacement mode — write only

Omit apiKey and no HTTP traffic is sent. Every event goes only to your sink.
import pino from "pino";
const log = pino();

createBoundaryLogger({
  write(events) {
    // pino's info() is synchronous, so write can stay non-async here
    log.info({ events }, "boundary.batch");
  },
});
Note the dev-safe fallback: without apiKey and without write, createBoundaryLogger returns null. With either one configured, it returns a live logger.
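The decision table is easy to model. A self-contained sketch of that contract — `makeLogger` is a stand-in for `createBoundaryLogger`, not the real implementation, and only the null-vs-live decision is modeled:

```typescript
// Models the dev-safe fallback described above: no apiKey and no write
// means no logger at all. Hypothetical stand-in, not the library's code.
type Options = { apiKey?: string; write?: (events: unknown[]) => void };

function makeLogger(opts: Options): { enabled: true } | null {
  if (!opts.apiKey && !opts.write) return null; // nothing configured: disabled
  return { enabled: true }; // either one configured: live logger
}
```

Because the return can be null, call sites in dev environments should guard (e.g. with optional chaining) before use.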

Integrations

Pino

import pino from "pino";
const log = pino();

createBoundaryLogger({
  write(events) {
    for (const e of events) {
      log.info({ boundary: e }, "contract.run");
    }
  },
});

OpenTelemetry logs

import { logs, SeverityNumber } from "@opentelemetry/api-logs";
const otelLogger = logs.getLogger("boundary");

createBoundaryLogger({
  write(events) {
    for (const e of events) {
      otelLogger.emit({
        severityNumber: e.ok ? SeverityNumber.INFO : SeverityNumber.ERROR,
        body: e.contractName,
        attributes: e as Record<string, unknown>,
      });
    }
  },
});

Write to a file (Node only)

import { appendFile } from "node:fs/promises";

createBoundaryLogger({
  async write(events) {
    const lines = events.map((e) => JSON.stringify(e)).join("\n") + "\n";
    await appendFile("/var/log/boundary.ndjson", lines);
  },
});

Kafka / SQS / your existing bus

createBoundaryLogger({
  async write(events) {
    await producer.send({
      topic: "boundary.events",
      messages: events.map((e) => ({ value: JSON.stringify(e) })),
    });
  },
});

Failure handling

If write throws or rejects, the batcher routes the error through onError:
createBoundaryLogger({
  write: unstableSink,
  onError(err) {
    metrics.increment("boundary.write.failure");
    console.warn(err);
  },
});
The batch is considered “done” regardless — custom sinks do not retry automatically. If you need retry / durability, wrap your sink:
async function write(events: BoundaryLogEvent[]) {
  for (let attempt = 1; attempt <= 3; attempt++) {
    try {
      await downstream.send(events);
      return;
    } catch (err) {
      if (attempt === 3) throw err;
      await new Promise((r) => setTimeout(r, 500 * attempt));
    }
  }
}

Back-pressure

write runs inline with the flush, so a slow sink slows flushes — which in turn makes the queue fill up faster. If your sink can block for long, either:
  • Apply maxQueueSize aggressively (lower it to something you’re comfortable dropping), or
  • Let write return quickly and buffer on its own (e.g. push to a local ring buffer that a worker drains).
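The second option can be sketched as follows — the names (`LogEvent`, `MAX_BUFFER`, `drainOnce`) and the chunk size are assumptions for illustration, not part of the Boundary API:

```typescript
// write() copies the batch into a bounded local buffer and returns
// immediately; a worker drains it off the flush path.
type LogEvent = Record<string, unknown>;

const MAX_BUFFER = 10_000;
const buffer: LogEvent[] = [];

function write(events: LogEvent[]): void {
  for (const e of events) {
    if (buffer.length >= MAX_BUFFER) buffer.shift(); // drop oldest when full
    buffer.push(e);
  }
}

// Called on an interval or by a dedicated worker; the slow send happens
// here, never inside the flush.
async function drainOnce(send: (batch: LogEvent[]) => Promise<void>) {
  while (buffer.length > 0) {
    await send(buffer.splice(0, 500)); // drain in chunks
  }
}
```

Flushes now cost only an in-memory copy, and the slow downstream call is paced by the worker instead of the batcher.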

See also

Batching

Flush triggers and queue overflow

Resilience

HTTP retry, circuit breaker, rate limits