Events are queued in memory and flushed to the network in batches. One flush is one network round-trip, regardless of how many events it contains.

Defaults

createBoundaryLogger({
  batch: {
    size: 20,           // flush when queue length hits this
    intervalMs: 5000,   // periodic flush every 5s, whichever comes first
    maxQueueSize: 1000, // drop-oldest overflow threshold
  },
});
A flush runs whenever either trigger fires:
  • queue.length >= size, or
  • intervalMs has elapsed since the last flush and the queue is non-empty.
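The two triggers can be sketched as a toy queue. This is a simplified illustration of the mechanism, not the library's internals; names like BatchQueue and onInterval are invented here.

```typescript
type Event = { name: string };

class BatchQueue {
  private queue: Event[] = [];
  flushes: Event[][] = []; // record of each flushed batch, for inspection

  constructor(private size: number) {}

  enqueue(event: Event): void {
    this.queue.push(event);
    // Trigger 1: queue length reached the batch size.
    if (this.queue.length >= this.size) this.flush();
  }

  // Trigger 2: invoked by the interval timer; flushes only if non-empty.
  onInterval(): void {
    if (this.queue.length > 0) this.flush();
  }

  private flush(): void {
    // One flush drains the whole queue into a single batch.
    this.flushes.push(this.queue.splice(0));
  }
}

const q = new BatchQueue(3);
q.enqueue({ name: "a" });
q.onInterval();           // 1 event pending, timer fires → flush #1
q.enqueue({ name: "b" });
q.enqueue({ name: "c" });
q.enqueue({ name: "d" }); // length hits 3 → flush #2
console.log(q.flushes.length); // 2
```

Note that the timer trigger is a no-op on an empty queue, so an idle logger makes no network calls.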

Concurrent flushes coalesce

If a flush is in-flight when another trigger fires, the second trigger does not start a second network call. It joins the existing flush, and any events enqueued while the first flush is running are picked up by the next cycle. This means you never fan out parallel requests to the ingest endpoint, and back-pressure during a slow network is handled by queue buildup rather than stampede.
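The coalescing behavior boils down to reusing the in-flight promise. A minimal sketch, where sendBatch stands in for the real transport:

```typescript
class Flusher {
  private inFlight: Promise<void> | null = null;
  networkCalls = 0; // exposed for the demo; counts round-trips

  private async sendBatch(): Promise<void> {
    this.networkCalls++;
    await new Promise((r) => setTimeout(r, 20)); // simulated round-trip
  }

  flush(): Promise<void> {
    // A trigger during an in-flight flush joins it instead of
    // starting a second network call.
    if (this.inFlight) return this.inFlight;
    this.inFlight = this.sendBatch().finally(() => {
      this.inFlight = null;
    });
    return this.inFlight;
  }
}

async function demo() {
  const f = new Flusher();
  // Two triggers firing while one flush is in-flight → one network call.
  await Promise.all([f.flush(), f.flush()]);
  console.log(f.networkCalls); // 1
}
demo();
```

Because the second caller gets the same promise, awaiting flush() from multiple places is always safe.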

Queue overflow — drop-oldest

When the queue grows past maxQueueSize, the oldest events are dropped to make room for new ones. This is deliberate:
  • It bounds memory usage during a backend outage.
  • It keeps the most recent signal (the stuff your users are hitting right now) over stale signal.
  • Dropped events are reported via onError (see Resilience) so you know it’s happening.
If dropping at all is unacceptable for your workload, lower size and intervalMs so the queue empties faster, or wire a custom sink: its write runs synchronously on flush and can apply its own back-pressure.
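Drop-oldest itself is a small invariant: after every enqueue, the queue holds at most maxQueueSize events, and anything evicted is surfaced to a callback. A sketch (onDrop stands in for the library's onError reporting):

```typescript
function enqueueWithOverflow<T>(
  queue: T[],
  event: T,
  maxQueueSize: number,
  onDrop: (dropped: T[]) => void,
): void {
  queue.push(event);
  if (queue.length > maxQueueSize) {
    // Evict the oldest events so the newest signal survives,
    // and report the eviction instead of failing silently.
    const dropped = queue.splice(0, queue.length - maxQueueSize);
    onDrop(dropped);
  }
}

const queue: number[] = [];
const droppedLog: number[] = [];
for (let i = 1; i <= 5; i++) {
  enqueueWithOverflow(queue, i, 3, (d) => droppedLog.push(...d));
}
console.log(queue);      // [3, 4, 5] — newest kept
console.log(droppedLog); // [1, 2]    — oldest dropped, reported
```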

Tuning

  • Low-volume API (< 10 runs/sec): keep the defaults.
  • Spiky traffic: raise size to 50-100 so spikes send in fewer round-trips.
  • Latency-sensitive dashboards: lower intervalMs to 1000 for near-real-time visibility.
  • Long-running batch jobs: call await logger.flush() at job completion instead of relying on the timer.
  • Memory-constrained runtimes (Workers, Lambda): lower maxQueueSize to 100-200.
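These recommendations compose. For example, a memory-constrained deployment with spiky traffic might combine several of them (values illustrative, not prescriptive):

```typescript
createBoundaryLogger({
  batch: {
    size: 100,         // absorb spikes in fewer round-trips
    intervalMs: 1000,  // flush often so visibility stays near-real-time
    maxQueueSize: 200, // tight memory bound for Workers/Lambda
  },
});
```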

Timer vs explicit flush

The periodic timer is the safety net, not the primary flush mechanism for short-lived processes. In these environments, call await logger.flush(timeoutMs) explicitly at the end of each request:
  • Serverless functions (Lambda, Vercel Functions)
  • Edge runtimes (Cloudflare Workers, Vercel Edge)
  • Request-scoped browser contexts
See Runtime Platforms for per-runtime recipes.
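The per-request pattern can be sketched with a stub logger standing in for createBoundaryLogger (only the flush(timeoutMs) call matters here; the handler and event names are illustrative):

```typescript
interface Logger {
  log(event: string): void;
  flush(timeoutMs?: number): Promise<void>;
}

// Stub implementation so the example is self-contained.
function makeStubLogger(): Logger & { flushed: string[] } {
  const queue: string[] = [];
  const flushed: string[] = [];
  return {
    flushed,
    log: (e) => queue.push(e),
    async flush() {
      flushed.push(...queue.splice(0));
    },
  };
}

// Per-request handler: log during the request, then flush before
// returning so nothing is left in memory when the runtime freezes
// or tears down the instance.
async function handleRequest(logger: Logger): Promise<string> {
  logger.log("request.start");
  // ... do work ...
  logger.log("request.end");
  await logger.flush(2000); // bound the wait so a slow ingest can't hang the response
  return "ok";
}
```

Passing a timeout to flush() keeps a slow ingest endpoint from delaying the response indefinitely.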

Disabling the timer

Setting intervalMs: 0 disables the periodic flush entirely. Events are only flushed on size threshold, on flush(), or on shutdown().
createBoundaryLogger({
  batch: { intervalMs: 0 },  // no background timer at all
});
Useful for request-scoped loggers where you always call flush() explicitly.

See also

Shutdown

Flush + stop on process exit

Resilience

Retry, circuit breaker, rate limits