Defaults
A flush is triggered when `queue.length >= size`, or when `intervalMs` has elapsed since the last flush and the queue is non-empty.
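In pseudocode, the trigger check amounts to the following. This is a sketch of the documented behavior, not the library's source; `shouldFlush` and its parameters are illustrative names:

```typescript
// Illustrative trigger check. A flush fires on the size threshold, or on the
// timer when the queue has anything to send. intervalMs <= 0 disables the timer.
function shouldFlush(
  queue: unknown[],
  size: number,
  intervalMs: number,
  lastFlush: number,
  now: number,
): boolean {
  if (queue.length >= size) return true; // size threshold
  return intervalMs > 0 && now - lastFlush >= intervalMs && queue.length > 0; // timer
}
```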
Concurrent flushes coalesce
If a flush is in-flight when another trigger fires, the second trigger does not start a second network call. It joins the existing flush, and any events enqueued while the first flush is running are picked up by the next cycle. This means you never fan out parallel requests to the ingest endpoint, and back-pressure during a slow network is handled by queue buildup rather than a stampede.

Queue overflow — drop-oldest
When the queue grows past `maxQueueSize`, the oldest events are dropped to make room for new ones. This is deliberate:
- It bounds memory usage during a backend outage.
- It keeps the most recent signal (the stuff your users are hitting right now) over stale signal.
- Dropped events are reported via `onError` (see Resilience), so you know it's happening.
If dropping events is unacceptable, lower `size` and `intervalMs` so the queue empties faster, or wire a custom sink — `write` runs synchronously on flush and can apply its own back-pressure.
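The coalescing and drop-oldest behavior described above can be sketched in a minimal queue. `BatchQueue` and `sink` are illustrative names, not the library's API; only the two invariants matter — one in-flight request at a time, and a bounded queue that sheds its oldest entries:

```typescript
type LogEvent = { name: string };

// Minimal sketch of a coalescing, bounded batch queue.
class BatchQueue {
  private queue: LogEvent[] = [];
  private inFlight: Promise<void> | null = null;
  dropped = 0;

  constructor(
    private maxQueueSize: number,
    private sink: (batch: LogEvent[]) => Promise<void>,
  ) {}

  enqueue(e: LogEvent): void {
    if (this.queue.length >= this.maxQueueSize) {
      this.queue.shift(); // drop-oldest: keep the freshest signal
      this.dropped++;
    }
    this.queue.push(e);
  }

  flush(): Promise<void> {
    // Coalesce: a trigger that fires mid-flush joins the in-flight
    // request instead of starting a parallel one.
    if (this.inFlight) return this.inFlight;
    const batch = this.queue.splice(0);
    if (batch.length === 0) return Promise.resolve();
    this.inFlight = this.sink(batch).finally(() => {
      this.inFlight = null; // events enqueued meanwhile go in the next cycle
    });
    return this.inFlight;
  }
}
```

Note that `flush()` drains the queue before the network call starts, so events enqueued during a slow send accumulate for the next cycle rather than mutating the batch being sent.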
Tuning
| Scenario | Recommendation |
|---|---|
| Low-volume API (< 10 runs/sec) | Keep defaults. |
| Spiky traffic | Raise `size` to 50-100 so spikes send in fewer round-trips. |
| Latency-sensitive dashboards | Lower `intervalMs` to 1000 for near-real-time visibility. |
| Long-running batch jobs | Call `await logger.flush()` at job completion instead of relying on the timer. |
| Memory-constrained runtimes (Workers, Lambda) | Lower `maxQueueSize` to 100-200. |
Timer vs explicit flush
The periodic timer is the safety net, not the primary flush mechanism for short-lived processes. In these environments, call `await logger.flush(timeoutMs)` explicitly at the end of each request:
- Serverless functions (Lambda, Vercel Functions)
- Edge runtimes (Cloudflare Workers, Vercel Edge)
- Request-scoped browser contexts
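A request-scoped flush typically goes in a `finally` block so it runs even when the handler throws. The sketch below stubs the logger (the real one is assumed to expose `flush(timeoutMs)` as shown above; `pending` is an illustrative internal):

```typescript
// Stub standing in for a configured logger instance; only the shape of
// flush(timeoutMs) matters for this sketch.
const logger = {
  pending: Promise.resolve(), // resolves when the queue is drained
  info(_msg: string): void { /* enqueue an event (elided) */ },
  flush(timeoutMs: number): Promise<void> {
    // Resolve when the flush completes or the timeout elapses, whichever
    // comes first, so a slow ingest endpoint can't hold the response.
    const timeout = new Promise<void>((resolve) => setTimeout(resolve, timeoutMs));
    return Promise.race([this.pending, timeout]);
  },
};

export async function handler(_event: unknown) {
  try {
    logger.info("run.start");
    return { statusCode: 200 };
  } finally {
    await logger.flush(2000); // bounded wait before the runtime freezes
  }
}
```

Bounding the wait matters most on platforms like Lambda, where execution is frozen as soon as the handler's promise settles — anything still buffered is lost until the next invocation.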
Disabling the timer
Setting `intervalMs: 0` disables the periodic flush entirely. Events are then flushed only on the size threshold, on `flush()`, or on `shutdown()`. In a low-traffic process this can leave events buffered indefinitely, so call `flush()` explicitly.
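For example, a batch job might pair a disabled timer with one explicit flush at completion. Option names are the ones used on this page; the values are illustrative, not recommended defaults:

```typescript
// Options for a timer-less batch job: flush on size threshold during the
// run, then once explicitly at the end (and again via shutdown()).
const batchJobOptions = {
  intervalMs: 0,     // disable the periodic flush
  size: 100,         // rely on the size threshold while the job runs
  maxQueueSize: 500, // still bound memory if the backend is down
};
```

Pass these to your logger's constructor, then `await logger.flush()` once the job finishes.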
See also
- Shutdown — Flush + stop on process exit
- Resilience — Retry, circuit breaker, rate limits