process.beforeExit does not fire reliably — the container may be kept warm for minutes, then frozen without any lifecycle hook firing.
The rule: call await logger.flush(timeoutMs) before every return path.
AWS Lambda (Node handler)
finally is important — it runs on both success and thrown exceptions, so events for failed runs still get shipped.
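A minimal sketch of that shape. The inline logger object is a stand-in with the same flush(timeoutMs) signature as a createBoundaryLogger instance, and doWork is a placeholder for the real invocation body:

```typescript
// Stand-in with the same flush(timeoutMs) shape as a real boundary logger.
const logger = {
  flush: async (_timeoutMs: number): Promise<void> => {
    /* drain queued events within the timeout */
  },
};

// Placeholder for the actual work (e.g. an LLM call).
const doWork = async (event: unknown) => ({ ok: true, event });

export const handler = async (event: unknown) => {
  try {
    const result = await doWork(event);
    return { statusCode: 200, body: JSON.stringify(result) };
  } finally {
    // Runs on success AND on throw, so events from failed runs still ship.
    await logger.flush(1000);
  }
};
```

If doWork throws, the flush still runs before the exception propagates to the Lambda runtime.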
Lambda execution budget
Your function’s Timeout setting is the upper bound. If it’s 3 seconds and the LLM call takes 2.5s, you have 500ms left for the flush. Pass a realistic timeout:
flush(1000) is fine.
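If you want the timeout to respect whatever budget is actually left, you can derive it from the invocation context. getRemainingTimeInMillis() is the standard method on the Node Lambda context object; flushBudget and the 100ms safety margin below are illustrative choices, not part of the logger API:

```typescript
// Minimal slice of the Lambda context type this helper needs.
type LambdaContext = { getRemainingTimeInMillis: () => number };

// Illustrative helper: cap the flush timeout at the remaining budget,
// minus a small safety margin for returning the response.
const flushBudget = (
  ctx: LambdaContext,
  preferredMs = 1000,
  marginMs = 100,
): number =>
  Math.max(0, Math.min(preferredMs, ctx.getRemainingTimeInMillis() - marginMs));
```

With 5 seconds remaining it returns the preferred 1000; with 500ms remaining it returns 400, keeping the flush inside the budget.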
Vercel Functions
Same pattern. Vercel Functions run on Node (or Edge if you opt in; see Next.js).

Warm-invocation queue behavior
When a container is reused, the logger instance is reused too. Events that didn’t fit in one batch on the first invocation wait in the queue until:

- A subsequent invocation’s flush() drains them, or
- The 5s periodic timer fires (unreliable: the container has to be warm and the event loop unfrozen at the moment the timer fires).
Why not set flushOnExit: false?
You can, but there’s no downside to leaving it on. On Lambda, beforeExit rarely fires, so the hook is a no-op in practice. Leaving it enabled means the logger still drains correctly if you happen to run the same code in a long-running context (a script, a container).
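For reference, the option lives in the constructor config; only the field discussed on this page is shown, other fields omitted:

```typescript
const logger = createBoundaryLogger({
  // Leaving this on costs nothing on Lambda (the hook rarely fires)
  // and drains correctly in long-running processes.
  flushOnExit: true,
});
```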
Cold start cost
createBoundaryLogger is cheap — it builds a few objects and returns. You pay TCP connection setup on the first flush() (amortized across the batch). There’s no synchronous network call at construction time.
Keep the logger at module scope so it survives warm starts:
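A sketch of the pattern, with a stub factory standing in for createBoundaryLogger; the counter only exists to make the once-per-cold-start construction observable:

```typescript
// Stand-in factory; the real createBoundaryLogger is similarly cheap to
// construct (a few objects, no synchronous network call).
let coldStarts = 0;
const createLoggerStub = () => {
  coldStarts += 1;
  return { flush: async (_timeoutMs: number): Promise<void> => {} };
};

// Module scope: runs once per cold start, then survives warm invocations.
const logger = createLoggerStub();

export const handler = async () => {
  // ...invocation work...
  await logger.flush(1000);
  return coldStarts; // stays 1 however many times a warm container is invoked
};
```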
Multiple handlers in one Lambda
If one Lambda deploys multiple routes (Lambda Function URLs, API Gateway), share one module-scope logger across them, and have each route flush at the end of its own invocation.

See also
Next.js
App Router specifics
Cloudflare / Workers / Edge
ctx.waitUntil pattern for Edge runtimes