Serverless Node runtimes freeze the event loop between invocations. process.beforeExit does not fire reliably — the container may be kept warm for minutes, then frozen without any lifecycle hook firing. The rule: call await logger.flush(timeoutMs) before every return path.

AWS Lambda (Node handler)

import { leadContract, logger } from "./boundary";

export const handler = async (event: LambdaEvent) => {
  try {
    const result = await leadContract.accept((attempt) => runLLM(attempt));
    return {
      statusCode: result.ok ? 200 : 422,
      body: JSON.stringify(result.ok ? result.data : { error: result.error.message }),
    };
  } finally {
    await logger?.flush(1000);
  }
};
The finally block is important: it runs on both success and thrown exceptions, so events from failed runs are still shipped.

Lambda execution budget

Your function’s Timeout setting is the upper bound. If it’s 3 seconds and the LLM call takes 2.5s, you have 500ms left for the flush. Pass a realistic timeout:
await logger?.flush(500);  // don't exceed the function's remaining budget
For long Lambda invocations (15 min max), the default flush(1000) is fine.
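If you'd rather not hardcode the number, you can size the timeout from Lambda's own clock. A minimal sketch: flushBudget is a hypothetical helper, context.getRemainingTimeInMillis() is the real Lambda context method, and the 200 ms safety margin and 1000 ms cap are illustrative choices.

```typescript
// Hypothetical helper: size the flush timeout from the invocation's
// remaining budget, leaving a safety margin for returning the response.
type LambdaContext = { getRemainingTimeInMillis(): number };

export function flushBudget(
  context: LambdaContext,
  marginMs = 200, // reserve time to serialize and return the response
  capMs = 1000,   // never wait longer than this, even on long timeouts
): number {
  const remaining = context.getRemainingTimeInMillis() - marginMs;
  return Math.max(0, Math.min(remaining, capMs));
}

// Usage in the handler (the context argument is standard in Node handlers):
//   export const handler = async (event: LambdaEvent, context: LambdaContext) => {
//     try { /* ... */ } finally { await logger?.flush(flushBudget(context)); }
//   };
```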

Vercel Functions

Same pattern. Vercel Functions run on Node (or Edge if you opt in — see Next.js).
// api/score.ts
import { leadContract, logger } from "@/lib/boundary";

export default async function handler(req: Request) {
  try {
    const result = await leadContract.accept((attempt) => runLLM(attempt));
    return Response.json(result.ok ? result.data : { error: result.error.message }, {
      status: result.ok ? 200 : 422,
    });
  } finally {
    await logger?.flush(1000);
  }
}

Warm-invocation queue behavior

When a container is reused, the logger instance is reused too. Events that didn’t fit in one batch on the first invocation wait in the queue until:
  • A subsequent invocation’s flush() drains them, or
  • The 5s periodic timer fires (if the container happens to be warm and the event loop happens to be unfrozen when the timer fires — unreliable).
Always flush explicitly. Don’t rely on the timer for correctness.
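The mechanics can be illustrated with a toy queue. BatchQueue and maxBatch are invented names for illustration, not the library's API, and this sketch omits the periodic timer discussed above:

```typescript
// Toy model of the warm-container queue: one module-scope instance,
// each flush() ships at most one batch, leftovers wait for the next flush.
class BatchQueue<T> {
  private queue: T[] = [];
  constructor(private maxBatch: number) {}

  enqueue(event: T): void {
    this.queue.push(event);
  }

  // Called at the end of an invocation. Anything beyond maxBatch stays
  // queued until a later invocation on the same warm container flushes.
  flush(): T[] {
    return this.queue.splice(0, this.maxBatch);
  }

  get pending(): number {
    return this.queue.length;
  }
}
```

On a cold start the queue is empty; on a warm start it may still hold events from the previous invocation, which is why every handler must flush on its own return path rather than assume someone else will.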

Why not set flushOnExit: false?

You can, but there’s no downside to leaving it on. On Lambda, beforeExit rarely fires, so the hook is a no-op in practice. Leaving it enabled means the logger still drains correctly if you happen to run the same code in a long-running context (a script, a container).
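In config terms (flushOnExit and createBoundaryLogger are the names used above; treating enabled as the default is an assumption based on "leaving it on"):

```typescript
// Both behave identically on Lambda, where beforeExit rarely fires.
// The default also covers long-running contexts (scripts, containers).
const logger = createBoundaryLogger({ apiKey });                     // default: drains on beforeExit where it fires
const noHook = createBoundaryLogger({ apiKey, flushOnExit: false }); // explicit opt-out; buys nothing on Lambda
```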

Cold start cost

createBoundaryLogger is cheap — it builds a few objects and returns. You pay TCP connection setup on the first flush() (amortized across the batch). There’s no synchronous network call at construction time. Keep the logger at module scope so it survives warm starts:
// boundary.ts — evaluated once per container
export const logger = createBoundaryLogger({ apiKey });
// handler.ts
import { logger } from "./boundary";   // same instance across warm invocations

Multiple handlers in one Lambda

If one Lambda deploys multiple routes (Lambda Function URLs, API Gateway), share one logger across them. Each route flushes at the end of its own invocation:
const logger = createBoundaryLogger({ apiKey });
const leadContract = defineContract({ name: "lead", schema, logger });
const invoiceContract = defineContract({ name: "invoice", schema: invSchema, logger });

export const leadHandler = async (e) => {
  try { return await handleLead(e); }
  finally { await logger?.flush(1000); }
};

export const invoiceHandler = async (e) => {
  try { return await handleInvoice(e); }
  finally { await logger?.flush(1000); }
};
Both handlers share the queue. A leftover event from a lead invocation can be flushed by a subsequent invoice invocation if both routes are hitting the same warm container.

See also

Next.js

App Router specifics

Cloudflare / Workers / Edge

ctx.waitUntil pattern for Edge runtimes