Edge runtimes (Cloudflare Workers, Vercel Edge, Deno Deploy) freeze the isolate between invocations. There is no process.beforeExit, no SIGTERM — no reliable lifecycle hook. The pattern: use the runtime’s keep-alive primitive to await the flush after the response returns.

Cloudflare Workers

import { leadContract, logger } from "./boundary";

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const result = await leadContract.accept((attempt) => runLLM(attempt));
    const response = result.ok
      ? Response.json(result.data)
      : Response.json({ error: result.error.message }, { status: 422 });

    ctx.waitUntil(logger?.flush(1000) ?? Promise.resolve());
    return response;
  },
};
ctx.waitUntil keeps the isolate alive until the promise resolves, without blocking the client’s response. The user sees the response immediately; the flush finishes in the background up to the CPU/wall-clock limits of your Worker.
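One caveat worth handling: a promise that rejects inside waitUntil surfaces as an exception in the Worker's logs. A small wrapper keeps a failed flush quiet; this is a sketch, and safeFlush plus the Flushable shape are invented names for illustration, not SDK API:

```typescript
// Hypothetical helper: make sure a failed flush never becomes an
// unhandled rejection inside waitUntil. The flush() signature follows
// the snippets above; nothing here is real @withboundary/sdk API.
type Flushable = { flush(timeoutMs?: number): Promise<void> } | undefined;

export function safeFlush(logger: Flushable, timeoutMs = 1000): Promise<void> {
  // A rejected promise handed to ctx.waitUntil shows up as an error in
  // the Workers logs; swallowing it keeps telemetry failures silent.
  return (logger?.flush(timeoutMs) ?? Promise.resolve()).catch(() => {});
}

// Usage inside fetch: ctx.waitUntil(safeFlush(logger));
```

Whether you want telemetry failures silent or logged is a policy choice; the point is to decide it in one place rather than at every call site.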

Vercel Edge Functions

Vercel Edge exposes the same waitUntil via @vercel/functions:
// app/api/score/route.ts
export const runtime = "edge";

import { waitUntil } from "@vercel/functions";
import { leadContract, logger } from "@/lib/boundary";

export async function POST(request: Request) {
  const result = await leadContract.accept((attempt) => runLLM(attempt));
  const response = result.ok
    ? Response.json(result.data)
    : Response.json({ error: result.error.message }, { status: 422 });

  waitUntil(logger?.flush(1000) ?? Promise.resolve());
  return response;
}
When @vercel/functions isn’t available (older Next versions), you can await the flush before returning — the user sees the extra latency, but events still ship:
await logger?.flush(1000);
return response;
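If you ship one handler to both environments, a tiny helper can pick between the two paths at runtime. A sketch; scheduleFlush and the WaitUntil type are hypothetical names, and the flush shape follows the snippets above:

```typescript
// Hypothetical portability helper: use waitUntil when the platform
// provides one, otherwise fall back to awaiting the flush inline.
type WaitUntil = (p: Promise<unknown>) => void;

export async function scheduleFlush(
  flush: () => Promise<void>,
  waitUntil?: WaitUntil,
): Promise<void> {
  if (waitUntil) {
    // Background path: response latency is unaffected.
    waitUntil(flush().catch(() => {}));
    return;
  }
  // Foreground path: the caller eats the latency, but events still ship.
  await flush().catch(() => {});
}
```

Call it as `scheduleFlush(() => logger.flush(1000), ctx?.waitUntil?.bind(ctx))` (or with the imported `waitUntil` on Vercel) and the same route code works on either platform.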

Initialization

Module-scope initialization runs once per isolate. A single logger is reused across invocations on the same isolate — treat it the same way you’d treat a connection pool.
// boundary.ts
import { createBoundaryLogger } from "@withboundary/sdk";

export const logger = createBoundaryLogger({
  apiKey: globalThis.BOUNDARY_API_KEY,  // injected via env
  batch: {
    size: 10,
    maxQueueSize: 100,     // tighter memory budget than Node
    intervalMs: 0,         // disable timer — waitUntil handles drain
  },
});
Disabling the timer (intervalMs: 0) is recommended on Edge. The isolate freezes between invocations, so the timer is unreliable anyway.

Why createBoundaryLogger works without process.env

The SDK reads BOUNDARY_API_KEY from process.env when it’s defined. On Cloudflare Workers, process.env doesn’t exist — you have to pass the key explicitly:
// Don't do this on Workers — `process` itself is undefined
const logger = createBoundaryLogger({
  apiKey: process.env.BOUNDARY_API_KEY,  // throws a ReferenceError
});

// Do this — pass the env the Worker gave you
export default {
  async fetch(request, env, ctx) {
    const logger = createBoundaryLogger({ apiKey: env.BOUNDARY_API_KEY });
    // ...
  },
};
If you want the logger at module scope (faster, reuses transport state), stash the env on globalThis from the first fetch invocation. Or use wrangler.toml bindings and pass through a factory:
let _logger: BoundaryLogger | null | undefined;

export function getLogger(env: Env) {
  if (_logger === undefined) {
    _logger = createBoundaryLogger({ apiKey: env.BOUNDARY_API_KEY });
  }
  return _logger;
}
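The globalThis variant mentioned above can be sketched like this; the __boundaryEnv property name and the EnvLike shape are arbitrary choices for illustration, not SDK conventions:

```typescript
// Sketch: capture the env on globalThis from the first invocation so
// module-scope code can read it lazily. __boundaryEnv is a made-up name.
type EnvLike = { BOUNDARY_API_KEY: string };

const g = globalThis as { __boundaryEnv?: EnvLike };

export function captureEnv(env: EnvLike): void {
  // First invocation wins; later invocations on the same isolate no-op.
  g.__boundaryEnv ??= env;
}

export function capturedApiKey(): string | undefined {
  return g.__boundaryEnv?.BOUNDARY_API_KEY;
}
```

Call captureEnv(env) at the top of fetch; anything constructed at module scope can then resolve the key lazily via capturedApiKey().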

Runtime detection on the wire

Each event’s sdk.runtime field will be undefined on Cloudflare Workers — there’s no process.versions.node and no navigator.userAgent in the isolate. That’s expected.
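If you want a runtime label anyway, a manual feature test works; this is a sketch, and the returned strings are arbitrary labels rather than values the SDK writes into sdk.runtime:

```typescript
// Sketch: detect the runtime by probing globals rather than trusting
// any single one to exist. The label strings are made up.
export function detectRuntime(): "node" | "deno" | "unknown" {
  const g = globalThis as {
    Deno?: { version?: unknown };
    process?: { versions?: { node?: string } };
  };
  if (g.Deno?.version !== undefined) return "deno";
  if (typeof g.process?.versions?.node === "string") return "node";
  return "unknown"; // e.g. a Cloudflare Workers isolate
}
```

You could attach the label as event metadata if distinguishing edge traffic from Node traffic matters to you.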

CPU / wall-clock budgets

Cloudflare Workers: up to 30s of wall-clock, 50ms-30s of CPU time depending on plan. ctx.waitUntil runs within those budgets — a slow flush still counts. If your flush commonly exceeds the budget, lower the batch size:
batch: {
  size: 5,            // smaller batches flush faster
  maxQueueSize: 50,
}

Durable Objects

Durable Objects have the same runtime shape as Workers. Same pattern — initialize in the class, flush via ctx.waitUntil in each method that could queue an event.
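A sketch of that shape, with structural types standing in for the Cloudflare ones; LeadScorer and its wiring are invented for illustration:

```typescript
// Sketch of the Durable Object variant. The waitUntil shape mirrors
// ExecutionContext; the class and its body are hypothetical.
type WaitUntilCtx = { waitUntil(p: Promise<unknown>): void };
type FlushableLogger = { flush(timeoutMs?: number): Promise<void> };

export class LeadScorer {
  constructor(
    private ctx: WaitUntilCtx,
    private logger: FlushableLogger,
  ) {}

  async fetch(request: Request): Promise<Response> {
    const body = await request.text();
    // ... score the lead here, queueing events on this.logger ...
    const response = new Response(body);
    // Same pattern as plain Workers: flush in the background.
    this.ctx.waitUntil(this.logger.flush(1000).catch(() => {}));
    return response;
  }
}
```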

Deno Deploy

The pattern is the same — use Deno.serve and call await logger.flush() before returning. Deno Deploy does expose queueMicrotask and timers, but it freezes between invocations like other edge runtimes.
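A sketch under those assumptions; makeHandler, LoggerLike, and the handler body are illustrative, not SDK API:

```typescript
// Sketch of the Deno Deploy variant: build a handler that awaits the
// flush before returning. LoggerLike follows the earlier snippets.
type LoggerLike = { flush(timeoutMs?: number): Promise<void> } | undefined;

export function makeHandler(logger: LoggerLike) {
  return async (_req: Request): Promise<Response> => {
    // ... run the contract here, queueing events on the logger ...
    const response = Response.json({ ok: true });
    // Await the flush inline; errors are swallowed so a telemetry
    // outage never turns a good response into a 500.
    await logger?.flush(1000).catch(() => {});
    return response;
  };
}

// Deno.serve(makeHandler(logger));
```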

See also

Vercel / Lambda: serverless Node runtime pattern

Batching: tune size and maxQueueSize for Edge