Boundary doesn’t sit between you and your model. It wraps your code, not your network.
## Local-first execution
By default, Boundary runs entirely inside your application:

- Your LLM calls go directly to your provider (OpenAI, Anthropic, etc.)
- Boundary does not proxy or intercept requests
- No data is sent to Boundary servers
- All validation, repair, and retry happens in-process
`@boundary/contract` has zero telemetry. No analytics, no tracking, no phone-home. It’s a pure validation library.
## Optional observability (explicit opt-in)
If you want production visibility — acceptance rates, failure patterns, alerts — you add `@boundary/sdk`. This is a separate package. It is never installed or activated automatically.
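The opt-in pattern might look like the following sketch. `createLogger`, the event shape, and the idea of passing the logger in by hand are hypothetical stand-ins, not the documented `@boundary/sdk` surface; what it shows is that nothing is emitted unless you construct and wire up a logger yourself.

```typescript
interface BoundaryEvent {
  contract: string;
  attempts: number;
  success: boolean;
}

interface Logger {
  log(event: BoundaryEvent): void;
}

// Nothing logs unless you build a logger and hand it over explicitly.
function createLogger(sink: (e: BoundaryEvent) => void): Logger {
  return { log: sink };
}

const events: BoundaryEvent[] = [];
const logger = createLogger((e) => events.push(e));

// Somewhere in your accept() call you would pass the logger explicitly;
// omit it and no event is ever produced.
logger.log({ contract: "Invoice", attempts: 2, success: true });
```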
## What “metadata” means
When `capture.metadata` is `true` (the default), Boundary sends:
- Contract name
- Attempt count
- Duration (ms)
- Success or failure
- Failure category (e.g., `INVARIANT_ERROR`)
- Rule names that failed

It does not send any of the following unless you explicitly set `capture.inputs: true` or `capture.outputs: true`:

- Raw LLM prompts
- Raw LLM outputs
- Your application data
- User inputs
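One way to picture the capture policy is as plain data with safe defaults. The shape below is an assumption for illustration, not Boundary's documented config; it shows the default posture — metadata on, raw inputs and outputs off until you flip them yourself.

```typescript
interface CapturePolicy {
  metadata: boolean; // contract name, attempt count, duration, outcome
  inputs: boolean;   // raw prompts and application data
  outputs: boolean;  // raw LLM outputs
}

// Default posture: metadata only.
const defaults: CapturePolicy = { metadata: true, inputs: false, outputs: false };

// Opting in is a deliberate, visible change in your own code.
const verbose: CapturePolicy = { ...defaults, inputs: true, outputs: true };
```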
## Redaction
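As an illustration of the idea (a generic helper, not Boundary's documented API), redaction can be a pure function applied to an event's payload before it is written anywhere:

```typescript
// Recursively replace named fields with a placeholder before transmission.
function redact<T>(value: T, fields: string[]): T {
  if (Array.isArray(value)) {
    return value.map((v) => redact(v, fields)) as T;
  }
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      out[k] = fields.includes(k) ? "[REDACTED]" : redact(v, fields);
    }
    return out as T;
  }
  return value; // primitives pass through unchanged
}
```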
Even when you opt into capturing inputs or outputs, you can redact sensitive fields before anything is transmitted.

## Full visibility
Boundary exposes every step of the execution loop to your application.

## No hidden behavior
Boundary does not:

| Boundary never… | Why |
| --- | --- |
| Modifies your prompts silently | `attempt.instructions` is generated from your schema — visible and deterministic |
| Calls additional models or services | Only your `RunFn` makes LLM calls |
| Persists state between runs | Each `.accept()` call is independent |
| Auto-enables telemetry | You must explicitly create and pass a logger |
## Custom transport
You don’t have to use Boundary’s cloud to get observability. The `write` option lets you send events to your own infrastructure:
## Control stays with you
You define:

- The schema — what structure the output must have
- The rules — what “correct” means for your domain
- The retry policy — how many attempts, what backoff
- The capture policy — what data is logged and where
- The redaction rules — what is stripped before transmission
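Taken together, those knobs can be pictured as one plain config object. Every field name below is hypothetical, not Boundary's documented options; the point is that each policy is data you define, not hidden behavior.

```typescript
const acceptConfig = {
  schema: { type: "object", required: ["total"] },            // structure
  rules: [{ name: "total_non_negative" }],                    // domain correctness
  retry: { maxAttempts: 3, backoffMs: 250 },                  // retry policy
  capture: { metadata: true, inputs: false, outputs: false }, // capture policy
  redact: ["ssn", "email"],                                   // stripped before transmission
};
```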