Boundary works alongside OpenAI’s structured outputs: OpenAI guarantees the output is valid JSON that matches your schema; Boundary guarantees the values inside it are correct.
Install
```bash
npm install @boundary/contract zod openai
```
Basic integration
```ts
import { z } from "zod";
import { defineContract } from "@boundary/contract";
import OpenAI from "openai";

const openai = new OpenAI();

// 1. Define your schema
const schema = z.object({
  tier: z.enum(["hot", "warm", "cold"]),
  score: z.number().min(0).max(100),
  reason: z.string(),
});

// 2. Define your contract (schema + rules)
const contract = defineContract({
  schema,
  rules: [
    (d) => (d.tier === "hot" ? d.score > 70 : true),
    (d) => d.reason.length > 0 || "reason cannot be empty",
  ],
});

// 3. Run with OpenAI inside the contract
const result = await contract.accept(async (attempt) => {
  const res = await openai.responses.create({
    model: "gpt-4.1",
    input: [
      {
        role: "user",
        content:
          "Score this lead: signed up 2 days ago, visited pricing page 3 times, opened 1 email",
      },
      ...attempt.repairs,
    ],
    text: {
      format: {
        type: "json_schema",
        name: "lead_score",
        strict: true, // enforce exact schema adherence
        schema: {
          type: "object",
          properties: {
            tier: { type: "string", enum: ["hot", "warm", "cold"] },
            // the 0–100 range is checked by the zod schema via the contract
            score: { type: "number" },
            reason: { type: "string" },
          },
          required: ["tier", "score", "reason"],
          additionalProperties: false, // required when strict is enabled
        },
      },
    },
  });
  return res.output_text;
});

if (result.ok) {
  console.log("Accepted:", result.data);
} else {
  console.error("Failed:", result.error.message);
}
```
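The rule semantics above are worth spelling out: a rule returns `true` to pass, `false` to fail, or a string that becomes the violation message. Below is a minimal, illustrative re-implementation of that checking loop (the `Rule` type and `collectViolations` helper are sketches, not part of Boundary's API) so you can sanity-check your rules in isolation:

```typescript
// Illustrative only: the assumed rule semantics. A rule returns
// true (pass), false (fail with a generic message), or a string
// (fail with that string as the violation message).
type Rule<T> = (d: T) => boolean | string;

function collectViolations<T>(data: T, rules: Rule<T>[]): string[] {
  const violations: string[] = [];
  for (let i = 0; i < rules.length; i++) {
    const result = rules[i](data);
    if (result === true) continue;
    violations.push(typeof result === "string" ? result : `rule #${i} failed`);
  }
  return violations;
}

type Lead = { tier: "hot" | "warm" | "cold"; score: number; reason: string };

const rules: Rule<Lead>[] = [
  (d) => (d.tier === "hot" ? d.score > 70 : true),
  (d) => d.reason.length > 0 || "reason cannot be empty",
];

console.log(collectViolations({ tier: "hot", score: 25, reason: "" }, rules));
// → ["rule #0 failed", "reason cannot be empty"]
console.log(collectViolations({ tier: "cold", score: 25, reason: "Low intent" }, rules));
// → []
```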
How repair context works
On the first attempt, attempt.repairs is empty. If the output fails a rule, Boundary generates a repair message and adds it to attempt.repairs for the next try.
```ts
// Attempt 1: repairs = []
// LLM returns: { tier: "hot", score: 25, reason: "" }
// Fails: "hot leads require score > 70", "reason cannot be empty"

// Attempt 2: repairs = [{ role: "user", content: "..." }]
// The repair message contains the exact violations
// LLM returns: { tier: "cold", score: 25, reason: "Low intent" }
// Passes all rules → ACCEPTED
```
The repair messages are regular Message objects with role and content. They spread directly into the input array.
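To make that concrete, here is a hypothetical sketch of how such a message could be assembled from a list of violations. The `buildRepairMessage` helper and its exact wording are assumptions for illustration; Boundary's actual phrasing may differ:

```typescript
// Hypothetical sketch: assembling a repair message from rule violations.
// Not Boundary's actual implementation; wording is illustrative.
function buildRepairMessage(violations: string[]) {
  return {
    role: "user" as const,
    content:
      "Your previous output was rejected. Fix these violations and respond again:\n" +
      violations.map((v) => `- ${v}`).join("\n"),
  };
}

const repair = buildRepairMessage([
  "hot leads require score > 70",
  "reason cannot be empty",
]);
console.log(repair.role); // → user
```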
Using the Chat Completions API
If you’re using the older Chat Completions API instead of Responses:
```ts
const result = await contract.accept(async (attempt) => {
  const res = await openai.chat.completions.create({
    model: "gpt-4.1",
    messages: [
      // Note: json_object mode requires that some message mention "JSON"
      { role: "system", content: attempt.instructions },
      { role: "user", content: "Score this lead..." },
      ...attempt.repairs.map((r) => ({
        role: r.role as "user" | "assistant",
        content: r.content,
      })),
    ],
    response_format: { type: "json_object" },
  });
  return res.choices[0]?.message?.content ?? null;
});
```
With Chat Completions, you extract the content string from the first choice yourself; the Responses API exposes `output_text` directly.
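If you target both APIs, a small guard keeps that extraction in one place. This helper (not part of Boundary) normalizes a Chat Completions-shaped response to string-or-null, so the contract callback can treat a missing message as a failed attempt:

```typescript
// Normalize a Chat Completions-style response to string | null.
// The ChatLike type is a minimal structural stand-in for illustration.
type ChatLike = { choices?: { message?: { content?: string | null } }[] };

function extractContent(res: ChatLike): string | null {
  return res.choices?.[0]?.message?.content ?? null;
}

console.log(extractContent({ choices: [{ message: { content: '{"tier":"cold"}' } }] }));
// → {"tier":"cold"}
console.log(extractContent({ choices: [] })); // → null
```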
Why use both OpenAI structured outputs and Boundary?
OpenAI’s structured outputs guarantee:
- Valid JSON output
- Matches a JSON Schema
Boundary adds:
- Cross-field rules — “hot tier requires score > 70”
- Auto repair — model fixes its own mistakes
- Deterministic gate — accepted or rejected, no silent corruption
- Full trace — every attempt recorded
They complement each other. Use OpenAI’s structured outputs for structure, Boundary for correctness.
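To see the split concretely: the payload below satisfies the JSON Schema exactly (right keys, right types, valid enum value), so structured outputs would accept it, yet it violates both contract rules. The checks are inlined here for illustration rather than going through Boundary:

```typescript
// This object matches the JSON Schema: right keys, right types.
const output = { tier: "hot" as const, score: 25, reason: "" };

// Schema-level checks (what structured outputs guarantee) all pass:
const schemaOk =
  ["hot", "warm", "cold"].includes(output.tier) &&
  typeof output.score === "number" &&
  typeof output.reason === "string";

// Value-level checks (what Boundary's rules add) do not:
const hotScoreOk = output.tier === "hot" ? output.score > 70 : true;
const reasonOk = output.reason.length > 0;

console.log(schemaOk); // → true
console.log(hotScoreOk, reasonOk); // → false false
```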