QUANTLIX BOUNDARY

Stop sensitive data from leaking to AI

We check every message before your AI sees it.

Prefer a no-login sandbox? Try the public sandbox.

"They're going after that control layer. This belongs in the toolbox."

He's describing Boundary, the layer that checks AI traffic before your model answers, and saying it belongs alongside the other tools your team already trusts in production.

Inayathulla Khan Lavani

Engineering Leader

Former teams at Palo Alto Networks · Tenable · RSA Security · Oracle

Read on LinkedIn →
Prebuilt policy packs · Visible policy outcomes · Traceable enforcement decisions · Self-host or managed

The risk shows up before your AI answers

Sensitive data, unsafe prompt patterns, and policy slip-ups can reach your AI stack before anyone has a clear safety checkpoint in front of it.

Quantlix puts that safety check in place first — before your AI generates an answer.

Protect prompts before your AI runs

Boundary gives teams a clear way to inspect prompts, apply policy, and see what happens before your AI receives the request.

  • Choose a prebuilt protection pack.
  • Run a sample or live prompt through Quantlix with your deployment's rules applied first.
  • See what Boundary detects when contextual packs are enabled for your setup.
  • Redact or block risky content per policy before your model answers.
  • Inspect the resulting trace and enforcement metadata.
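The flow above boils down to a single request body: the deployment id selects the pack and rules, and the prompt rides along in `input`. A minimal sketch, using the same field names as the curl example later on this page (`DEPLOYMENT_ID` is a placeholder, not a real id):

```python
import json

# Sketch of the request your app would send to POST /run.
# The deployment (configured in Quantlix) carries the protection
# pack and rules; the prompt itself travels in "input".
payload = {
    "deployment_id": "DEPLOYMENT_ID",  # placeholder
    "input": {"prompt": "Summarize this support ticket in 3 bullet points."},
}

body = json.dumps(payload)
print(body)
```

Everything policy-related lives on the deployment, so the calling code stays the same when you swap packs or tighten rules.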

Prebuilt policy packs

Start with ready-to-use packs for common prompt-protection needs.

Finance & payments

Detects PAN and IBAN patterns plus SAD-adjacent heuristics (CVV, expiry, track-data, and crypto-seed shapes), applies across multiple surfaces via apply_to, and offers optional PAN masking and vault tokenization. Not a substitute for PCI scope analysis or enterprise DLP.

PCI-minded patterns · Pre-AI check · Block / redact / flag

Pack id: finance-pack
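To illustrate the kind of heuristic this pack describes (a generic sketch, not Quantlix's actual implementation), a PAN-shaped digit run can be screened with a Luhn check and masked down to its last four digits:

```python
import re

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum; a cheap filter for PAN-shaped numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def mask_pans(text: str) -> str:
    """Replace Luhn-valid 13-19 digit runs with a masked form."""
    def repl(m: re.Match) -> str:
        digits = m.group(0)
        return "*" * (len(digits) - 4) + digits[-4:] if luhn_ok(digits) else digits
    return re.sub(r"\b\d{13,19}\b", repl, text)

masked = mask_pans("card 4111111111111111 exp 12/26")
print(masked)  # card ************1111 exp 12/26
```

Real PAN detection also has to handle separators, context, and false positives, which is why the pack pairs pattern matching with additional heuristics.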

Patient and medical info

Flags health-related phrasing and sensitive identifiers in prompts so clinical-style content does not slip through unnoticed.

Clinical context · Identifiers · Traceable outcomes

Pack id: healthcare-pack

EU / GDPR data

Reduces risk from names, addresses, and other sensitive categories in EU-facing traffic — with clear allow, block, or redact outcomes.

EU-oriented rules · Heuristic checks · Details in docs

Pack id: gdpr-data-protection

Schools & internal assistants

Tightens internal chat and campus-style flows so student-style IDs and risky patterns are caught before answers go out.

Internal tools · Pattern-based · Block or flag

Pack id: student-privacy

High-risk prompt guard

Extra-strict posture for prompts that try to bend your rules — schema and integrity checks before anything expensive runs.

Strict schema · Enforce mode · Fail-closed bias

Pack id: high-risk

Knowledge bases & chatbots

Keeps structured bots honest about shape: required fields present, no surprise extras, so RAG-style prompts stay predictable.

Schema-first · Structured bots · No PII pack bundled

Pack id: rag-default
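The "shape" check this pack describes, required fields present and no surprise extras, can be sketched as a minimal validation (a generic illustration, not the pack's actual rule format):

```python
def check_shape(payload: dict, required: set, allowed: set) -> list:
    """Return violations: missing required fields, then unexpected extras."""
    problems = []
    for field in sorted(required - payload.keys()):
        problems.append(f"missing required field: {field}")
    for field in sorted(payload.keys() - allowed):
        problems.append(f"unexpected field: {field}")
    return problems

# Hypothetical RAG prompt contract: "query" and "context" are required,
# "top_k" is the only optional extra.
required = {"query", "context"}
allowed = required | {"top_k"}

ok = check_shape({"query": "refund policy?", "context": "kb-42"}, required, allowed)
bad = check_shape({"query": "hi", "debug": True}, required, allowed)
print(ok)   # []
print(bad)  # ['missing required field: context', 'unexpected field: debug']
```

Rejecting unexpected fields is what keeps RAG prompts predictable: the bot can only be driven through the inputs the schema names.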

See what Boundary does

Every protected prompt follows a clear path through Boundary.

  1. Message received

     Your app sends the prompt to Quantlix first — not straight to the model.

  2. We check it against your rules

     Schema, budgets, guardrails, and any context packs you turned on for that deployment.

  3. Risky or safe?

     We decide whether the prompt is okay to continue, or needs a closer look.

  4. We allow, block, warn, or redact

     Whatever outcomes you configured for that situation — before an answer is generated.

  5. We log what happened

     So you can review each step later — not just the final reply.
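The steps above amount to a small decision function: evaluate rules, pick an outcome, record a trace. A toy sketch (the outcome names follow this page; the rule shapes are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str               # "allow" | "warn" | "redact" | "block"
    trace: list = field(default_factory=list)

def evaluate(prompt: str) -> Decision:
    trace = []
    lowered = prompt.lower()
    # Step 2: check the prompt against (illustrative) rules.
    if "ssn" in lowered:
        trace.append("rule:pii-ssn matched")
        return Decision("block", trace)      # step 4: block
    if "password" in lowered:
        trace.append("rule:credential matched")
        return Decision("redact", trace)     # step 4: redact
    # Steps 3-5: nothing risky, so allow and log.
    trace.append("no rules matched")
    return Decision("allow", trace)

print(evaluate("what is my ssn").outcome)      # block
print(evaluate("summarize this doc").outcome)  # allow
```

The important property is that the trace is built alongside the decision, so the log explains each outcome rather than just recording it.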

Production scope today

Before rollout, confirm current enforcement scope and roadmap boundaries.

Production scope snapshot

  • Enforced today: Policy evaluation before inference on POST /run, OpenAI-compatible /v1/* gateway routes (for example chat, messages, embeddings), and product surfaces wired to those paths.
  • Configurable with caveats: Contextual pack-based sensitive-pattern handling and redaction.
  • On the roadmap: Broader ingress parity and additional runtime policy outcomes.

Full capability-by-capability details: Production scope today.

Last reviewed: May 2026.

See Boundary live

Try Boundary in your browser, or connect it from your own app with the same integration paths you'll use in production.

Open Playground · Governed-flow example

Technical

In-product flows use deployments through POST /run. From your services, call POST /run or OpenAI-compatible /v1/*, and send X-Quantlix-Deployment-Id on gateway requests so policy runs first.

Or call it from your code

curl -X POST "https://api.quantlix.ai/run" \
  -H "X-API-Key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"deployment_id":"DEPLOYMENT_ID","input":{"prompt":"Summarize this support ticket in 3 bullet points."}}'

Send a prompt to POST /run; policy evaluates before model execution, then outcome and trace metadata are returned.
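The same call from Python, using only the standard library. The request is built but not sent here; uncomment the last lines once you have a real API key and deployment id:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"          # placeholder
DEPLOYMENT_ID = "DEPLOYMENT_ID"   # placeholder

payload = {
    "deployment_id": DEPLOYMENT_ID,
    "input": {"prompt": "Summarize this support ticket in 3 bullet points."},
}

req = urllib.request.Request(
    "https://api.quantlix.ai/run",
    data=json.dumps(payload).encode("utf-8"),
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    method="POST",
)

# with urllib.request.urlopen(req) as resp:   # requires real credentials
#     result = json.loads(resp.read())        # outcome + trace metadata
print(req.full_url, req.get_method())
```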

Boundary enforcement details

Visible control, not blind trust

Boundary helps teams move from "hope nothing sensitive gets sent" to visible prompt protection before your AI answers.

  • Clear policy outcomes on each request
  • Request-level trace visibility
  • Safer prompt handling at the edge of your AI traffic
  • Easier rollout confidence for production AI

Built for teams putting AI into real workflows

  • Internal AI assistants
  • Customer support AI
  • Privacy-sensitive prompt flows
  • Early agent and automation paths
  • Model APIs handling user or employee data

Start with Boundary. Scale when you're ready.

Boundary gives teams a strong first safety layer for prompts. As usage grows, you can widen policy coverage, add more visibility, and turn on the rest of the Quantlix platform when you are ready.

Common questions

Can I drop Boundary in front of my existing OpenAI, Azure OpenAI, or Anthropic SDK calls?

Yes. Point your SDK's base_url at Quantlix and call the same upstream paths (for example /v1/chat/completions, /v1/messages, or /v1/embeddings). Send X-Quantlix-Deployment-Id on each request so the deployment's policy runs before provider inference. For teams that prefer the structured contract, POST /run is still fully supported. Start with the live entry points or the boundary enforcement guide.

What does Boundary integration actually look like today?

Gateway path: OpenAI- or Anthropic-style HTTP requests to Quantlix with your usual payload shape, plus X-Quantlix-Deployment-Id and your API key. Policy evaluates before the provider call. Run path: an HTTP request to POST /run with a deployment_id and input payload; the response includes model output plus trace-linked enforcement metadata. Use the curl example for /run and the boundary enforcement docs for configuration detail.
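A gateway-path sketch of the above: the body is your usual OpenAI-style chat payload, unchanged, with the X-Quantlix-Deployment-Id header added so the deployment's policy runs before the provider call. Model name, key, and deployment id are placeholders:

```python
import json
import urllib.request

headers = {
    "X-API-Key": "YOUR_API_KEY",                  # placeholder
    "X-Quantlix-Deployment-Id": "DEPLOYMENT_ID",  # selects the policy bundle
    "Content-Type": "application/json",
}

# Usual OpenAI-style chat payload, unchanged.
body = {
    "model": "gpt-4o-mini",  # placeholder model name
    "messages": [{"role": "user", "content": "Summarize this ticket."}],
}

req = urllib.request.Request(
    "https://api.quantlix.ai/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers=headers,
    method="POST",
)
print(req.full_url)
```

If you are using a provider SDK instead of raw HTTP, pointing its base_url at Quantlix and adding the same two headers has the same effect.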

Are /run and the OpenAI-compatible gateway both stable integrations?

Yes — both surfaces use the same deployment policy bundle evaluated before inference, and contracts are evolved deliberately with versioning guidance when payloads change.

Quantlix Boundary

Quantlix Boundary pricing

Starter

Start here

Start with Boundary on Starter

Protect prompts before the model runs—preset packs, policy outcomes, and traces on POST /run.

  • 1 workspace
  • 1 protected flow
  • 1 seat
  • 500 protected runs per month
  • 7 days trace retention
  • Preset protection packs
  • Prompt test console
  • Visible policy outcomes
  • Basic trace visibility

Team

For teams using Boundary regularly

Expand from individual testing to repeatable team usage.

  • 3 workspaces
  • 3 protected flows
  • 5 seats
  • 10,000 protected runs per month
  • 30 days trace retention
  • Preset protection packs
  • Prompt test console
  • Visible policy outcomes
  • Expanded trace visibility
  • Shared team access
  • Light rollout support

Billed in Stripe on subscription; limits match the Team plan above.

From $499/month

Enterprise

For broader rollout and advanced control needs

Use Boundary as part of a larger production AI control strategy.

  • Custom workspaces
  • Custom protected flows
  • Custom seats
  • Custom run volume
  • Custom retention
  • Advanced policy coverage (agreed scope)
  • Broader deployment support
  • Implementation support
  • Custom rollout guidance

Custom pricing

Need Enterprise for regulated rollout, custom policies, or implementation support? Contact sales.

Put a control boundary in front of AI

Use Quantlix Boundary to protect prompts before your AI answers and bring visible policy enforcement into production usage.

Quantlix is the platform for safer AI in production. Boundary is the product most teams start with — prompt checks, traces, and policy outcomes you can show to security and compliance — before you turn on the rest of the stack.

Technical surface detail: Runtime protection (docs).

Explore the full platform