QUANTLIX BOUNDARY
We check every message before your AI sees it.
Prefer a no-login sandbox? Try the public sandbox.
He’s describing the layer that checks AI traffic before your model answers — and saying it belongs next to the other tools your team already trusts in production.
“They're going after that control layer. This belongs in the toolbox.”
Inayathulla Khan Lavani
Engineering Leader
Former teams at Palo Alto Networks · Tenable · RSA Security · Oracle
Read on LinkedIn →
Sensitive data, unsafe prompt patterns, and policy slip-ups can reach your AI stack before anyone has a clear safety checkpoint in front of it.
Quantlix puts that safety check in place first — before your AI generates an answer.
Boundary gives teams a clear way to inspect prompts, apply policy, and see what happens before your AI receives the request.
Start with ready-to-use packs for common prompt-protection needs.
PAN/IBAN plus SAD-adjacent heuristics (CVV/expiry/track/crypto seed shapes), multi-surface apply_to, optional PAN masking and vault tokenization — not a substitute for PCI scope analysis or enterprise DLP.
Pack id: finance-pack
Flags health-related phrasing and sensitive identifiers in prompts so clinical-style content does not slip through unnoticed.
Pack id: healthcare-pack
Reduces risk from names, addresses, and other sensitive categories in EU-facing traffic — with clear allow, block, or redact outcomes.
Pack id: gdpr-data-protection
Tightens internal chat and campus-style flows so student-style IDs and risky patterns are caught before answers go out.
Pack id: student-privacy
Extra-strict posture for prompts that try to bend your rules — schema and integrity checks before anything expensive runs.
Pack id: high-risk
Keeps structured bots honest about shape: required fields present, no surprise extras, so RAG-style prompts stay predictable.
Pack id: rag-default
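As a rough illustration of the kind of heuristic the finance pack describes, a Luhn checksum plus PAN masking can be sketched in a few lines of Python. This is an illustrative approximation only, not the pack's actual implementation:

```python
def luhn_valid(pan: str) -> bool:
    """Luhn checksum: double every second digit from the right."""
    digits = [int(c) for c in pan if c.isdigit()]
    if len(digits) < 12:  # too short to be a plausible PAN
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


def mask_pan(pan: str) -> str:
    """Keep first 6 (BIN) and last 4 digits, mask everything between."""
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]


# "4111111111111111" is a well-known Visa test number
print(luhn_valid("4111111111111111"))  # True
print(mask_pan("4111111111111111"))    # 411111******1111
```

A real detector would combine this with context (expiry/CVV shapes nearby) to reduce false positives, as the pack description suggests.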
Every protected prompt follows a clear path through Boundary.
Message received
Your app sends the prompt to Quantlix first — not straight to the model.
We check it against your rules
Schema, budgets, guardrails, and any context packs you turned on for that deployment.
Risky or safe?
We decide whether the prompt is okay to continue, or needs a closer look.
We allow, block, warn, or redact
Whatever outcomes you configured for that situation — before an answer is generated.
We log what happened
So you can review each step later — not just the final reply.
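The steps above can be sketched as a tiny decision pipeline. Names, rule shapes, and outcome strings here are illustrative assumptions, not Quantlix's actual API:

```python
# Outcomes ordered from least to most severe; the worst result wins.
OUTCOMES = ("allow", "warn", "redact", "block")


def evaluate(prompt: str, rules) -> dict:
    """Run every configured rule before the model is called; log each step."""
    trace = []
    worst = "allow"
    for rule in rules:
        outcome = rule(prompt)  # each rule returns one of OUTCOMES
        trace.append((rule.__name__, outcome))
        if OUTCOMES.index(outcome) > OUTCOMES.index(worst):
            worst = outcome
    # The trace is recorded regardless of outcome, so every step is reviewable.
    return {"outcome": worst, "trace": trace}


# Hypothetical example rules:
def budget_check(prompt):
    return "allow" if len(prompt) < 4000 else "block"


def pii_check(prompt):
    return "redact" if "@" in prompt else "allow"


result = evaluate("Summarize this ticket for alice@example.com",
                  [budget_check, pii_check])
print(result["outcome"])  # redact
```

The key property mirrored from the flow above: evaluation and logging both happen before anything expensive (model inference) runs.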
Before rollout, confirm current enforcement scope and roadmap boundaries.
Production scope snapshot
POST /run, OpenAI-compatible /v1/* gateway routes (for example chat, messages, embeddings), and product surfaces wired to those paths.
Full capability-by-capability details: Production scope today.
Last reviewed: May 2026.
Try Boundary in your browser, or connect it from your own app with the same integration paths you'll use in production.
Open Playground
Governed-flow example
Technical
In-product flows use deployments through POST /run. From your services, call POST /run or OpenAI-compatible /v1/*, and send X-Quantlix-Deployment-Id on gateway requests so policy runs first.
curl -X POST "https://api.quantlix.ai/run" \
-H "X-API-Key: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"deployment_id":"DEPLOYMENT_ID","input":{"prompt":"Summarize this support ticket in 3 bullet points."}}'Send a prompt to POST /run; policy evaluates before model execution, then outcome and trace metadata are returned.
Boundary helps teams move from "hope nothing sensitive gets sent" to visible prompt protection before your AI answers.
Self-hosted deployments keep raw source data inside your network boundary.
Yes. Point your SDK's base_url at Quantlix and call the same upstream paths (for example /v1/chat/completions, /v1/messages, or /v1/embeddings). Send X-Quantlix-Deployment-Id on each request so the deployment's policy runs before provider inference. For teams that prefer the structured contract, POST /run is still fully supported. Start with the live entry points or the boundary enforcement guide.
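A minimal sketch of what "same payload, different host" means on the gateway path, in plain Python. The base URL is a placeholder assumption (check your deployment's actual endpoint); only the host and the X-Quantlix-Deployment-Id header differ from a direct provider call:

```python
import json

QUANTLIX_BASE = "https://api.quantlix.ai/v1"  # assumption: gateway base URL


def gateway_request(path: str, payload: dict,
                    deployment_id: str, api_key: str) -> dict:
    """Build an OpenAI-compatible request routed through the Quantlix gateway.

    The payload shape is whatever your provider expects; the deployment
    header ensures the deployment's policy runs before provider inference.
    """
    return {
        "method": "POST",
        "url": QUANTLIX_BASE + path,
        "headers": {
            "X-API-Key": api_key,
            "X-Quantlix-Deployment-Id": deployment_id,
            "Content-Type": "application/json",
        },
        "body": json.dumps(payload),
    }


req = gateway_request(
    "/chat/completions",
    {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "hi"}]},
    deployment_id="DEPLOYMENT_ID",
    api_key="YOUR_API_KEY",
)
print(req["url"])  # https://api.quantlix.ai/v1/chat/completions
```

With an SDK, the equivalent is pointing base_url at the gateway and adding X-Quantlix-Deployment-Id as a default header, as described above.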
Gateway path: OpenAI- or Anthropic-style HTTP requests to Quantlix with your usual payload shape, plus X-Quantlix-Deployment-Id and your API key. Policy evaluates before the provider call.
Run path: an HTTP request to POST /run with a deployment_id and input payload; the response includes model output plus trace-linked enforcement metadata.
Use the curl example for /run and the boundary enforcement docs for configuration detail.
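The structured run path can be mirrored in Python, matching the curl example's request contract. The response-handling helper uses hypothetical key names ("outcome", "trace_id") purely for illustration; the real trace metadata fields may differ:

```python
import json


def build_run_body(deployment_id: str, prompt: str) -> str:
    """JSON body for POST /run, matching the curl example's contract."""
    return json.dumps({
        "deployment_id": deployment_id,
        "input": {"prompt": prompt},
    })


def summarize_response(resp: dict) -> str:
    """Pick out outcome + trace id (key names are hypothetical)."""
    return f'{resp.get("outcome", "?")} (trace {resp.get("trace_id", "n/a")})'


body = build_run_body(
    "DEPLOYMENT_ID",
    "Summarize this support ticket in 3 bullet points.",
)
print(summarize_response({"outcome": "allow", "trace_id": "t-123"}))
# allow (trace t-123)
```

The point of the structured contract: the caller always gets enforcement metadata alongside any model output, so each request is auditable.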
Yes — both surfaces use the same deployment policy bundle evaluated before inference, and contracts are evolved deliberately with versioning guidance when payloads change.
Quantlix Boundary
Start with Boundary on Starter
Protect prompts before the model runs—preset packs, policy outcomes, and traces on POST /run.
For teams using Boundary regularly
Expand from individual testing to repeatable team usage.
Billed via Stripe as a subscription; limits match the Team column above.
From $499/month
For broader rollout and advanced control needs
Use Boundary as part of a larger production AI control strategy.
Custom pricing
Need Enterprise for regulated rollout, custom policies, or implementation support? Contact sales.
Quantlix is the platform for safer AI in production. Boundary is the product most teams start with — prompt checks, traces, and policy outcomes you can show to security and compliance — before you turn on the rest of the stack.
Technical surface detail: Runtime protection (docs).
Explore the full platform