Provider Integrations
Quantlix integrates with the major cloud model APIs, and it ships a built-in demo model so you can get a first success without any keys. Providers are scoped to an organization, and credentials are encrypted at rest.
What is supported today
- OpenAI and Anthropic — typical chat workloads; sync models after saving your API key.
- Azure OpenAI, AWS Bedrock, Groq, Together AI — connect per your cloud setup; capabilities depend on the model you enable.
- Voyage AI — primarily embeddings for knowledge bases and document pipelines.
- qx-example / orchestrator — a built-in or self-hosted path for demos and tests when you do not yet want an external provider; it is not a substitute for a full vendor SLA.
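As a sketch of how these org-scoped providers might be modeled, the snippet below builds a minimal provider record per type. The `Provider` shape, its field names, and the `PROVIDER_TYPES` identifiers are illustrative assumptions, not the actual Quantlix schema.

```python
# Hypothetical sketch: an org-scoped provider record.
# Field names, types, and identifiers are assumptions, not the real schema.
from dataclasses import dataclass, field

PROVIDER_TYPES = {
    "openai", "anthropic", "azure-openai", "aws-bedrock",
    "groq", "together-ai", "voyage-ai", "qx-example",
}

@dataclass
class Provider:
    org_id: str    # providers are org-scoped
    type: str      # one of PROVIDER_TYPES
    api_key: str   # encrypted at rest by the platform
    models: list = field(default_factory=list)  # filled in by Sync Models

    def __post_init__(self):
        if self.type not in PROVIDER_TYPES:
            raise ValueError(f"unknown provider type: {self.type}")

# An embeddings provider for knowledge bases, per the list above.
embeddings = Provider(org_id="org_123", type="voyage-ai", api_key="<key>")
```

The validation in `__post_init__` mirrors the portal's type picker: a provider always carries exactly one known type, and its model list stays empty until you sync.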
Portal
Dashboard → Providers — click Add Provider, choose the type (Voyage AI, OpenAI, Anthropic, etc.), set the credential (API key), then run Sync Models. Finally, bind a deployment to a provider model on the deployment detail page (Inference target).
Setup
- Create an org (if needed)
- Add Provider → choose type (e.g. Voyage AI for embeddings)
- Set Credential → paste API key
- Sync Models → pick models (chat, embeddings, etc.)
- Create a deployment, then bind it to a provider model on the deployment detail page
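The setup steps above can be sketched as a sequence of API payloads. The endpoint paths and field names here are assumptions made for illustration, not the real Quantlix API contract; the point is the ordering of the steps, not the exact routes.

```python
# Hypothetical sketch of the Setup steps as (path, body) API payloads.
# Paths and field names are assumptions; consult the actual Quantlix
# API reference for the real contract.
import json

def setup_payloads(org_id, provider_type, api_key, model, deployment):
    """Return (path, body) pairs mirroring the setup steps above."""
    return [
        # Add Provider -> choose type
        (f"/orgs/{org_id}/providers", {"type": provider_type}),
        # Set Credential -> paste API key (encrypted at rest server-side)
        (f"/orgs/{org_id}/providers/{provider_type}/credential",
         {"api_key": api_key}),
        # Sync Models -> discover available models
        (f"/orgs/{org_id}/providers/{provider_type}/sync", {}),
        # Create a deployment
        (f"/orgs/{org_id}/deployments", {"name": deployment}),
        # Bind the deployment to a provider model (Inference target)
        (f"/orgs/{org_id}/deployments/{deployment}/target",
         {"provider": provider_type, "model": model}),
    ]

steps = setup_payloads("org_123", "voyage-ai", "<key>", "voyage-3", "kb-embedder")
for path, body in steps:
    print(path, json.dumps(body))
```

Note that credential entry and model sync happen before any deployment is bound: a deployment can only target a model the provider has already synced.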
Use cases
- Chat inference — bind a deployment to an OpenAI or Anthropic model
- Embeddings — Voyage AI embeddings power knowledge bases and document lookup
- Document Q&A — combine an embedding provider with a knowledge base for retrieval-augmented generation (RAG)
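The retrieval step behind Document Q&A can be sketched in a few lines: embed the query, rank documents by cosine similarity, and pass the top hit to the chat model as context. In Quantlix the vectors would come from your embedding provider (e.g. Voyage AI); the toy 3-dimensional vectors below merely illustrate the ranking.

```python
# Minimal retrieval sketch for Document Q&A (RAG).
# Real embedding vectors would come from the embedding provider;
# these hand-written 3-d vectors only demonstrate the ranking step.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Pretend knowledge base: document name -> embedding vector.
docs = {
    "invoice-policy": [0.9, 0.1, 0.0],
    "vacation-policy": [0.1, 0.9, 0.1],
}

# Pretend embedding of the query "How do I submit an invoice?"
query_vec = [0.8, 0.2, 0.1]

# Rank documents by similarity; the best match becomes the chat context.
best = max(docs, key=lambda name: cosine(query_vec, docs[name]))
print(best)
```

This is the division of labor in the use cases above: the embedding provider handles the ranking, and the chat provider answers using the retrieved document as context.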