Workflows & MCP integration

Quantlix workflows let you fetch external data, redact it, and run model inference in one auditable graph. MCP tool payloads can be custom; Quantlix normalizes them into a stable shape for downstream nodes.

Recommended node chain

Typical production flow: input → mcp_input → transform/redact_text → model → output. Keep PII controls before the model node.
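As a sketch, the chain above could be expressed as workflow JSON like the following. The node configuration keys (tool_name, result_field, fields, deployment_id, prompt_field) are the ones documented on this page; the surrounding nodes/edges graph syntax is an assumption and may differ from your portal version:

```json
{
  "nodes": {
    "input":     { "type": "input" },
    "fetch":     { "type": "mcp_input", "tool_name": "search_tickets", "result_field": "raw_tickets" },
    "scrub":     { "type": "redact_text", "fields": ["raw_tickets.text"] },
    "summarize": { "type": "model", "deployment_id": "dep_reasoning_model", "prompt_field": "raw_tickets.text" },
    "output":    { "type": "output" }
  },
  "edges": [
    ["input", "fetch"],
    ["fetch", "scrub"],
    ["scrub", "summarize"],
    ["summarize", "output"]
  ]
}
```

Note that the redact_text node sits between mcp_input and model, so only scrubbed text can reach the provider.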

For privacy-sensitive data, the model should only read a field that exists after redaction. The redact_text node edits the configured text fields in place, so point the model's prompt_field at the same field: if the workflow redacts raw_tickets.text, set prompt_field to raw_tickets.text.

Supported executable node types include input, mcp_input, retrieval, rerank, transform, redact_text, tool_call, router, condition, policy_check, approval, model, agent, function, answer_with_citations, and output.

Deploy a multi-model workflow →
See workflow setup examples →

Set up a workflow in the portal

  1. Create or select the provider-backed deployment the model or agent node will call.
  2. Open Dashboard → Workflows and create a workflow from a template or blank graph.
  3. Add an input node that matches the JSON your app will send.
  4. Add context nodes such as mcp_input or retrieval when the model needs data.
  5. Add redact_text before the model for fields that may contain PII.
  6. Configure the model or agent node with deployment_id and a real prompt_field.
  7. Finish with an output or answer_with_citations node and run a test payload.
  8. Inspect the run trace, node outputs, redaction summary, and model request payload before production use. See the observability guide for debugging playbooks.
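For step 7, the test payload is whatever JSON your input node declares. As a hypothetical example, assuming an input node that accepts a question and an account identifier:

```json
{
  "question": "Summarize this week's ticket trends",
  "account_id": "acc_12345"
}
```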

If a field name is nested, use dotted paths where supported by the node. For example, the model node can read raw_tickets.text, and the redaction node can redact raw_tickets.text.

MCP output contract

The mcp_input node writes a normalized object to your configured result_field:

{
  "text": "...",
  "raw": [...],
  "items_count": 12,
  "has_text": true
}
  • raw: best-effort JSON-serializable MCP content
  • text: aggregated context text extracted from content items
  • items_count: number of returned items
  • has_text: whether extracted text is non-empty

If a tool returns structured JSON without explicit text blocks, Quantlix falls back to serializing raw into text so model nodes still receive context.
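For example, assuming a tool that returns only structured JSON with no text blocks, the normalized object could look like the following (values illustrative; whether has_text reflects the pre-fallback or post-fallback text is not specified here, so treat that field as an assumption):

```json
{
  "text": "[{\"id\": \"T-1001\", \"status\": \"open\"}]",
  "raw": [{ "id": "T-1001", "status": "open" }],
  "items_count": 1,
  "has_text": true
}
```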

External system format requirements

Your MCP server does not need to match the demo ticket schema. Payloads can be custom. What matters is valid MCP responses and consistent workflow field mapping.

  • Pick a stable result_field (for example raw_tickets).
  • Redact nested text via fields: ["raw_tickets.text"].
  • Set model prompt_field to the redacted field that actually exists at runtime.
  • For very large payloads, add an MCP summarization tool before model inference.

Native agents and functions

Use agent when a provider-backed chat model should decide which configured tool to call. Use function when the workflow should run a deterministic HTTP, webhook, or internal function directly.

{
  "agent": {
    "deployment_id": "dep_reasoning_model",
    "prompt_field": "question",
    "max_iterations": 4,
    "tools": [
      {
        "name": "lookup_account",
        "description": "Fetch account context",
        "input_schema": {
          "type": "object",
          "properties": { "account_id": { "type": "string" } },
          "required": ["account_id"]
        },
        "function": {
          "execution_type": "http",
          "method": "POST",
          "endpoint": "https://crm.internal/account",
          "body_template": { "account_id": "{account_id}" }
        }
      }
    ]
  }
}

Agent tools do not execute arbitrary local code. They use the same safe execution primitives as function and tool_call, and the run trace records each model request, tool call, function result, and final answer.
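For the deterministic case, a standalone function node might be sketched as follows. The execution fields (execution_type, method, endpoint, body_template) are reused from the agent tool example above; the top-level node shape and the result_field key are assumptions:

```json
{
  "function": {
    "execution_type": "http",
    "method": "POST",
    "endpoint": "https://crm.internal/account",
    "body_template": { "account_id": "{account_id}" },
    "result_field": "account_context"
  }
}
```

Choose function over agent when the call target is fixed and no model-driven tool selection is needed; the run completes without any intermediate model requests.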

DPA-safe ticket analysis example

A typical demo workflow fetches tickets from an MCP server, normalizes the result into raw_tickets, redacts raw_tickets.text, and then asks a provider model to summarize trends. The audit log should show the MCP tool called, redaction counts, and the exact payload that reached the provider.

{
  "mcp_input": {
    "tool_name": "search_tickets",
    "result_field": "raw_tickets"
  },
  "redact_text": {
    "fields": ["raw_tickets.text"],
    "patterns": ["email", "phone_se", "personnummer_se", "orgnumber_se"],
    "extra_patterns": ["\\+\\d{1,3}[\\s-]?\\d(?:[\\d\\s()-]{6,}\\d)"]
  },
  "model": {
    "prompt_field": "raw_tickets.text"
  }
}

Common questions

Can MCP return anything?

Yes. The MCP server does not need to match the demo ticket schema. Quantlix normalizes text blocks and structured JSON into a stable result object.

How do I prove what reached Claude?

Use the Analyze audit log. It records provider/model metadata and the model_request payload sent to the provider after workflow steps run.