Workflow recipes

Workflow setup and use-case examples

These examples use workflow behavior that exists in Quantlix today: MCP input, retrieval, redaction, provider-backed model and agent calls, executable functions, routing, approvals, retries, fallbacks, and audit traces.

Basic setup flow

  1. Create provider-backed deployments for every model or agent node you plan to use.
  2. Create a workflow and start with a single input node that matches your app request body.
  3. Add data nodes first: mcp_input for external systems, retrieval for knowledge bases, or tool_call for HTTP tools.
  4. Add privacy and control nodes before the model: redact_text, policy_check, condition, router, or approval.
  5. Set each model or agent prompt_field to a field that exists at that point in the run.
  6. Run a test payload and inspect node outputs before connecting production traffic.
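
The steps above can be sketched as a linear run: each node reads the shared payload, writes its output fields, and the trace records what happened at each step. A minimal Python sketch with stub node functions (everything here is illustrative, not the Quantlix runtime or SDK):

```python
# Illustrative linear workflow run with stub nodes. Node names mirror the
# setup flow above; the stub bodies are assumptions, not real node behavior.

def mcp_input(payload):
    # Stub: pretend an MCP server returned ticket text.
    payload["raw_tickets"] = {"text": "Ticket from anna@example.com"}
    return payload

def redact_text(payload):
    payload["raw_tickets"]["text"] = payload["raw_tickets"]["text"].replace(
        "anna@example.com", "[REDACTED]"
    )
    return payload

def model(payload):
    # Stub: a real node would call a provider-backed deployment here.
    payload["model_output"] = "Summary of: " + payload["raw_tickets"]["text"]
    return payload

def run_workflow(nodes, payload):
    trace = []
    for name, node in nodes:
        payload = node(payload)
        trace.append({"node": name, "payload_keys": sorted(payload)})
    return payload, trace

nodes = [("mcp_input", mcp_input), ("redact_text", redact_text), ("model", model)]
result, trace = run_workflow(nodes, {"question": "summarize tickets"})
print(result["model_output"])  # Summary of: Ticket from [REDACTED]
```

Inspecting `trace` after a test run is the step-6 habit in miniature: confirm each node wrote the fields downstream nodes expect before sending production traffic.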

Supported node cheat sheet

mcp_input

Writes normalized MCP results to result_field with text, raw, items_count, and has_text.
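
A sketch of that normalized shape, assuming the node joins the text of each returned item into one field (the field names come from the description above; the joining logic is an assumption):

```python
def normalize_mcp_result(items):
    # Illustrative normalization into the documented result_field shape:
    # text, raw, items_count, has_text. How text is joined is an assumption.
    text = "\n".join(i.get("text", "") for i in items if i.get("text"))
    return {
        "text": text,
        "raw": items,
        "items_count": len(items),
        "has_text": bool(text),
    }

result = normalize_mcp_result([{"text": "Ticket A"}, {"text": "Ticket B"}, {"id": 3}])
```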

redact_text

Redacts matched patterns in configured string fields in place. Use dotted paths such as raw_tickets.text to target nested fields.
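
Dotted paths read like nested dictionary lookups: walk to the parent object, then rewrite the leaf in place. A hedged sketch of that resolution (illustrative, not the actual implementation):

```python
import re

def redact_field(payload, dotted_path, pattern, replacement="[REDACTED]"):
    # Walk the dotted path to the parent object, then rewrite the leaf in place.
    *parents, leaf = dotted_path.split(".")
    target = payload
    for key in parents:
        target = target[key]
    target[leaf] = re.sub(pattern, replacement, target[leaf])

payload = {"raw_tickets": {"text": "Contact: anna@example.com"}}
redact_field(payload, "raw_tickets.text", r"[\w.+-]+@[\w-]+\.[\w.]+")
```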

model

Calls a provider-backed deployment. Use prompt_field for deterministic model input.

agent

Runs a bounded native tool-calling loop with configured HTTP/webhook/internal functions.

function

Executes a deterministic HTTP, webhook, or internal function and returns audit fields.

retrieval

Returns retrieval.candidates from a knowledge base.

answer_with_citations

Formats retrieved evidence into final_answer with citation metadata.

tool_call

Calls HTTP, webhook, or internal tools.

condition/router

Routes to labeled outgoing edges based on payload fields.

approval

Pauses the run for human review before continuing.

Use-case recipes

DPA-safe ticket analysis

Summarize support ticket trends without sending customer names, emails, phone numbers, or Swedish identifiers to the model.

input → mcp_input → redact_text → model → output

{
  "mcp_input": {
    "server_url": "https://tickets.internal/mcp/sse",
    "transport": "sse",
    "tool_name": "search_tickets",
    "result_field": "raw_tickets",
    "tool_arguments": { "query": "{question}", "limit": 20 }
  },
  "redact_text": {
    "fields": ["raw_tickets.text"],
    "patterns": ["email", "phone_se", "personnummer_se", "orgnumber_se"],
    "extra_patterns": ["\\+\\d{1,3}[\\s-]?\\d(?:[\\d\\s()-]{6,}\\d)"],
    "replacement": "[REDACTED]"
  },
  "model": {
    "deployment_id": "dep_claude_support",
    "prompt_field": "raw_tickets.text"
  }
}

Inspect the run trace for MCP output, redaction_summary, provider/model metadata, and model_request.messages.
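
The extra_patterns entry above is an ordinary regular expression (JSON-escaped in the config). A quick way to sanity-check it before wiring it into redact_text, shown in Python for illustration:

```python
import re

# The international-phone pattern from extra_patterns, unescaped from JSON.
phone = re.compile(r"\+\d{1,3}[\s-]?\d(?:[\d\s()-]{6,}\d)")

text = "Reach us at +46 70 123 45 67 for urgent tickets."
redacted = phone.sub("[REDACTED]", text)
```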

CRM account briefing

Look up account context from an MCP server, redact contact data, and generate a short account summary.

input → mcp_input → redact_text → model → output

{
  "mcp_input": {
    "server_url": "https://crm.internal/mcp",
    "transport": "http",
    "tool_name": "lookup_accounts",
    "result_field": "crm_context",
    "tool_arguments": {
      "query": "{question}",
      "include_contacts": false,
      "limit": 10
    }
  },
  "redact_text": {
    "fields": ["crm_context.text"],
    "patterns": ["email", "phone_se"],
    "extra_patterns": ["\\+\\d{1,3}[\\s-]?\\d(?:[\\d\\s()-]{6,}\\d)"]
  },
  "model": {
    "deployment_id": "dep_account_summary",
    "prompt_field": "crm_context.text"
  }
}

Use a stable result_field so the redaction and model nodes always read the same normalized MCP text.
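
The {question} placeholder in tool_arguments reads like simple brace substitution from the incoming payload; a hedged sketch under that assumption (the real templating rules are not documented here):

```python
def fill_template(value, payload):
    # Recursively substitute {field} placeholders in strings; leave other
    # JSON values (booleans, numbers) untouched. Illustrative only.
    if isinstance(value, str):
        for key, repl in payload.items():
            value = value.replace("{" + key + "}", str(repl))
        return value
    if isinstance(value, dict):
        return {k: fill_template(v, payload) for k, v in value.items()}
    return value

args = fill_template(
    {"query": "{question}", "include_contacts": False, "limit": 10},
    {"question": "renewal risk for ACME"},
)
```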

Knowledge lookup with citations

Search an uploaded knowledge base and return cited evidence for a customer or internal answer flow.

input → retrieval → rerank → answer_with_citations → output

{
  "retrieval": {
    "knowledge_base_id": "kb_product_docs",
    "query_template": "{question}",
    "top_k": 8,
    "min_score": 0.2
  },
  "rerank": {
    "top_n": 5,
    "threshold": 0.2
  },
  "answer_with_citations": {
    "question_field": "question",
    "strict_mode": true
  }
}

This workflow returns grounded citation objects from retrieved chunks. For full generated RAG answers, use the RAG API or add a supported prompt-building step before a model node.
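
The top_k/min_score and top_n/threshold settings compose as two filtering passes over scored candidates; a sketch assuming each candidate carries a numeric score (the scores themselves come from the knowledge base and reranker, not this code):

```python
def retrieve(candidates, top_k, min_score):
    # First pass: drop weak matches, keep the top_k highest scores.
    kept = [c for c in candidates if c["score"] >= min_score]
    return sorted(kept, key=lambda c: c["score"], reverse=True)[:top_k]

def rerank(candidates, top_n, threshold):
    # A real reranker rescores the candidates; here we only apply the
    # second cut to show how top_n and threshold narrow the evidence.
    return [c for c in candidates if c["score"] >= threshold][:top_n]

candidates = [{"id": i, "score": s} for i, s in enumerate([0.9, 0.4, 0.15, 0.3])]
evidence = rerank(retrieve(candidates, top_k=8, min_score=0.2), top_n=5, threshold=0.2)
```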

Human approval before an external action

Pause high-risk actions for manual review before calling a webhook or internal HTTP tool.

input → condition → approval → tool_call → output

{
  "condition": {
    "field": "risk_level",
    "operator": "in",
    "value": ["high", "critical"]
  },
  "approval": {
    "payload": {
      "reason": "High-risk workflow action requires review"
    }
  },
  "tool_call": {
    "execution_type": "http",
    "method": "POST",
    "endpoint": "https://ops.internal/actions",
    "body_field": "tool_input",
    "timeout_ms": 10000
  }
}

The approval node creates a waiting_approval run state. The HTTP tool only runs after the workflow resumes.
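
The condition node above reduces to a membership test on one payload field; a minimal sketch of that routing decision (edge labels and the second operator are illustrative):

```python
def route(payload, field, operator, value):
    # Returns the outgoing edge label for the condition result.
    actual = payload.get(field)
    if operator == "in":
        matched = actual in value
    elif operator == "equals":  # hypothetical second operator, for illustration
        matched = actual == value
    else:
        raise ValueError(f"unsupported operator: {operator}")
    return "true" if matched else "false"

edge = route({"risk_level": "high"}, "risk_level", "in", ["high", "critical"])
```

Only the edge labeled "true" reaches the approval node, so low-risk payloads skip review entirely.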

Policy gate before model inference

Block a workflow when the incoming payload has already been classified as high risk.

input → policy_check → model → output

{
  "policy_check": {
    "risk_field": "high_risk",
    "block_on_high_risk": true
  },
  "model": {
    "deployment_id": "dep_safe_chat",
    "prompt_field": "question"
  }
}

The policy_check node enforces a simple boolean gate. Deployment-level policies still run again inside the model node before provider inference.
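
The gate can be pictured as one check before inference; a sketch (the exception type is illustrative, not a real Quantlix error):

```python
class PolicyBlocked(Exception):
    """Illustrative error for a blocked run; not a real Quantlix type."""

def policy_check(payload, risk_field="high_risk", block_on_high_risk=True):
    # Boolean gate: stop the run if the risk field is already set.
    if block_on_high_risk and bool(payload.get(risk_field)):
        raise PolicyBlocked(f"run blocked: {risk_field} is set")
    return payload

safe = policy_check({"question": "hello", "high_risk": False})
```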

Provider fallback branch

Try one provider-backed model first and fall back to another node if the first node fails.

input → model_primary → output, with fallback_node_id: model_backup set on the primary node

{
  "model_primary": {
    "deployment_id": "dep_primary_model",
    "prompt_field": "question"
  },
  "model_backup": {
    "deployment_id": "dep_backup_model",
    "prompt_field": "question"
  },
  "node_options": {
    "fallback_node_id": "model_backup",
    "retry_policy": {
      "max_attempts": 2,
      "backoff_type": "exponential",
      "backoff_ms": 250
    }
  }
}

fallback_node_id and retry_policy are supported workflow node properties. Use a provider-backed deployment for each model node.
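
With backoff_type exponential, a common reading of backoff_ms is a base delay that doubles per retry; a sketch under that assumption (the exact schedule is not specified above):

```python
def backoff_schedule(max_attempts, backoff_ms, backoff_type="exponential"):
    # Delay before each retry; attempt 1 is the initial call, so there are
    # max_attempts - 1 delays. Doubling per retry is an assumption.
    delays = []
    for retry in range(max_attempts - 1):
        if backoff_type == "exponential":
            delays.append(backoff_ms * 2 ** retry)
        else:  # assume a fixed delay otherwise
            delays.append(backoff_ms)
    return delays

delays = backoff_schedule(max_attempts=2, backoff_ms=250)
```

With max_attempts 2 and backoff_ms 250, the node retries once after a 250 ms wait before falling back to model_backup.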

Native tool-calling account agent

Let a provider-backed model decide when to call an approved account lookup function, then produce a final answer.

input → agent → output

{
  "agent": {
    "deployment_id": "dep_reasoning_model",
    "prompt_field": "question",
    "max_iterations": 4,
    "tools": [
      {
        "name": "lookup_account",
        "description": "Fetch account context by account ID",
        "input_schema": {
          "type": "object",
          "properties": { "account_id": { "type": "string" } },
          "required": ["account_id"]
        },
        "function": {
          "execution_type": "http",
          "method": "POST",
          "endpoint": "https://crm.internal/account",
          "body_template": { "account_id": "{account_id}" }
        }
      }
    ]
  }
}

The run trace records agent_steps, native tool calls, function requests/responses, token usage, and the final agent_output.
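
The bounded loop amounts to: ask the model, execute any requested tool, feed the result back, and stop at a final answer or when max_iterations is exhausted. A minimal sketch with a stubbed model decision function (nothing here is the real agent runtime):

```python
def run_agent(model_step, tools, question, max_iterations=4):
    # model_step(messages) returns either {"tool": name, "input": args}
    # or {"final": text}; tools maps a name to a callable. All illustrative.
    messages = [{"role": "user", "content": question}]
    steps = []
    for _ in range(max_iterations):
        decision = model_step(messages)
        if "final" in decision:
            steps.append({"type": "final"})
            return decision["final"], steps
        result = tools[decision["tool"]](decision["input"])
        steps.append({"type": "tool", "name": decision["tool"]})
        messages.append({"role": "tool", "content": str(result)})
    return None, steps  # iteration budget exhausted without a final answer

def fake_model(messages):
    # First turn: call the tool; after a tool result: answer from it.
    if messages[-1]["role"] == "user":
        return {"tool": "lookup_account", "input": {"account_id": "acme-1"}}
    return {"final": "ACME is an active enterprise account."}

answer, steps = run_agent(
    fake_model, {"lookup_account": lambda i: {"status": "active"}}, "Status of acme-1?"
)
```

The steps list plays the role of agent_steps in the trace: one entry per tool call plus the final answer, capped by max_iterations.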

Production checklist

  • Every model node uses a real provider-backed deployment.
  • Every prompt_field points to a field that exists after upstream nodes finish.
  • PII redaction runs before external provider inference.
  • Large external payloads are reduced before model inference to control cost and latency.
  • Approval and tool_call nodes are tested with non-production endpoints before real side effects.
  • Audit traces show node inputs, outputs, redaction counts, model request payloads, errors, and retries.
  • Use the observability guide to debug failed runs, weak citations, missing MCP data, and slow agent/tool calls.
  • For schema validation, guardrails, and budgets, use the boundary enforcement guide.