
Agents Step

The Agents step runs AI agents within your flow. Unlike traditional automation that follows rigid rules, agents think, reason, and make decisions. Give them tools to access data and perform tasks, provide context to guide their behavior, and set boundaries to ensure safe, reliable execution.
Agent-Centric Flows: Start here. Most QuivaWorks flows begin with an Agent step. Attach tools (connectors) to let agents access your systems, and add other steps only when you need explicit control over branching, transformations, or integrations.

How Agents Work

Agents receive input (from triggers or previous steps), process it using their instructions and context, use tools to access data or perform actions, and return responses.
Trigger: Customer inquiry received

Agent Step:
  - Tools: Knowledge base, Order history
  - Context: Company policies, Product info
  - Makes decision: Answer or escalate

Response sent

Configuration Tabs

Configure your agent across these tabs:

Information Tab

Basic agent settings and execution behavior.
name
string
required
Agent name (used to reference in flow)
Example: Customer Support Agent
description
string
What this agent does (for documentation)
Example: Handles customer inquiries, searches knowledge base, and escalates complex issues
responseMode
enum
required
How the flow should handle agent execution
Options:
  • wait_for_completion - Flow waits for agent to finish (default)
  • run_in_background - Flow continues immediately, agent runs async
Use run_in_background when: Agent performs non-critical tasks (logging, analytics) or long-running operations that don’t affect flow logic
Use descriptive names like “Analyze Customer Request” rather than generic names like “Agent 1”; descriptive names make flows self-documenting.

Provider Tab

Select your LLM provider and model.
provider
enum
required
LLM provider
Options:
  • OpenAI - ChatGPT models
  • Anthropic - Claude models
  • Google - Gemini models
model
string
required
Specific model version
Examples:
  • OpenAI: gpt-4-turbo, gpt-3.5-turbo
  • Anthropic: claude-3-opus, claude-3-sonnet
  • Google: gemini-pro, gemini-ultra
Different models offer varying capabilities, speeds, and costs. Newer models generally provide better performance.
apiKey
string
required
Your API key from the provider. API keys are encrypted and securely stored; never share keys publicly.
Rotate API keys regularly (every 90 days) and use separate keys for dev/prod environments.

Agent Instructions

Define your agent’s role, personality, and capabilities. This is the foundation for how your agent responds and behaves across all interactions.
instructions
text
required
Core behavioral instructions for the agent
Be specific about:
  • Role and purpose
  • Communication style and personality
  • What tasks it should handle
  • What it should NOT do
  • When to escalate or defer
Example:
You are a customer support agent for Acme Corp. You help customers with 
order tracking, returns, and product questions. You're friendly, professional, 
and always try to resolve issues on the first interaction.

For order tracking: Search order history and provide status.
For returns: Check policy and guide through return process.
For product questions: Search knowledge base and provide accurate information.

If you cannot resolve the issue, escalate to a human agent rather than 
providing uncertain information.
The better your instructions, the better your agent performs. Include:
  • ✅ Clear examples of expected behavior
  • ✅ Specific dos and don’ts
  • ✅ Escalation criteria
  • ✅ Tone and style guidelines

Prompt

The input text for the agent to process. This can come from the trigger automatically or be set manually.
prompt
text
Text for the agent to process
Automatic from trigger: If this agent is directly connected to a trigger (Embed, HTTP, Webhook), the prompt is automatically passed from the trigger input. You don’t need to set it manually.
Manual prompt: Set manually when:
  • Agent is not first step in flow
  • Need to transform trigger input
  • Want to provide specific instructions per execution
Use variables: Reference previous steps
${trigger.user_message}
${previous_agent.response}
${http_request.body.data}
Trigger: Chat embed receives user message

Agent: [prompt automatically populated]
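The `${...}` variable syntax above resolves dotted paths against earlier step outputs. A minimal sketch of that lookup, assuming step outputs arrive as a nested dictionary (QuivaWorks resolves these server-side; this only illustrates the substitution the syntax implies):

```python
import re

def resolve_variables(template: str, steps: dict) -> str:
    """Replace ${step.path.to.value} placeholders with values from step outputs."""
    def lookup(match: re.Match) -> str:
        value = steps
        for key in match.group(1).split("."):  # walk the dotted path
            value = value[key]
        return str(value)
    return re.sub(r"\$\{([\w.]+)\}", lookup, template)

steps = {"trigger": {"user_message": "Where is my order?"}}
print(resolve_variables("Customer wrote: ${trigger.user_message}", steps))
# Customer wrote: Where is my order?
```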

Tools Tab

Attach MCP servers (connectors) to give your agent access to data and the ability to perform actions.
Tools = Connectors: In QuivaWorks, integrations, tools, and connectors are the same thing: MCP servers that agents can use. Find them in the Marketplace or create custom ones.

Adding Tools

  1. Click Add Tool in the Tools tab
  2. Choose from:
    • Marketplace MCP servers - Pre-built integrations (CRM, databases, APIs)
    • Your custom MCP servers - Deploy from OpenAPI specs or Postman collections
  3. Configure authentication if required
  4. Tool is now available to agent

How Agents Use Tools

Agents intelligently decide when and how to use tools based on:
  • The user’s request
  • Available tools
  • Agent instructions
  • Tool capabilities
Example: Customer asks “What’s my order status?”
  1. Agent recognizes it needs order information
  2. Agent sees “Order System” tool is available
  3. Agent calls tool with customer ID
  4. Agent receives order data
  5. Agent formats response for customer
Best Practice: Give agents only the tools they need. Too many tools can confuse the agent or slow response time.
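The recognize → call → format loop in the order-status example can be sketched as follows. This is illustrative only: in a real agent the LLM itself decides when to call a tool; a simple keyword check stands in for that reasoning here, and the tool name and fields are hypothetical.

```python
# Hypothetical tool registry; a real one would wrap MCP server calls.
TOOLS = {
    "order_system": lambda customer_id: {"status": "shipped", "eta": "2 days"},
}

def answer(request: str, customer_id: str) -> str:
    if "order" in request.lower():                 # 1-2: recognize the need, find the tool
        data = TOOLS["order_system"](customer_id)  # 3-4: call the tool, receive data
        return f"Your order is {data['status']}, arriving in {data['eta']}."  # 5: format
    return "How can I help you today?"

print(answer("What's my order status?", "cust_42"))
# Your order is shipped, arriving in 2 days.
```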

Tool Authentication

Many tools require authentication. Configure in tool settings:
authentication
object
Tool authentication credentials
Types:
  • API Key
  • OAuth 2.0
  • Basic Auth
  • Custom headers
Credentials are encrypted and stored securely.

Context Tab

Provide additional context to improve agent responses.

Knowledge

knowledge
text
Background information, policies, or guidelines
Use for:
  • Company policies
  • Product information
  • Process guidelines
  • FAQs
Example:
Return Policy: Customers can return items within 30 days with receipt.
Free return shipping on orders over $50.
Refunds processed within 5-7 business days.

Files

files
file[]
Upload files for agent reference
Supported formats:
  • PDF documents
  • Text files (.txt, .md)
  • Spreadsheets (.csv, .xlsx)
  • JSON files
Agents can search and reference uploaded files when responding.

Descriptions

descriptions
text
Descriptions of external resources or data
Use when: Agent needs to understand external data structures, API responses, or system behaviors that aren’t covered in tools or knowledge.
Context vs. Tools:
  • Use Context for static information (policies, guidelines)
  • Use Tools for dynamic data (CRM lookups, API calls)

Advanced Tab

Fine-tune agent behavior and output.

Output Schema

outputSchema
json
Define structured output format
Use when: You need consistent, structured data from the agent (not just a text response)
Example:
{
  "type": "object",
  "properties": {
    "decision": {"type": "string", "enum": ["approve", "reject", "escalate"]},
    "reason": {"type": "string"},
    "confidence": {"type": "number", "minimum": 0, "maximum": 1}
  },
  "required": ["decision", "reason"]
}
Agent output will conform to this schema, making it easy to use in Conditions or other steps.
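To make the schema guarantee concrete, here is a minimal, stdlib-only sketch of what checking an agent response against the example schema involves: required keys, enum membership, and numeric min/max. A real JSON Schema validator (e.g. the `jsonschema` package) covers far more; this is not QuivaWorks's internal validator.

```python
def validate_output(output: dict, schema: dict) -> list[str]:
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    for key in schema.get("required", []):
        if key not in output:
            errors.append(f"missing required field: {key}")
    for key, rules in schema.get("properties", {}).items():
        if key not in output:
            continue
        value = output[key]
        if "enum" in rules and value not in rules["enum"]:
            errors.append(f"{key}: {value!r} not in {rules['enum']}")
        if "minimum" in rules and value < rules["minimum"]:
            errors.append(f"{key}: below minimum {rules['minimum']}")
        if "maximum" in rules and value > rules["maximum"]:
            errors.append(f"{key}: above maximum {rules['maximum']}")
    return errors

schema = {
    "type": "object",
    "properties": {
        "decision": {"type": "string", "enum": ["approve", "reject", "escalate"]},
        "reason": {"type": "string"},
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    },
    "required": ["decision", "reason"],
}
print(validate_output({"decision": "approve", "reason": "in policy"}, schema))  # []
```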

Model Parameters

temperature
number
default:"0.7"
Creativity vs. consistency (0-2)
  • 0 - Deterministic, consistent (good for structured tasks)
  • 0.7 - Balanced (default)
  • 1.5+ - Creative, varied (good for content generation)
maxTokens
number
Maximum response length
Limits how long the response can be. Higher = more detailed but slower and more expensive.
topP
number
default:"1"
Nucleus sampling (0-1)
Alternative to temperature. Lower values = more focused responses.
frequencyPenalty
number
default:"0"
Reduce repetition (-2 to 2)
Positive values discourage repeating the same phrases.
presencePenalty
number
default:"0"
Encourage topic diversity (-2 to 2)
Positive values encourage discussing new topics.
stopSequences
string[]
Sequences that stop generation
Agent stops generating when it encounters these strings.
Example: ["END", "---", "STOP"]
Most users don’t need to adjust these parameters. Default values work well for most use cases. Adjust only if you have specific requirements.

Safety & Guardrails

Production-ready validation and safety features.

Output Validation

outputValidation
boolean
default:"true"
Automatically validate and correct agent output
When enabled:
  • Checks output against schema (if defined)
  • Validates data types and formats
  • Automatically requests corrections if invalid
  • Retries up to 3 times
Keep this enabled for production flows.
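The validate → correct → retry behavior described above can be sketched like this. `call_agent(feedback)` and `validate(output)` are hypothetical stand-ins for the agent execution and schema check QuivaWorks performs internally:

```python
def run_with_validation(call_agent, validate, max_retries=3):
    """Call the agent, validate its output, and retry with correction feedback."""
    feedback = None
    for _ in range(max_retries + 1):
        output = call_agent(feedback)
        errors = validate(output)
        if not errors:
            return output
        feedback = f"Your last answer was invalid ({errors}); please correct it."
    raise ValueError(f"output still invalid after {max_retries} retries")

# Stub agent that returns valid output only after it receives correction feedback.
attempts = []
def fake_agent(feedback):
    attempts.append(feedback)
    return {"decision": "approve"} if feedback else {}

result = run_with_validation(fake_agent, lambda o: [] if "decision" in o else ["missing decision"])
print(result, len(attempts))  # {'decision': 'approve'} 2
```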

Boundaries

boundaries
object
Define what the agent can and cannot do
Examples:
{
  "maxRefundAmount": 500,
  "allowedActions": ["search", "read", "suggest"],
  "forbiddenTopics": ["medical advice", "legal advice"],
  "escalateWhen": ["user is angry", "request exceeds limits"]
}
Include boundary rules in Agent Instructions for enforcement.
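A boundary object like the example above maps naturally onto a pre-execution guard in your own integrations. A sketch, assuming the proposed `action` and `amount` fields are whatever your flow records about what the agent wants to do (hypothetical names, not a QuivaWorks API):

```python
# Mirrors the example boundaries above.
BOUNDARIES = {
    "maxRefundAmount": 500,
    "allowedActions": ["search", "read", "suggest"],
}

def within_boundaries(action: str, amount: float = 0) -> bool:
    """Check a proposed agent action against the configured boundaries."""
    if action not in BOUNDARIES["allowedActions"]:
        return False
    return amount <= BOUNDARIES["maxRefundAmount"]

print(within_boundaries("search"))        # True
print(within_boundaries("delete"))        # False
print(within_boundaries("suggest", 900))  # False
```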

Human-in-the-Loop Triggers

humanInLoopTriggers
string[]
Conditions that pause for human approval
Examples:
  • "refund amount > $100"
  • "sentiment = negative"
  • "confidence < 0.7"
  • "action = delete"
When triggered, flow pauses and sends approval request to designated reviewers.
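Translated into code, the example trigger conditions above amount to a predicate over the agent's result. The field names (`refund_amount`, `sentiment`, `confidence`, `action`) are hypothetical stand-ins for whatever your output schema defines:

```python
def needs_human_approval(result: dict) -> bool:
    """True when any of the example trigger conditions fires."""
    return (
        result.get("refund_amount", 0) > 100
        or result.get("sentiment") == "negative"
        or result.get("confidence", 1.0) < 0.7
        or result.get("action") == "delete"
    )

print(needs_human_approval({"confidence": 0.95, "action": "search"}))  # False
print(needs_human_approval({"refund_amount": 250}))                    # True
```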
Always define boundaries for production agents, especially when they can:
  • Access sensitive data
  • Perform actions (refunds, deletions, emails)
  • Make decisions with business impact

Memories

Enable persistent conversation memory across interactions.
memories
boolean
default:"false"
Remember previous interactions with this user
When enabled:
  • Agent remembers past conversations
  • Provides personalized responses based on history
  • Maintains context across sessions
Use cases:
  • Customer support (remember customer preferences)
  • Sales agents (build on previous conversations)
  • Personalized assistants
Privacy: Memories are scoped per user and encrypted. Users can request memory deletion.
User: "What's my order status?"
Agent: "Sure, what's your order number?"

[Next conversation]
User: "Any updates?"
Agent: "Sure, what's your order number?" ❌

Common Patterns

Customer Support Agent

Configuration:
  • Provider: Workforce (cost-effective)
  • Tools: Knowledge Base, Order System, Support Tickets
  • Context: Return policy, shipping info, FAQs
  • Output Schema: {action: string, response: string, escalate: boolean}
Instructions:
You are a customer support agent. Help with orders, returns, and products.
Always search knowledge base first. If unsure, escalate to human.
Be friendly, professional, and resolve on first contact when possible.
Lead Qualification Agent

Configuration:
  • Provider: OpenAI GPT-4 (strong reasoning)
  • Tools: CRM, Company Database
  • Context: Ideal customer profile, pricing tiers
  • Output Schema: {score: number, category: string, reasons: string[]}
Instructions:
You qualify sales leads based on company size, budget, and needs.
Score 1-10. Above 7 = qualified. Search CRM for existing relationship.
Extract: company name, employee count, budget, use case, timeline.
Content Generation Agent

Configuration:
  • Provider: Anthropic Claude (strong writing)
  • Tools: Brand Guidelines, Past Content, Customer Data
  • Context: Brand voice, style guide, approved messaging
  • Temperature: 0.9 (more creative)
Instructions:
You create marketing content matching our brand voice.
Reference brand guidelines for tone and style.
Personalize based on customer segment and industry.
Data Analysis Agent

Configuration:
  • Provider: OpenAI GPT-4 (strong analysis)
  • Tools: Database, Analytics API
  • Context: Key metrics, business goals
  • Output Schema: {insights: string[], recommendations: string[], data: object}
Instructions:
You analyze data and provide actionable insights.
Query database for trends, calculate metrics, identify anomalies.
Provide specific recommendations backed by data.
Approval Routing Agent

Configuration:
  • Provider: Workforce
  • Tools: Policy Database, User Directory
  • Output Schema: {approved: boolean, approver: string, reason: string}
  • Human-in-the-Loop: When approved = false or amount > threshold
Instructions:
You route approval requests to appropriate approvers.
Check policy database for approval rules.
Auto-approve within limits, escalate outside boundaries.

Testing Your Agent

  1. Configure basic settings - Set provider, model, and instructions
  2. Add tools if needed - Attach MCP servers for data access
  3. Test with sample input - Use Test Mode to try different scenarios
  4. Review responses - Check accuracy, tone, and tool usage
  5. Refine instructions - Adjust based on test results
  6. Add guardrails - Enable validation, set boundaries
  7. Deploy - Connect to trigger and go live
Test Mode: Toggle Test Mode (top right) to send test inputs without executing real actions or consuming API credits.

Best Practices

Clear Instructions

Be specific about what the agent should and shouldn’t do. Include examples of expected behavior.

Right-Size Tools

Give agents only the tools they need. Too many tools can confuse or slow the agent.

Structure Output

Use output schemas when you need structured data for Conditions or other steps.

Set Boundaries

Define clear boundaries for production agents, especially when they can perform actions.

Test Edge Cases

Test with unexpected inputs, errors, and boundary conditions before deploying.

Monitor Performance

Track response quality, tool usage, and error rates. Refine instructions based on results.

Troubleshooting

Agent not using tools

Possible causes:
  • Tool not properly configured
  • Authentication failed
  • Instructions don’t mention tool usage
  • Agent doesn’t understand when to use tool
Solutions:
  • Verify tool authentication
  • Check tool is enabled
  • Update instructions to explicitly mention tool usage
  • Test tool independently
Inconsistent responses

Possible causes:
  • Temperature too high
  • Instructions too vague
  • Missing context or examples
Solutions:
  • Lower temperature (try 0.3-0.5)
  • Make instructions more specific
  • Add examples of expected behavior
  • Use output schema for structured responses
Agent ignoring boundaries

Possible causes:
  • Boundaries not clearly stated in instructions
  • No validation enabled
  • Model doesn’t follow instructions well
Solutions:
  • Make boundaries explicit in instructions with examples
  • Enable output validation
  • Add human-in-the-loop for critical actions
  • Try a different model (GPT-4 or Claude for better instruction following)
Slow responses

Possible causes:
  • Tool calls taking too long
  • Model too large for task
  • Too much context
Solutions:
  • Optimize tool endpoints
  • Try smaller/faster model
  • Reduce context size
  • Use run_in_background for non-critical operations
High costs

Possible causes:
  • Using expensive model unnecessarily
  • Too many tool calls
  • Large context or responses
Solutions:
  • Switch to Workforce or smaller model
  • Reduce maxTokens
  • Optimize tool usage
  • Cache frequent queries

Next Steps