Agents Step
The Agents step runs AI agents within your flow. Unlike traditional automation that follows rigid rules, agents think, reason, and make decisions. Give them tools to access data and perform tasks, provide context to guide their behavior, and set boundaries to ensure safe, reliable execution.

Agent-Centric Flows: Start here. Most QuivaWorks flows begin with an Agent step. Attach tools (connectors) to let agents access your systems, and add other steps only when you need explicit control over branching, transformations, or integrations.
How Agents Work
Agents receive input (from triggers or previous steps), process it using their instructions and context, use tools to access data or perform actions, and return responses.

Configuration Tabs
Configure your agent across these tabs:
- Information - Name, description, execution mode
- Provider - LLM provider, model, API key
- Instructions - Role, personality, capabilities
- Prompt - Input text for the agent to process
- Tools - MCP servers and connectors
- Context - Knowledge, files, descriptions
- Advanced - Output schema, temperature, tokens
- Safety - Validation and boundaries
- Memories - Persistent conversation memory
Information Tab
Basic agent settings and execution behavior.

Name
Agent name (used to reference it in the flow). Example: Customer Support Agent

Description
What this agent does (for documentation). Example: Handles customer inquiries, searches knowledge base, and escalates complex issues

Execution Mode
How the flow should handle agent execution. Options:
- wait_for_completion - Flow waits for the agent to finish (default)
- run_in_background - Flow continues immediately; agent runs asynchronously
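The difference between the two modes can be sketched in code. This is a minimal illustration; `run_step` and `run_agent` are hypothetical stand-ins, not QuivaWorks APIs:

```python
import threading

def run_agent(prompt):
    # Stand-in for a real agent call; returns a response string.
    return f"handled: {prompt}"

def run_step(prompt, mode="wait_for_completion"):
    """Sketch of the two execution modes for an Agent step."""
    if mode == "wait_for_completion":
        # Flow blocks here until the agent returns (default).
        return run_agent(prompt)
    elif mode == "run_in_background":
        # Flow continues immediately; the agent runs on its own thread,
        # so no response is available to later steps.
        threading.Thread(target=run_agent, args=(prompt,)).start()
        return None
    raise ValueError(f"unknown mode: {mode}")
```

Use run_in_background only when downstream steps don't need the agent's response.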
Provider Tab
Select your LLM provider and model.

Provider
Options:
- OpenAI - ChatGPT models
- Anthropic - Claude models
- Google - Gemini models

Model
Specific model version. Examples:
- OpenAI: gpt-4-turbo, gpt-3.5-turbo
- Anthropic: claude-3-opus, claude-3-sonnet
- Google: gemini-pro, gemini-ultra

API Key
Your API key from the provider. API keys are encrypted and securely stored. Never share keys publicly. Get keys from your provider’s dashboard.
Agent Instructions
Define your agent’s role, personality, and capabilities. This is the foundation for how your agent responds and behaves across all interactions.

Core behavioral instructions for the agent. Be specific about:
- Role and purpose
- Communication style and personality
- What tasks it should handle
- What it should NOT do
- When to escalate or defer
Prompt
The input text for the agent to process. This can come from the trigger automatically or be set manually.

Automatic from trigger: If this agent is directly connected to a trigger (Embed, HTTP, Webhook), the prompt is passed automatically from the trigger input. You don’t need to set it manually.

Manual prompt: Set the prompt manually when:
- The agent is not the first step in the flow
- You need to transform the trigger input
- You want to provide specific instructions per execution
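A manual prompt typically templates the trigger input into a richer per-execution instruction. A hypothetical sketch, assuming a trigger payload with `customer` and `message` fields (not a QuivaWorks contract):

```python
def build_prompt(trigger_input: dict) -> str:
    """Transform raw trigger input into a per-execution prompt."""
    # Assumed payload shape: {"customer": ..., "message": ...}
    return (
        f"Customer {trigger_input['customer']} wrote:\n"
        f"{trigger_input['message']}\n\n"
        "Classify the request and draft a reply."
    )

prompt = build_prompt({"customer": "C-1042", "message": "Where is my order?"})
```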
Tools Tab
Attach MCP servers (connectors) to give your agent access to data and the ability to perform actions.

Tools = Connectors: In QuivaWorks, integrations, tools, and connectors are the same thing: MCP servers that agents can use. Find them in the Marketplace or create custom ones.
Adding Tools
- Click Add Tool in the Tools tab
- Choose from:
- Marketplace MCP servers - Pre-built integrations (CRM, databases, APIs)
- Your custom MCP servers - Deploy from OpenAPI specs or Postman collections
- Configure authentication if required
- Tool is now available to agent
How Agents Use Tools
Agents intelligently decide when and how to use tools based on:
- The user’s request
- Available tools
- Agent instructions
- Tool capabilities

For example, when a customer asks about an order:
1. Agent recognizes it needs order information
2. Agent sees the “Order System” tool is available
3. Agent calls the tool with the customer ID
4. Agent receives the order data
5. Agent formats the response for the customer
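The sequence above amounts to a decide-call-respond loop. A minimal sketch, where the `tools` registry and the `lookup_order` tool are hypothetical:

```python
def lookup_order(customer_id):
    # Stand-in for the "Order System" MCP tool.
    return {"order_id": "A-77", "status": "shipped"}

# Registry of tools attached to the agent in the Tools tab.
tools = {"order_system": lookup_order}

def handle_request(customer_id, needs_order_info=True):
    """Agent decides whether a tool is needed, calls it, formats a reply."""
    if needs_order_info and "order_system" in tools:
        data = tools["order_system"](customer_id)  # the agent calls the tool
        return f"Order {data['order_id']} is {data['status']}."
    return "I can help with general questions."
```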
Tool Authentication
Many tools require authentication. Configure credentials in the tool settings. Supported types:
- API Key
- OAuth 2.0
- Basic Auth
- Custom headers
Context Tab
Provide additional context to improve agent responses.

Knowledge
Background information, policies, or guidelines. Use for:
- Company policies
- Product information
- Process guidelines
- FAQs
Files
Upload files for agent reference. Supported formats:
- PDF documents
- Text files (.txt, .md)
- Spreadsheets (.csv, .xlsx)
- JSON files
Descriptions
Descriptions of external resources or data. Use when the agent needs to understand external data structures, API responses, or system behaviors that aren’t covered in tools or knowledge.
Advanced Tab
Fine-tune agent behavior and output.

Output Schema
Define a structured output format. Use when you need consistent, structured data from the agent rather than a free-text response. The agent’s output will conform to the schema, making it easy to use in Conditions or other steps.
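A hypothetical schema for a support agent might look like the following (field names are illustrative, matching the customer-support pattern later in this page):

```python
# Illustrative output schema: the agent must return these three fields.
output_schema = {
    "type": "object",
    "properties": {
        "action": {"type": "string"},    # e.g. "reply" or "escalate"
        "response": {"type": "string"},  # text to send to the customer
        "escalate": {"type": "boolean"}, # route to a human?
    },
    "required": ["action", "response", "escalate"],
}
```

A downstream Condition step can then branch on `escalate` directly instead of parsing free text.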
Model Parameters
Temperature
Creativity vs. consistency (0-2):
- 0 - Deterministic, consistent (good for structured tasks)
- 0.7 - Balanced (default)
- 1.5+ - Creative, varied (good for content generation)

Max Tokens
Maximum response length. Limits how long the response can be; higher values give more detail but are slower and more expensive.

Top P
Nucleus sampling (0-1). An alternative to temperature; lower values give more focused responses.

Frequency Penalty
Reduce repetition (-2 to 2). Positive values discourage repeating the same phrases.

Presence Penalty
Encourage topic diversity (-2 to 2). Positive values encourage discussing new topics.

Stop Sequences
Sequences that stop generation. The agent stops generating when it encounters these strings. Example: ["END", "---", "STOP"]

Most users don’t need to adjust these parameters. Default values work well for most use cases; adjust only if you have specific requirements.
Safety & Guardrails
Production-ready validation and safety features.

Output Validation
Automatically validate and correct agent output. When enabled:
- Checks output against schema (if defined)
- Validates data types and formats
- Automatically requests corrections if invalid
- Retries up to 3 times
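The validate-and-retry behavior can be sketched as a loop. Here `call_agent` and `validate` are simplified stand-ins, not platform APIs; in this sketch the agent "fixes" its output on the first retry:

```python
def validate(output, schema):
    """Minimal structural check: required keys present with the right types."""
    return isinstance(output, dict) and all(
        key in output and isinstance(output[key], expected)
        for key, expected in schema.items()
    )

def call_agent(prompt, attempt):
    # Stand-in: simulate an agent that only returns valid output on retry.
    if attempt == 0:
        return {"action": "reply"}  # missing "escalate" -> invalid
    return {"action": "reply", "escalate": False}

def run_with_validation(prompt, schema, max_retries=3):
    for attempt in range(max_retries + 1):
        output = call_agent(prompt, attempt)
        if validate(output, schema):
            return output
        # Ask the agent to correct itself, as the platform does automatically.
        prompt = f"{prompt}\nYour last output was invalid; follow the schema."
    raise ValueError("agent output never validated")

result = run_with_validation("help the customer", {"action": str, "escalate": bool})
```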
Boundaries
Define what the agent can and cannot do, for example “never issue refunds above the approval limit” or “never share internal pricing”. Include boundary rules in Agent Instructions for enforcement.
Human-in-the-Loop Triggers
Conditions that pause for human approval. Examples:
- "refund amount > $100"
- "sentiment = negative"
- "confidence < 0.7"
- "action = delete"
Memories
Enable persistent conversation memory across interactions.

Remember previous interactions with this user. When enabled:
- Agent remembers past conversations
- Provides personalized responses based on history
- Maintains context across sessions

Useful for:
- Customer support (remember customer preferences)
- Sales agents (build on previous conversations)
- Personalized assistants
Common Patterns
Customer Support Agent
Configuration:
- Provider: Workforce (cost-effective)
- Tools: Knowledge Base, Order System, Support Tickets
- Context: Return policy, shipping info, FAQs
- Output Schema:
{action: string, response: string, escalate: boolean}
Lead Qualification Agent
Configuration:
- Provider: OpenAI GPT-4 (strong reasoning)
- Tools: CRM, Company Database
- Context: Ideal customer profile, pricing tiers
- Output Schema:
{score: number, category: string, reasons: string[]}
Content Generation Agent
Configuration:
- Provider: Anthropic Claude (strong writing)
- Tools: Brand Guidelines, Past Content, Customer Data
- Context: Brand voice, style guide, approved messaging
- Temperature: 0.9 (more creative)
Data Analysis Agent
Configuration:
- Provider: OpenAI GPT-4 (strong analysis)
- Tools: Database, Analytics API
- Context: Key metrics, business goals
- Output Schema:
{insights: string[], recommendations: string[], data: object}
Approval Router Agent
Configuration:
- Provider: Workforce
- Tools: Policy Database, User Directory
- Output Schema:
{approved: boolean, approver: string, reason: string}
- Human-in-the-Loop: when approved = false or amount > threshold
Testing Your Agent
1. Configure basic settings - Set provider, model, and instructions
2. Add tools if needed - Attach MCP servers for data access
3. Test with sample input - Use Test Mode to try different scenarios
4. Review responses - Check accuracy, tone, and tool usage
5. Refine instructions - Adjust based on test results
6. Add guardrails - Enable validation, set boundaries
7. Deploy - Connect to trigger and go live
Best Practices
Clear Instructions
Be specific about what the agent should and shouldn’t do. Include examples of expected behavior.
Right-Size Tools
Give agents only the tools they need. Too many tools can confuse or slow the agent.
Structure Output
Use output schemas when you need structured data for Conditions or other steps.
Set Boundaries
Define clear boundaries for production agents, especially when they can perform actions.
Test Edge Cases
Test with unexpected inputs, errors, and boundary conditions before deploying.
Monitor Performance
Track response quality, tool usage, and error rates. Refine instructions based on results.
Troubleshooting
Agent not using tools
Possible causes:
- Tool not properly configured
- Authentication failed
- Instructions don’t mention tool usage
- Agent doesn’t understand when to use tool
Solutions:
- Verify tool authentication
- Check tool is enabled
- Update instructions to explicitly mention tool usage
- Test tool independently
Inconsistent responses
Possible causes:
- Temperature too high
- Instructions too vague
- Missing context or examples
Solutions:
- Lower temperature (try 0.3-0.5)
- Make instructions more specific
- Add examples of expected behavior
- Use output schema for structured responses
Agent not respecting boundaries
Possible causes:
- Boundaries not clearly stated in instructions
- No validation enabled
- Model doesn’t follow instructions well
Solutions:
- Make boundaries explicit in instructions with examples
- Enable output validation
- Add human-in-the-loop for critical actions
- Try different model (GPT-4 or Claude for better instruction following)
Slow responses
Possible causes:
- Tool calls taking too long
- Model too large for task
- Too much context
Solutions:
- Optimize tool endpoints
- Try smaller/faster model
- Reduce context size
- Use run_in_background for non-critical operations
High costs
Possible causes:
- Using expensive model unnecessarily
- Too many tool calls
- Large context or responses
Solutions:
- Switch to Workforce or smaller model
- Reduce maxTokens
- Optimize tool usage
- Cache frequent queries