Context Settings

The Context tab controls how your agent manages memory, processes conversation history, and reasons through problems. These settings directly impact response quality, cost, and agent capabilities.

Overview

Context settings determine:
  • How much conversation history the agent remembers
  • How intelligently that memory is managed
  • How many reasoning steps the agent can take
  • The total amount of information the agent can process
  • Whether prompts are automatically optimized

Smart Context

Automatically manages conversation memory by intelligently selecting the most relevant previous messages.

What is Smart Context?

Instead of including the entire conversation history (which wastes tokens and can confuse the agent), Smart Context:
  1. Analyzes the current query and full conversation
  2. Selects the most relevant previous messages
  3. Includes only pertinent context for this specific response
  4. Reduces token usage while improving quality

How It Works

Without Smart Context:
User: "What's your return policy?"
Agent: [Response about 30-day returns]

User: "What about shipping?"
Agent: [Response about shipping]

User: "Can I get a refund?"
Agent receives: ALL previous messages
- What's your return policy?
- [Full response about returns]
- What about shipping?
- [Full response about shipping]
- Can I get a refund?

Total: ~500 tokens of context
With Smart Context:
User: "What's your return policy?"
Agent: [Response about 30-day returns]

User: "What about shipping?"
Agent: [Response about shipping]

User: "Can I get a refund?"
Agent receives: ONLY relevant messages
- What's your return policy?
- [Full response about returns]
- Can I get a refund?

Total: ~200 tokens of context (shipping context excluded as irrelevant)
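The selection step above can be sketched in code. This is a toy illustration only: it approximates "relevance" with content-word overlap between each past message and the current query, whereas a real Smart Context implementation would likely use embeddings or a model-based relevance score.

```python
# Toy sketch of relevance-based message selection (assumed mechanism;
# production systems would likely use embeddings, not word overlap).
STOPWORDS = {"a", "i", "can", "get", "for", "my", "what", "about",
             "your", "we", "on", "most", "the"}

def content_words(text):
    """Lowercase, strip surrounding punctuation and plural 's', drop stopwords."""
    words = {w.lower().strip("?.!,'s") for w in text.split()}
    return {w for w in words if w and w not in STOPWORDS}

def smart_context(history, query):
    """Keep only history messages sharing at least one content word with the query."""
    q = content_words(query)
    return [m for m in history if content_words(m) & q]

history = [
    "What's your return policy?",
    "We offer 30-day returns on most items.",
    "What about shipping?",
    "Standard shipping takes 3-5 business days.",
]
relevant = smart_context(history, "Can I get a refund for my return?")
# The shipping exchange drops out; the return-policy exchange is kept.
```

Applied to the conversation above, the refund query shares "return"/"refund" vocabulary with the return-policy messages but nothing with the shipping ones, so only the relevant pair is forwarded as context.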

Benefits

Improved Quality

Agent focuses on relevant information, not distracted by unrelated history

Reduced Costs

Fewer tokens = lower costs per request

Longer Conversations

Stay within token limits even in extended conversations

Better Performance

Less context to process = faster responses

When to Enable

Default: Enabled - Keep Smart Context enabled for most use cases. It’s a free optimization that improves both quality and cost.

Prompt Optimization

Automatically enhances your agent’s prompts based on its configuration to achieve better results.

What is Prompt Optimization?

Prompt Optimization analyzes your agent’s:
  • Instructions
  • Tools and connectors
  • Output schema
  • Use case
Then automatically:
  • Structures the prompt for better AI performance
  • Emphasizes important instructions
  • Optimizes for the specific model being used
  • Improves reasoning and tool usage

How It Works

Without Prompt Optimization:
Agent receives:
- Your exact instructions as written
- Tool descriptions as provided
- User prompt as passed in

The AI processes these exactly as given.
With Prompt Optimization:
System analyzes your configuration and:
- Restructures instructions for clarity
- Highlights key constraints
- Optimizes tool usage guidance
- Formats for the specific model
- Adds relevant context cues

The AI receives an enhanced prompt.

Benefits

Better Instruction Following

Agent is more likely to follow complex or nuanced instructions correctly. Example: instructions about “only escalate refunds over $200” are emphasized in a way the model understands better.

Smarter Tool Usage

Agent makes better decisions about when and how to use tools. Example: “Search knowledge base before answering” becomes a stronger directive that the agent follows more consistently.

More Systematic Reasoning

Agent thinks through problems more systematically. Example: multi-step problems are structured for step-by-step reasoning.

Model-Specific Formatting

Prompts are tailored to work best with the specific model you selected. Example: GPT-4 and Claude respond best to different prompt formats; optimization handles this automatically.

When to Enable

Default: Enabled - Keep this on unless you’re an expert prompt engineer who prefers manual optimization.

Maximum Tokens

The maximum number of tokens the agent can use for context. This includes system instructions, conversation history, tool descriptions, and the agent’s reasoning.

What are Tokens?

Tokens are the basic units that AI models process:
  • Roughly 4 characters = 1 token
  • Roughly 0.75 words = 1 token
  • “Hello world!” = ~3 tokens
  • This paragraph = ~50 tokens
Token Calculator:
50,000 tokens ≈ 37,500 words ≈ 75 pages of text
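The rules of thumb above can be turned into a quick estimator. This is only a heuristic sketch (real token counts depend on the model's tokenizer), but it is handy for back-of-the-envelope budgeting:

```python
def estimate_tokens(text):
    """Rough estimate using the ~4 characters per token rule of thumb."""
    return max(1, round(len(text) / 4))

print(estimate_tokens("Hello world!"))  # 12 characters -> ~3 tokens
print(50_000 * 0.75)                    # ~37,500 words fit in 50,000 tokens
```

For accurate counts, use the tokenizer that matches your model rather than a character heuristic.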

What Counts Toward the Limit

All of these count toward your token limit:
Agent instructions - your instructions from the Information tab. Typical size:
  • Simple: 200-500 tokens
  • Detailed: 500-1,500 tokens
  • Very detailed: 1,500-3,000 tokens
Tool descriptions - descriptions of available tools and how to use them. Typical size per tool:
  • Simple tool: 100-300 tokens
  • Complex tool: 300-800 tokens
  • 5 tools ≈ 1,000-2,000 tokens
Message history - previous messages (limited by the Message History setting). Typical size:
  • Short message: 50-150 tokens
  • Long message: 150-500 tokens
  • 50 messages ≈ 5,000-10,000 tokens
Current input - the current input to the agent. Typical size:
  • Simple question: 10-50 tokens
  • Detailed request: 50-200 tokens
  • Long document: 200-5,000+ tokens
Reasoning and tool usage - internal reasoning steps and tool calls. Typical size:
  • Simple response: 100-500 tokens
  • Tool usage: 200-800 tokens per tool call
  • Complex reasoning: 1,000-5,000+ tokens
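Adding the categories together shows how far a typical configuration sits from the default limit. The figures below are hypothetical mid-range values drawn from the typical sizes listed above, not measurements:

```python
# Hypothetical mid-range figures from the typical sizes listed above.
context_budget = {
    "instructions": 1_000,          # detailed instructions
    "tool_descriptions": 1_500,     # ~5 tools
    "message_history": 7_500,       # ~50 messages
    "current_input": 100,           # a detailed request
    "reasoning_and_tools": 2_000,   # a few tool calls
}
used = sum(context_budget.values())
limit = 50_000
print(f"{used:,} of {limit:,} tokens used ({used / limit:.0%})")
# -> 12,100 of 50,000 tokens used (24%)
```

Even a fairly rich configuration uses only about a quarter of the default budget, which is why 50,000 tokens is a comfortable starting point.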

Setting the Limit

Default: 50,000 tokens
A lower limit (around 16K) works for:
  • Simple, single-turn interactions
  • Minimal conversation history
  • Cost-sensitive applications
  • Fast responses needed
Sufficient for:
  • Basic classification
  • Simple Q&A
  • One-shot processing
  • Minimal tools
Limitations:
  • ⚠️ Limited history
  • ⚠️ Few tools available
  • ⚠️ Can’t handle long inputs

Choosing the Right Limit

Decision Guide

Ask yourself:
  1. How long are typical inputs?
    • Short (< 500 words) → 16K-50K
    • Medium (500-2,000 words) → 50K-128K
    • Long (2,000+ words) → 128K+
  2. How many tools does the agent use?
    • None or 1-2 → 16K-50K
    • 3-5 → 50K
    • 6-10 → 50K-128K
    • 10+ → 128K+
  3. How long are conversations?
    • Single turn → 16K
    • 5-20 turns → 50K
    • 20-50 turns → 50K-128K
    • 50+ turns → 128K+
  4. What’s your budget?
    • Cost-sensitive → Use minimum needed
    • Standard → 50K
    • Premium → 128K+
Context above 200K tokens can produce more unpredictable results. Even if your model supports it, quality may degrade with extreme context lengths.
Start with 50,000 (the default). Increase only if you hit limits or need more capability. Monitor your usage and adjust.

Message History Limit

Maximum number of previous messages to include in conversation context. Works with Smart Context to determine which messages the agent can access.

What is Message History?

The conversation history is the list of previous messages between the user and agent:
User: "What's your return policy?"
Agent: "We have a 30-day return policy..."
User: "What about damaged items?"
Agent: "Damaged items can be returned..."
User: "Can I get a refund?"  ← Current message
Message History Limit determines how far back the agent can see.

How It Works

With Message History Limit = 50:
Agent can access:
- Current message
- Up to 50 previous messages
- (Approximately 25 conversation turns)

Older messages are excluded from context.
With Smart Context enabled:
Agent can access:
- Current message
- Up to 50 previous messages
- Smart Context selects most relevant ones

Only the most pertinent history is included.
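The history cutoff itself is simple sliding-window trimming, sketched below. (Smart Context would then filter within this window; that step is omitted here.)

```python
def trim_history(messages, limit=50):
    """Keep only the `limit` most recent messages; older ones leave the context window."""
    return messages[-limit:]

conversation = [f"message {i}" for i in range(1, 121)]  # 120 messages so far
window = trim_history(conversation)
# The agent sees messages 71-120; messages 1-70 are no longer in context.
```

This is why an agent can "forget" early parts of a long conversation: once a message slides past the limit, it is simply never sent to the model again.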

Setting the Limit

Default: 50 messages (approximately 25 turns)
A lower limit (around 20 messages) works for:
  • Short interactions
  • Simple Q&A
  • Cost optimization
  • Single-topic conversations
Sufficient for:
  • 5-10 conversation turns
  • Basic customer service
  • Simple automation
Limitations:
  • ⚠️ Can’t reference older context
  • ⚠️ Not good for complex conversations

Choosing the Right Limit

Decision Guide

Consider:
  1. Typical conversation length?
    • 1-3 questions → 10-20 messages
    • 5-15 questions → 20-50 messages
    • 15-30 questions → 50-100 messages
    • 30+ questions → 100-200 messages
  2. Need to reference old context?
    • Rarely → Lower limit
    • Sometimes → Medium limit
    • Frequently → Higher limit
  3. Cost sensitivity?
    • Very sensitive → Lower limit
    • Standard → Medium limit
    • Not concerned → Higher limit
  4. Smart Context enabled?
    • Yes → Can use higher limits (it optimizes)
    • No → Use lower limits to control costs
At 50 messages: You get approximately 25 conversation turns (each turn = 1 user message + 1 agent message). This is plenty for most customer service and automation scenarios.
Start with 50 (the default). Lower if you want to reduce costs. Raise if agents struggle with longer conversations.

Maximum Reasoning Steps

Limits how many times the agent can use tools or reason through a problem before providing a final response.

What are Reasoning Steps?

A reasoning step is any action the agent takes:
  1. Tool usage - Calling an API, searching knowledge base, querying database
  2. Internal reasoning - Thinking through a problem step-by-step
  3. Decision-making - Evaluating options and choosing a path
Example conversation:
User: "I want to return order #12345"

Step 1: Agent uses Order Lookup tool → Gets order details
Step 2: Agent uses Return Policy tool → Checks if return allowed
Step 3: Agent reasons → Order is within 30 days, item eligible
Step 4: Agent uses Refund Processor tool → Initiates refund
Step 5: Agent responds → "I've processed your return and refund"

Total: 5 reasoning steps

Why Limit Reasoning Steps?

Without limits, agents could get stuck in loops:
Agent: Use tool A → Error
Agent: Try tool B → Error
Agent: Try tool A again → Error
Agent: Try different approach → Error
(Repeats indefinitely...)
The limit prevents this.
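The guard can be sketched as a capped loop. This is an assumed simplification of the agent runtime, not the platform's actual implementation: each iteration is one reasoning step, and the cap forces a partial answer instead of an endless retry loop.

```python
def run_agent(step_fn, max_steps=10):
    """Execute reasoning steps until step_fn reports completion or the cap is hit.

    step_fn(step) returns (done, answer); a stuck agent never sets done=True,
    so the loop exits after max_steps instead of repeating forever.
    """
    for step in range(1, max_steps + 1):
        done, answer = step_fn(step)
        if done:
            return answer
    return "Partial answer: reasoning step limit reached"

# A step function stuck retrying a failing tool never reports done,
# so the cap stops it after 10 attempts.
result = run_agent(lambda step: (False, None))
```

A well-behaved step function that finishes early returns its answer immediately; only the pathological case burns all ten steps.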
Each reasoning step uses tokens:
Step 1: Tool call = 200 tokens
Step 2: Tool call = 200 tokens
Step 3: Reasoning = 300 tokens
Step 4: Tool call = 200 tokens
Step 5: Response = 400 tokens

Total: 1,300 tokens
More steps = higher costs. The limit caps this.
Each step takes time:
Step 1: 0.5 seconds
Step 2: 0.5 seconds
Step 3: 0.3 seconds
Step 4: 0.5 seconds
Step 5: 0.8 seconds

Total: 2.6 seconds
More steps = slower responses. The limit prevents excessive delays.
Limits encourage the agent to be efficient.

❌ With unlimited steps:
  • Try every tool
  • Excessive reasoning
  • Redundant checks

✅ With reasonable limits:
  • Choose best tool first
  • Efficient reasoning
  • Direct path to answer

Setting the Limit

Default: 10 steps
A lower limit (around 5 steps) works for:
  • Simple tasks
  • Single tool usage
  • Fast responses critical
  • Cost-sensitive
Sufficient for:
  • 1-2 tool calls
  • Simple reasoning
  • Basic automation
Limitations:
  • ⚠️ Can’t handle complex tasks
  • ⚠️ May fail on multi-step problems

Choosing the Right Limit

Decision Guide

Consider:
  1. Task complexity?
    • Simple (1 tool) → 3-5 steps
    • Moderate (2-3 tools) → 5-10 steps
    • Complex (4-6 tools) → 10-20 steps
    • Very complex (7+ tools) → 20-30 steps
  2. How many tools available?
    • 1-2 tools → 5 steps
    • 3-5 tools → 10 steps
    • 6-10 tools → 15 steps
    • 10+ tools → 20 steps
  3. Response time requirements?
    • Must be fast → Lower limit
    • Standard → 10 steps
    • Can be slower → Higher limit
  4. Cost sensitivity?
    • Very sensitive → Lower limit
    • Standard → 10 steps
    • Not concerned → Higher limit

What Happens When Limit is Reached

When the agent hits the reasoning step limit:
  1. Agent stops reasoning
  2. Returns best answer so far
  3. May include a note that it couldn’t complete
Example:
User: "Analyze this complex data set and provide insights."

Agent (after 10 steps of analysis):
"Based on my analysis so far, I've found [partial insights]. 
However, this is a complex dataset that would benefit from 
additional analysis. Here's what I've discovered..."
If agents frequently hit the limit without completing tasks, increase the limit. If they rarely use all steps, you can lower it to save costs.
Start with 10 (the default). Monitor your agents’ performance. Increase for complex tasks, decrease for simple ones.

Best Practices

Keep Smart Context Enabled

Smart Context is a free optimization that:
  • Reduces costs
  • Improves focus
  • Enables longer conversations
  • Works automatically
Disable only if you have specific reasons.

Keep Prompt Optimization Enabled

Unless you’re an expert prompt engineer:
  • Keep it enabled
  • Let the system optimize
  • Focus on clear instructions
  • Don’t worry about prompt formatting
You can always disable for manual control.

Start with the Default Token Limit

50,000 tokens is sufficient for most use cases:
  • Standard conversations
  • Multiple tools
  • Reasonable history
  • Good balance of cost/capability
Increase only when you hit limits.

Monitor Actual Usage

Track how much context your agents actually use:
  • Are they consistently near the limit?
  • Are they using only a fraction?
  • Adjust limits based on actual usage
Right-size for efficiency.

Right-Size Message History

More history = more tokens = higher costs:
  • Most use cases: 50 messages is plenty
  • Simple bots: 20 messages may be enough
  • Complex conversations: 100 messages if needed
Balance history with budget.

Match Reasoning Steps to Task Complexity

Match the limit to the task:
  • Simple (1-2 tools) → 5 steps
  • Standard (3-5 tools) → 10 steps
  • Complex (6+ tools) → 15-20 steps
Too low = incomplete tasks. Too high = wasted tokens.

Test Edge Cases

Verify limits work for:
  • Longest expected conversations
  • Most complex tasks
  • Edge cases with many tools
If agents hit limits, increase thoughtfully.

Iterate Over Time

As you learn your agents’ patterns:
  • Lower unused capacity
  • Increase where agents struggle
  • Fine-tune for specific use cases
Start generous, optimize down.

Troubleshooting

Token Limit Errors

Symptoms:
  • Error: “Token limit exceeded”
  • Responses cut off
  • Agent can’t complete tasks
Solutions:
  1. Increase Maximum Tokens limit
  2. Reduce Message History Limit
  3. Simplify agent instructions
  4. Remove unnecessary tools
  5. Enable Smart Context (if not already)
  6. Use a model with higher context (Claude, GPT-4 Turbo)

Agent Forgets Earlier Conversation

Symptoms:
  • Repeating questions
  • Not remembering earlier conversation
  • Losing track of context
Solutions:
  1. Increase Message History Limit
  2. Check Smart Context is enabled
  3. Verify conversation is actually multi-turn
  4. Ensure messages are being saved correctly

Tasks Left Incomplete

Symptoms:
  • Incomplete answers
  • “I couldn’t complete analysis”
  • Tasks not finished
Solutions:
  1. Increase Maximum Reasoning Steps
  2. Simplify the task
  3. Reduce number of tools (remove unused ones)
  4. Break complex tasks into multiple agents
  5. Check agent isn’t stuck in loops

Slow Responses

Causes:
  • High token limits
  • Many reasoning steps
  • Large message history
  • Complex tools
Solutions:
  1. Reduce token limits (if not using full capacity)
  2. Lower reasoning step limit
  3. Reduce message history
  4. Use faster model (GPT-3.5 vs GPT-4)
  5. Optimize tool descriptions

High Costs

Check:
  • Token limits set too high?
  • Message history too long?
  • Reasoning steps too high?
  • Smart Context disabled?
  • Using expensive model?
Optimize:
  1. Right-size token limits to actual usage
  2. Lower message history to minimum needed
  3. Reduce reasoning steps if not all used
  4. Enable Smart Context
  5. Consider Workforce model
  6. Monitor per-agent costs
