AI Agent
LLM-powered conversational agent with autonomous reasoning, tool integration, and knowledge base support.
Core LLM-powered agent node for building sophisticated conversational workflows with autonomous reasoning capabilities, extensive tool integration, and RAG-powered knowledge retrieval.
Configuration
Connections
| Direction | Link Types |
|---|---|
| Incoming | DEFAULT, HANDOVER |
| Outgoing | DEFAULT, HANDOVER, GUARDRAIL, TOOL, KNOWLEDGEBASE |
Agentic Loop Configuration
AI Agents operate through an autonomous reasoning cycle. For a detailed explanation of how this works, see Agentic Loop Concepts.
Loop Control Parameters
The agent's reasoning behavior is controlled through these configuration parameters:
| Parameter | Code | Type | Default | Description |
|---|---|---|---|---|
| Max Turns | maxTurns | Text | "5" | Maximum reasoning iterations to prevent infinite loops |
| Temperature | temperature | Fractional | 0.7 | Response creativity (0.0 = deterministic, 2.0 = creative) |
| Last Messages Count | lastMessagesCount | Text | "5" | Number of previous conversation turns included in context |
Loop Performance
The agentic loop enables autonomous tool usage and decision making. Use maxTurns to balance capability with performance and cost control.
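The bound that maxTurns places on the reasoning cycle can be sketched as follows. This is a minimal illustration, not the platform's implementation; `callModel` and `executeTool` are hypothetical stand-ins for the LLM call and tool dispatch:

```typescript
type ModelStep = { toolCall?: { name: string; args: unknown }; text?: string };

// Hypothetical stand-ins for the platform's model call and tool dispatch.
type CallModel = (history: string[]) => ModelStep;
type ExecuteTool = (name: string, args: unknown) => string;

function agenticLoop(
  prompt: string,
  maxTurns: number,
  callModel: CallModel,
  executeTool: ExecuteTool
): string {
  const history = [prompt];
  for (let turn = 0; turn < maxTurns; turn++) {
    const step = callModel(history);
    // A final text answer ends the loop early; otherwise run the requested tool.
    if (!step.toolCall) return step.text ?? "";
    // The tool result is appended so the next turn can reason over it.
    history.push(executeTool(step.toolCall.name, step.toolCall.args));
  }
  return "Max turns reached"; // maxTurns is the safety valve against infinite tool loops
}
```

Each tool call consumes one turn, which is why multi-step tasks need a higher maxTurns than simple question answering.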
Input Parameters
Template Variables
AI Agents have access to both agent-specific context and the full workflow context object. Use these variables in your system message and prompts:
Agent-Specific Context
These variables are unique to AI agent nodes:
| Variable | Description | Usage Context |
|---|---|---|
| {{ctx.knowledge_base}} | Retrieved knowledge base context | Available when KB is connected |
| {{ctx.user_prompt_message}} | Current user input message | Always available |
| {{ctx.conversation_history}} | Previous conversation context | When lastMessagesCount > 0 |
| {{ctx.memory.long_term}} | Persistent chat history across sessions | When enableLongTermMemory is true |
| {{ctx.memory.tool_calls}} | Cached results from previous tool executions | When tools are connected |
| {{ctx.agent_name}} | Display name of the current agent | Always available |
| {{ctx.last_model_response}} | Previous AI response in conversation | Available in multi-turn conversations |
| {{ctx.current_timestamp}} | Current execution timestamp (ISO format) | Always available |
Universal Workflow Context
AI Agents also have access to the complete workflow context. For detailed documentation, see Workflow Context.
Key Context Categories:
| Context | Example Usage | Description |
|---|---|---|
| Other Nodes | {{ctx.nodes.dataProcessor.outputs.result}} | Access outputs from any connected workflow node |
| Variables | {{ctx.vars.userPreferences}} | Workflow variables (mutable during execution) |
| Constants | {{ctx.consts.API_ENDPOINT}} | Workflow constants (configuration values) |
| User Info | {{ctx.user.login}}, {{ctx.user.roles}} | Current user details and permissions |
| System Info | {{ctx.workspace_slug}}, {{ctx.app_slug}} | Workspace and application context |
Common Agent Patterns
Personalized Responses:
```
Hello {{ctx.user.login}}! I'm {{ctx.agent_name}}, your assistant for {{ctx.app_slug}}.
Using knowledge: {{ctx.knowledge_base}}
Previous context: {{ctx.conversation_history}}
```
Multi-Node Integration:
```
Based on the data from {{ctx.nodes.dataProcessor.outputs.summary}} and
user preference {{ctx.vars.displayMode}}, I'll provide a {{ctx.user.roles[0] === "admin" ? "detailed" : "standard"}} response.
```
Conditional Logic with Memory:
```
{{ctx.memory.tool_calls ? "I remember using these tools: " + ctx.memory.tool_calls : "This is our first interaction."}}
Knowledge context: {{ctx.knowledge_base}}
```
Default System Message
```
You are {{ctx.agent_name}}, a helpful assistant. Use this context: {{ctx.knowledge_base}}
```
Configuration Examples
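As one illustration, the loop parameters from the Loop Control Parameters table might be combined like this. Field names follow the Code column; the values shown are example choices for a factual, multi-step assistant, not required defaults:

```json
{
  "maxTurns": "6",
  "temperature": 0.3,
  "lastMessagesCount": "8"
}
```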
Tool Selection Strategies
AI Agents can use any node type as tools, providing unlimited extensibility for agentic workflows. The agent's reasoning process includes intelligent tool selection based on task requirements and available capabilities.
Tool Selection Decision Flow
Tool Selection Criteria
The agent's reasoning process considers these factors when selecting tools:
- Task Complexity - Simple responses vs. multi-step operations
- Data Requirements - Input/output formats, storage needs, transformations
- External Dependencies - API calls, third-party services, real-time data
- Performance Considerations - Response time, resource usage, concurrency
- Error Handling - Fallback options, retry mechanisms, validation needs
Universal Tool Integration
Each node connected as a tool becomes a capability the agent can autonomously invoke during its reasoning process.
Universal Tool Architecture
Any node = Potential tool. When you connect a node to an AI Agent via the TOOL connection, the agent automatically receives a tool function it can call. The tool's name, description, and parameters are derived from the node's configuration.
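Conceptually, the derivation looks like the sketch below. The `NodeConfig` shape and `deriveTool` function are illustrative assumptions about how the mapping might work, not the platform's actual API:

```typescript
// Illustrative shape of a connected node's configuration (assumed, simplified).
interface NodeConfig {
  name: string;        // the node name becomes the tool name
  description: string; // shown to the model so it knows when to call the tool
  inputs: Record<string, { type: string; description: string }>;
}

// Illustrative shape of the function definition handed to the model.
interface ToolDefinition {
  name: string;
  description: string;
  parameters: {
    type: "object";
    properties: Record<string, { type: string; description: string }>;
  };
}

// Map a connected node to a callable tool definition.
function deriveTool(node: NodeConfig): ToolDefinition {
  return {
    name: node.name,
    description: node.description,
    parameters: { type: "object", properties: node.inputs },
  };
}
```

This is why descriptive node names and parameter descriptions matter: they are the only signal the model has when deciding whether and how to call a tool.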
Tool Configuration Options
When any node is connected as a tool, additional configuration options become available to control tool behavior:
| Option | Type | Default | Description |
|---|---|---|---|
| Is Action Tool | Boolean | false | ⚠️ Deprecated - Legacy option for action vs information tools |
| Preserve Output | Boolean | true | Include raw output in agent response |
| Include in Memory | Boolean | true | Cache tool results for future reference in conversation |
| Start Event Payload | Text | - | Custom payload sent to Chat clients when tool execution starts |
| End Event Payload | Text | - | Custom payload sent to Chat clients when tool execution completes |
Tool Call Events for Chat Clients
Configure real-time event streaming to Chat clients during tool execution:
Start Event Payload - Sent to Chat clients before tool execution begins:
```json
{
  "toolName": "{{ctx.node_name}}",
  "operation": "{{ctx.inputs.operation_type}}",
  "status": "starting"
}
```
End Event Payload - Sent to Chat clients after tool execution completes:
```json
{
  "toolName": "{{ctx.node_name}}",
  "success": {{ctx.outputs.success}},
  "resultCount": {{ctx.outputs.count}},
  "status": "completed"
}
```
Available Context Variables:
- ctx.inputs.* - All tool input parameters (available in start and end events)
- ctx.outputs.* - Tool execution results (available in end events only)
- ctx.node_name - Name of the connected node
Chat Client Use Cases:
- Loading Indicators: Show "Searching..." when search tool starts
- Progress Updates: Display intermediate status during long-running operations
- Real-time Feedback: Update chat interface based on tool activity
- User Experience: Keep users informed during tool execution
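On the receiving side, these payloads can drive UI state directly. The handler below is a minimal sketch; the event shape and the idea of keying indicators by tool name are assumptions about your chat client, not a fixed API:

```typescript
// Assumed event shape: mirrors the start/end payload examples above.
type ToolEvent = { toolName: string; status: "starting" | "completed"; [key: string]: unknown };

// Illustrative chat-client handler: show a loading indicator per running tool.
function onToolEvent(event: ToolEvent, indicators: Map<string, string>): void {
  if (event.status === "starting") {
    indicators.set(event.toolName, `Running ${event.toolName}...`); // e.g. "Searching..."
  } else {
    indicators.delete(event.toolName); // hide the indicator once the tool completes
  }
}
```

Keying by `toolName` lets several concurrent tool calls each show their own indicator.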
Tool Configuration Best Practices
- Output Control: Use Preserve Output to control what appears in agent responses
- Result Caching: Cache important tool results for efficient reuse in multi-turn conversations
- Chat Events: Configure event payloads for enhanced real-time user experience
- Deprecation Note: Avoid using Is Action Tool as it will be removed in future versions
Tool Integration Best Practices
- Start Simple: Begin with essential tools, then expand based on actual needs
- Clear Naming: Use descriptive node names - the agent uses these to understand when to use each tool
- Detailed Descriptions: Provide clear descriptions for tool parameters and expected outputs
- Test Individually: Verify each node works correctly before connecting it as a tool
- Monitor Usage: Track which tools are being used effectively and optimize accordingly
Intelligent Tool Composition
Agents autonomously combine tools to solve complex tasks:
Best Practices
Tool Integration Guidelines
Tool Limit Recommendations
Maximum 20 Tools: Connecting more than 20 tools can cause AI agents to hallucinate and make poor tool selection decisions. Keep tool sets focused and purposeful.
Tool Selection and Organization
Tool Quantity Management:
- Optimal Range: 5-15 tools for most use cases
- Maximum Limit: 20 tools (performance degrades beyond this)
- Minimum Viable: Start with 3-5 essential tools, expand based on actual needs
Tool Description Quality:
- Be Specific: Describe exactly what the tool does and when to use it
- Include Examples: Provide sample inputs and expected outputs
- Clear Boundaries: Specify what the tool does NOT do to avoid confusion
- Parameter Guidance: Explain each parameter's purpose and format
```
// Good Tool Description
"Search the customer database by email, phone, or customer ID.
Returns customer profile including order history and preferences.
Use when user asks about account details or order status."

// Poor Tool Description
"Search customers"
```
Tool Naming Conventions
Descriptive Names:
- ✅ SearchCustomerDatabase, CalculateShippingCost, SendWelcomeEmail
- ❌ Tool1, Query, Process, Handler

Consistent Patterns:
- Action + Object: CreateTicket, UpdateInventory, DeleteUser
- Domain + Action: CustomerLookup, PaymentProcess, InventoryCheck
System Prompt Design
Prompt Engineering
- Be specific about the agent's role and capabilities
- Include clear instructions for tool usage priorities
- Define response format and tone requirements
- Use template variables effectively for dynamic content
- Provide examples of good tool selection decisions
Tool Usage Guidance in Prompts:
```
You are a customer service agent with access to these tools:

PRIORITY TOOLS (use first):
- SearchCustomerDatabase: For account lookups and order history
- CreateSupportTicket: For issues requiring escalation

SECONDARY TOOLS (use as needed):
- SendEmail: For follow-up communications
- UpdateCustomerNotes: For documenting interactions

Always search the customer database before creating tickets.
```
Performance Optimization
Performance Guidelines
- Max Turns: Set appropriate maxTurns (3-8) to balance capability and performance
- Temperature: Use 0.1-0.3 for factual responses, 0.7-1.0 for creative tasks
- Context Limit: Limit lastMessagesCount to essential context (3-10 messages)
- Tool Efficiency: Prioritize faster tools in agent prompts
Conversation Management
Memory Configuration:
- Short Conversations: lastMessagesCount: 3-5
- Complex Tasks: lastMessagesCount: 8-12
- Long-term Sessions: Enable enableLongTermMemory sparingly
Token Management:
- Monitor total context size (system prompt + conversation + tool descriptions)
- Use concise but descriptive tool documentation
- Consider tool result caching for expensive operations
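A rough context-size check can catch overruns before a request reaches the model. The sketch below uses the common "about 4 characters per token" heuristic, which is an approximation rather than the model's real tokenizer:

```typescript
// Rough estimate: ~4 characters per token (heuristic, not a real tokenizer).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Check that system prompt + conversation + tool descriptions fit a token budget.
function contextFits(
  systemPrompt: string,
  conversation: string[],
  toolDescriptions: string[],
  tokenLimit: number
): boolean {
  const total =
    estimateTokens(systemPrompt) +
    conversation.reduce((sum, message) => sum + estimateTokens(message), 0) +
    toolDescriptions.reduce((sum, desc) => sum + estimateTokens(desc), 0);
  return total <= tokenLimit;
}
```

Trimming lastMessagesCount or tool descriptions is usually the cheapest way to bring the total back under budget.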
Tool Architecture Patterns
Layered Tool Strategy
Core Tools (Always Available):
- Essential business functions (search, create, update)
- Error handling and validation tools
- Basic communication tools
Conditional Tools (Context-Dependent):
- Specialized domain tools
- Integration-specific tools
- Advanced processing tools
Tool Dependency Management
Independent Tools:
- ✅ Each tool works standalone
- ✅ No hidden dependencies between tools
- ✅ Clear input/output contracts

Coordinated Tools:
- ⚠️ Document tool execution order in system prompt
- ⚠️ Handle partial failures gracefully
- ⚠️ Provide fallback options when tool chains fail

Quality Assurance
Testing Strategy
Individual Tool Testing:
- Test each tool independently with various inputs
- Verify error handling and edge cases
- Confirm output format consistency
- Confirm input parameters are validated correctly
Agent Integration Testing:
- Test tool selection accuracy with realistic scenarios
- Verify agent chooses appropriate tools for different queries
- Test tool combination workflows
- Validate error recovery and fallback behavior
Monitoring and Optimization
Key Metrics to Track:
- Tool usage frequency and success rates
- Average turns per conversation
- Tool selection accuracy
- User satisfaction scores
- Response time and token usage
Optimization Signals:
- Too Many Unused Tools: Remove or consolidate rarely used tools
- Poor Tool Selection: Improve tool descriptions and system prompt guidance
- High Turn Count: Review tool efficiency and agent logic
- Slow Responses: Reduce tool count or optimize tool performance
Iterative Improvement
Start with a minimal tool set and expand based on real user needs. Monitor tool usage patterns and agent decision quality to optimize your tool configuration over time.
Troubleshooting
Common Issues
Infinite Loops
- Increase maxTurns if tasks legitimately require more iterations
- Review system prompt for circular reasoning patterns
- Ensure tools return clear success/failure indicators
Poor Tool Usage
- Verify tool descriptions are clear and specific
- Check that tool parameters match agent expectations
- Test tools independently before connecting to agent
Context Overflow
- Reduce lastMessagesCount if hitting token limits
- Optimize knowledge base chunk sizes
- Use more efficient prompt templates
Performance Monitoring
Monitor these metrics for optimal agent performance:
- Average turns per conversation
- Tool usage frequency and success rates
- Response time and token usage
- User satisfaction and task completion rates
Related Concepts
Within AI & Knowledge Ecosystem
- AI Sub-Agent - Multi-agent orchestration patterns and delegation strategies
- Knowledge Base - Advanced RAG implementation and semantic search optimization
- AI & Knowledge Overview - Ecosystem architecture and workflow composition patterns
Tool Integration Nodes
- Data & Storage Nodes - File operations, databases, and data management tools
- Code Execution Nodes - JavaScript, Python, and custom logic execution
- Integration Nodes - HTTP, webhooks, and third-party service connectivity
- Flow Control - Conditional logic, loops, and orchestration patterns
Next Steps:
- Multi-Agent Systems: AI Sub-Agent for delegation and orchestration
- Enhanced Context: Knowledge Base for RAG integration
- Tool Selection: Review individual node categories for specific tool capabilities