## Overview
The Agent Node runs LLM generation in the backend. It supports:

- `generateText` for normal text responses
- `generateObject` for structured JSON output
- tool calling through the Agent `tools` connector
- MCP tool discovery through the MCP Node
## Required Configuration
### Basic
- Label - display name for the node
- Agent Name - unique identifier for the agent
- Instructions - system prompt
- Instructions Mode - `Fixed | From args | Expression`
- Input Mode - `Fixed | From args | Expression`
- Input Value - typically `__full_result__` for upstream data
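As a sketch, a filled-in Basic block might look like the following. The JSON field names (`label`, `agentName`, `instructionsMode`, and so on) are illustrative guesses derived from the setting labels above, not a guaranteed schema:

```json
{
  "label": "Summarizer",
  "agentName": "summarizer_agent",
  "instructions": "Summarize the upstream result in two sentences.",
  "instructionsMode": "fixed",
  "inputMode": "fromArgs",
  "inputValue": "__full_result__"
}
```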
### Model & Operation
- Operation - `generateText | generateObject`
- Base URL (optional) - OpenAI-compatible endpoint (default `https://api.openai.com/v1`)
- API Key - provider key (default env expected: `OPENAI_API_KEY`)
- Chat Model - model id string
- Tool max steps - from `toolConfig.maxSteps` (default `10`, used as `stepCountIs(10)`)
### Output Schema

- Output Schema is used only for `generateObject`.
- For `generateText`, the output is plain text wrapped as `{ content: "<text>" }`.
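For example, a `generateText` run whose model replied with a greeting would surface downstream as:

```json
{ "content": "Hello! How can I help you today?" }
```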
## Example Configurations
### Provider presets
**OpenAI (default)**

- Base URL: `https://api.openai.com/v1`
- API Key env: `OPENAI_API_KEY`
- Example models: `gpt-4o`, `gpt-4o-mini`
**Groq**

- Base URL: `https://api.groq.com/v1`
- API Key: Groq dashboard key
- Example models: `llama-3.1-70b`, `mixtral-8x7b`
**OpenRouter**

- Base URL: `https://openrouter.ai/api/v1`
- API Key: OpenRouter key
- Example models: any OpenRouter-supported chat model
**MiniMax**

- Base URL: `https://api.minimax.io/v1`
- API Key env: `OPENAI_API_KEY` (set to your MiniMax key when using OpenAI-compatible mode)
- Example models: `MiniMax-M2.5`, `MiniMax-M2.5-highspeed`, `MiniMax-M2.1`, `MiniMax-M2.1-highspeed`, `MiniMax-M2`
- Notes:
  - `temperature` must be in `(0.0, 1.0]`
  - image/audio inputs are not supported in this compatibility mode
### Agent config: generateText
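A minimal sketch of a `generateText` agent configuration. The field names mirror the settings listed under Required Configuration but are assumptions, not a guaranteed schema; substitute your own values:

```json
{
  "label": "Support Agent",
  "agentName": "support_agent",
  "instructions": "You are a helpful support assistant.",
  "instructionsMode": "fixed",
  "inputMode": "fromArgs",
  "inputValue": "__full_result__",
  "operation": "generateText",
  "baseUrl": "https://api.openai.com/v1",
  "apiKey": "{{env.OPENAI_API_KEY}}",
  "chatModel": "gpt-4o-mini",
  "toolConfig": { "maxSteps": 10 }
}
```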
### Agent config: generateObject
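A hedged sketch of a `generateObject` configuration with an Output Schema. The field names are illustrative, and the schema is written here as JSON Schema as an assumption; check the format your runtime expects:

```json
{
  "agentName": "extractor_agent",
  "instructions": "Extract the customer name and sentiment from the input.",
  "operation": "generateObject",
  "chatModel": "gpt-4o",
  "outputSchema": {
    "type": "object",
    "properties": {
      "customerName": { "type": "string" },
      "sentiment": { "type": "string", "enum": ["positive", "neutral", "negative"] }
    },
    "required": ["customerName", "sentiment"]
  }
}
```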
### Tools + MCP wiring example
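One possible rendering of mixed tool wiring as edges. The `tools` and `input` handle names come from the patterns documented below; the node ids and the `sourceHandle`/`targetHandle` key names are assumptions in React-Flow style:

```json
{
  "edges": [
    { "source": "agent_1", "sourceHandle": "tools", "target": "query_1", "targetHandle": "input" },
    { "source": "agent_1", "sourceHandle": "tools", "target": "http_1", "targetHandle": "input" },
    { "source": "agent_1", "sourceHandle": "tools", "target": "mcp_1", "targetHandle": "input" }
  ]
}
```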
## Input Support

Agent input currently supports:

- `string`
- multimodal content arrays (including image parts)
- object payloads (converted to text/content parts when needed)
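A multimodal content array might look like the following. The part shape (`type`/`text`/`image` keys) is assumed to follow common AI-SDK-style content parts, and the image URL is a placeholder; verify against your runtime:

```json
[
  { "type": "text", "text": "Describe this image." },
  { "type": "image", "image": "https://example.com/photo.png" }
]
```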
What Are Tools?
In Agent Node context, tools are callable actions the model can invoke while generating a response.- The model decides when to call a tool based on your prompt and available tool definitions.
- A tool executes backend logic (query, mutation, http, code, mcp, etc.) and returns data.
- The returned data is fed back into the model so it can continue reasoning.
- Tool calls are bounded by
toolConfig.maxSteps(default10). - Tools edges are separate from normal execution edges; they define capability, not linear flow order.
## Tool Connector (Canonical)

Use the Agent `tools` handle for tools. The canonical edge shape is:
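A plausible rendering of that edge shape. The `tools` and `input` handle names are grounded in the wiring patterns documented below; the `sourceHandle`/`targetHandle` key names are assumptions in React-Flow style:

```json
{
  "source": "<agent node id>",
  "sourceHandle": "tools",
  "target": "<tool node id>",
  "targetHandle": "input"
}
```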
### Supported tool target node types

- `agentNode`
- `httpRequestNode`
- `queryNode`
- `mutationNode`
- `ifNode`
- `forLoopNode`
- `editFieldsNode`
- `codeNode`
- `emailNode`
- `stripeNode`
- `mcpNode`
## MCP Integration from Agent

You can attach one or more MCP nodes to Agent `tools`. At runtime:

- an MCP client is created from the MCP node's URL/auth config
- MCP tools are discovered via `mcpClient.tools()`
- discovered tools are merged into the Agent toolset
- MCP clients are closed during cleanup/finalization
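An MCP node's URL/auth config might be sketched like this. All field names and the endpoint URL are hypothetical placeholders, since the source does not specify the MCP node's schema:

```json
{
  "label": "Exa Search MCP",
  "url": "https://mcp.example.com/sse",
  "auth": { "type": "bearer", "token": "{{env.MCP_API_KEY}}" }
}
```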
## Recommended Patterns
1) Basic chat flow

`frontendElementNode -> agentNode -> returnNode`
2) Web-search-enabled agent

`frontendElementNode -> agentNode -> returnNode`
`agentNode.tools -> mcpNode.input` (Exa MCP or other MCP server)
3) Mixed custom + MCP tools

`agentNode.tools -> queryNode.input`
`agentNode.tools -> httpRequestNode.input`
`agentNode.tools -> mcpNode.input`
## Notes

- The current generated runtime path is non-streaming in this phase.
- `generateObject` remains the structured-output operation in the node UI/model.
- Keep `toolConfig.maxSteps` high enough for multi-step tool reasoning.

