Overview

The Agent Node runs LLM generation in the backend. It supports:
  • generateText for normal text responses
  • generateObject for structured JSON output
  • tool calling through the Agent tools connector
  • MCP tool discovery through the MCP Node

Required Configuration

Basic

  • Label - Display name for the node
  • Agent Name - Unique identifier for the agent
  • Instructions - System prompt
  • Instructions Mode - Fixed | From args | Expression
  • Input Mode - Fixed | From args | Expression
  • Input Value - Typically __full_result__ to pass the full upstream result
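
A minimal fragment combining these fields (field names match the full node examples later on this page):

```json
{
  "instructionsMode": "Fixed",
  "instructionsValue": "You are a helpful assistant.",
  "inputMode": "From args",
  "inputValue": "__full_result__"
}
```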

Model & Operation

  • Operation - generateText | generateObject
  • Base URL (optional) - OpenAI-compatible endpoint (defaults to https://api.openai.com/v1)
  • API Key - provider key (default env expected: OPENAI_API_KEY)
  • Chat Model - model id string
  • Tool max steps - read from toolConfig.maxSteps (default 10; applied as stepCountIs(10))

Output Schema

  • Output Schema is used only for generateObject.
  • For generateText, the output is plain text wrapped as:
    • { content: "<text>" }

Example Configurations

Provider presets

OpenAI (default)
  • Base URL: https://api.openai.com/v1
  • API Key env: OPENAI_API_KEY
  • Example models: gpt-4o, gpt-4o-mini
Groq
  • Base URL: https://api.groq.com/openai/v1
  • API Key: Groq dashboard key
  • Example models: llama-3.1-70b, mixtral-8x7b
OpenRouter
  • Base URL: https://openrouter.ai/api/v1
  • API Key: OpenRouter key
  • Example models: any OpenRouter-supported chat model
MiniMax
  • Base URL: https://api.minimax.io/v1
  • API Key env: OPENAI_API_KEY (set it to your MiniMax key when using OpenAI-compatible mode)
  • Example models: MiniMax-M2.5, MiniMax-M2.5-highspeed, MiniMax-M2.1, MiniMax-M2.1-highspeed, MiniMax-M2
  • Notes:
    • temperature must be in (0.0, 1.0]
    • image/audio inputs are not supported in this compatibility mode
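
Putting the preset together, a MiniMax modelConfig might look like the following sketch. The model id and temperature are illustrative (note that temperature must stay within (0.0, 1.0]); "provider": "openai" reflects the OpenAI-compatible mode used in the examples below.

```json
{
  "modelConfig": {
    "provider": "openai",
    "baseUrl": "https://api.minimax.io/v1",
    "chatModel": "MiniMax-M2.5",
    "temperature": 0.7
  }
}
```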

Agent config: generateText

{
  "nodeType": "agent",
  "label": "Support Agent",
  "agentName": "supportAgent",
  "operation": "generateText",
  "instructionsMode": "Fixed",
  "instructionsValue": "You are a helpful customer support assistant.",
  "inputMode": "From args",
  "inputValue": "__full_result__",
  "modelConfig": {
    "provider": "openai",
    "baseUrl": "https://api.openai.com/v1",
    "chatModel": "gpt-4o-mini",
    "temperature": 0.7,
    "maxTokens": 1000
  },
  "toolConfig": {
    "maxSteps": 10
  }
}

Agent config: generateObject

{
  "nodeType": "agent",
  "label": "Lead Extractor",
  "agentName": "leadExtractor",
  "operation": "generateObject",
  "instructionsMode": "Fixed",
  "instructionsValue": "Extract lead details from the input.",
  "inputMode": "From args",
  "inputValue": "__full_result__",
  "modelConfig": {
    "provider": "openai",
    "baseUrl": "https://api.openai.com/v1",
    "chatModel": "gpt-4o-mini"
  },
  "outputSchema": {
    "type": "object",
    "properties": {
      "name": { "type": "string" },
      "email": { "type": "string" },
      "company": { "type": "string" }
    }
  }
}
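
With the schema above, a successful generateObject run returns an object shaped by that schema, for example (values are illustrative):

```json
{
  "name": "Jane Doe",
  "email": "jane@example.com",
  "company": "Acme Inc."
}
```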

Tools + MCP wiring example

{
  "source": "agentNodeId",
  "sourceHandle": "tools",
  "target": "mcpNodeId",
  "targetHandle": "input"
}
MCP node example:
{
  "nodeType": "mcp",
  "label": "MCP Tools",
  "url": "https://mcp.exa.ai/mcp",
  "authType": "none",
  "bearerToken": ""
}
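
If the MCP server requires authentication, the same node shape can carry a bearer token. The "bearer" authType value and the URL below are assumptions for illustration; this page only shows "none" explicitly.

```json
{
  "nodeType": "mcp",
  "label": "Secured MCP Tools",
  "url": "https://mcp.example.com/mcp",
  "authType": "bearer",
  "bearerToken": "YOUR_TOKEN"
}
```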

Input Support

Agent input currently supports:
  • string
  • multimodal content arrays (including image parts)
  • object payloads (converted to text/content parts when needed)
This allows passing direct user text, structured data, or image-enabled content.
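
As a sketch, a multimodal content array passed as agent input might look like this. The part shapes mirror common OpenAI-style content parts; treat the exact field names as an assumption, and the image URL is a placeholder.

```json
[
  { "type": "text", "text": "Describe this image." },
  { "type": "image", "image": "https://example.com/photo.png" }
]
```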

What Are Tools?

In Agent Node context, tools are callable actions the model can invoke while generating a response.
  • The model decides when to call a tool based on your prompt and available tool definitions.
  • A tool executes backend logic (query, mutation, http, code, mcp, etc.) and returns data.
  • The returned data is fed back into the model so it can continue reasoning.
  • Tool calls are bounded by toolConfig.maxSteps (default 10).
  • Tools edges are separate from normal execution edges; they define capability, not linear flow order.

Tool Connector (Canonical)

Connect tools through the Agent node's tools handle. The canonical edge shape is:
{
  "source": "agentNodeId",
  "sourceHandle": "tools",
  "target": "toolNodeId",
  "targetHandle": "input"
}
Tools edges are not part of the main execution chain. They are used only for model tool calls.

Supported tool target node types

  • agentNode
  • httpRequestNode
  • queryNode
  • mutationNode
  • ifNode
  • forLoopNode
  • editFieldsNode
  • codeNode
  • emailNode
  • stripeNode
  • mcpNode
Unsupported tool edges are removed during flow normalization.

MCP Integration from Agent

You can attach one or more MCP nodes to Agent tools. At runtime:
  • The MCP client is created from the MCP node's URL/auth config
  • MCP tools are discovered via mcpClient.tools()
  • Discovered tools are merged into the Agent toolset
  • MCP clients are closed during cleanup/finalization

Example Flows

1) Basic chat flow

  • frontendElementNode -> agentNode -> returnNode

2) Web-search-enabled agent

  • frontendElementNode -> agentNode -> returnNode
  • agentNode.tools -> mcpNode.input (Exa MCP or other MCP server)

3) Mixed custom + MCP tools

  • agentNode.tools -> queryNode.input
  • agentNode.tools -> httpRequestNode.input
  • agentNode.tools -> mcpNode.input
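
Expressed as edges, example 3 is three tools edges fanning out from the same agent, each following the canonical shape above (node ids are illustrative):

```json
[
  { "source": "agentNodeId", "sourceHandle": "tools", "target": "queryNodeId", "targetHandle": "input" },
  { "source": "agentNodeId", "sourceHandle": "tools", "target": "httpRequestNodeId", "targetHandle": "input" },
  { "source": "agentNodeId", "sourceHandle": "tools", "target": "mcpNodeId", "targetHandle": "input" }
]
```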

Notes

  • The generated runtime path is non-streaming in the current phase.
  • generateObject remains the structured-output operation in the node UI/model.
  • Keep toolConfig.maxSteps high enough for multi-step tool reasoning.