## Documentation Index

Fetch the complete documentation index at: https://docs.vibeflow.ai/llms.txt
Use this file to discover all available pages before exploring further.
## Overview

The Agent Node runs LLM generation in the backend. It supports:

- `generateText` for normal text responses
- `generateObject` for structured JSON output
- tool calling through the Agent `tools` connector
- MCP tool discovery through the MCP Node
## Required Configuration

### Basic

- Label: Display name for the node
- Agent Name: Unique identifier for the agent
- Instructions: System prompt
- Instructions Mode: Fixed | From args | Expression
- Input Mode: Fixed | From args | Expression
- Input Value: Typically `__full_result__` for upstream data
### Model & Operation

- Operation: `generateText` | `generateObject`
- Base URL (optional): OpenAI-compatible endpoint (default: `https://api.openai.com/v1`)
- API Key: provider key (default env expected: `OPENAI_API_KEY`)
- Chat Model: model id string
- Tool max steps: from `toolConfig.maxSteps` (default 10, used as `stepCountIs(10)`)
### Output Schema

- Output Schema: used only for `generateObject`.
- For `generateText`, output is plain text wrapped as `{ content: "<text>" }`.
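As a minimal sketch, an Output Schema for `generateObject` could be expressed as standard JSON Schema; the field names below are illustrative assumptions, not part of the node's verified contract:

```json
{
  "type": "object",
  "properties": {
    "sentiment": { "type": "string", "enum": ["positive", "negative", "neutral"] },
    "confidence": { "type": "number" }
  },
  "required": ["sentiment", "confidence"]
}
```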
## Example Configurations
### Supported providers

The Agent Node supports a wide range of LLM providers through an OpenAI-compatible interface:

- OpenAI (default): `gpt-4o`, `gpt-4o-mini`, etc.
- Anthropic (Claude): `claude-sonnet-4-20250514`, etc.
- Google AI (Gemini): `gemini-2.0-flash`, etc.
- Groq: `llama-3.1-70b`, `mixtral-8x7b`, etc.
- Mistral: `mistral-large-latest`, etc.
- xAI (Grok): `grok-2`, etc.
- DeepSeek: `deepseek-chat`, etc.
- OpenRouter: any OpenRouter-supported chat model
- Together AI: open-source models hosted on Together
- And 15+ more providers, including custom OpenAI-compatible endpoints
**OpenAI**

- Base URL: `https://api.openai.com/v1`
- API Key env: `OPENAI_API_KEY`

**Groq**

- Base URL: `https://api.groq.com/openai/v1`
- API Key: Groq dashboard key

**OpenRouter**

- Base URL: `https://openrouter.ai/api/v1`
- API Key: OpenRouter key

**MiniMax**

- Base URL: `https://api.minimax.chat/v1`
- API Key: MiniMax key
- Notes: temperature must be > 0; supports `abab6.5s-chat` and `abab5.5-chat`
### Agent config: generateText
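A minimal sketch of a `generateText` agent configuration, assembled from the fields listed under Required Configuration; the exact JSON key names are assumptions for illustration:

```json
{
  "label": "Support Agent",
  "agentName": "support_agent",
  "instructions": "You are a helpful support assistant.",
  "instructionsMode": "Fixed",
  "inputMode": "From args",
  "inputValue": "__full_result__",
  "operation": "generateText",
  "baseUrl": "https://api.openai.com/v1",
  "apiKeyEnv": "OPENAI_API_KEY",
  "chatModel": "gpt-4o-mini"
}
```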
### Agent config: generateObject
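A comparable sketch for `generateObject`, adding an Output Schema; again, the JSON key names are illustrative assumptions:

```json
{
  "label": "Ticket Classifier",
  "agentName": "ticket_classifier",
  "instructions": "Classify the incoming support ticket.",
  "operation": "generateObject",
  "chatModel": "gpt-4o",
  "outputSchema": {
    "type": "object",
    "properties": {
      "category": { "type": "string" },
      "priority": { "type": "string", "enum": ["low", "medium", "high"] }
    },
    "required": ["category", "priority"]
  }
}
```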
### Tools + MCP wiring example
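A sketch of the wiring expressed as edges, with a custom tool node and an MCP node both attached to the Agent's `tools` handle; the node IDs and edge field names are illustrative assumptions:

```json
[
  { "source": "agent-1", "sourceHandle": "tools", "target": "query-1", "targetHandle": "input" },
  { "source": "agent-1", "sourceHandle": "tools", "target": "mcp-1", "targetHandle": "input" }
]
```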
## Input Support

Agent input currently supports:

- strings
- multimodal content arrays (including image parts)
- object payloads (converted to text/content parts when needed)
## What Are Tools?

In the Agent Node context, tools are callable actions the model can invoke while generating a response.

- The model decides when to call a tool based on your prompt and the available tool definitions.
- A tool executes backend logic (query, mutation, http, code, mcp, etc.) and returns data.
- The returned data is fed back into the model so it can continue reasoning.
- Tool calls are bounded by `toolConfig.maxSteps` (default 10).
- Tool edges are separate from normal execution edges; they define capability, not linear flow order.
## Tool Connector (Canonical)

Use the Agent `tools` handle for tools. The canonical edge shape is:
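A sketch of that canonical edge, assuming edges carry source/target handle fields; the node IDs here are illustrative:

```json
{
  "source": "agent-1",
  "sourceHandle": "tools",
  "target": "httpRequest-1",
  "targetHandle": "input"
}
```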
### Supported tool target node types

- `agentNode`
- `httpRequestNode`
- `queryNode`
- `mutationNode`
- `ifNode`
- `forLoopNode`
- `editFieldsNode`
- `codeNode`
- `emailNode`
- `stripeNode`
- `mcpNode`
## MCP Integration from Agent

You can attach one or more MCP nodes to Agent tools. At runtime:

- an MCP client is created from the MCP node's URL/auth config
- MCP tools are discovered via `mcpClient.tools()`
- discovered tools are merged into the Agent toolset
- MCP clients are closed during cleanup/finalization
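A sketch of the MCP node configuration that could back this lifecycle; the field names, auth shape, and server URL are all assumptions for illustration:

```json
{
  "label": "Exa Search",
  "url": "https://mcp.example.com/sse",
  "auth": { "type": "bearer", "tokenEnv": "EXA_API_KEY" }
}
```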
## Recommended Patterns

1) Basic chat flow
   - `frontendElementNode -> agentNode -> returnNode`
2) Web-search-enabled agent
   - `frontendElementNode -> agentNode -> returnNode`
   - `agentNode.tools -> mcpNode.input` (Exa MCP or other MCP server)
3) Mixed custom + MCP tools
   - `agentNode.tools -> queryNode.input`
   - `agentNode.tools -> httpRequestNode.input`
   - `agentNode.tools -> mcpNode.input`
## Notes

- The current generated runtime path is non-streaming in this phase.
- `generateObject` remains the structured-output operation in the node UI/model.
- Keep `toolConfig.maxSteps` high enough for multi-step tool reasoning.

