diff --git a/docs/docs/API-Reference/api-build.mdx b/docs/docs/API-Reference/api-build.mdx
index ada316aa1..dc6d8e295 100644
--- a/docs/docs/API-Reference/api-build.mdx
+++ b/docs/docs/API-Reference/api-build.mdx
@@ -113,7 +113,7 @@ curl -X GET \
 The `/build` endpoint accepts optional values for `start_component_id` and `stop_component_id` to control where the flow run starts and stops. Setting `stop_component_id` for a component triggers the same behavior as clicking the **Play** button on that component, where all dependent components leading up to that component are also run.
 
-For example, to stop flow execution at the Open AI model component, run the following command:
+For example, to stop flow execution at the OpenAI model component, run the following command:
 
 ```bash
 curl -X POST \
diff --git a/docs/docs/API-Reference/api-flows-run.mdx b/docs/docs/API-Reference/api-flows-run.mdx
index a330f9d61..22c4493f7 100644
--- a/docs/docs/API-Reference/api-flows-run.mdx
+++ b/docs/docs/API-Reference/api-flows-run.mdx
@@ -139,8 +139,6 @@ The following example is truncated to illustrate a series of `token` events as w
 
 ### Run endpoint parameters
 
-
-
 | Parameter | Type | Info |
 |-----------|------|------|
 | flow_id | UUID/string | Required. Part of URL: `/run/$FLOW_ID` |
diff --git a/docs/docs/Agents/agents-tools.mdx b/docs/docs/Agents/agents-tools.mdx
index 4e60c7060..67d8c3a2d 100644
--- a/docs/docs/Agents/agents-tools.mdx
+++ b/docs/docs/Agents/agents-tools.mdx
@@ -31,7 +31,7 @@ Add an agent to your flow that uses a different OpenAI model for a larger contex
 
 1. Create the [Simple agent starter flow](/simple-agent).
 2. Add a second agent component to the flow.
-3. Add your **Open AI API Key** to the **Agent** component.
+3. Add your **OpenAI API Key** to the **Agent** component.
 4. In the **Model Name** field, select `gpt-4.1`.
 5. Click **Tool Mode** to use this new agent as a tool.
 6. 
Connect the new agent's **Toolset** port to the previously created agent's **Tools** port.
diff --git a/docs/docs/Agents/agents.mdx b/docs/docs/Agents/agents.mdx
index a96e77f47..2120e303a 100644
--- a/docs/docs/Agents/agents.mdx
+++ b/docs/docs/Agents/agents.mdx
@@ -1,91 +1,157 @@
 ---
-title: Use Langflow Agents
+title: Use Langflow agents
 slug: /agents
 ---
 
 import Icon from "@site/src/components/icon";
 
-Agents use LLMs as a brain to autonomously analyze problems and select tools to solve them.
+Langflow's [**Agent** component](/components-agents) is critical for building agentic flows.
+This component provides everything you need to create an agent, including multiple Large Language Model (LLM) providers, tool calling, and custom instructions.
+It simplifies agent configuration so you can focus on application development.
-Langflow's [Agent component](/components-agents#agent-component) simplifies agent configuration so you can focus on application development.
+
+## How agents work
+
-The Agent component provides everything you need to create an agent, including multiple LLM providers and custom instructions.
+Agents extend LLMs by integrating _tools_, which are functions that provide additional context and enable autonomous task execution.
+These integrations make agents more specialized and powerful than standalone LLMs.
-
-## Agent settings
+
+Whereas an LLM might generate acceptable but inert responses to general queries and tasks, an agent can use its integrated context and tools to provide more relevant responses and even take action.
+For example, you might create an agent that can access your company's knowledge base, repositories, and other resources to help your team with tasks that require knowledge of your specific products, customers, and code.
-
-You can configure the Agent component to use your preferred provider and model, custom instructions, and tools.
+
+Agents use LLMs as a reasoning engine to process input, determine which actions to take to address the query, and then generate a response.
+The response could be a typical text-based LLM response, or it could involve an action, such as editing a file, running a script, or calling an external API.
-
-### Agent models and providers
+
+In an agentic context, tools are functions that the agent can run to perform tasks or access external resources.
+A function is wrapped as a `Tool` object with a common interface that the agent understands.
+Agents become aware of tools through tool registration: the agent is given a list of available tools, typically at initialization.
+The `Tool` object's description tells the agent what the tool can do, so the agent can decide whether the tool is appropriate for a given request.
-
-Use the **Model Provider** and **Model Name** settings to select the LLM that you want the Agent to use.
-
-You must provide an authentication key for the selected provider, such as an OpenAI API key.
-
-### Agent instructions and input
-
-In the **Agent Instructions** field, you can provide custom instructions that you want the Agent component to use for every conversation.
-
-These instructions are applied in addition to the **Input**, which is provided at runtime.
-
-### Agent tools
-
-Agents are most useful when they have the appropriate tools available to complete requests.
-
-An Agent component can use any Langflow component as a tool, as long as you attach it to the Agent component.
-
-:::tip
-To allow agents to use tools from MCP servers, use the [**MCP Tools** component](/components-tools#mcp-connection).
-:::
-
-When you attach a component as a tool, you must configure the component as a tool by enabling **Tool Mode**.
-
-For more information, see [Configure tools for agents](/agents-tools).
+
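The tool registration flow described in the new "How agents work" text can be sketched in plain Python. This is not Langflow's internal implementation: the `Tool` class, its fields, and the keyword-based selection logic below are simplified illustrations of how an agent matches a request against registered tool descriptions.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Tool:
    """A function wrapped with a name and a description the agent can reason over."""
    name: str
    description: str
    func: Callable[[str], str]

def calculator(expression: str) -> str:
    # Evaluate a simple arithmetic expression (illustration only; no builtins exposed).
    return str(eval(expression, {"__builtins__": {}}, {}))

class Agent:
    def __init__(self, tools: list[Tool]):
        # Tool registration: the agent receives its tools at initialization.
        self.tools = {t.name: t for t in tools}

    def pick_tool(self, request: str) -> Optional[Tool]:
        # A real agent asks the LLM to choose; here we match description keywords.
        for tool in self.tools.values():
            if any(word in request.lower() for word in tool.description.lower().split()):
                return tool
        return None

agent = Agent([Tool("calculator", "evaluate arithmetic", calculator)])
tool = agent.pick_tool("Please evaluate 2 + 3 * 4")
result = tool.func("2 + 3 * 4") if tool else None
```

In a production agent, the selection step is itself an LLM call: the tool names and descriptions are serialized into the model's context, and the model's structured output names the tool to invoke.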
 ## Use the Agent component in a flow
 
-:::tip
-For a pre-built demonstration, open the **Simple Agent** template flow and follow along.
-:::
-
-Create an agent in Langflow, starting with the **Agent** component and working outward.
+The following steps explain how to create an agentic flow in Langflow from a blank flow.
+For a prebuilt example, use the [**Simple Agent** template](/simple-agent) or try the [Langflow quickstart](/get-started-quickstart).
 
 1. Click **New Flow**, and then click **Blank Flow**.
-2. Add an **Agent** component to your workspace.
-3. Use the default model or select another provider and model, and then provide credentials for your chosen provider. For example, to use the default model, you must provide an OpenAI API key.
-4. Add **Chat input** and **Chat output** components to your flow, and connect them to the tool calling agent.
+2. Add an **Agent** component to the **Workspace**.
+3. Enter a valid OpenAI API key.
-![Chat with agent component](/img/agent-example-add-chat.png)
+
+   The default model for the **Agent** component is an OpenAI model.
+   If you want to use a different provider, edit the **Model Provider**, **Model Name**, and **API Key** fields accordingly.
+   For more information, see [Agent component parameters](#agent-component-parameters).
-This basic flow allows you to chat with the agent in the **Playground**, but you're only chatting with the OpenAI LLM.
+
+4. Add [**Chat input** and **Chat output** components](/components-io) to your flow, and then connect them to the **Agent** component.
-To unlock the power of the Agent component, connect some tools.
-
-5. Add the **News Search**, **URL**, and **Calculator** components to your flow.
-6. Enable **Tool Mode** in the **News Search**, **URL**, and **Calculator** components.
-In the [component's header menu](/concepts-components#component-menus), enable **Tool Mode** so you can use the component with an agent.
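A flow built this way can also be triggered outside the visual editor through the `/run` endpoint documented earlier in this changeset. The sketch below only constructs the request URL and JSON payload rather than sending it; the base URL (default local port), the placeholder flow ID, and the `input_value`/`input_type`/`output_type` fields are assumptions based on the documented parameters, so adjust them for your deployment.

```python
import json

BASE_URL = "http://localhost:7860"  # assumed default local Langflow address

def build_run_request(flow_id: str, message: str, stream: bool = False) -> tuple[str, str]:
    """Return the /run URL and JSON payload for a chat-style flow run."""
    url = f"{BASE_URL}/api/v1/run/{flow_id}?stream={str(stream).lower()}"
    payload = json.dumps({
        "input_value": message,   # text delivered to the Chat Input component
        "input_type": "chat",
        "output_type": "chat",
    })
    return url, payload

# "your-flow-id" is a placeholder; use the flow ID shown in your Langflow UI.
url, payload = build_run_request("your-flow-id", "What can you do?")
```

You could then POST `payload` to `url` with any HTTP client, the same way the docs' curl examples do.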
+ At this point, you have created a basic LLM-based chat flow that you can test in the