docs: Various edits related to style, syntax, and adding more detail to some pages (#9132)

* port unrelated changes from IA PR

* few more ports

* fix build

* edit to try to restart build
April I. Murphy 2025-07-22 08:20:31 -07:00 committed by GitHub
commit 2fa2a43f96
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
36 changed files with 584 additions and 423 deletions

View file

@ -113,7 +113,7 @@ curl -X GET \
The `/build` endpoint accepts optional values for `start_component_id` and `stop_component_id` to control where the flow run starts and stops.
Setting `stop_component_id` for a component triggers the same behavior as clicking the **Play** button on that component, where all dependent components leading up to that component are also run.
For example, to stop flow execution at the OpenAI model component, run the following command:
```bash
curl -X POST \
```
View file

@ -139,8 +139,6 @@ The following example is truncated to illustrate a series of `token` events as w
### Run endpoint parameters
<!-- TODO: Can there be other parameters depending on the components in the flow? -->
| Parameter | Type | Info |
|-----------|------|------|
| flow_id | UUID/string | Required. Part of URL: `/run/$FLOW_ID` |
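As a sketch of how these parameters come together in a request, the following shell snippet assembles a `/run` URL and a minimal JSON body. The server address and flow ID are placeholder values, and the `curl` call is left commented because it requires a running Langflow server and a valid API key:

```shell
# Hypothetical values: replace with your server address and flow ID.
LANGFLOW_SERVER_ADDRESS="http://localhost:7860"
FLOW_ID="aa1b2c3d-4e5f-6789-abcd-ef0123456789"

# flow_id is part of the URL path, not the JSON body.
RUN_URL="$LANGFLOW_SERVER_ADDRESS/api/v1/run/$FLOW_ID"

# Minimal JSON body; input_value carries the chat message.
RUN_BODY='{"input_value": "Hello, flow!", "output_type": "chat", "input_type": "chat"}'

echo "$RUN_URL"
# To send the request (requires a running server and valid API key):
# curl -X POST "$RUN_URL" \
#   -H "Content-Type: application/json" \
#   -H "x-api-key: $LANGFLOW_API_KEY" \
#   -d "$RUN_BODY"
```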

View file

@ -31,7 +31,7 @@ Add an agent to your flow that uses a different OpenAI model for a larger contex
1. Create the [Simple agent starter flow](/simple-agent).
2. Add a second agent component to the flow.
3. Add your **OpenAI API Key** to the **Agent** component.
4. In the **Model Name** field, select `gpt-4.1`.
5. Click **Tool Mode** to use this new agent as a tool.
6. Connect the new agent's **Toolset** port to the previously created agent's **Tools** port.

View file

@ -1,91 +1,157 @@
---
title: Use Langflow agents
slug: /agents
---
import Icon from "@site/src/components/icon";
Agents use LLMs as a brain to autonomously analyze problems and select tools to solve them.
Langflow's [Agent component](/components-agents#agent-component) simplifies agent configuration so you can focus on application development.
<details>
<summary>How agents work</summary>
The Agent component provides everything you need to create an agent, including multiple LLM providers and custom instructions.
Agents extend LLMs by integrating _tools_, which are functions that provide additional context and enable autonomous task execution.
These integrations make agents more specialized and powerful than standalone LLMs.
Whereas an LLM might generate acceptable, inert responses to general queries and tasks, an agent can leverage the integrated context and tools to provide more relevant responses and even take action.
For example, you might create an agent that can access your company's knowledge base, repositories, and other resources to help your team with tasks that require knowledge of your specific products, customers, and code.
Agents use LLMs as a reasoning engine to process input, determine which actions to take to address the query, and then generate a response.
The response could be a typical text-based LLM response, or it could involve an action, like editing a file, running a script, or calling an external API.
In an agentic context, tools are functions that the agent can run to perform tasks or access external resources.
A function is wrapped as a `Tool` object with a common interface that the agent understands.
Agents become aware of tools through tool registration, which is when the agent is provided a list of available tools typically at agent initialization.
The `Tool` object's description tells the agent what the tool can do so that it can decide whether the tool is appropriate for a given request.
</details>
## Use the Agent component in a flow
The following steps explain how to create an agentic flow in Langflow from a blank flow.
For a prebuilt example, use the [**Simple Agent** template](/simple-agent) or try the [Langflow quickstart](/get-started-quickstart).
1. Click **New Flow**, and then click **Blank Flow**.
2. Add an **Agent** component to the **Workspace**.
3. Enter a valid OpenAI API key.
The default model for the **Agent** component is an OpenAI model.
If you want to use a different provider, edit the **Model Provider**, **Model Name**, and **API Key** fields accordingly.
For more information, see [Agent component parameters](#agent-component-parameters).
4. Add [**Chat input** and **Chat output** components](/components-io) to your flow, and then connect them to the **Agent** component.
At this point, you have created a basic LLM-based chat flow that you can test in the <Icon name="Play" aria-hidden="true" /> **Playground**.
However, this flow only chats with the LLM.
To enhance this flow and make it truly agentic, add some tools, as explained in the next steps.
![A basic agent chat flow with Chat Input, Agent, and Chat Output components.](/img/agent-example-add-chat.png)
5. Add **News Search**, **URL**, and **Calculator** components to your flow.
6. Enable **Tool Mode** in the **News Search**, **URL**, and **Calculator** components:
1. Click the **News Search** component to expose the [component's header menu](/concepts-components#component-menus), and then enable **Tool Mode**.
2. Repeat for the **URL** and **Calculator** components.
3. Connect the **Toolset** port for each tool component to the **Tools** port on the **Agent** component.
**Tool Mode** makes a component into a tool by modifying the component's inputs.
With **Tool Mode** enabled, a component can accept requests from an **Agent** component to use the component's available actions as tools.
When in **Tool Mode**, a component has a **Toolset** port that you must connect to an **Agent** component's **Tools** port if you want to allow the agent to use that component's actions as tools.
For more information, see [Configure tools for agents](/agents-tools).
![A more complex agent chat flow where three components are connected to the Agent component as tools](/img/agent-example-add-tools.png)
8. Open the <Icon name="Play" aria-hidden="true" /> **Playground**, and then ask the agent, `What tools are you using to answer my questions?`
The agent should respond with a list of the connected tools.
It may also include built-in tools.
```text
I use a combination of my built-in knowledge (up to June 2024) and a set of external tools to answer your questions. Here are the main types of tools I can use:
Web Search & Content Fetching: I can fetch and summarize content from web pages, including crawling links recursively.
News Search: I can search for recent news articles using Google News via RSS feeds.
Calculator: I can perform arithmetic calculations and evaluate mathematical expressions.
Date & Time: I can provide the current date and time in various time zones.
These tools help me provide up-to-date information, perform calculations, and retrieve specific data from the internet when needed. If you have a specific question, let me know, and I'll use the most appropriate tool(s) to help!
```
9. To test a specific tool, ask the agent a question that uses one of the tools, such as `Summarize today's tech news`.
To help you debug and test your flows, the **Playground** displays the agent's tool calls, the provided input, and the raw output the agent received before generating the summary.
With the given example, the agent should call the **News Search** component's `search_news` action.
You've successfully created a basic agentic flow that uses some generic tools.
To continue building on this tutorial, try connecting other tool components or [use Langflow as an MCP client](/mcp-client) to support more complex and specialized tasks.
For a multi-agent example, see [Use an agent as a tool](/agents-tools#use-an-agent-as-a-tool).
## Agent component parameters
You can configure the **Agent** component to use your preferred provider and model, custom instructions, and tools.
### Provider and model
Use the **Model Provider** (`agent_llm`) and **Model Name** (`llm_model`) settings to select the model provider and LLM that you want the agent to use.
The **Agent** component includes many models from several popular model providers.
To access other providers and models, set **Model Provider** to **Custom**, and then connect a [**Language Model** component](/components-models).
:::tip
If you need to generate embeddings in your flow, use an [**Embedding Model** component](/components-embedding-models).
:::
### Model provider API key
In the **API Key** field, enter a valid authentication key for your selected model provider, if you selected one of the built-in providers.
For example, to use the default OpenAI model, you must provide a valid OpenAI API key for an OpenAI account that has credits and permission to call OpenAI LLMs.
You can enter the key directly, but it is recommended that you follow industry best practices for storing and referencing API keys.
For example, you can use a <Icon name="Globe" aria-hidden="true"/> [global variable](/configuration-global-variables) or [environment variables](/environment-variables).
For more information, see [Add component API keys to Langflow](/configuration-api-keys#add-component-api-keys-to-langflow).
If you select **Custom** as the model provider, authentication is handled in the incoming **Language Model** component.
### Agent instructions and input
In the **Agent Instructions** (`system_prompt`) field, you can provide custom instructions that you want the **Agent** component to use for every conversation.
These instructions are applied in addition to the **Input** (`input_value`), which can be entered directly or provided through another component, such as a **Chat Input** component.
### Tools
Agents are most useful when they have the appropriate tools available to complete requests.
An **Agent** component can use any Langflow component as a tool, including other agents and MCP servers.
To attach a component as a tool, you must enable **Tool Mode** on the component that you want to attach, and then attach it to the **Agent** component's **Tools** port.
For more information, see [Configure tools for agents](/agents-tools).
:::tip
To allow agents to use tools from MCP servers, use the [**MCP Tools** component](/components-agents#mcp-connection).
:::
### Additional parameters
Many optional **Agent** component input parameters are hidden by default in the visual editor.
You can view and toggle all parameters through the <Icon name="SlidersHorizontal" aria-hidden="true"/> **Controls** in the [component's header menu](/concepts-components#component-menus).
With the **Agent** component, the available parameters can change depending on the selected provider and model.
For example, some models support additional modes, arguments, or features like chat memory and temperature.
Some additional input parameters include the following:
* **Current Date** (`add_current_date_tool`): When enabled (`true`), this setting adds a tool to the agent that can retrieve the current date.
* **Handle Parse Errors** (`handle_parsing_errors`): When enabled (`true`), this setting allows the agent to fix errors, like typos, when analyzing user input.
* **Verbose** (`verbose`): When enabled (`true`), this setting records detailed logging output for debugging and analysis.
## Agent component output
The **Agent** component outputs a **Response** (`response`) that is [`Message` data](/data-types#message) containing the agent's raw response to the query.
Typically, this is passed to a **Chat Output** component to return the response in a human-readable format.
It can also be passed to other components if you need to process the response further before, or in addition to, returning it to the user.
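To sketch what consuming the **Response** output looks like over the API, the following shell snippet extracts the message text from a saved `/run` response. The JSON shape shown here is a simplified assumption for illustration, not the full Langflow response schema:

```shell
# Simplified, assumed response shape -- the real /run response wraps
# the message object in additional metadata.
cat > /tmp/sample_run_response.json <<'EOF'
{"outputs": [{"outputs": [{"results": {"message": {"text": "Hello from the agent"}}}]}]}
EOF

# Pull out the message text without jq, using grep and sed.
MESSAGE_TEXT=$(grep -o '"text": *"[^"]*"' /tmp/sample_run_response.json \
  | sed 's/.*"text": *"\([^"]*\)".*/\1/')
echo "$MESSAGE_TEXT"
```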

View file

@ -184,17 +184,17 @@ If you're creating custom components in a different location using the [LANGFLOW
```
Components must be placed inside **category folders**, not directly in the base directory.
The category folder name determines where the component appears in the Langflow **Components** menu.
For example, to add a component to the **Helpers** category, place it in a `helpers` subfolder:
```
/app/custom_components/ # LANGFLOW_COMPONENTS_PATH
└── helpers/ # Displayed within the "Helpers" category
└── custom_component.py # Your component
```
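As a quick sketch, the layout above can be created with standard shell commands; the base path here is a stand-in for wherever your `LANGFLOW_COMPONENTS_PATH` points:

```shell
# Stand-in base directory; in practice this is your LANGFLOW_COMPONENTS_PATH.
BASE_DIR=/tmp/custom_components

# Create the category folder; the folder name ("helpers") determines
# the category where the component appears.
mkdir -p "$BASE_DIR/helpers"

# Place the component file inside the category folder, never directly
# in the base directory.
touch "$BASE_DIR/helpers/custom_component.py"

ls -R "$BASE_DIR"
```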
You can have multiple category folders to organize components into different categories:
```
/app/custom_components/
├── helpers/
```

View file

@ -65,7 +65,7 @@ This component has two modes, depending on the type of server you want to access
7. Test your flow to make sure the MCP server is connected and the selected tool is used by the agent: Click **Playground**, and then enter a prompt that uses the tool you connected through the **MCP Tools** component.
For example, if you use `mcp-server-fetch` with the `fetch` tool, you could ask the agent to summarize recent tech news. The agent calls the MCP server function `fetch`, and then returns the response.
8. If you want the agent to be able to use more tools, repeat these steps to add more tool components with different servers or tools.
### Connect a Langflow MCP server {#mcp-sse-mode}
@ -83,20 +83,13 @@ In SSE mode, all flows available from the targeted server are treated as tools.
5. Test your flow to make sure the agent uses your flows to respond to queries: Click **Playground**, and then enter a prompt that uses a flow that you connected through the **MCP Tools** component.
6. If you want the agent to be able to use more flows, repeat these steps to add more **MCP Tools** components with different servers or tools selected.
## MCP Tools parameters
| Name | Type | Description |
|------|------|-------------|
| command | String | Input parameter. Stdio mode only. The MCP server startup command. Default: `uvx mcp-sse-shim@latest`. |
| sse_url | String | Input parameter. SSE mode only. The SSE URL for a Langflow project's MCP server. Default for Langflow Desktop: `http://localhost:7868/`. Default for other installations: `http://localhost:7860/api/v1/mcp/sse`. |
| tools | List[Tool] | Output parameter. [`Tool`](/data-types#tool) object containing a list of tools exposed by the MCP server. |
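As a small sketch of the `sse_url` defaults described above, this snippet picks the documented default URL based on the installation type; the hostnames and ports come from the table, and the reachability check is left commented because it needs a running Langflow server:

```shell
# Choose the documented default SSE URL for the installation type.
INSTALL_TYPE="oss"   # or "desktop" for Langflow Desktop

if [ "$INSTALL_TYPE" = "desktop" ]; then
  SSE_URL="http://localhost:7868/"
else
  SSE_URL="http://localhost:7860/api/v1/mcp/sse"
fi

echo "$SSE_URL"
# To verify the server is reachable (requires a running Langflow server):
# curl -s -N "$SSE_URL" | head -n 5
```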
## Manage connected MCP servers

View file

@ -81,13 +81,21 @@ These indicate a component _connection point_ or _port_.
Ports either accept input or produce output of a specific data type.
You can infer the data type from the field the port is attached to or from the [port's color](#port-colors).
For example, the **System Message** field accepts [message data](/data-types#message), as illustrated by the blue port icon: <Icon name="Circle" size="16" aria-label="Indigo message port" style={{ color: '#4f46e5', fill: '#4f46e5' }} />.
![Prompt component with multiple inputs](/img/prompt-component.png)
When building flows, connect output ports to input ports of the same type (color) to transfer that type of data between two components.
For information about the programmatic representation of each data type, see [Langflow data types](/data-types).
:::tip
* Hover over a port to see connection details for that port.
* Click a port to filter the **Components** menu by compatible components.
* If two components have incompatible data types, you can use a processing component like the [**Type Convert** component](/components-processing#type-convert) to convert the data between components.
:::
### Dynamic ports
Some components have ports that are dynamically added or removed.
@ -111,11 +119,7 @@ When `group_outputs=True`, outputs are displayed individually.
### Port colors
Component port colors indicate the data type ingested or emitted by the port.
For example, a **Message** port either accepts or emits `Message` data.
The following table lists the component data types and their corresponding port colors:
@ -143,7 +147,7 @@ In the context of creating and running flows, component code does the following:
* Passes results to the next component in the flow.
All components inherit from a base `Component` class that defines the component's interface and behavior.
For example, the [**Recursive Character Text Splitter** component](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/components/langchain_utilities/recursive_character.py) is a child of the [`LCTextSplitterComponent`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/base/textsplitters/model.py) class.
Each component's code includes definitions for inputs and outputs, which are represented in the **Workspace** as [component ports](/concepts-components#component-ports).
For example, the `RecursiveCharacterTextSplitter` has four inputs. Each input definition specifies the input type, such as `IntInput`, as well as the encoded name, display name, description, and other parameters for that specific input.

View file

@ -20,7 +20,7 @@ To try building and running a flow in a few minutes, see the [Langflow quickstar
When building a [flow](/concepts-flows), you primarily interact with the **Workspace**.
This is where you add [components](/concepts-components), configure them, and attach them together.
![Empty Langflow workspace](/img/workspace.png)
From the **Workspace**, you can also access the [**Playground**](#playground), [**Share** menu](#share-menu), and [**Logs**](/concepts-flows#flow-logs).
@ -43,17 +43,14 @@ From the **Workspace**, you can also access the [**Playground**](#playground), [
## Playground
From the **Workspace**, click <Icon name="Play" aria-hidden="true"/> **Playground** to test your flow.
If your flow has a **Chat Input** component, you can use the **Playground** to run your flow, chat with your flow, view inputs and outputs, and modify your AI's memories to tune your responses in real time.
For example, if your flow has **Chat Input**, **Language Model**, and **Chat Output** components, then you can chat with the LLM in the **Playground** to test the flow.
To try this for yourself, create a flow with the [**Basic Prompting** template](/basic-prompting), and then click <Icon name="Play" aria-hidden="true"/> **Playground** when editing the flow in the **Workspace**.
![Playground window](/img/playground.png)
If you have an **Agent** component in your flow, the **Playground** displays its tool calls and outputs so you can monitor the agent's tool use and understand the reasoning behind its responses.
To try an agentic flow in the **Playground**, use the [**Simple Agent** template](/simple-agent).
![Playground window with agent response](/img/playground-with-agent.png)

View file

@ -15,7 +15,7 @@ Langflow provides several ways to run flows from external applications:
* [Serve flows through a Langflow MCP server](#serve-flows-through-a-langflow-mcp-server)
Although you can use these options with an isolated, local Langflow instance, they are typically more valuable when you have [deployed a Langflow server](/deployment-overview) or packaged Langflow as a dependency of an application.
For package dependencies, see [Develop an application with Langflow](/develop-application) and [Package a flow as a Docker image](/deployment-docker#package-your-flow-as-a-docker-image).
## Use the Langflow API to run flows {#api-access}
@ -66,7 +66,7 @@ For more information, see [API keys](/configuration-api-keys) and [Get started w
### Input Schema (tweaks) {#input-schema}
Tweaks are one-time overrides that modify component parameters at runtime, rather than permanently modifying the flow itself.
For an example of tweaks in a script, see the [Quickstart](/get-started-quickstart).
:::tip
@ -82,6 +82,25 @@ These tweaks don't change the flow parameters set in the **Workspace**, and they
Adding tweaks through the **Input Schema** can help you troubleshoot formatting issues with tweaks that you manually added to Langflow API requests.
For example, the following curl command includes a tweak that disables the **Store Messages** setting in a flow's **Chat Input** component:
```bash
curl --request POST \
--url "http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID" \
--header "Content-Type: application/json" \
--header "x-api-key: LANGFLOW_API_KEY" \
--data '{
"input_value": "Text to input to the flow",
"output_type": "chat",
"input_type": "chat",
"tweaks": {
"ChatInput-4WKag": {
"should_store_message": false
}
}
}'
```
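For scripting, it can help to keep the request body, including the tweaks, in a separate file and validate the JSON before sending it. The following sketch reuses the component ID from the example above, assumes `python3` is available for the validation step, and leaves the `curl` call commented because it requires a running server and valid API key:

```shell
# Write the request body, including the tweaks object, to a file.
cat > /tmp/run_payload.json <<'EOF'
{
  "input_value": "Text to input to the flow",
  "output_type": "chat",
  "input_type": "chat",
  "tweaks": {
    "ChatInput-4WKag": {
      "should_store_message": false
    }
  }
}
EOF

# Validate the JSON before sending it to the API (python3 assumed).
python3 -m json.tool /tmp/run_payload.json > /dev/null && echo "payload OK"

# Then send it (requires a running server and valid API key):
# curl --request POST \
#   --url "http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID" \
#   --header "Content-Type: application/json" \
#   --header "x-api-key: LANGFLOW_API_KEY" \
#   --data @/tmp/run_payload.json
```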
### Use a flow ID alias
If you want your requests to use an alias instead of the actual flow ID, you can rename the flow's `/v1/run/$FLOW_ID` endpoint:
@ -104,7 +123,7 @@ For each flow, Langflow provides a code snippet that you can insert into the `<b
The chat widget only supports flows that have **Chat Input** and **Chat Output** components, which are required for the chat experience.
**Text Input** and **Text Output** components can send and receive messages, but they don't include ongoing LLM chat context.
Attempting to chat with a flow that doesn't have a [**Chat Input** component](/components-io) will trigger the flow, but the response only indicates that the input was empty.
:::
### Get a langflow-chat snippet
@ -504,7 +523,9 @@ export class AppComponent {
Each [Langflow project](/concepts-flows#projects) has an MCP server that exposes the project's flows as [tools](https://modelcontextprotocol.io/docs/concepts/tools) that [MCP clients](https://modelcontextprotocol.io/clients) can use to generate responses.
In addition to serving flows through Langflow MCP servers, you can use Langflow as an MCP client to access any MCP server, including Langflow MCP servers.
Interactions with Langflow MCP servers happen through the Langflow API's `/mcp` endpoints.
For more information, see [Use Langflow as an MCP server](/mcp-server) and [Use Langflow as an MCP client](/mcp-client).

View file

@ -1,74 +1,88 @@
---
title: Use voice mode
slug: /concepts-voice-mode
---
import Icon from "@site/src/components/icon";
<!-- TODO: Combine & redirect to /concepts-playground -->
You can use Langflow's voice mode to interact with your flows verbally through a microphone and speakers.
The Langflow **Playground** supports **voice mode** for interacting with your applications through a microphone.
## Prerequisites
An [OpenAI API key](https://platform.openai.com/) is required to use **voice mode**. An [ElevenLabs](https://elevenlabs.io) API key enables more voices in the chat, but is optional.
Voice mode requires the following:
Your flow must have a [Chat input](/components-io#chat-input) component to interact with the **Playground**.
* A flow with **Chat Input**, **Language Model**, and **Chat Output** components.
## Prerequisite
If your flow has an **Agent** component, make sure the tools in your flow have accurate names and descriptions to help the agent choose which tools to use.
- [An OpenAI API key](https://platform.openai.com/)
Additionally, be aware that voice mode overrides typed instructions in the **Agent** component's **Agent Instructions** field.
## Use voice mode in the Langflow Playground
* An [OpenAI](https://platform.openai.com/) account and an OpenAI API key because Langflow uses the OpenAI API to process voice input and generate responses.
Chat with an agent in the **Playground**, and get more recent results by asking the agent to use tools.
* Optional: An [ElevenLabs](https://elevenlabs.io) API key to enable voice options for the LLM's response.
* A microphone and speakers.
A high quality microphone and minimal background noise are recommended for optimal voice comprehension.
## Test voice mode in the Playground
In the **Playground**, click the <Icon name="Mic" aria-hidden="true"/> **Microphone** to enable voice mode and verbally interact with your flows through a microphone and speakers.
The following steps use the [**Simple Agent** template](/simple-agent) to demonstrate how to enable voice mode:
1. Create a flow based on the **Simple Agent** template.
1. Create a [Simple agent starter project](/simple-agent).
2. Add your **OpenAI API key** credentials to the **Agent** component.
3. To start a chat session, click **Playground**.
4. To enable voice mode, click the <Icon name="Mic" aria-hidden="true"/> **Microphone** icon.
The **Voice mode** pane opens.
5. In the **OpenAI API Key** field, add your **OpenAI API key** credentials.
This key is saved as a [global variable](/configuration-global-variables) in Langflow and is accessible from any component or flow.
6. Your browser may prompt you for microphone access.
Browser access is **required** to use voice mode.
To continue, allow microphone access in your browser.
7. In the **Audio Input** menu, select the input device to use with voice mode.
:::tip
A higher quality microphone improves OpenAI's voice chat comprehension.
:::
8. Optionally, add your **ElevenLabs API key** in the **ElevenLabs API Key** field.
This makes more voices available for your AI responses.
This key is saved as a [global variable](/configuration-global-variables) in Langflow and is accessible from any component or flow.
9. In the **Preferred Language** menu, select your language for conversing with Langflow.
This option changes both the spoken conversation and the chat responses in the **Playground**.
10. Talk into your microphone.
The waveform in the voice mode pane should register your input, and the agent should respond in voice and in the **Playground**.
11. Ask the agent to use the tools available to find recent news about a subject.
The agent describes its search process, including accessing the **URL** tool to fetch recent news.
The agent summarizes the recent news in speech and in the **Playground**.
Be aware of the following considerations when using voice mode:
* Name and describe your tools accurately, so the **Agent** chooses tools correctly.
* Voice mode does not use the instructions in the Agent component's **Agent Instructions** field, because your spoken instructions override this value.
* Voice mode only maintains context within the conversation session you are currently in.
If you exit a conversation and close the **Playground**, your conversational context is not available in the next chat session.
## Langflow voice mode endpoints
Langflow exposes OpenAI Realtime API-compatible websocket endpoints for your flows. You can build voice applications against these endpoints the same way you would build against [OpenAI Realtime API websockets](https://platform.openai.com/docs/guides/realtime#connect-with-websockets).
The WebSockets endpoints require an [OpenAI API key](https://platform.openai.com/docs/overview) for authentication, and they support an optional [ElevenLabs](https://elevenlabs.io) integration.
Langflow exposes two WebSockets endpoints:
* `/ws/flow_as_tool/{flow_id}` or `/ws/flow_as_tool/{flow_id}/{session_id}`: Establishes a connection to OpenAI Realtime voice, and then invokes flows as tools by the [OpenAI Realtime model](https://platform.openai.com/docs/guides/realtime-conversations#handling-audio-with-websockets).
This approach is ideal for low latency applications, but it is less deterministic since the OpenAI voice-to-voice model determines when to call your flow.
3. Click **Playground**.
4. Click the <Icon name="Mic" aria-hidden="true"/> **Microphone** icon to open the **Voice mode** dialog.
5. Enter your OpenAI API key, and then click **Save**. Langflow saves the key as a [global variable](/configuration-global-variables).
6. If you are prompted to grant microphone access, you must allow microphone access to use voice mode.
If microphone access is blocked, you won't be able to provide verbal input.
7. For **Audio Input**, select the input device to use with voice mode.
8. Optional: Add an ElevenLabs API key to enable more voices for the LLM's response.
Langflow saves this key as a global variable.
9. For **Preferred Language**, select the language you want to use for your conversations with the LLM.
This option changes both the expected input language and the response language.
10. Speak into your microphone to start the chat.
If configured correctly, the waveform in the voice mode dialog registers your input, and then the agent's logic and response are described verbally and in the **Playground**.
## Develop applications with WebSocket endpoints
Langflow exposes two OpenAI Realtime API-compatible WebSocket endpoints for your flows.
You can build applications against these endpoints the same way you would build against [OpenAI Realtime API websockets](https://platform.openai.com/docs/guides/realtime#connect-with-websockets).
The Langflow API's WebSocket endpoints require an [OpenAI API key](https://platform.openai.com/docs/overview) for authentication, and they support an optional [ElevenLabs](https://elevenlabs.io) integration with an ElevenLabs API key.
Additionally, both endpoints require that you provide the flow ID in the endpoint path.
### Voice-to-voice audio streaming
The `/ws/flow_as_tool/$FLOW_ID` endpoint establishes a connection to OpenAI Realtime voice, and then invokes the specified flow as a tool according to the [OpenAI Realtime model](https://platform.openai.com/docs/guides/realtime-conversations#handling-audio-with-websockets).
This approach is ideal for low latency applications, but it is less deterministic because the OpenAI voice-to-voice model determines when to call your flow.
### Speech-to-text audio transcription
The `/ws/flow_tts/$FLOW_ID` endpoint converts audio to text using [OpenAI Realtime voice transcription](https://platform.openai.com/docs/guides/realtime-transcription), and then directly invokes the specified flow for each transcript.
* `/ws/flow_tts/{flow_id}` or `/ws/flow_tts/{flow_id}/{session_id}`: Converts audio to text using [OpenAI Realtime voice transcription](https://platform.openai.com/docs/guides/realtime-transcription), and then the flow is invoked directly for each transcript.
This approach is more deterministic but has higher latency.
This is the mode used in the Langflow Playground.
Path parameters:
* `flow_id`: Required path parameter. The ID of the flow to be used as a tool.
* `session_id`: Optional path parameter. A unique identifier for the conversation session. If not provided, one is automatically generated.
This is the mode used in the Langflow **Playground**.
### Session IDs for WebSocket endpoints
Both endpoints accept an optional `/$SESSION_ID` path parameter to provide a unique ID for the conversation.
If omitted, Langflow uses the flow ID as the [session ID](/session-id).
However, be aware that voice mode only maintains context within the current conversation instance.
When you close the **Playground** or end a chat, verbal chat history is discarded and not available for future chat sessions.

View file

@@ -19,7 +19,11 @@ For example **Data** ports, represented by <Icon name="Circle" size="16" aria-la
When building flows, connect output ports to input ports of the same type (color) to transfer that type of data between two components.
:::tip
In the visual editor, hover over a port to see connection details for that port.
* In the visual editor, hover over a port to see connection details for that port.
* Click a port to filter the **Components** menu by compatible components.
* If two components have incompatible data types, you can use a processing component like the [**Type Convert** component](/components-processing#type-convert) to convert the data between components.
:::
## Data
@@ -130,9 +134,13 @@ When represented as tabular data, the preceding DataFrame object is structured a
## Embeddings
**Embeddings** ports <Icon name="Circle" size="16" aria-label="Emerald embeddings port" style={{ color: '#10b981', fill: '#10b981' }} /> handle vector embeddings to support functions like similarity search.
**Embeddings** ports <Icon name="Circle" size="16" aria-label="Emerald embeddings port" style={{ color: '#10b981', fill: '#10b981' }} /> emit or ingest vector embeddings to support functions like similarity search.
For example, the **Embedding Model** component outputs `embeddings` data that you can connect to an **Embedding** input port on a vector store component.
The `Embeddings` data type is used specifically by components that either produce or consume vector embeddings, such as the [embedding model components](/components-embedding-models) and [vector store components](/components-vector-stores).
For example, the **Embedding Model** component outputs `Embeddings` data that you can connect to an **Embedding** input port on a vector store component.
For information about the underlying Python classes that produce `Embeddings`, see the [LangChain Embedding models documentation](https://python.langchain.com/docs/integrations/text_embedding/).
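As an illustration of the similarity search that embeddings enable, the following sketch compares toy vectors with cosine similarity. The three-dimensional vectors and document names are fabricated for the example; real embedding models emit vectors with hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" keyed by document name.
documents = {
    "cat care guide": [0.9, 0.1, 0.0],
    "dog training tips": [0.7, 0.3, 0.2],
    "stock market report": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]  # A query embedding near the pet-related documents.
best_match = max(documents, key=lambda name: cosine_similarity(query, documents[name]))
print(best_match)
```

In a flow, the vector store component performs this kind of comparison for you over vectors produced by the connected embedding model.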
## LanguageModel
@@ -148,28 +156,26 @@ For more information, see [Use the LanguageModel output](/components-models#use-
**Message** ports <Icon name="Circle" size="16" aria-label="Indigo message port" style={{ color: '#4f46e5', fill: '#4f46e5' }} /> accept or produce `Message` data, which extends the [`Data` type](#data) with additional fields and methods for text input typically used in chat flows.
The `Message` data type provides a consistent structure for chat interactions, and it is ideal for flows like chatbots, conversational analysis, and other LLM input and output.
This data type is used by many components.
### Schema and attributes
:::tip
Components that accept or produce `Message` data may not include all attributes in the incoming or outgoing `Message` data.
As long as the data is compatible with the `Message` schema, it can be valid.
The schema is defined in [`message.py`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/schema/message.py).
When building flows, focus on the fields shown on each component in the visual editor, rather than the data types passed between components.
The details of a particular data type are often only relevant when you are debugging a flow or component that isn't producing the expected output.
The following attributes are available:
For example, a **Chat Input** component only requires the content of the **Input Text** (`input_value`) field.
The component then constructs a complete `Message` object before passing the data to other components in the flow.
:::
- `text`: Main message content
- `sender`: `"User"` or `"AI"`
- `sender_name`: Display name for sender
- `session_id`: Chat session identifier
- `timestamp`: UTC timestamp of the message
- `files`: List of file paths or images included with the message
- `content_blocks`: Handles rich content input, such as text, media, or code
- `category`: `"message"`, `"error"`, `"warning"`, or `"info"`.
### Schema, structure, and attributes
### Message structure
The `Message` schema is defined in [`message.py`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/schema/message.py).
Some `Message` attributes have their own schema definitions, such as [`content_block.py`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/schema/content_block.py).
`Message` content appears in Langflow logs, GUI outputs, and the **Playground** as the `text` string alone, but this text is actually extracted from the complete structured `Message` object.
For example, the output string `"Name: Charlie Lastname, Age: 28, Email: charlie.lastname@example.com"` is extracted from the following `Message` object:
`Message` data is structured as a JSON object.
For example:
```json
{
@@ -184,20 +190,48 @@ For example, the output string `"Name: Charlie Lastname, Age: 28, Email: charlie
}
```
### Text I/O components don't treat Message data as conversations
The attributes included in a specific `Message` object depend on the context, including the component type, flow activity, and whether the message is a query or response.
Some common attributes include the following:
[**Text Input**](/components-io#text-input) and [**Text Output**](/components-io#text-output) components have `Message` ports, but they _don't_ support conversational chat in the same way as **Chat Input** components.
- `text`: The main message content.
- `sender`: Identifies the originator of a chat message as either `User` or `Language Model`.
- `sender_name`: The display name for the sender. Defaults to `User` or `Language Model`.
- `session_id`: The chat [session identifier](/session-id).
- `flow_id`: The ID of the flow that the message is associated with. `flow_id` and `session_id` are the same if the flow doesn't use custom session IDs.
- `timestamp`: The UTC timestamp that the message was sent.
- `files`: A list of file paths or images included with the message.
- `content_blocks`: Container for rich content input, such as text, media, or code. Also contains error message information if the LLM can't process the input.
- `category`: `"message"`, `"error"`, `"warning"`, or `"info"`.
When a **Text Input** component receives `Message` data, the input isn't handled in the same way that it is when passed to a **Chat Input** component in a chat flow.
Instead, the text is treated as a static string input, not as part of an ongoing conversation.
Not all attributes are required, and some components accept message-compatible input, such as raw text input.
The strictness depends on the component.
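As a sketch of how downstream code might read these attributes, the following example pulls common fields from a `Message`-style dictionary while tolerating missing attributes. The dictionary values are illustrative, not output from a real flow.

```python
message = {
    "text": "Name: Charlie Lastname, Age: 28, Email: charlie.lastname@example.com",
    "sender": "Language Model",
    "sender_name": "AI",
    "session_id": "demo-session",
    # timestamp, files, content_blocks, and category are omitted here;
    # not every component includes every attribute.
}

def summarize(msg: dict) -> str:
    # Fall back gracefully when optional attributes are absent.
    sender = msg.get("sender_name") or msg.get("sender", "Unknown")
    files = msg.get("files") or []
    attachment_note = f" [{len(files)} file(s)]" if files else ""
    return f"{sender}: {msg.get('text', '')}{attachment_note}"

print(summarize(message))
```

Using `dict.get` with defaults mirrors the schema's flexibility: the code works whether or not a given component attached the optional attributes.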
The same is true for the **Text Output** component, which produces simple string output, rather than a response to a conversation.
### Message data in Input/Output components
In flows with [**Chat Input/Output** components](/components-io), `Message` data provides a consistent structure for chat interactions, and it is ideal for chatbots, conversational analysis, and other use cases based on a dialog with an LLM or agent.
In these flows, the **Playground** chat interface prints only the `Message` attributes that are relevant to the conversation, such as `text`, `files`, and error messages from `content_blocks`.
To see all `Message` attributes, inspect the message logs in the **Playground**.
In flows with [**Text Input/Output** components](/components-io), `Message` data is used to pass simple text strings without the chat-related metadata.
These components handle `Message` data as independent text strings, not as part of an ongoing conversation.
For this reason, a flow with only **Text Input/Output** components isn't compatible with the **Playground**.
For more information, see [Input/Output components](/components-io).
When using the Langflow API, the response includes the `Message` object along with other response data from the flow run.
Langflow API responses can be extremely verbose, so your applications must include code to extract relevant data from the response to return to the user.
For an example, see the [Langflow quickstart](/get-started-quickstart).
Additionally, input sent to the input port of input/output components does _not_ need to be a complete `Message` object because the component constructs the `Message` object that is then passed to other components in the flow or returned as flow output.
In fact, some components should not receive a complete `Message` object because some attributes, like `timestamp`, should be added by the component for accuracy.
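The extraction of relevant data from a verbose flow response can be sketched as a defensive recursive search for the first `text` value. This is an assumption-laden sketch: the `response` fragment below is illustrative and much shallower than a real Langflow API response.

```python
def find_first_text(node):
    """Depth-first search for the first string stored under a "text" key."""
    if isinstance(node, dict):
        if isinstance(node.get("text"), str):
            return node["text"]
        for value in node.values():
            found = find_first_text(value)
            if found is not None:
                return found
    elif isinstance(node, list):
        for item in node:
            found = find_first_text(item)
            if found is not None:
                return found
    return None

# Illustrative fragment; real responses nest additional metadata.
response = {
    "outputs": [
        {"outputs": [{"results": {"message": {"text": "Hello!", "sender": "Language Model"}}}]}
    ]
}
print(find_first_text(response))
```

A recursive search like this tolerates changes in nesting depth, at the cost of assuming the first `text` value found is the one you want.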
## Tool
**Tool** ports <Icon name="Circle" size="16" aria-label="Cyan tool port" style={{ color: '#06b6d4', fill: '#06b6d4' }} /> connect tools to an **Agent** component.
Tools can be other components where you enabled **Tool Mode** or they can be the dedicated **MCP Tools** component.
Tools can be components where you enabled **Tool Mode**, components that only support **Tool Mode**, or the dedicated **MCP Tools** component.
Multiple tools can be connected to the same **Agent** component at the same port.
Functionally, `Tool` data is a LangChain `StructuredTool` object that can be used in agent workflows.
For more information, see [Configure tools for agents](/agents-tools) and [Use Langflow as an MCP client](/mcp-client).
@@ -293,6 +327,6 @@ The following example shows how to inspect the output of a **Type Convert** comp
## See also
- [Custom Components](/components-custom-components)
- [Custom components](/components-custom-components)
- [Pydantic Models](https://docs.pydantic.dev/latest/api/base_model/)
- [pandas.DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html)

View file

@@ -32,13 +32,13 @@ export VARIABLE_NAME='VALUE'
```
</TabItem>
<TabItem value="windows" label="Windows" default>
<TabItem value="windows" label="Windows">
```
set VARIABLE_NAME='VALUE'
```
</TabItem>
<TabItem value="docker" label="Docker" default>
<TabItem value="docker" label="Docker">
```bash
docker run -it --rm \
-p 7860:7860 \
@@ -107,7 +107,7 @@ If it detects a supported environment variable, then it automatically adopts the
```
</TabItem>
<TabItem value="docker" label="Docker" default>
<TabItem value="docker" label="Docker">
```bash
docker run -it --rm \

View file

@@ -4,16 +4,16 @@ slug: /contributing-components
---
New components are added as objects of the [Component](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/custom/custom_component/component.py) class.
New components are added as objects of the [`Component`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/custom/custom_component/component.py) class.
Dependencies are added to the [pyproject.toml](https://github.com/langflow-ai/langflow/blob/main/pyproject.toml#L148) file.
## Contribute an example component to Langflow
Anyone can contribute an example component. For example, to create a new **Data** component called **DataFrame processor**, follow these steps to contribute it to Langflow.
Anyone can contribute an example component. For example, to create a new data component called **DataFrame processor**, follow these steps to contribute it to Langflow.
1. Create a Python file called `dataframe_processor.py`.
2. Write your processor as an object of the [Component](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/custom/custom_component/component.py) class. You'll create a new class, `DataFrameProcessor`, that will inherit from `Component` and override the base class's methods.
2. Write your processor as an object of the [`Component`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/custom/custom_component/component.py) class. You'll create a new class, `DataFrameProcessor`, that will inherit from `Component` and override the base class's methods.
```python
from typing import Any, Dict, Optional
@@ -77,7 +77,7 @@ class DataFrameProcessor(Component):
```
5. Save the `dataframe_processor.py` to the `src > backend > base > langflow > components` directory.
This example adds a **Data** component, so add it to the `/data` directory.
This example adds a data component, so add it to the `/data` directory.
6. Add the component dependency to `src > backend > base > langflow > components > data > __init__.py` as `from .dataframe_processor import DataFrameProcessor`.
You can view the [/data/__init__.py](https://github.com/langflow-ai/langflow/blob/dev/src/backend/base/langflow/components/data/__init__.py) in the Langflow repository.

View file

@@ -205,10 +205,10 @@ For more information, see the [VSCode documentation](https://code.visualstudio.c
### Additional contribution guides
- [Contribute Bundles](./contributing-bundles.mdx)
- [Contribute Components](./contributing-components.mdx)
- [Contribute Tests](./contributing-component-tests.mdx)
- [Contribute Templates](./contributing-templates.mdx)
- [Contribute bundles](./contributing-bundles.mdx)
- [Contribute components](./contributing-components.mdx)
- [Contribute tests](./contributing-component-tests.mdx)
- [Contribute templates](./contributing-templates.mdx)
## Contribute documentation

View file

@@ -9,7 +9,6 @@ You can use the Langflow Docker image to start a Langflow container.
This guide demonstrates several ways to deploy Langflow with [Docker](https://docs.docker.com/) and [Docker Compose](https://docs.docker.com/compose/):
<!-- no toc -->
* [Start a Langflow container with default values](#quickstart)
* [Clone the repo and use Docker Compose to build the Langflow Docker container](#clone-the-repo-and-build-the-langflow-docker-container) with a persistent PostgreSQL database service
* [Use a Dockerfile to package a flow as a Docker image](#package-your-flow-as-a-docker-image)

View file

@@ -41,10 +41,5 @@ After your flow is packaged as a Docker image and available on Docker Hub, deplo
For more information, see [Deploy the Langflow development environment on Kubernetes](/deployment-kubernetes-dev).
<!--TODO: Verify these scenarios-->
<!--You can host a Langflow server 24x7 and have all of your apps call that fixed server to run your flows. You can bundle the Langflow package as a dependency of a larger application and run the whole thing all together. You can containerize Langflow and serve it as a microservice for a larger application comprised of microservices like a website.-->

View file

@@ -1,8 +1,14 @@
---
title: Logging options in Langflow
title: Logs
slug: /logging
---
import Icon from "@site/src/components/icon";
This page provides information about Langflow logs, including logs for individual flows and the Langflow application itself.
## Log options
Langflow uses the `loguru` library for logging.
The default `log_level` is `ERROR`. Other options are `DEBUG`, `INFO`, `WARNING`, and `CRITICAL`.
@@ -23,4 +29,47 @@ LANGFLOW_LOG_FILE=path/to/logfile.log
LANGFLOW_LOG_ENV=container
```
To start Langflow with the values from your `.env` file, run `uv run langflow run --env-file .env`.
## Flow and component logs
After you run a flow, you can inspect the logs for each component and flow run.
For example, you can inspect `Message` objects ingested and generated by [input and output components](/components-io).
### View flow logs
In the visual editor, click **Logs** to view logs for the entire flow:
![Logs pane](/img/logs.png)
Then, click the cells in the **inputs** and **outputs** columns to inspect the `Message` objects.
For example, the following `Message` data could be the output from a **Chat Input** component:
```text
"messages": [
{
"message": "What's the recommended way to install Docker on Mac M1?",
"sender": "User",
"sender_name": "User",
"session_id": "Session Apr 21, 17:37:04",
"stream_url": null,
"component_id": "ChatInput-4WKag",
"files": [],
"type": "text"
}
],
```
In the case of Input/Output components, the original input might not be structured as a `Message` object.
For example, a **Language Model** component might pass a raw text response to a **Chat Output** component that is then transformed into a `Message` object.
### View chat logs
In the **Playground**, you can inspect the chat history for each chat session.
For more information, see [Use the Playground](/concepts-playground).
### View output from a single component
When debugging issues with the format or content of a flow's output, it can help to inspect each component's output to determine where data is being lost or malformed.
To view the output produced by a single component during the most recent run, click <Icon name="TextSearch" aria-hidden="true"/> **Inspect output** in the visual editor.

View file

@@ -3,15 +3,10 @@ title: Memory management options
slug: /memory
---
Langflow provides flexible memory management options for storage and retrieval.
Langflow provides flexible memory management options for storage and retrieval of data relevant to your flows and your Langflow server.
This includes essential Langflow database tables, file management, and caching, as well as chat memory.
This page details the following memory configuration options in Langflow.
- [Local Langflow database tables](#local-langflow-database-tables)
- [Store messages in local memory](#store-messages-in-local-memory)
- [Configure external memory](#configure-external-memory)
- [Configure the external database connection](#configure-the-external-database-connection)
- [Configure cache memory](#configure-cache-memory)
Langflow supports both local memory and external memory options.
## Local Langflow database tables
@@ -90,4 +85,8 @@ LANGFLOW_CACHE_TYPE=Async
```
Alternative caching options can be configured, but options other than the default asynchronous, in-memory cache are not supported.
The default behavior is suitable for most use cases.
## See also
* [Langflow file management](/concepts-file-management)

View file

@@ -8,6 +8,7 @@ import Icon from "@site/src/components/icon";
You can use the **Webhook** component to start a flow run in response to an external event.
With the **Webhook** component, a flow can receive data directly from external sources. Then, the flow can parse the data and pass it to other components in the flow to initiate other actions, such as calling APIs, writing to databases, and chatting with LLMs.
If the input is not valid JSON, the **Webhook** component wraps it in a `payload` object so that it can be accepted as input to trigger the flow.
The **Webhook** component provides a versatile entrypoint that can make your flows more event-driven and integrated with your entire stack of applications and services.
For example:
@@ -24,13 +25,13 @@ To use the **Webhook** component in a flow, do the following:
2. Add a [**Webhook** component](/components-data#webhook) and a [**Parser** component](/components-processing#parser) to your flow.
The **Parser** component extracts relevant data from the raw payload received by the **Webhook** component.
These two components are commonly paired together because the **Parser** component extracts relevant data from the raw payload received by the **Webhook** component.
3. Connect the Webhook component's **Data** output to the Parser component's **Data** input.
3. Connect the **Webhook** component's **Data** output to the **Parser** component's **Data** input.
4. In the Parser component's **Template** field, enter a template to parse the raw payload into structured text.
4. In the **Parser** component's **Template** field, enter a template to parse the raw payload into structured text.
In the template, use variables for payload keys in the same way you would define variables in a [**Prompt** component](/components-prompts).
In the template, use variables for payload keys in the same way you would define variables in a [**Prompt Template** component](/components-prompts).
For example, assume that you expect your **Webhook** component to receive the following JSON data:
@@ -48,15 +49,15 @@ To use the **Webhook** component in a flow, do the following:
ID: {id} - Name: {name} - Email: {email}
```
5. Connect the Parser component's **Parsed Text** output to the next logical component in your flow, such as a Chat Input component.
5. Connect the **Parser** component's **Parsed Text** output to the next logical component in your flow, such as a **Chat Input** component.
If you want to test only the Webhook and Parser components, you can connect the **Parsed Text** output directly to a Chat Output component's **Text** input. Then, you can see the parsed data in the **Playground** after you run the flow.
If you want to test only the **Webhook** and **Parser** components, you can connect the **Parsed Text** output directly to a **Chat Output** component's **Text** input. Then, you can see the parsed data in the **Playground** after you run the flow.
6. From the Webhook component's **Endpoint** field, copy the API endpoint that you will use to send data to the Webhook component and trigger the flow.
6. From the **Webhook** component's **Endpoint** field, copy the API endpoint that you will use to send data to the **Webhook** component and trigger the flow.
Alternatively, to get a complete `POST /v1/webhook/$FLOW_ID` code snippet, open the flow's [**API access** pane](/concepts-publish#api-access), and then click the **Webhook cURL** tab.
You can also modify the default curl command in the Webhook component's **cURL** field.
If this field isn't visible by default, click the Webhook component, and then click **Controls** in the component's header menu.
You can also modify the default curl command in the **Webhook** component's **cURL** field.
If this field isn't visible by default, click the **Webhook** component, and then click <Icon name="SlidersHorizontal" aria-hidden="true"/> **Controls** in the [component's header menu](/concepts-components#component-menus).
7. Send a POST request with `data` to the flow's `webhook` endpoint to trigger the flow.
@@ -69,7 +70,8 @@ To use the **Webhook** component in a flow, do the following:
-d '{"id": "12345", "name": "alex", "email": "alex@email.com"}'
```
A successful response indicates that Langflow started the flow:
A successful response indicates that Langflow started the flow.
The response doesn't include the output for the entire flow, only an indication that the flow started.
```json
{
@@ -78,25 +80,43 @@ To use the **Webhook** component in a flow, do the following:
}
```
The output for the entire flow isn't returned by the `webhook` endpoint.
8. To view the flow's most recent parsed payload, click the **Parser** component, and then click <Icon name="TextSearch" aria-hidden="true"/> **Inspect output**.
For the preceding example, the parsed payload would be a string like `ID: 12345 - Name: alex - Email: alex@email.com`.
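The substitution from payload keys to parsed string can be approximated in plain Python. This is a sketch of the behavior, not Langflow's actual **Parser** implementation; the payload and template match the example above.

```python
payload = {"id": "12345", "name": "alex", "email": "alex@email.com"}
template = "ID: {id} - Name: {name} - Email: {email}"

# Each {key} in the template is replaced by the matching payload value.
parsed = template.format_map(payload)
print(parsed)  # ID: 12345 - Name: alex - Email: alex@email.com
```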
## Troubleshoot Parser component build failure
The **Parser** component can fail to build if it doesn't receive data from the **Webhook** component or if there is a problem with the incoming data.
If this occurs, try changing the Parser component's **Mode** to **Stringify** so that the component outputs the parsed payload as a single string.
Then, you can examine the string output and troubleshoot your parsing template, or work with the parsed data in string form.
## Trigger flows with Composio webhooks
Typically, you won't manually trigger the **Webhook** component.
To learn about triggering flows with payloads from external applications, see the video tutorial [How to Use Webhooks in Langflow](https://www.youtube.com/watch?v=IC1CAtzFRE0).
## Troubleshoot flows with Webhook components
Use the following information to help address common issues that can occur with the **Webhook** component.
### Validate data received by the Webhook component
To troubleshoot a flow with a **Webhook** component and verify that the component is receiving data, you can create a small flow that outputs only the parsed payload:
1. Create a flow with **Webhook**, **Parser**, and **Chat Output** components.
2. Connect the **Webhook** component's **Data** output to the **Parser** component's **Data** input.
3. Connect the **Parser** component's **Parsed Text** output to the **Chat Output** component's **Text** input.
4. Edit the **Parser** component to set **Mode** to **Stringify**.
This mode passes the data received by the **Webhook** component as a string that is printed by the **Chat Output** component.
5. Click **Share**, select **API access**, and then copy the **Webhook cURL** code snippet.
6. Optional: Edit the `data` in the code snippet if you want to pass a different payload.
7. Send the POST request to trigger the flow.
8. Click **Playground** to verify that the **Chat Output** component printed the JSON data from your POST request.
### Parser component build failure
The **Parser** component can fail to build if it doesn't receive data from the **Webhook** component or if there is a problem with the incoming data.
If this occurs, try changing the **Parser** component's **Mode** to **Stringify** so that the component outputs the parsed payload as a single string.
Then, you can examine the string output and troubleshoot your parsing template, or work with the parsed data in string form.
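When triggering the flow from application code instead of curl, the POST request can be prepared with the Python standard library. The host, port, and flow ID below are placeholders, and the webhook path may vary with your deployment; the request is constructed but not sent here.

```python
import json
import urllib.request

# Placeholder server address and flow ID; substitute your own values.
url = "http://localhost:7860/api/v1/webhook/your-flow-id"
data = {"id": "12345", "name": "alex", "email": "alex@email.com"}

request = urllib.request.Request(
    url,
    data=json.dumps(data).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Sending is left to the caller, for example:
# with urllib.request.urlopen(request) as response:
#     print(response.read().decode("utf-8"))
print(request.method, request.full_url)
```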
## See also
- [Get started with the Langflow API](/api-reference-api-examples)
- [Webhook component](/components-data#webhook)
- [Flow trigger endpoints](/api-flows-run)

View file

@@ -52,14 +52,13 @@ For example:
![Simple agent starter flow](/img/quickstart-simple-agent-flow.png)
The Simple Agent flow consists of an [Agent component](/agents) connected to [Chat I/O components](/components-io), a [Calculator component](/components-tools#calculator-tool), and a [URL component](/components-data#url). When you run this flow, you submit a query to the agent through the Chat Input component, the agent uses the Calculator and URL tools to generate a response, and then returns the response through the Chat Output component.
The Simple Agent flow consists of an [Agent component](/agents) connected to [Chat I/O components](/components-io), a [Calculator component](/components-helpers#calculator), and a [URL component](/components-data#url). When you run this flow, you submit a query to the agent through the Chat Input component, the agent uses the Calculator and URL tools to generate a response, and then returns the response through the Chat Output component.
Many components can be tools for agents, including [Model Context Protocol (MCP) servers](/mcp-server). The agent decides which tools to call based on the context of a given query.
2. In the **Agent** component's settings, in the **OpenAI API Key** field, enter your OpenAI API key directly or click the <Icon name="Globe" aria-hidden="true"/> **Globe** to create a [global variable](/configuration-global-variables).
This guide uses an OpenAI model for demonstration purposes. If you want to use a different provider, change the **Model Provider** and **Model Name** fields, and then provide credentials for your selected provider.
3. To run the flow, click <Icon name="Play" aria-hidden="true"/> **Playground**.
@ -368,7 +367,6 @@ The following example builds on the API pane's example code to create a question
1. Incorporate your **Simple Agent** flow's `/run` snippet into the following script.
This script runs a question-and-answer chat in your terminal and stores the Agent's previous answers so you can compare them.
<Tabs groupId="Languages">
<TabItem value="Python" label="Python" default>
@ -554,7 +552,7 @@ payload = {
## Next steps
* [Use Langflow as a Model Context Protocol (MCP) server](/mcp-server)
* [Develop an application with Langflow](/develop-application)
* [Deploy a Langflow server](/deployment-overview)
* [File management](/concepts-file-management)
* [Credential management](/configuration-api-keys)

View file

@ -7,17 +7,25 @@ import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Icon from "@site/src/components/icon";
Langflow integrates with [Docling](https://docling-project.github.io/docling/) through a bundle of components for parsing documents.
## Install Docling dependency
:::important
You must install the Docling dependency to use the Docling components in Langflow.
:::
Install the Docling extra in Langflow OSS with `uv pip install langflow[docling]` or `uv pip install docling`.
To add a dependency to Langflow Desktop, add an entry for Docling to the application's `requirements.txt` file.
For more information, see [Install custom dependencies in Langflow Desktop](/install-custom-dependencies#langflow-desktop).
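Before running a flow that uses Docling components, you can sanity-check that the dependency is importable in the same Python environment that runs Langflow. This is a generic probe, not a Langflow API; `docling` is assumed to be the package's import name:

```python
import importlib.util

def dependency_available(module_name: str) -> bool:
    """Return True if the named module is importable in this environment."""
    return importlib.util.find_spec(module_name) is not None

# Probe for Docling before relying on the Docling components.
if not dependency_available("docling"):
    print("Docling not found; install it with: uv pip install langflow[docling]")
```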
## Use Docling components in a flow
:::tip
To learn more about content extraction with Docling, see the video tutorial [Docling + Langflow: Document Processing for AI Workflows](https://www.youtube.com/watch?v=5DuS6uRI5OM).
:::
This example demonstrates how to use Docling components to split a PDF in a flow:
1. Connect a **Docling** and an **ExportDoclingDocument** component to a [**Split Text**](/components-processing#split-text) component.
@ -140,8 +148,4 @@ This component exports DoclingDocument to Markdown, HTML, and other formats.
| data | Data | The exported data. |
| dataframe | DataFrame | The exported data as a DataFrame. |
</details>

View file

@ -47,7 +47,7 @@ The BigQuery component can now query your datasets and tables using your service
With your component credentials configured, query your BigQuery datasets and tables to confirm connectivity.
1. Connect **Chat Input** and **Chat Output** components to the BigQuery component.
The flow looks like this:
![BigQuery component connected to chat input and output](/img/google/integrations-bigquery.png)
2. Open the **Playground**, and then submit a valid SQL query.
@ -81,5 +81,4 @@ This example queries a table of Oscar winners stored within a BigQuery dataset c
</TabItem>
</Tabs>
A successful chat confirms the component can access the BigQuery table.

View file

@ -1,10 +1,8 @@
---
title: Set up a Notion App
slug: /integrations/notion/setup
---
To use Notion components in Langflow, you first need to create a Notion integration and configure it with the necessary capabilities. This guide walks you through setting up a Notion integration and granting it access to your Notion databases.
## Prerequisites
@ -23,7 +21,6 @@ To use Notion components in Langflow, you first need to create a Notion integrat
When creating the integration, make sure to enable the necessary capabilities based on your requirements. Refer to the [Notion Integration Capabilities](https://developers.notion.com/reference/capabilities) documentation for more information on each capability.
:::
## Configure integration capabilities
After creating the integration, you need to configure its capabilities to define what actions it can perform and what data it can access.
@ -60,29 +57,23 @@ For your integration to interact with Notion databases, you need to grant it acc
If your database contains references to other databases, you need to grant the integration access to those referenced databases as well. Repeat step 4 for each referenced database to ensure your integration has the necessary access.
:::
## Build with Notion components in Langflow
Once you have set up your Notion integration and granted it access to the required databases, you can start using the Notion components in Langflow:
- **Add Content to Page**: Converts markdown text to Notion blocks and appends them to a specified Notion page.
- **Create Page**: Creates a new page in a specified Notion database with the provided properties.
- **List Database Properties**: Retrieves the properties of a specified Notion database.
- **List Pages**: Queries a Notion database with filtering and sorting options.
- **List Users**: Retrieves a list of users from the Notion workspace.
- **Page Content Viewer**: Retrieves the content of a Notion page as plain text.
- **Search**: Searches all pages and databases that have been shared with the integration. You can filter results to either pages or databases and specify the sort direction.
- **Update Page Property**: Updates the properties of an existing Notion page.
Each of these components can output `Data` and `Tool` [data types](/data-types).
## Next steps
- [Notion Agent for Meeting Notes flow](/integrations/notion/notion-agent-meeting-notes)
- [Notion Conversational Agent flow](/integrations/notion/notion-agent-conversational)
- [Notion API Documentation](https://developers.notion.com/docs/getting-started)

View file

@ -3,12 +3,12 @@ title: Notion Conversational Agent
slug: /integrations/notion/notion-agent-conversational
---
The Notion Conversational Agent is an AI-powered assistant that interacts with your Notion workspace through natural language conversations. This flow performs Notion-related tasks like creating pages, searching for information, and managing content, all through a chat interface.
## Prerequisites
- [Notion App](/integrations/notion/setup)
- [Notion account and API key](https://www.notion.so/my-integrations)
- [OpenAI API key](https://platform.openai.com/account/api-keys)
@ -16,19 +16,11 @@ The Notion Conversational Agent is an AI-powered assistant that interacts with y
![Notion Components Toolkit](./notion_conversational_agent_tools.png)
## Components
- **Chat Input**: Accepts user queries and commands
- **Chat Output**: Displays the agent's responses
- **Language Model**: Processes user input and generates responses with an OpenAI model
- **Tool Calling Agent**: Coordinates the use of various Notion tools based on user input
- **Toolkit**: Combines multiple Notion-specific tools into a single toolkit
- **Notion Tools**: Various components for interacting with Notion, including:
@ -40,18 +32,13 @@ The Notion Conversational Agent is an AI-powered assistant that interacts with y
- Update Page Property
- Add Content to Page
- Search
- **Message History**: Stores conversation history
- **Prompt Template**: Provides system instructions and context for the agent
- **Current Date**: Supplies the current date and time for context
## Run the Conversational Notion Agent
1. Open Langflow and create a new flow.
2. Add the components listed above to your flow canvas, or download the [Conversation Agent Flow](./Conversational_Notion_Agent.json)(Download link) and **Import** the JSON file into Langflow.
3. Connect the components as shown in the flow diagram.
4. Input the Notion and OpenAI API keys in their respective components.
@ -65,8 +52,6 @@ The Notion Conversational Agent is an AI-powered assistant that interacts with y
## Example Interactions
```
User: List all the users in my Notion workspace.
@ -114,26 +99,19 @@ I've successfully added the description to your "Website Redesign" project page.
Description: Redesign company website to improve user experience and modernize the look.
The description has been added as a new text block on the page. Is there anything else you'd like me to add or modify on this project page?
```
## Customization
Customize this flow by:
1. Adjust the system prompt to change the agent's behavior or knowledge base.
2. Add or remove Notion tools based on your specific needs.
3. Modify the OpenAI model parameters (e.g., temperature) to adjust the agent's response style.
## Troubleshooting
If you encounter issues:
1. Ensure all API keys are correctly set and have the necessary permissions.

View file

@ -8,38 +8,31 @@ import Icon from "@site/src/components/icon";
The Notion Agent for Meeting Notes is an AI-powered tool that automatically processes meeting transcripts and updates your Notion workspace. It identifies tasks, action items, and key points from your meetings, then creates new tasks or updates existing ones in Notion without manual input.
## Prerequisites
- [Notion App](/integrations/notion/setup)
- [Notion API key](https://www.notion.so/my-integrations)
- [OpenAI API key](https://platform.openai.com/account/api-keys)
- [Download Flow Meeting Agent Flow](./Meeting_Notes_Agent.json)(Download link)
:::important
Before using this flow, ensure you have obtained the necessary API keys from Notion and OpenAI. These keys are essential for the flow to function properly. Keep them secure and do not share them publicly.
:::
## Components
![Notion Meeting Agent Part 1](./notion_meeting_agent_part_1.png)
### Meeting Transcript (text input)
This component allows users to input the meeting transcript directly into the flow.
### List Users (Notion component)
- **Purpose**: Retrieves a list of users from the Notion workspace.
- **Input**: Notion Secret (API key)
- **Output**: List of user data
### List Databases (Notion component)
- **Purpose**: Searches and lists all databases in the Notion workspace.
- **Input**:
@ -84,7 +77,7 @@ This component creates a dynamic prompt template using the following inputs:
- Update Page Property
- Add Content to Page
### Notion components (tools)
#### List Database Properties
@ -123,8 +116,6 @@ Displays the final output of the Notion Agent in the Playground.
## Flow Process
1. The user inputs a meeting transcript.
2. The flow retrieves the list of Notion users and databases.
3. A prompt is generated using the transcript, user list, database list, and current date.
@ -137,8 +128,6 @@ Displays the final output of the Notion Agent in the Playground.
## Run the Notion Meeting Notes flow
To run the Notion Agent for Meeting Notes:
1. Open Langflow and create a new flow.
@ -153,8 +142,6 @@ For optimal results, use detailed meeting transcripts. The quality of the output
## Customization
Customize this flow by:
@ -165,8 +152,6 @@ Customize this flow by:
## Troubleshooting
If you encounter issues:
1. Ensure all API keys are correctly set and have the necessary permissions.

View file

@ -5,8 +5,6 @@ slug: /integrations-assemblyai
import Icon from "@site/src/components/icon";
The AssemblyAI components allow you to apply powerful Speech AI models to your app for tasks like:
- Transcribing audio and video files
@ -31,7 +29,7 @@ Enter the key in the *AssemblyAI API Key* field in all components that require t
## Components
![AssemblyAI components](./assemblyai-components.png)
### AssemblyAI Start Transcript

View file

@ -3,8 +3,6 @@ title: LangSmith
slug: /integrations-langsmith
---
LangSmith is a full-lifecycle DevOps service from LangChain that provides monitoring and observability. To integrate with Langflow, add your LangChain API key and configuration as Langflow environment variables, and then start Langflow.
1. Obtain your LangChain API key from [https://smith.langchain.com](https://smith.langchain.com/)
@ -24,7 +22,6 @@ Alternatively, export the environment variables in your terminal:
3. Restart Langflow using `langflow run --env-file .env`
4. Run a project in Langflow.
5. View the LangSmith dashboard for monitoring and observability.
![LangSmith dashboard](/img/langsmith-dashboard.png)

View file

@ -3,37 +3,41 @@ title: LangWatch
slug: /integrations-langwatch
---
[LangWatch](https://app.langwatch.ai/) is an all-in-one LLMOps platform for monitoring, observability, analytics, evaluations, and alerting that helps you get user insights and improve your LLM workflows.
## Integrate LangWatch observability
To integrate with Langflow, add your LangWatch API key as a Langflow environment variable:
1. Get a LangWatch API key from your LangWatch account.
2. Add the key to your Langflow `.env` file:
```shell
LANGWATCH_API_KEY="API_KEY_STRING"
```
Alternatively, you can set the environment variable in your terminal session:
```shell
export LANGWATCH_API_KEY="API_KEY_STRING"
```
3. Restart Langflow with your `.env` file, if you modified the Langflow `.env`:
```shell
langflow run --env-file .env
```
4. Run a flow.
5. View the LangWatch dashboard for monitoring and observability.
![LangWatch dashboard](/img/langwatch-dashboard.png)
## Use the LangWatch Evaluator
In your flows, you can add the **LangWatch Evaluator** component to assess a model's performance with LangWatch's evaluation endpoints.
This component is available in the LangWatch bundle in the **Components** menu.
For more information, see [Bundles](/components-bundle-components).

View file

@ -15,44 +15,54 @@ Use the [MCP Tools component](/mcp-client) to connect Langflow to a [Datastax As
3. Create an [Astra DB Serverless (Vector) database](https://docs.datastax.com/en/astra-db-serverless/databases/create-database.html#create-vector-database), if you don't already have one.
4. Get your database's Astra DB API endpoint and an Astra application token with the **Database Administrator** role. For more information, see [Generate an application token for a database](https://docs.datastax.com/en/astra-db-serverless/administration/manage-application-tokens.html#database-token).
5. To follow along with this guide, create a flow based on the [**Simple Agent** template](/simple-agent).
You can also use an existing flow or create a blank flow.
6. Remove the **URL** tool, and then replace it with an [**MCP Tools** component](/mcp-client).
7. Configure the **MCP Tools** component as follows:
1. Select **Stdio** mode.
2. In the **MCP server** field, add the following code to connect to an Astra DB MCP server:
```bash
npx -y @datastax/astra-db-mcp
```
3. In the **Env** fields, add variables for `ASTRA_DB_APPLICATION_TOKEN` and `ASTRA_DB_API_ENDPOINT` with the values from your Astra database.
:::important
Environment variables declared in your Langflow `.env` file can be referenced in your MCP server commands, but you cannot reference global variables declared in Langflow.
If you want to use variables for `ASTRA_DB_APPLICATION_TOKEN` and `ASTRA_DB_API_ENDPOINT`, add them to Langflow's `.env` file, and then restart Langflow.
For more information, see [global variables](/configuration-global-variables).
:::
Add each variable separately.
To add another variable field click <Icon name="Plus" aria-hidden="true"/> **Add More**.
```bash
ASTRA_DB_APPLICATION_TOKEN=AstraCS:...
```
```bash
ASTRA_DB_API_ENDPOINT=https://...-us-east-2.apps.astra.datastax.com
```
8. In the **Agent** component, add your **OpenAI API key**.
The default model is an OpenAI model.
If you want to use a different model, edit the **Model Provider**, **Model Name**, and **API Key** fields accordingly.
![The Simple Agent flow with the URL tool replaced by an MCP Tools component, and the MCP Tools component launching an Astra DB MCP server](/img/component-mcp-astra-db.png)
9. Open the **Playground**, and then ask the agent, `What collections are available?`
Since Langflow is connected to your Astra DB database through the MCP server, the agent chooses the correct tool and connects to your database to retrieve the answer.
For example:
```text
The available collections in your database are:

View file

@ -14,7 +14,7 @@ As Langflow development continues, components are often recategorized or depreca
If a component appears to be missing from the expected location on the **Components** menu, try the following:
* Search for the component or check other component categories, including [**Bundles**](/components-bundle-components).
* [Expose legacy components](/concepts-components#component-menus), which are hidden by default.
* Check the [Changelog](https://github.com/langflow-ai/langflow/releases/latest) for component changes in recent releases.
* Make sure the component isn't already present in your flow if it is a single-use component.
@ -23,7 +23,11 @@ If you still cannot locate the component, see [Langflow GitHub Issues and Discus
## No input in the Playground
If there is no message input field in the **Playground**, make sure your flow has a [**Chat Input** component](/components-io) that is connected, directly or indirectly, to the **Input** port of a **Language Model** or **Agent** component.
Because the **Playground** is designed for flows that use an LLM in a query-and-response format, such as chatbots and agents, a flow must have **Chat Input**, **Language Model**/**Agent**, and **Chat Output** components to be fully supported by the **Playground**'s chat interface.
For more information, see [Use the **Playground**](/concepts-playground).
## Missing key, no key found, or invalid API key
@ -153,39 +157,6 @@ The cache folder location depends on your OS:
- **WSL2 on Windows**: `home/<username>/.cache/langflow/`
- **macOS**: `/Users/<username>/Library/Caches/langflow/`
<!--
### Unexpected data loss after Langflow Desktop upgrade {#data-loss}
If you upgrade Langflow Desktop and find that your projects, flows, and settings have been replaced by a fresh installation, follow these steps to attempt to recover the data from the prior version:
:::important
Any projects, flows, and settings you created after the upgrade will be overwritten when you recover the data from your previous installation.
:::
<Tabs>
<TabItem value="Linux and macOS" label="Linux and macOS" default>
1. Navigate to `~/.langflow/.langflow-venv/lib/python3.12/site-packages/langflow`.
2. Copy `langflow.db`, paste it in `~/.langflow/data`, and then rename it to `database.db`.
This overwrites the existing `database.db` with your previous version's internal Langflow database.
3. Launch Langflow Desktop to verify that your projects, flows, and settings have been restored.
</TabItem>
<TabItem value="Windows" label="Windows">
1. Navigate to `C:\Users\USERNAME\.langflow\.langflow-venv\Lib\site-packages\langflow`.
2. Copy `langflow.db`, paste it in `C:\Users\<name>\AppData\Roaming\com.Langflow\data`, and then rename it to `database.db`.
This overwrites the existing `database.db` with your previous version's internal Langflow database.
3. Launch Langflow Desktop to verify that your projects, flows, and settings have been restored.
</TabItem>
</Tabs>
-->
## Langflow uninstall issues
The following issues can occur when uninstalling Langflow.

View file

@ -40,7 +40,7 @@ The **Travel Planning Agent** flow consists of these components:
## Run the travel planning agent flow
1. Add your credentials to the OpenAI and Search API components.
2. To run the flow, click <Icon name="Play" aria-hidden="true"/> **Playground**.
You should receive a detailed, helpful response about the journey defined in the **Chat Input** component.

View file

@ -47,17 +47,17 @@ When connected to an **Agent** component as tools, the agent has the option to u
The **Playground** prints the agent's chain of thought as it selects tools to use and interacts with functionality provided by those tools.
For example, the agent can use the **Directory** component's `as_dataframe` tool to retrieve a [DataFrame](/data-types#dataframe), and the **Web search** component's `perform_search` tool to find links to related items.
## Add a Prompt Template component to the flow
In this example, the application sends a customer's email address to the Langflow agent. The agent compares the customer's previous orders within the Directory component, searches the web for used versions of those items, and returns three results.
1. To include the email address as a value in your flow, add a [**Prompt Template**](/components-prompts) component to your flow between the **Chat Input** and **Agent**.
2. In the **Prompt Template** component's **Template** field, enter `Recommend 3 used items for {email}, based on previous orders.`
Adding the `{email}` value in curly braces creates a new input in the **Prompt Template** component, and the component connected to the `{email}` port is supplying the value for that variable.
This creates a point for the user's email to enter the flow from your request.
If you aren't using the `customer_orders.csv` example file, modify the input to search for a value in your dataset.
At this point your flow has six components. The **Chat Input** is connected to the **Prompt Template** component's **email** port. Then, the **Prompt Template** output is connected to the **Agent** component's **System Message** port. The **Directory** and **Web Search** components are connected to the **Agent** component's **Tools** port. Finally, the **Agent** component's output is connected to the **Chat Output** component, which returns the final response to the application.
![An agent component connected to web search and directory components](/img/tutorial-agent-with-directory.png)
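The `{email}` variable behaves like standard curly-brace string formatting. As a rough sketch of what the **Prompt Template** component does with the incoming value:

```python
# The template from step 2; Langflow fills {email} from the connected port.
TEMPLATE = "Recommend 3 used items for {email}, based on previous orders."

def render_prompt(template: str, **variables: str) -> str:
    """Substitute the template's curly-brace variables with the supplied values."""
    return template.format(**variables)

print(render_prompt(TEMPLATE, email="customer@example.com"))
# → Recommend 3 used items for customer@example.com, based on previous orders.
```

In the flow, the **Chat Input** component plays the role of the `email` argument here: whatever the user submits is substituted into the template before it reaches the agent.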

View file

@ -185,7 +185,11 @@ For help with constructing file upload requests in Python, JavaScript, and curl,
## Next steps
To continue building on this tutorial, try these next steps.
### Process multiple files loaded at runtime
To process multiple files in a single flow run, add a separate **File** component for each file you want to ingest. Then, modify your script to upload each file, retrieve each returned file path, and then pass a unique file path to each **File** component ID.
For example, you can modify `tweaks` to accept multiple file components.
The following code is just an example; it is not working code:
@ -206,4 +210,18 @@ def chat_with_flow(input_message, file_paths):
tweaks[component_id] = {"path": file_path}
```
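As a minimal runnable sketch of the same idea, with hypothetical component IDs, each uploaded file path is paired with one **File** component in the `tweaks` payload:

```python
# Hypothetical component IDs; use the File component IDs from your own flow.
FILE_COMPONENT_IDS = ["File-abc12", "File-def34"]

def build_tweaks(file_paths: list[str]) -> dict:
    """Pair each uploaded file path with one File component in the tweaks payload."""
    if len(file_paths) != len(FILE_COMPONENT_IDS):
        raise ValueError("Provide exactly one file path per File component.")
    return {
        component_id: {"path": file_path}
        for component_id, file_path in zip(FILE_COMPONENT_IDS, file_paths)
    }

print(build_tweaks(["uploads/orders.csv", "uploads/returns.csv"]))
```

The resulting dictionary can be passed as the `tweaks` value in the `/run` request body, so each **File** component receives its own uploaded file path.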
You can also use a [**Directory** component](/components-data#directory) to load all files in a directory or pass an archive file to the **File** component.
### Upload external files at runtime
To upload files from another machine that is not your local environment, your Langflow server must first be accessible over the internet. Then, authenticated users can upload files to your public Langflow server's `/v2/files/` endpoint, as shown in the tutorial. For more information, see [Langflow deployment overview](/deployment-overview).
### Preload files outside the chat session
You can use the **File** component to load files anywhere in a flow, not just in a chat session.
In the visual editor, you can preload files to the file component by selecting them from your local machine or [Langflow file management](/concepts-file-management).
For example, you can preload an instructions file for a prompt template, or you can preload a vector store with documents that you want to query in a Retrieval Augmented Generation (RAG) flow.
For more information about the **File** component and other data loading components, see [Data components](/components-data).

You can run Langflow as an MCP client and an MCP server:
* [Use Langflow as an MCP client](/mcp-client): When run as an MCP client, an **Agent** component in a Langflow flow can use connected components as tools to handle requests.
You can use existing components as tools, and you can connect any MCP server to your flow to make that server's tools available to the agent.
* [Use Langflow as an MCP server](/mcp-server): When run as an MCP server, your flows become tools that can be used by an MCP client, which could be an external client or another Langflow flow.
You need one **MCP Tools** component for each MCP server that you want your flow to use.
For this tutorial, don't enter anything in this field.
Instead, you will add a geolocation MCP server in the next step, which the agent will use to detect your location.
6. Click the **MCP Tools** component, enable **Tool Mode** in the [component's header menu](/concepts-components#component-menus), and then connect the component's **Toolset** port to the **Agent** component's **Tools** port.
At this point your flow has four connected components:
To add the Toolkip MCP server to your flow, do the following:
5. Click **Add Server**, and then wait for the **Actions** list to populate. This means that the MCP server successfully connected.
6. Click the **MCP Tools** component, enable **Tool Mode** in the [component's header menu](/concepts-components#component-menus), and then connect the component's **Toolset** port to the **Agent** component's **Tools** port.
Your flow now has an additional **MCP Tools** component for a total of five components.
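When you add a server to an **MCP Tools** component, MCP servers are commonly described with the standard `mcpServers` JSON shape. The server name and package below are illustrative placeholders, not the tutorial's specific servers:

```json
{
  "mcpServers": {
    "everything": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-everything"]
    }
  }
}
```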

```javascript
module.exports = {
  // ...
        {
          type: "html",
          className: "sidebar-ad",
          value: `
            <a href="https://www.langflow.org/desktop" target="_blank" class="menu__link">
              <svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
                <g clip-path="url(#clip0_1645_37)">
                  <path d="M12 17H20C21.1046 17 22 16.1046 22 15V13M12 17H4C2.89543 17 2 16.1046 2 15V5C2 3.89543 2.89543 3 4 3H10M12 17V21M8 21H12M12 21H16M11.75 10.2917H13.2083L16.125 7.375H17.5833L20.5 4.45833H21.9583M16.125 11.75H17.5833L20.5 8.83333H21.9583M11.75 5.91667H13.2083L16.125 3H17.5833" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
                </g>
                <defs>
                  <clipPath id="clip0_1645_37">
                    <rect width="24" height="24" fill="white"/>
                  </clipPath>
                </defs>
              </svg>
              <div class="sidebar-ad-text-container">
                <span class="sidebar-ad-text">Get started in minutes</span>
                <span class="sidebar-ad-text sidebar-ad-text-gradient">Download Langflow Desktop</span>
              </div>
            </a>
          `,
        },
      ],
    };
};
```

New file: `docs/static/logos/monitor-langflow.svg`
```xml
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
  <g clip-path="url(#clip0_1645_37)">
    <path d="M12 17H20C21.1046 17 22 16.1046 22 15V13M12 17H4C2.89543 17 2 16.1046 2 15V5C2 3.89543 2.89543 3 4 3H10M12 17V21M8 21H12M12 21H16M11.75 10.2917H13.2083L16.125 7.375H17.5833L20.5 4.45833H21.9583M16.125 11.75H17.5833L20.5 8.83333H21.9583M11.75 5.91667H13.2083L16.125 3H17.5833" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
  </g>
  <defs>
    <clipPath id="clip0_1645_37">
      <rect width="24" height="24" fill="white"/>
    </clipPath>
  </defs>
</svg>
```
