diff --git a/docs/docs/API-Reference/api-build.mdx b/docs/docs/API-Reference/api-build.mdx
index fe4d51262..5f445e70e 100644
--- a/docs/docs/API-Reference/api-build.mdx
+++ b/docs/docs/API-Reference/api-build.mdx
@@ -7,14 +7,14 @@ import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
:::important
-The `/build` endpoints are used by Langflow's frontend **Workspace** and **Playground** code.
+The `/build` endpoints are used by Langflow's frontend visual editor code.
These endpoints are part of the internal Langflow codebase.
Don't use these endpoints to run flows in applications that use your Langflow flows.
To run flows in your apps, see [Flow trigger endpoints](/api-flows-run).
:::
-The `/build` endpoints support Langflow's frontend code for building flows in the Langflow Workspace.
+The `/build` endpoints support Langflow's frontend code for building flows in the Langflow visual editor.
You can use these endpoints to build vertices and flows, as well as execute flows with streaming event responses.
You might need to use or understand these endpoints when contributing to the Langflow codebase.
@@ -105,7 +105,8 @@ curl -X GET \
The `/build` endpoint accepts optional values for `start_component_id` and `stop_component_id` to control where the flow run starts and stops.
Setting `stop_component_id` for a component triggers the same behavior as clicking the **Play** button on that component, where all dependent components leading up to that component are also run.
-For example, to stop flow execution at the OpenAI model component, run the following command:
+
+The following example stops flow execution at an **OpenAI** component:
```shell
curl -X POST \
@@ -119,7 +120,7 @@ curl -X POST \
### Override flow parameters
The `/build` endpoint also accepts inputs for `data` directly, instead of using the values stored in the Langflow database.
-This is useful for running flows without having to pass custom values through the UI.
+This is useful for running flows without having to pass custom values through the visual editor.
```shell
curl -X POST \
diff --git a/docs/docs/API-Reference/api-files.mdx b/docs/docs/API-Reference/api-files.mdx
index c0d4aaf28..05e01f5e9 100644
--- a/docs/docs/API-Reference/api-files.mdx
+++ b/docs/docs/API-Reference/api-files.mdx
@@ -237,11 +237,10 @@ The `/v2/files` endpoint can't send image files to flows.
To send image files to your flows through the API, see [Upload image files (v1)](#upload-image-files-v1).
:::
-Send a file to your flow for analysis using the [File](/components-data#file) component and the API.
-Your flow must contain a [File](/components-data#file) component to receive the file.
+This endpoint uploads files to your Langflow server's file management system.
+To use an uploaded file in a flow, send the file path to a flow with a [**File** component](/components-data#file).
-The default file limit is 100 MB. To configure this value, change the `LANGFLOW_MAX_FILE_SIZE_UPLOAD` environment variable.
-For more information, see [Supported environment variables](/environment-variables#supported-variables).
+The default file limit is 100 MB. To configure this value, change the `LANGFLOW_MAX_FILE_SIZE_UPLOAD` [environment variable](/environment-variables).
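For example, to raise the limit before starting Langflow, you could set the variable in your shell. This is a sketch; it assumes the variable takes the limit in megabytes, matching the 100 MB default:

```shell
# Sketch: raise the default 100 MB upload limit to 500 MB
# before starting Langflow.
# Assumes LANGFLOW_MAX_FILE_SIZE_UPLOAD is interpreted in megabytes.
export LANGFLOW_MAX_FILE_SIZE_UPLOAD=500
```

Set the variable in the same environment where you start the Langflow server so the new limit takes effect.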
1. To send a file to your flow with the API, POST the file to the `/api/v2/files` endpoint.
@@ -269,11 +268,10 @@ For more information, see [Supported environment variables](/environment-variabl
}
```
-2. To use this file in your flow, add a [File](/components-data#file) component to load a file into the flow.
-3. To load the file into your flow, send it to the **File** component.
+2. To use this file in your flow, add a **File** component to your flow.
+This component loads files into flows from your local machine or Langflow file management.
- To retrieve the **File** component's full name with the UUID attached, call the [Read flow](/api-flows#read-flow) endpoint, and then include your **File** component and the file path as a tweak with the `/v1/run` POST request.
- In this example, the file uploaded to `/v2/files` is included with the `/v1/run` POST request.
+3. Run the flow, passing the file `path` to the **File** component in the `tweaks` object:
```text
curl --request POST \
@@ -294,13 +292,9 @@ For more information, see [Supported environment variables](/environment-variabl
}'
```
-
- Result
+   To get the **File** component's ID, call the [Read flow](/api-flows#read-flow) endpoint or inspect the component in the visual editor.
- ```text
- "text":"This document provides important safety information and instructions for selecting, installing, and operating Briggs & Stratton engines. It includes warnings and guidelines to prevent injury, fire, or damage, such as choosing the correct engine model, proper installation procedures, safe fuel handling, and correct engine operation. The document emphasizes following all safety precautions and using authorized parts to ensure safe and effective engine use."
- ```
-
+ If the file path is valid, the flow runs successfully.
### List files (v2)
diff --git a/docs/docs/API-Reference/api-flows-run.mdx b/docs/docs/API-Reference/api-flows-run.mdx
index b768210d8..7043b2834 100644
--- a/docs/docs/API-Reference/api-flows-run.mdx
+++ b/docs/docs/API-Reference/api-flows-run.mdx
@@ -178,7 +178,7 @@ curl -X POST \
Use the `/webhook` endpoint to start a flow by sending an HTTP `POST` request.
:::tip
-After you add a [**Webhook** component](/components-data#webhook) to a flow, open the [**API access** pane](/concepts-publish), and then click the **Webhook cURL** tab to get an automatically generated `POST /webhook` request for your flow.
+After you add a [**Webhook** component](/components-data#webhook) to a flow, open the [**API access** pane](/concepts-publish), and then click the **Webhook curl** tab to get an automatically generated `POST /webhook` request for your flow.
For more information, see [Trigger flows with webhooks](/webhook).
:::
diff --git a/docs/docs/API-Reference/api-projects.mdx b/docs/docs/API-Reference/api-projects.mdx
index 3b762d20d..a231cde4d 100644
--- a/docs/docs/API-Reference/api-projects.mdx
+++ b/docs/docs/API-Reference/api-projects.mdx
@@ -8,8 +8,6 @@ import TabItem from '@theme/TabItem';
Use the `/projects` endpoint to create, read, update, and delete [Langflow projects](/concepts-flows#projects).
-Projects store your flows and components.
-
## Read projects
Get a list of Langflow projects, including project IDs, names, and descriptions.
@@ -70,7 +68,7 @@ curl -X POST \
To add flows and components at project creation, retrieve the `components_list` and `flows_list` values from the [`/all`](/api-reference-api-examples#get-all-components) and [/flows/read](/api-flows#read-flows) endpoints and add them to the request body.
-Adding a flow to a project moves the flow from its previous location. The flow is not copied.
+Adding a flow to a project moves the flow from its previous location. The flow isn't copied.
```bash
curl -X POST \
diff --git a/docs/docs/API-Reference/api-reference-api-examples.mdx b/docs/docs/API-Reference/api-reference-api-examples.mdx
index 0f64dfa83..f9c7aeaae 100644
--- a/docs/docs/API-Reference/api-reference-api-examples.mdx
+++ b/docs/docs/API-Reference/api-reference-api-examples.mdx
@@ -11,8 +11,8 @@ You can use the Langflow API for programmatic interactions with Langflow, such a
* Create and edit flows, including file management for flows.
* Develop applications that use your flows.
* Develop custom components.
-* Build Langflow as a dependency of a larger project.
-* Contribute to the overall Langflow project.
+* Build Langflow as a dependency of a larger application, codebase, or service.
+* Contribute to the overall Langflow codebase.
To view and test all available endpoints, you can access the Langflow API's OpenAPI specification at your Langflow deployment's `/docs` endpoint, such as `http://localhost:7860/docs`.
@@ -26,7 +26,7 @@ The quickstart demonstrates how to get automatically generated code snippets for
While individual options vary by endpoint, all Langflow API requests share some commonalities, like a URL, method, parameters, and authentication.
-As an example of a Langflow API request, the following curl command calls the `/v1/run` endpoint, and it passes a runtime override (`tweaks`) to the flow's Chat Output component:
+As an example of a Langflow API request, the following curl command calls the `/v1/run` endpoint, and it passes a runtime override (`tweaks`) to the flow's **Chat Output** component:
```bash
curl --request POST \
diff --git a/docs/docs/Agents/agents-tools.mdx b/docs/docs/Agents/agents-tools.mdx
index 85b755255..c5a1a2ea9 100644
--- a/docs/docs/Agents/agents-tools.mdx
+++ b/docs/docs/Agents/agents-tools.mdx
@@ -7,21 +7,44 @@ import Icon from "@site/src/components/icon";
Configure tools connected to agents to extend their capabilities.
-## Edit a tool component's actions
+## Edit a tool's actions {#edit-a-tools-actions}
-To edit a tool's actions, in the tool component, click **Edit Tools** to modify its `name`, `description`, or `enabled` metadata.
-These fields help connected agents understand how to use the action, without having to modify the agent's prompt instructions.
+When you set any component to **Tool Mode** or **Tool** output, an agent can use the actions (functions) provided by that component.
+Available actions are listed in the tool component's **Actions** list.
-For example, the [URL](/components-data#url) component has two actions available when **Tool Mode** is enabled:
+You can change each action's labels, descriptions, and availability to help the agent understand how to use the tool and prevent it from using irrelevant or undesired actions.
-| Tool Name | Description | Enabled |
-|-----------|-------------|---------|
-| `fetch_content` | Fetch content from web pages recursively | true |
-| `fetch_content_as_message` | Fetch web content formatted as messages | true |
+:::tip
+If an agent seems to be using a tool incorrectly, try editing the action metadata to clarify the tool's purpose and disable unnecessary actions.
-A Langflow Agent has a clear idea of each tool's capabilities based on the `name` and `description` metadata. The `enabled` boolean controls the tool's availability to the agent. If you think an agent is using a tool incorrectly, edit a tool's `description` metadata to help the agent better understand the tool.
+You can also try using a **Prompt Template** component to pass additional instructions or examples to the agent.
+:::
-Tool names and descriptions can be edited, but the default tool identifiers cannot be changed. If you want to change the tool identifier, create a custom component.
+To view and edit a tool's actions, click **Edit Tool Actions** on the tool component.
+
+The following information is provided for each action:
+
+* **Enabled**: A checkbox that determines whether the action is available to the agent.
+If unchecked, the agent can't use the action.
+
+* **Name**: A human-readable display name for the action, such as `Fetch Content`. This value can't be changed.
+
+* **Description**: A human-readable description of the action's purpose, such as `Fetch content from web pages recursively`.
+
+* **Slug**: An encoded name for the action, usually the same as the name but in snake case, such as `fetch_content`.
+
+To edit the **Description** or **Slug**, double-click anywhere on the action's row to open the edit pane.
+The **Name** field in the edit pane maps to the **Slug** column.
+Changes are saved automatically when you click out of a field or close the dialog.
+
+Optionally, you can provide fixed values for an action's inputs. Typically, you leave these blank so the agent can provide its own values. Fixed values are useful when you're debugging an agent's behavior or when your use case requires a specific input for an action.
## Use an agent as a tool
@@ -33,9 +56,9 @@ To try this for yourself, add an additional agent to the **Simple Agent** templa
2. Add a second **Agent** component to the flow.
3. Add your **OpenAI API Key** to both **Agent** components.
4. In the second **Agent** component, change the model to `gpt-4.1`, and then enable **Tool Mode**.
-5. Click **Edit Tools** to set tool names and descriptions that help the primary agent understand how to use those tools.
+5. Click **Edit Tool Actions** to [edit the tool's actions](#edit-a-tools-actions).
- For this example, change the tool name to `Agent-gpt-41` and set the description to `Use the gpt-4.1 model for complex problem solving`.
+   For this example, change the action's **Slug** to `Agent-gpt-41` and set its **Description** to `Use the gpt-4.1 model for complex problem solving`.
This lets the primary agent know that this tool uses the `gpt-4.1` model, which could be helpful for tasks requiring a larger context window, such as large scrape and search tasks.
As another example, you could attach several specialized models to a primary agent, such as agents that are trained on certain tasks or domains, and then the primary agent would call each specialized agent as needed to respond to queries.
@@ -50,7 +73,7 @@ To try this for yourself, add an additional agent to the **Simple Agent** templa
An agent can use [custom components](/components-custom-components) as tools.
-1. To add a custom component to the agent flow, click **New Custom Component**.
+1. To add a custom component to an agent flow, click **New Custom Component** in the **Components** menu.
2. Enter Python code into the **Code** pane to create the custom component.
@@ -115,8 +138,8 @@ An agent can use [custom components](/components-custom-components) as tools.
```
-3. To use the custom component as a tool, click **Tool Mode**.
-4. Connect the custom component's tool output to the agent's tools input.
+3. Enable **Tool Mode** in the custom component.
+4. Connect the custom component's tool output to the **Agent** component's **Tools** input.
5. Open the **Playground** and instruct the agent, `Use the text analyzer on this text: "Agents really are thinking machines!"`
Based on your instruction, the agent should call the `analyze_text` action and return the result.
@@ -148,7 +171,7 @@ Langflow supports **Tool Mode** for the following data types:
* `MultilineInput`
* `DropdownInput`
-For example, the [components as tools](#components-as-tools) example above adds `tool_mode=True` to the `MessageTextInput` input so the custom component can be used as a tool.
+For example, the code in [Use custom components as tools](#components-as-tools) includes `tool_mode=True` in the `MessageTextInput` input so the custom component can be used as a tool:
```python
inputs = [
@@ -164,20 +187,19 @@ inputs = [
## Use flows as tools
-An agent can use flows that are saved in your workspace as tools with the [Run flow](/components-logic#run-flow) component.
+An agent can use your other flows as tools with the [**Run Flow** component](/components-logic#run-flow).
-1. To add a **Run flow** component, click and drag a **Run flow** component to your workspace.
+1. Add a **Run Flow** component to your flow.
2. Select the flow you want the agent to use as a tool.
-3. Enable **Tool Mode** in the component.
-The **Run flow** component displays your flow as an available action.
-4. Connect the **Run flow** component's tool output to the agent's tools input.
-5. Ask the agent, `What tools are you using to answer my questions?`
-Your flow should be visible in the response as a tool.
-6. Ask the agent to specifically use the connected tool to answer your question.
+3. Enable **Tool Mode**.
+The selected flow becomes an [action](#edit-a-tools-actions) in the **Run Flow** component.
+4. Connect the **Run Flow** component's **Tool** output to the **Agent** component's **Tools** input.
+5. Open the **Playground**, and then ask the agent, `What tools are you using to answer my questions?`
+Your flow should be visible in the response as an available tool.
+6. Ask the agent a question that specifically uses the connected flow as a tool.
The connected flow returns an answer based on your question.
-For example, a Basic Prompting flow connected as a tool returns a different result depending upon its LLM and prompt instructions.
-
+
## See also
diff --git a/docs/docs/Agents/agents.mdx b/docs/docs/Agents/agents.mdx
index 8455ff68c..338fd9073 100644
--- a/docs/docs/Agents/agents.mdx
+++ b/docs/docs/Agents/agents.mdx
@@ -5,7 +5,7 @@ slug: /agents
import Icon from "@site/src/components/icon";
-Langflow's [**Agent** component](/components-agents) is critical for building agentic flows.
+Langflow's [**Agent** component](/components-agents) is critical for building agent flows.
This component provides everything you need to create an agent, including multiple Large Language Model (LLM) providers, tool calling, and custom instructions.
It simplifies agent configuration so you can focus on application development.
@@ -30,11 +30,11 @@ The `Tool` object's description tells the agent what the tool can do so that it
## Use the Agent component in a flow
-The following steps explain how to create an agentic flow in Langflow from a blank flow.
-For a prebuilt example, use the **Simple Agent** template or try the [Langflow quickstart](/get-started-quickstart).
+The following steps explain how to create an agent flow in Langflow from a blank flow.
+For a prebuilt example, use the **Simple Agent** template or the [Langflow quickstart](/get-started-quickstart).
1. Click **New Flow**, and then click **Blank Flow**.
-2. Add an **Agent** component to the **Workspace**.
+2. Add an **Agent** component to your flow.
3. Enter a valid OpenAI API key.
The default model for the **Agent** component is an OpenAI model.
@@ -84,7 +84,7 @@ For a prebuilt example, use the **Simple Agent** template or try the [Langflow q
To help you debug and test your flows, the **Playground** displays the agent's tool calls, the provided input, and the raw output the agent received before generating the summary.
With the given example, the agent should call the **News Search** component's `search_news` action.
-You've successfully created a basic agentic flow that uses some generic tools.
+You've successfully created a basic agent flow that uses some generic tools.
To continue building on this tutorial, try connecting other tool components or [use Langflow as an MCP client](/mcp-client) to support more complex and specialized tasks.
diff --git a/docs/docs/Components/bundles-aiml.mdx b/docs/docs/Components/bundles-aiml.mdx
index 5d65d6bc4..4c760bd43 100644
--- a/docs/docs/Components/bundles-aiml.mdx
+++ b/docs/docs/Components/bundles-aiml.mdx
@@ -41,7 +41,7 @@ For more information about using embedding model components in flows, see [**Emb
### AI/ML API Embeddings parameters
-Some **AI/ML API** component input parameters are hidden by default in the visual editor.
+Some **AI/ML API Embeddings** component input parameters are hidden by default in the visual editor.
You can toggle parameters through the **Controls** in the [component's header menu](/concepts-components#component-menus).
| Name | Type | Description |
diff --git a/docs/docs/Components/bundles-exa.mdx b/docs/docs/Components/bundles-exa.mdx
index bcd3496b5..15aff1b91 100644
--- a/docs/docs/Components/bundles-exa.mdx
+++ b/docs/docs/Components/bundles-exa.mdx
@@ -11,7 +11,7 @@ This page describes the components that are available in the **Exa** bundle.
## Exa Search
-This component provides an [Exa Search](https://exa.ai/) toolkit for search and content retrieval by a [Langflow agent](/agents) or [MCP client](/mcp-client).
+This component provides an [Exa Search](https://exa.ai/) toolkit for search and content retrieval by a Langflow [**Agent** component](/agents) or [MCP client](/mcp-client).
The output is exclusively [`Tools`](/data-types#tool).
diff --git a/docs/docs/Components/bundles-huggingface.mdx b/docs/docs/Components/bundles-huggingface.mdx
index 7cff12453..3d7e0ea9a 100644
--- a/docs/docs/Components/bundles-huggingface.mdx
+++ b/docs/docs/Components/bundles-huggingface.mdx
@@ -51,7 +51,7 @@ For more information about using embedding model components in flows, see [**Emb
| Name | Display Name | Info |
|------|--------------|------|
-| API Key | API Key | Input parameter. Your [Hugging Face API token](https://huggingface.co/docs/hub/security-tokens) for accessing the Hugging Face Inference API, if required. Local inference models do not require an API key. |
+| API Key | API Key | Input parameter. Your [Hugging Face API token](https://huggingface.co/docs/hub/security-tokens) for accessing the Hugging Face Inference API, if required. Local inference models don't require an API key. |
| API URL | API URL | Input parameter. The URL of the Hugging Face Inference API. |
| Model Name | Model Name | Input parameter. The name of the model to use for embeddings. |
@@ -65,9 +65,9 @@ To connect the local Hugging Face model to the **Hugging Face Embeddings Inferen
3. Replace the two **OpenAI Embeddings** components with **Hugging Face Embeddings Inference** components.
- Make sure to reconnect the **Embedding Model** ports from each embedding model component to its corresponding **Astra DB** vector store component.
+ Make sure to reconnect the **Embedding Model** ports from each **Embeddings Inference** component to its corresponding **Astra DB** vector store component.
-4. Configure the **Astra DB** vector store components to connect to your Astra organization, or replace both **Astra DB** vector store components with other [vector store components](/components-vector-stores).
+4. Configure the **Astra DB** vector store components to connect to your Astra organization, or replace both **Astra DB** vector store components with other [**Vector Store** components](/components-vector-stores).
5. Connect each **Hugging Face Embeddings Inference** component to your local inference model:
diff --git a/docs/docs/Components/bundles-ibm.mdx b/docs/docs/Components/bundles-ibm.mdx
index 0673442cc..78a1bd3e3 100644
--- a/docs/docs/Components/bundles-ibm.mdx
+++ b/docs/docs/Components/bundles-ibm.mdx
@@ -16,7 +16,7 @@ The **IBM watsonx.ai** component generates text using [supported foundation mode
You can use this component anywhere you need a language model in a flow.
-
+
### IBM watsonx.ai parameters {#ibm-watsonxai-parameters}
diff --git a/docs/docs/Components/bundles-ollama.mdx b/docs/docs/Components/bundles-ollama.mdx
index 906060127..c72e35b73 100644
--- a/docs/docs/Components/bundles-ollama.mdx
+++ b/docs/docs/Components/bundles-ollama.mdx
@@ -32,11 +32,11 @@ To use the **Ollama** component in a flow, connect Langflow to your locally runn
5. Connect the **Ollama** component to other components in the flow, depending on how you want to use the model.
- Language model components can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). Use the **Language Model** output when you want to use an Ollama model as the LLM for another LLM-driven component, such as a **Language Model** or **Smart Function** component. For more information, see [**Language Model** components](/components-models).
+ **Language Model** components can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). Use the **Language Model** output when you want to use an Ollama model as the LLM for another LLM-driven component, such as a **Language Model** or **Smart Function** component. For more information, see [**Language Model** components](/components-models).
In the following example, the flow uses `LanguageModel` output to use an Ollama model as the LLM for an [**Agent** component](/components-agents).
- 
+ 
## Ollama Embeddings
diff --git a/docs/docs/Components/components-agents.mdx b/docs/docs/Components/components-agents.mdx
index 30255f933..e90901288 100644
--- a/docs/docs/Components/components-agents.mdx
+++ b/docs/docs/Components/components-agents.mdx
@@ -3,7 +3,7 @@ title: Agents
slug: /components-agents
---
-Langflow's **Agent** and **MCP Tools** components are critical for building agentic flows.
+Langflow's **Agent** and **MCP Tools** components are critical for building agent flows.
These components define the behavior and capabilities of AI agents in your flows.
@@ -25,13 +25,13 @@ The `Tool` object's description tells the agent what the tool can do so that it
-## Examples of agentic flows
+## Examples of agent flows
-For examples of agentic flows using the **Agent** and **MCP Tools** components, see the following:
+For examples of flows using the **Agent** and **MCP Tools** components, see the following:
-* [Langflow quickstart](/get-started-quickstart): Start with the **Simple Agent** template, modify its tools, and then learn how to use an agentic flow in an application.
+* [Langflow quickstart](/get-started-quickstart): Start with the **Simple Agent** template, modify its tools, and then learn how to use an agent flow in an application.
- The **Simple Agent** template creates a basic agentic flow with an **Agent** component that can use two other Langflow components as tools.
+ The **Simple Agent** template creates a basic agent flow with an **Agent** component that can use two other Langflow components as tools.
The LLM specified in the **Agent** component's settings can use its own built-in functionality as well as the functionality provided by the connected tools when generating responses.
* [Use an agent as a tool](/agents-tools#use-an-agent-as-a-tool): Create a multi-agent flow.
@@ -40,7 +40,7 @@ For examples of agentic flows using the **Agent** and **MCP Tools** components,
## Agent component {#agent-component}
-The **Agent** component is the primary agent actor in your agentic flows.
+The **Agent** component is the primary agent actor in your agent flows.
This component uses an LLM integration to respond to input, such as a chat message or file upload.
The agent can use the tools already available in the base LLM model as well as additional tools that you connect to the **Agent** component's **Tools** port.
@@ -50,7 +50,7 @@ For more information about using this component, see [Use Langflow agents](/agen
## MCP Tools component {#mcp-connection}
-The **MCP Tools** component connects to a [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) server and exposes the MCP server's functions as tools for Langflow agents to use to respond to input.
+The **MCP Tools** component connects to a Model Context Protocol (MCP) server and exposes the MCP server's functions as tools for Langflow agents to use to respond to input.
In addition to publicly available MCP servers and your own custom-built MCP servers, you can connect Langflow MCP servers, which allow your agent to use your Langflow flows as tools.
To do this, use the **MCP Tools** component's [SSE mode](/mcp-client#mcp-sse-mode) to connect to your Langflow MCP server at the `/api/v1/mcp/sse` endpoint.
@@ -65,7 +65,7 @@ For more information about using this component and serving flows as MCP tools,
-## Legacy agent components
+## Legacy Agent components
The following components are legacy components.
You can still use these components in your flows, but they are no longer maintained and they can be removed in future releases.
@@ -147,7 +147,7 @@ It accepts the following parameters:
CrewAI Sequential Task Agent
-This component creates a CrewAI Task and its associated Agent allowing for the definition of sequential tasks with specific agent roles and capabilities.
+This component creates a CrewAI Task and its associated agent, allowing you to define sequential tasks with specific agent roles and capabilities.
For more information, see the [CrewAI sequential agents documentation](https://docs.crewai.com/how-to/Sequential/).
It accepts the following parameters:
diff --git a/docs/docs/Components/components-bundles.mdx b/docs/docs/Components/components-bundles.mdx
index dc5633448..229a7f0e0 100644
--- a/docs/docs/Components/components-bundles.mdx
+++ b/docs/docs/Components/components-bundles.mdx
@@ -6,7 +6,7 @@ slug: /components-bundle-components
import Icon from "@site/src/components/icon";
Bundles contain custom components that support specific third-party integrations with Langflow.
-Bundles are derived from the core Langflow components so you can add them to your flows and configure them in the same way as the core components.
+You add them to your flows and configure them in the same way as Langflow's core components.
## Bundle maintenance and documentation
@@ -42,8 +42,8 @@ If you can't find a component that you used in an earlier version of Langflow, i
Langflow offers core components in addition to third-party, provider-specific bundles.
-Core components are meant to support a wide range of use cases and are typically not tied to a specific provider.
-Exceptions include the [**Embedding Model** core component](/components-embedding-models), [**Language Model** core component](/components-models), and most [vector store core components](/components-vector-stores), which are integrated with several providers.
+Core components are meant to support a wide range of use cases and typically aren't tied to a specific provider.
+Exceptions include the [**Embedding Model** core component](/components-embedding-models), [**Language Model** core component](/components-models), and [**Vector Store** components](/components-vector-stores), which are integrated with one or more specific providers.
If you are looking for a specific service or integration, try searching the **Components** menu or browsing both the core components and bundles.
diff --git a/docs/docs/Components/components-custom-components.mdx b/docs/docs/Components/components-custom-components.mdx
index 4471f2187..5a3d31198 100644
--- a/docs/docs/Components/components-custom-components.mdx
+++ b/docs/docs/Components/components-custom-components.mdx
@@ -3,6 +3,10 @@ title: Create custom Python components
slug: /components-custom-components
---
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
Custom components extend Langflow's functionality through Python classes that inherit from `Component`. This enables integration of new features, data manipulation, external services, and specialized tools.
In Langflow's node-based environment, each node is a "component" that performs discrete functions. Custom components are Python classes which define:
@@ -11,7 +15,7 @@ In Langflow's node-based environment, each node is a "component" that performs d
* **Outputs** — Data your component provides to downstream nodes.
* **Logic** — How you process inputs to produce outputs.
-The benefits of creating custom components include unlimited extensibility, reusability, automatic UI field generation based on inputs, and type-safe connections between nodes.
+The benefits of creating custom components include unlimited extensibility, reusability, automatic field generation in the visual editor based on inputs, and type-safe connections between nodes.
Create custom components for performing specialized tasks, calling APIs, or adding advanced logic.
@@ -28,26 +32,25 @@ Define these attributes to control a custom component's appearance and behavior:
```python
class MyCsvReader(Component):
- display_name = "CSV Reader" # Shown in node header
- description = "Reads CSV files" # Tooltip text
- icon = "file-text" # Visual identifier
- name = "CSVReader" # Unique internal ID
- documentation = "http://docs.example.com/csv_reader" # Optional
+ display_name = "CSV Reader"
+ description = "Reads CSV files"
+ icon = "file-text"
+ name = "CSVReader"
+ documentation = "http://docs.example.com/csv_reader"
```
-* **display_name**: A user-friendly label in the node header.
-* **description**: A brief summary shown in tooltips.
-* **icon**: A visual identifier from Langflow's icon library.
-* **name**: A unique internal identifier.
-* **documentation**: An optional link to external docs.
+* `display_name`: A user-friendly label shown in the **Components** menu and on the component itself when you add it to a flow.
+* `description`: A brief summary shown in tooltips and printed below the component name when added to a flow.
+* `icon`: A decorative icon from Langflow's icon library, printed next to the name.
-:::tip Icon usage
-Langflow uses [Lucide](https://lucide.dev/icons) for icons. To assign an icon to your component, set the icon attribute to the name of a Lucide icon as a string, such as `icon = "file-text"`. Langflow renders icons from the Lucide library automatically.
-:::
+ Langflow uses [Lucide](https://lucide.dev/icons) for icons. To assign an icon to your component, set the icon attribute to the name of a Lucide icon as a string, such as `icon = "file-text"`. Langflow renders icons from the Lucide library automatically.
+
+* `name`: A unique internal identifier.
+* `documentation`: An optional link to external documentation, such as API or product documentation.
### Structure of a custom component
-A **Langflow custom component** goes beyond a simple class with inputs and outputs. It includes an internal structure with optional lifecycle steps, output generation, front-end interaction, and logic organization.
+A Langflow custom component is more than a class with inputs and outputs. It includes an internal structure with optional lifecycle steps, output generation, front-end interaction, and logic organization.
A basic component:
@@ -79,7 +82,7 @@ class MyComponent(Component):
Langflow's engine manages:
* **Instantiation**: A component is created and internal structures are initialized.
-* **Assigning Inputs**: Values from the UI or connections are assigned to component fields.
+* **Assigning Inputs**: Values from the visual editor or connections are assigned to component fields.
* **Validation and Setup**: Optional hooks like `_pre_run_setup`.
* **Outputs Generation**: `run()` or `build_results()` triggers output methods.
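+The lifecycle steps above can be sketched in plain Python. This is a conceptual illustration only, not Langflow's engine code; everything except the `_pre_run_setup` hook name is hypothetical:

```python
class MiniComponent:
    """Conceptual sketch of the lifecycle Langflow's engine manages."""

    def __init__(self):
        # Instantiation: internal structures are initialized.
        self.inputs = {}
        self.ready = False

    def assign_inputs(self, **values):
        # Assigning Inputs: values from the visual editor or connections.
        self.inputs.update(values)

    def _pre_run_setup(self):
        # Validation and Setup: optional hook run before outputs are generated.
        if "text" not in self.inputs:
            raise ValueError("Missing required input: text")
        self.ready = True

    def build_results(self):
        # Outputs Generation: triggers the output methods.
        self._pre_run_setup()
        return {"output": self.inputs["text"].upper()}


component = MiniComponent()
component.assign_inputs(text="hello")
print(component.build_results())  # {'output': 'HELLO'}
```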
@@ -148,17 +151,24 @@ outputs = [
#### Output Grouping Behavior with `group_outputs`
-By default, components in Langflow that define multiple outputs will display them as a dropdown in the UI. This behavior is controlled by the `group_outputs` parameter.
+By default, components in Langflow that produce multiple outputs allow only one output selection in the visual editor.
+The component has a single output port where the user can select the preferred output type.
-- `group_outputs=False` (default):
- When a component has more than one output and `group_outputs` is not specified (or set to `False`), the outputs are grouped into a dropdown. The user can choose only one output at a time from the UI.
+This behavior is controlled by the `group_outputs` parameter:
-- `group_outputs=True`:
- All outputs will be shown simultaneously in the UI. This is useful when the component is expected to return multiple values that should be used in parallel downstream.
+- **`group_outputs=False` (default)**: When a component has more than one output and `group_outputs` is `False` or not set, the outputs are grouped in the visual editor, and the user must select one.
-#### Example:
+ Use this option when the component is expected to return only one type of output when used in a flow.
-1. `group_outputs=False` (default behavior)
+- **`group_outputs=True`**: All outputs are available simultaneously in the visual editor. The component has one output port for each output, and the user can connect zero or more outputs to other components.
+
+ Use this option when the component is expected to return multiple values that are used in parallel by downstream components or processes.
+
+In this example, the visual editor provides a single output port, and the user can select one of the outputs.
+Because `group_outputs=False` is the default behavior, it doesn't need to be explicitly set:
```python
outputs = [
@@ -175,9 +185,10 @@ outputs = [
]
```
-In this example, both outputs are available through a dropdown menu in the Langflow UI.
+
+
-Note: Since `group_outputs=False` is the default behavior, it does not need to be explicitly set in the component.
+In this example, all outputs are available simultaneously in the visual editor:
-2. `group_outputs=True`
@@ -198,13 +209,8 @@ outputs = [
]
```
-Here, both outputs will appear independently and be selectable directly in the UI.
-
-#### When to Use
-
-- Use `group_outputs=False` when the component is expected to return only one of the outputs depending on the flow logic.
-
-- Use `group_outputs=True` when the component should expose multiple outputs simultaneously, such as structured data and a table that are meant to be used in parallel.
+
### Common internal patterns
@@ -233,9 +239,9 @@ def some_method(self):
## Directory structure requirements
-By default, Langflow looks for custom components in the `langflow/components` directory.
+By default, Langflow looks for custom components in the `/components` directory.
-If you're creating custom components in a different location using the `LANGFLOW_COMPONENTS_PATH` [environment variable](/environment-variables), components must be organized in a specific directory structure to be properly loaded and displayed in the UI:
+If you're creating custom components in a different location using the `LANGFLOW_COMPONENTS_PATH` [environment variable](/environment-variables), components must be organized in a specific directory structure to be properly loaded and displayed in the visual editor:
```
/your/custom/components/path/ # Base directory set by LANGFLOW_COMPONENTS_PATH
@@ -243,10 +249,10 @@ If you're creating custom components in a different location using the `LANGFLOW
└── custom_component.py # Component file
```
-Components must be placed inside **category folders**, not directly in the base directory.
-The category folder name determines where the component appears in the Langflow **Components** menu.
+Components must be placed inside category folders, not directly in the base directory.
-For example, to add a component to the **Helpers** category, place it in a `helpers` subfolder:
+The category folder name determines where the component appears in the Langflow **Components** menu.
+For example, to add a component to the **Helpers** category, place it in the `helpers` subfolder:
```
/app/custom_components/ # LANGFLOW_COMPONENTS_PATH
@@ -263,7 +269,7 @@ You can have multiple category folders to organize components into different cat
└── tool_component.py
```
-This folder structure is required for Langflow to properly discover and load your custom components. Components placed directly in the base directory will not be loaded.
+This folder structure is required for Langflow to properly discover and load your custom components. Components placed directly in the base directory aren't loaded.
```
/app/custom_components/ # LANGFLOW_COMPONENTS_PATH
@@ -272,16 +278,16 @@ This folder structure is required for Langflow to properly discover and load you
## Custom component inputs and outputs
-Inputs and outputs define how data flows through the component, how it appears in the UI, and how connections to other components are validated.
+Inputs and outputs define how data flows through the component, how it appears in the visual editor, and how connections to other components are validated.
### Inputs
-Inputs are defined in a class-level `inputs` list. When Langflow loads the component, it uses this list to render component fields and [ports](/concepts-components#component-ports) in the UI. Users or other components provide values or connections to fill these inputs.
+Inputs are defined in a class-level `inputs` list. When Langflow loads the component, it uses this list to render component fields and [ports](/concepts-components#component-ports) in the visual editor. Users or other components provide values or connections to fill these inputs.
An input is usually an instance of a class from `langflow.io` (such as `StrInput`, `DataInput`, or `MessageTextInput`). The most common constructor parameters are:
* **`name`**: The internal variable name, accessed with `self.`.
-* **`display_name`**: The label shown to users in the UI.
+* **`display_name`**: The label shown to users in the visual editor.
* **`info`** *(optional)*: A tooltip or short description.
* **`value`** *(optional)*: The default value.
* **`advanced`** *(optional)*: If `True`, moves the field into the "Advanced" section.
@@ -301,18 +307,18 @@ Here are the most commonly used input classes and their typical usage.
**Dropdowns**: For selecting from predefined options, useful for modes or levels.
* **`DropdownInput`**
-**Secrets**: A specialized input for sensitive data, ensuring input is hidden in the UI.
+**Secrets**: A specialized input for sensitive data, ensuring input is hidden in the visual editor.
* **`SecretStrInput`** for API keys and passwords.
-**Specialized Data Inputs**: Ensures type-checking and color-coded connections in the UI.
+**Specialized Data Inputs**: Ensures type-checking and color-coded connections in the visual editor.
* **`DataInput`** expects a `Data` object (typically with `.data` and optional `.text`).
-* **`MessageInput`** expects a `Message` object, used in chat or agent-based flows.
+* **`MessageInput`** expects a `Message` object, used in chat or agent flows.
* **`MessageTextInput`** simplifies access to the `.text` field of a `Message`.
**Handle-Based Inputs**: Used to connect outputs of specific types, ensuring correct pipeline connections.
- **`HandleInput`**
-**File Uploads**: Allows users to upload files directly through the UI or receive file paths from other components.
+**File Uploads**: Allows users to upload files directly through the visual editor or receive file paths from other components.
- **`FileInput`**
**Lists**: Set `is_list=True` to accept multiple values, ideal for batch or grouped operations.
@@ -331,12 +337,12 @@ inputs = [
### Outputs
-Outputs are defined in a class-level `outputs` list. When Langflow renders a component, each output becomes a connector point in the UI. When you connect something to an output, Langflow automatically calls the corresponding method and passes the returned object to the next component.
+Outputs are defined in a class-level `outputs` list. When Langflow renders a component, each output becomes a connector point in the visual editor. When you connect something to an output, Langflow automatically calls the corresponding method and passes the returned object to the next component.
An output is usually an instance of `Output` from `langflow.io`, with common parameters:
* **`name`**: The internal variable name.
-* **`display_name`**: The label shown in the UI.
+* **`display_name`**: The label shown in the visual editor.
* **`method`**: The name of the method called to produce the output.
* **`info`** *(optional)*: Help text shown on hover.
@@ -349,7 +355,7 @@ You can also set a `self.status` message inside the method to show progress or l
- **`DataFrame`**: Pandas-based tables (`langflow.schema.DataFrame`).
- **Primitive types**: `str`, `int`, `bool` (not recommended if you need type/color consistency).
-In this example, the `DataToDataFrame` component defines its output using the outputs list. The `df_out` output is linked to the `build_df` method, so when connected in the UI, Langflow calls this method and passes its returned DataFrame to the next node. This demonstrates how each output maps to a method that generates the actual output data.
+In this example, the `DataToDataFrame` component defines its output using the outputs list. The `df_out` output is linked to the `build_df` method, so when connected to another component (node), Langflow calls this method and passes its returned `DataFrame` to the next node. This demonstrates how each output maps to a method that generates the actual output data.
```python
from langflow.custom import Component
@@ -392,18 +398,11 @@ class DataToDataFrame(Component):
```
-### Tool mode
+### Tool Mode
-You can configure a Custom Component to work as a **Tool** by setting the parameter `tool_mode=True`. This allows the component to be used in Langflow's Tool Mode workflows, such as by Agent components.
+Components that support **Tool Mode** can be used as standalone components (when _not_ in **Tool Mode**) or as tools for other components with a **Tools** input, such as **Agent** components.
-Langflow currently supports the following input types for Tool Mode:
-
-* `DataInput`
-* `DataFrameInput`
-* `PromptInput`
-* `MessageTextInput`
-* `MultilineInput`
-* `DropdownInput`
+You can allow a custom component to support **Tool Mode** by setting `tool_mode=True`:
```python
inputs = [
@@ -416,6 +415,15 @@ inputs = [
]
```
+Langflow currently supports the following input types for **Tool Mode**:
+
+* `DataInput`
+* `DataFrameInput`
+* `PromptInput`
+* `MessageTextInput`
+* `MultilineInput`
+* `DropdownInput`
+
## Typed annotations
-In Langflow, **typed annotations** allow Langflow to visually guide users and maintain flow consistency.
+**Typed annotations** allow Langflow to visually guide users and maintain flow consistency.
@@ -429,65 +437,51 @@ Typed annotations provide:
### Common Return Types
-**`Message`**
+* `Message`: For chat-style outputs. Connects to `Message`-compatible inputs.
-For chat-style outputs.
+ ```python
+ def produce_message(self) -> Message:
+ return Message(text="Hello! from typed method!", sender="System")
+ ```
-```python
-def produce_message(self) -> Message:
- return Message(text="Hello! from typed method!", sender="System")
-```
-In the UI, connects only to Message-compatible inputs.
+* `Data`: For structured data like dicts or partial texts. Connects only to `DataInput` (ports that accept `Data`).
-**`Data`**
+ ```python
+ def get_processed_data(self) -> Data:
+ processed = {"key1": "value1", "key2": 123}
+ return Data(data=processed)
+ ```
-For structured data like dicts or partial texts.
-```python
-def get_processed_data(self) -> Data:
- processed = {"key1": "value1", "key2": 123}
- return Data(data=processed)
-```
+* `DataFrame`: For tabular data. Connects only to `DataFrameInput` (ports that accept `DataFrame`).
-In the UI, connects only with DataInput.
+ ```python
+ def build_df(self) -> DataFrame:
+ pdf = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
+ return DataFrame(pdf)
+ ```
-**`DataFrame`**
+* Primitive Types (`str`, `int`, `bool`): Returning primitives is allowed but wrapping in `Data` or `Message` is recommended for better consistency in the visual editor.
-For tabular data
-
-```python
-def build_df(self) -> DataFrame:
- pdf = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
- return DataFrame(pdf)
-
-```
-
-In the UI, connects only to DataFrameInput.
-
-**Primitive Types (`str`, `int`, `bool`)**
-
-Returning primitives is allowed but wrapping in Data or Message is recommended for better UI consistency.
-
-```python
-def compute_sum(self) -> int:
- return sum(self.numbers)
-```
+ ```python
+ def compute_sum(self) -> int:
+ return sum(self.numbers)
+ ```
### Tips for typed annotations
When using typed annotations, consider the following best practices:
-* **Always Annotate Outputs**: Specify return types like `-> Data`, `-> Message`, or `-> DataFrame` to enable proper UI color-coding and validation.
+* **Always Annotate Outputs**: Specify return types like `-> Data`, `-> Message`, or `-> DataFrame` to enable proper visual editor color-coding and validation.
* **Wrap Raw Data**: Use `Data`, `Message`, or `DataFrame` wrappers instead of returning plain structures.
* **Use Primitives Carefully**: Direct `str` or `int` returns are fine for simple flows, but wrapping improves flexibility.
* **Annotate Helpers Too**: Even if internal, typing improves maintainability and clarity.
* **Handle Edge Cases**: Prefer returning structured `Data` with error fields when needed.
* **Stay Consistent**: Use the same types across your components to make flows predictable and easier to build.
-
## Enable dynamic fields
In **Langflow**, dynamic fields allow inputs to change or appear based on user interactions. You can make an input dynamic by setting `dynamic=True`.
-Optionally, setting `real_time_refresh=True` triggers the `update_build_config` method to adjust the input's visibility or properties in real time, creating a contextual UI that only displays relevant fields based on the user's choices.
+Optionally, setting `real_time_refresh=True` triggers the `update_build_config` method to adjust the input's visibility or properties in real time, creating a contextual visual editor experience that only exposes relevant fields based on the user's choices.
-In this example, the operator field triggers updates with `real_time_refresh=True`.
+In this example, the `operator` field triggers updates with `real_time_refresh=True`.
The `regex_pattern` field is initially hidden and controlled with `dynamic=True`.
@@ -559,50 +553,52 @@ In Langflow, robust error handling ensures that your components behave predictab
### Error handling techniques
-* **Raise Exceptions**:
- If a critical error occurs, you can raise standard Python exceptions such as `ValueError`, or specialized exceptions like `ToolException`. Langflow will automatically catch these and display appropriate error messages in the UI, helping users quickly identify what went wrong.
- ```python
- def compute_result(self) -> str:
- if not self.user_input:
- raise ValueError("No input provided.")
- # ...
- ```
-* **Return Structured Error Data**:
- Instead of stopping a flow abruptly, you can return a Data object containing an "error" field. This approach allows the flow to continue operating and enables downstream components to detect and handle the error gracefully.
- ```python
- def run_model(self) -> Data:
- try:
+* **Raise Exceptions**: If a critical error occurs, you can raise standard Python exceptions such as `ValueError`, or specialized exceptions like `ToolException`. Langflow automatically catches these and displays appropriate error messages in the visual editor, helping users quickly identify what went wrong.
+
+ ```python
+ def compute_result(self) -> str:
+ if not self.user_input:
+ raise ValueError("No input provided.")
# ...
- except Exception as e:
- return Data(data={"error": str(e)})
- ```
+ ```
+
+* **Return Structured Error Data**: Instead of stopping a flow abruptly, you can return a `Data` object containing an `error` field. This approach allows the flow to continue operating and enables downstream components to detect and handle the error gracefully.
+
+ ```python
+ def run_model(self) -> Data:
+ try:
+ # ...
+ except Exception as e:
+ return Data(data={"error": str(e)})
+ ```
### Improve debugging and flow management
-* **Use `self.status`**:
- Each component has a status field where you can store short messages about the execution result—such as success summaries, partial progress, or error notifications. These appear directly in the UI, making troubleshooting easier for users.
- ```python
- def parse_data(self) -> Data:
- # ...
- self.status = f"Parsed {len(rows)} rows successfully."
- return Data(data={"rows": rows})
- ```
-* **Stop specific outputs with `self.stop(...)`**:
- You can halt individual output paths when certain conditions fail, without affecting the entire component. This is especially useful when working with components that have multiple output branches.
- ```python
- def some_output(self) -> Data:
- if :
- self.stop("some_output") # Tells Langflow no data flows
- return Data(data={"error": "Condition not met"})
- ```
+* **Use `self.status`**: Each component has a status field where you can store short messages about the execution result, such as success summaries, partial progress, or error notifications. These appear directly in the visual editor, making troubleshooting easier for users.
-* **Log events**:
- You can log key execution details inside components. Logs are displayed in the "Logs" or "Events" section of the component's detail view and can be accessed later through the flow's debug panel or exported files, providing a clear trace of the component's behavior for easier debugging.
- ```python
- def process_file(self, file_path: str):
- self.log(f"Processing file {file_path}")
- # ...
- ```
+ ```python
+ def parse_data(self) -> Data:
+ # ...
+ self.status = f"Parsed {len(rows)} rows successfully."
+ return Data(data={"rows": rows})
+ ```
+
+* **Stop specific outputs with `self.stop(...)`**: You can halt individual output paths when certain conditions fail, without affecting the entire component. This is especially useful when working with components that have multiple output branches.
+
+ ```python
+ def some_output(self) -> Data:
+        if not self.is_valid:  # Hypothetical condition; replace with your own check
+ self.stop("some_output") # Tells Langflow no data flows
+ return Data(data={"error": "Condition not met"})
+ ```
+
+* **Log events**: You can log key execution details inside components. Logs are displayed in the **Logs** or **Events** section of the component's detail view and can be accessed later through the flow's debug panel or exported files, providing a clear trace of the component's behavior for easier debugging.
+
+ ```python
+ def process_file(self, file_path: str):
+ self.log(f"Processing file {file_path}")
+ # ...
+ ```
### Tips for error handling and logging
@@ -610,11 +606,10 @@ To build more reliable components, consider the following best practices:
* **Validate inputs early**: Catch missing or invalid inputs at the start to prevent broken logic.
* **Summarize with `self.status`**: Use short success or error summaries to help users understand results quickly.
-* **Keep logs concise**: Focus on meaningful messages to avoid cluttering the UI.
+* **Keep logs concise**: Focus on meaningful messages to avoid cluttering the visual editor.
* **Return structured errors**: When appropriate, return `Data(data={"error": ...})` instead of raising exceptions to allow downstream handling.
* **Stop outputs selectively**: Only halt specific outputs with `self.stop(...)` if necessary, to preserve correct flow behavior elsewhere.
## Contribute custom components to Langflow
-See [How to Contribute](/contributing-components) to contribute your custom component to Langflow.
-
+See [How to Contribute](/contributing-components) to contribute your custom component to Langflow.
\ No newline at end of file
diff --git a/docs/docs/Components/components-data.mdx b/docs/docs/Components/components-data.mdx
index 635b2ed2f..e3ddbd6f2 100644
--- a/docs/docs/Components/components-data.mdx
+++ b/docs/docs/Components/components-data.mdx
@@ -8,13 +8,13 @@ import Icon from "@site/src/components/icon";
You can use Langflow's **Data** components to bring data into your flows from various sources like files, API endpoints, and URLs.
For example:
-* **Load files**: Import data from a file or directory with the [**File**](#file) and [**Directory**](#directory) components.
+* **Load files**: Import data from a file or directory with the [**File** component](#file) and [**Directory** component](#directory).
-* **Search the web**: Fetch data from the web with components like the [**News Search**](#news-search), [**RSS Reader**](#rss-reader), [**Web Search**](#web-search), and [**URL**](#url) components.
+* **Search the web**: Fetch data from the web with components like the [**News Search** component](#news-search), [**RSS Reader** component](#rss-reader), [**Web Search** component](#web-search), and [**URL** component](#url).
-* **Make API calls**: Use APIs to trigger flows or perform actions with the [**API Request**](#api-request) and [**Webhook**](#webhook) components.
+* **Make API calls**: Use APIs to trigger flows or perform actions with the [**API Request** component](#api-request) and [**Webhook** component](#webhook).
-* **Run SQL queries**: Query an SQL database with the [**SQL Database**](#sql-database) component.
+* **Run SQL queries**: Query an SQL database with the [**SQL Database** component](#sql-database).
Each component runs different commands for retrieval, processing, and type checking.
Some components are a minimal wrapper for a command that you provide, and others include built-in scripts to fetch and process data based on variable inputs.
@@ -27,15 +27,15 @@ This means that some similar components might produce different results.
This can include basic operations, like saving a file in a specific format, or more complex tasks, like using a **Text Splitter** component to break down a large document into smaller chunks before generating embeddings for vector search.
:::
-## Use data components in flows
+## Use Data components in flows
-Data components are used often in flows because they offer a versatile way to perform common, basic functions.
+**Data** components are used often in flows because they offer a versatile way to perform common, basic functions.
-You can use data components to perform their base functions as isolated steps in your flow, or you can connect them to an **Agent** component as tools.
+You can use **Data** components to perform their base functions as isolated steps in your flow, or you can connect them to an **Agent** component as tools.
-
+
-For examples of data components in flows, see the following:
+For examples of **Data** components in flows, see the following:
* [Create a chatbot that can ingest files](/chat-with-files): Learn how to use a **File** component to load a file as context for a chatbot.
The file and user input are both passed to the LLM so you can ask questions about the file you uploaded.
@@ -70,7 +70,7 @@ You can toggle parameters through the
@@ -161,7 +161,7 @@ The notification data can then be passed to other components in the flow, such a
The **Run Flow** component runs another Langflow flow as a subprocess of the current flow.
-You can use this component to chain flows together, run flows conditionally, and attach flows to [**Agent** component](/components-agents) as [tools for the agent](/agents-tools) to run as needed.
+You can use this component to chain flows together, run flows conditionally, and attach flows to [**Agent** components](/components-agents) as [tools for agents](/agents-tools) to run as needed.
When used with an agent, the `name` and `description` metadata that the agent uses to register the tool are created automatically.
@@ -180,9 +180,9 @@ You can toggle parameters through the **Controls**, enable the **System Message** parameter, and then click **Close**.
@@ -118,13 +118,13 @@ This is a specific data type that is only required by certain components, such a
With this configuration, the **Language Model** component is meant to support an action completed by another component, rather than producing a text response for a standard chat-based interaction.
For an example, the **Smart Function** component uses an LLM to create a function from natural language input.
-## Additional language model components
+## Additional Language Model components
-If your provider or model isn't supported by the core **Language Model** component, additional single-provider language model components are available in the [**Bundles**](/components-bundle-components) section of the **Components** menu.
+If your provider or model isn't supported by the **Language Model** core component, additional single-provider **Language Model** components are available in the [**Bundles**](/components-bundle-components) section of the **Components** menu.
You can use bundled components directly in your flows or you can connect them to other components that accept a [`LanguageModel`](/data-types#languagemodel) input, such as the **Language Model** and **Agent** components.
-For example, to connect bundled components to the core **Language Model** component, do the following:
+For example, to connect bundled components to the **Language Model** core component, do the following:
1. In the **Language Model** component, set **Model Provider** to **Custom**.
diff --git a/docs/docs/Components/components-processing.mdx b/docs/docs/Components/components-processing.mdx
index a82989031..7fef6102b 100644
--- a/docs/docs/Components/components-processing.mdx
+++ b/docs/docs/Components/components-processing.mdx
@@ -5,13 +5,13 @@ slug: /components-processing
import Icon from "@site/src/components/icon";
-Langflow's processing components process and transform data within a flow.
+Langflow's **Processing** components process and transform data within a flow.
They have many uses, including:
* Feed instructions and context to your LLMs and agents with the [**Prompt Template** component](#prompt-template).
* Extract content from larger chunks of data with a [**Parser** component](#parser).
* Filter data with natural language with the [**Smart Function** component](#smart-function).
-* Save data to your local machine with the [**Save File** component](#save-file).
+* Save data to your local machine with the [**Save To File** component](#save-to-file).
* Transform data into a different data type with the [**Type Convert** component](#type-convert) to pass it between incompatible components.
## Prompt Template
@@ -68,7 +68,7 @@ You can toggle parameters through the
@@ -578,7 +577,7 @@ The **DataFrame** output returns a structured data format, with additional `text
1. To use this component in a flow, connect a component that outputs [Data or DataFrame](/data-types#data) to the **Split Text** component's **Data** port.
This example uses the **URL** component, which is fetching JSON placeholder data.
-
+
2. In the **Split Text** component, define your data splitting parameters.
@@ -642,7 +641,7 @@ Third chunk: "s of Artificial Intelligence and its applications"
### Other text splitters
-- [LangChain text splitter components](/bundles-langchain#text-splitters)
+See [LangChain text splitter components](/bundles-langchain#text-splitters).
## Structured output
@@ -690,7 +689,7 @@ For example, the template `EBITDA: {EBITDA} , Net Income: {NET_INCOME} , GROSS
## Type convert
-This component converts data types between different formats. It can transform data between [Data](/data-types#data), [DataFrame](/data-types#dataframe), and [Message](/data-types#message) objects.
+This component converts data types between different formats. It can transform data between [`Data`](/data-types#data), [`DataFrame`](/data-types#dataframe), and [`Message`](/data-types#message) objects.
* **Data**: A structured object that contains both text and metadata.
```json
@@ -723,19 +722,18 @@ Keys are columns, and each dictionary (a collection of key-value pairs) in the l
To use this component in a flow, do the following:
-1. Add the **Web search** component to the **Basic Prompting** template. In the **Search Query** field, enter a query, such as `environmental news`.
-2. Connect the **Web search** component's output to a component that accepts the DataFrame input.
-This example uses a **Prompt** component to give the chatbot context, so you must convert the **Web search** component's DataFrame output to a Message type.
-3. Connect a **Type Convert** component to convert the DataFrame to a Message.
+1. Add the **Web Search** component to the **Basic Prompting** template. In the **Search Query** field, enter a query, such as `environmental news`.
+2. Connect the **Web Search** component's output to a component that accepts the `DataFrame` input.
+This example uses a **Prompt Template** component to give the chatbot context, so you must convert the **Web Search** component's `DataFrame` output to a `Message` type.
+3. Connect a **Type Convert** component to convert the `DataFrame` to a `Message`.
4. In the **Type Convert** component, in the **Output Type** field, select **Message**.
-Your flow looks like this:
-
+ 
5. In the **Language Model** component, in the **OpenAI API Key** field, add your OpenAI API key.
6. Click **Playground**, and then ask about `latest news`.
-The search results are returned to the Playground as a message.
+The search results are returned to the **Playground** as a message.
Result:
```text
@@ -765,9 +763,9 @@ Ozone Pollution and Global Warming: A recent study highlights that ozone polluti
-## Legacy processing components
+## Legacy Processing components
-The following processing components are legacy components.
+The following **Processing** components are legacy components.
You can still use them in your flows, but they are no longer supported and can be removed in a future release.
Replace these components with suggested alternatives as soon as possible.
@@ -844,7 +842,7 @@ This component extracts a specific key from a `Data` object and returns the valu
Data to DataFrame/Data to Message
-Replace these legacy components with newer processing components, such as the [**Data Operations** component](#data-operations) and [**Type Convert** component](#type-convert).
+Replace these legacy components with newer **Processing** components, such as the [**Data Operations** component](#data-operations) and [**Type Convert** component](#type-convert).
These components converted one or more `Data` objects into a `DataFrame` or `Message` object.
diff --git a/docs/docs/Components/components-prompts.mdx b/docs/docs/Components/components-prompts.mdx
index fd089feb3..655d7a4be 100644
--- a/docs/docs/Components/components-prompts.mdx
+++ b/docs/docs/Components/components-prompts.mdx
@@ -26,7 +26,9 @@ The **Prompt Template** component can also output variable instructions to other
Variables in a **Prompt Template** component dynamically add fields to the **Prompt Template** component so that your flow can receive definitions for those values from other components, Langflow global variables, or fixed input.
-For example, with the [**Message History**](/components-helpers#message-history) component, you can use a `{memory}` variable to pass chat history to the prompt.
+For example, with the [**Message History** component](/components-helpers#message-history), you can use a `{memory}` variable to pass chat history to the prompt.
+However, the **Language Model** and **Agent** components include built-in chat memory that is enabled by default.
+For more information, see [Memory management options](/memory).
The following steps demonstrate how to add variables to a **Prompt Template** component:
@@ -66,5 +68,5 @@ For example, you could add variables for `{references}` and `{instructions}`, an
## See also
-* [LangChain Prompt Hub](/bundles-langchain#prompt-hub)
-* [Processing components](/components-processing)
\ No newline at end of file
+* [**LangChain Prompt Hub** component](/bundles-langchain#prompt-hub)
+* [**Processing** components](/components-processing)
\ No newline at end of file
diff --git a/docs/docs/Components/components-tools.mdx b/docs/docs/Components/components-tools.mdx
index c1736e1c4..6f91821cc 100644
--- a/docs/docs/Components/components-tools.mdx
+++ b/docs/docs/Components/components-tools.mdx
@@ -14,12 +14,12 @@ You can use these components in your flows, but they are no longer maintained an
It is recommended that you replace all legacy components with the replacement components described on this page.
:::
-## Calculator Tool component
+## Calculator Tool
The **Calculator Tool** component is a legacy component.
Replace this component with the [**Calculator** component](/components-helpers#calculator) in the **Helpers** category.
-## MCP Connection component
+## MCP Connection
This component was moved to the **Agents** category and renamed to the [**MCP Tools** component](/components-agents#mcp-connection)
@@ -35,9 +35,10 @@ All such components in the **Tools** category are legacy components.
You have two options for replacing these components:
-* Use the generic [data components](/components-data) for search and API calls, such as the [**Web Search**](/components-data#web-search) and [**News Search**](/components-data#news-search) components.
+* Use the generic [**Data** components](/components-data) for search and API calls, such as the [**Web Search** component](/components-data#web-search) and [**News Search** component](/components-data#news-search).
* Use the provider-specific search and API components in the **Bundles** category:
+
* [**arXiv** bundle](/bundles-arxiv)
* [**Bing** bundle](/bundles-bing)
* [**DataStax** bundle](/bundles-datastax)
@@ -57,7 +58,7 @@ You have two options for replacing these components:
SearXNG Search Tool
The **SearXNG Search Tool** component is a legacy component.
-Replace this component with a [data component](/components-data) or another metasearch provider's [bundle](/components-bundle-components).
+Replace this component with a [**Data** component](/components-data) or another metasearch provider's [bundle](/components-bundle-components).
This component creates a tool for searching using SearXNG, a metasearch engine.
It accepts the following parameters:
diff --git a/docs/docs/Components/components-vector-stores.mdx b/docs/docs/Components/components-vector-stores.mdx
index d6df9cdb4..75a368817 100644
--- a/docs/docs/Components/components-vector-stores.mdx
+++ b/docs/docs/Components/components-vector-stores.mdx
@@ -5,36 +5,35 @@ slug: /components-vector-stores
import Icon from "@site/src/components/icon";
-Langflow's vector store components connect to your vector databases or create in-memory vector stores for storing and retrieving vector data in flows.
+Langflow's **Vector Store** components connect to your vector databases or create in-memory vector stores for storing and retrieving vector data in flows.
-Vector databases and vector store components are specifically designed for storing and retrieving vector data, such as embeddings generated by language models. They are used to perform similarity searches, enabling applications like chatbots to retrieve relevant context from large datasets.
+Vector databases and **Vector Store** components are specifically designed for storing and retrieving vector data, such as embeddings generated by language models. They are used to perform similarity searches, enabling applications like chatbots to retrieve relevant context from large datasets.
Other types of storage, like traditional structured databases and chat memory, are handled through other components like the [**SQL Database** component](/components-data#sql-database) or the [**Message History** component](/components-helpers#message-history).
-## Use a vector store component in a flow
+## Use a Vector Store component in a flow
:::tip
-For examples of vector store components in flows, see [Create a vector RAG chatbot](/chat-with-rag) and [**Embedding Model** components](/components-embedding-models).
+For a tutorial that uses **Vector Store** components in a flow, see [Create a vector RAG chatbot](/chat-with-rag).
:::
-This example uses the **Chroma DB** vector store component. Your vector store component's parameters and authentication may be different, but the document ingestion workflow is the same. A document is loaded from a local machine and chunked. The vector store component generates embeddings with the connected [model](/components-models) component, and stores them in the connected vector database.
-
-This vector data can then be retrieved for workloads like Retrieval Augmented Generation.
+This example uses the **Chroma DB** vector store component. Your **Vector Store** component's parameters and authentication may be different, but the document ingestion workflow is the same. A document is loaded from a local machine and chunked. The **Vector Store** component generates embeddings with the connected [**Embedding Model** component](/components-embedding-models), and stores them in the connected vector database.
+This vector data can then be retrieved for workloads like Retrieval Augmented Generation (RAG).

The user's chat input is embedded and compared to the vectors embedded during document ingestion for a similarity search.
-The results are output from the vector database component as a [`Data`](/data-types#data) object and parsed into text.
-This text fills the `{context}` variable in the **Prompt Template** component, which informs the **OpenAI model** component's responses.
+The results are output from the **Vector Store** component as a [`Data`](/data-types#data) object and parsed into text.
+This text fills the `{context}` variable in the **Prompt Template** component, which informs the **OpenAI** language model component's responses.

### Configure vector store parameters
-Most vector store components have the same utility within a flow, but each provider can offer different parameters and functionality.
+Most **Vector Store** components have the same utility within a flow, but each provider can offer different parameters and functionality.
Inspect a component's parameters to learn more about the inputs it accepts and how to configure it.
-Many input parameters for vector store components are hidden by default in the visual editor.
+Many input parameters for **Vector Store** components are hidden by default in the visual editor.
You can toggle parameters through the **Controls** in each [component's header menu](/concepts-components#component-menus).
For details about a specific provider's parameters, see the provider's documentation.
@@ -111,14 +110,14 @@ The following **Astra DB** component parameters are used for hybrid search:
To use hybrid search through the **Astra DB** component, do the following:
-1. Click **New Flow** > **RAG** > **Hybrid Search RAG**.
-2. In the **OpenAI** model component, add your **OpenAI API key**.
+1. Create a flow based on the **Hybrid Search RAG** template.
+2. In the **OpenAI** component, add your OpenAI API key.
3. In the **Astra DB** vector store component, add your **Astra DB Application Token**.
4. In the **Database** field, select your database.
5. In the **Collection** field, select or create a collection with hybrid search capabilities enabled.
6. In the **Playground**, enter a question about your data, such as `What are the features of my data?`
- Your query is sent to two components: an **OpenAI** model component and the **Astra DB** vector database component.
+ Your query is sent to the **OpenAI** and **Astra DB** components.
The **OpenAI** component contains a prompt for creating the lexical query from your input:
```text
@@ -129,7 +128,7 @@ To use hybrid search through the **Astra DB** component, do the following:
Avoid common keywords associated with the user's subject matter.
```
-7. To view the keywords and questions the **OpenAI** component generates from your collection, in the **OpenAI** component, click **Inspect output**.
+7. To view the keywords and questions the **OpenAI** component generates from your collection, in the **OpenAI** component, click **Inspect Output**.
```
1. Keywords: features, data, attributes, characteristics
@@ -141,9 +140,8 @@ To use hybrid search through the **Astra DB** component, do the following:
The DataFrame is passed to a **Parser** component, which parses the contents of the **Keywords** column into a string.
This string of comma-separated words is passed to the **Lexical Terms** port of the **Astra DB** component.
- Note that the **Search Query** port of the Astra DB port is connected to the **Chat Input** component from step 6.
- This **Search Query** is vectorized, and both the **Search Query** and **Lexical Terms** content are sent to the reranker at the `find_and_rerank` endpoint.
-
+ Note that the **Search Query** port of the **Astra DB** component is connected to the **Chat Input** component.
+ The search query is vectorized, and both the **Search Query** and **Lexical Terms** content are sent to the reranker at the `find_and_rerank` endpoint.
The reranker compares the vector search results against the string of terms from the lexical search.
The highest-ranked results of your hybrid search are returned to the **Playground**.
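The reranking step above can be illustrated with a small, self-contained sketch. This is a conceptual toy, not Astra DB's actual `find_and_rerank` implementation: the document only says the reranker compares vector results against the lexical terms, so the blending function, `alpha` weight, and scoring names below are illustrative assumptions.

```python
import math

def vector_score(query_vec, doc_vec):
    # Cosine similarity between the vectorized search query and a stored document vector.
    dot = sum(a * b for a, b in zip(query_vec, doc_vec))
    norm_q = math.sqrt(sum(a * a for a in query_vec))
    norm_d = math.sqrt(sum(b * b for b in doc_vec))
    return dot / (norm_q * norm_d)

def lexical_score(terms, text):
    # Fraction of the comma-separated lexical terms that appear in the document text.
    words = set(text.lower().split())
    return sum(1 for t in terms if t.lower() in words) / len(terms)

def hybrid_rank(query_vec, lexical_terms, docs, alpha=0.5):
    # docs: list of (text, vector) pairs. Blend both signals and sort best-first.
    scored = [
        (alpha * vector_score(query_vec, vec) + (1 - alpha) * lexical_score(lexical_terms, text), text)
        for text, vec in docs
    ]
    return [text for _, text in sorted(scored, reverse=True)]
```

A real reranker model is far more sophisticated, but the shape is the same: two independent signals per document, combined into a single ranking.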
@@ -270,19 +268,22 @@ For more information, see the [Chroma documentation](https://docs.trychroma.com/
Chroma DB sample flow
-1. To use this component in a flow, connect it to a component that outputs **Data** or **DataFrame**.
-This example splits text from a [URL](/components-data#url) component, and computes embeddings with the connected **OpenAI Embeddings** component. Chroma DB computes embeddings by default, but you can connect your own embeddings model, as seen in this example.
+1. To use this component in a flow, connect it to a component that outputs `Data` or `DataFrame`.
-
+ This example splits text from a [**URL** component](/components-data#url), and then computes embeddings with the connected **OpenAI Embeddings** component. Chroma DB computes embeddings by default, but you can connect your own embeddings model, as seen in this example.
+
+ 
2. In the **Chroma DB** component, in the **Collection** field, enter a name for your embeddings collection.
-3. Optionally, to persist the Chroma database, in the **Persist** field, enter a directory to store the `chroma.sqlite3` file.
+3. Optional: To persist the Chroma database, in the **Persist** field, enter a directory to store the `chroma.sqlite3` file.
This example uses `./chroma-db` to create a directory relative to where Langflow is running.
4. To load data and embeddings into your Chroma database, in the **Chroma DB** component, click **Run component**.
-:::tip
-When loading duplicate documents, enable the **Allow Duplicates** option in Chroma DB if you want to store multiple copies of the same content, or disable it to automatically deduplicate your data.
-:::
-5. To view the split data, in the **Split Text** component, click **Inspect output**.
+
+ :::tip
+ When loading duplicate documents, enable the **Allow Duplicates** option in Chroma DB if you want to store multiple copies of the same content, or disable it to automatically deduplicate your data.
+ :::
+
+5. To view the split data, in the **Split Text** component, click **Inspect Output**.
6. To query your loaded data, open the **Playground** and query your database.
Your input is converted to vector data and compared to the stored vectors in a vector similarity search.
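The vector similarity search described above can be sketched in a few lines. This is a simplified illustration of the concept, not Chroma's internals; the three-dimensional vectors and sample texts are made up for the example.

```python
import math

def cosine_similarity(a, b):
    # Higher values mean the two vectors point in more similar directions.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def similarity_search(query_vec, stored, k=2):
    # stored: list of (text, vector) pairs created at ingestion time.
    # Return the k texts whose vectors are most similar to the query vector.
    ranked = sorted(stored, key=lambda item: cosine_similarity(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

store = [
    ("Chroma persists vectors on disk", [0.9, 0.1, 0.0]),
    ("Cats sleep most of the day", [0.0, 0.2, 0.9]),
    ("Vector stores enable RAG", [0.8, 0.3, 0.1]),
]
print(similarity_search([1.0, 0.2, 0.0], store, k=2))
```

In a real flow, the query vector comes from the same embedding model used at ingestion, and the store holds thousands of chunks rather than three.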
@@ -465,7 +466,7 @@ For an example flow, see the **Graph RAG** template in Langflow.
| Name | Display Name | Info |
|------|--------------|------|
-| embedding_model | Embedding Model | Specify the embedding model. This is not required for collections embedded with [Astra vectorize](https://docs.datastax.com/en/astra-db-serverless/databases/embedding-generation.html). |
+| embedding_model | Embedding Model | Specify the embedding model. This isn't required for collections embedded with [Astra vectorize](https://docs.datastax.com/en/astra-db-serverless/databases/embedding-generation.html). |
| vector_store | Vector Store Connection | Connection to the vector store. |
| edge_definition | Edge Definition | Edge definition for the graph traversal. For more information, see the [GraphRAG documentation](https://datastax.github.io/graph-rag/reference/graph_retriever/edges/). |
| strategy | Traversal Strategies | The strategy to use for graph traversal. Strategy options are dynamically loaded from available strategies. |
@@ -560,7 +561,7 @@ For more information, see the [Chroma documentation](https://docs.trychroma.com/
| persist_directory | String | Custom base directory to save the vector store. Collections are stored under `$DIRECTORY/vector_stores/$COLLECTION_NAME`. If not specified, it uses your system's cache folder. |
| existing_collections | String | Select a previously created collection to search through its stored data. |
| embedding | Embeddings | The embedding function to use for the vector store. |
-| allow_duplicates | Boolean | If false, will not add documents that are already in the vector store. |
+| allow_duplicates | Boolean | If false, the component won't add documents that are already in the vector store. |
| search_type | String | Type of search to perform: "Similarity" or "MMR". |
| ingest_data | Data/DataFrame | Data to store. It is embedded and indexed for semantic search. |
| search_query | String | Enter text to search for similar content in the selected collection. |
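The `allow_duplicates` behavior in the table above can be sketched as a simple ingestion check. This is a hypothetical illustration of the described behavior, not Chroma's or Langflow's actual code, which compares stored documents rather than an in-memory list:

```python
def add_documents(store, docs, allow_duplicates=False):
    # store: list of already-ingested document texts.
    # With allow_duplicates=False, skip any document already present.
    for doc in docs:
        if not allow_duplicates and doc in store:
            continue
        store.append(doc)
    return store
```

With `allow_duplicates` disabled, re-running ingestion on the same source doesn't grow the collection.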
diff --git a/docs/docs/Components/mcp-client.mdx b/docs/docs/Components/mcp-client.mdx
index 1c07a4d18..12aeb14ea 100644
--- a/docs/docs/Components/mcp-client.mdx
+++ b/docs/docs/Components/mcp-client.mdx
@@ -9,13 +9,13 @@ import Icon from "@site/src/components/icon";
Langflow integrates with the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) as both an MCP server and an MCP client.
-This page describes how to use Langflow as an MCP client with the [MCP Tools](#use-the-mcp-tools-component) component and the [MCP servers](#manage-connected-mcp-servers) page in **Settings**.
+This page describes how to use Langflow as an MCP client with the [**MCP Tools** component](#use-the-mcp-tools-component) and connected [MCP servers](#manage-connected-mcp-servers).
For information about using Langflow as an MCP server, see [Use Langflow as an MCP server](/mcp-server).
## Use the MCP tools component
-The **MCP Tools** component connects to a [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) server and exposes the MCP server's tools as tools for [Langflow agents](/agents).
+The **MCP Tools** component connects to an MCP server so that a [Langflow agent](/agents) can use the server's tools when responding to user queries.
This component has two modes, depending on the type of server you want to access:
@@ -43,7 +43,7 @@ This component has two modes, depending on the type of server you want to access
3. To use environment variables in your server command, enter each variable in the **Env** fields as you would define them in a script, such as `VARIABLE=value`.
:::important
- Langflow passes environment variables from the `.env` file to MCP, but it doesn't pass global variables declared in the Langflow UI.
+ Langflow passes environment variables from the `.env` file to MCP, but it doesn't pass global variables declared in your Langflow **Settings**.
To define an MCP server environment variable as a global variable, add it to Langflow's `.env` file at startup.
For more information, see [global variables](/configuration-global-variables).
:::
@@ -54,16 +54,17 @@ This component has two modes, depending on the type of server you want to access
At this point, the **MCP Tools** component is serving a tool, but nothing is using the tool. The next steps explain how to make the tool available to an [**Agent** component](/components-agents) so that the agent can use the tool in its responses.
-5. In the [component menu](/concepts-components#component-menus), enable **Tool mode** so you can use the component with an agent.
+5. In the [component's header menu](/concepts-components#component-menus), enable **Tool mode** so you can use the component with an agent.
6. Connect the **MCP Tools** component's **Toolset** port to an **Agent** component's **Tools** port.
- If not already present in your flow, make sure you also attach **Chat input** and **Chat output** components to the **Agent** component.
+ If not already present in your flow, make sure you also attach **Chat Input** and **Chat Output** components to the **Agent** component.
- 
+ 
-7. Test your flow to make sure the MCP server is connected and the selected tool is used by the agent: Click **Playground**, and then enter a prompt that uses the tool you connected through the **MCP Tools** component.
-For example, if you use `mcp-server-fetch` with the `fetch` tool, you could ask the agent to summarize recent tech news. The agent calls the MCP server function `fetch`, and then returns the response.
+7. Test your flow to make sure the MCP server is connected and the selected tool is used by the agent. Open the **Playground**, and then enter a prompt that uses the tool you connected through the **MCP Tools** component.
+
+ For example, if you use `mcp-server-fetch` with the `fetch` tool, you could ask the agent to summarize recent tech news. The agent calls the MCP server function `fetch`, and then returns the response.

8. If you want the agent to be able to use more tools, repeat these steps to add more tools components with different servers or tools.
@@ -77,11 +78,15 @@ To leverage flows-as-tools, use the **MCP Tools** component in **Server-Sent Eve
1. Add an **MCP Tools** component to your flow, click **Add MCP Server**, and then select **SSE** mode.
2. In the **MCP SSE URL** field, modify the default address to point at your Langflow server's SSE endpoint. The default value for other Langflow installations is `http://localhost:7860/api/v1/mcp/sse`.
In SSE mode, all flows available from the targeted server are treated as tools.
-3. In the [component menu](/concepts-components#component-menus), enable **Tool mode** so you can use the component with an agent.
-4. Connect the **MCP Tools** component's **Toolset** port to an **Agent** component's **Tools** port. If not already present in your flow, make sure you also attach **Chat input** and **Chat output** components to the **Agent** component.
-
-5. Test your flow to make sure the agent uses your flows to respond to queries: Click **Playground**, and then enter a prompt that uses a flow that you connected through the **MCP Tools** component.
-6. If you want the agent to be able to use more flows, repeat these steps to add more **MCP Tools** components with different servers or tools selected.
+3. In the [component's header menu](/concepts-components#component-menus), enable **Tool Mode** so you can use the component with an agent.
+4. Connect the **MCP Tools** component's **Toolset** port to an **Agent** component's **Tools** port.
+5. If not already present in your flow, make sure you also attach **Chat Input** and **Chat Output** components to the **Agent** component.
+
+ 
+
+6. Test your flow to make sure the agent uses your flows to respond to queries. Open the **Playground**, and then enter a prompt that uses a flow that you connected through the **MCP Tools** component.
+
+7. If you want the agent to be able to use more tools, repeat these steps to add more tools components with different servers or tools.
## MCP Tools parameters
@@ -93,13 +98,13 @@ In SSE mode, all flows available from the targeted server are treated as tools.
## Manage connected MCP servers
-The **Settings > MCP Servers** page manages the MCP servers connected to the Langflow client.
+To manage MCP servers connected to your Langflow client, go to **Settings**, and then click **MCP Servers**.
-To add a new MCP server, click **Add MCP Server** to open the configuration pane, and follow the steps in [Use the MCP Tools component](#use-the-mcp-tools-component).
+To add a new MCP server, click **Add MCP Server**, and then follow the steps in [Use the MCP Tools component](#use-the-mcp-tools-component) to configure the connection.
-Click **More** to configure or delete the MCP server.
+Click **More** to edit or delete an MCP server.
## See also
- [Use Langflow as an MCP server](/mcp-server)
-- [Use a DataStax Astra DB MCP server with the MCP tools component](/mcp-component-astra)
\ No newline at end of file
+- [Use a DataStax Astra DB MCP server with the MCP Tools component](/mcp-component-astra)
\ No newline at end of file
diff --git a/docs/docs/Concepts/concepts-components.mdx b/docs/docs/Concepts/concepts-components.mdx
index cf6587858..fc453ba65 100644
--- a/docs/docs/Concepts/concepts-components.mdx
+++ b/docs/docs/Concepts/concepts-components.mdx
@@ -9,20 +9,20 @@ Components are the building blocks of your flows.
Like classes in an application, each component is designed for a specific use case or integration.
:::tip
-Langflow provides keyboard shortcuts for the **Workspace**.
+Langflow provides keyboard shortcuts for the workspace.
In the Langflow header, click your profile icon, select **Settings**, and then click **Shortcuts** to view the available shortcuts.
:::
## Add a component to a flow {#component-menus}
-To add a component to a flow, drag the component from the **Components** menu to the [**Workspace**](/concepts-overview).
+To add a component to a flow, drag the component from the **Components** menu to the [workspace](/concepts-overview).
-The **Components** menu is organized by component type, and some components are hidden by default:
+The **Components** menu is organized by component type or provider, and some components are hidden by default.
-* **Beta components**: These are Langflow's core components. They are grouped by purpose, such as **Inputs** or **Data**. Be aware that these components are in beta and not suitable for production workloads.
-* **Legacy components**: You can still use these components, but they are no longer supported. Legacy components are hidden by default; click **Component settings** to expose legacy components.
-* **Bundles**: These components support specific integrations, and they are grouped by provider.
+* **Core**: The categories near the top of the **Components** menu are Langflow's core components. They are grouped by purpose, such as **Inputs and Outputs** or **Data**. **Beta** components are newly released and still in development.
+* **Bundles**: These components support specific third-party integrations, and they are grouped by provider.
+* **Legacy**: You can still use these components, but they are no longer supported. Legacy components are hidden by default; click **Component settings** to expose legacy components.
### Configure a component
@@ -32,7 +32,7 @@ Each component has inputs, outputs, parameters, and controls related to the comp
By default, components show only required and common options.
To access additional settings and controls, including meta settings, use the component's header menu.
-To access a component's header menu, click the component in your **Workspace**.
+To access a component's header menu, click the component in your workspace.

@@ -47,7 +47,7 @@ For all other options, including **Delete** and **Duplicate** controls, click **Edit**.
+To modify a component's name or description, click the component in the workspace, and then click **Edit**.
Component descriptions accept Markdown syntax.
### Run a component
@@ -55,7 +55,7 @@ Component descriptions accept Markdown syntax.
To run a single component, click **Run component**.
A **Last Run** value indicates that the component ran successfully.
-Running a single component is different from running an entire flow. In a single component run, the `build_vertex` function is called, which builds and runs only the single component with direct inputs provided through the UI (the `inputs_dict` parameter). The `VertexBuildResult` data is passed to the `build_and_run` method that calls the component's `build` method and runs it. Unlike running an entire flow, running a single component doesn't automatically execute its upstream dependencies.
+Running a single component is different from running an entire flow. In a single component run, the `build_vertex` function is called, which builds and runs only the single component with direct inputs provided through the visual editor (the `inputs_dict` parameter). The `VertexBuildResult` data is passed to the `build_and_run` method that calls the component's `build` method and runs it. Unlike running an entire flow, running a single component doesn't automatically execute its upstream dependencies.
### Inspect component output and logs
@@ -72,7 +72,7 @@ Use the freeze option if you expect consistent output from a component _and all
Freezing a component prevents that component and all upstream components from re-running, and it preserves the last output state for those components.
Any future flow runs use the preserved output.
-To freeze a component, click the component in the **Workspace** to expose the component's header menu, click **Show More**, and then select **Freeze**.
+To freeze a component, click the component in the workspace to expose the component's header menu, click **Show More**, and then select **Freeze**.
## Component ports
@@ -83,7 +83,7 @@ Ports either accept input or produce output of a specific data type.
You can infer the data type from the field the port is attached to or from the [port's color](#port-colors).
For example, the **System Message** field accepts [message data](/data-types#message), as illustrated by the blue port icon: .
-
+
When building flows, connect output ports to input ports of the same type (color) to transfer that type of data between two components.
For information about the programmatic representation of each data type, see [Langflow data types](/data-types).
@@ -99,20 +99,23 @@ For information about the programmatic representation of each data type, see [La
### Dynamic ports
Some components have ports that are dynamically added or removed.
-For example, the **Prompt** component accepts inputs within curly braces, and new ports are opened when a value within curly braces is detected in the **Template** field.
+For example, the **Prompt Template** component accepts inputs within curly braces, and new ports are opened when a value within curly braces is detected in the **Template** field.
-
+
### Output type selection
-Some components include dropdown menus to select the type of output sent to the next component.
+All components produce output that is either sent to another component in the flow or returned as the final flow result.
-For example, the **Language Model** component includes **Model Response** or **Language Model** outputs.
-The **Model Response** output sends a [Message](/data-types#message) output on to another Message port.
-The **Language Model** output can be connected to components like [Structured output](/components-processing#structured-output) to use the LLM to power the component's reasoning.
+Some components can produce multiple types of output:
-In the component code, `group_outputs` is set to `False` by default, which forces the outputs into the same dropdown menu, and only allows one output to be selected.
-When `group_outputs=True`, outputs are displayed individually.
+* If the component emits all types at once, the component has multiple output ports in the visual editor. In component code, this is represented by `group_outputs=True`.
+
+* If the component emits only one type at a time, you must select the output type by clicking the output label near the output port, and then selecting the desired output type. In component code, this is represented by `group_outputs=False` or omitting the `group_outputs` parameter.
+
+For example, the **Language Model** component can output _either_ a **Model Response** or **Language Model**.
+The **Model Response** output produces [`Message`](/data-types#message) data that can be passed to another component's `Message` port.
+The **Language Model** output must be connected to a component with a **Language Model** input, such as the [**Structured Output** component](/components-processing#structured-output), that uses the attached LLM to power the receiving component's reasoning.

@@ -136,12 +139,12 @@ The following table lists the component data types and their corresponding port
## Component code
-You can edit components in the visual editor and in code. When editing a flow, select a component, and then click **Code** to see and edit the component's underlying Python code.
+You can edit components in the [workspace](/concepts-overview#workspace) and in code. When editing a flow, select a component, and then click **Code** to see and edit the component's underlying Python code.
All components have underlying code that determines how you configure them and what actions they can perform.
In the context of creating and running flows, component code does the following:
-* Determines what configuration options to show in the Langflow UI.
+* Determines what configuration options to show in the visual editor.
* Validates inputs based on the component's defined input types.
* Processes data using the configured parameters, methods, and functions.
* Passes results to the next component in the flow.
@@ -149,9 +152,9 @@ In the context of creating and running flows, component code does the following:
All components inherit from a base `Component` class that defines the component's interface and behavior.
For example, the [**Recursive Character Text Splitter** component](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/components/langchain_utilities/recursive_character.py) is a child of the [`LCTextSplitterComponent`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/base/textsplitters/model.py) class.
-Each component's code includes definitions for inputs and outputs, which are represented in the **Workspace** as [component ports](/concepts-components#component-ports).
+Each component's code includes definitions for inputs and outputs, which are represented in the workspace as [component ports](#component-ports).
For example, the `RecursiveCharacterTextSplitter` has four inputs. Each input definition specifies the input type, such as `IntInput`, as well as the encoded name, display name, description, and other parameters for that specific input.
-These values determine the component settings, such as display names and tooltips in the Langflow UI.
+These values determine the component settings, such as display names and tooltips in the visual editor.
```python
inputs = [
@@ -216,7 +219,7 @@ In other words, an individual instance of a component retains the version number
### Update component versions
-When editing a flow in the **Workspace**, Langflow notifies you if a component's workspace version is behind the database version so you can update the component's workspace version:
+When editing a flow in the workspace, Langflow notifies you if a component's workspace version is behind the database version so you can update the component's workspace version:
* **Update ready**: This notification means the component update contains no breaking changes.
* **Update available**: This notification means the component update might contain breaking changes.
@@ -236,7 +239,7 @@ Components are updated to the latest available version, based on the version of
## Group components
-Multiple components can be grouped into a single component for reuse. This is useful for organizing large flows by combining related components together, such as a RAG **Agent** component and an associated vector database component.
+Multiple components can be grouped into a single component for reuse. This is useful for organizing large flows by combining related components together, such as a RAG **Agent** component and its associated **Embeddings Model** and **Vector Store** components.
1. Hold Shift, and then click and drag to highlight all components you want to merge. Components must be completely within the selection area to be merged.
@@ -246,6 +249,6 @@ Multiple components can be grouped into a single component for reuse. This is us
Grouped components are configured and managed as a single component, including the component name, code, and settings.
-To ungroup the components, click the component in the **Workspace** to expose the component's header menu, click **Show More**, and then select **Ungroup**.
+To ungroup the components, click the component in the workspace to expose the component's header menu, click **Show More**, and then select **Ungroup**.
-If you want to reuse this grouping in other flows, click the component in the **Workspace** to expose the component's header menu, click **Show More**, and then select **Save** to save the component to the **Components** menu.
+If you want to reuse this grouping in other flows, click the component in the workspace to expose the component's header menu, click **Show More**, and then select **Save** to save the component to the **Components** menu.
\ No newline at end of file
diff --git a/docs/docs/Concepts/concepts-file-management.mdx b/docs/docs/Concepts/concepts-file-management.mdx
index 659a431ed..584c1f3c1 100644
--- a/docs/docs/Concepts/concepts-file-management.mdx
+++ b/docs/docs/Concepts/concepts-file-management.mdx
@@ -18,7 +18,7 @@ You can also manage all files that have been uploaded to your Langflow server.
1. Navigate to Langflow file management:
- * In the Langflow UI, on the [**Projects** page](/concepts-flows#projects) page, click **My Files** below the list of projects.
+ * In Langflow, on the [**Projects** page](/concepts-flows#projects), click **My Files** below the list of projects.
* From a browser, navigate to your Langflow server's `/files` endpoint, such as `http://localhost:7860/files`. Modify the base URL as needed for your Langflow server.
* For programmatic file management, use the [Langflow API files endpoints](/api-files). However, the following steps assume you're using the file management UI.
@@ -49,7 +49,7 @@ For example, add a **File** component to your flow, click **Select files**, and
This list includes all files in your server's file management system, but you can only select [file types that are supported by the **File** component](/components-data#file).
If you need another file type, you must use a different component that supports that file type, or you need to convert it to a supported type before uploading it.
-For more information about the **File** component and other data loading components, see [Data components](/components-data).
+For more information about the **File** component and other data loading components, see [**Data** components](/components-data).
### Load files at runtime
@@ -98,5 +98,5 @@ To modify this value, change the `--max-file-size-upload` [environment variable]
## See also
-* [Data components](/components-data)
-* [Processing components](/components-processing)
\ No newline at end of file
+* [**Data** components](/components-data)
+* [**Processing** components](/components-processing)
\ No newline at end of file
diff --git a/docs/docs/Concepts/concepts-flows-import.mdx b/docs/docs/Concepts/concepts-flows-import.mdx
index a5eecd452..b7109c011 100644
--- a/docs/docs/Concepts/concepts-flows-import.mdx
+++ b/docs/docs/Concepts/concepts-flows-import.mdx
@@ -9,18 +9,22 @@ You can export flows to transfer them between Langflow instances, share them wit
## Export a flow
-There are three ways to export a flow:
+There are three ways to export flows:
-* From the **Projects** page, find the flow you want to export, click **More**, and then select **Export**.
-* When editing a flow, click **Share**, and then click **Export**.
-* Use the Langflow API [`/flows/download`](/api-flows#export-flows) endpoint.
+* **Export from projects**: On the [**Projects** page](/concepts-flows#projects), find the flow you want to export, click **More**, and then select **Export**. To export all flows in a project, click **Options** on the **Projects** list, and then select **Download**.
-An exported flow is downloaded to your local machine as a JSON file named `FLOW_NAME.json`.
+* **Export by sharing**: When editing a flow, click **Share**, and then click **Export**.
+
+* **Export with the Langflow API**: To export one flow, use the [`/flows/download`](/api-flows#export-flows) endpoint.
+To export all flows in a project, use the [`/projects/download`](/api-projects#export-a-project) endpoint.
+
+Exported flows are downloaded to your local machine as JSON files named `FLOW_NAME.json`.
+If you export an entire project, the JSON files are packaged in a zip archive.
For more information, see [Langflow JSON file contents](#langflow-json-file-contents).
### Save with my API keys
-When exporting from the Langflow UI, you can select **Save with my API keys** to export the flow _and_ any defined API key variables.
+When exporting from the **Projects** page or **Share** menu, you can select **Save with my API keys** to export the flow _and_ any defined API key variables.
Non-API key variables are included in the export regardless of the **Save with my API keys** setting.
:::warning
@@ -32,23 +36,14 @@ If your key is stored in a Langflow global variable, **Save with my API keys** e
When you or another user import the flow to another Langflow instance, that instance must have Langflow global variables with the same names and valid values in order to run the flow successfully.
If any variables are missing or invalid, those variables must be created or edited after importing the flow.
-### Export all flows
-
-If you want to export all flows within a project, do either of the following:
-
-* Go to the **Projects** page, find the project you want to export, click **Options**, and then select **Download**.
-* Use the Langflow API [`/projects/download`](/api-projects#export-a-project) endpoint.
-
-The project's flows are downloaded as JSON files in a zip archive.
-
## Import a flow
You can import Langflow JSON files from your local machine in the following ways:
-* From the **Projects** page, click **Upload a flow**.
-* Drag and drop Langflow JSON files from your file explorer into your Langflow window to import a flow from any Langflow page.
-* Use the Langflow API [`/flows/upload/`](/api-flows#import-flows) endpoint to upload one JSON file.
-* Use the Langflow API [`/projects/upload`](/api-projects#import-a-project) endpoint to upload a Langflow project zip file.
+* **Import to projects**: On the **Projects** page, click **Upload a flow**, and then select the Langflow JSON file to import.
+* **Import anywhere**: Drag and drop Langflow JSON files from your file explorer into your Langflow window to import a flow from any Langflow page.
+* **Import with the Langflow API**: To import one Langflow JSON file, use the [`/flows/upload/`](/api-flows#import-flows) endpoint.
+To import a zip archive of Langflow JSON files, use the [`/projects/upload`](/api-projects#import-a-project) endpoint.
### Run an imported flow
@@ -149,39 +144,42 @@ The `OpenAIModel` component accepts the `Message` type at the `input_value` fiel
Additional information about the flow is stored in the root `data` object.
-* Metadata and project information including the name, description, and `last_tested_version` of the flow.
-```json
-{
- "name": "Basic Prompting",
- "description": "Perform basic prompting with an OpenAI model.",
- "tags": ["chatbots"],
- "id": "1511c230-d446-43a7-bfc3-539e69ce05b8",
- "last_tested_version": "1.0.19.post2",
- "gradient": "2",
- "icon": "Braces"
-}
-```
+* Metadata and project information including the name, description, and `last_tested_version` of the flow:
-* Visual information about the flow defining the initial position of the flow in the workspace.
-```json
-"viewport": {
- "x": -37.61270157375441,
- "y": -155.91266341888854,
- "zoom": 0.7575251406952855
-}
-```
+ ```json
+ {
+ "name": "Basic Prompting",
+ "description": "Perform basic prompting with an OpenAI model.",
+ "tags": ["chatbots"],
+ "id": "1511c230-d446-43a7-bfc3-539e69ce05b8",
+ "last_tested_version": "1.0.19.post2",
+ "gradient": "2",
+ "icon": "Braces"
+ }
+ ```
+
+* Visual information about the flow defining the initial position of the flow in the workspace:
+
+ ```json
+ "viewport": {
+ "x": -37.61270157375441,
+ "y": -155.91266341888854,
+ "zoom": 0.7575251406952855
+ }
+ ```
* Notes are comments that help you understand the flow within the workspace.
They may contain links, code snippets, and other information.
Notes are written in Markdown and stored as `node` objects.
-```json
-{
- "id": "undefined-kVLkG",
- "node": {
- "description": "## 📖 README\nPerform basic prompting with an OpenAI model.\n\n#### Quick Start\n- Add your **OpenAI API key** to the **OpenAI Model**\n- Open the **Playground** to chat with your bot.\n..."
- }
-}
-```
+
+ ```json
+ {
+ "id": "undefined-kVLkG",
+ "node": {
+ "description": "## 📖 README\nPerform basic prompting with an OpenAI model.\n\n#### Quick Start\n- Add your **OpenAI API key** to the **OpenAI Model**\n- Open the **Playground** to chat with your bot.\n..."
+ }
+ }
+ ```
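These metadata fields are useful for a quick pre-import check. The following is a minimal sketch, not Langflow API code, that parses the metadata excerpt shown above; in practice, you would load the downloaded `FLOW_NAME.json` file with `json.load()`:

```python
import json

# Excerpt of the exported flow metadata shown above.
# In practice, read this from the downloaded FLOW_NAME.json file.
flow_metadata = json.loads("""
{
  "name": "Basic Prompting",
  "description": "Perform basic prompting with an OpenAI model.",
  "tags": ["chatbots"],
  "id": "1511c230-d446-43a7-bfc3-539e69ce05b8",
  "last_tested_version": "1.0.19.post2"
}
""")

# Check which Langflow version last tested the flow before importing it.
print(f"{flow_metadata['name']} (last tested: {flow_metadata['last_tested_version']})")
```

Comparing `last_tested_version` against your server's Langflow version can help explain compatibility issues after an import.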
## See also
diff --git a/docs/docs/Concepts/concepts-flows.mdx b/docs/docs/Concepts/concepts-flows.mdx
index d4e767b66..29e5b06ee 100644
--- a/docs/docs/Concepts/concepts-flows.mdx
+++ b/docs/docs/Concepts/concepts-flows.mdx
@@ -10,7 +10,7 @@ Flows receive input, process it, and produce output.
Flows consist of _components_ that represent individual steps in your application's workflow.
-
+
Langflow flows are fully serializable and can be saved and loaded from the file system where Langflow is installed.
@@ -20,7 +20,7 @@ To try building and running a flow in a few minutes, see the [Langflow quickstar
## Create a flow
-From the [**Projects** page](#projects), there are four ways to create a flow in the Langflow UI:
+From the [**Projects** page](#projects), there are four ways to create a flow:
* **Create a blank flow**: Select a project, click **New Flow**, and then click **Blank Flow**.
@@ -43,17 +43,17 @@ From the [**Projects** page](#projects), there are four ways to create a flow in
* **Import a flow**: See [Import and export flows](/concepts-flows-import).
-You can also create a flow with the [Langflow API](/api-flows), but the Langflow team recommends using the visual editor until you are familiar with flow creation.
+You can also create a flow with the [Langflow API](/api-flows), but the Langflow team recommends using the [visual editor](/concepts-overview) until you are familiar with flow creation.
### Add components
-Flows consist of [components](/concepts-components), which are nodes that you configure and connect in the Langflow [visual editor](/concepts-overview).
+Flows consist of [components](/concepts-components), which are nodes that you configure and connect in the [workspace](/concepts-overview#workspace).
Each component performs a specific task, like serving an AI model or connecting a data source.
Drag and drop components from the **Components** menu to add them to your flow.
Then, configure the component settings and connect the components together.
-
+
Each component has configuration settings and options. Some of these are common to all components, and some are unique to specific components.
@@ -89,12 +89,17 @@ To create a project, click **Create new p

From the **Projects** page, you can manage flows within each of your projects:
+
* **View flows in a project**: Select the project name in the **Projects** list.
* **Create flows**: See [Create a flow](#create-a-flow).
* **Edit a flow's name and description**: Locate the flow you want to edit, click **More**, and then select **Edit details**.
* **Delete a flow**: Locate the flow you want to delete, click **More**, and then select **Delete**.
* **Serve flows as MCP tools**: See [Use Langflow as an MCP server](/mcp-server).
+:::tip
+To get back to the **Projects** page after editing a flow, click the project name or Langflow icon in the Langflow header.
+:::
+
## Flow storage and logs
By default, flows and [flow logs](/logging) are stored on local disk at the following default locations:
diff --git a/docs/docs/Concepts/concepts-overview.mdx b/docs/docs/Concepts/concepts-overview.mdx
index 10e39b8fc..7376c97f9 100644
--- a/docs/docs/Concepts/concepts-overview.mdx
+++ b/docs/docs/Concepts/concepts-overview.mdx
@@ -17,12 +17,12 @@ To try building and running a flow in a few minutes, see the [Langflow quickstar
## Workspace
-When building a [flow](/concepts-flows), you primarily interact with the **Workspace**.
+When building a [flow](/concepts-flows), you primarily interact with the workspace.
This is where you add [components](/concepts-components), configure them, and attach them together.

-From the **Workspace**, you can also access the [**Playground**](#playground), [**Share** menu](#share-menu), and [**Logs**](/concepts-flows#flow-storage-and-logs).
+From the workspace, you can also access the [**Playground**](#playground), [**Share** menu](#share-menu), and [**Logs**](/concepts-flows#flow-storage-and-logs).
### Workspace gestures and interactions
@@ -35,6 +35,7 @@ From the **Workspace**, you can also access the [**Playground**](#playground), [
- To lock the visual position of the components, click **Lock**.
- To zoom, use any of the following options:
+
- Scroll up or down on the mouse or trackpad.
- Click **Zoom In** or **Zoom Out**.
- Click **Fit To Zoom** to scale the zoom level to show the entire flow.
@@ -45,16 +46,16 @@ From the **Workspace**, you can also access the [**Playground**](#playground), [
If your flow has a **Chat Input** component, you can use the **Playground** to run your flow, chat with your flow, view inputs and outputs, and modify the LLM's memories to tune the flow's responses in real time.
-To try this for yourself, create a flow based on the **Basic Prompting** template, and then click **Playground** when editing the flow in the **Workspace**.
+To try this for yourself, create a flow based on the **Basic Prompting** template, and then click **Playground** when editing the flow in the workspace.
-
+
If you have an **Agent** component in your flow, the **Playground** displays its tool calls and outputs so you can monitor the agent's tool use and understand the reasoning behind its responses.
-To try an agentic flow in the **Playground**, use the **Simple Agent** template or see the [Langflow quickstart](/get-started-quickstart).
+To try an agent flow in the **Playground**, use the **Simple Agent** template or the [Langflow quickstart](/get-started-quickstart).
-
+
-For more information, see [Playground](/concepts-playground).
+For more information, see [Test flows in the Playground](/concepts-playground).
## Share {#share-menu}
diff --git a/docs/docs/Concepts/concepts-playground.mdx b/docs/docs/Concepts/concepts-playground.mdx
index f7bd230cc..69dbfd5c0 100644
--- a/docs/docs/Concepts/concepts-playground.mdx
+++ b/docs/docs/Concepts/concepts-playground.mdx
@@ -10,7 +10,7 @@ import Icon from "@site/src/components/icon";
Langflow's **Playground** is a dynamic interface you can use to test your LLM-based flows in real-time.
You can test how a flow responds to different inputs, review and modify memories, and monitor flow output and logic.
-For example, you can make sure agentic flows use the appropriate tools to respond to different inputs.
+For example, you can make sure agent flows use the appropriate tools to respond to different inputs.
The **Playground** allows you to quickly iterate over your flow's logic and behavior, making it easier to prototype and refine your applications.
@@ -22,7 +22,7 @@ Then, if your flow has a [**Chat Input** component](/components-io), enter a pro
:::tip
If there is no message input field in the **Playground**, make sure your flow has a **Chat Input** component that is connected, directly or indirectly, to the **Input** port of a **Language Model** or **Agent** component.
-Because the **Playground** is designed for flows that use an LLM in a query-and-response format, such as chatbots and agents, a flow must have **Chat Input**, **Language Model**/**Agent**, and **Chat Output** components to be fully supported by the **Playground**'s chat interface
+Because the **Playground** is designed for flows that use an LLM in a query-and-response format, such as chatbots and agents, a flow must have **Chat Input**, **Language Model**/**Agent**, and **Chat Output** components to be fully supported by the **Playground** chat interface.
For flows that require another type of input, such as a webhook event, file upload, or text input, you can [use the Langflow API to trigger the flow](/api-flows-run), and then open the **Playground** to review the LLM activity for the flow run, if applicable.
:::
@@ -38,13 +38,13 @@ The `build` function allows components to execute logic at runtime. For example,
-### Review Agent logic
+### Review agent logic
If your flow has an **Agent** component, the **Playground** prints the tools used by the agent and the output from each tool.
This helps you monitor the agent's tool use and understand the logic behind the agent's responses.
For example, the following agent used a connected `fetch_content` tool to perform a web search:
-
+
### View chat history {#view-chat-history}
@@ -89,7 +89,7 @@ You can set custom session IDs in the visual editor and programmatically.
-In your [input and output components](/components-io), use the **Session ID** field:
+In your [**Input and Output** components](/components-io), use the **Session ID** field:
1. Click the component where you want to set a custom session ID.
2. In the [component's header menu](/concepts-components#component-menus), click **Controls**.
@@ -150,7 +150,7 @@ The user can interact with the flow's chat input and output and view the results
To share a flow's **Playground** with another user, do the following:
1. In Langflow, open the flow you want share.
-2. From the **Workspace**, click **Share**, and then enable **Shareable Playground**.
+2. In the [workspace](/concepts-overview#workspace), click **Share**, and then enable **Shareable Playground**.
3. Click **Shareable Playground** again to open the **Playground** window.
This window's URL is the flow's **Shareable Playground** address, such as `https://3f7c-73-64-93-151.ngrok-free.app/playground/d764c4b8-5cec-4c0f-9de0-4b419b11901a`.
4. Send the URL to another user to give them access to the flow's **Playground**.
diff --git a/docs/docs/Concepts/concepts-publish.mdx b/docs/docs/Concepts/concepts-publish.mdx
index 5963072d7..a15cf289e 100644
--- a/docs/docs/Concepts/concepts-publish.mdx
+++ b/docs/docs/Concepts/concepts-publish.mdx
@@ -76,7 +76,7 @@ You can create one flow and use it for multiple applications by passing applicat
In the **API access** pane, click **Input Schema** to add `tweaks` to the request payload in a flow's code snippets.
Changes to a flow's **Input Schema** are saved exclusively as tweaks for that flow's **API access** code snippets.
-These tweaks don't change the flow parameters set in the **Workspace**, and they don't apply to other flows.
+These tweaks don't change the flow parameters set in the [workspace](/concepts-overview#workspace), and they don't apply to other flows.
Adding tweaks through the **Input Schema** can help you troubleshoot formatting issues with tweaks that you manually added to Langflow API requests.
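As an illustration of where tweaks sit in a flow run request body, the following sketch builds a payload with one tweak. The component ID key (`ChatInput-abc12`) and field values are placeholders; copy the real IDs and fields from your flow's **Input Schema** code snippets:

```python
import json

# Sketch of a flow run payload with a "tweaks" object.
# "ChatInput-abc12" is a placeholder component ID; use the IDs
# generated for your own flow's components.
payload = {
    "input_value": "Hello",
    "input_type": "chat",
    "output_type": "chat",
    "tweaks": {
        "ChatInput-abc12": {
            "session_id": "customer_1234",
        },
    },
}

print(json.dumps(payload, indent=2))
```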
@@ -531,4 +531,4 @@ For more information, see [Use Langflow as an MCP server](/mcp-server) and [Use
* [Import and export flows](/concepts-flows-import)
* [Files endpoints](/api-files)
-* [Use the Playground](/concepts-playground)
\ No newline at end of file
+* [Test flows in the Playground](/concepts-playground)
\ No newline at end of file
diff --git a/docs/docs/Concepts/concepts-voice-mode.mdx b/docs/docs/Concepts/concepts-voice-mode.mdx
index 29f729357..6dfdf8650 100644
--- a/docs/docs/Concepts/concepts-voice-mode.mdx
+++ b/docs/docs/Concepts/concepts-voice-mode.mdx
@@ -54,7 +54,7 @@ This option changes both the expected input language and the response language.
10. Speak into your microphone to start the chat.
- If configured correctly, the waveform in the voice mode dialog registers your input, and then the agent's logic and response are described verbally and in the **Playground**.
+ If configured correctly, the waveform registers your input, and then the agent's logic and response are described verbally and in the **Playground**.
## Develop applications with websockets endpoints
diff --git a/docs/docs/Concepts/data-types.mdx b/docs/docs/Concepts/data-types.mdx
index 1703a717f..430d2057a 100644
--- a/docs/docs/Concepts/data-types.mdx
+++ b/docs/docs/Concepts/data-types.mdx
@@ -19,7 +19,7 @@ For example **Data** ports, represented by emit or ingest vector embeddings to support functions like similarity search.
-The `Embeddings` data type is used specifically by components that either produce or consume vector embeddings, such as the [embedding model components](/components-embedding-models) and [vector store components](/components-vector-stores).
+The `Embeddings` data type is used specifically by components that either produce or consume vector embeddings, such as the [**Embedding Model** components](/components-embedding-models) and [**Vector Store** components](/components-vector-stores).
-For example, the **Embedding Model** component outputs `Embeddings` data that you can connect to an **Embedding** input port on a vector store component.
+For example, **Embedding Model** components output `Embeddings` data that you can connect to an **Embedding** input port on a **Vector Store** component.
For information about the underlying Python classes that produce `Embeddings`, see the [LangChain Embedding models documentation](https://python.langchain.com/docs/integrations/text_embedding/).
## LanguageModel
-The `LanguageModel` type is a specific data type that can be produced by language model components and accepted by components that use an LLM.
+The `LanguageModel` type is a specific data type that can be produced by **Language Model** components and accepted by components that use an LLM.
-When you change a language model component's output type from **Model Response** to **Language Model**, the component's output port changes from a **Message** port to a **Language Model** port .
+When you change a **Language Model** component's output type from **Model Response** to **Language Model**, the component's output port changes from a **Message** port to a **Language Model** port.
Then, you connect the outgoing **Language Model** port to a **Language Model** input port on a compatible component, such as a **Smart Function** component.
-For more information about using language model components in flows and toggling `LanguageModel` output, see [**Language Model** components](/components-models#language-model-output-types).
+For more information about using these components in flows and toggling `LanguageModel` output, see [**Language Model** components](/components-models#language-model-output-types).
-LanguageModel is an instance of LangChain ChatModel
+`LanguageModel` data is an instance of the LangChain `ChatModel` class.
@@ -179,7 +179,7 @@ This data type is used by many components.
Components that accept or produce `Message` data may not include all attributes in the incoming or outgoing `Message` data.
As long as the data is compatible with the `Message` schema, it can be valid.
-When building flows, focus on the fields shown on each component in the visual editor, rather than the data types passed between components.
+When building flows, focus on the fields shown on each component in the workspace, rather than the data types passed between components.
The details of a particular data type are often only relevant when you are debugging a flow or component that isn't producing the expected output.
For example, a **Chat Input** component only requires the content of the **Input Text** (`input_value`) field.
@@ -223,23 +223,23 @@ Some common attributes include the following:
Not all attributes are required, and some components accept message-compatible input, such as raw text input.
The strictness depends on the component.
-### Message data in Input/Output components
+### Message data in Input and Output components
-In flows with [**Chat Input/Output** components](/components-io#chat-io), `Message` data provides a consistent structure for chat interactions, and it is ideal for chatbots, conversational analysis, and other use cases based on a dialog with an LLM or agent.
+In flows with [**Chat Input and Output** components](/components-io#chat-io), `Message` data provides a consistent structure for chat interactions, and it is ideal for chatbots, conversational analysis, and other use cases based on a dialogue with an LLM or agent.
In these flows, the **Playground** chat interface prints only the `Message` attributes that are relevant to the conversation, such as `text`, `files`, and error messages from `content_blocks`.
To see all `Message` attributes, inspect the message logs in the **Playground**.
-In flows with [**Text Input/Output** components](/components-io#text-io), `Message` data is used to pass simple text strings without the chat-related metadata.
+In flows with [**Text Input and Output** components](/components-io#text-io), `Message` data is used to pass simple text strings without the chat-related metadata.
These components handle `Message` data as independent text strings, not as part of an ongoing conversation.
-For this reason, a flow with only **Text Input/Output** components isn't compatible with the **Playground**.
-For more information, see [Input/Output components](/components-io).
+For this reason, a flow with only **Text Input and Output** components isn't compatible with the **Playground**.
+For more information, see [**Input and Output** components](/components-io).
When using the Langflow API, the response includes the `Message` object along with other response data from the flow run.
Langflow API responses can be extremely verbose, so your applications must include code to extract relevant data from the response to return to the user.
For an example, see the [Langflow quickstart](/get-started-quickstart).
Additionally, input sent to the input port of input/output components does _not_ need to be a complete `Message` object because the component constructs the `Message` object that is then passed to other components in the flow or returned as flow output.
-In fact, some components should not receive a complete `Message` object because some attributes, like `timestamp` should be added by the component for accuracy.
+In fact, some components shouldn't receive a complete `Message` object because some attributes, like `timestamp`, should be added by the component for accuracy.
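For example, a message-compatible input can be as small as the following sketch, where the attribute names follow the `Message` schema, the `sender` value is illustrative, and `timestamp` is deliberately left for the component to set:

```python
# A minimal message-compatible payload. Only the text content is required;
# attributes such as "timestamp" are added by the receiving component
# rather than the caller.
chat_input = {
    "text": "What is Langflow?",
    "sender": "User",
}

assert "timestamp" not in chat_input  # set by the component for accuracy
```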
## Tool
@@ -248,7 +248,7 @@ In fact, some components should not receive a complete `Message` object because
Tools can be other components where you enabled **Tool Mode**, they can be the dedicated **MCP Tools** component, or they can be other components that only support **Tool Mode**.
Multiple tools can be connected to the same **Agent** component at the same port.
-Functionally, `Tool` data is a LangChain `StructuredTool` object that can be used in agent workflows.
+Functionally, `Tool` data is a LangChain `StructuredTool` object that can be used in agent flows.
For more information, see [Configure tools for agents](/agents-tools) and [Use Langflow as an MCP client](/mcp-client).
diff --git a/docs/docs/Concepts/mcp-server.mdx b/docs/docs/Concepts/mcp-server.mdx
index d3586192d..7a2be384e 100644
--- a/docs/docs/Concepts/mcp-server.mdx
+++ b/docs/docs/Concepts/mcp-server.mdx
@@ -26,7 +26,7 @@ As an MCP server, Langflow exposes your flows as [tools](https://modelcontextpro
## Select and configure flows to expose as MCP tools {#select-flows-to-serve}
:::important MCP flow requirements
-A flow must contain a [**Chat Output**](/components-io#chat-output) component to be used as a tool by MCP clients.
+A flow must contain a [**Chat Output** component](/components-io#chat-output) to be used as a tool by MCP clients.
:::
Each [Langflow project](/concepts-flows#projects) has an MCP server that exposes the project's flows as tools that MCP clients can use to generate responses.
@@ -48,7 +48,7 @@ Alternatively, you can quickly access the **MCP Server** tab from within any flo
3. In the **MCP Server Tools** window, select the flows that you want exposed as tools.
- 
+ 
4. Recommended: Edit the **Tool Name** and **Tool Description** to help MCP clients determine which actions your flows provide and when to use those actions:
@@ -105,7 +105,7 @@ However, you can connect any [MCP-compatible client](https://modelcontextprotoco
:::important
Auto installation only works if your HTTP client and Langflow server are on the same local machine.
-If this is not the case, use the **JSON** option to configure the MCP server.
+If this isn't the case, use the **JSON** option to configure the MCP server.
:::
1. Install [Cursor](https://docs.cursor.com/get-started/installation).
@@ -246,7 +246,7 @@ The default address is `http://localhost:6274`.
## Troubleshooting MCP server
-If Claude for Desktop is not using your server's tools correctly, you may need to explicitly define the path to your local `uvx` or `npx` executable file in the `claude_desktop_config.json` configuration file.
+If Claude for Desktop isn't using your server's tools correctly, you may need to explicitly define the path to your local `uvx` or `npx` executable file in the `claude_desktop_config.json` configuration file.
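+
+For example, a `claude_desktop_config.json` entry that points to an absolute `uvx` path might look like the following sketch, where the server name, project ID, and executable path are placeholders for illustration:
+
+```json
+{
+  "mcpServers": {
+    "lf-my_project": {
+      "command": "/Users/your-username/.local/bin/uvx",
+      "args": [
+        "mcp-proxy",
+        "http://127.0.0.1:7860/api/v1/mcp/project/PROJECT_ID/sse"
+      ]
+    }
+  }
+}
+```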
1. To find your UVX path, run `which uvx`.
diff --git a/docs/docs/Configuration/api-keys-and-authentication.mdx b/docs/docs/Configuration/api-keys-and-authentication.mdx
index 12b6e0368..0d77fe614 100644
--- a/docs/docs/Configuration/api-keys-and-authentication.mdx
+++ b/docs/docs/Configuration/api-keys-and-authentication.mdx
@@ -41,14 +41,14 @@ You must [start your Langflow server with authentication enabled](#start-a-langf
### Create a Langflow API key
-You can generate a Langflow API key with the UI or the CLI.
+You can generate a Langflow API key in your Langflow **Settings** or with the Langflow CLI.
-The UI-generated key is appropriate for most development use cases. The CLI-generated key is needed when your Langflow server is running in `--backend-only` mode.
+The CLI option is required if your Langflow server is running in `--backend-only` mode.
-
+
-1. In the Langflow UI header, click your profile icon, and then select **Settings**.
+1. In the Langflow header, click your profile icon, and then select **Settings**.
2. Click **Langflow API Keys**, and then click **Add New**.
3. Name your key, and then click **Create API Key**.
4. Copy the API key and store it securely.
@@ -56,7 +56,7 @@ The UI-generated key is appropriate for most development use cases. The CLI-gene
-If you're serving your flow with `--backend-only=true`, you can't create API keys in the UI because the frontend isn't running.
+If you're serving your flow with `--backend-only=true`, you can't create API keys in your Langflow **Settings** because the frontend isn't running.
In this case, you must create API keys with the Langflow CLI.
1. Recommended: [Start your Langflow server with authentication enabled](#start-a-langflow-server-with-authentication-enabled).
@@ -110,7 +110,7 @@ For more information about forming Langflow API requests, see [Get started with
To revoke and delete an API key, do the following:
-1. In the Langflow UI header, click your profile icon, and then select **Settings**.
+1. In the Langflow header, click your profile icon, and then select **Settings**.
2. Click **Langflow API Keys**.
3. Select the keys you want to delete, and then click **Delete**.
@@ -118,14 +118,18 @@ This action immediately invalidates the key and prevents it from being used agai
## Component API keys
-Component API keys authorize access to external services that are called by components in your flows. These are not general application credentials, but specifically the API keys required by individual components to connect to services like model providers, databases, or third-party APIs.
-
-Component API keys are added as global variables with the Langflow UI, or sourced from the `.env` file. When stored as global variables, use the **Credential** type for secure handling of sensitive information.
-
-Component API keys are created and managed within the service provider's platform. Deleting a global variable from Langflow does not delete or invalidate the actual API key in the service provider's system. You must delete or rotate component API keys directly within the service provider's interface or API. Langflow only stores the encrypted value of your keys; it does not have control over the actual credentials.
+Component API keys authorize access to external services that are called by components in your flows, such as model providers, databases, or third-party APIs.
+These aren't Langflow API keys or general application credentials.
+In Langflow, you can store component API keys as global variables in your **Settings** or import them from your Langflow `.env` file.
+When creating global variables, use the **Credential** type for secure handling of sensitive information.
For more information, see [Global variables](/configuration-global-variables).
+You create and manage component API keys within the service provider's platform.
+Langflow only stores the encrypted key value or a secure reference to a key stored elsewhere; it doesn't manage the actual credentials at the source.
+This means that deleting a global variable from Langflow doesn't delete or invalidate the actual API key in the service provider's system.
+You must delete or rotate component API keys directly using the service provider's interface or API.
+
## Authentication environment variables
This section describes the available authentication configuration variables.
@@ -134,13 +138,17 @@ You can use the [`.env.example`](https://github.com/langflow-ai/langflow/blob/ma
### LANGFLOW_AUTO_LOGIN {#langflow-auto-login}
-Langflow doesn't allow users to have simultaneous or shared access to flows in the Langflow UI.
+This variable controls whether authentication is required to access your Langflow server, including the visual editor and API:
-If `LANGFLOW_AUTO_LOGIN=False`, automatic login is disabled. The login form is required to access the Langflow UI, and Langflow API requests require a Langflow API key.
+* If `LANGFLOW_AUTO_LOGIN=False`, automatic login is disabled. Users must sign in to the visual editor and use a Langflow API key for Langflow API requests.
-If `false`, you must also set [`LANGFLOW_SUPERUSER` and `LANGFLOW__SUPERUSER_PASSWORD`](#langflow-superuser).
+If `false`, you must also set [`LANGFLOW_SUPERUSER` and `LANGFLOW_SUPERUSER_PASSWORD`](#langflow-superuser).
-If `LANGFLOW_AUTO_LOGIN=True`, Langflow bypasses UI authentication.
-If you also disable user management (`LANGFLOW_NEW_USER_IS_ACTIVE=true`), users can access the same environment without password protection. If two users access the same flow, Langflow saves only the work of the most recent user.
+* If `LANGFLOW_AUTO_LOGIN=True`, Langflow bypasses authentication for the visual editor and API requests.
+All users can access the same environment without password protection.
+If you don't have user management enabled, all users are effectively superusers.
+
+Langflow doesn't allow users to simultaneously edit the same flow in real time.
+If two users edit the same flow, Langflow saves only the work of the most recent editor based on the state of that user's [workspace](/concepts-overview#workspace). Any changes made by the other user in the interim are overwritten.
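+
+For example, a minimal `.env` that disables automatic login might look like the following, where the superuser credentials are placeholder values:
+
+```text
+LANGFLOW_AUTO_LOGIN=False
+LANGFLOW_SUPERUSER=adminuser
+LANGFLOW_SUPERUSER_PASSWORD=securepassword
+```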
#### AUTO_LOGIN and API authentication in version 1.5 and later
@@ -238,7 +246,7 @@ To generate a secret encryption key for `LANGFLOW_SECRET_KEY`, do the following:
### LANGFLOW_NEW_USER_IS_ACTIVE {#langflow-new-user-is-active}
-When `LANGFLOW_NEW_USER_IS_ACTIVE=False` (default), a superuser must explicitly activate a new user's account before they can sign in to the Langflow UI.
+When `LANGFLOW_NEW_USER_IS_ACTIVE=False` (default), a superuser must explicitly activate a new user's account before they can sign in to the visual editor.
The superuser can also deactivate a user's account as needed.
When `LANGFLOW_NEW_USER_IS_ACTIVE=True`, new user accounts are automatically activated.
@@ -247,6 +255,7 @@ When `LANGFLOW_NEW_USER_IS_ACTIVE=True`, new user accounts are automatically act
LANGFLOW_NEW_USER_IS_ACTIVE=False
```
+Only superusers can manage user accounts for a Langflow server; however, user management is relevant only if your server has authentication enabled.
For more information, see [Start a Langflow server with authentication enabled](#start-a-langflow-server-with-authentication-enabled).
### LANGFLOW_ENABLE_SUPERUSER_CLI {#langflow-enable-superuser-cli}
@@ -262,7 +271,7 @@ This involves disabling automatic login, setting superuser credentials, generati
This configuration is recommended for any deployment where Langflow is exposed to a shared or public network, or where multiple users access the same Langflow server.
-With authentication enabled, all users must sign in to the Langflow UI with valid credentials, and API requests require authentication with a Langflow API key.
+With authentication enabled, all users must sign in to the visual editor with valid credentials, and API requests require authentication with a Langflow API key.
Additionally, you must sign in as a superuser to manage users and [create a Langflow API key](#create-a-langflow-api-key) with superuser privileges.
### Start the Langflow server
@@ -324,7 +333,7 @@ Next, you can add users to your Langflow server to collaborate with others on fl
2. Log in with the superuser credentials you set in your `.env` (`LANGFLOW_SUPERUSER` and `LANGFLOW_SUPERUSER_PASSWORD`).
-3. To manage users on your server, navigate to `/admin`, such as `http://localhost:7860/admin`, click your user profile image, and then click **Admin Page**.
+3. To manage users on your server, navigate to `/admin`, such as `http://localhost:7860/admin`. Alternatively, click your profile icon, and then click **Admin Page**.
As a superuser, you can add users, set permissions, reset passwords, and delete accounts.
diff --git a/docs/docs/Configuration/configuration-cli.mdx b/docs/docs/Configuration/configuration-cli.mdx
index 5341c9fe3..55dd595a3 100644
--- a/docs/docs/Configuration/configuration-cli.mdx
+++ b/docs/docs/Configuration/configuration-cli.mdx
@@ -138,7 +138,7 @@ python -m langflow run [OPTIONS]
| `--workers` | `1` | Integer | Number of worker processes. |
| `--worker-timeout` | `300` | Integer | Worker timeout in seconds. |
| `--port` | `7860` | Integer | The port on which the Langflow server will run. The server automatically selects a free port if the specified port is in use. |
-| `--components-path` | `langflow/components` | String | Path to the directory containing custom components. |
+| `--components-path` | `/components` | String | Path to the directory containing custom components. |
| `--env-file` | Not set | String | Path to the `.env` file containing environment variables. |
| `--log-level` | `critical` | Enum | Set the logging level as one of `debug`, `info`, `warning`, `error`, or `critical`. |
| `--log-file` | `logs/langflow.log` | String | Set the path to the log file for Langflow. |
diff --git a/docs/docs/Configuration/configuration-global-variables.mdx b/docs/docs/Configuration/configuration-global-variables.mdx
index d9d5d7bf4..b77ee85b9 100644
--- a/docs/docs/Configuration/configuration-global-variables.mdx
+++ b/docs/docs/Configuration/configuration-global-variables.mdx
@@ -7,7 +7,7 @@ import Icon from "@site/src/components/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-Global variables let you store and reuse generic input values and credentials across your projects.
+Global variables let you store and reuse generic input values and credentials across your [Langflow projects](/concepts-flows#projects).
You can use a global variable in any text input field that displays the **Globe** icon.
Langflow stores global variables in its internal database, and encrypts the values using a secret key.
@@ -16,7 +16,7 @@ Langflow stores global variables in its internal database, and encrypts the valu
To create a new global variable, follow these steps.
-1. In the Langflow UI header, click your profile icon, and then select **Settings**.
+1. In the Langflow header, click your profile icon, and then select **Settings**.
2. Click **Global Variables**.
3. Click **Add New**.
@@ -25,12 +25,12 @@ To create a new global variable, follow these steps.
5. Optional: Select a **Type** for your global variable. The available types are **Generic** (default) and **Credential**.
- Langflow encrypts both **Generic** and **Credential** type global variables. The difference is in how the variables are displayed in the UI.
+ Langflow encrypts both **Generic** and **Credential** type global variables. The difference is in how the variables are displayed in the visual editor:
- Global variables of the **Generic** type are displayed in a standard input field with no masking.
+ * Global variables of the **Generic** type are displayed in a standard input field with no masking.
+ * Global variables of the **Credential** type are hidden in the visual editor and entered in a password-style input field that masks the value. This type isn't allowed in session ID fields, where the value would be exposed.
- Global variables of the **Credential** type are hidden in the UI and entered in a password-style input field that masks the value. They are also not allowed in session ID fields, where they would be exposed.
- All default environment variables and variables sourced from the environment using `LANGFLOW_VARIABLES_TO_GET_FROM_ENVIRONMENT` are automatically set as **Credential** type global variables.
+ All default environment variables and variables sourced from the environment using `LANGFLOW_VARIABLES_TO_GET_FROM_ENVIRONMENT` are automatically set as **Credential** type.
6. Enter the **Value** for your global variable.
@@ -42,7 +42,7 @@ You can now select your global variable from any text input field that displays
## Edit a global variable
-1. In the Langflow UI header, click your profile icon, and then select **Settings**.
+1. In the Langflow header, click your profile icon, and then select **Settings**.
2. Click **Global Variables**.
@@ -55,10 +55,10 @@ You can now select your global variable from any text input field that displays
## Delete a global variable
:::warning
-Deleting a global variable permanently deletes any references to it from your existing projects.
+Deleting a global variable permanently deletes any references to it from all flows in your Langflow projects.
:::
-1. In the Langflow UI header, click your profile icon, and then select **Settings**.
+1. In the Langflow header, click your profile icon, and then select **Settings**.
2. Click **Global Variables**.
@@ -112,7 +112,7 @@ If you installed Langflow locally, you must define the `LANGFLOW_VARIABLES_TO_GE
5. Confirm that Langflow successfully sourced the global variables from the environment:
- 1. In the Langflow UI header, click your profile icon, and then select **Settings**.
+ 1. In the Langflow header, click your profile icon, and then select **Settings**.
2. Click **Global Variables**, and then make sure that your environment variables appear in the **Global Variables** list.
@@ -150,7 +150,8 @@ docker run -it --rm \
When adding global variables from the environment, the following limitations apply:
- You can only source the **Name** and **Value** from the environment.
- To add additional parameters, such as the **Apply To Fields** parameter, you must edit the global variables in the Langflow UI.
+
+ To add additional parameters, such as the **Apply To Fields** parameter, you must edit the global variables in your Langflow **Settings**.
- Global variables that you add from the environment always have the **Credential** type.
:::
@@ -161,7 +162,7 @@ If you want to explicitly prevent Langflow from sourcing global variables from t
LANGFLOW_STORE_ENVIRONMENT_VARIABLES=false
```
-If you want to automatically set fallback values for your global variables from environment variables, set the `LANGFLOW_FALLBACK_FROM_ENV_VAR` environment variable to `true` in your `.env` file. When this feature is enabled, if a global variable is not found, Langflow attempts to use an environment variable with the same name as a fallback.
+If you want to automatically set fallback values for your global variables from environment variables, set the `LANGFLOW_FALLBACK_FROM_ENV_VAR` environment variable to `true` in your `.env` file. When this feature is enabled, if a global variable isn't found, Langflow attempts to use an environment variable with the same name as a backup.
```text
LANGFLOW_FALLBACK_FROM_ENV_VAR=true
diff --git a/docs/docs/Configuration/environment-variables.mdx b/docs/docs/Configuration/environment-variables.mdx
index 1d65b17ff..245e35b27 100644
--- a/docs/docs/Configuration/environment-variables.mdx
+++ b/docs/docs/Configuration/environment-variables.mdx
@@ -145,7 +145,7 @@ The following table lists the environment variables supported by Langflow.
| Variable | Format | Default | Description |
|----------|--------|---------|-------------|
-| `DO_NOT_TRACK` | Boolean | `false` | If this option is enabled, Langflow does not track telemetry. |
+| `DO_NOT_TRACK` | Boolean | `false` | If `true`, Langflow telemetry is disabled. |
| `LANGFLOW_AUTO_LOGIN` | Boolean | `true` | See [`LANGFLOW_AUTO_LOGIN`](/api-keys-and-authentication#langflow-auto-login). |
| `LANGFLOW_AUTO_SAVING` | Boolean | `true` | Enable flow auto-saving. See [`--auto-saving`](./configuration-cli.mdx#run-auto-saving). |
| `LANGFLOW_AUTO_SAVING_INTERVAL` | Integer | `1000` | Set the interval for flow auto-saving in milliseconds. See [`--auto-saving-interval`](./configuration-cli.mdx#run-auto-saving-interval). |
@@ -164,18 +164,18 @@ The following table lists the environment variables supported by Langflow.
| `LANGFLOW_DISABLE_TRACK_APIKEY_USAGE` | Boolean | `false` | If set to `true`, disables tracking of API key usage (`total_uses` and `last_used_at`) to avoid database contention under high concurrency. |
| `LANGFLOW_ENABLE_LOG_RETRIEVAL` | Boolean | `false` | Enable log retrieval functionality. |
| `LANGFLOW_ENABLE_SUPERUSER_CLI` | Boolean | `true` | Allow creation of superusers with the Langflow CLI command [`langflow superuser`](./configuration-cli.mdx#langflow-superuser). Recommended to be `false` in production for security reasons. |
-| `LANGFLOW_FALLBACK_TO_ENV_VAR` | Boolean | `true` | If enabled, [global variables](/configuration-global-variables) set in the Langflow UI fall back to an environment variable with the same name when Langflow fails to retrieve the variable value. |
+| `LANGFLOW_FALLBACK_TO_ENV_VAR` | Boolean | `true` | If enabled, [global variables](/configuration-global-variables) set in your Langflow **Settings** fall back to an environment variable with the same name when Langflow can't retrieve the stored variable value. |
| `LANGFLOW_FRONTEND_PATH` | String | `./frontend` | Path to the frontend directory containing build files. This is for development purposes only. See [`--frontend-path`](./configuration-cli.mdx#run-frontend-path). |
| `LANGFLOW_HEALTH_CHECK_MAX_RETRIES` | Integer | `5` | Set the maximum number of retries for the health check. See [`--health-check-max-retries`](./configuration-cli.mdx#run-health-check-max-retries). |
| `LANGFLOW_HOST` | String | `localhost` | The host on which the Langflow server will run. See [`--host`](./configuration-cli.mdx#run-host). |
| `LANGFLOW_LANGCHAIN_CACHE` | String | `InMemoryCache` | Type of cache to use. Possible values: `InMemoryCache`, `SQLiteCache`. See [`--cache`](./configuration-cli.mdx#run-cache). |
| `LANGFLOW_LOG_LEVEL` | String | `INFO` | Set the logging level for Langflow. Possible values: `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`. |
-| `LANGFLOW_LOG_FILE` | String | Not set | Path to the log file. If this option is not set, logs are written to stdout. |
+| `LANGFLOW_LOG_FILE` | String | Not set | Path to the log file. If this option isn't set, logs are written to stdout. |
| `LANGFLOW_LOG_RETRIEVER_BUFFER_SIZE` | Integer | `10000` | Set the buffer size for log retrieval. Only used if `LANGFLOW_ENABLE_LOG_RETRIEVAL` is enabled. |
| `LANGFLOW_MAX_FILE_SIZE_UPLOAD` | Integer | `100` | Set the maximum file size for the upload in megabytes. See [`--max-file-size-upload`](./configuration-cli.mdx#run-max-file-size-upload). |
-| `LANGFLOW_MAX_ITEMS_LENGTH` | Integer | `100` | Maximum number of items to store and display in the UI. Lists longer than this will be truncated when displayed in the UI. Does not affect data passed between components nor outputs. |
-| `LANGFLOW_MAX_TEXT_LENGTH` | Integer | `1000` | Maximum number of characters to store and display in the UI. Responses longer than this will be truncated when displayed in the UI. Does not truncate responses between components nor outputs. |
-| `LANGFLOW_MCP_SERVER_ENABLED` | Boolean | `true` | If this option is set to False, Langflow does not enable the MCP server. |
+| `LANGFLOW_MAX_ITEMS_LENGTH` | Integer | `100` | Maximum number of items to store and display in the visual editor. Lists longer than this are truncated when displayed in the visual editor. Doesn't affect data passed between components or component outputs. |
+| `LANGFLOW_MAX_TEXT_LENGTH` | Integer | `1000` | Maximum number of characters to store and display in the visual editor. Responses longer than this are truncated when displayed in the visual editor. Doesn't truncate responses passed between components or component outputs. |
+| `LANGFLOW_MCP_SERVER_ENABLED` | Boolean | `true` | If this option is set to `false`, Langflow doesn't enable the MCP server. |
| `LANGFLOW_MCP_SERVER_ENABLE_PROGRESS_NOTIFICATIONS` | Boolean | `false` | If this option is set to True, Langflow sends progress notifications in the MCP server. |
| `LANGFLOW_NEW_USER_IS_ACTIVE` | Boolean | `false` | See [`LANGFLOW_NEW_USER_IS_ACTIVE`](/api-keys-and-authentication#langflow-new-user-is-active). |
| `LANGFLOW_OPEN_BROWSER` | Boolean | `false` | Open the system web browser on startup. See [`--open-browser`](./configuration-cli.mdx#run-open-browser). |
@@ -213,7 +213,7 @@ The following examples show how to configure Langflow using environment variable
The `.env` file is a text file that contains key-value pairs of environment variables.
-Create or edit a file named `.env` in your project root directory and add your configuration:
+Create or edit a `.env` file in the root directory of your application or Langflow environment, and then add your configuration variables to the file:
```text title=".env"
DO_NOT_TRACK=true
diff --git a/docs/docs/Contributing/contributing-bundles.mdx b/docs/docs/Contributing/contributing-bundles.mdx
index 64823ed9d..0c60797fb 100644
--- a/docs/docs/Contributing/contributing-bundles.mdx
+++ b/docs/docs/Contributing/contributing-bundles.mdx
@@ -3,9 +3,11 @@ title: Contribute bundles
slug: /contributing-bundles
---
-Follow these steps to add new component bundles to the Langflow sidebar.
+Bundles are groups of components that are related to a specific service provider.
-This example adds a new bundle named `DarthVader`.
+Follow these steps to add components to the **Bundles** section of the **Components** menu in the Langflow visual editor.
+
+This example adds a new bundle named `DarthVader`.
## Add the bundle to the backend folder
@@ -94,7 +96,7 @@ For example:
import("@/icons/DeepSeek").then((mod) => ({ default: mod.DeepSeekIcon })),
```
-8. To update the bundles sidebar, add the new icon to the `SIDEBAR_BUNDLES` array in `src > frontend > src > utils > styleUtils.ts`.
+8. To update the **Bundles** section in the **Components** menu, add the new icon to the `SIDEBAR_BUNDLES` array in `src > frontend > src > utils > styleUtils.ts`.
You can view the [SIDEBAR_BUNDLES array](https://github.com/langflow-ai/langflow/blob/main/src/frontend/src/utils/styleUtils.ts#L231) in the Langflow repository.\
The `name` must point to the folder you created within the `src > backend > base > langflow > components` directory.
For example:
@@ -124,4 +126,4 @@ class DarthVaderAPIComponent(LCToolComponent):
1. To rebuild the backend and frontend, run `make install_frontend && make build_frontend && make install_backend && uv run langflow run --port 7860`.
2. Refresh the frontend application.
-Your new bundle called `DarthVader` is available in the sidebar.
\ No newline at end of file
+Your new `DarthVader` bundle is available in the **Components** menu in the visual editor.
\ No newline at end of file
diff --git a/docs/docs/Contributing/contributing-community.mdx b/docs/docs/Contributing/contributing-community.mdx
index ada282099..045867372 100644
--- a/docs/docs/Contributing/contributing-community.mdx
+++ b/docs/docs/Contributing/contributing-community.mdx
@@ -18,7 +18,7 @@ Follow [@langflow_ai](https://twitter.com/langflow_ai) on X to get the latest ne
If you like Langflow, you can star the [Langflow GitHub repository](https://github.com/langflow-ai/langflow).
Stars help other users find Langflow more easily, and quickly understand that other users have found it useful.
-Because Langflow is an open-source project, the more visible the repository is, the more likely the project is to attract [contributors](/contributing-how-to-contribute).
+Because Langflow is open-source, the more visible the repository is, the more likely the codebase is to attract [contributors](/contributing-how-to-contribute).
## Watch the GitHub repository
diff --git a/docs/docs/Contributing/contributing-component-tests.mdx b/docs/docs/Contributing/contributing-component-tests.mdx
index eb6b0236f..b237d6d0e 100644
--- a/docs/docs/Contributing/contributing-component-tests.mdx
+++ b/docs/docs/Contributing/contributing-component-tests.mdx
@@ -9,11 +9,11 @@ This guide outlines how to structure and implement tests for application compone
* The test file should follow the same directory structure as the component being tested, but should be placed in the corresponding unit tests folder.
-For example, if the file path for the component is `src/backend/base/langflow/components/prompts/`, then the test file should be located at `src/backend/tests/unit/components/prompts`.
+ For example, if the file path for the component is `src/backend/base/langflow/components/prompts/`, then the test file should be located at `src/backend/tests/unit/components/prompts`.
* The test file name should use snake case and follow the pattern `test_.py`.
-For example, if the file to be tested is `PromptComponent.py`, then the test file should be named `test_prompt_component.py`.
+ For example, if the file to be tested is `PromptComponent.py`, then the test file should be named `test_prompt_component.py`.
## File structure
@@ -36,33 +36,33 @@ These base classes enforce mandatory methods that the component test classes mus
* `component_class:` Returns the class of the component to be tested. For example:
-```python
-@pytest.fixture
-def component_class(self):
- return PromptComponent
-```
+ ```python
+ @pytest.fixture
+ def component_class(self):
+ return PromptComponent
+ ```
* `default_kwargs:` Returns a dictionary with the default arguments required to instantiate the component. For example:
-```python
-@pytest.fixture
-def default_kwargs(self):
- return {"template": "Hello {name}!", "name": "John", "_session_id": "123"}
-```
+ ```python
+ @pytest.fixture
+ def default_kwargs(self):
+ return {"template": "Hello {name}!", "name": "John", "_session_id": "123"}
+ ```
* `file_names_mapping:` Returns a list of dictionaries representing the relationship between `version`, `module`, and `file_name` that the tested component has had over time. This can be left empty if it is an unreleased component. For example:
-```python
-@pytest.fixture
-def file_names_mapping(self):
- return [
- {"version": "1.0.15", "module": "prompts", "file_name": "Prompt"},
- {"version": "1.0.16", "module": "prompts", "file_name": "Prompt"},
- {"version": "1.0.17", "module": "prompts", "file_name": "Prompt"},
- {"version": "1.0.18", "module": "prompts", "file_name": "Prompt"},
- {"version": "1.0.19", "module": "prompts", "file_name": "Prompt"},
- ]
-```
+ ```python
+ @pytest.fixture
+ def file_names_mapping(self):
+ return [
+ {"version": "1.0.15", "module": "prompts", "file_name": "Prompt"},
+ {"version": "1.0.16", "module": "prompts", "file_name": "Prompt"},
+ {"version": "1.0.17", "module": "prompts", "file_name": "Prompt"},
+ {"version": "1.0.18", "module": "prompts", "file_name": "Prompt"},
+ {"version": "1.0.19", "module": "prompts", "file_name": "Prompt"},
+ ]
+ ```
## Testing component functionalities
@@ -78,37 +78,37 @@ Once the basic structure of the test file is defined, implement test methods for
1. **Arrange**: Prepare the data.
-It is **recommended** to use the fixtures defined in the basic structure, but not mandatory.
+ It is recommended, but not mandatory, that you use the fixtures defined in the basic structure.
-```python
-def test_post_code_processing(self, component_class, default_kwargs):
- component = component_class(**default_kwargs)
-```
+ ```python
+ def test_post_code_processing(self, component_class, default_kwargs):
+ component = component_class(**default_kwargs)
+ ```
2. **Act**: Execute the component.
-Call the `.to_frontend_node()` method of the component prepared during the **Arrange** step.
+ Call the `.to_frontend_node()` method of the component prepared during the **Arrange** step.
-```python
-def test_post_code_processing(self, component_class, default_kwargs):
- component = component_class(**default_kwargs)
+ ```python
+ def test_post_code_processing(self, component_class, default_kwargs):
+ component = component_class(**default_kwargs)
- frontend_node = component.to_frontend_node()
-```
+ frontend_node = component.to_frontend_node()
+ ```
3. **Assert**: Verify the result.
-After executing the `.to_frontend_node()` method, the resulting data is available for verification in the dictionary `frontend_node["data"]["node"]`. Assertions should be clear and cover the expected outcomes.
+ After executing the `.to_frontend_node()` method, the resulting data is available for verification in the dictionary `frontend_node["data"]["node"]`. Assertions should be clear and cover the expected outcomes.
-```python
-def test_post_code_processing(self, component_class, default_kwargs):
- component = component_class(**default_kwargs)
+ ```python
+ def test_post_code_processing(self, component_class, default_kwargs):
+ component = component_class(**default_kwargs)
- frontend_node = component.to_frontend_node()
+ frontend_node = component.to_frontend_node()
- node_data = frontend_node["data"]["node"]
- assert node_data["template"]["template"]["value"] == "Hello {name}!"
- assert "name" in node_data["custom_fields"]["template"]
- assert "name" in node_data["template"]
- assert node_data["template"]["name"]["value"] == "John"
-```
+ node_data = frontend_node["data"]["node"]
+ assert node_data["template"]["template"]["value"] == "Hello {name}!"
+ assert "name" in node_data["custom_fields"]["template"]
+ assert "name" in node_data["template"]
+ assert node_data["template"]["name"]["value"] == "John"
+ ```
\ No newline at end of file
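
Assembled into one file, the Arrange-Act-Assert steps above look like the following sketch. It is illustrative and self-contained: a minimal `FakePromptComponent` stands in for the real `PromptComponent` and the Langflow test base classes, so the structure runs without the full test harness. In a real test file, you would import the actual component and extend the appropriate base class.

```python
import pytest


class FakePromptComponent:
    """Minimal stand-in for a Langflow component, for illustration only."""

    def __init__(self, **kwargs):
        self.kwargs = kwargs

    def to_frontend_node(self):
        # Mimics the shape of the dictionary returned by the real
        # to_frontend_node() method, using only the tested fields.
        return {
            "data": {
                "node": {
                    "template": {
                        "template": {"value": self.kwargs.get("template")},
                        "name": {"value": self.kwargs.get("name")},
                    },
                    "custom_fields": {"template": ["name"]},
                }
            }
        }


class TestFakePromptComponent:
    @pytest.fixture
    def component_class(self):
        return FakePromptComponent

    @pytest.fixture
    def default_kwargs(self):
        return {"template": "Hello {name}!", "name": "John", "_session_id": "123"}

    def test_post_code_processing(self, component_class, default_kwargs):
        # Arrange: prepare the component from the fixtures.
        component = component_class(**default_kwargs)
        # Act: execute the component.
        frontend_node = component.to_frontend_node()
        # Assert: verify the resulting node data.
        node_data = frontend_node["data"]["node"]
        assert node_data["template"]["template"]["value"] == "Hello {name}!"
        assert node_data["template"]["name"]["value"] == "John"
        assert "name" in node_data["custom_fields"]["template"]
```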
diff --git a/docs/docs/Contributing/contributing-components.mdx b/docs/docs/Contributing/contributing-components.mdx
index 00c8b6985..1f9c84ee6 100644
--- a/docs/docs/Contributing/contributing-components.mdx
+++ b/docs/docs/Contributing/contributing-components.mdx
@@ -41,7 +41,7 @@ class DataFrameProcessor(Component):
name: str = "dataframe_processor"
```
- * `display_name`: A user-friendly name shown in the UI.
+ * `display_name`: A user-friendly name shown in the visual editor.
* `description`: A brief description of what your component does.
* `documentation`: A link to detailed documentation.
* `icon`: An emoji or icon identifier for visual representation.
diff --git a/docs/docs/Contributing/contributing-how-to-contribute.mdx b/docs/docs/Contributing/contributing-how-to-contribute.mdx
index 87735e833..cebadf66e 100644
--- a/docs/docs/Contributing/contributing-how-to-contribute.mdx
+++ b/docs/docs/Contributing/contributing-how-to-contribute.mdx
@@ -7,9 +7,9 @@ import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
This guide is intended to help you start contributing to Langflow.
-As an open-source project in a rapidly developing field, Langflow welcomes contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
+As an open-source codebase in a rapidly developing field, Langflow welcomes contributions, whether in the form of a new feature, improved infrastructure, or better documentation.
-To contribute code or documentation to this project, follow the [fork and pull request](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow.
+To contribute code or documentation to Langflow, follow the [fork and pull request quickstart](https://docs.github.com/en/get-started/quickstart/contributing-to-projects).
## Install Langflow from source
@@ -35,7 +35,7 @@ Replace the following:
### Run Langflow from source
-You can run Langflow from source after cloning the repository, even if you're not contributing to the codebase.
+You can run Langflow from source after cloning the repository, even if you aren't contributing to the codebase.
Run from source on macOS/Linux
@@ -55,7 +55,7 @@ The Langflow frontend is served at `http://localhost:7860`.
Run from source with Windows CMD
-To run Langflow from source on Windows, you can use the Langflow project's included scripts, or run the commands in the terminal.
+To run Langflow from source on Windows, you can use the Langflow repository's included scripts, or run the commands in the terminal.
Do one of the following:
@@ -84,7 +84,7 @@ The Langflow frontend is served at `http://localhost:7860`.
Run from source with Powershell
-To run Langflow from source on Windows, you can use the Langflow project's included scripts, or run the commands in the terminal.
+To run Langflow from source on Windows, you can use the Langflow repository's included scripts, or run the commands in the terminal.
Do one of the following:
@@ -163,7 +163,7 @@ To run all tests, including coverage, unit, and integration, tests, run `make te
-Since Windows does not include `make`, building and running Langflow from source uses `npm` and `uv`.
+Since Windows doesn't include `make`, building and running Langflow from source uses `npm` and `uv`.
To set up the Langflow development environment, run the frontend and backend in separate terminals:
diff --git a/docs/docs/Contributing/contributing-telemetry.mdx b/docs/docs/Contributing/contributing-telemetry.mdx
index 6ea60c420..0f82e107c 100644
--- a/docs/docs/Contributing/contributing-telemetry.mdx
+++ b/docs/docs/Contributing/contributing-telemetry.mdx
@@ -5,7 +5,7 @@ slug: /contributing-telemetry
Langflow uses anonymous telemetry to collect essential usage statistics to enhance functionality and the user experience. This data helps us identify popular features and areas that need improvement, and ensures development efforts align with what you need.
-We respect your privacy and are committed to protecting your data. We do not collect any personal information or sensitive data. All telemetry data is anonymized and used solely for improving Langflow.
+We respect your privacy and are committed to protecting your data. We don't collect any personal information or sensitive data. All telemetry data is anonymized and used solely for improving Langflow.
## Opt out of telemetry
@@ -36,9 +36,9 @@ To opt out of telemetry, set the `LANGFLOW_DO_NOT_TRACK` or `DO_NOT_TRACK` e
### Playground {#ae6c3859f612441db3c15a7155e9f920}
-- **Seconds**: Duration in seconds for Playground execution, offering insights into performance during testing or experimental stages.
-- **ComponentCount**: Number of components used in the Playground, which helps understand complexity and usage patterns.
-- **Success**: Success status of the Playground operation, aiding in identifying the stability of experimental features.
+- **Seconds**: Duration in seconds for **Playground** execution, offering insights into performance during testing or experimental stages.
+- **ComponentCount**: Number of components used in the **Playground**, which helps understand complexity and usage patterns.
+- **Success**: Success status of the **Playground** operation, aiding in identifying the stability of experimental features.
### Component {#630728d6654c40a6b8901459a4bc3a4e}
diff --git a/docs/docs/Contributing/contributing-templates.mdx b/docs/docs/Contributing/contributing-templates.mdx
index ef6d790ae..de71a7f02 100644
--- a/docs/docs/Contributing/contributing-templates.mdx
+++ b/docs/docs/Contributing/contributing-templates.mdx
@@ -5,65 +5,70 @@ slug: /contributing-templates
Follow these best practices when submitting a template to Langflow.
-For template formatting examples, see the Langflow repository's [starter_projects](https://github.com/langflow-ai/langflow/tree/main/src/backend/base/langflow/initial_setup/starter_projects) folder.
+For template formatting examples, see [`/starter_projects`](https://github.com/langflow-ai/langflow/tree/main/src/backend/base/langflow/initial_setup/starter_projects) in the Langflow repository.
## Create a PR to submit your template
Follow these steps to submit your template:
1. Fork the [Langflow repository](https://github.com/langflow-ai/langflow) on GitHub.
-2. Add your `template.json` file to the Langflow repository's [starter_projects](https://github.com/langflow-ai/langflow/tree/main/src/backend/base/langflow/initial_setup/starter_projects) folder in your fork.
-3. Include the [Required items for template submission](#required-items-for-template-submission) listed below.
-4. Create a Pull Request from your fork to the main Langflow repository.
+2. On your fork, add your `template.json` file to `/starter_projects`.
+3. Include the [required items for template submission](#required-items-for-template-submission).
+4. Create a Pull Request (PR) from your fork to the main Langflow repository.
5. Include a screenshot of your template in the PR.
-The Langflow team will review your PR, offer feedback, and merge the template.
+The Langflow team will review your PR, offer feedback, and, if approved, merge the template.
## Required items for template submission
Include the following items and follow these guidelines when submitting your template.
### Name
+
The template name must be concise and contain no more than three words.
Capitalize only the first letter of each word.
For example: **Blog Writer** or **Travel Planning Agent**.
### Description
-The description is displayed in the UI to guide users to your template.
-The description should be brief and informative, and describe what the template does and its intended use cases.
-For example:```json "description": "Auto-generate a customized blog post from instructions and referenced articles.",```
+
+A brief, informative description that is shown in the visual editor to help users understand the template's purpose and use cases.
+For example:
+
+```json
+ "description": "Auto-generate a customized blog post from instructions and referenced articles.",
+```
### Icons
Use icons from the [Lucide](https://lucide.dev/icons/) icon library.
### Flow
-Use only the components that are available in the sidebar.
-Do not use custom components.
-Include a note to guide users. Notes accept Markdown syntax.
-A single note usually suffices.
+Use only the components that are available in the **Components** menu in the visual editor.
+Don't use custom components.
- For example:
- ```text
- # Financial Assistant Agents
+Include a brief README, quickstart, or other essential details in a note. Notes accept Markdown syntax.
+For example:
- The Financial Assistant Agent retrieves web content and writes reports about finance.
+```text
+# Financial Assistant Agents
- ## Prerequisites
+The Financial Assistant Agent retrieves web content and writes reports about finance.
- * [OpenAI API key](https://platform.openai.com/api-keys)
- * [Tavily AI Search key](https://docs.tavily.com/welcome)
- * [Sambanova API key](https://sambanova.ai/)
+## Prerequisites
- ## Quickstart
+* [OpenAI API key](https://platform.openai.com/api-keys)
+* [Tavily AI Search key](https://docs.tavily.com/welcome)
+* [Sambanova API key](https://sambanova.ai/)
- 1. In both **Agent** components, add your OpenAI API key.
- 2. In the **Model Provider** field, select **Sambanova**, and select a model.
- 3. In the **Sambanova** component, add your **Sambanova API key**.
- 4. In the **Tavily Search** component, add your **Tavily API key**.
- 5. Click the **Playground** and ask `Why did Nvidia stock drop in January?`
- ```
+## Quickstart
+
+1. In both **Agent** components, add your OpenAI API key.
+2. In the **Model Provider** field, select **Sambanova**, and select a model.
+3. In the **Sambanova** component, add your **Sambanova API key**.
+4. In the **Tavily Search** component, add your **Tavily API key**.
+5. Click **Playground**, and then ask `Why did Nvidia stock drop in January?`
+```
### Format
@@ -81,4 +86,4 @@ Assign the template to one of the following categories:
- RAG
- Agents
-For more information, see the Langflow repository's [template categories](https://github.com/langflow-ai/langflow/blob/main/src/frontend/src/modals/templatesModal/index.tsx#L27-L57).
+For more information, see the Langflow repository's [template categories](https://github.com/langflow-ai/langflow/blob/main/src/frontend/src/modals/templatesModal/index.tsx#L27-L57).
\ No newline at end of file
diff --git a/docs/docs/Deployment/deployment-caddyfile.mdx b/docs/docs/Deployment/deployment-caddyfile.mdx
index 2af8bcfe9..6e07f4a05 100644
--- a/docs/docs/Deployment/deployment-caddyfile.mdx
+++ b/docs/docs/Deployment/deployment-caddyfile.mdx
@@ -54,7 +54,7 @@ Now that your local machine is connected to your remote server with SSH, you can
1. Install Docker on your server.
Since this example server is an Ubuntu server, it can install snap packages.
-If you are not using Ubuntu or prefer a different installation method, see the [official Docker installation guide](https://docs.docker.com/get-started/get-docker/) for instructions for your operating system.
+If you aren't using Ubuntu or you prefer a different installation method, see the [official Docker installation guide](https://docs.docker.com/get-started/get-docker/) for instructions for your operating system.
```bash
snap install docker
```
diff --git a/docs/docs/Deployment/deployment-docker.mdx b/docs/docs/Deployment/deployment-docker.mdx
index 4d5f29619..4ff9461a7 100644
--- a/docs/docs/Deployment/deployment-docker.mdx
+++ b/docs/docs/Deployment/deployment-docker.mdx
@@ -150,7 +150,7 @@ Your custom image now contains your flow and can be deployed anywhere Docker run
While the previous section showed how to package a flow with a Docker image, this section shows how to customize the Langflow application itself. This is useful when you need to add custom Python packages or dependencies, modify Langflow's configuration or settings, include custom components or tools, or add your own code to extend Langflow's functionality.
-This example demonstrates how to customize the message history component, but the same approach can be used for any code modifications.
+This example demonstrates how to customize the **Message History** component, but the same approach can be used for any code modifications.
```dockerfile
FROM langflowai/langflow:latest
diff --git a/docs/docs/Deployment/deployment-gcp.mdx b/docs/docs/Deployment/deployment-gcp.mdx
index 499c84aa7..43eac3fc4 100644
--- a/docs/docs/Deployment/deployment-gcp.mdx
+++ b/docs/docs/Deployment/deployment-gcp.mdx
@@ -3,9 +3,9 @@ title: Deploy Langflow on Google Cloud Platform
slug: /deployment-gcp
---
-This guide demonstrates how to deploy Langflow on Google Cloud Platform with a Cloud Shell script that walks through the process of setting up a Debian-based VM with the Langflow package, Nginx, and the necessary configurations to run the Langflow development environment in GCP.
+This guide demonstrates how to deploy Langflow on [Google Cloud Platform](https://console.cloud.google.com/) with a Cloud Shell script that walks you through setting up a Debian-based VM with the Langflow package, Nginx, and the necessary configurations to run the Langflow development environment in GCP.
-To use this script, you need a [Google Cloud](https://console.cloud.google.com/) project with the necessary permissions to create resources.
+To use this script, you need a Google Cloud project with the necessary permissions to create resources.
1. Follow this link to launch the Cloud Shell with the GCP deployment script from the Langflow repository:
diff --git a/docs/docs/Deployment/deployment-kubernetes-dev.mdx b/docs/docs/Deployment/deployment-kubernetes-dev.mdx
index 8b3369895..c534bd3a7 100644
--- a/docs/docs/Deployment/deployment-kubernetes-dev.mdx
+++ b/docs/docs/Deployment/deployment-kubernetes-dev.mdx
@@ -6,7 +6,7 @@ slug: /deployment-kubernetes-dev
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-The [Langflow Integrated Development Environment (IDE)](https://github.com/langflow-ai/langflow-helm-charts/tree/main/charts/langflow-ide) Helm chart is designed to provide a complete environment for developers to create, test, and debug their flows. It includes both the API and the UI.
+The [Langflow Integrated Development Environment (IDE)](https://github.com/langflow-ai/langflow-helm-charts/tree/main/charts/langflow-ide) Helm chart is designed to provide a complete environment for developers to create, test, and debug their flows. It includes both the API and the visual editor.
## Prerequisites
@@ -60,14 +60,15 @@ Enable local port forwarding to access Langflow from your local machine.
kubectl port-forward -n langflow svc/langflow-service-backend 7860:7860
```
-2. To make the Langflow UI accessible from your local machine at port 8080:
+2. To make the visual editor accessible from your local machine at port 8080:
+
```shell
kubectl port-forward -n langflow svc/langflow-service 8080:8080
```
Now you can do the following:
- Access the Langflow API at `http://localhost:7860`.
-- Access the Langflow UI at `http://localhost:8080`.
+- Access the visual editor at `http://localhost:8080`.
## Configure the Langflow version
diff --git a/docs/docs/Deployment/deployment-prod-best-practices.mdx b/docs/docs/Deployment/deployment-prod-best-practices.mdx
index fd65598d5..83601a666 100644
--- a/docs/docs/Deployment/deployment-prod-best-practices.mdx
+++ b/docs/docs/Deployment/deployment-prod-best-practices.mdx
@@ -28,8 +28,8 @@ Langflow can be deployed on cloud deployments like **AWS EKS, Google GKE, or Azu
A typical Langflow deployment includes:
-* **Langflow API and UI**: The Langflow service is the core component of the Langflow platform. It provides a RESTful API for executing flows.
-* **Kubernetes cluster**: The Kubernetes cluster provides a platform for deploying and managing the Langflow service and its supporting components.
+* **Langflow API and visual editor**: The Langflow service is the core part of the Langflow platform. It provides a RESTful API for executing flows.
+* **Kubernetes cluster**: The Kubernetes cluster provides a platform for deploying and managing the Langflow service and supporting services.
* **Persistent storage**: Persistent storage is used to store the Langflow service's data, such as models and training data.
* **Ingress controller**: The ingress controller provides a single entry point for traffic to the Langflow service.
* **Load balancer**: Balances traffic across multiple Langflow replicas.
@@ -53,7 +53,7 @@ This separation is designed to enhance security, optimize resource allocation, a
* **Security**
* **Isolation**: By separating the development and production environments, you can better isolate different phases of the application lifecycle. This isolation minimizes the risk of development-related issues impacting the production environments.
* **Access control**: Different security policies and access controls can be applied to each environment. Developers may require broader access in the IDE for testing and debugging, while the runtime environment can be locked down with stricter security measures.
- * **Reduced attack surface**: The runtime environment is configured to include only essential components, reducing the attack surface and potential vulnerabilities.
+ * **Reduced attack surface**: The runtime environment is configured to include only essential parts, reducing the attack surface and potential vulnerabilities.
* **Resource allocation**
* **Optimized resource usage and cost efficiency**: By separating the two environments, you can allocate resources more effectively. Each flow can be deployed independently, providing fine-grained resource control.
* **Scalability**: The runtime environment can be scaled independently based on application load and performance requirements, without affecting the development environment.
diff --git a/docs/docs/Develop/Clients/typescript-client.mdx b/docs/docs/Develop/Clients/typescript-client.mdx
index 838381a85..6f6936a55 100644
--- a/docs/docs/Develop/Clients/typescript-client.mdx
+++ b/docs/docs/Develop/Clients/typescript-client.mdx
@@ -128,7 +128,7 @@ This example builds on the quickstart with additional features for interacting w
Tweaks change values within components for all calls to your flow.
- This example tweaks the Open-AI model component to enforce using the `gpt-4o-mini` model:
+ This example tweaks the **OpenAI** component to enforce using the `gpt-4o-mini` model:
```tsx
const tweaks = { model_name: "gpt-4o-mini" };
diff --git a/docs/docs/Develop/develop-application.mdx b/docs/docs/Develop/develop-application.mdx
index 12f22d483..c1aa8be89 100644
--- a/docs/docs/Develop/develop-application.mdx
+++ b/docs/docs/Develop/develop-application.mdx
@@ -8,12 +8,12 @@ Designing flows in the visual editor is only the first step in building an appli
Once you have a functional flow, you can use that flow in a larger application, such as a website or mobile app.
Because Langflow is both an IDE and a runtime, you can use Langflow to build and test your flows locally, and then package and serve your flows in a production environment.
-This guide introduces application development with Langflow from project setup through packaging and deployment.
+This guide introduces application development with Langflow from initial setup through packaging and deployment.
This documentation doesn't explain how to write a complete application; it only describes how to include Langflow in the context of a larger application.
-## Project structure
+## Directory structure
-The following example describes the project directory structure for a minimal Langflow application:
+The following example describes the directory structure for a minimal Langflow application:
```text
LANGFLOW-APPLICATION/
@@ -26,7 +26,7 @@ LANGFLOW-APPLICATION/
├── README.md
```
-This project directory contains the following:
+This directory contains the following:
* [`docker.env`](#docker-env): This file is copied to the Docker image as a `.env` file in the container root.
* [`Dockerfile`](#dockerfile): This file controls how your Langflow image is built.
@@ -34,7 +34,7 @@ This project directory contains the following:
* `/langflow-config-dir`: This folder is referenced in the Dockerfile as the location for your Langflow deployment's configuration files, database, and logs.
* `README.md`: This is a typical README file for your application's documentation.
-This is a minimal example of a Langflow application project directory.
+This is a minimal example of a Langflow application directory.
Your application might have additional files and folders, such as a `/components` folder for custom components, or a `pyproject.toml` file for additional dependencies.
### Package management
@@ -93,15 +93,15 @@ When you package Langflow as a dependency of an application, you only want to in
If you have chained flows (flows that trigger other flows), make sure you export _all_ necessary flows.
-2. Add the exported Langflow JSON files to the `/flows` folder in your project directory.
+2. Add the exported Langflow JSON files to the `/flows` folder in your application directory.
### Components
-The core components and bundles that you see in the Langflow visual editors are automatically included in the base Langflow Docker image.
+The core components and bundles that you see in the Langflow visual editor are automatically included in the base Langflow Docker image.
-If you have any [custom components](/components-custom-components) that you created for your application, you must include these components in your project directory:
+If you have any [custom components](/components-custom-components) that you created for your application, you must include these components in your application directory:
-1. Create a `/components` folder in your project directory.
+1. Create a `/components` folder in your application directory.
2. Add your custom component files to the `/components` folder.
3. Specify the path to `/components` in your `docker.env`.
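For example, if your Dockerfile copies the components folder to `/app/components` in the container (the container path here is an assumption based on your own Dockerfile layout), your `docker.env` could point Langflow to it with the `LANGFLOW_COMPONENTS_PATH` environment variable:

```text
# docker.env: assumes the Dockerfile copies ./components to /app/components
LANGFLOW_COMPONENTS_PATH=/app/components
```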
@@ -151,9 +151,9 @@ In this example, `ENV LANGFLOW_LOG_ENV=container` sets the logging behavior for
### Backend-only mode
The `--backend-only` flag in `CMD` starts Langflow in backend-only mode, which provides programmatic access only.
-This is recommended when running Langflow as a dependency of an application where you don't need access to the Langflow UI.
+This is recommended when running Langflow as a dependency of an application where you don't need access to the visual editor.
-If you want to serve the visual editor _and_ the Langflow backend, omit `--backend-only`.
+If you want to serve the Langflow visual editor _and_ backend, then omit `--backend-only`.
For more information, see [Deploy Langflow on Docker](/deployment-docker).
@@ -193,7 +193,7 @@ For information about publishing your image on Docker Hub and running a Langflow
The following example runs a simple LLM chat flow that responds to a chat input string.
If necessary, modify the payload for your flow.
- For example, if your flow doesn't have a **Chat input** component, you must modify the payload to provide the expected input for your flow.
+ For example, if your flow doesn't have a **Chat Input** component, you must modify the payload to provide the expected input for your flow.
```bash
curl --request POST \
diff --git a/docs/docs/Develop/logging.mdx b/docs/docs/Develop/logging.mdx
index d2969f175..e69956fdc 100644
--- a/docs/docs/Develop/logging.mdx
+++ b/docs/docs/Develop/logging.mdx
@@ -51,7 +51,7 @@ A complete example `.env` file is available in the [Langflow repository](https:/
## Flow and component logs
After you run a flow, you can inspect the logs for each component and flow run.
-For example, you can inspect `Message` objects ingested and generated by [input and output components](/components-io).
+For example, you can inspect `Message` objects ingested and generated by [**Input and Output** components](/components-io).
### View flow logs
@@ -92,7 +92,7 @@ For more information, see [View chat history](/concepts-playground#view-chat-his
When debugging issues with the format or content of a flow's output, it can help to inspect each component's output to determine where data is being lost or malformed.
-To view the output produced by a single component during the most recent run, click **Inspect output** in the visual editor.
+To view the output produced by a single component during the most recent run, click **Inspect output** on the component in the visual editor.
## See also
diff --git a/docs/docs/Develop/memory.mdx b/docs/docs/Develop/memory.mdx
index b4447f092..0f9d3204e 100644
--- a/docs/docs/Develop/memory.mdx
+++ b/docs/docs/Develop/memory.mdx
@@ -36,11 +36,11 @@ The following tables are stored in `langflow.db`:
• **ApiKey**: Manages API authentication keys for Langflow users. Component API keys are stored in the **Variables** table. For more information, see [API keys and authentication](/api-keys-and-authentication).
-• **Project**: Provides a structure for flow storage. For more information, see [Projects](/concepts-flows#projects).
+• **Project**: Provides a structure for flow storage. For more information, see [Manage flows in projects](/concepts-flows#projects).
• **Variables**: Stores global encrypted values and credentials. For more information, see [Global variables](/configuration-global-variables).
-• **VertexBuild**: Tracks the build status of individual nodes within flows. For more information, see [Run a flow in the Playground](/concepts-playground).
+• **VertexBuild**: Tracks the build status of individual nodes within flows. For more information, see [Test flows in the Playground](/concepts-playground).
For more information, see the database models in the [source code](https://github.com/langflow-ai/langflow/tree/main/src/backend/base/langflow/services/database/models).
@@ -79,7 +79,7 @@ LANGFLOW_LANGCHAIN_CACHE=InMemoryCache
LANGFLOW_CACHE_TYPE=Async
```
-Alternative caching options can be configured, but options other than the default asynchronous, in-memory cache are not supported.
+Alternative caching options can be configured, but options other than the default asynchronous, in-memory cache aren't supported.
The default behavior is suitable for most use cases.
For other options, see the `LANGFLOW_CACHE_TYPE` [environment variable](/environment-variables).
diff --git a/docs/docs/Develop/session-id.mdx b/docs/docs/Develop/session-id.mdx
index 8a9641e91..6f8ba4f79 100644
--- a/docs/docs/Develop/session-id.mdx
+++ b/docs/docs/Develop/session-id.mdx
@@ -34,7 +34,7 @@ The `my_custom_session_value` value is used in components that accept it, and th
## Retrieval of messages from memory by session ID
-To retrieve messages from local Langflow memory, add a [Message history](/components-helpers#message-history) component to your flow.
+To retrieve messages from local Langflow memory, add a [**Message History** component](/components-helpers#message-history) to your flow.
The component accepts `sessionID` as a filter parameter and automatically uses the upstream session ID value to retrieve message history from storage.
Messages can be retrieved by `session_id` from the Langflow API at `GET /v1/monitor/messages`. For more information, see [Monitor endpoints](https://docs.langflow.org/api-monitor).
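As a sketch, the following request retrieves messages for the custom session ID used earlier in this guide, assuming a Langflow server running locally on the default port:

```shell
# Retrieve messages filtered by session ID from a local Langflow server.
# "my_custom_session_value" is the example session ID from this guide.
curl -X GET \
  "http://localhost:7860/api/v1/monitor/messages?session_id=my_custom_session_value" \
  -H "accept: application/json"
```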
diff --git a/docs/docs/Develop/webhook.mdx b/docs/docs/Develop/webhook.mdx
index 61e84a8c5..1504b4fff 100644
--- a/docs/docs/Develop/webhook.mdx
+++ b/docs/docs/Develop/webhook.mdx
@@ -8,7 +8,7 @@ import Icon from "@site/src/components/icon";
You can use the **Webhook** component to start a flow run in response to an external event.
With the **Webhook** component, a flow can receive data directly from external sources. Then, the flow can parse the data and pass it to other components in the flow to initiate other actions, such as calling APIs, writing to databases, and chatting with LLMs.
-If the input is not valid JSON, the **Webhook** component wraps it in a `payload` object so that it can be accepted as input to trigger the flow.
+If the input isn't valid JSON, the **Webhook** component wraps it in a `payload` object so that it can be accepted as input to trigger the flow.
The **Webhook** component provides a versatile entrypoint that can make your flows more event-driven and integrated with your entire stack of applications and services.
For example:
@@ -55,8 +55,8 @@ To use the **Webhook** component in a flow, do the following:
6. From the **Webhook** component's **Endpoint** field, copy the API endpoint that you will use to send data to the **Webhook** component and trigger the flow.
- Alternatively, to get a complete `POST /v1/webhook/$FLOW_ID` code snippet, open the flow's [**API access** pane](/concepts-publish#api-access), and then click the **Webhook cURL** tab.
- You can also modify the default curl command in the **Webhook** component's **cURL** field.
+ Alternatively, to get a complete `POST /v1/webhook/$FLOW_ID` code snippet, open the flow's [**API access** pane](/concepts-publish#api-access), and then click the **Webhook curl** tab.
+ You can also modify the default curl command in the **Webhook** component's **curl** field.
If this field isn't visible by default, click the **Webhook** component, and then click **Controls** in the [component's header menu](/concepts-components#component-menus).
7. Send a POST request with `data` to the flow's `webhook` endpoint to trigger the flow.
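    As a sketch, assuming a Langflow server running locally on the default port, the request looks like the following. Replace `$FLOW_ID` with your flow ID, and replace the example payload with data appropriate for your flow:

    ```shell
    # Trigger the flow by sending JSON data to its webhook endpoint.
    # The payload shown here is an arbitrary example.
    curl -X POST \
      "http://localhost:7860/api/v1/webhook/$FLOW_ID" \
      -H "Content-Type: application/json" \
      -d '{"path": "/tmp/test_file.txt"}'
    ```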
@@ -103,7 +103,7 @@ To troubleshoot a flow with a **Webhook** component and verify that the componen
This mode passes the data received by the **Webhook** component as a string that is printed by the **Chat Output** component.
-5. Click **Share**, select **API access**, and then copy the **Webhook cURL** code snippet.
+5. Click **Share**, select **API access**, and then copy the **Webhook curl** code snippet.
6. Optional: Edit the `data` in the code snippet if you want to pass a different payload.
7. Send the POST request to trigger the flow.
8. Click **Playground** to verify that the **Chat Output** component printed the JSON data from your POST request.
@@ -118,5 +118,5 @@ Then, you can examine the string output and troubleshoot your parsing template,
## See also
- [Get started with the Langflow API](/api-reference-api-examples)
-- [Webhook component](/components-data#webhook)
+- [**Webhook** component](/components-data#webhook)
- [Flow trigger endpoints](/api-flows-run)
\ No newline at end of file
diff --git a/docs/docs/Get-Started/about-langflow.mdx b/docs/docs/Get-Started/about-langflow.mdx
index dc78d3dad..cf7937117 100644
--- a/docs/docs/Get-Started/about-langflow.mdx
+++ b/docs/docs/Get-Started/about-langflow.mdx
@@ -27,11 +27,11 @@ To [build a flow](/concepts-flows), you connect and configure component nodes. E
With Langflow's [visual editor](/concepts-overview), you can drag and drop components to quickly build and test a functional AI application workflow.
For example, you could build a chatbot flow for an e-commerce store that uses an LLM and a product data store to allow customers to ask questions about the store's products.
-
+
### Test flows in real-time
-You can use the [Playground](/concepts-playground) to test flows without having to build your entire application stack.
+You can use the [**Playground**](/concepts-playground) to test flows without having to build your entire application stack.
You can interact with your flows and get real-time feedback about flow logic and response generation.
You can also run individual components to test dependencies in isolation.
@@ -40,7 +40,7 @@ You can also run individual components to test dependencies in isolation.
You can use your flows as prototypes for more formal application development, or you can use the Langflow API to embed your flows into your application code.
-For more extensive projects, you can build Langflow as a dependency or deploy a Langflow server to serve flows over the public internet.
+For more extensive development, you can build Langflow as a dependency or deploy a Langflow server to serve flows over the public internet.
For more information, see the following:
@@ -58,10 +58,10 @@ All components offer parameters that you can set to fixed or variable values. Yo
### Agent and MCP support
-In addition to building agentic flows with Langflow, you can leverage Langflow's built-in agent and MCP features:
+In addition to building agent flows with Langflow, you can leverage Langflow's built-in agent and MCP features:
* [Use Langflow Agents](/agents)
-* [Use components and flows as Agent tools](/agents-tools)
+* [Use components and flows as agent tools](/agents-tools)
* [Use Langflow as an MCP server](/mcp-server)
* [Use Langflow as an MCP client](/mcp-client)
diff --git a/docs/docs/Get-Started/get-started-installation.mdx b/docs/docs/Get-Started/get-started-installation.mdx
index c988f57d1..917a4ed13 100644
--- a/docs/docs/Get-Started/get-started-installation.mdx
+++ b/docs/docs/Get-Started/get-started-installation.mdx
@@ -24,7 +24,7 @@ Langflow Desktop is a desktop version of Langflow that simplifies dependency man
However, some features aren't available for Langflow Desktop, such as the **Shareable Playground**.
-
+
1. Navigate to [Langflow Desktop](https://www.langflow.org/desktop).
2. Click **Download Langflow**, enter your contact information, and then click **Download**.
diff --git a/docs/docs/Get-Started/get-started-quickstart.mdx b/docs/docs/Get-Started/get-started-quickstart.mdx
index 7b203dc30..6d55bf1e9 100644
--- a/docs/docs/Get-Started/get-started-quickstart.mdx
+++ b/docs/docs/Get-Started/get-started-quickstart.mdx
@@ -51,26 +51,26 @@ Get started with Langflow by loading a template flow, running it, and then servi
1. In Langflow, click **New Flow**, and then select the **Simple Agent** template.
-
+
-The Simple Agent flow consists of an [Agent component](/agents) connected to [Chat I/O components](/components-io), a [Calculator component](/components-helpers#calculator), and a [URL component](/components-data#url). When you run this flow, you submit a query to the agent through the Chat Input component, the agent uses the Calculator and URL tools to generate a response, and then returns the response through the Chat Output component.
+The **Simple Agent** template consists of an [**Agent** component](/agents) connected to [**Chat Input** and **Chat Output** components](/components-io), a [**Calculator** component](/components-helpers#calculator), and a [**URL** component](/components-data#url). When you run this flow, you submit a query to the agent through the **Chat Input** component, the agent uses the **Calculator** and **URL** tools to generate a response, and then returns the response through the **Chat Output** component.
Many components can be tools for agents, including [Model Context Protocol (MCP) servers](/mcp-server). The agent decides which tools to call based on the context of a given query.
-2. In the **Agent** component's settings, in the **OpenAI API Key** field, enter your OpenAI API key directly or click the **Globe** to create a [global variable](/configuration-global-variables).
+2. In the **Agent** component, enter your OpenAI API key directly or click the **Globe** to create a [global variable](/configuration-global-variables).
This guide uses an OpenAI model for demonstration purposes. If you want to use a different provider, change the **Model Provider** and **Model Name** fields, and then provide credentials for your selected provider.
3. To run the flow, click **Playground**.
-4. To test the Calculator tool, ask the agent a simple math question, such as `I want to add 4 and 4.`
-To help you test and evaluate your flows, the Playground shows the agent's reasoning process as it analyzes the prompt, selects a tool, and then uses the tool to generate a response.
-In this case, a math question causes the agent to select the Calculator tool and use an action like `evaluate_expression`.
+4. To test the **Calculator** tool, ask the agent a simple math question, such as `I want to add 4 and 4.`
+To help you test and evaluate your flows, the **Playground** shows the agent's reasoning process as it analyzes the prompt, selects a tool, and then uses the tool to generate a response.
+In this case, a math question causes the agent to select the **Calculator** tool and use an action like `evaluate_expression`.

-5. To test the URL tool, ask the agent about current events.
-For this request, the agent selects the URL tool's `fetch_content` action, and then returns a summary of current news headlines.
+5. To test the **URL** tool, ask the agent about current events.
+For this request, the agent selects the **URL** tool's `fetch_content` action, and then returns a summary of current news headlines.
6. When you are done testing the flow, click **Close**.
@@ -93,7 +93,7 @@ For example, you can use the `/run` endpoint to run a flow and get the result.
Langflow provides code snippets to help you get started with the Langflow API.
-1. To open the **API access pane**, in the **Playground**, click **Share**, and then click **API access**.
+1. When editing a flow, click **Share**, and then click **API access**.
The default code in the API access pane constructs a request with the Langflow server `url`, `headers`, and a `payload` of request data.
The code snippets automatically include the `LANGFLOW_SERVER_ADDRESS` and `FLOW_ID` values for the flow, and a script to include your `LANGFLOW_API_KEY` if you've set it as an environment variable in your terminal session.
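As a minimal sketch of what the generated snippet assembles, the following Python builds the URL, headers, and payload for a `/run` request. The server address and `FLOW_ID` are placeholders, and the `x-api-key` header is attached only when `LANGFLOW_API_KEY` is set, mirroring the behavior described above:

```python
import json
import os

def build_run_request(server_address: str, flow_id: str, message: str):
    """Assemble the URL, headers, and JSON body for a /run request."""
    url = f"{server_address}/api/v1/run/{flow_id}"
    headers = {"Content-Type": "application/json"}
    # Include the API key header only if it is set in the environment.
    api_key = os.environ.get("LANGFLOW_API_KEY")
    if api_key:
        headers["x-api-key"] = api_key
    payload = {
        "input_type": "chat",   # chat-style input
        "output_type": "chat",  # chat-style output
        "input_value": message,
    }
    return url, headers, json.dumps(payload)

url, headers, body = build_run_request("http://localhost:7860", "FLOW_ID", "Hello")
```

You can pass the resulting `url`, `headers`, and `body` to any HTTP client, such as `requests.post`.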
@@ -361,10 +361,10 @@ In a production application, you probably want to select parts of this response
### Extract data from the response
-The following example builds on the API pane's example code to create a question-and-answer chat in your terminal that stores the Agent's previous answer.
+The following example builds on the API pane's example code to create a question-and-answer chat in your terminal that stores the agent's previous answer.
1. Incorporate your **Simple Agent** flow's `/run` snippet into the following script.
-This script runs a question-and-answer chat in your terminal and stores the Agent's previous answer so you can compare them.
+This script runs a question-and-answer chat in your terminal and stores the agent's previous answer so you can compare them.
@@ -514,7 +514,7 @@ This script runs a question-and-answer chat in your terminal and stores the Agen
-2. To view the Agent's previous answer, type `compare`. To close the terminal chat, type `exit`.
+2. To view the agent's previous answer, type `compare`. To close the terminal chat, type `exit`.
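The core of the script above can be reduced to a small loop. The following is a sketch, assuming a `run_flow` function that wraps your flow's `/run` snippet and returns the agent's reply as a string:

```python
def chat_loop(inputs, run_flow):
    """Question-and-answer loop that remembers the previous answer.

    `inputs` is any iterable of user messages; `run_flow` is assumed
    to call the flow's /run endpoint and return the reply text.
    """
    previous_answer = None
    transcript = []
    for line in inputs:
        if line == "exit":      # close the chat
            break
        if line == "compare":   # show the stored previous answer
            transcript.append(previous_answer or "No previous answer yet.")
            continue
        answer = run_flow(line)
        transcript.append(answer)
        previous_answer = answer  # keep for the next `compare`
    return transcript
```

In a terminal chat, `inputs` would be replaced by repeated calls to `input()`, and `transcript` entries would be printed as they arrive.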
### Use tweaks to apply temporary overrides to a flow run
@@ -528,7 +528,7 @@ To assist with formatting, you can define tweaks in Langflow's **Input Schema**
1. To open the **Input Schema** pane, from the **API access** pane, click **Input Schema**.
2. In the **Input Schema** pane, select the parameter you want to modify in your next request.
-Enabling parameters in the **Input Schema** pane does not **allow** modifications to the listed parameters. It only adds them to the example code.
+Enabling parameters in the **Input Schema** pane doesn't permanently change the listed parameters. It only adds them to the sample code snippets.
3. For example, to change the LLM provider from OpenAI to Groq, and include your Groq API key with the request, select the values **Model Providers**, **Model**, and **Groq API Key**.
Langflow updates the `tweaks` object in the code snippets based on your input parameters, and includes default values to guide you.
Use the updated code snippets in your script to run your flow with your overrides.
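A `tweaks` object merged into the request payload might look like the following sketch. The component ID (`Agent-XXXXX`) and field names are illustrative placeholders, not values from a real flow; copy the actual IDs and fields from the code snippets generated by your own **Input Schema** pane:

```python
import json

# Illustrative tweaks: temporarily override provider, model, and API key
# for a single run. "Agent-XXXXX" and the field names are placeholders.
tweaks = {
    "Agent-XXXXX": {
        "agent_llm": "Groq",
        "model_name": "llama-3.1-8b-instant",
        "api_key": "GROQ_API_KEY",
    }
}

payload = {
    "input_type": "chat",
    "output_type": "chat",
    "input_value": "Hello",
    "tweaks": tweaks,  # overrides apply to this run only
}
body = json.dumps(payload)
```

Because tweaks travel with the request, the values stored in the flow itself are unchanged.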
diff --git a/docs/docs/Integrations/Apify/integrations-apify.mdx b/docs/docs/Integrations/Apify/integrations-apify.mdx
index c6f0bd68e..12ecde226 100644
--- a/docs/docs/Integrations/Apify/integrations-apify.mdx
+++ b/docs/docs/Integrations/Apify/integrations-apify.mdx
@@ -21,7 +21,7 @@ Your flows can use the **Apify Actors** component to run **Actors** to accomplis
2. Connect the component to other components in your flow.
The component can be used to perform tasks as a standalone step in a flow or as a tool for an agent.
- To enable tool mode for this component, change the component's output type from **Output** to **Tool**, and then connect it to the **Tools** port on an **Agent** component.
+ To enable **Tool Mode** for this component, change the component's output type from **Output** to **Tool**, and then connect it to the **Tools** port on an **Agent** component.
**Apify Actors** components output the results of the Actor run as a JSON object in Langflow's [`Data` type](/data-types#data).
diff --git a/docs/docs/Integrations/Arize/integrations-arize.mdx b/docs/docs/Integrations/Arize/integrations-arize.mdx
index a0ec43e96..c1b0a1b3e 100644
--- a/docs/docs/Integrations/Arize/integrations-arize.mdx
+++ b/docs/docs/Integrations/Arize/integrations-arize.mdx
@@ -41,7 +41,7 @@ Instructions for integrating Langflow and Arize are also available in the Arize
Replace `SPACE_ID` and `API_KEY` with the values you copied from the Arize platform.
- You do not need to specify the Arize project name if you're using the standard Arize platform.
+ You don't need to specify the Arize project name if you're using the standard Arize platform.
4. Start your Langflow application with your `.env` file:
diff --git a/docs/docs/Integrations/Cleanlab/integrations-cleanlab.mdx b/docs/docs/Integrations/Cleanlab/integrations-cleanlab.mdx
index 808ff1bcc..d976516a9 100644
--- a/docs/docs/Integrations/Cleanlab/integrations-cleanlab.mdx
+++ b/docs/docs/Integrations/Cleanlab/integrations-cleanlab.mdx
@@ -7,9 +7,9 @@ import Icon from "@site/src/components/icon";
[Cleanlab](https://www.cleanlab.ai/) adds automation and trust to every data point going in and every prediction coming out of AI and RAG solutions.
-Use the Cleanlab components to integrate Cleanlab Evaluations with Langflow and unlock trustworthy Agentic, RAG, and LLM pipelines with Cleanlab's evaluation and remediation suite.
+Use the Cleanlab components to integrate Cleanlab Evaluations with Langflow and unlock trustworthy agentic, RAG, and LLM pipelines with Cleanlab's evaluation and remediation suite.
-You can use these components to quantify the trustworthiness of any LLM response with a score between `0` and `1`, and explain why a response may be good or bad. For RAG/Agentic pipelines with context, you can evaluate context sufficiency, groundedness, helpfulness, and query clarity with quantitative scores. Additionally, you can remediate low-trust responses with warnings or fallback answers.
+You can use these components to quantify the trustworthiness of any LLM response with a score between `0` and `1`, and explain why a response may be good or bad. For RAG or agent pipelines with context, you can evaluate context sufficiency, groundedness, helpfulness, and query clarity with quantitative scores. Additionally, you can remediate low-trust responses with warnings or fallback answers.
Authentication is required with a Cleanlab API key.
@@ -28,7 +28,7 @@ You can toggle parameters through the

-Connect the `Message` output from any LLM component to the `response` input of the **CleanlabEvaluator** component, and then connect the Prompt component to its `prompt` input.
+When you run the flow, the **Cleanlab Evaluator** component returns a trust score and an explanation for the evaluated response.
-The **CleanlabEvaluator** component returns a trust score and explanation from the flow.
+The **Cleanlab Remediator** component uses this trust score to determine whether to output the original response, warn about it, or replace it with a fallback answer.
-The **CleanlabRemediator** component uses this trust score to determine whether to output the original response, warn about it, or replace it with a fallback answer.
+This example shows a response that was determined to be untrustworthy (a score of `0.09`) and flagged with a warning by the **Cleanlab Remediator** component.
-This example shows a response that was determined to be untrustworthy (a score of `.09`) and flagged with a warning by the **CleanlabRemediator** component.
+
-
+To hide untrustworthy responses, configure the **Cleanlab Remediator** component to replace the response with a fallback message.
-To hide untrustworthy responses, configure the **CleanlabRemediator** component to replace the response with a fallback message.
-
-
+
### Evaluate RAG pipeline
-As an example, create a flow based on the **Vector Store RAG** template, and then add the **CleanlabRAGEvaluator** component to evaluate the flow's context, query, and response.
-Connect the **context**, **query**, and **response** outputs from the other components in the RAG flow to the **CleanlabRAGEvaluator** component.
+As an example, create a flow based on the **Vector Store RAG** template, and then add the **Cleanlab RAG Evaluator** component to evaluate the flow's context, query, and response.
+Connect the **context**, **query**, and **response** outputs from the other components in the RAG flow to the **Cleanlab RAG Evaluator** component.

-Here is an example of the `Evaluation Summary` output from the **CleanlabRAGEvaluator** component:
+Here is an example of the `Evaluation Summary` output from the **Cleanlab RAG Evaluator** component:

-The `Evaluation Summary` includes the query, context, response, and all evaluation results. In this example, the `Context Sufficiency` and `Response Groundedness` scores are low (a score of `0.002`) because the context doesn't contain information about the query, and the response is not grounded in the context.
\ No newline at end of file
+The `Evaluation Summary` includes the query, context, response, and all evaluation results. In this example, the `Context Sufficiency` and `Response Groundedness` scores are low (a score of `0.002`) because the context doesn't contain information about the query, and the response isn't grounded in the context.
\ No newline at end of file
diff --git a/docs/docs/Integrations/Composio/integrations-composio.mdx b/docs/docs/Integrations/Composio/integrations-composio.mdx
index e97d5dbd6..cf4af466a 100644
--- a/docs/docs/Integrations/Composio/integrations-composio.mdx
+++ b/docs/docs/Integrations/Composio/integrations-composio.mdx
@@ -26,9 +26,9 @@ Depending on the components you use, you may also need additional access, such a
## Use Composio components in a flow
-1. In the Langflow **Workspace**, add an **Agent** component.
+1. In Langflow, create a flow.
-2. In the **Workspace**, add the **Composio Tools** component.
+2. Add an **Agent** component and a **Composio Tools** component.
3. Connect the **Agent** component's **Tools** port to the **Composio Tools** component's **Tools** port.
@@ -36,9 +36,9 @@ Depending on the components you use, you may also need additional access, such a
5. In the **Tool Name** field, select the tool you want your agent to have access to.
- For this example, select the **Gmail** tool to allow your agent to control an email account with the Composio tool.
+ For this example, select the **Gmail** tool to allow your agent to control an email account with the **Composio Tools** component.
-6. In the **Actions** field, select the action you want the **Agent** to take with the **Gmail** tool.
+6. In the **Actions** field, select the action you want the agent to take with the **Gmail** tool.
The **Gmail** tool supports multiple actions, and you can select more than one action for the same tool.
For this example, select **GMAIL_CREATE_EMAIL_DRAFT**.
@@ -46,7 +46,7 @@ Depending on the components you use, you may also need additional access, such a
7. Add **Chat Input** and **Chat Output** components to your flow, and then connect them to the **Agent** component's **Input** and **Response**, respectively.
- 
+ 
8. In the **Agent** component, enter your OpenAI API key or configure the **Agent** component to use a different LLM.
diff --git a/docs/docs/Integrations/Docling/integrations-docling.mdx b/docs/docs/Integrations/Docling/integrations-docling.mdx
index 0655e8b6c..8292bf10b 100644
--- a/docs/docs/Integrations/Docling/integrations-docling.mdx
+++ b/docs/docs/Integrations/Docling/integrations-docling.mdx
@@ -38,28 +38,29 @@ To learn more about content extraction with Docling, see the video tutorial [Doc
This example demonstrates how to use Docling components to split a PDF in a flow:
-1. Connect a **Docling** and an **ExportDoclingDocument** component to a [**Split Text**](/components-processing#split-text) component.
- The **Docling** component loads the document, and the **ExportDoclingDocument** component converts the DoclingDocument into the format you select. This example converts the document to Markdown, with images represented as placeholders.
+1. Connect a **Docling** and an **Export DoclingDocument** component to a [**Split Text** component](/components-processing#split-text).
+
+ The **Docling** component loads the document, and the **Export DoclingDocument** component converts the `DoclingDocument` into the format you select. This example converts the document to Markdown, with images represented as placeholders.
The **Split Text** component will split the Markdown into chunks for the vector database to store in the next part of the flow.
-2. Connect a [**Chroma DB**](/components-vector-stores#chroma-db) component to the **Split text** component's **Chunks** output.
-3. Connect an [**Embedding Model**](/components-embedding-models) to Chroma's **Embedding** port, and a **Chat Output** component to view the extracted [DataFrame](/data-types#dataframe).
-4. Add your OpenAI API key to the Embedding Model.
-The flow looks like this:
+2. Connect a [**Chroma DB** vector store component](/components-vector-stores#chroma-db) to the **Split Text** component's **Chunks** output.
+3. Connect an [**Embedding Model** component](/components-embedding-models) to the **Chroma DB** component's **Embedding** port and a **Chat Output** component to view the extracted [`DataFrame`](/data-types#dataframe).
+4. Add your OpenAI API key to the **Embedding Model** component.
-
+ 
5. Add a file to the **Docling** component.
6. To run the flow, click **Playground**.
+
The chunked document is loaded as vectors into your vector database.
## Docling components
-The following sections describe the purpose and configuration options for each component in the Docling bundle.
+The following sections describe the purpose and configuration options for each component in the **Docling** bundle.
-### Docling
+### Docling language model
-The **Docling** component ingest documents, and then uses Docling to process them by running the Docling models locally.
+The **Docling** language model component ingests documents, and then uses Docling to process them by running the Docling models locally.
It outputs `files`, which contains the processed files with `DoclingDocument` data.
@@ -104,7 +105,7 @@ It outputs the chunked documents as a [`DataFrame`](/data-types#dataframe).
| hf_model_name | String | Model name of the tokenizer to use with the HybridChunker when Hugging Face is chosen. |
| openai_model_name | String | Model name of the tokenizer to use with the HybridChunker when OpenAI is chosen. |
| max_tokens | Integer | Maximum number of tokens for the HybridChunker. |
-| doc_key | String | The key to use for the DoclingDocument column. |
+| doc_key | String | The key to use for the `DoclingDocument` column. |
### Export DoclingDocument
@@ -121,4 +122,4 @@ It can output the exported data as either [`Data`](/data-types#data) or [`DataFr
| image_mode | String | Specify how images are exported in the output (placeholder, embedded). |
| md_image_placeholder | String | Specify the image placeholder for markdown exports. |
| md_page_break_placeholder | String | Add this placeholder between pages in the markdown output. |
-| doc_key | String | The key to use for the DoclingDocument column. |
\ No newline at end of file
+| doc_key | String | The key to use for the `DoclingDocument` column. |
\ No newline at end of file
diff --git a/docs/docs/Integrations/Google/integrations-google-big-query.mdx b/docs/docs/Integrations/Google/integrations-google-big-query.mdx
index 38419c275..d1060fd6c 100644
--- a/docs/docs/Integrations/Google/integrations-google-big-query.mdx
+++ b/docs/docs/Integrations/Google/integrations-google-big-query.mdx
@@ -29,26 +29,26 @@ For more information, see [BigQuery access control with IAM](https://cloud.googl
5. Click **Add Key**, and then click **Create new key**.
6. Under **Key type**, select **JSON**, and then click **Create**.
A JSON private key file is downloaded to your machine.
-Now that you have a service account and a JSON private key, you need to configure the credentials in the Langflow BigQuery component.
+Now that you have a service account and a JSON private key, you need to configure the credentials in the Langflow **BigQuery** component.
## Configure credentials in the Langflow component
With your service account configured and your credentials JSON file created, follow these steps to authenticate the Langflow application.
-1. Create a new project in Langflow.
-2. From the **Components** menu, drag and drop the BigQuery component to your workspace.
-3. In the BigQuery component's **Upload Service Account JSON** field, click **Select file**.
+1. Create a new flow in Langflow.
+2. In the **Components** menu, find the **BigQuery** component, and then add it to your flow.
+3. In the **BigQuery** component's **Upload Service Account JSON** field, click **Select file**.
4. In the **My Files** pane, select **Click or drag files here**.
Your file browser opens.
5. In your file browser, select the service account JSON file, and then click **Open**.
6. In the **My Files** pane, select your service account JSON file, and then click **Select files**.
-The BigQuery component can now query your datasets and tables using your service account JSON file.
+The **BigQuery** component can now query your datasets and tables using your service account JSON file.
## Query a BigQuery dataset
With your component credentials configured, query your BigQuery datasets and tables to confirm connectivity.
-1. Connect **Chat Input** and **Chat Output** components to the BigQuery component.
+1. Connect **Chat Input** and **Chat Output** components to the **BigQuery** component.

diff --git a/docs/docs/Integrations/Notion/notion-agent-meeting-notes.mdx b/docs/docs/Integrations/Notion/notion-agent-meeting-notes.mdx
index d35590fa6..94cb6ce6a 100644
--- a/docs/docs/Integrations/Notion/notion-agent-meeting-notes.mdx
+++ b/docs/docs/Integrations/Notion/notion-agent-meeting-notes.mdx
@@ -42,7 +42,7 @@ This component allows users to input the meeting transcript directly into the fl
- Sort Direction
- **Output**: List of database data
-### Prompt
+### Prompt Template
This component creates a dynamic prompt template using the following inputs:
- Meeting Transcript
@@ -54,7 +54,7 @@ This component creates a dynamic prompt template using the following inputs:
- **Purpose**: Analyzes the meeting transcript and identifies tasks and action items.
- **Inputs**:
- - System Prompt (from the Prompt component)
+ - System Prompt (from the **Prompt Template** component)
- Language Model (OpenAI)
- Tools:
- Notion Search
@@ -69,7 +69,7 @@ This component creates a dynamic prompt template using the following inputs:
- **Purpose**: Executes actions in Notion based on the meeting summary.
- **Inputs**:
- - System Prompt (from the second Prompt component)
+ - System Prompt (from the second **Prompt Template** component)
- Language Model (OpenAI)
- Tools:
- List Database Properties
@@ -128,17 +128,18 @@ Displays the final output of the Notion Agent in the Playground.
## Run the Notion Meeting Notes flow
-To run the Notion Agent for Meeting Notes:
+1. Create a flow manually or import a pre-built flow JSON file:
-1. Open Langflow and create a new project.
-2. Add the components listed above to your flow canvas, or download the [Flow Meeting Agent Flow](./Meeting_Notes_Agent.json)(Download link) and **Import** the JSON file into Langflow.
-3. Connect the components as shown in the flow diagram.
-4. Input the Notion and OpenAI API keys in their respective components.
-5. Paste your meeting transcript into the Meeting Transcript component.
-6. Run the flow by clicking **Run component** on the **Chat Output** component.
-7. Review the output in the Chat Output component, which will summarize the actions taken in your Notion workspace.
+ * Recommended: [Download the Meeting Agent flow JSON](./Meeting_Notes_Agent.json) and then [import the flow](/concepts-flows-import) into Langflow.
+ * Create a blank flow, and then add the previously described components to your flow, connecting them as shown in the flow diagram.
-For optimal results, use detailed meeting transcripts. The quality of the output depends on the comprehensiveness of the input provided.
+2. Enter the Notion and OpenAI API keys in their respective components.
+3. Paste your meeting transcript into the **Meeting Transcript** component.
+
+ For optimal results, use detailed meeting transcripts. The quality of the output depends on the comprehensiveness of the input provided.
+
+4. To run the flow, click **Run component** on the **Chat Output** component, or open the **Playground**.
+5. Review the output summarizing the actions taken in your Notion workspace.
## Customization
diff --git a/docs/docs/Integrations/Nvidia/integrations-nvidia-g-assist.mdx b/docs/docs/Integrations/Nvidia/integrations-nvidia-g-assist.mdx
index 0fe64fb00..8c3fc64a9 100644
--- a/docs/docs/Integrations/Nvidia/integrations-nvidia-g-assist.mdx
+++ b/docs/docs/Integrations/Nvidia/integrations-nvidia-g-assist.mdx
@@ -4,14 +4,12 @@ slug: /integrations-nvidia-g-assist
---
:::important
-This component is available only for Langflow users with NVIDIA GPUs on Windows systems.
+The **NVIDIA G-Assist** component is available only for Langflow users with NVIDIA GPUs on Windows systems.
:::
The **NVIDIA G-Assist** component enables interaction with NVIDIA GPU drivers through natural language prompts.
-
For example, prompt G-Assist with `"What is my current GPU temperature?"` or `"Show me the available GPU memory"` to get information, and then tell G-Assist to modify your GPU settings.
-
-For more information, see the [NVIDIA G-assist project repository](https://github.com/NVIDIA/g-assist).
+For more information, see the [NVIDIA G-Assist repository](https://github.com/NVIDIA/g-assist).
## Prerequisites
@@ -20,18 +18,20 @@ For more information, see the [NVIDIA G-assist project repository](https://githu
* `gassist.rise` package installed. This package is included in your [Langflow installation](/get-started-installation).
## Use the G-Assist component in a flow
-1. Create a flow with a **Chat input** component, a **G-Assist** component, and a **Chat output** component.
-2. Connect the **Chat input** component to the **G-Assist** component's **Prompt** input, and then connect the **G-Assist** component's output to the **Chat output** component.
-3. Open the **Playground**, and then ask a question about your GPU. For example, "What is my current GPU temperature?".
+
+1. Create a flow with **Chat Input**, **G-Assist**, and **Chat Output** components.
+2. Connect the **Chat Input** component to the **G-Assist** component's **Prompt** input.
+3. Connect the **G-Assist** component's output to the **Chat Output** component.
+4. Open the **Playground**, and then ask a question about your GPU. For example, `"What is my current GPU temperature?"`.
The **G-Assist** component queries your GPU, and the response appears in the **Playground**.
### Inputs
The **NVIDIA G-Assist** component accepts a single input:
+
- `prompt`: A human-readable prompt processed by NVIDIA G-Assist.
### Outputs
-The **NVIDIA G-Assist** component outputs a [Message](/data-types#message) object that contains:
-- `text`: The response from NVIDIA G-Assist containing the completed operation result.
-- The NVIDIA G-Assist message response is wrapped in a Langflow [Message](/data-types#message) object.
\ No newline at end of file
+The **NVIDIA G-Assist** component outputs a [`Message`](/data-types#message) object that contains the NVIDIA G-Assist response.
+The string response with the completed operation result is available in the `text` field of the `Message` object.
\ No newline at end of file
diff --git a/docs/docs/Integrations/Nvidia/integrations-nvidia-ingest.mdx b/docs/docs/Integrations/Nvidia/integrations-nvidia-ingest.mdx
index f21e5d530..3ee170b47 100644
--- a/docs/docs/Integrations/Nvidia/integrations-nvidia-ingest.mdx
+++ b/docs/docs/Integrations/Nvidia/integrations-nvidia-ingest.mdx
@@ -3,12 +3,13 @@ title: Integrate NVIDIA Retriever Extraction with Langflow
slug: /integrations-nvidia-ingest
---
-The **NVIDIA Retriever Extraction** component integrates with the [NVIDIA nv-ingest](https://github.com/NVIDIA/nv-ingest) microservice for data ingestion, processing, and extraction of text files.
+import Icon from "@site/src/components/icon";
+The **NVIDIA Retriever Extraction** component integrates with the [NVIDIA nv-ingest](https://github.com/NVIDIA/nv-ingest) microservice for data ingestion, processing, and extraction of text files.
The `nv-ingest` service supports multiple extraction methods for PDF, DOCX, and PPTX file types, and includes pre- and post-processing services like splitting, chunking, and embedding generation. The extractor service's High Resolution mode uses the `nemoretriever-parse` extraction method for better quality extraction from scanned PDF documents. This feature is only available for PDF files.
-The **NVIDIA Retriever Extraction** component imports the NVIDIA `Ingestor` client, ingests files with requests to the NVIDIA ingest endpoint, and outputs the processed content as a list of [Data](/data-types#data) objects. `Ingestor` accepts additional configuration options for data extraction from other text formats. To configure these options, see the [component parameters](/integrations-nvidia-ingest#parameters).
+The **NVIDIA Retriever Extraction** component imports the NVIDIA `Ingestor` client, ingests files with requests to the NVIDIA ingest endpoint, and outputs the processed content as a list of [`Data`](/data-types#data) objects. `Ingestor` accepts additional configuration options for data extraction from other text formats. To configure these options, see the [component parameters](#parameters).
:::tip
NVIDIA Retriever Extraction is also known as NV-Ingest and NeMo Retriever Extraction.
@@ -36,31 +37,27 @@ NVIDIA Retriever Extraction is also known as NV-Ingest and NeMo Retriever Extrac
## Use the NVIDIA Retriever Extraction component in a flow
-The **NVIDIA Retriever Extraction** component accepts **Message** inputs and outputs **Data**. The component calls an NVIDIA Ingest microservice's endpoint to ingest a local file and extract the text.
+The **NVIDIA Retriever Extraction** component accepts `Message` inputs, and then outputs `Data`. The component calls an NVIDIA Ingest microservice's endpoint to ingest a local file and extract the text.
-To use the NVIDIA Retriever Extraction component in your flow, follow these steps:
-1. In the component library, click the **NVIDIA Retriever Extraction** component, and then drag it onto the canvas.
+To use the **NVIDIA Retriever Extraction** component in your flow, follow these steps:
+
+1. Add the **NVIDIA Retriever Extraction** component to your flow.
2. In the **Base URL** field, enter the URL of the NVIDIA Ingest endpoint.
-Optionally, add the endpoint URL as a **Global variable**:
- 1. Click **Settings**, and then click **Global Variables**.
- 2. Click **Add New**.
- 3. Name your variable. Paste your endpoint in the **Value** field.
- 4. In the **Apply To Fields** field, select the field you want to globally apply this variable to. In this case, select **NVIDIA Base URL**.
- 5. Click **Save Variable**.
-3. Click the **Select files** button to select which file you want to ingest.
-4. Select which text type to extract from the file.
-The component supports text, charts, tables, images, and infographics.
-Optionally, for PDF files, enable High Resolution mode for better quality extraction from scanned documents.
-5. Select whether to split the text into chunks.
-Modify the splitting parameters in the component's **Configuration** tab.
-6. Click **Run** to ingest the file.
-7. To confirm the component is ingesting the file, open the **Logs** pane to view the output of the flow.
-8. To store the processed data in a vector database, add an **AstraDB Vector** component to your flow, and connect the **NVIDIA Retriever Extraction** component to the **AstraDB Vector** component with a **Data** output.
+You can also store the URL as a [global variable](/configuration-global-variables) to reuse it in multiple components and flows.
+3. Click **Select Files** to select a file to ingest.
+4. Select which text type to extract from the file: text, charts, tables, images, or infographics.
+5. Optional: For PDF files, enable **High Resolution Mode** for better quality extraction from scanned documents.
+6. Select whether to split the text into chunks.
-
+ You can modify the splitting parameters and other hidden settings through the **Controls** in the [component's header menu](/concepts-components#component-menus).
-9. Run the flow.
-Inspect your Astra DB vector database to view the processed data.
+7. Click **Run component** to ingest the file, and then open the **Logs** pane to confirm the component ingested the file.
+8. To store the processed data in a vector database, add a **Vector Store** component to your flow, and then connect the **NVIDIA Retriever Extraction** component's `Data` output to the **Vector Store** component's input.
+
+ When you run the flow with a **Vector Store** component, the processed data is stored in the vector database.
+ You can query your database to retrieve the uploaded data.
+
+ 
## NVIDIA Retriever Extraction component parameters {#parameters}
@@ -93,7 +90,7 @@ For more information, see the [NV-Ingest documentation](https://nvidia.github.io
### Outputs
-The **NVIDIA Retriever Extraction** component outputs a list of [Data](/data-types#data) objects where each object contains:
+The **NVIDIA Retriever Extraction** component outputs a list of [`Data`](/data-types#data) objects where each object contains:
- `text`: The extracted content.
- For text documents: The extracted text content.
- For tables and charts: The extracted table/chart content.
diff --git a/docs/docs/Integrations/Nvidia/integrations-nvidia-nim-wsl2.mdx b/docs/docs/Integrations/Nvidia/integrations-nvidia-nim-wsl2.mdx
index 908af8a56..d5a9958e2 100644
--- a/docs/docs/Integrations/Nvidia/integrations-nvidia-nim-wsl2.mdx
+++ b/docs/docs/Integrations/Nvidia/integrations-nvidia-nim-wsl2.mdx
@@ -6,7 +6,7 @@ slug: /integrations-nvidia-ingest-wsl2
Connect Langflow with NVIDIA NIM on an RTX Windows system with [Windows Subsystem for Linux 2 (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install) installed.
[NVIDIA NIM (NVIDIA Inference Microservices)](https://docs.nvidia.com/nim/index.html) provides containers to self-host GPU-accelerated inferencing microservices.
-In this example, you connect a model component in **Langflow** to a deployed `mistral-nemo-12b-instruct` NIM on an **RTX Windows system** with **WSL2**.
+In this example, you connect a Language Model component in Langflow to a deployed `mistral-nemo-12b-instruct` NIM on an **RTX Windows system** with **WSL2**.
For more information on NVIDIA NIM, see the [NVIDIA documentation](https://docs.nvidia.com/nim/index.html).
@@ -23,11 +23,11 @@ For more information on NVIDIA NIM, see the [NVIDIA documentation](https://docs.
## Use the NVIDIA NIM in a flow
-To connect the NIM you've deployed with Langflow, add the **NVIDIA** model component to a flow.
+To connect the deployed NIM to Langflow, add the **NVIDIA** language model component to a flow:
-1. Create a [basic prompting flow](/get-started-quickstart).
+1. Create a flow based on the **Basic Prompting** template.
2. Replace the **OpenAI** model component with the **NVIDIA** component.
3. In the **NVIDIA** component's **Base URL** field, add the URL where your NIM is accessible. If you followed your model's [deployment instructions](https://build.nvidia.com/nv-mistralai/mistral-nemo-12b-instruct/deploy?environment=wsl2.md), the value is `http://localhost:8000/v1`.
4. In the **NVIDIA** component's **NVIDIA API Key** field, add your NVIDIA API Key.
-5. Select your model from the **Model Name** dropdown.
+5. Select your model from the **Model Name** field.
6. Open the **Playground** and chat with your **NIM** model.
\ No newline at end of file
diff --git a/docs/docs/Integrations/integrations-assemblyai.mdx b/docs/docs/Integrations/integrations-assemblyai.mdx
index 1daafbecc..9956898c1 100644
--- a/docs/docs/Integrations/integrations-assemblyai.mdx
+++ b/docs/docs/Integrations/integrations-assemblyai.mdx
@@ -12,12 +12,7 @@ The AssemblyAI components allow you to apply powerful Speech AI models to your a
- Generating subtitles
- Applying LLMs to audio files
-More info about AssemblyAI:
-
-- [Website](https://www.assemblyai.com/)
-- [AssemblyAI API Docs](https://www.assemblyai.com/docs)
-- [Get a Free API key](https://www.assemblyai.com/dashboard/signup)
-
+For more information about AssemblyAI features and functionality used by AssemblyAI components, see the [AssemblyAI API Docs](https://www.assemblyai.com/docs).
## Prerequisites
@@ -72,7 +67,7 @@ This component allows you to generate subtitles in SRT or VTT format.
- **Input**:
- AssemblyAI API Key: Your API key.
- - Transcription Result: The output of the *Poll Transcript* component.
+ - Transcription Result: The output of the **Poll Transcript** component.
- Subtitle Format: The format of the captions (SRT or VTT).
- Character per Caption (Optional): The maximum number of characters per caption (0 for no limit).
@@ -88,8 +83,8 @@ LeMUR automatically ingests the transcript as additional context, making it easy
- **Input**:
- AssemblyAI API Key: Your API key.
- - Transcription Result: The output of the *Poll Transcript* component.
- - Input Prompt: The text to prompt the model. You can type your prompt in this field or connect it to a *Prompt* component.
+ - Transcription Result: The output of the **Poll Transcript** component.
+ - Input Prompt: The text to prompt the model. You can type your prompt in this field or connect it to a **Prompt Template** component.
- Final Model: The model that is used for the final prompt after compression is performed. Default is Claude 3.5 Sonnet.
- Temperature (Optional): The temperature to use for the model. Default is 0.0.
- Max Output Size (Optional): Max output size in tokens, up to 4000. Default is 2000.
@@ -128,33 +123,44 @@ This component can be used as a standalone component to list all previously gene
## Run the Transcription and Speech AI Flow
-To run the Transcription and Speech AI Flow:
+1. Build the flow manually or import a pre-built JSON file:
-1. Open Langflow and create a new project.
-2. Add the components listed above to your flow canvas, or download the [AssemblyAI Transcription and Speech AI Flow](./AssemblyAI_Flow.json)(Download link) and **Import** the JSON file into Langflow.
-3. Connect the components as shown in the flow diagram. **Tip**: Freeze the path of the *Start Transcript* component to only submit the file once.
-4. Input the AssemblyAI API key in in all components that require the key (Start Transcript, Poll Transcript, Get Subtitles, LeMUR, List Transcripts).
-5. Select an audio or video file in the *Start Transcript* component.
-6. Run the flow by clicking **Run component** on the *Parse Data* component. Make sure that the specified template is `{text}`.
-7. To generate subtitles, click **Run component** on the *List Transcript* component.
+ * Recommended: [Download the AssemblyAI Transcription and Speech AI flow JSON](./AssemblyAI_Flow.json), and then [import the flow](/concepts-flows-import) into Langflow.
+ * Create a blank flow, and then add the previously described components to your flow, connecting them as shown in the flow diagram.
+2. Input your AssemblyAI API key in all components that require the key (**Start Transcript**, **Poll Transcript**, **Get Subtitles**, **LeMUR**, **List Transcripts**).
+
+3. Select an audio or video file for the **Start Transcript** component.
+
+ Optional: After adding a file to the **Start Transcript** component, run and [freeze the component](/concepts-components#freeze-a-component) so you only submit the file once, no matter how many times you run the flow.
+ To do this, click **Run component** to preload the file, and then click **Show More** and select **Freeze** to lock the result.
+ Subsequent flow runs use the frozen component's cached output.
+
+4. Test the transcription by clicking **Run component** on the **Parser** component. Make sure that the specified template is `{text}`.
+
+ Running one component runs all upstream components as well as the selected component and then stops the flow run.
+ In this case, the **Start Transcript** and **Poll Transcript** components are upstream from the **Parser** component.
+   If you froze the **Start Transcript** component, the flow uses the cached output from **Start Transcript** and then runs the **Poll Transcript** component to get the transcription result.
+ Check the flow logs or inspect the output of the **Parser** component to see the transcribed text result.
+
+5. To generate subtitles and run the full flow, click **Run component** on the **List Transcripts** component.
## Customization
The flow can be customized by:
-1. Modifying the parameters in the *Start Transcript* component.
-2. Modifying the subtitle format in the *Get Subtitles* component.
-3. Modifying the LLM prompt for input of the *LeMUR* component.
-4. Modifying the LLM parameters (e.g., temperature) in the *LeMUR* component.
+1. Modifying the parameters in the **Start Transcript** component.
+2. Modifying the subtitle format in the **Get Subtitles** component.
+3. Modifying the LLM prompt for input of the **LeMUR** component.
+4. Modifying the LLM parameters (e.g., temperature) in the **LeMUR** component.
## Troubleshooting
If you encounter issues:
1. Ensure the API key is correctly set in all components that require the key.
-2. To use LeMUR, you need to upgrade your AssemblyAI account, since this is not included in the free account.
+2. To use LeMUR, you need to upgrade your AssemblyAI account, since this isn't included in the free account.
3. Verify that all components are properly connected in the flow.
4. Review the Langflow logs for any error messages.
-
-For more advanced usage, refer to the [AssemblyAI API documentation](https://www.assemblyai.com/docs/). If you need more help, you can reach out to the [AssemblyAI support](https://www.assemblyai.com/contact/support).
+5. Check the [AssemblyAI API documentation](https://www.assemblyai.com/docs/).
+6. Contact [AssemblyAI support](https://www.assemblyai.com/contact/support).
diff --git a/docs/docs/Integrations/integrations-langsmith.mdx b/docs/docs/Integrations/integrations-langsmith.mdx
index 1cd88b14b..de2968d35 100644
--- a/docs/docs/Integrations/integrations-langsmith.mdx
+++ b/docs/docs/Integrations/integrations-langsmith.mdx
@@ -21,7 +21,7 @@ Alternatively, export the environment variables in your terminal:
`export LANGSMITH_TRACING=true && export LANGSMITH_ENDPOINT="https://api.smith.langchain.com/" && export LANGSMITH_API_KEY="LANGCHAIN_API_KEY" && export LANGSMITH_PROJECT="LANGSMITH_PROJECT_NAME"`
3. Restart Langflow using `langflow run --env-file .env`
-4. Run a project in Langflow.
+4. Run a flow in Langflow.
5. View the LangSmith dashboard for monitoring and observability.

\ No newline at end of file
diff --git a/docs/docs/Integrations/integrations-langwatch.mdx b/docs/docs/Integrations/integrations-langwatch.mdx
index 88ad4bca4..f08469905 100644
--- a/docs/docs/Integrations/integrations-langwatch.mdx
+++ b/docs/docs/Integrations/integrations-langwatch.mdx
@@ -39,5 +39,5 @@ To integrate with Langflow, add your LangWatch API key as a Langflow environment
In your flows, you can use the **LangWatch Evaluator** component to use LangWatch's evaluation endpoints to assess a model's performance.
-This component is available in the LangWatch bundle in the **Components** menu.
+This component is available in the **LangWatch** bundle in the **Components** menu.
For more information, see [Bundles](/components-bundle-components).
\ No newline at end of file
diff --git a/docs/docs/Integrations/mcp-component-astra.mdx b/docs/docs/Integrations/mcp-component-astra.mdx
index 85fa975d7..f1b850c4a 100644
--- a/docs/docs/Integrations/mcp-component-astra.mdx
+++ b/docs/docs/Integrations/mcp-component-astra.mdx
@@ -7,7 +7,7 @@ import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Icon from "@site/src/components/icon";
-This guide demonstrates how to [use Langflow as an MCP client](/mcp-client) by using the **MCP Tools** component to run a [DataStax Astra DB MCP server](https://github.com/datastax/astra-db-mcp) in an agentic flow.
+This guide demonstrates how to [use Langflow as an MCP client](/mcp-client) by using the **MCP Tools** component to run a [DataStax Astra DB MCP server](https://github.com/datastax/astra-db-mcp) in an agent flow.
1. Install an LTS release of [Node.js](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).
@@ -52,7 +52,7 @@ This guide demonstrates how to [use Langflow as an MCP client](/mcp-client) by u
ASTRA_DB_API_ENDPOINT=https://...-us-east-2.apps.astra.datastax.com
```
-8. In the **Agent** component, add your **OpenAI API key**.
+8. In the **Agent** component, add your OpenAI API key.
The default model is an OpenAI model.
If you want to use a different model, edit the **Model Provider**, **Model Name**, and **API Key** fields accordingly.
diff --git a/docs/docs/Support/release-notes.mdx b/docs/docs/Support/release-notes.mdx
index d62552d23..a23ffc08a 100644
--- a/docs/docs/Support/release-notes.mdx
+++ b/docs/docs/Support/release-notes.mdx
@@ -3,7 +3,6 @@ title: Langflow release notes
slug: /release-notes
---
-
This page summarizes significant changes to Langflow in each release.
For all changes, see the [Changelog](https://github.com/langflow-ai/langflow/releases/latest).
@@ -28,13 +27,13 @@ To avoid the impact of potential breaking changes and test new versions, the Lan
-H "x-api-key: $LANGFLOW_API_KEY"
```
- To export flows from the Langflow UI, see [Import and export flows](/concepts-flows-import).
+ To export flows from the visual editor, see [Import and export flows](/concepts-flows-import).
2. Install the new version:
* **Langflow OSS Python package**: Install the new version in a new virtual environment. For instructions, see [Install and run the Langflow OSS Python package](/get-started-installation#install-and-run-the-langflow-oss-python-package).
* **Langflow Docker image**: Run the new image in a separate container.
- * **Langflow Desktop**: To upgrade in place, open Langflow Desktop, and then click **Upgrade Available** in the header. If you want to isolate the new version, you must install Langflow Desktop on a separate physical or virtual machine, and then [import your flows](/concepts-flows-import) to the new installation.
+ * **Langflow Desktop**: To upgrade in place, open Langflow Desktop, and then click **Upgrade Available** in the Langflow header. If you want to isolate the new version, you must install Langflow Desktop on a separate physical or virtual machine, and then [import your flows](/concepts-flows-import) to the new installation.
3. [Import your flows](/concepts-flows-import) to test them in the new version, [upgrading components](/concepts-components#component-versions) as needed.
@@ -46,7 +45,8 @@ To avoid the impact of potential breaking changes and test new versions, the Lan
## 1.5.0
-The following updates are included in this version:
+Highlights of this release include the following changes.
+For all changes, see the [Changelog](https://github.com/langflow-ai/langflow/releases).
### New features and enhancements
@@ -58,10 +58,10 @@ The following updates are included in this version:
- Centralized **Language Model** and **Embedding Model** components
- The [**Language Model**](/components-models) and [**Embedding Model**](/components-embedding-models) components are now core components for your LLM and embeddings flows. They support multiple models and model providers, and allow you to experiment with different models without swapping out single-provider components.
+ The [**Language Model** component](/components-models) and [**Embedding Model** component](/components-embedding-models) are now core components for your LLM and embeddings flows. They support multiple models and model providers, and allow you to experiment with different models without swapping out single-provider components.
Find them in the **Components** menu in the **Models** category.
- The single-provider components are still available for your flows in the **Components** menu in the [**Bundles**](/components-bundle-components) section.
+ The single-provider components are still available for your flows in the **Components** menu in the [**Bundles**](/components-bundle-components) section, and you can connect them to the **Language Model** and **Embedding Model** components with the **Custom** provider option.
- MCP server one-click installation
@@ -78,58 +78,66 @@ The following updates are included in this version:
The **Input schema** pane replaces the need to manage tweak values in the **API access** pane. When you enable a parameter in the **Input schema** pane, the parameter is automatically added to your flow's code snippets, providing ready-to-use templates for making requests in your preferred programming language.
-- Tool components are redistributed
+- **Tools** components are redistributed
- All components in the **Tools** category were moved to other core component categories, [bundles](/components-bundle-components), or marked as legacy.
+ All components in the **Tools** category were moved to other component categories, such as **Helpers** and [bundles](/components-bundle-components), or marked as legacy.
- The [**MCP Tools** component](/mcp-client) is now under the **Agents** core components category.
+ The [**MCP Tools** component](/mcp-client) is now under the **Agents** category.
Tools that performed the same function were combined into single components that support multiple providers, such as the [**Web Search** component](/components-data#web-search) and the [**News Search** component](/components-data#news-search).
- For more information, see the [Tools components](/components-tools) page.
+ For more information, see [**Tools** components](/components-tools).
- Stability improvements
General stability improvements and bug fixes for enhanced reliability.
See an issue? [Raise it on GitHub](https://github.com/langflow-ai/langflow/issues).
-### New integrations and bundles
+- New integrations and bundles
-- [Cleanlab integration](/integrations-cleanlab)
+ - [**Cleanlab** bundle](/integrations-cleanlab)
## 1.4.2
-The following updates are included in this version:
+Highlights of this release include the following changes.
+For all changes, see the [Changelog](https://github.com/langflow-ai/langflow/releases).
### New features and enhancements
+
- Enhanced file and flow management system with improved bulk capabilities.
+- Added the **BigQuery** component.
+- Added the **Twelve Labs** bundle.
+- Added the **NVIDIA System Assistant** component.
-### New integrations and bundles
-- BigQuery component for connecting to BQ datasets.
-- Twelve Labs integration bundle.
-- NVIDIA system assistant component.
+### Deprecations
-### Deprecated features
-
-- Deprecated the Combine text component.
+- Deprecated the **Combine Text** component.
## 1.4.1
-The following updates are included in this version:
+Highlights of this release include the following changes.
+For all changes, see the [Changelog](https://github.com/langflow-ai/langflow/releases).
### New features and enhancements
-- Added an enhanced "Breaking Changes" feature to help update components during version updates without breaking flows.
+- Added an enhanced **Breaking Changes** feature to help update components without breaking flows after updating Langflow.
## 1.4.0
-The following updates are included in this version:
+Highlights of this release include the following changes.
+For all changes, see the [Changelog](https://github.com/langflow-ai/langflow/releases).
### New features and enhancements
- Introduced MCP server functionality to serve Langflow tools to MCP-compatible clients.
-- Renamed "Folders" to "Projects". The `/folders` endpoints now redirect to `/projects`.
+- Renamed **Folders** to **Projects** in the visual editor.
+- The `/folders` endpoints now redirect to `/projects`.
-### Deprecated features
+### Deprecations
-- Deprecated the Google Gmail, Drive, and Search components.
\ No newline at end of file
+- Deprecated the **Gmail**, **Google Drive**, and **Google Search** components.
+  For alternatives, see the [**Google** bundle](/bundles-google).
+
+## Earlier releases
+
+See the [Changelog](https://github.com/langflow-ai/langflow/releases).
\ No newline at end of file
diff --git a/docs/docs/Support/troubleshooting.mdx b/docs/docs/Support/troubleshooting.mdx
index 60ea7fc87..685e5e096 100644
--- a/docs/docs/Support/troubleshooting.mdx
+++ b/docs/docs/Support/troubleshooting.mdx
@@ -14,7 +14,7 @@ As Langflow development continues, components are often recategorized or depreca
If a component appears to be missing from the expected location on the **Components** menu, try the following:
-* Search for the component or check other component categories, including [**Bundles**](/components-bundle-components).
+* Search for the component or check other component categories, including [bundles](/components-bundle-components).
* [Expose legacy components](/concepts-components#component-menus), which are hidden by default.
* Check the [Changelog](https://github.com/langflow-ai/langflow/releases/latest) for component changes in recent releases.
* Make sure the component isn't already present in your flow if it is a single-use component.
@@ -25,15 +25,15 @@ If you still cannot locate the component, see [Langflow GitHub Issues and Discus
If there is no message input field in the **Playground**, make sure your flow has a [**Chat Input** component](/components-io#chat-io) that is connected, directly or indirectly, to the **Input** port of a **Language Model** or **Agent** component.
-Because the **Playground** is designed for flows that use an LLM in a query-and-response format, such as chatbots and agents, a flow must have **Chat Input**, **Language Model**/**Agent**, and **Chat Output** components to be fully supported by the **Playground**'s chat interface.
+Because the **Playground** is designed for flows that use an LLM in a query-and-response format, such as chatbots and agents, a flow must have **Chat Input**, **Language Model**/**Agent**, and **Chat Output** components to be fully supported by the **Playground** chat interface.
-For more information, see [Test flows in the **Playground**](/concepts-playground).
+For more information, see [Test flows in the Playground](/concepts-playground).
## Missing key, no key found, or invalid API key
If you get an API key error when running a flow, try the following:
-* For all components that require credentials, make sure those components have a valid credential in the component's settings, such as the **API key** field.
+* For all components that require credentials, make sure those components have a valid credential in the component's settings, such as the **API Key** field.
* If you store your credentials in [Langflow global variables](/configuration-global-variables), make sure you selected the correct global variable and that the variable contains a valid credential.
* Make sure the provided credentials are active, have the required permissions, and, if applicable, have sufficient funds in the account to execute the required action. For example, model providers require credits to use their LLMs.
@@ -121,7 +121,7 @@ There are two possible reasons for this error:
### Environment variables not available from terminal
-Environment variables set in your terminal are not automatically available to GUI-based applications like Langflow Desktop when launched through the Finder or the Start Menu.
+Environment variables set in your terminal aren't automatically available to GUI-based applications like Langflow Desktop when launched through the Finder or the Start Menu.
To set environment variables for Langflow Desktop, see [Set environment variables for Langflow Desktop](/environment-variables#set-environment-variables-for-langflow-desktop).
### Package is not installed
diff --git a/docs/docs/Tutorials/agent.mdx b/docs/docs/Tutorials/agent.mdx
index bb1e30620..5a20e9dde 100644
--- a/docs/docs/Tutorials/agent.mdx
+++ b/docs/docs/Tutorials/agent.mdx
@@ -7,9 +7,9 @@ import Icon from "@site/src/components/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-This tutorial shows you how to connect a JavaScript application to a Langflow [agent](/agents).
+This tutorial shows you how to connect a JavaScript application to a [Langflow agent](/agents).
-With the agent connected, your application can use any connected tools to retrieve more contextual and timely data without changing any application code. The tools are selected by the agent's internal LLM to solve problems and answer questions.
+With an agent, your application can use any connected tools to retrieve more contextual and timely data without changing any application code. The tools are selected by the agent's internal LLM to solve problems and answer questions.
## Prerequisites
@@ -20,22 +20,23 @@ With the agent connected, your application can use any connected tools to retrie
This tutorial uses an OpenAI LLM. If you want to use a different provider, you need a valid credential for that provider.
-## Create an agentic flow
+## Create an agent flow
-The following steps modify the **Simple Agent** template to connect [**Directory**](/components-data#directory) and [**Web search**](/components-data#web-search) components as tools for an **Agent** component.
-The **Directory** component loads all files of a given type from a target directory on your local machine, and the **Web search** component performs a DuckDuckGo search.
+The following steps modify the **Simple Agent** template to connect a [**Directory** component](/components-data#directory) and a [**Web Search** component](/components-data#web-search) as tools for an **Agent** component.
+The **Directory** component loads all files of a given type from a target directory on your local machine, and the **Web Search** component performs a DuckDuckGo search.
When connected to an **Agent** component as tools, the agent has the option to use these components when handling requests.
1. In Langflow, click **New Flow**, and then select the **Simple Agent** template.
-2. Remove the **URL** and **Calculator** tools, and then drag the [**Directory**](/components-data#directory) and [**Web search**](/components-data#web-search) components into your workspace.
+2. Remove the **URL** and **Calculator** tools, and then add **Directory** and **Web Search** components to your flow.
3. In the **Directory** component's **Path** field, enter the directory path and file types that you want to make available to the **Agent** component.
- In this tutorial, the agent needs access to a record of customer purchases, so the directory name is `customer_orders` and the file type is `.csv`. Later in this tutorial, the agent will be prompted to find `email` values in the customer data.
+ In this tutorial, the agent needs access to a record of customer purchases, so the directory name is `customer_orders` and the file type is `.csv`. Later in this tutorial, the agent will be prompted to find `email` values in the customer data.
You can adapt the tutorial to suit your data, or, to follow along with the tutorial, you can download [`customer-orders.csv`](/files/customer_orders/customer_orders.csv) and save it in a `customer_orders` folder on your local machine.
-4. In the [component header menu](/concepts-components#component-menus) for the **Directory** and **Web search** components, enable **Tool Mode** so you can use the components with an agent.
-5. For both tool components, connect the **Toolset** port to the Agent component's **Tools** port.
+4. In the **Directory** and **Web Search** [components' header menus](/concepts-components#component-menus), enable **Tool Mode** so you can use the components with an agent.
+
+5. Connect the **Directory** and **Web Search** components' **Toolset** ports to the **Agent** component's **Tools** port.
6. In the **Agent** component, enter your OpenAI API key.
If you want to use a different provider or model, edit the **Model Provider**, **Model Name**, and **API Key** fields accordingly.
@@ -45,21 +46,21 @@ When connected to an **Agent** component as tools, the agent has the option to u
Given the example prompt, the LLM would respond with recommendations and web links for items based on previous orders in `customer_orders.csv`.
The **Playground** prints the agent's chain of thought as it selects tools to use and interacts with functionality provided by those tools.
- For example, the agent can use the **Directory** component's `as_dataframe` tool to retrieve a [DataFrame](/data-types#dataframe), and the **Web search** component's `perform_search` tool to find links to related items.
+ For example, the agent can use the **Directory** component's `as_dataframe` tool to retrieve a [DataFrame](/data-types#dataframe), and the **Web Search** component's `perform_search` tool to find links to related items.
## Add a Prompt Template component to the flow
-In this example, the application sends a customer's email address to the Langflow agent. The agent compares the customer's previous orders within the Directory component, searches the web for used versions of those items, and returns three results.
+In this example, the application sends a customer's email address to the Langflow agent. The agent compares the customer's previous orders within the **Directory** component, searches the web for used versions of those items, and returns three results.
-1. To include the email address as a value in your flow, add a [**Prompt Template**](/components-prompts) component to your flow between the **Chat Input** and **Agent**.
+1. To include the email address as a value in your flow, add a [**Prompt Template** component](/components-prompts) to your flow between the **Chat Input** and **Agent** components.
2. In the **Prompt Template** component's **Template** field, enter `Recommend 3 used items for {email}, based on previous orders.`
Adding the `{email}` value in curly braces creates a new input in the **Prompt Template** component, and the component connected to the `{email}` port is supplying the value for that variable.
This creates a point for the user's email to enter the flow from your request.
If you aren't using the `customer_orders.csv` example file, modify the input to search for a value in your dataset.
- At this point your flow has six components. The **Chat Input** is connected to the **Prompt Template** component's **email** port. Then, the **Prompt Template** output is connected to the **Agent** component's **System Message** port. The **Directory** and **Web Search** components are connected to the **Agent** component's **Tools** port. Finally, the **Agent** component's output is connected to the **Chat Output** component, which returns the final response to the application.
+ At this point your flow has six components. The **Chat Input** component is connected to the **Prompt Template** component's **email** input port. Then, the **Prompt Template** component's output is connected to the **Agent** component's **System Message** input port. The **Directory** and **Web Search** components are connected to the **Agent** component's **Tools** port. Finally, the **Agent** component's output is connected to the **Chat Output** component, which returns the final response to the application.
- 
+ 
## Send requests to your flow from a JavaScript application
diff --git a/docs/docs/Tutorials/chat-with-files.mdx b/docs/docs/Tutorials/chat-with-files.mdx
index a50bf1edb..b9bb2cccb 100644
--- a/docs/docs/Tutorials/chat-with-files.mdx
+++ b/docs/docs/Tutorials/chat-with-files.mdx
@@ -53,13 +53,12 @@ To do this, edit the **Template** field, and then replace the default prompt wit
5. Add a [**File** component](/components-data#file) to the flow, and then connect the **Raw Content** output port to the **Prompt Template** component's **file** input port.
To connect ports, click and drag from one port to the other.
- You can add files directly to the file component to pre-load input before running the flow, or you can load files at runtime. The next section of this tutorial covers runtime file uploads.
+ You can add files directly to the **File** component to pre-load input before running the flow, or you can load files at runtime. The next section of this tutorial covers runtime file uploads.
- At this point your flow has five components. The Chat Input and File components are connected to the **Prompt Template** component's input ports. Then, the **Prompt Template** component's output port is connected to the Language Model component's input port. Finally, the Language Model component's output port is connected to the Chat Output component, which returns the final response to the user.
+ At this point, your flow has five components. The **Chat Input** and **File** components are connected to the **Prompt Template** component's input ports. Then, the **Prompt Template** component's output port is connected to the **Language Model** component's input port. Finally, the **Language Model** component's output port is connected to the **Chat Output** component, which returns the final response to the user.

-
## Send requests to your flow from a Python application
This section of the tutorial demonstrates how you can send file input to a flow from an application.
@@ -78,7 +77,7 @@ For help with constructing file upload requests in Python, JavaScript, and curl,
* `LANGFLOW_SERVER_ADDRESS`: Your Langflow server's domain. The default value is `127.0.0.1:7860`. You can get this value from the code snippets on your flow's [**API access** pane](/concepts-publish#api-access).
* `FLOW_ID`: Your flow's UUID or custom endpoint name. You can get this value from the code snippets on your flow's [**API access** pane](/concepts-publish#api-access).
- * `FILE_COMPONENT_ID`: The UUID of the File component in your flow, such as `File-KZP68`. To find the component ID, open your flow in Langflow, click the File component, and then click **Controls**.
+ * `FILE_COMPONENT_ID`: The UUID of the **File** component in your flow, such as `File-KZP68`. To find the component ID, open your flow in Langflow, click the **File** component, and then click **Controls**. The component ID is at the top of the **Controls** pane.
* `CHAT_INPUT`: The message you want to send to the Chat Input of your flow, such as `Evaluate this resume for a job opening in my Marketing department.`
* `FILE_NAME` and `FILE_PATH`: The name and path to the local file that you want to send to your flow.
* `LANGFLOW_API_KEY`: A valid [Langflow API key](/api-keys-and-authentication).
@@ -145,7 +144,7 @@ For help with constructing file upload requests in Python, JavaScript, and curl,
The first request uploads a file, such as `fake-resume.txt`, to your Langflow server at the `/v2/files` endpoint. This request returns a file path that can be referenced in subsequent Langflow requests, such as `02791d46-812f-4988-ab1c-7c430214f8d5/fake-resume.txt`
The second request sends a chat message to the Langflow flow at the `/v1/run/` endpoint.
- The `tweaks` parameter includes the path to the uploaded file as the variable `uploaded_path`, and sends this file directly to the File component.
+ The `tweaks` parameter includes the path to the uploaded file as the variable `uploaded_path`, and sends this file directly to the **File** component.
3. Save and run the script to send the requests and test the flow.
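The two-request pattern described above can be sketched in Python. This is a minimal, stdlib-only sketch that only builds the second request's URL and JSON body; the server address, flow ID, the `File-KZP68` component ID, and the `path` tweak field are placeholder assumptions based on the surrounding text, and the actual send is left to your preferred HTTP client:

```python
# Sketch of the second request: run the flow at /v1/run/, passing the file
# path returned by the earlier /v2/files upload to the File component via
# the tweaks parameter. All IDs below are placeholders, not working values.

LANGFLOW_SERVER_ADDRESS = "127.0.0.1:7860"  # default local server
FLOW_ID = "my-flow-id"                      # placeholder flow ID
FILE_COMPONENT_ID = "File-KZP68"            # example component ID from this tutorial

def build_run_request(uploaded_path: str, chat_input: str) -> tuple[str, dict]:
    """Build the URL and JSON body for the /v1/run/ request."""
    url = f"http://{LANGFLOW_SERVER_ADDRESS}/api/v1/run/{FLOW_ID}"
    payload = {
        "input_value": chat_input,
        "output_type": "chat",
        "input_type": "chat",
        # Send the uploaded file path directly to the File component.
        "tweaks": {
            FILE_COMPONENT_ID: {"path": uploaded_path},
        },
    }
    return url, payload

url, payload = build_run_request(
    "02791d46-812f-4988-ab1c-7c430214f8d5/fake-resume.txt",
    "Evaluate this resume for a job opening in my Marketing department.",
)
```

You would then POST `payload` to `url` with a client such as `requests`, including your Langflow API key in the request headers.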
@@ -193,8 +192,8 @@ To continue building on this tutorial, try these next steps.
To process multiple files in a single flow run, add a separate **File** component for each file you want to ingest. Then, modify your script to upload each file, retrieve each returned file path, and then pass a unique file path to each **File** component ID.
-For example, you can modify `tweaks` to accept multiple file components.
-The following code is just an example; it is not working code:
+For example, you can modify `tweaks` to accept multiple **File** components.
+The following code is just an example; it isn't working code:
```python
## set multiple file paths
@@ -216,14 +215,14 @@ You can also use a [**Directory** component](/components-data#directory) to load
### Upload external files at runtime
-To upload files from another machine that is not your local environment, your Langflow server must first be accessible over the internet. Then, authenticated users can upload files to your public Langflow server's `/v2/files/` endpoint, as shown in the tutorial. For more information, see [Langflow deployment overview](/deployment-overview).
+To upload files from another machine that isn't your local environment, your Langflow server must first be accessible over the internet. Then, authenticated users can upload files to your public Langflow server's `/v2/files/` endpoint, as shown in the tutorial. For more information, see [Langflow deployment overview](/deployment-overview).
### Preload files outside the chat session
You can use the **File** component to load files anywhere in a flow, not just in a chat session.
-In the visual editor, you can preload files to the file component by selecting them from your local machine or [Langflow file management](/concepts-file-management).
+In the visual editor, you can preload files to the **File** component by selecting them from your local machine or [Langflow file management](/concepts-file-management).
For example, you can preload an instructions file for a prompt template, or you can preload a vector store with documents that you want to query in a Retrieval Augmented Generation (RAG) flow.
-For more information about the **File** component and other data loading components, see [Data components](/components-data).
\ No newline at end of file
+For more information about the **File** component and other data loading components, see [**Data** components](/components-data).
\ No newline at end of file
diff --git a/docs/docs/Tutorials/chat-with-rag.mdx b/docs/docs/Tutorials/chat-with-rag.mdx
index dec329a08..de1368581 100644
--- a/docs/docs/Tutorials/chat-with-rag.mdx
+++ b/docs/docs/Tutorials/chat-with-rag.mdx
@@ -26,8 +26,8 @@ This tutorial demonstrates how you can use Langflow to create a chatbot applicat
This template has two flows.
- The **Load Data Flow** at the bottom of the workspace populates a vector store with data from a file.
- This data is used to respond to queries submitted to the **Retriever Flow**, which is at the top of the workspace.
+ The **Load Data Flow** populates a vector store with data from a file.
+ This data is used to respond to queries submitted to the **Retriever Flow**.
Specifically, the **Load Data Flow** ingests data from a local file, splits the data into chunks, loads and indexes the data in your vector database, and then computes embeddings for the chunks. The embeddings are also stored with the loaded data. This flow only needs to run when you need to load data into your vector database.
@@ -37,7 +37,7 @@ This tutorial demonstrates how you can use Langflow to create a chatbot applicat
2. Add your **OpenAI** API key to both **OpenAI Embeddings** components.
-3. Optional: Replace both **Astra DB** vector store components with a **Chrome DB** or another [vector store component](/components-vector-stores) of your choice.
+3. Optional: Replace both **Astra DB** vector store components with a **Chroma DB** component or another [**Vector Store** component](/components-vector-stores) of your choice.
This tutorial uses Chroma DB.
The **Load Data Flow** should have **File**, **Split Text**, **Embedding Model**, vector store (such as **Chroma DB**), and **Chat Output** components:
@@ -53,19 +53,19 @@ This tutorial uses Chroma DB.
## Load data and generate embeddings
-To load data and generate embeddings, you can use the Langflow UI or the `/v2/files` endpoint.
+To load data and generate embeddings, you can use the visual editor or the `/v2/files` endpoint.
-The Langflow UI option is simpler, but it is only recommended for scenarios where the user who created the flow is the same user who loads data into the database.
+The visual editor option is simpler, but it is recommended only when the user who created the flow is also the user who loads data into the database.
In situations where many users load data or you need to load data programmatically, use the Langflow API option.
-
+
-1. In your RAG chatbot flow, click the **File component**, and then click **File**.
+1. In your RAG chatbot flow, click the **File** component, and then click **File**.
2. Select the local file you want to upload, and then click **Open**.
The file is loaded to your Langflow server.
-3. To load the data into your vector store, click the vector store component, and then click **Run component** to run the selected component and all prior dependent components.
+3. To load the data into your vector store, click the **Vector Store** component, and then click **Run component** to run the selected component and all prior dependent components.
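For the programmatic option mentioned above, the upload request can be sketched as follows. This is a non-authoritative sketch that only assembles the URL and headers; the `/api/v2/files` path is taken from the surrounding text, the `x-api-key` header name and the placeholder values are assumptions, and sending the multipart request is left to your HTTP client:

```python
# Sketch of the programmatic load option: POST a local file to the Langflow
# server's /v2/files endpoint, authenticated with a Langflow API key.
# The server address and API key below are placeholders.

LANGFLOW_SERVER_ADDRESS = "127.0.0.1:7860"
LANGFLOW_API_KEY = "your-api-key"  # placeholder; see Langflow API key docs

def build_upload_request(file_path: str) -> tuple[str, dict]:
    """Return the URL and headers for a file upload to /v2/files."""
    url = f"http://{LANGFLOW_SERVER_ADDRESS}/api/v2/files"
    headers = {"x-api-key": LANGFLOW_API_KEY}
    return url, headers

url, headers = build_upload_request("customer_orders.csv")
# With a client such as requests, the send would look like:
#   requests.post(url, headers=headers,
#                 files={"file": open("customer_orders.csv", "rb")})
```

The response's file path can then be passed to the flow run request, as shown earlier in the tutorial series, so that many users or scripts can load data without opening the visual editor.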
diff --git a/docs/docs/Tutorials/mcp-tutorial.mdx b/docs/docs/Tutorials/mcp-tutorial.mdx
index 0fc7cc20e..642e0fd74 100644
--- a/docs/docs/Tutorials/mcp-tutorial.mdx
+++ b/docs/docs/Tutorials/mcp-tutorial.mdx
@@ -7,7 +7,7 @@ import Icon from "@site/src/components/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-This tutorial shows you how to connect MCP tools to your applications using Langflow's [**MCP Tools**](/mcp-client) component.
+This tutorial shows you how to connect MCP servers to your applications using Langflow's [**MCP Tools** component](/mcp-client).
The [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) helps agents integrate with LLMs through _MCP clients_ and _MCP servers_.
Specifically, MCP servers host tools that agents (MCP clients) use to complete specialized tasks.
@@ -31,7 +31,7 @@ In this tutorial, you will use the Langflow **MCP Tools** component to connect m
This tutorial uses an OpenAI LLM. If you want to use a different provider, you need a valid credential for that provider.
-## Create an agentic flow
+## Create an agent flow
1. In Langflow, click **New Flow**, and then select the **Simple Agent** template.
@@ -79,7 +79,7 @@ You need one **MCP Tools** component for each MCP server that you want your flow
* Langflow Docker image: Install the server inside the Docker container.
* Langflow Desktop or system-wide Langflow OSS: Install the server globally or in the same user environment where you run Langflow.
-2. In your **Simple agent** flow, remove the **URL** and **Calculator** tools, and then add the [**MCP Tools**](/mcp-client) component to your workspace.
+2. In your **Simple Agent** flow, remove the **URL** and **Calculator** tools, and then add an [**MCP Tools**](/mcp-client) component.
3. Click the **MCP Tools** component, and then click **Add MCP Server**.
@@ -139,7 +139,7 @@ You need one **MCP Tools** component for each MCP server that you want your flow
* The **MCP Tools** component with the weather MCP server is connected to the **Agent** component's **Tools** port. The agent may not use this server for every request; the agent only uses this connection if it decides the server can help respond to the prompt.
* The **Agent** component's **Output** port is connected to the **Chat Output** component, which returns the final response to the user or application.
- 
+ 
7. To test the weather MCP server, click **Playground**, and then ask the LLM `Is it safe to go hiking in the Adirondacks today?`
@@ -198,7 +198,7 @@ To add the Toolkip MCP server to your flow, do the following:
Your flow now has an additional **MCP Tools** component for a total of five components.
- 
+ 
## Create a Python application that connects to Langflow