docs: Audit admonitions, related links, and prerequisites for style and consistency (#9217)
* add some related links
* admonitions audit
* initial prereq audit
* standardize install LF prereqs
* some coderabbit
Parent: cb34e4fe80
Commit: 575cde3e8d
47 changed files with 203 additions and 170 deletions
@@ -7,7 +7,7 @@ import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
 :::important
-The `/build` endpoints are used by Langflow's frontend Workspace and Playground code.
+The `/build` endpoints are used by Langflow's frontend **Workspace** and **Playground** code.
 These endpoints are part of the internal Langflow codebase.
 
 Don't use these endpoints to run flows in applications that use your Langflow flows.
@@ -233,8 +233,8 @@ You must include the ampersand (`@`) in the request to instruct curl to upload t
 ### Send files to your flows (v2)
 
 :::important
-The `/v2/files` endpoint does not support sending **image** files to flows.
-To send **image** files to your flows through the API, follow the procedure in [Upload image files (v1)](#upload-image-files-v1).
+The `/v2/files` endpoint can't send image files to flows.
+To send image files to your flows through the API, see [Upload image files (v1)](#upload-image-files-v1).
 :::
 
 Send a file to your flow for analysis using the [File](/components-data#file) component and the API.
@@ -178,7 +178,8 @@ curl -X POST \
 Use the `/webhook` endpoint to start a flow by sending an HTTP `POST` request.
 
 :::tip
-After you add a **Webhook** component to a flow, open the [**API access** pane](/concepts-publish), and then click the **Webhook cURL** tab to get an automatically generated `POST /webhook` request for your flow.
+After you add a [**Webhook** component](/components-data#webhook) to a flow, open the [**API access** pane](/concepts-publish), and then click the **Webhook cURL** tab to get an automatically generated `POST /webhook` request for your flow.
+For more information, see [Trigger flows with webhooks](/webhook).
 :::
 
 ```bash
@@ -201,8 +202,6 @@ curl -X POST \
 
 </details>
 
-For more information, see [Webhook component](/components-data#webhook) and [Trigger flows with webhooks](/webhook).
-
 ## Deprecated flow trigger endpoints
 
 The following endpoints are deprecated and replaced by the `/run` endpoint:
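As a sketch of the `POST /webhook` call documented in the hunks above: the host, flow ID, and API key below are placeholders, and the request is only built with the standard library, not sent.

```python
import json
import urllib.request

# Placeholder values: substitute your server address, flow ID, and Langflow API key.
url = "http://localhost:7860/api/v1/webhook/FLOW_ID"
payload = {"any": "data"}  # the webhook accepts an arbitrary JSON body

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "x-api-key": "LANGFLOW_API_KEY"},
    method="POST",
)

# urllib.request.urlopen(request) would fire the webhook against a running server.
print(request.get_method(), request.full_url)
```

The automatically generated **Webhook cURL** snippet mentioned in the tip above is the authoritative version of this request for your specific flow.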
@@ -175,4 +175,11 @@ Your flow should be visible in the response as a tool.
 The connected flow returns an answer based on your question.
 For example, a Basic Prompting flow connected as a tool returns a different result depending upon its LLM and prompt instructions.
 
-
+
+
+## See also
+
+* [**Agent** and **MCP Tools** components](/components-agents)
+* [Use Langflow agents](/agents)
+* [Use Langflow as an MCP client](/mcp-client)
+* [Use Langflow as an MCP server](/mcp-server)
@@ -96,7 +96,7 @@ You can configure the **Agent** component to use your preferred provider and mod
 
 :::tip
 Many optional **Agent** component input parameters are hidden by default in the visual editor.
-You can view and toggle all parameters through the <Icon name="SlidersHorizontal" aria-hidden="true"/> **Controls** in the [component's header menu](/concepts-components#component-menus).
+You can access all component parameters through the <Icon name="SlidersHorizontal" aria-hidden="true"/> **Controls** in the [component's header menu](/concepts-components#component-menus).
 :::
 
 ### Provider and model
@@ -154,9 +154,6 @@ For more information, see [Store chat memory](/memory#store-chat-memory) and [**
 
 ### Additional parameters
 
-Many optional **Agent** component input parameters are hidden by default in the visual editor.
-You can view and toggle all parameters through the <Icon name="SlidersHorizontal" aria-hidden="true"/> **Controls** in the [component's header menu](/concepts-components#component-menus).
-
 With the **Agent** component, the available parameters can change depending on the selected provider and model.
 For example, some models support additional modes, arguments, or features like chat memory and temperature.
 
@@ -166,9 +163,16 @@ Some additional input parameters include the following:
 * **Handle Parse Errors** (`handle_parsing_errors`): When enabled (`true`), this setting allows the agent to fix errors, like typos, when analyzing user input.
 * **Verbose** (`verbose`): When enabled (`true`), this setting records detailed logging output for debugging and analysis.
 
+To view and configure all parameters, click <Icon name="SlidersHorizontal" aria-hidden="true"/> **Controls** in the [component's header menu](/concepts-components#component-menus).
+
 ## Agent component output
 
 The **Agent** component outputs a **Response** (`response`) that is [`Message` data](/data-types#message) containing the agent's raw response to the query.
 
 Typically, this is passed to a **Chat Output** component to return the response in a human-readable format.
 It can also be passed to other components if you need to process the response further before, or in addition to, returning it to the user.
+
+## See also
+
+* [**Agent** and **MCP Tools** components](/components-agents)
+* [Configure tools for agents](/agents-tools)
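Because the **Response** output described above is `Message` data, a caller that runs such a flow over the API typically digs the text out of the `/run` response JSON. The nesting below is a sketch based on a commonly seen response shape, with a stubbed result standing in for a live call; treat the exact field names as assumptions to verify against your own response.

```python
# Stubbed /run response, trimmed to the fields this sketch reads; a real
# response carries additional metadata at every level.
run_response = {
    "outputs": [
        {
            "outputs": [
                {"results": {"message": {"text": "The agent's raw response"}}}
            ]
        }
    ]
}

# Dig the Message text out of the nested structure.
text = run_response["outputs"][0]["outputs"][0]["results"]["message"]["text"]
print(text)
```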
@@ -21,6 +21,7 @@ The **Astra DB Chat Memory** component isn't recommended for most memory storage
 
+However, Langflow's **Agent** and **Language Model** components include built-in chat memory that is enabled by default.
 Your flows don't need an external database to store chat memory.
 For more information, see [Memory management options](/memory).
 :::
 
 For more information about using external chat memory in flows, see the [**Message History** component](/components-helpers#message-history).
@@ -5,7 +5,7 @@ slug: /components-data
 
 import Icon from "@site/src/components/icon";
 
-You can use Langflow's data components to bring data into your flows from various sources like files, API endpoints, and URLs.
+You can use Langflow's **Data** components to bring data into your flows from various sources like files, API endpoints, and URLs.
 For example:
 
 * **Load files**: Import data from a file or directory with the [**File**](#file) and [**Directory**](#directory) components.
@@ -22,7 +22,7 @@ Additionally, some components return raw data, whereas others can convert, restr
 This means that some similar components might produce different results.
 
 :::tip
-Data components pair well with [processing components](/components-processing) that can perform additional parsing, transformation, and validation after retrieving the data.
+**Data** components pair well with [**Processing** components](/components-processing) that can perform additional parsing, transformation, and validation after retrieving the data.
 
 This can include basic operations, like saving a file in a specific format, or more complex tasks, like using a **Text Splitter** component to break down a large document into smaller chunks before generating embeddings for vector search.
 :::
@@ -531,11 +531,9 @@ For `Message` inputs, the component can create:
 
 </details>
 
-## Smart function
+## Smart Function
 
-:::tip
-Prior to Langflow 1.5, this component was named the Lambda filter.
-:::
+In Langflow version 1.5, this component was renamed from **Lambda Filter** to **Smart Function**.
 
 This component uses an LLM to generate a function for filtering or transforming structured data.
 
@@ -14,7 +14,7 @@ Other types of storage, like traditional structured databases and chat memory, a
 ## Use a vector store component in a flow
 
 :::tip
-For examples of vector store components in flows, see [Create a vector RAG chatbot](/chat-with-rag) and [Embedding Model components](/components-embedding-models).
+For examples of vector store components in flows, see [Create a vector RAG chatbot](/chat-with-rag) and [**Embedding Model** components](/components-embedding-models).
 :::
 
 This example uses the **Chroma DB** vector store component. Your vector store component's parameters and authentication may be different, but the document ingestion workflow is the same. A document is loaded from a local machine and chunked. The vector store component generates embeddings with the connected [model](/components-models) component, and stores them in the connected vector database.
@@ -24,7 +24,7 @@ When exporting from the Langflow UI, you can select **Save with my API keys** to
 Non-API key variables are included in the export regardless of the **Save with my API keys** setting.
 
 :::warning
-If you directly entered the key value into a component's API key field, then **Save with my API keys** exports the literal key value.
+If you enter the literal key into a component's **API key** field, then **Save with my API keys** exports the literal key value.
 
 If your key is stored in a Langflow global variable, **Save with my API keys** exports only the variable name.
 :::
@@ -19,7 +19,7 @@ Voice mode requires the following:
 
 * An [OpenAI](https://platform.openai.com/) account and an OpenAI API key because Langflow uses the OpenAI API to process voice input and generate responses.
 
-* Optional: An [ElevenLabs](https://elevenlabs.io) API key to enable voice options for the LLM's response.
+* Optional: An [ElevenLabs](https://elevenlabs.io) API key to enable more voice options for the LLM's response.
 
 * A microphone and speakers.
 
@@ -85,4 +85,8 @@ Both endpoints accept an optional `/$SESSION_ID` path parameter to provide a uni
 If omitted, Langflow uses the flow ID as the [session ID](/session-id).
 
 However, be aware that voice mode only maintains context within the current conversation instance.
 When you close the **Playground** or end a chat, verbal chat history is discarded and not available for future chat sessions.
+
+## See also
+
+* [Test flows in the Playground](/concepts-playground)
@@ -19,7 +19,7 @@ As an MCP server, Langflow exposes your flows as [tools](https://modelcontextpro
 
 * A [Langflow project](/concepts-flows#projects) with at least one flow.
 
-* Any LTS version of [Node.js](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) installed on your computer to use MCP Inspector to [test and debug flows](#test-and-debug-flows).
+* Any LTS version of [Node.js](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) installed on your computer if you want to use MCP Inspector to [test and debug flows](#test-and-debug-flows).
 
 * [ngrok installed](https://ngrok.com/docs/getting-started/#1-install-ngrok) and an [ngrok authtoken](https://dashboard.ngrok.com/get-started/your-authtoken) if you want to [deploy a public Langflow server](/deployment-public-server).
 
@@ -38,7 +38,7 @@ The following steps explain how to limit the exposed flows and, optionally, rena
 1. From the Langflow dashboard, select the project that contains the flows you want to serve as tools, and then click the **MCP Server** tab.
 Alternatively, you can quickly access the **MCP Server** tab from within any flow by selecting **Share > MCP Server**.
 
-The **Auto install** and **JSON** tabs display options for connecting MCP clients to the the project's MCP server.
+The **Auto install** and **JSON** tabs display options for connecting MCP clients to the project's MCP server.
 
 The **Flows/Tools** section lists the flows that are currently being served as tools.
 
@@ -105,7 +105,7 @@ However, you can connect any [MCP-compatible client](https://modelcontextprotoco
 
 :::important
 Auto installation only works if your HTTP client and Langflow server are on the same local machine.
-In this is not the case, configure the client with the code in the **JSON** tab.
+If this is not the case, use the **JSON** option to configure the MCP server.
 :::
 
 1. Install [Cursor](https://docs.cursor.com/get-started/installation).
@@ -144,7 +144,7 @@ For example:
 If you have [deployed a public Langflow server](/deployment-public-server), the address is automatically included.
 
 :::important
-If your Langflow server [requires authentication](/configuration-authentication) ([`LANGFLOW_AUTO_LOGIN`](/environment-variables#LANGFLOW_AUTO_LOGIN) is set to `false`), you must include your Langflow API key in the configuration.
+If your Langflow server [requires authentication](/configuration-authentication) ([`LANGFLOW_AUTO_LOGIN=false`](/environment-variables#LANGFLOW_AUTO_LOGIN)), you must include your Langflow API key in the configuration.
 For more information, see [MCP server authentication and environment variables](#authentication).
 :::
 
@@ -56,13 +56,10 @@ uv run langflow api-key [OPTIONS]
 
 ### langflow copy-db
 
-Copy the database files to the current directory.
+Copy the Langflow database files, `langflow.db` and `langflow-pre.db` (if they exist), from the cache directory to the current directory.
 
 :::note
-The current directory is the directory containing `__main__.py`.
+Copy the database files to the current directory, which is the directory containing `__main__.py`.
 You can find this directory by running `which langflow`.
 :::
 
-Copy the Langflow database files, `langflow.db` and `langflow-pre.db` (if they exist), from the cache directory to the current directory.
-
 ```bash
 langflow copy-db
@@ -2,6 +2,7 @@
 title: Configure an external PostgreSQL database
+slug: /configuration-custom-database
 ---
 
 Langflow's default database is [SQLite](https://www.sqlite.org/docs.html), but you can configure Langflow to use PostgreSQL instead.
 
 This guide walks you through setting up an external database for Langflow by replacing the default SQLite connection string `sqlite:///./langflow.db` with PostgreSQL, both in local and containerized environments.
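The connection-string swap this page describes can be sketched as follows. All credentials, the host, and the database name are placeholders, and the snippet only sets and parses the URL rather than starting Langflow; `LANGFLOW_DATABASE_URL` is the environment variable this guide configures.

```python
import os
from urllib.parse import urlsplit

# Placeholder credentials; Langflow reads its database URL from the
# LANGFLOW_DATABASE_URL environment variable.
os.environ["LANGFLOW_DATABASE_URL"] = "postgresql://langflow:langflow@localhost:5432/langflow"

# Parse the URL to confirm its parts before handing it to Langflow.
parts = urlsplit(os.environ["LANGFLOW_DATABASE_URL"])
print(parts.scheme, parts.hostname, parts.port, parts.path)
```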
@@ -12,8 +13,7 @@ Langflow can more efficiently handle multiple users and larger workloads by usin
 
 ## Prerequisites
 
 - [Install and start Langflow](/get-started-installation)
-- Create a [PostgreSQL](https://www.pgadmin.org/download/) database
+- A [PostgreSQL](https://www.pgadmin.org/download/) database
 
 ## Connect Langflow to a local PostgreSQL database
 
@@ -157,4 +157,9 @@ Your container name may vary.
 6. Examine the query results for multiple connections with different `client_addr` values, for example `172.21.0.3` and `172.21.0.4`.
 Since each Langflow instance runs in its own container on the Docker network, using different incoming IP addresses confirms that both instances are actively connected to the PostgreSQL database.
 
 7. To quit psql, type `quit`.
+
+## See also
+
+* [Memory management options](/memory)
+* [Logs](/logging)
@@ -19,11 +19,7 @@ Install Langflow from source by forking the repository, and then set up your dev
 
 * [uv](https://docs.astral.sh/uv/getting-started/installation/) version 0.4 or later
 * [Node.js](https://nodejs.org/en/download/package-manager)
-* [Make](https://www.gnu.org/software/make/#documentation)
-
-:::tip Windows
-For Windows installations, you don't need need Make, and you can find [Windows scripts](https://github.com/langflow-ai/langflow/tree/main/scripts/windows) in the Langflow repository.
-:::
+* [Make](https://www.gnu.org/software/make/#documentation) (Linux and macOS only)
 
 ### Clone the Langflow repository
 
@@ -52,7 +52,7 @@ A single note usually suffices.
 
 ## Prerequisites
 
-* [OpenAI API Key](https://platform.openai.com/)
+* [OpenAI API key](https://platform.openai.com/api-keys)
 * [Tavily AI Search key](https://docs.tavily.com/welcome)
 * [Sambanova API key](https://sambanova.ai/)
 
@@ -8,13 +8,13 @@ import TabItem from '@theme/TabItem';
 
 The [Langflow Integrated Development Environment (IDE)](https://github.com/langflow-ai/langflow-helm-charts/tree/main/charts/langflow-ide) Helm chart is designed to provide a complete environment for developers to create, test, and debug their flows. It includes both the API and the UI.
 
-### Prerequisites
+## Prerequisites
 
 - A [Kubernetes](https://kubernetes.io/docs/setup/) cluster
 - [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl)
 - [Helm](https://helm.sh/docs/intro/install/)
 
-### Prepare a Kubernetes cluster
+## Prepare a Kubernetes cluster
 
 This example uses [Minikube](https://minikube.sigs.k8s.io/docs/start/), but you can use any Kubernetes cluster.
 
@@ -30,7 +30,7 @@ This example uses [Minikube](https://minikube.sigs.k8s.io/docs/start/), but you
 kubectl config use-context minikube
 ```
 
-### Install the Langflow IDE Helm chart
+## Install the Langflow IDE Helm chart
 
 1. Add the repository to Helm and update it.
 
@@ -51,7 +51,7 @@ This example uses [Minikube](https://minikube.sigs.k8s.io/docs/start/), but you
 kubectl get pods -n langflow
 ```
 
-### Access the Langflow IDE
+## Access the Langflow IDE
 
 Enable local port forwarding to access Langflow from your local machine.
 
@@ -69,7 +69,7 @@ Now you can do the following:
 - Access the Langflow API at `http://localhost:7860`.
 - Access the Langflow UI at `http://localhost:8080`.
 
-### Configure the Langflow version
+## Configure the Langflow version
 
 Langflow is deployed with the `latest` version by default.
 
@@ -85,7 +85,7 @@ langflow:
 tag: "1.0.0a59"
 ```
 
-### Configure external storage
+## Configure external storage
 
 By default, the chart deploys a SQLite database stored in a local persistent disk. If you want to use an external PostgreSQL database, you can configure it in two ways:
 
@@ -125,7 +125,7 @@ langflow:
 enabled: false
 ```
 
-### Configure scaling
+## Configure scaling
 
 Scale the number of replicas and resources for both frontend and backend services:
 
@@ -153,8 +153,8 @@ langflow:
 # memory: 512Mi
 ```
 
 :::note
 If your flow relies on a shared state, such as built-in chat memory, you need to set up a shared database when scaling horizontally.
 :::
 
-For more examples of `langflow-ide` deployment, see the [Langflow Helm Charts repository](https://github.com/langflow-ai/langflow-helm-charts/tree/main/examples/langflow-ide).
+## See also
+
+For more examples of `langflow-ide` deployment, see the [Langflow Helm Charts repository](https://github.com/langflow-ai/langflow-helm-charts/tree/main/examples/langflow-ide).
@@ -12,7 +12,7 @@ Langflow can be deployed in two distinct environments.
 * [**Langflow runtime**](/deployment-kubernetes-prod): The **Langflow runtime** is a headless or backend-only mode. The server exposes your flow as an endpoint, and runs only the processes necessary to serve your flow, with PostgreSQL as the database for improved scalability. Use the Langflow **runtime** to deploy your flows if you don't require the frontend for visual development. The Langflow runtime can be deployed on [Docker](/deployment-docker) or [Kubernetes](/deployment-kubernetes-prod).
 
 :::tip
-You can start Langflow in headless mode with the [LANGFLOW_BACKEND_ONLY](/environment-variables#LANGFLOW_BACKEND_ONLY) environment variable.
+You can start Langflow in headless mode with the [`LANGFLOW_BACKEND_ONLY`](/environment-variables#LANGFLOW_BACKEND_ONLY) environment variable.
 :::
 
 Deploying on Kubernetes offers the following advantages:
 
@@ -10,13 +10,11 @@ When your Langflow server is public, you can do things like [deploy your Langflo
 
 ## Prerequisites
 
-- [Install Langflow.](/get-started-installation)
+On the machine where you plan to host your Langflow installation, [install Langflow](/get-started-installation) and a reverse proxy or forwarding service.
 
-- Install a reverse proxy or forwarding service.
-This guide uses ngrok, but you can use any similar reverse proxy or forwarding platform.
+This guide uses ngrok, but you can use any similar reverse proxy or forwarding platform.
 
 If you want to follow along with this guide, [install ngrok](https://ngrok.com/docs/getting-started/#1-install-ngrok) and [create an ngrok authtoken](https://dashboard.ngrok.com/get-started/your-authtoken).
 
 ## Expose your Langflow server with ngrok
 
@@ -76,7 +74,7 @@ curl -X POST \
 ```
 
 :::tip
-For flows created on public Langflow servers, the code snippets generated in the [**API access** pane](/concepts-publish) automatically use your public server's domain.
+When you create flows on public Langflow servers, the code snippets generated in the [**API access** pane](/concepts-publish) automatically use your public server's domain.
 :::
 
 You also use your public domain when making Langflow API calls in scripts, including the code snippets that are automatically generated by Langflow.
@@ -97,7 +95,7 @@ For example, the following code snippet calls an ngrok domain to trigger the spe
 # Request headers
 headers = {
     "Content-Type": "application/json",
-    "x-api-key: LANGFLOW_API_KEY"
+    "x-api-key": "LANGFLOW_API_KEY"
 }
 
 try:
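The one-line change above is worth a note: in the old snippet, `"x-api-key: LANGFLOW_API_KEY"` was a lone string inside a dict literal, which Python rejects as a syntax error. The corrected pair is a normal header entry, with the key value as a placeholder:

```python
import json

# Corrected headers from the snippet above; LANGFLOW_API_KEY is a placeholder
# for a real Langflow API key.
headers = {
    "Content-Type": "application/json",
    "x-api-key": "LANGFLOW_API_KEY",
}

# The dict now serializes cleanly for logging or debugging.
print(json.dumps(headers, indent=2))
```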
@@ -5,19 +5,17 @@ slug: /deployment-render
 
 This guide explains how to deploy Langflow on [Render](https://render.com/), a cloud platform for deploying web applications and APIs.
 
-:::note
-Langflow requires at least 2 GB of RAM to run, so it uses a **standard** Render instance. This may require a credit card. Review [Render's pricing](https://render.com/pricing) before proceeding.
-:::
 1. Prepare a Render instance that can support Langflow.
+Langflow requires at least 2 GB of RAM to run, so you must use a **Standard** or better Render instance type.
+This requires a paid Render account.
+For more information, see [Render Web Services](https://render.com/docs/web-services) and [Render pricing](https://render.com/pricing).
 
-1. Click the following button to go to Render:
+2. Click the following button to go to Render:
 
 [](https://render.com/deploy?repo=https%3A%2F%2Fgithub.com%2Flangflow-ai%2Flangflow%2Ftree%2Fdev)
 
-2. Enter a blueprint name, and then select the branch for your `render.yaml` file.
-
-3. Click **Deploy Blueprint**.
-
-Wait for the deployment to complete.
-
-Your Langflow instance is now ready to use.
+3. Enter a blueprint name, and then select the branch for your `render.yaml` file.
 
+4. Click **Deploy Blueprint**, and then wait for the deployment to complete.
+Once complete, your Langflow instance is ready to use.
@@ -117,4 +117,10 @@ plotting = [
 ]
 ```
 
 The `make` commands add the dependency with `uv add` and update the `uv.lock` file in the appropriate location.
+
+## See also
+
+* [Containerize a Langflow application](/develop-application)
+* [Create custom Python components](/components-custom-components)
+* [**Docling** bundle](/integrations-docling)
@@ -92,4 +92,9 @@ For more information, see [View chat history](/concepts-playground#view-chat-his
 
 When debugging issues with the format or content of a flow's output, it can help to inspect each component's output to determine where data is being lost or malformed.
 
 To view the output produced by a single component during the most recent run, click <Icon name="TextSearch" aria-hidden="true"/> **Inspect output** in the visual editor.
+
+## See also
+
+* [Memory management options](/memory)
+* [Configure an external PostgreSQL database](/configuration-custom-database)
@@ -104,7 +104,7 @@ The following example flows show how to use the **CleanlabEvaluator** and **Clea
 ### Evaluate and remediate responses from an LLM
 
 :::tip
-You can [download the the Evaluate and Remediate flow](./eval_and_remediate_cleanlab.json), and then import it to your Langflow instance to follow along.
+You can [download the Evaluate and Remediate flow](./eval_and_remediate_cleanlab.json), and then [import the flow](/concepts-flows-import) to your Langflow instance to follow along.
 :::
 
 This flow evaluates and remediates the trustworthiness of a response from any LLM using the **CleanlabEvaluator** and **CleanlabRemediator** components.
@@ -15,11 +15,21 @@ Langflow integrates with [Docling](https://docling-project.github.io/docling/) t
 You must install the Docling dependency to use the Docling components in Langflow.
 :::
 
+<Tabs>
+<TabItem value="oss" label="Langflow OSS" default>
+
 Install the Docling extra in Langflow OSS with `uv pip install 'langflow[docling]'`.
 
+</TabItem>
+
+<TabItem value="desktop" label="Langflow Desktop">
+
 To add a dependency to Langflow Desktop, add an entry for Docling to the application's `requirements.txt` file.
 For more information, see [Install custom dependencies in Langflow Desktop](/install-custom-dependencies#langflow-desktop).
 
+</TabItem>
+</Tabs>
+
 ## Use Docling components in a flow
 
 :::tip
@@ -13,6 +13,7 @@ Langflow integrates with [Google BigQuery](https://cloud.google.com/bigquery) th
 * A [Google Cloud project](https://developers.google.com/workspace/guides/create-project) with the BigQuery API enabled
 * A [service account](https://developers.google.com/workspace/guides/create-credentials#service-account) with the **BigQuery Job User** role
+* A [BigQuery dataset and table](https://cloud.google.com/bigquery/docs/datasets-intro)
 * A [running Langflow server](/get-started-installation)
 
 ## Create a service account with BigQuery access
 
@@ -13,13 +13,10 @@ To use Notion components in Langflow, you first need to create a Notion integrat
 ## Create a Notion Integration
 
 1. Go to the [Notion Integrations](https://www.notion.com/my-integrations) page.
-2. Click on the "New integration" button.
-3. Give your integration a name and select the workspace where you want to use it.
-4. Click "Submit" to create the integration.
-
-:::info
-When creating the integration, make sure to enable the necessary capabilities based on your requirements. Refer to the [Notion Integration Capabilities](https://developers.notion.com/reference/capabilities) documentation for more information on each capability.
-:::
+2. Click **New Integration**.
+3. Enter an integration name, and then select the workspace where you want to use it.
+4. Optional: Enable the [necessary Notion integration capabilities](https://developers.notion.com/reference/capabilities) based on your requirements.
+5. Click **Submit** to create the integration.
 
 ## Configure Integration Capabilities
 
@@ -35,27 +32,26 @@ After creating the integration, you need to configure its capabilities to define
 
 ## Obtain Integration Token
 
-:::warning
-Your integration token is a sensitive piece of information. Make sure to keep it secure and never share it publicly. Store it safely in your Langflow configuration or environment variables.
-:::
-
 To authenticate your integration with Notion, you need to obtain an integration token.
 
 1. In the integration settings page, go to the "Secrets" tab.
 2. Copy the "Internal Integration Token" value. This token will be used to authenticate your integration with Notion.
 
+:::warning
+Your integration token is a sensitive piece of information. Make sure to keep it secure and never share it publicly. Store it safely in your Langflow configuration or environment variables.
+:::
+
 ## Grant Integration Access to Notion Databases
 
-For your integration to interact with Notion databases, you need to grant it access to the specific databases it will be working with.
+For your integration to interact with Notion databases, you need to grant it access to the specific databases it must work with:
 
 1. Open the Notion database that you want your integration to access.
-2. Click on the "Share" button in the top-right corner of the page.
-3. In the "Invite" section, select your integration from the list.
-4. Click "Invite" to grant the integration access to the database.
+2. Click **Share**.
+3. In the **Invite** section, select your integration from the list.
+4. Click **Invite** to grant the integration access to the database.
 
-:::info
-If your database contains references to other databases, you need to grant the integration access to those referenced databases as well. Repeat step 4 for each referenced database to ensure your integration has the necessary access.
-:::
+If your database contains references to other databases, you need to grant the integration access to those referenced databases as well.
+Repeat this step for each referenced database that your integration must access.
 
 ## Build with Notion components in Langflow
 
@@ -9,10 +9,10 @@ The Notion Conversational Agent is an AI-powered assistant that interacts with y
 ## Prerequisites
 
-- [Notion App](/integrations/notion/setup)
-- [Notion account and API key](https://www.notion.so/my-integrations)
-- [OpenAI API key](https://platform.openai.com/account/api-keys)
-- [Download Flow Conversation Agent Flow](./Conversational_Notion_Agent.json)(Download link)
+- [A Notion App](/integrations/notion/setup)
+- [A Notion account and API key](https://www.notion.so/my-integrations)
+- [An OpenAI API key](https://platform.openai.com/account/api-keys)
+- Recommended: [Download the Conversation Agent Flow JSON](./Conversational_Notion_Agent.json), and then [import the flow](/concepts-flows-import) into Langflow.
 
 ## Components
@@ -9,13 +9,13 @@ The Notion Agent for Meeting Notes is an AI-powered tool that automatically proc
 ## Prerequisites
 
-- [Notion App](/integrations/notion/setup)
-- [Notion API key](https://www.notion.so/my-integrations)
-- [OpenAI API key](https://platform.openai.com/account/api-keys)
-- [Download Flow Meeting Agent Flow](./Meeting_Notes_Agent.json)(Download link)
+- [A Notion App](/integrations/notion/setup)
+- [A Notion API key](https://www.notion.so/my-integrations)
+- [An OpenAI API key](https://platform.openai.com/account/api-keys)
+- Recommended: [Download the Meeting Agent Flow JSON](./Meeting_Notes_Agent.json), and then [import the flow](/concepts-flows-import) into Langflow.
 
 :::important
-Before using this flow, ensure you have obtained the necessary API keys from Notion and OpenAI. These keys are essential for the flow to function properly. Keep them secure and do not share them publicly.
+Treat all keys and other credentials as sensitive information. Use secure references, and don't share them publicly.
 :::
 
 ## Components
@@ -17,7 +17,7 @@ For more information, see the [NVIDIA G-assist project repository](https://githu
 * Windows operating system
 * NVIDIA GPU
-* `gassist.rise` package installed. This package is already installed with Langflow.
+* `gassist.rise` package installed. This package is included in your [Langflow installation](/get-started-installation).
 
 ## Use the G-Assist component in a flow
 
 1. Create a flow with a **Chat input** component, a **G-Assist** component, and a **Chat output** component.
@@ -3,16 +3,17 @@ title: Integrate NVIDIA Retriever Extraction with Langflow
 slug: /integrations-nvidia-ingest
 ---
 
-:::note
-NVIDIA Retriever Extraction is also known as NV-Ingest and NeMo Retriever Extraction.
-:::
-
 The **NVIDIA Retriever Extraction** component integrates with the [NVIDIA nv-ingest](https://github.com/NVIDIA/nv-ingest) microservice for data ingestion, processing, and extraction of text files.
 
 The `nv-ingest` service supports multiple extraction methods for PDF, DOCX, and PPTX file types, and includes pre- and post-processing services like splitting, chunking, and embedding generation. The extractor service's High Resolution mode uses the `nemoretriever-parse` extraction method for better quality extraction from scanned PDF documents. This feature is only available for PDF files.
 
 The **NVIDIA Retriever Extraction** component imports the NVIDIA `Ingestor` client, ingests files with requests to the NVIDIA ingest endpoint, and outputs the processed content as a list of [Data](/data-types#data) objects. `Ingestor` accepts additional configuration options for data extraction from other text formats. To configure these options, see the [component parameters](/integrations-nvidia-ingest#parameters).
 
+:::tip
+NVIDIA Retriever Extraction is also known as NV-Ingest and NeMo Retriever Extraction.
+:::
+
 ## Prerequisites
 
 * An NVIDIA Ingest endpoint. For more information on setting up an NVIDIA Ingest endpoint, see the [NVIDIA Ingest quickstart](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#quickstart).
@@ -3,7 +3,7 @@ title: Integrate NVIDIA NIMs with Langflow
 slug: /integrations-nvidia-ingest-wsl2
 ---
 
-Connect **Langflow** with **NVIDIA NIM** on an RTX Windows system with [Windows Subsystem for Linux 2 (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install) installed.
+Connect Langflow with NVIDIA NIM on an RTX Windows system with [Windows Subsystem for Linux 2 (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install) installed.
 
 [NVIDIA NIM (NVIDIA Inference Microservices)](https://docs.nvidia.com/nim/index.html) provides containers to self-host GPU-accelerated inferencing microservices.
 In this example, you connect a model component in **Langflow** to a deployed `mistral-nemo-12b-instruct` NIM on an **RTX Windows system** with **WSL2**.
@@ -13,8 +13,11 @@ For more information on NVIDIA NIM, see the [NVIDIA documentation](https://docs.
 ## Prerequisites
 
 * [NVIDIA NIM WSL2 installed](https://docs.nvidia.com/nim/wsl2/latest/getting-started.html)
-* A NIM container deployed according to the model's instructions. Prerequisites vary between models.
-For example, to deploy the `mistral-nemo-12b-instruct` NIM, follow the instructions for **Windows on RTX AI PCs (Beta)** on your [model's deployment overview](https://build.nvidia.com/nv-mistralai/mistral-nemo-12b-instruct/deploy?environment=wsl2.md)
+* A NIM container deployed according to the model's instructions
+
+    Prerequisites vary between models.
+    For example, to deploy the `mistral-nemo-12b-instruct` NIM, follow the instructions for **Windows on RTX AI PCs (Beta)** on your [model's deployment overview](https://build.nvidia.com/nv-mistralai/mistral-nemo-12b-instruct/deploy?environment=wsl2.md).
+
 * Windows 11 build 23H2 or later
 * At least 12 GB of RAM
@@ -21,11 +21,11 @@ More info about AssemblyAI:
 ## Prerequisites
 
-You need an **AssemblyAI API key**. After creating a free account, you'll find the API key in your dashboard. [Get a Free API key here](https://www.assemblyai.com/dashboard/signup).
+* An [AssemblyAI account](https://www.assemblyai.com/dashboard/signup) and an AssemblyAI API key.
 
-Enter the key in the *AssemblyAI API Key* field in all components that require the key.
+    Enter the key in the *AssemblyAI API Key* field in all Langflow components that require the AssemblyAI key.
 
-(Optional): To use LeMUR, you need to upgrade your AssemblyAI account, since this is not included in the free account.
+* Optional: To use LeMUR, you need a paid AssemblyAI account because LeMUR isn't included in the free account.
 
 ## Components
@@ -7,7 +7,7 @@ import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 import Icon from "@site/src/components/icon";
 
-Use the [MCP Tools component](/mcp-client) to connect Langflow to a [Datastax Astra DB MCP server](https://github.com/datastax/astra-db-mcp).
+This guide demonstrates how to [use Langflow as an MCP client](/mcp-client) by using the **MCP Tools** component to run a [DataStax Astra DB MCP server](https://github.com/datastax/astra-db-mcp) in an agentic flow.
 
 1. Install an LTS release of [Node.js](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).
@@ -14,8 +14,8 @@ By submitting natural language requests in a prompt to an LLM, you can obtain an
 ## Prerequisites
 
-- [A running Langflow instance](/get-started-installation)
-- [An OpenAI API key](https://platform.openai.com/)
+* [Install and start Langflow](/get-started-installation)
+* Create an [OpenAI API key](https://platform.openai.com/api-keys)
 
 ## Create the basic prompting flow
@@ -11,8 +11,8 @@ The [Language model](/components-models) component uses this input to generate a
 ## Prerequisites
 
-- [A running Langflow instance](/get-started-installation)
-- [An OpenAI API key](https://platform.openai.com/)
+* [Install and start Langflow](/get-started-installation)
+* Create an [OpenAI API key](https://platform.openai.com/api-keys)
 
 ## Create the blog writer flow
@@ -10,8 +10,8 @@ This flow demonstrates adding a file to the [File](/components-data#file) compon
 ## Prerequisites
 
-- [A running Langflow instance](/get-started-installation)
-- [An OpenAI API key](https://platform.openai.com/)
+* [Install and start Langflow](/get-started-installation)
+* Create an [OpenAI API key](https://platform.openai.com/api-keys)
 
 ## Create the document QA flow
@@ -12,8 +12,8 @@ The [Structured output](/components-processing#structured-output) component is u
 ## Prerequisites
 
-- [A running Langflow instance](/get-started-installation)
-- [An OpenAI API key](https://platform.openai.com/)
+* [Install and start Langflow](/get-started-installation)
+* Create an [OpenAI API key](https://platform.openai.com/api-keys)
 
 ## Create the financial report parser flow
@@ -5,16 +5,12 @@ slug: /memory-chatbot
 import Icon from "@site/src/components/icon";
 
-:::info
-The **Chat memory** component is also known as the **Message history** component.
-:::
-
-This flow extends the [basic prompting flow](/basic-prompting) with a [Message history](/components-helpers#message-history) component that stores previous chat messages and uses them to provide context for the current conversation.
+This flow extends the [**Basic Prompting** flow template](/basic-prompting) with a [**Message History** component](/components-helpers#message-history) that stores previous chat messages and uses them to provide context for the current conversation.
 
 ## Prerequisites
 
-- [A running Langflow instance](/get-started-installation)
-- [An OpenAI API key](https://platform.openai.com/)
+* [Install and start Langflow](/get-started-installation)
+* Create an [OpenAI API key](https://platform.openai.com/api-keys)
 
 ## Create the memory chatbot flow
@@ -24,7 +20,7 @@ This flow extends the [basic prompting flow](/basic-prompting) with a [Message h
 
-This flow adds a **Message history** component to the Basic Prompting flow.
+This flow adds a **Message History** component to the Basic Prompting flow.
 This component retrieves previous messages and sends them to the **Prompt** component to fill a part of the **Template** with context.
 
 To examine the template, click the **Template** field in the **Prompt** component.
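
As an illustration of what this wiring does (a sketch under simple assumptions, not Langflow's actual implementation), retrieved messages can be serialized and substituted into a template variable such as `memory` before the prompt reaches the model:

```python
def render_prompt(template: str, history: list[tuple[str, str]], user_input: str) -> str:
    """Fill a prompt template with serialized chat history and the new message.

    Illustrative only: the Message History component does the equivalent
    work of retrieving stored messages and handing them to the Prompt
    component's template. Here history is assumed to be (sender, text) pairs.
    """
    memory = "\n".join(f"{sender}: {text}" for sender, text in history)
    return template.format(memory=memory, user_input=user_input)

# A hypothetical template with a {memory} slot for prior turns.
template = "Conversation so far:\n{memory}\n\nUser: {user_input}\nAI:"
history = [("User", "My name is Ada."), ("AI", "Nice to meet you, Ada!")]
prompt = render_prompt(template, history, "What is my name?")
```

Because the earlier turns are inlined into the prompt, the model can answer "What is my name?" even though that fact only appears in the stored history.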
@@ -11,9 +11,9 @@ Each successive agent in the flow builds on the work of the previous agent, crea
 ## Prerequisites
 
-- [A running Langflow instance](/get-started-installation)
-- [An OpenAI API key](https://platform.openai.com/)
-- [A Tavily AI API key](https://www.tavily.com/)
+* [Install and start Langflow](/get-started-installation)
+* Create an [OpenAI API key](https://platform.openai.com/api-keys)
+* Create a [Tavily AI API key](https://www.tavily.com/)
 
 ## Open Langflow and create a new flow
@@ -13,8 +13,8 @@ The **Agent** selects the [Calculator](/components-helpers#calculator) tool for
 ## Prerequisites
 
-- [A running Langflow instance](/get-started-installation)
-- [An OpenAI API key](https://platform.openai.com/)
+* [Install and start Langflow](/get-started-installation)
+* Create an [OpenAI API key](https://platform.openai.com/api-keys)
 
 ## Open Langflow and start a new flow
@@ -15,9 +15,9 @@ All agents have access to the **Search API** and **URL Content Fetcher** compone
 ## Prerequisites
 
-- [A running Langflow instance](/get-started-installation)
-- [An OpenAI API key](https://platform.openai.com/)
-- [A Search API key](https://www.searchapi.io/)
+* [Install and start Langflow](/get-started-installation)
+* Create an [OpenAI API key](https://platform.openai.com/api-keys)
+* Create a [Search API key](https://www.searchapi.io/)
 
 ## Open Langflow and start a new flow
@@ -13,15 +13,14 @@ This enables **vector search**, a more powerful and context-aware search.
 We've chosen [Astra DB](https://astra.datastax.com/signup?utm_source=langflow-pre-release&utm_medium=referral&utm_campaign=langflow-announcement&utm_content=create-a-free-astra-db-account) as the vector database for this starter flow, but you can follow along with any of Langflow's vector database options.
 
 ## Prerequisites
 
-- [A running Langflow instance](/get-started-installation)
-- [An OpenAI API key](https://platform.openai.com/)
-- [An Astra DB vector database](https://docs.datastax.com/en/astra-db-serverless/get-started/quickstart.html) with the following:
-  - An Astra DB application token scoped to read and write to the database
-  - A collection created in [Astra](https://docs.datastax.com/en/astra-db-serverless/databases/manage-collections.html#create-collection) or a new collection created in the **Astra DB** component
+* [Install and start Langflow](/get-started-installation)
+* Create an [OpenAI API key](https://platform.openai.com/api-keys)
+* Create an [Astra DB vector database](https://docs.datastax.com/en/astra-db-serverless/get-started/quickstart.html)
+* Create an [Astra application token scoped to your database](https://docs.datastax.com/en/astra-db-serverless/administration/manage-application-tokens.html#database-token)
+
+    You can use an existing [collection](https://docs.datastax.com/en/astra-db-serverless/databases/manage-collections.html#create-collection) or create a new collection through the **Astra DB** component.
 
 ## Open Langflow and start a new project
@@ -65,7 +64,7 @@ The **Retriever Flow** (top of the screen) embeds the user's queries into vector
 Complete the **Name**, **Cloud provider**, and **Region** fields, and then click **Create**. **Database creation takes a few minutes**.
-3. Select your **Collection**. Collections are created in your [Astra DB deployment](https://astra.datastax.com) for storing vector data.
 :::info
-If you select a collection embedded with Nvidia through Astra's vectorize service, the **Embedding Model** port is removed, because you have already generated embeddings for this collection with the Nvidia `NV-Embed-QA` model. The component fetches the data from the collection, and uses the same embeddings for queries.
+If your collection uses the Astra vectorize NVIDIA integration, the **Embedding Model** port is removed, because the collection already generates embeddings with the integrated NVIDIA model. The component fetches the data from the collection, and uses the same model to generate embeddings for queries.
 :::
+3. If you don't have a collection, create a new one within the component.
@@ -13,10 +13,10 @@ With the agent connected, your application can use any connected tools to retrie
 ## Prerequisites
 
-- [A running Langflow instance](/get-started-installation)
-- [A Langflow API key](/configuration-api-keys)
-- [An OpenAI API key](https://platform.openai.com/api-keys)
-- [Langflow JavaScript client installed](/typescript-client)
+* [Install and start Langflow](/get-started-installation)
+* Create a [Langflow API key](/configuration-api-keys)
+* Install the [Langflow JavaScript client](/typescript-client)
+* Create an [OpenAI API key](https://platform.openai.com/api-keys)
 
 This tutorial uses an OpenAI LLM. If you want to use a different provider, you need a valid credential for that provider.
@@ -15,15 +15,15 @@ The main focus of this tutorial is to show you how to provide files as input to
 ## Prerequisites
 
-- [A running Langflow instance](/get-started-installation)
-- [A Langflow API key](/configuration-api-keys)
-- [An OpenAI API key](https://platform.openai.com/api-keys)
+* [Install and start Langflow](/get-started-installation)
+* Create a [Langflow API key](/configuration-api-keys)
+* Create an [OpenAI API key](https://platform.openai.com/api-keys)
 
 This tutorial uses an OpenAI LLM. If you want to use a different provider, you need a valid credential for that provider.
 
 ## Create a flow that accepts file input
 
-To ingest files, your flow must have a **File** component attached to a component that receives input, such as a **Prompt** or **Agent** component.
+To ingest files, your flow must have a **File** component attached to a component that receives input, such as a **Prompt Template** or **Agent** component.
 
 The following steps modify the [**Basic prompting**](/basic-prompting) template to accept file input:
@@ -32,27 +32,30 @@ The following steps modify the [**Basic prompting**](/basic-prompting) template
 If you want to use a different provider or model, edit the **Model Provider**, **Model Name**, and **API Key** fields accordingly.
 3. To verify that your API key is valid, click <Icon name="Play" aria-hidden="true" /> **Playground**, and then ask the LLM a question.
-The LLM should respond according to the specifications in the **Prompt** component's **Template** field.
-4. Exit the **Playground**, and then modify the **Prompt** component to accept file input in addition to chat input.
+The LLM should respond according to the specifications in the **Prompt Template** component's **Template** field.
+4. Exit the **Playground**, and then modify the **Prompt Template** component to accept file input in addition to chat input.
 To do this, edit the **Template** field, and then replace the default prompt with the following text:
 
 ```text
 ChatInput:
 {chat-input}
 File:
 {file}
 ```
 The **Prompt** component gets a new input port for each value in curly braces. At this point, your **Prompt** component should have **chat-input** and **file** input ports.
 
 :::tip
-Within the curly braces, you can use any port name you like. For this tutorial, the ports are named after the components that connect to them.
+You can use any string to name your template variables.
+These strings become the names of the fields (input ports) on the **Prompt Template** component.
+
+For this tutorial, the variables are named after the components that connect to them: **chat-input** for the **Chat Input** component and **file** for the **File** component.
 :::
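
To make the variable-to-port mechanism concrete, the following sketch mimics the substitution behavior (the function and regex are illustrative assumptions, not Langflow internals). Plain `str.format()` is avoided because hyphenated names like `chat-input` aren't valid format field names:

```python
import re

def fill_template(template: str, values: dict[str, str]) -> str:
    """Replace each {variable} in the template with the value wired to
    the matching input port. A hypothetical stand-in for what the
    Prompt Template component does with its connected inputs."""
    # [\w-]+ accepts hyphenated names such as chat-input.
    return re.sub(r"\{([\w-]+)\}", lambda m: values[m.group(1)], template)

template = "ChatInput:\n{chat-input}\nFile:\n{file}"
ports = {
    "chat-input": "Summarize the attached file.",
    "file": "Quarterly revenue grew 12%.",
}
result = fill_template(template, ports)
```

Each key in `ports` corresponds to one input port on the component; a missing key would surface as a `KeyError`, much like an unconnected required port.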
 
-5. Add a [File component](/components-data#file) to the flow, and then connect the **Raw Content** output port to the Prompt component's **file** input port.
+5. Add a [**File** component](/components-data#file) to the flow, and then connect the **Raw Content** output port to the **Prompt Template** component's **file** input port.
 To connect ports, click and drag from one port to the other.
 
 You can add files directly to the file component to pre-load input before running the flow, or you can load files at runtime. The next section of this tutorial covers runtime file uploads.
 
-At this point your flow has five components. The Chat Input and File components are connected to the Prompt component's input ports. Then, the Prompt component's output port is connected to the Language Model component's input port. Finally, the Language Model component's output port is connected to the Chat Output component, which returns the final response to the user.
+At this point your flow has five components. The Chat Input and File components are connected to the **Prompt Template** component's input ports. Then, the **Prompt Template** component's output port is connected to the Language Model component's input port. Finally, the Language Model component's output port is connected to the Chat Output component, which returns the final response to the user.
@@ -11,11 +11,11 @@ This tutorial demonstrates how you can use Langflow to create a chatbot applicat
 ## Prerequisites
 
-- [A running Langflow instance](/get-started-installation)
-- [A Langflow API key](/configuration-api-keys)
-- [An OpenAI API key](https://platform.openai.com/)
-- [Langflow JavaScript client installed](/typescript-client)
-- Familiarity with vector search concepts and applications, such as [vector databases](https://www.datastax.com/guides/what-is-a-vector-database) and [RAG](https://www.datastax.com/guides/what-is-retrieval-augmented-generation)
+* [Install and start Langflow](/get-started-installation)
+* Create a [Langflow API key](/configuration-api-keys)
+* Create an [OpenAI API key](https://platform.openai.com/api-keys)
+* Install the [Langflow JavaScript client](/typescript-client)
+* Be familiar with vector search concepts and applications, such as [vector databases](https://www.datastax.com/guides/what-is-a-vector-database) and [RAG](https://www.datastax.com/guides/what-is-retrieval-augmented-generation)
 
 ## Create a vector RAG flow
@@ -25,9 +25,9 @@ In this tutorial, you will use the Langflow **MCP Tools** component to connect m
 ## Prerequisites
 
-* [A running Langflow instance](/get-started-installation)
-* [A Langflow API key](/configuration-api-keys)
-* [An OpenAI API key](https://platform.openai.com/api-keys)
+* [Install and start Langflow](/get-started-installation)
+* Create a [Langflow API key](/configuration-api-keys)
+* Create an [OpenAI API key](https://platform.openai.com/api-keys)
 
 This tutorial uses an OpenAI LLM. If you want to use a different provider, you need a valid credential for that provider.
@@ -146,11 +146,17 @@ module.exports = {
         label: "Observability",
         items: [
           "Develop/logging",
-          "Integrations/Arize/integrations-arize",
-          "Integrations/integrations-langfuse",
-          "Integrations/integrations-langsmith",
-          "Integrations/integrations-langwatch",
-          "Integrations/integrations-opik",
+          {
+            type: "category",
+            label: "Monitoring",
+            items: [
+              "Integrations/Arize/integrations-arize",
+              "Integrations/integrations-langfuse",
+              "Integrations/integrations-langsmith",
+              "Integrations/integrations-langwatch",
+              "Integrations/integrations-opik",
+            ],
+          },
           "Contributing/contributing-telemetry",
         ],
       },