docs: v1.1.2 (#5850)

* docs:add-changelog-to-nav

* docs: add OpenRouter component documentation with detailed inputs and outputs

* docs: add Outputs section to components-models documentation for Cohere and Ollama

* docs: update references from configuration-objects to concepts-objects across multiple components and documentation files

* feat: Add DataFrame operations section to components-processing documentation

* title-case-in-nav

* fix-memories-tab-in-chat-memory

* tool-calling-agent-update

* feat: enhance documentation with icon imports and improved instructions for OpenAI component

* material-icon

* fix: update documentation for tool mode input connection in agent component

* add-loop-component

* add-img-for-loop-summary

* feat: add documentation for using logic components in a flow with examples

* fix: enhance documentation for Loop component with detailed data flow explanation

* redirect-for-config-objects-page

* fix: improve error handling in data processing module

* fix: update documentation for Data objects in Loop component and add import statement in memory chatbot tutorial

* quickstart-screenshots

* docs: update starter flow images

* update-agent-screenshots

* move-repl-agent

* docs: enhance global variables documentation and clarify prerequisites for vector store RAG flow

* docs: update Simple Agent to use URL component

* docs: enhance memory chatbot tutorial with example conversation and clarify session ID terminology

* docs: update visibility icon description in concepts-components.md

* Apply suggestions from code review

Co-authored-by: brian-f <brian.fisher@datastax.com>

* correct-playground-sequence-and-typo

---------

Co-authored-by: brian-f <brian.fisher@datastax.com>
Mendon Kissling 2025-01-24 09:24:57 -05:00 committed by GitHub
commit 0d11564dea
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
52 changed files with 917 additions and 1370 deletions


@@ -149,14 +149,7 @@ Current Date and Time: I can retrieve the current date and time in various time
## Make any component a tool
These components support **Tool Mode**:
* **URL**
* **API request**
* **Calculator**
* **Current date**
If the component you want to use as a tool doesn't have a **Tool Mode** button, add `tool_mode=True` to one of the component's inputs.
If the component you want to use as a tool doesn't have a **Tool Mode** button, add `tool_mode=True` to one of the component's inputs, and connect the new **Toolset** output to the agent's **Tools** input.
Langflow supports **Tool Mode** for the following data types:
@@ -167,7 +160,7 @@ Langflow supports **Tool Mode** for the following data types:
* `MultilineInput`
* `DropdownInput`
For example, in the [components as tools](#components-as-tools) example above, `tool_mode=True,` is added to the `MultilineInput` input so the custom component can be used as a tool.
For example, the [components as tools](#components-as-tools) example above adds `tool_mode=True` to the `MessageTextInput` input so the custom component can be used as a tool.
```python
inputs = [
```


@@ -11,7 +11,7 @@ Helper components provide utility functions to help manage data, tasks, and othe
Chat memory in Langflow is stored either in local Langflow tables with `LCBufferMemory`, or connected to an external database.
The **Store Message** helper component stores chat memories as [Data](/configuration-objects) objects, and the **Message History** helper component retrieves chat messages as data objects or strings.
The **Store Message** helper component stores chat memories as [Data](/concepts-objects) objects, and the **Message History** helper component retrieves chat messages as data objects or strings.
This example flow stores and retrieves chat history from an [AstraDBChatMemory](/components-memories#astradbchatmemory-component) component with **Store Message** and **Chat Memory** components.


@@ -13,7 +13,8 @@ The difference between Chat Input and Text Input components is the output format
This component collects user input from the chat.
The Chat Input component creates a [Message](/configuration-objects) object that includes the input text, sender information, session ID, file attachments, and styling properties. It can optionally store the message in a chat history and supports customization of the message appearance.
The Chat Input component creates a [Message](/concepts-objects) object that includes the input text, sender information, session ID, file attachments, and styling properties.
It can optionally store the message in a chat history and supports customization of the message appearance.
### Inputs
@@ -56,7 +57,8 @@ The Text Input component offers one input field for text, while the Chat Input h
## Chat Output
The Chat Output component creates a [Message](/configuration-objects) object that includes the input text, sender information, session ID, and styling properties. It can optionally store the message in a chat history and supports customization of the message appearance, including background color, icon, and text color.
The Chat Output component creates a [Message](/concepts-objects) object that includes the input text, sender information, session ID, and styling properties.
It can optionally store the message in a chat history and supports customization of the message appearance, including background color, icon, and text color.
### Inputs
@@ -81,7 +83,9 @@ The Chat Output component creates a [Message](/configuration-objects) object tha
## Text Output
The TextOutputComponent displays text output in the **Playground**. It takes a single input of text and returns a [Message](/configuration-objects) object containing that text. The component is simpler compared to the Chat Output, but focuses solely on displaying text without additional chat-specific features or customizations.
The TextOutputComponent displays text output in the **Playground**.
It takes a single input of text and returns a [Message](/concepts-objects) object containing that text.
The component is simpler compared to the Chat Output but focuses solely on displaying text without additional chat-specific features or customizations.
### Inputs


@@ -14,7 +14,8 @@ Loaders fetch data into Langflow from various sources, such as databases, websit
## Use a loader component in a flow
This flow creates a question-and-answer chatbot for documents that are loaded into the flow.
The [Unstructured.io](https://unstructured.io/) loader component loads files from your local machine, and then parses them into a list of structured [Data](/configuration-objects) objects. This loaded data informs the **Open AI** component's responses to your questions.
The [Unstructured.io](https://unstructured.io/) loader component loads files from your local machine, and then parses them into a list of structured [Data](/concepts-objects) objects.
This loaded data informs the **Open AI** component's responses to your questions.
![Sample Flow retrieving data with unstructured](/img/starter-flow-unstructured-qa.png)
@@ -62,7 +63,7 @@ The GitLoader component uses the GitLoader from LangChain to fetch and load docu
## Unstructured
This component uses the [Unstructured.io](https://unstructured.io/) Serverless API to load and parse files into a list of structured [Data](/configuration-objects) objects.
This component uses the [Unstructured.io](https://unstructured.io/) Serverless API to load and parse files into a list of structured [Data](/concepts-objects) objects.
### Inputs


@@ -7,6 +7,33 @@ slug: /components-logic
Logic components provide functionalities for routing, conditional processing, and flow management.
## Use a logic component in a flow
This flow creates a summarizing "for each" loop with the [Loop](/components-logic#loop) component.
The component iterates over a list of [Data](/concepts-objects#data-object) objects until iteration is complete, and then the **Done** loop aggregates the results.
The **File** component loads text files from your local machine, and then the **Parse Data** component parses them into a list of structured `Data` objects.
The **Loop** component passes each `Data` object to a **Prompt** to be summarized.
When the **Loop** component runs out of `Data`, the **Done** loop activates, which counts the number of pages and summarizes their tone with another **Prompt**.
This is represented in Langflow by connecting the Parse Data component's **Data List** output to the Loop component's `Data` loop input.
![Sample Flow looping summarizer](/img/loop-text-summarizer.png)
The output will look similar to this:
```plain
Document Summary
Total Pages Processed
Total Pages: 2
Overall Tone of Document
Tone: Informative and Instructional
The documentation outlines microservices architecture patterns and best practices.
It emphasizes service isolation and inter-service communication protocols.
The use of asynchronous messaging patterns is recommended for system scalability.
It includes code examples of REST and gRPC implementations to demonstrate integration approaches.
```
## Conditional router
This component routes an input message to a corresponding output based on text comparison.
@@ -90,6 +117,24 @@ This component listens for a notification and retrieves its associated state.
|--------|------|--------------------------------------------|
| output | Data | The state associated with the notification. |
## Loop
This component iterates over a list of [Data](/concepts-objects#data-object) objects, outputting one item at a time and aggregating results from loop inputs.
### Inputs
| Name | Type | Description |
|------|-----------|------------------------------------------------------|
| data | Data/List | The initial list of Data objects to iterate over. |
### Outputs
| Name | Type | Description |
|------|---------|-------------------------------------------------------|
| item | Data | Outputs one item at a time from the data list. |
| done | Data | Triggered when iteration is complete; returns aggregated results. |
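The Item/Done behavior described above can be sketched in plain Python. This is a conceptual illustration only, not Langflow's implementation:

```python
def run_loop(data: list, process) -> list:
    """Conceptual sketch of the Loop component, not Langflow's implementation."""
    aggregated = []
    for item in data:             # the "item" output fires once per Data object
        aggregated.append(process(item))
    return aggregated             # the "done" output returns the aggregated results

summaries = run_loop(["page 1 text", "page 2 text"], lambda page: page.upper())
print(summaries)
```

Each pass through the loop processes one item, and the aggregate becomes available only once the list is exhausted, mirroring how the **Done** output activates after the last `Data` object.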
## Notify
This component generates a notification for the Listen component to use.


@@ -15,7 +15,7 @@ Memory components provide access to their respective external databases **as mem
This example flow stores and retrieves chat history from an **Astra DB Chat Memory** component with **Store Message** and **Chat Memory** components.
The **Store Message** helper component stores chat memories as [Data](/configuration-objects) objects, and the **Message History** helper component retrieves chat messages as [Data](/configuration-objects) objects or strings.
The **Store Message** helper component stores chat memories as [Data](/concepts-objects) objects, and the **Message History** helper component retrieves chat messages as [Data](/concepts-objects) objects or strings.
![Sample Flow storing Chat Memory in AstraDB](/img/astra_db_chat_memory_rounded.png)


@@ -13,7 +13,7 @@ Refer to your specific component's documentation for more information on paramet
Model components receive inputs and prompts for generating text, and the generated text is sent to an output component.
The model output can also be sent to the **Language Model** port and on to a **Parse Data** component, where the output can be parsed into structured [Data](/configuration-objects) objects.
The model output can also be sent to the **Language Model** port and on to a **Parse Data** component, where the output can be parsed into structured [Data](/concepts-objects) objects.
This example has the OpenAI model in a chatbot flow. For more information, see the [Basic prompting flow](/starter-projects-basic-prompting).
@@ -125,6 +125,12 @@ For more information, see the [Cohere documentation](https://cohere.ai/).
| Temperature | Temperature | Specifies the sampling temperature. Defaults to `0.75`. |
| Input Value | Input Value | Specifies the input text for text generation. |
### Outputs
| Name | Type | Description |
|-------|---------------|------------------------------------------------------------------|
| model | LanguageModel | An instance of the Cohere model configured with the specified parameters. |
## Google Generative AI
This component generates text using Google's Generative AI models.
@@ -269,6 +275,12 @@ For more information, see [Ollama documentation](https://ollama.com/).
| Model Name | Model Name | The model name to use. |
| Temperature | Temperature | Controls the creativity of model responses. |
### Outputs
| Name | Type | Description |
|-------|---------------|------------------------------------------------------------------|
| model | LanguageModel | An instance of an Ollama model configured with the specified parameters. |
## OpenAI
This component generates text using OpenAI's language models.
@@ -299,6 +311,30 @@ This component generates text using Qianfan's language models.
For more information, see [Qianfan documentation](https://github.com/baidubce/bce-qianfan-sdk).
## OpenRouter
This component generates text using OpenRouter's unified API for multiple AI models from different providers.
For more information, see [OpenRouter documentation](https://openrouter.ai/docs).
### Inputs
| Name | Type | Description |
|-------------|---------------|------------------------------------------------------------------|
| api_key | SecretString | Your OpenRouter API key for authentication. |
| site_url | String | Your site URL for OpenRouter rankings (advanced). |
| app_name | String | Your app name for OpenRouter rankings (advanced). |
| provider | String | The AI model provider to use. |
| model_name | String | The specific model to use for chat completion. |
| temperature | Float | Controls randomness in the output. Range: [0.0, 2.0]. Default: 0.7. |
| max_tokens | Integer | The maximum number of tokens to generate (advanced). |
### Outputs
| Name | Type | Description |
|-------|---------------|------------------------------------------------------------------|
| model | LanguageModel | An instance of ChatOpenAI configured with the specified parameters. |
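OpenRouter exposes an OpenAI-compatible chat-completions endpoint, and the inputs above map onto the request roughly as follows. This is a hedged sketch: the header names follow OpenRouter's published conventions, and all values are placeholders:

```python
# Hedged sketch of the OpenAI-compatible request that OpenRouter accepts.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key, model_name, prompt, temperature=0.7,
                  max_tokens=None, site_url=None, app_name=None):
    headers = {"Authorization": f"Bearer {api_key}"}
    if site_url:
        headers["HTTP-Referer"] = site_url   # used for OpenRouter rankings
    if app_name:
        headers["X-Title"] = app_name        # used for OpenRouter rankings
    payload = {
        "model": model_name,                 # e.g. "openai/gpt-4o-mini"
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    return headers, payload

headers, payload = build_request("<OPENROUTER_API_KEY>", "openai/gpt-4o-mini",
                                 "Hello", site_url="https://example.com",
                                 app_name="My App")
print(payload["model"])
```

POSTing `payload` with `headers` to `API_URL` returns a standard chat-completion response, which is why the component's output is an instance of `ChatOpenAI`.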
## Perplexity
This component generates text using Perplexity's language models.


@@ -9,7 +9,7 @@ Processing components process and transform data within a flow.
## Use a processing component in a flow
The **Split Text** processing component in this flow splits the incoming [data](/configuration-objects) into chunks to be embedded into the vector store component.
The **Split Text** processing component in this flow splits the incoming [data](/concepts-objects) into chunks to be embedded into the vector store component.
The component offers control over chunk size, overlap, and separator, which affect context and granularity in vector store retrieval results.
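As a rough illustration of how chunk size and overlap interact, here is a simplified character-based splitter. The real component uses LangChain's separator-aware text splitters; this sketch assumes `overlap < chunk_size`:

```python
def split_text(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Character-based splitting sketch; assumes overlap < chunk_size."""
    step = chunk_size - overlap   # larger overlap -> more shared context per chunk
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("abcdefghij", chunk_size=4, overlap=2)
print(chunks)  # → ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

Overlapping chunks preserve context across boundaries, which tends to improve retrieval quality at the cost of storing more vectors.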
@@ -28,6 +28,44 @@ This component concatenates two text sources into a single text chunk using a sp
| delimiter | Delimiter | A string used to separate the two text inputs. Defaults to a space. |
## DataFrame operations
This component performs the following operations on pandas [DataFrames](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html):
| Operation | Description | Required Inputs |
|-----------|-------------|-----------------|
| Add Column | Adds a new column with a constant value | new_column_name, new_column_value |
| Drop Column | Removes a specified column | column_name |
| Filter | Filters rows based on column value | column_name, filter_value |
| Head | Returns first n rows | num_rows |
| Rename Column | Renames an existing column | column_name, new_column_name |
| Replace Value | Replaces values in a column | column_name, replace_value, replacement_value |
| Select Columns | Selects specific columns | columns_to_select |
| Sort | Sorts DataFrame by column | column_name, ascending |
| Tail | Returns last n rows | num_rows |
### Inputs
| Name | Display Name | Info |
|------|--------------|------|
| df | DataFrame | The input DataFrame to operate on. |
| operation | Operation | Select the DataFrame operation to perform. Options: Add Column, Drop Column, Filter, Head, Rename Column, Replace Value, Select Columns, Sort, Tail |
| column_name | Column Name | The column name to use for the operation. |
| filter_value | Filter Value | The value to filter rows by. |
| ascending | Sort Ascending | Whether to sort in ascending order. |
| new_column_name | New Column Name | The new column name when renaming or adding a column. |
| new_column_value | New Column Value | The value to populate the new column with. |
| columns_to_select | Columns to Select | List of column names to select. |
| num_rows | Number of Rows | Number of rows to return (for head/tail). Default: 5 |
| replace_value | Value to Replace | The value to replace in the column. |
| replacement_value | Replacement Value | The value to replace with. |
### Outputs
| Name | Display Name | Info |
|------|--------------|------|
| output | DataFrame | The resulting DataFrame after the operation. |
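Most of the operations in the tables above map directly onto standard pandas calls. A small pandas sketch of Filter, Add Column, and Sort (the column names are examples only):

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "b", "c"], "score": [3, 1, 2]})

# Filter: keep rows where a column equals a value.
filtered = df[df["name"] == "b"]

# Add Column: add a new column with a constant value.
with_col = df.assign(source="docs")

# Sort: order rows by a column, ascending.
sorted_df = df.sort_values("score", ascending=True)

print(sorted_df.iloc[0]["name"])  # → b
```

The component's `ascending` input corresponds to the `ascending` parameter of `sort_values`, and `num_rows` to the argument of `head`/`tail`.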
## Filter Data
This component filters a Data object based on a list of keys.


@@ -49,7 +49,7 @@ The Data output is primarily used when directly querying Astra DB, while the Too
| Name | Type | Description |
|------|------|-------------|
| Data | List[`Data`] | A list of [Data](/configuration-objects) objects containing the query results from Astra DB. Each `Data` object contains the document fields specified by the projection attributes. Limited by the `number_of_results` parameter. |
| Data | List[`Data`] | A list of [Data](/concepts-objects) objects containing the query results from Astra DB. Each `Data` object contains the document fields specified by the projection attributes. Limited by the `number_of_results` parameter. |
| Tool | StructuredTool | A LangChain `StructuredTool` object that can be used in agent workflows. Contains the tool name, description, argument schema based on tool parameters, and the query function. |
@@ -79,7 +79,7 @@ The main difference between this tool and the **Astra DB Tool** is that this too
| Name | Type | Description |
|------|------|-------------|
| Data | List[Data] | A list of [Data](/configuration-objects) objects containing the query results from the Astra DB CQL table. Each Data object contains the document fields specified by the projection fields. Limited by the number_of_results parameter. |
| Data | List[Data] | A list of [Data](/concepts-objects) objects containing the query results from the Astra DB CQL table. Each Data object contains the document fields specified by the projection fields. Limited by the `number_of_results` parameter. |
| Tool | StructuredTool | A LangChain StructuredTool object that can be used in agent workflows. Contains the tool name, description, argument schema based on partition and clustering keys, and the query function. |
## Bing Search API


@@ -23,7 +23,9 @@ This vector data can then be retrieved for workloads like Retrieval Augmented Ge
![](/img/vector-store-retrieval.png)
The user's chat input is embedded and compared to the vectors embedded during document ingestion for a similarity search. The results are output from the vector database component as a [Data](/configuration-objects) object, and parsed into text. This text fills the `{context}` variable in the **Prompt** component, which informs the **Open AI model** component's responses.
The user's chat input is embedded and compared to the vectors embedded during document ingestion for a similarity search.
The results are output from the vector database component as a [Data](/concepts-objects) object and parsed into text.
This text fills the `{context}` variable in the **Prompt** component, which informs the **Open AI model** component's responses.
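The similarity search at the core of this flow reduces to comparing embedding vectors. A minimal cosine-similarity sketch over toy vectors, not tied to any Langflow API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy three-dimensional "embeddings"; real embeddings have hundreds of dimensions.
query = [0.9, 0.1, 0.0]
docs = {"doc1": [0.0, 1.0, 0.0], "doc2": [1.0, 0.0, 0.1]}
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # → doc2
```

The documents whose vectors score highest against the query vector are returned as the search results.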
Alternatively, connect the vector database component's **Retriever** port to a [retriever tool](/components-tools#retriever-tool), and then to an [agent](/components-agents) component. This enables the agent to use your vector database as a tool and make decisions based on the available data.


@@ -27,7 +27,7 @@ Use the component controls to do the following:
Click <Icon name="Ellipsis" aria-label="Horizontal ellipsis" /> **All** to see additional options for a component.
To view a component's output and logs, click the <Icon name="View" aria-label="View icon" /> **Visibility** icon.
To view a component's output and logs, click the <Icon name="TextSearch" aria-label="Search and filter" /> icon.
To run a single component, click <Icon name="Play" aria-label="Play button" /> **Play**.


@@ -1,9 +1,9 @@
---
title: Langflow objects
slug: /configuration-objects
slug: /concepts-objects
---
In Langflow, the Data and Message objects are Pydantic models that serve as structured, functional representations of data.
In Langflow, objects are [Pydantic](https://docs.pydantic.dev/latest/api/base_model/) models that serve as structured, functional representations of data.
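As a hedged illustration of what such a model looks like, here is a minimal Pydantic sketch. The field names are examples only, not Langflow's actual schema:

```python
from pydantic import BaseModel

# Hypothetical shape for illustration; Langflow's real Data model differs.
class Data(BaseModel):
    text: str = ""
    data: dict = {}

d = Data(text="hello", data={"source": "example.txt"})
print(d.text)
```

Because the objects are Pydantic models, their fields are typed and validated at construction time, which is what makes them reliable carriers of structured data between components.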
## Data object


@@ -47,7 +47,7 @@ The **workspace** is where you create AI applications by connecting and running
The workspace controls allow you to adjust your view and lock your flows in place.
* Add **Notes** to flows with the **Add Note** button, similar to commenting in code.
* To access the [Settings](#settings) menu, click ⚙️ **Settings**.
* To access the [Settings](#settings) menu, click <Icon name="Settings" aria-label="Gear icon" /> **Settings**.
This menu contains configuration for **Global Variables**, **Langflow API**, **Shortcuts**, and **Messages**.


@@ -3,12 +3,13 @@ title: Global variables
slug: /configuration-global-variables
---
import Icon from "@site/src/components/icon";
import ReactPlayer from "react-player";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Global variables let you store and reuse generic input values and credentials across your projects.
You can use a global variable in any text input field that displays the 🌐 icon.
You can use a global variable in any text input field that displays the <Icon name="Globe" aria-label="Globe" /> **Globe** icon.
Langflow stores global variables in its internal database, and encrypts the values using a secret key.


@@ -3,14 +3,15 @@ title: Quickstart
slug: /get-started-quickstart
---
import Icon from "@site/src/components/icon";
Get to know Langflow by building an OpenAI-powered chatbot application. After you've constructed a chatbot, add Retrieval Augmented Generation (RAG) to chat with your own data.
## Prerequisites
* [An OpenAI API key](https://platform.openai.com/)
* [An Astra DB vector database](https://docs.datastax.com/en/astra-db-serverless/get-started/quickstart.html) with:
* AstraDB application token
* API endpoint
* An AstraDB application token
* [A collection in Astra](https://docs.datastax.com/en/astra-db-serverless/databases/manage-collections.html#create-collection)
## Open Langflow and start a new project
@@ -48,7 +49,7 @@ You should now have a flow that looks like this:
![](/img/quickstart-basic-prompt-no-connections.png)
With no connections between them, the components won't interact with each other.
You want data to flow from **Chat Input** to **Chat Output** via the connectors between the components.
You want data to flow from **Chat Input** to **Chat Output** through the connections between the components.
Each component accepts inputs on its left side, and sends outputs on its right side.
Hover over the connection ports to see the data types that the component accepts.
For more on component inputs and outputs, see [Components overview](/concepts-components).
@@ -67,7 +68,7 @@ Add your OpenAI API key to the OpenAI model component, and add a prompt to the P
1. Add your credentials to the OpenAI component. The fastest way to complete these fields is with Langflow's [Global Variables](/configuration-global-variables).
1. In the OpenAI component's OpenAI API Key field, click the 🌐 **Globe** icon, and then click **Add New Variable**.
1. In the OpenAI component's OpenAI API Key field, click the <Icon name="Globe" aria-label="Globe" /> **Globe** button, and then click **Add New Variable**.
Alternatively, click your username in the top right corner, and then click **Settings**, **Global Variables**, and then **Add New**.
2. Name your variable. Paste your OpenAI API key (`sk-...`) in the Value field.
3. In the **Apply To Fields** field, select the OpenAI API Key field to apply this variable to all OpenAI Embeddings components.
@@ -131,9 +132,12 @@ The [OpenAI Embeddings](/components-embedding-models#openai-embeddings) componen
8. Configure the **Astra DB** component.
1. In the **Astra DB Application Token** field, add your **Astra DB** application token.
2. In the **API Endpoint** field, add your **Astra DB** API endpoint. This value is found in your [Astra DB deployment](https://astra.datastax.com) and looks similar to `https://ASTRA_DB_ID-ASTRA_DB_REGION.apps.astra.datastax.com`.
3. In the **Collection** field, enter your Astra DB collection's name. Collections are created in your [Astra DB deployment](https://astra.datastax.com) for storing vector data. The collection's **Dimensions** value must match the dimensions of the **OpenAI Embeddings Model**. If you're unsure, enter `1536` and select the `text-embedding-ada-002` model in the OpenAI Embeddings component. For more on collections, see the [DataStax Astra DB Serverless documentation](https://docs.datastax.com/en/astra-db-serverless/databases/manage-collections.html#create-collection).
The component connects to your database and populates the menus with existing databases and collections.
2. Select your **Database**.
3. Select your **Collection**. Collections are created in your [Astra DB deployment](https://astra.datastax.com) for storing vector data.
If you don't have a collection, see the [DataStax Astra DB Serverless documentation](https://docs.datastax.com/en/astra-db-serverless/databases/manage-collections.html#create-collection).
4. Select **Embedding Model** to bring your own embeddings model, which is the connected **OpenAI Embeddings** component.
The **Dimensions** value must match the dimensions of your collection. This value can be found in your **Collection** in your [Astra DB deployment](https://astra.datastax.com).
If you used Langflow's **Global Variables** feature, the RAG application flow components are already configured with the necessary credentials.


@@ -1,62 +1,56 @@
---
title: Basic Prompting
title: Basic prompting
slug: /starter-projects-basic-prompting
---
import Icon from "@site/src/components/icon";
Prompts serve as the inputs to a large language model (LLM), acting as the interface between human instructions and computational tasks.
By submitting natural language requests in a prompt to an LLM, you can obtain answers, generate text, and solve problems.
This article demonstrates how to use Langflow's prompt tools to issue basic prompts to an LLM, and how various prompting strategies can affect your outcomes.
## Prerequisites {#20bd7bc51ce04e2fb4922c95f00870d3}
---
## Prerequisites
- [Langflow installed and running](/get-started-installation)
- [OpenAI API key created](https://platform.openai.com/)
## Create the basic prompting flow {#19d5305239c841548a695e2bf7839e7a}
## Create the basic prompting flow
1. From the Langflow dashboard, click **New Flow**.
2. Select **Basic Prompting**.
3. The **Basic Prompting** flow is created.
![](/img/starter-flow-basic-prompting.png)
This flow allows you to chat with the **OpenAI** component through the **Prompt** component.
This flow allows you to chat with the **OpenAI model** component.
The model will respond according to the prompt constructed in the **Prompt** component.
4. To examine the **Template**, in the **Prompt** component, click the **Template** field.
Examine the **Prompt** component. The **Template** field instructs the LLM to `Answer the user as if you were a pirate.` This should be interesting...
```plain
Answer the user as if you were a GenAI expert, enthusiastic about helping them get started building something fresh.
```
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
5. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the <Icon name="Globe" aria-label="Globe icon" /> **Globe** button, and then click **Add New Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
## Run the basic prompting flow
## Run the basic prompting flow {#ce52f8e6b491452a9dfb069feb962eed}
1. Click the **Playground** button on the control panel (bottom right side of the workspace). This is where you can interact with your AI.
1. Click the **Playground** button.
2. Type a message and press Enter. The bot should respond in a markedly piratical manner!
## Modify the prompt for a different result {#3ab045fcbe774c8fb3adc528f9042ba0}
## Modify the prompt for a different result
1. To modify your prompt results, in the **Prompt** template, click the **Template** field. The **Edit Prompt** window opens.
2. Change `Answer the user as if you were a pirate` to a different character, perhaps `Answer the user as if you were Hermione Granger.`
3. Run the workflow again. The response will be markedly different.
1. To modify your prompt results, in the **Prompt** component, click the **Template** field. The **Edit Prompt** window opens.
2. Change the existing prompt to a different character, perhaps `Answer the user as if you were Hermione Granger.`
3. Run the workflow again and notice how the prompt changes the model's response.


@@ -3,11 +3,12 @@ title: Simple agent
slug: /starter-projects-simple-agent
---
Build a **Simple Agent** flow for an agentic application using the Tool-calling agent.
Build a **Simple Agent** flow for an agentic application using the **Tool-calling agent** component.
An **agent** uses an LLM as its "brain" to select among the connected tools and complete its tasks.
In this flow, the **Tool-calling agent** reasons using an **Open AI** LLM to solve math problems. It will select the **Calculator** tool for simpler math, and the **Python REPL** tool (with the Python `math` library) for more complex problems.
In this flow, the **Tool-calling agent** reasons using an **Open AI** LLM.
The agent selects the **Calculator** tool for simple math problems and the **URL** tool to search a URL for content.
## Prerequisites
@@ -21,12 +22,12 @@ This opens a starter flow with the necessary components to run an agentic applic
## Simple Agent flow
![](/img/starter-flow-simple-agent.png)
<img src="/img/starter-flow-simple-agent.png" alt="Starter flow simple agent" width="75%"/>
The **Simple Agent** flow consists of these components:
* The **Tool calling agent** component uses the connected LLM to reason through the user's input and select among the connected tools to complete its task.
* The **Python REPL tool** component executes Python code in a REPL (Read-Evaluate-Print Loop) interpreter.
* The **URL** tool component searches a list of URLs for content.
* The **Calculator** component performs basic arithmetic operations.
* The **Chat Input** component accepts user input to the chat.
* The **Prompt** component combines the user input with a user-defined prompt.
@@ -36,22 +37,30 @@ The **Simple Agent** flow consists of these components:
## Run the Simple Agent flow
1. Add your credentials to the Open AI component.
2. In the **Chat output** component, click ▶️ Play to start the end-to-end application flow.
A **Chat output built successfully** message and a ✅ Check on all components indicate that the flow ran successfully.
3. Click **Playground** to start a chat session.
4. Enter a simple math problem, like `2 + 2`, and then make sure the bot responds with the correct answer.
5. To confirm the REPL interpreter is working, prompt the `math` library directly with `math.sqrt(4)` and see if the bot responds with `2`.
6. The agent will also reason through more complex word problems. For example, prompt the agent with the following math problem:
2. Click **Playground** to start a chat session.
3. To confirm the tools are connected, ask the agent, `What tools are available to you?`
The response is similar to the following:
```plain
I have access to the following tools:
Calculator: Perform basic arithmetic operations.
fetch_content: Load and retrieve data from specified URLs.
fetch_content_text: Load and retrieve text data from specified URLs.
as_dataframe: Load and retrieve data in a structured format (dataframe) from specified URLs.
get_current_date: Returns the current date and time in a selected timezone.
```
4. Ask the agent a question. For example, ask it to create a tabletop character using your favorite rules set.
The agent will tell you when it's using the `URL-fetch_content_text` tool to search for rules information, and when it's using `CalculatorComponent-evaluate_expression` to generate attributes with dice rolls.
The final output should be similar to this:
```plain
Final Attributes
Strength (STR): 10
Constitution (CON): 12
Size (SIZ): 14
Dexterity (DEX): 9
Intelligence (INT): 11
Power (POW): 13
Charisma (CHA): 8
```
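The attribute values above resemble the classic 3d6 roll used by many tabletop rules sets. As a sketch of the arithmetic the agent delegates to the **Calculator** tool, the following generates a comparable set of attributes (the attribute names mirror the sample output and are illustrative, not part of any Langflow component):

```python
import random

# Attribute names mirroring the sample output above (illustrative only).
ATTRIBUTES = ["STR", "CON", "SIZ", "DEX", "INT", "POW", "CHA"]

def roll_3d6() -> int:
    """Sum of three six-sided dice, the classic tabletop attribute roll."""
    return sum(random.randint(1, 6) for _ in range(3))

def roll_attributes(seed=None) -> dict:
    """Roll one 3d6 value per attribute; a seed makes the rolls repeatable."""
    if seed is not None:
        random.seed(seed)
    return {name: roll_3d6() for name in ATTRIBUTES}
```

Each 3d6 result always falls between 3 and 18, which is why the agent's generated attributes cluster around 10.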
Now that your query has completed the journey from **Chat input** to **Chat output**, you have completed the **Simple Agent** flow.


@ -1,9 +1,9 @@
---
title: Vector store RAG
slug: /starter-projects-vector-store-rag
---
import Icon from "@site/src/components/icon";
Retrieval Augmented Generation, or RAG, is a pattern for training LLMs on your data and querying it.
@ -17,65 +17,55 @@ This enables **vector search**, a more powerful and context-aware search.
We've chosen [Astra DB](https://astra.datastax.com/signup?utm_source=langflow-pre-release&utm_medium=referral&utm_campaign=langflow-announcement&utm_content=create-a-free-astra-db-account) as the vector database for this starter flow, but you can follow along with any of Langflow's vector database options.
## Prerequisites
- [Langflow installed and running](https://docs.langflow.org/get-started-installation)
- [OpenAI API key](https://platform.openai.com/)
- [An Astra DB vector database created](https://docs.datastax.com/en/astra-db-serverless/get-started/quickstart.html) with:
- Application Token
- API Endpoint
## Open Langflow and start a new project
1. From the Langflow dashboard, click **New Flow**.
2. Select **Vector Store RAG**.
3. The **Vector Store RAG** flow is created.
## Build the vector RAG flow
The vector store RAG flow is built of two separate flows for ingestion and query.
![](/img/starter-flow-vector-rag.png)
The **Load Data Flow** (bottom of the screen) creates a searchable index to be queried for contextual similarity.
It ingests data from a local file, splits it into chunks, indexes it in Astra DB, and computes embeddings for the chunks using the OpenAI embeddings model.
The **Retriever Flow** (top of the screen) embeds the user's queries into vectors, which are compared to the vector store data from the **Load Data Flow** for contextual similarity.
- **Chat Input** receives user input from the **Playground**.
- **OpenAI Embeddings** converts the user query into vector form.
- **Astra DB** performs similarity search using the query vector.
- **Parse Data** processes the retrieved chunks.
- **Prompt** combines the user query with relevant context.
- **OpenAI** generates the response using the prompt.
- **Chat Output** returns the response to the **Playground**.
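The retriever steps above reduce to a similarity search: embed the query, compare it against each stored chunk's vector, and feed the best match into the prompt. A minimal sketch in plain Python, using toy vectors in place of real OpenAI embeddings:

```python
import math

def cosine_similarity(a, b) -> float:
    """Cosine of the angle between two vectors; higher means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": a real flow gets these from the OpenAI Embeddings component.
chunks = {
    "PostgreSQL is a relational database.": [0.9, 0.1, 0.0],
    "Langflow builds LLM flows visually.": [0.1, 0.9, 0.2],
}
query_vector = [0.85, 0.15, 0.05]  # embedding of the user's question

# Retrieve the most similar chunk and build the prompt context from it.
best_chunk = max(chunks, key=lambda text: cosine_similarity(query_vector, chunks[text]))
prompt = f"Context:\n{best_chunk}\n\nQuestion: What database should I use?"
```

A vector store like Astra DB performs this same nearest-neighbor comparison at scale, over millions of vectors, instead of a Python loop.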
1. Configure the **OpenAI** model component.
1. To create a global variable for the **OpenAI** component, in the **OpenAI API Key** field, click the <Icon name="Globe" aria-label="Globe" /> **Globe** button, and then click **Add New Variable**.
2. In the **Variable Name** field, enter `openai_api_key`.
3. In the **Value** field, paste your OpenAI API Key (`sk-...`).
4. Click **Save Variable**.
2. Configure the **Astra DB** component.
1. In the **Astra DB Application Token** field, add your **Astra DB** application token.
The component connects to your database and populates the menus with existing databases and collections.
2. Select your **Database**.
3. Select your **Collection**. Collections are created in your [Astra DB deployment](https://astra.datastax.com) for storing vector data.
If you don't have a collection, see the [DataStax Astra DB Serverless documentation](https://docs.datastax.com/en/astra-db-serverless/databases/manage-collections.html#create-collection).
4. For **Embedding Model**, select the connected **OpenAI Embeddings** component to bring your own embeddings model.
The **Dimensions** value must match the dimensions of your collection. You can find this value in the **Collection** in your [Astra DB deployment](https://astra.datastax.com).
If you used Langflow's **Global Variables** feature, the RAG application flow components are already configured with the necessary credentials.
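A mismatch between the embedding model's vector size and the collection's **Dimensions** value is a common ingestion failure. A quick sanity check before indexing, assuming 1536 dimensions as an example (the size used by several OpenAI embedding models; substitute your collection's actual value):

```python
COLLECTION_DIMENSIONS = 1536  # taken from your Astra DB collection settings

def check_dimensions(vector) -> None:
    """Raise early if a vector won't fit the collection's configured dimensions."""
    if len(vector) != COLLECTION_DIMENSIONS:
        raise ValueError(
            f"Embedding has {len(vector)} dimensions; "
            f"collection expects {COLLECTION_DIMENSIONS}"
        )

check_dimensions([0.0] * 1536)  # a correctly sized vector passes silently
```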
## Run the Vector Store RAG flow


@ -1,5 +1,5 @@
---
title: Blog writer
slug: /tutorials-blog-writer
---


@ -0,0 +1,56 @@
---
title: Math agent
slug: /tutorials-math-agent
---
import Icon from "@site/src/components/icon";
Build a **Math Agent** flow for an agentic application using the **Tool-calling agent** component.
In this flow, the **Tool-calling agent** reasons using an **OpenAI** LLM to solve math problems.
It selects the **Calculator** tool for simpler math and the **Python REPL** tool (with the Python `math` library) for more complex problems.
## Prerequisites
To use this flow, you need an OpenAI API key.
## Open Langflow and start a new flow
Click **New Flow**, and then select the **Math Agent** flow.
This opens a starter flow with the necessary components to run an agentic application using the Tool-calling agent.
## Math Agent flow
![](/img/starter-flow-simple-agent-repl.png)
The **Math Agent** flow consists of these components:
* The **Tool calling agent** component uses the connected LLM to reason through the user's input and select among the connected tools to complete its task.
* The **Python REPL tool** component executes Python code in a REPL (Read-Evaluate-Print Loop) interpreter.
* The **Calculator** component performs basic arithmetic operations.
* The **Chat Input** component accepts user input to the chat.
* The **Prompt** component combines the user input with a user-defined prompt.
* The **Chat Output** component prints the flow's output to the chat.
* The **OpenAI** model component sends the user input and prompt to the OpenAI API and receives a response.
## Run the Math Agent flow
1. Add your credentials to the OpenAI component.
2. Click **Playground** to start a chat session.
3. Enter a simple math problem, like `2 + 2`, and then make sure the bot responds with the correct answer.
4. To confirm the REPL interpreter is working, prompt the `math` library directly with `math.sqrt(4)` and see if the bot responds with `2`.
5. The agent will also reason through more complex word problems. For example, prompt the agent with the following math problem:
```plain
The equation (24x^2 + 25x - 47)/(ax - 2) = -8x - 3 - 53/(ax - 2) is true for all values of x ≠ 2/a, where a is a constant.
What is the value of a?
A) -16
B) -3
C) 3
D) 16
```
The agent should respond with `B`.
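You can verify the answer with the same kind of computation the **Python REPL** tool performs. Assuming the equation takes the well-known form (24x² + 25x − 47)/(ax − 2) = −8x − 3 − 53/(ax − 2), substituting a = −3 makes both sides agree for every valid x:

```python
a = -3  # candidate answer B

def lhs(x: float) -> float:
    """Left-hand side of the equation."""
    return (24 * x**2 + 25 * x - 47) / (a * x - 2)

def rhs(x: float) -> float:
    """Right-hand side of the equation."""
    return -8 * x - 3 - 53 / (a * x - 2)

# The identity should hold for any x other than 2/a.
samples = [1.0, 2.5, -4.0, 10.0]
checks = [abs(lhs(x) - rhs(x)) < 1e-9 for x in samples]
```

Equivalently, expanding (−3x − 2)(−8x − 3) − 53 gives 24x² + 25x − 47, which is why the identity holds for all x ≠ 2/a.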
Now that your query has completed the journey from **Chat input** to **Chat output**, you have completed the **Math Agent** flow.


@ -1,13 +1,13 @@
---
title: Memory chatbot
slug: /tutorials-memory-chatbot
---
import Icon from "@site/src/components/icon";
This flow extends the [basic prompting flow](/starter-projects-basic-prompting) with a **Chat memory** component that stores up to 100 previous chat messages and uses them to provide context for the current conversation.
## Prerequisites
- [Langflow installed and running](/get-started-installation)
- [OpenAI API key created](https://platform.openai.com/)
@ -20,31 +20,49 @@ This flow extends the [basic prompting](/starter-projects-basic-prompting) flo
![](/img/starter-flow-memory-chatbot.png)
This flow adds a **Chat Memory** component to the Basic Prompting flow.
This component retrieves previous messages and sends them to the **Prompt** component to fill a part of the **Template** with context.
To examine the template, click the **Template** field in the **Prompt** component.
The **Prompt** tells the **OpenAI model** component how to respond to input.
```plain
You are a helpful assistant that answers questions.
Use markdown to format your answer, properly embedding images and urls.
History:
{memory}
```
The `{memory}` code in the prompt creates a new input port in the component called **memory**.
The **Chat Memory** component is connected to this port to store chat messages from the **Playground**.
This gives the **OpenAI** component a memory of previous chat messages.
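Conceptually, the `{memory}` port works like template substitution: the **Chat Memory** component supplies a transcript that replaces `{memory}` before the prompt reaches the model. A minimal sketch of that substitution (the variable names are illustrative, not Langflow internals):

```python
TEMPLATE = """You are a helpful assistant that answers questions.
Use markdown to format your answer, properly embedding images and urls.

History:

{memory}"""

# Messages the Chat Memory component might retrieve from the session store.
history = [
    ("User", "Hi, my name is Luca."),
    ("AI", "Nice to meet you, Luca!"),
]

# Flatten the transcript and substitute it into the {memory} slot.
memory_text = "\n".join(f"{role}: {text}" for role, text in history)
prompt = TEMPLATE.format(memory=memory_text)
```

The model never "remembers" anything itself; each turn, the filled-in prompt carries the prior conversation to it as plain text.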
## Run the memory chatbot flow
1. Open the Playground.
1. Open the **Playground**.
2. Type multiple questions. For example, try entering this conversation:
```plain
Hi, my name is Luca.
Please tell me about PostgreSQL.
What is my name?
What is the second subject I asked you about?
```
The chatbot remembers your name and previous questions.
3. To view the **Message Logs** pane, click <Icon name="Ellipsis" aria-label="Horizontal ellipsis" />, and then click **Message Logs**.
The **Message Logs** pane displays all previous messages, with each conversation sorted by `session_id`.
![](/img/messages-logs.png)
## Use Session ID with the memory chatbot flow
---
`session_id` is a unique identifier in Langflow that stores conversation sessions between the AI and a user. A `session_id` is created when a conversation is initiated, and then associated with all subsequent messages during that session.
In the **Memory Chatbot** flow you created, the **Chat Memory** component references past interactions by **Session ID**. You can demonstrate this by modifying the **Session ID** value to switch between conversation histories.
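The same mechanism applies when calling a flow through the Langflow API: including a `session_id` in the request body pins the message to a specific conversation history. A sketch of the request payload (the URL and `FLOW_ID` are placeholders for your own deployment):

```python
import json

# Placeholder endpoint: substitute your Langflow host and flow ID.
LANGFLOW_URL = "http://127.0.0.1:7860/api/v1/run/FLOW_ID"

payload = {
    "input_value": "What is my name?",
    "input_type": "chat",
    "output_type": "chat",
    # Any stable string works; reusing it selects the same stored memory.
    "session_id": "luca-conversation-1",
}
body = json.dumps(payload)  # POST this body to LANGFLOW_URL
```

Sending the same `session_id` on later requests continues that conversation; a new value starts a fresh memory session.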
@ -52,10 +70,4 @@ In the **Memory Chatbot** flow you created, the **Chat Memory** component re
2. Now, when you send a new message in the **Playground**, a new session appears in the **Message Logs** pane.
3. Notice how your conversation is being stored in different memory sessions.
Learn more about chat memories in the [Memory](/components-memories) section.


@ -39,9 +39,7 @@ The **Travel Planning Agent** flow consists of these components:
## Run the travel planning agent flow
1. Add your credentials to the OpenAI and Search API components.
2. Click **Playground** to start a chat session.
You should receive a detailed, helpful answer to the journey defined in the **Chat input** component.
Now that your query has completed the journey from **Chat input** to **Chat output**, you have completed the **Travel Planning Agent** flow.


@ -145,8 +145,11 @@ const config = {
],
},
{
to: "/configuration-objects",
from: "/guides-data-message",
to: "/concepts-objects",
from: [
"/guides-data-message",
"/configuration-objects",
]
},
{
to: "/tutorials-sequential-agent",



@ -3,7 +3,7 @@ module.exports = {
"Get-Started/welcome-to-langflow",
{
type: "category",
label: "Get Started",
label: "Get started",
items: [
"Get-Started/get-started-installation",
"Get-Started/get-started-quickstart",
@ -11,7 +11,7 @@ module.exports = {
},
{
type: "category",
label: "Starter Projects",
label: "Starter projects",
items: [
'Starter-Projects/starter-projects-basic-prompting',
'Starter-Projects/starter-projects-vector-store-rag',
@ -25,6 +25,7 @@ module.exports = {
'Tutorials/tutorials-blog-writer',
'Tutorials/tutorials-document-qa',
'Tutorials/tutorials-memory-chatbot',
'Tutorials/tutorials-math-agent',
'Tutorials/tutorials-sequential-agent',
'Tutorials/tutorials-travel-planning-agent',
],
@ -37,6 +38,7 @@ module.exports = {
"Concepts/concepts-playground",
"Concepts/concepts-components",
"Concepts/concepts-flows",
"Concepts/concepts-objects",
"Concepts/concepts-api",
],
},
@ -79,7 +81,6 @@ module.exports = {
"Configuration/configuration-cli",
"Configuration/configuration-global-variables",
"Configuration/environment-variables",
"Configuration/configuration-objects",
"Configuration/configuration-security-best-practices"
],
},
@ -137,7 +138,7 @@ module.exports = {
},
{
type: "category",
label: "API Reference",
label: "API reference",
items: [
{
type: "doc",
@ -151,5 +152,16 @@ module.exports = {
},
],
},
{
type: "category",
label: "Changelog",
items: [
{
type: "link",
label: "Changelog",
href: "https://github.com/langflow-ai/langflow/releases/latest",
},
],
},
],
};

BIN
docs/static/img/loop-text-summarizer.png vendored Normal file
