merge fix

commit e6f466188c
cristhianzl 2024-05-29 20:15:13 -03:00
75 changed files with 3041 additions and 518 deletions


@@ -2,7 +2,7 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
import Admonition from "@theme/Admonition";
import ReactPlayer from "react-player";
-# Global environment variables
+# Global Environment Variables
Langflow 1.0 alpha includes the option to add **Global Environment Variables** for your application.


@@ -4,7 +4,7 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
import Admonition from "@theme/Admonition";
-# Sign up and Sign in
+# Sign Up and Sign In
## Introduction


@@ -1,6 +1,6 @@
import ZoomableImage from "/src/theme/ZoomableImage.js";
-# How to contribute components?
+# How to Contribute Components?
As of Langflow 1.0 alpha, new components are added as objects of the [CustomComponent](https://github.com/langflow-ai/langflow/blob/dev/src/backend/base/langflow/interface/custom/custom_component/custom_component.py) class and any dependencies are added to the [pyproject.toml](https://github.com/langflow-ai/langflow/blob/dev/pyproject.toml#L27) file.
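As a rough illustration of the `CustomComponent` pattern (a `build_config` method describing the fields, plus a `build` method doing the work), here is a minimal hypothetical component sketch. The base class below is a stand-in so the snippet runs without Langflow installed; a real contribution would subclass the `CustomComponent` class linked above.

```python
# Hypothetical stand-in for langflow's CustomComponent base class,
# used here only so the sketch runs standalone.
class CustomComponent:
    status: str = ""


class ReverseTextComponent(CustomComponent):
    display_name = "Reverse Text"
    description = "Reverses the input text."

    def build_config(self):
        # Describes the fields shown in the component's settings panel.
        return {
            "text": {
                "display_name": "Text",
                "field_type": "str",
                "info": "The text to reverse.",
            },
        }

    def build(self, text: str) -> str:
        result = text[::-1]
        self.status = result  # shown when hovering the component status icon
        return result
```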


@@ -4,7 +4,7 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
import Admonition from "@theme/Admonition";
-# 🎨 Langflow canvas
+# 🎨 Langflow Canvas
The **Langflow canvas** is the central hub of Langflow, where you'll assemble new flows from components, run them, and see the results.
@@ -20,11 +20,38 @@ You can either build this flow yourself, or select **New Project** > **Basic pro
style={{ width: "30%", margin: "20px auto" }}
/>
For more on the difference between flows, components, collections, and projects, see [Flows, collections, components, and projects](./flows-components-collections.mdx).
## Flows, components, collections, and projects
## Components
A [flow](#flow) is a pipeline of components connected together in the Langflow canvas.
-A component is a building block of a flow.
+A [component](#component) is a single building block within a flow. A component has inputs, outputs, and parameters that define its functionality.
A [collection](#collection) is a snapshot of the flows available in your database. Collections can be downloaded to local storage and uploaded for future use.
A [project](#project) can be a component or a flow. Projects are saved as part of your collection.
For example, the **OpenAI LLM** is a **component** of the **Basic prompting** flow, and the **flow** is stored in a **collection**.
## Flow
A **flow** is a pipeline of components connected together in the Langflow canvas.
For example, the [Basic prompting](../starter-projects/basic-prompting.mdx) flow is a pipeline of four components:
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/basic-prompting.png",
dark: "img/basic-prompting.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
In this flow, the **OpenAI LLM component** receives input (left side) and produces output (right side) - in this case, receiving input from the **Chat Input** and **Prompt** components and producing output to the **Chat Output** component.
## Component
Components are the building blocks of flows. They consist of inputs, outputs, and parameters that define their functionality. These elements provide a convenient and straightforward way to compose LLM-based applications. Learn more about components and how they work in the LangChain [documentation](https://python.langchain.com/docs/integrations/components).
<div style={{ marginBottom: "20px" }}>
During the flow creation process, you will notice handles (colored circles)
@@ -55,7 +82,7 @@ A component is a building block of a flow.
<div style={{ marginBottom: "20px" }}>
In the top right corner of the component, you'll find the component status icon (![Status icon](/logos/playbutton.svg)).
-Run the flow by clicking the **![Playground icon](/logos/botmessage.svg)Playground** button at the bottom right of the canvas.
+Build the flow by clicking the **![Playground icon](/logos/botmessage.svg)Playground** at the bottom right of the canvas.
Once the validation is complete, the status of each validated component should turn green (![Status icon](/logos/greencheck.svg)).
To debug, hover over the component status to see the outputs.
@@ -64,6 +91,16 @@ To debug, hover over the component status to see the outputs.
---
### Component Parameters
Langflow components can be edited by clicking the component settings button. Hide parameters to reduce complexity and keep the canvas clean and intuitive for experimentation.
<div
style={{ marginBottom: "20px", display: "flex", justifyContent: "center" }}
>
<ReactPlayer playing controls url="/videos/langflow_parameters.mp4" />
</div>
### Component menu
Each component is a little different, but they all have a menu bar on top that looks something like this.
@@ -78,7 +115,7 @@ The menu options are **Code**, **Save**, **Duplicate**, and **More**.
style={{ width: "30%", margin: "20px auto" }}
/>
-#### Code
+### Code menu
The **Code** button displays your component's Python code.
You can modify the code and save it.
@@ -191,6 +228,34 @@ For example, changing the **Chat Input** component's `input_value` will change t
<ReactPlayer playing controls url="/videos/langflow_api.mp4" />
</div>
## Collection
A collection is a snapshot of flows available in a database.
Collections can be downloaded to local storage and uploaded for future use.
<div
style={{ marginBottom: "20px", display: "flex", justifyContent: "center" }}
>
<ReactPlayer playing controls url="/videos/langflow_collection.mp4" />
</div>
## Project
A **Project** can be a flow or a component. To view your saved projects, select **My Collection**.
Your **Projects** are displayed.
Click the **![Playground icon](/logos/botmessage.svg) Playground** button to run a flow from the **My Collection** screen.
In the top left corner of the screen are options for **Download Collection**, **Upload Collection**, and **New Project**.
Select **Download Collection** to save your project to your local machine. This downloads all flows and components as a `.json` file.
Select **Upload Collection** to upload a flow or component `.json` file from your local machine.
Select **New Project** to create a new project. In addition to a blank canvas, [starter projects](../starter-projects/basic-prompting.mdx) are also available.
## Project options menu
To see options for your project, in the upper left corner of the canvas, select the dropdown menu.


@@ -17,23 +17,6 @@ A [project](#project) can be a component or a flow. Projects are saved as part o
For example, the **OpenAI LLM** is a **component** of the **Basic prompting** flow, and the **flow** is stored in a **collection**.
## Flow
A **flow** is a pipeline of components connected together in the Langflow canvas.
For example, the [Basic prompting](../starter-projects/basic-prompting.mdx) flow is a pipeline of four components:
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/basic-prompting.png",
dark: "img/basic-prompting.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
For example, the **OpenAI LLM component** receives input (left side) and produces output (right side) - in this case, receiving input from the **Chat Input** and **Prompt** components and producing output to the **Chat Output** component.
## Component
Components are the building blocks of flows. They consist of inputs, outputs, and parameters that define their functionality. These elements provide a convenient and straightforward way to compose LLM-based applications. Learn more about components and how they work in the LangChain [documentation](https://python.langchain.com/docs/integrations/components).
@@ -107,9 +90,3 @@ Your **Projects** are displayed.
Click the **![Playground icon](/logos/botmessage.svg) Playground** button to run a flow from the **My Collection** screen.
In the top left corner of the screen are options for **Download Collection**, **Upload Collection**, and **New Project**.
Select **Download Collection** to save your project to your local machine. This downloads all flows and components as a `.json` file.
Select **Upload Collection** to upload a flow or component `.json` file from your local machine.
Select **New Project** to create a new project. In addition to a blank canvas, [starter projects](../starter-projects/basic-prompting.mdx) are also available.


@@ -1,38 +0,0 @@
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import Admonition from "@theme/Admonition";
# 🤗 HuggingFace Spaces
HuggingFace provides a great alternative for running Langflow in their Spaces environment. This means you can run Langflow without any local installation required.
In a Chromium-based browser, go to the [Langflow Space](https://huggingface.co/spaces/Langflow/Langflow?duplicate=true) or [Langflow v1.0 alpha Preview Space](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true).
You'll be presented with the following screen:
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/duplicate-space.png",
dark: "img/duplicate-space.png",
}}
style={{ width: "100%", margin: "20px auto" }}
/>
Name your Space, define the visibility (Public or Private), and click on **Duplicate Space** to start the installation process. When installation is finished, you'll be redirected to the Space's main page to start using Langflow right away!
## Run a starter project
Langflow provides a range of example flows to help you get started.
Once you get Langflow running in your Space, click on **New Project** in the top right corner of the screen.
Select a starter project from the list, set up your API keys, and click ⚡ Run. This will open up Langflow's Interaction Panel with the chat console, text inputs, and outputs ready to go.
For more information on the starter projects, see the guides below:
* [Basic prompting](/starter-projects/basic-prompting.mdx)
* [Memory chatbot](/starter-projects/memory-chatbot.mdx)
* [Blog writer](/starter-projects/blog-writer.mdx)
* [Document QA](/starter-projects/document-qa.mdx)


@@ -6,35 +6,40 @@ import Admonition from "@theme/Admonition";
# 📦 Install Langflow
<Admonition type="info">
-Langflow v1.0 alpha is also available in [HuggingFace Spaces](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true). Try it out or follow the instructions [here](./huggingface-spaces) to install it locally.
+Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space using this link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true) to create your own Langflow workspace in minutes.
</Admonition>
Langflow requires [Python 3.10](https://www.python.org/downloads/release/python-3100/) and [pip](https://pypi.org/project/pip/) or [pipx](https://pipx.pypa.io/stable/installation/) to be installed on your system.
Install Langflow with pip:
```bash
python -m pip install langflow -U
```
Install Langflow with pipx:
```bash
pipx install langflow --python python3.10 --fetch-missing-python
```
Pipx can fetch the missing Python version for you with `--fetch-missing-python`, but you can also install the Python version manually.
## Install Langflow pre-release
To install a pre-release version of Langflow:
pip:
```bash
python -m pip install langflow --pre --force-reinstall
```
pipx:
```bash
pipx install langflow --python python3.10 --fetch-missing-python --pip-args="--pre --force-reinstall"
```
@@ -54,11 +59,13 @@ python -m langflow --help
## ⛓️ Run Langflow
1. To run Langflow, enter the following command.
```bash
python -m langflow run
```
2. Confirm that a local Langflow instance starts by visiting `http://127.0.0.1:7860` in a Chromium-based browser.
```bash
│ Welcome to ⛓ Langflow │
│ │
@@ -66,4 +73,23 @@ python -m langflow run
│ Collaborate, and contribute at our GitHub Repo 🚀 │
```
3. Continue on to the [Quickstart](./quickstart.mdx).
## HuggingFace Spaces
HuggingFace provides a great alternative for running Langflow in their Spaces environment. This means you can run Langflow without any local installation required.
In a Chromium-based browser, go to the [Langflow Space](https://huggingface.co/spaces/Langflow/Langflow?duplicate=true) or [Langflow v1.0 alpha Preview Space](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true).
You'll be presented with the following screen:
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/duplicate-space.png",
dark: "img/duplicate-space.png",
}}
style={{ width: "100%", margin: "20px auto" }}
/>
Name your Space, define the visibility (Public or Private), and click on **Duplicate Space** to start the installation process. When installation is finished, you'll be redirected to the Space's main page to start using Langflow right away!


@@ -10,12 +10,15 @@ This guide demonstrates how to build a basic prompt flow and modify that prompt
## Prerequisites
-* [Langflow installed](./install-langflow.mdx)
+- [Langflow installed and running](./install-langflow.mdx)
-* [OpenAI API key](https://platform.openai.com)
+- [OpenAI API key](https://platform.openai.com)
<Admonition type="info">
-Langflow v1.0 alpha is also available in [HuggingFace Spaces](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true). Try it out or follow the instructions [here](./huggingface-spaces) to install it locally.
+Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space using this link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true) to create your own Langflow workspace in minutes.
</Admonition>
## Hello World - Basic Prompting
@@ -44,25 +47,25 @@ Examine the **Prompt** component. The **Template** field instructs the LLM to `A
This should be interesting...
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
   1. In the **Variable Name** field, enter `openai_api_key`.
   2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
   3. Click **Save Variable**.
## Run the basic prompting flow
1. Click the **Run** button.
   The **Interaction Panel** opens, where you can chat with your bot.
2. Type a message and press Enter.
   And... Ahoy! 🏴‍☠️
   The bot responds in a piratical manner!
## Modify the prompt for a different result
1. To modify your prompt results, in the **Prompt** template, click the **Template** field.
   The **Edit Prompt** window opens.
2. Change `Answer the user as if you were a pirate` to a different character, perhaps `Answer the user as if you were Harold Abelson.`
3. Run the basic prompting flow again.
   The response will be markedly different.
## Next steps
@@ -72,8 +75,6 @@ By adding Langflow components to your flow, you can create all sorts of interest
Here are a couple of examples:
-* [Memory chatbot](/starter-projects/memory-chatbot.mdx)
-* [Blog writer](/starter-projects/blog-writer.mdx)
-* [Document QA](/starter-projects/document-qa.mdx)
+- [Memory chatbot](/starter-projects/memory-chatbot.mdx)
+- [Blog writer](/starter-projects/blog-writer.mdx)
+- [Document QA](/starter-projects/document-qa.mdx)


@@ -26,9 +26,14 @@ Its intuitive interface allows for easy manipulation of AI building blocks, enab
- [Quickstart](/getting-started/quickstart) - Create a flow and run it.
- [HuggingFace Spaces](/getting-started/huggingface-spaces) - Duplicate the Langflow preview space and try it out before you install.
- [Langflow Canvas](/getting-started/canvas) - Learn more about the Langflow canvas.
- [New to LLMs?](/getting-started/new-to-llms) - Learn more about LLMs, prompting, and more at [promptingguide.ai](https://promptingguide.ai).
<Admonition type="info">
Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space
using this
link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true)
to create your own Langflow workspace in minutes.
</Admonition>
## Learn more about Langflow 1.0


@@ -0,0 +1,138 @@
import Admonition from "@theme/Admonition";
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
# Add Content To Page
The `AddContentToPage` component converts markdown text to Notion blocks and appends them to a Notion page.
[Notion Reference](https://developers.notion.com/reference/patch-block-children)
<Admonition type="tip" title="Component Functionality">
The `AddContentToPage` component enables you to:
- Convert markdown text to Notion blocks.
- Append the converted blocks to a specified Notion page.
- Seamlessly integrate Notion content creation into Langflow workflows.
</Admonition>
## Component Usage
To use the `AddContentToPage` component in a Langflow flow:
1. **Add the `AddContentToPage` component** to your flow.
2. **Configure the component** by providing:
- `markdown_text`: The markdown text to convert.
- `block_id`: The ID of the Notion page/block to append the content.
- `notion_secret`: The Notion integration token for authentication.
3. **Connect the component** to other nodes in your flow as needed.
4. **Run the flow** to convert the markdown text and append it to the specified Notion page.
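The markdown-to-blocks conversion in step 2 can be sketched as follows. This is a naive, hypothetical helper (not the component's actual code) that handles only plain paragraph lines; the block shape follows the Notion paragraph block object from the API reference linked above.

```python
import json


def markdown_lines_to_blocks(markdown_text: str) -> list[dict]:
    """Naive sketch: turn each non-empty markdown line into a Notion paragraph block."""
    blocks = []
    for line in markdown_text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        blocks.append({
            "object": "block",
            "type": "paragraph",
            "paragraph": {
                "rich_text": [{"type": "text", "text": {"content": line}}]
            },
        })
    return blocks


# The resulting list is what a PATCH to
# https://api.notion.com/v1/blocks/{block_id}/children expects as "children".
payload = json.dumps({"children": markdown_lines_to_blocks("Hello\n\nWorld")})
```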
## Component Python Code
```python
import json
from typing import Optional
import requests
from langflow.custom import CustomComponent
class NotionPageCreator(CustomComponent):
display_name = "Create Page [Notion]"
description = "A component for creating Notion pages."
documentation: str = "https://docs.langflow.org/integrations/notion/add-content-to-page"
icon = "NotionDirectoryLoader"
def build_config(self):
return {
"database_id": {
"display_name": "Database ID",
"field_type": "str",
"info": "The ID of the Notion database.",
},
"notion_secret": {
"display_name": "Notion Secret",
"field_type": "str",
"info": "The Notion integration token.",
"password": True,
},
"properties": {
"display_name": "Properties",
"field_type": "str",
"info": "The properties of the new page. Depending on your database setup, this can change. E.G: {'Task name': {'id': 'title', 'type': 'title', 'title': [{'type': 'text', 'text': {'content': 'Send Notion Components to LF', 'link': null}}]}}",
},
}
def build(
self,
database_id: str,
notion_secret: str,
properties: str = '{"Task name": {"id": "title", "type": "title", "title": [{"type": "text", "text": {"content": "Send Notion Components to LF", "link": null}}]}}',
) -> str:
if not database_id or not properties:
raise ValueError("Invalid input. Please provide 'database_id' and 'properties'.")
headers = {
"Authorization": f"Bearer {notion_secret}",
"Content-Type": "application/json",
"Notion-Version": "2022-06-28",
}
data = {
"parent": {"database_id": database_id},
"properties": json.loads(properties),
}
response = requests.post("https://api.notion.com/v1/pages", headers=headers, json=data)
if response.status_code == 200:
page_id = response.json()["id"]
self.status = f"Successfully created Notion page with ID: {page_id}\n {str(response.json())}"
return response.json()
else:
error_message = f"Failed to create Notion page. Status code: {response.status_code}, Error: {response.text}"
self.status = error_message
raise Exception(error_message)
```
## Example Usage
<Admonition type="info" title="Example Usage">
Example of using the `AddContentToPage` component in a Langflow flow using Markdown as input:
<ZoomableImage
alt="NotionDatabaseProperties Flow Example"
sources={{
light: "img/notion/AddContentToPage_flow_example.png",
dark: "img/notion/AddContentToPage_flow_example.png",
}}
style={{ width: "100%", margin: "20px 0" }}
/>
In this example, the `AddContentToPage` component connects to a `MarkdownLoader` component to provide the markdown text input. The converted Notion blocks are appended to the specified Notion page using the provided `block_id` and `notion_secret`.
</Admonition>
## Best Practices
When using the `AddContentToPage` component:
- Ensure markdown text is well-formatted.
- Verify the `block_id` corresponds to the right Notion page/block.
- Keep your Notion integration token secure.
- Test with sample markdown text before production use.
The `AddContentToPage` component is a powerful tool for integrating Notion content creation into Langflow workflows, facilitating easy conversion of markdown text to Notion blocks and appending them to specific pages.
## Troubleshooting
If you encounter any issues while using the `AddContentToPage` component, consider the following:
- Verify the Notion integration token's validity and permissions.
- Check the Notion API documentation for updates.
- Ensure markdown text is properly formatted.
- Double-check the `block_id` for correctness.


@@ -0,0 +1,43 @@
import Admonition from "@theme/Admonition";
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
# Introduction to Notion in Langflow
The Notion integration in Langflow enables seamless connectivity with Notion databases, pages, and users, facilitating automation and improving productivity.
<ZoomableImage
alt="Notion Components in Langflow"
sources={{
light: "img/notion/notion_components_bundle.png",
dark: "img/notion/notion_components_bundle_dark.png",
}}
style={{ width: "100%", margin: "20px 0" }}
/>
#### <a target="\_blank" href="json_files/Notion_Components_bundle.json" download>Download Notion Components Bundle</a>
### Key Features of Notion Integration in Langflow
- **List Pages**: Retrieve a list of pages from a Notion database and access data stored in your Notion workspace.
- **List Database Properties**: Obtain insights into the properties of a Notion database, allowing for easy understanding of its structure and metadata.
- **Add Page Content**: Programmatically add new content to a Notion page, simplifying the creation and updating of pages.
- **List Users**: Retrieve a list of users with access to a Notion workspace, aiding in user management and collaboration.
- **Update Property**: Update the value of a specific property in a Notion page, enabling easy modification and maintenance of Notion data.
### Potential Use Cases for Notion Integration in Langflow
- **Task Automation**: Automate task creation in Notion using Langflow's AI capabilities. Describe the required tasks, and they will be automatically created and updated in Notion.
- **Context Extraction from Meetings**: Leverage AI to analyze meeting contexts, extract key points, and update the relevant Notion pages automatically.
- **Content Creation**: Utilize AI to generate ideas, suggest templates, and populate Notion pages with relevant data, enhancing content management efficiency.
### Getting Started with Notion Integration in Langflow
1. **Set Up Notion Integration**: Follow the guide [Setting up a Notion App](./setup) to set up a Notion integration in your workspace.
2. **Configure Notion Components**: Provide the necessary authentication details and parameters to configure the Notion components in your Langflow flows.
3. **Connect Components**: Integrate Notion components with other Langflow components to build your workflow.
4. **Test and Refine**: Ensure your Langflow flow operates as intended by testing and refining it.
5. **Deploy and Run**: Deploy your Langflow flow to automate Notion-related tasks and processes.
The Notion integration in Langflow offers a powerful toolset for automation and productivity enhancement. Whether managing tasks, extracting meeting insights, or creating content, Langflow and Notion provide robust solutions for streamlining workflows.


@@ -0,0 +1,117 @@
import Admonition from "@theme/Admonition";
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
# Database Properties
The `NotionDatabaseProperties` component retrieves properties of a Notion database. It provides a convenient way to integrate Notion database information into your Langflow workflows.
[Notion Reference](https://developers.notion.com/reference/post-database-query)
<Admonition type="tip" title="Component Functionality">
The `NotionDatabaseProperties` component enables you to:
- Retrieve properties of a Notion database
- Access the retrieved properties in your Langflow flows
- Integrate Notion database information seamlessly into your workflows
</Admonition>
## Component Usage
To use the `NotionDatabaseProperties` component in a Langflow flow, follow these steps:
1. Add the `NotionDatabaseProperties` component to your flow.
2. Configure the component by providing the required inputs:
- `database_id`: The ID of the Notion database you want to retrieve properties from.
- `notion_secret`: The Notion integration token for authentication.
3. Connect the output of the `NotionDatabaseProperties` component to other components in your flow as needed.
## Component Python code
```python
import requests
from typing import Dict
from langflow import CustomComponent
from langflow.schema import Record
class NotionDatabaseProperties(CustomComponent):
display_name = "List Database Properties [Notion]"
description = "Retrieve properties of a Notion database."
documentation: str = "https://docs.langflow.org/integrations/notion/list-database-properties"
icon = "NotionDirectoryLoader"
def build_config(self):
return {
"database_id": {
"display_name": "Database ID",
"field_type": "str",
"info": "The ID of the Notion database.",
},
"notion_secret": {
"display_name": "Notion Secret",
"field_type": "str",
"info": "The Notion integration token.",
"password": True,
},
}
def build(
self,
database_id: str,
notion_secret: str,
) -> Record:
url = f"https://api.notion.com/v1/databases/{database_id}"
headers = {
"Authorization": f"Bearer {notion_secret}",
"Notion-Version": "2022-06-28", # Use the latest supported version
}
response = requests.get(url, headers=headers)
response.raise_for_status()
data = response.json()
properties = data.get("properties", {})
record = Record(text=str(response.json()), data=properties)
self.status = f"Retrieved {len(properties)} properties from the Notion database.\n {record.text}"
return record
```
## Example Usage
<Admonition type="info" title="Example Usage">
Here's an example of how you can use the `NotionDatabaseProperties` component in a Langflow flow:
<ZoomableImage
alt="NotionDatabaseProperties Flow Example"
sources={{
light: "img/notion/NotionDatabaseProperties_flow_example.png",
dark: "img/notion/NotionDatabaseProperties_flow_example_dark.png",
}}
style={{ width: "100%", margin: "20px 0" }}
/>
In this example, the `NotionDatabaseProperties` component retrieves the properties of a Notion database, and the retrieved properties are then used as input for subsequent components in the flow.
</Admonition>
## Best Practices
When using the `NotionDatabaseProperties` component, consider the following best practices:
- Ensure that you have a valid Notion integration token with the necessary permissions to access the desired database.
- Double-check the database ID to avoid retrieving properties from the wrong database.
- Handle potential errors gracefully by checking the response status and providing appropriate error messages.
The `NotionDatabaseProperties` component simplifies the process of retrieving properties from a Notion database and integrating them into your Langflow workflows. By leveraging this component, you can easily access and utilize Notion database information in your flows, enabling powerful integrations and automations.
Feel free to explore the capabilities of the `NotionDatabaseProperties` component and experiment with different use cases to enhance your Langflow workflows!
## Troubleshooting
If you encounter any issues while using the `NotionDatabaseProperties` component, consider the following:
- Verify that the Notion integration token is valid and has the required permissions.
- Check the database ID to ensure it matches the intended Notion database.
- Inspect the response from the Notion API for any error messages or status codes that may indicate the cause of the issue.


@@ -0,0 +1,179 @@
import Admonition from "@theme/Admonition";
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
# List Pages
The `NotionListPages` component queries a Notion database with filtering and sorting. It provides a convenient way to integrate Notion database querying capabilities into your Langflow workflows.
[Notion Reference](https://developers.notion.com/reference/post-database-query)
<Admonition type="tip" title="Component Functionality">
The `NotionListPages` component enables you to:
- Query a Notion database with custom filters and sorting options
- Retrieve specific pages from a Notion database based on the provided criteria
- Integrate Notion database data seamlessly into your Langflow workflows
</Admonition>
## Component Usage
To use the `NotionListPages` component in a Langflow flow, follow these steps:
1. **Add the `NotionListPages` component to your flow.**
2. **Configure the component by providing the required parameters:**
- `notion_secret`: The Notion integration token for authentication.
- `database_id`: The ID of the Notion database you want to query.
- `query_payload`: A JSON string containing the filters and sorting options for the query.
3. **Connect the `NotionListPages` component to other components in your flow as needed.**
## Component Python code
```python
import requests
import json
from typing import Dict, Any, List
from langflow.custom import CustomComponent
from langflow.schema import Record
class NotionListPages(CustomComponent):
display_name = "List Pages [Notion]"
description = (
"Query a Notion database with filtering and sorting. "
"The input should be a JSON string containing the 'filter' and 'sorts' objects. "
"Example input:\n"
'{"filter": {"property": "Status", "select": {"equals": "Done"}}, "sorts": [{"timestamp": "created_time", "direction": "descending"}]}'
)
documentation: str = "https://docs.langflow.org/integrations/notion/list-pages"
icon = "NotionDirectoryLoader"
field_order = [
"notion_secret",
"database_id",
"query_payload",
]
def build_config(self):
return {
"notion_secret": {
"display_name": "Notion Secret",
"field_type": "str",
"info": "The Notion integration token.",
"password": True,
},
"database_id": {
"display_name": "Database ID",
"field_type": "str",
"info": "The ID of the Notion database to query.",
},
"query_payload": {
"display_name": "Database query",
"field_type": "str",
"info": "A JSON string containing the filters that will be used for querying the database. EG: {'filter': {'property': 'Status', 'status': {'equals': 'In progress'}}}",
},
}
def build(
self,
notion_secret: str,
database_id: str,
query_payload: str = "{}",
) -> List[Record]:
try:
query_data = json.loads(query_payload)
filter_obj = query_data.get("filter")
sorts = query_data.get("sorts", [])
url = f"https://api.notion.com/v1/databases/{database_id}/query"
headers = {
"Authorization": f"Bearer {notion_secret}",
"Content-Type": "application/json",
"Notion-Version": "2022-06-28",
}
data = {
"sorts": sorts,
}
if filter_obj:
data["filter"] = filter_obj
response = requests.post(url, headers=headers, json=data)
response.raise_for_status()
results = response.json()
records = []
combined_text = f"Pages found: {len(results['results'])}\n\n"
for page in results['results']:
page_data = {
'id': page['id'],
'url': page['url'],
'created_time': page['created_time'],
'last_edited_time': page['last_edited_time'],
'properties': page['properties'],
}
text = (
f"id: {page['id']}\n"
f"url: {page['url']}\n"
f"created_time: {page['created_time']}\n"
f"last_edited_time: {page['last_edited_time']}\n"
f"properties: {json.dumps(page['properties'], indent=2)}\n\n"
)
combined_text += text
records.append(Record(text=text, data=page_data))
self.status = combined_text.strip()
return records
except Exception as e:
self.status = f"An error occurred: {str(e)}"
            return [Record(text=self.status, data={})]
```
## Example Usage

<Admonition type="info" title="Example Usage">
Here's an example of how you can use the `NotionListPages` component in a Langflow flow, passing its output to the Prompt component:
<ZoomableImage
  alt="NotionListPages Flow Example"
sources={{
light: "img/notion/NotionListPages_flow_example.png",
dark: "img/notion/NotionListPages_flow_example_dark.png",
}}
style={{ width: "100%", margin: "20px 0" }}
/>
In this example, the `NotionListPages` component is used to retrieve specific pages from a Notion database based on the provided filters and sorting options. The retrieved data can then be processed further in the subsequent components of the flow.
</Admonition>
## Best Practices
When using the `NotionListPages` component, consider the following best practices:
- Ensure that you have a valid Notion integration token with the necessary permissions to query the desired database.
- Construct the `query_payload` JSON string carefully, following the Notion API documentation for filtering and sorting options.
The `NotionListPages` component provides a powerful way to integrate Notion database querying capabilities into your Langflow workflows. By leveraging this component, you can easily retrieve specific pages from a Notion database based on custom filters and sorting options, enabling you to build more dynamic and data-driven flows.

We encourage you to explore the capabilities of the `NotionListPages` component further and experiment with different querying scenarios to unlock the full potential of integrating Notion databases into your Langflow workflows.
## Troubleshooting
If you encounter any issues while using the `NotionListPages` component, consider the following:
- Double-check that the `notion_secret` and `database_id` are correct and valid.
- Verify that the `query_payload` JSON string is properly formatted and contains valid filtering and sorting options.
- Check the Notion API documentation for any updates or changes that may affect the component's functionality.
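A quick way to rule out a malformed `query_payload` is to parse it before running the flow. A minimal sketch (the payload below is just an example):

```python
import json

# Example payload; substitute the string you plan to pass to the component.
payload = '{"filter": {"property": "Status", "select": {"equals": "Done"}}}'

try:
    parsed = json.loads(payload)
    error = None
except json.JSONDecodeError as err:
    parsed, error = None, str(err)

print("valid" if error is None else f"invalid: {error}")
```

If parsing fails here, the component will fail with the same payload, so fix the JSON before debugging anything else.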
import Admonition from "@theme/Admonition";
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
# User List
The `NotionUserList` component retrieves users from Notion. It provides a convenient way to integrate Notion user data into your Langflow workflows.
[Notion Reference](https://developers.notion.com/reference/get-users)
<Admonition type="tip" title="Component Functionality">
The `NotionUserList` component enables you to:
- Retrieve user data from Notion
- Access user information such as ID, type, name, and avatar URL
- Integrate Notion user data seamlessly into your Langflow workflows
</Admonition>
## Component Usage
To use the `NotionUserList` component in a Langflow flow, follow these steps:
1. Add the `NotionUserList` component to your flow.
2. Configure the component by providing the required Notion secret token.
3. Connect the component to other nodes in your flow as needed.
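Each user returned by the Notion API is formatted into a labeled plain-text record. The shaping logic can be sketched in isolation against a sample response (the user values below are invented for illustration):

```python
# Shaped like entries from Notion's GET /v1/users response (values invented).
sample_users = [
    {"id": "user-123", "type": "person", "name": "Ada", "avatar_url": ""},
]

def format_user(user: dict) -> str:
    """Render one user dict as the labeled text block the component emits."""
    data = {
        "id": user["id"],
        "type": user["type"],
        "name": user.get("name", ""),
        "avatar_url": user.get("avatar_url", ""),
    }
    output = "User:\n"
    for key, value in data.items():
        # "avatar_url" becomes the label "Avatar Url", and so on.
        output += f"{key.replace('_', ' ').title()}: {value}\n"
    return output + "________________________\n"

print(format_user(sample_users[0]))
```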
## Component Python Code
```python
import requests
from typing import List
from langflow import CustomComponent
from langflow.schema import Record
class NotionUserList(CustomComponent):
display_name = "List Users [Notion]"
description = "Retrieve users from Notion."
documentation: str = "https://docs.langflow.org/integrations/notion/list-users"
icon = "NotionDirectoryLoader"
def build_config(self):
return {
"notion_secret": {
"display_name": "Notion Secret",
"field_type": "str",
"info": "The Notion integration token.",
"password": True,
},
}
def build(
self,
notion_secret: str,
) -> List[Record]:
url = "https://api.notion.com/v1/users"
headers = {
"Authorization": f"Bearer {notion_secret}",
"Notion-Version": "2022-06-28",
}
response = requests.get(url, headers=headers)
response.raise_for_status()
data = response.json()
results = data['results']
records = []
        for user in results:
            user_id = user['id']
            user_type = user['type']
            name = user.get('name', '')
            avatar_url = user.get('avatar_url', '')
            record_data = {
                "id": user_id,
                "type": user_type,
                "name": name,
                "avatar_url": avatar_url,
            }
output = "User:\n"
for key, value in record_data.items():
output += f"{key.replace('_', ' ').title()}: {value}\n"
output += "________________________\n"
record = Record(text=output, data=record_data)
records.append(record)
self.status = "\n".join(record.text for record in records)
return records
```
## Example Usage
<Admonition type="info" title="Example Usage">
Here's an example of how you can use the `NotionUserList` component in a Langflow flow, passing its outputs to the Prompt component:
<ZoomableImage
alt="NotionUserList Flow Example"
sources={{
light: "img/notion/NotionUserList_flow_example.png",
dark: "img/notion/NotionUserList_flow_example_dark.png",
}}
style={{ width: "100%", margin: "20px 0" }}
/>
</Admonition>
## Best Practices
When using the `NotionUserList` component, consider the following best practices:
- Ensure that you have a valid Notion integration token with the necessary permissions to retrieve user data.
- Handle the retrieved user data securely and in compliance with Notion's API usage guidelines.
The `NotionUserList` component provides a seamless way to integrate Notion user data into your Langflow workflows. By leveraging this component, you can easily retrieve and utilize user information from Notion, enhancing the capabilities of your Langflow applications. Feel free to explore and experiment with the `NotionUserList` component to unlock new possibilities in your Langflow projects!
## Troubleshooting
If you encounter any issues while using the `NotionUserList` component, consider the following:
- Double-check that your Notion integration token is valid and has the required permissions.
- Verify that you have installed the necessary dependencies (`requests`) for the component to function properly.
- Check the Notion API documentation for any updates or changes that may affect the component's functionality.
import Admonition from "@theme/Admonition";
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
# Page Content
The `NotionPageContent` component retrieves the content of a Notion page as plain text. It provides a convenient way to integrate Notion page content into your Langflow workflows.
[Notion Reference](https://developers.notion.com/reference/get-page)
<Admonition type="tip" title="Component Functionality">
The `NotionPageContent` component enables you to:
- Retrieve the content of a Notion page as plain text
- Extract text from various block types, including paragraphs, headings, lists, and more
- Integrate Notion page content seamlessly into your Langflow workflows
</Admonition>
## Component Usage
To use the `NotionPageContent` component in a Langflow flow, follow these steps:
1. Add the `NotionPageContent` component to your flow.
2. Configure the component by providing the required inputs:
- `page_id`: The ID of the Notion page you want to retrieve.
- `notion_secret`: Your Notion integration token for authentication.
3. Connect the output of the `NotionPageContent` component to other components in your flow as needed.
## Component Python Code
```python
import requests
from typing import Dict, Any
from langflow import CustomComponent
from langflow.schema import Record
class NotionPageContent(CustomComponent):
display_name = "Page Content Viewer [Notion]"
description = "Retrieve the content of a Notion page as plain text."
documentation: str = "https://docs.langflow.org/integrations/notion/page-content-viewer"
icon = "NotionDirectoryLoader"
def build_config(self):
return {
"page_id": {
"display_name": "Page ID",
"field_type": "str",
"info": "The ID of the Notion page to retrieve.",
},
"notion_secret": {
"display_name": "Notion Secret",
"field_type": "str",
"info": "The Notion integration token.",
"password": True,
},
}
def build(
self,
page_id: str,
notion_secret: str,
) -> Record:
blocks_url = f"https://api.notion.com/v1/blocks/{page_id}/children?page_size=100"
headers = {
"Authorization": f"Bearer {notion_secret}",
"Notion-Version": "2022-06-28", # Use the latest supported version
}
# Retrieve the child blocks
blocks_response = requests.get(blocks_url, headers=headers)
blocks_response.raise_for_status()
blocks_data = blocks_response.json()
# Parse the blocks and extract the content as plain text
content = self.parse_blocks(blocks_data["results"])
self.status = content
return Record(data={"content": content}, text=content)
def parse_blocks(self, blocks: list) -> str:
content = ""
for block in blocks:
block_type = block["type"]
if block_type in ["paragraph", "heading_1", "heading_2", "heading_3", "quote"]:
content += self.parse_rich_text(block[block_type]["rich_text"]) + "\n\n"
elif block_type in ["bulleted_list_item", "numbered_list_item"]:
content += self.parse_rich_text(block[block_type]["rich_text"]) + "\n"
elif block_type == "to_do":
content += self.parse_rich_text(block["to_do"]["rich_text"]) + "\n"
elif block_type == "code":
content += self.parse_rich_text(block["code"]["rich_text"]) + "\n\n"
            elif block_type == "image":
                # Support both externally hosted and Notion-hosted images
                image_data = block["image"]
                image_url = image_data.get("external", image_data.get("file", {})).get("url", "")
                content += f"[Image: {image_url}]\n\n"
elif block_type == "divider":
content += "---\n\n"
return content.strip()
def parse_rich_text(self, rich_text: list) -> str:
text = ""
for segment in rich_text:
text += segment["plain_text"]
return text
```
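The rich-text parsing above is easy to exercise on its own. A minimal sketch with hand-made segment data (the text values are invented for illustration):

```python
# Minimal stand-ins for Notion "rich_text" segments (values invented).
def parse_rich_text(rich_text: list) -> str:
    """Concatenate the plain_text of each rich-text segment, as the component does."""
    return "".join(segment["plain_text"] for segment in rich_text)

segments = [
    {"plain_text": "Hello, "},
    {"plain_text": "world"},
]
print(parse_rich_text(segments))  # → Hello, world
```

Real Notion segments carry annotations (bold, links, and so on) alongside `plain_text`; this component intentionally keeps only the plain text.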
## Example Usage
<Admonition type="info" title="Example Usage">
Here's an example of how you can use the `NotionPageContent` component in a Langflow flow:
<ZoomableImage
alt="NotionPageContent Flow Example"
sources={{
light: "img/notion/NotionPageContent_flow_example.png",
dark: "img/notion/NotionPageContent_flow_example_dark.png",
}}
style={{ width: "100%", margin: "20px 0" }}
/>
</Admonition>
## Best Practices
When using the `NotionPageContent` component, consider the following best practices:
- Ensure that you have the necessary permissions to access the Notion page you want to retrieve.
- Keep your Notion integration token secure and avoid sharing it publicly.
- Be mindful of the content you retrieve and ensure that it aligns with your intended use case.
The `NotionPageContent` component provides a seamless way to integrate Notion page content into your Langflow workflows. By leveraging this component, you can easily retrieve and process the content of Notion pages, enabling you to build powerful and dynamic applications. Explore the capabilities of the `NotionPageContent` component and unlock new possibilities in your Langflow projects!
## Troubleshooting
If you encounter any issues while using the `NotionPageContent` component, consider the following:
- Double-check that you have provided the correct Notion page ID.
- Verify that your Notion integration token is valid and has the necessary permissions.
- Check the Notion API documentation for any updates or changes that may affect the component's functionality.
import Admonition from "@theme/Admonition";
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
# Page Create
The `NotionPageCreator` component creates pages in a Notion database. It provides a convenient way to integrate Notion page creation into your Langflow workflows.
[Notion Reference](https://developers.notion.com/reference/patch-block-children)
<Admonition type="tip" title="Component Functionality">
The `NotionPageCreator` component enables you to:
- Create new pages in a specified Notion database
- Set custom properties for the created pages
- Retrieve the ID and URL of the newly created pages
</Admonition>
## Component Usage
To use the `NotionPageCreator` component in a Langflow flow, follow these steps:
1. Add the `NotionPageCreator` component to your flow.
2. Configure the component by providing the required inputs:
- `database_id`: The ID of the Notion database where the pages will be created.
- `notion_secret`: The Notion integration token for authentication.
- `properties`: The properties of the new page, specified as a JSON string.
3. Connect the component to other components in your flow as needed.
4. Run the flow to create Notion pages based on the configured inputs.
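The `properties` input must match your database schema exactly. A minimal sketch of assembling it in Python for a database whose title property is named `Task name` (an assumption — check your own schema), then serializing it for the component:

```python
import json

# "Task name" is a hypothetical title property; use the name from your database.
properties = {
    "Task name": {
        "title": [
            {"type": "text", "text": {"content": "Write release notes", "link": None}}
        ]
    }
}

# The component expects the properties as a JSON string.
properties_json = json.dumps(properties)
print(properties_json)
```

Note that `json.dumps` converts Python's `None` to JSON `null`, which is what the Notion API expects for an empty `link`.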
## Component Python Code
```python
import json
import requests
from langflow.custom import CustomComponent
class NotionPageCreator(CustomComponent):
display_name = "Create Page [Notion]"
description = "A component for creating Notion pages."
documentation: str = "https://docs.langflow.org/integrations/notion/page-create"
icon = "NotionDirectoryLoader"
def build_config(self):
return {
"database_id": {
"display_name": "Database ID",
"field_type": "str",
"info": "The ID of the Notion database.",
},
"notion_secret": {
"display_name": "Notion Secret",
"field_type": "str",
"info": "The Notion integration token.",
"password": True,
},
"properties": {
"display_name": "Properties",
"field_type": "str",
"info": "The properties of the new page. Depending on your database setup, this can change. E.G: {'Task name': {'id': 'title', 'type': 'title', 'title': [{'type': 'text', 'text': {'content': 'Send Notion Components to LF', 'link': null}}]}}",
},
}
def build(
self,
database_id: str,
notion_secret: str,
properties: str = '{"Task name": {"id": "title", "type": "title", "title": [{"type": "text", "text": {"content": "Send Notion Components to LF", "link": null}}]}}',
    ) -> dict:
if not database_id or not properties:
raise ValueError("Invalid input. Please provide 'database_id' and 'properties'.")
headers = {
"Authorization": f"Bearer {notion_secret}",
"Content-Type": "application/json",
"Notion-Version": "2022-06-28",
}
data = {
"parent": {"database_id": database_id},
"properties": json.loads(properties),
}
response = requests.post("https://api.notion.com/v1/pages", headers=headers, json=data)
if response.status_code == 200:
page_id = response.json()["id"]
self.status = f"Successfully created Notion page with ID: {page_id}\n {str(response.json())}"
return response.json()
else:
error_message = f"Failed to create Notion page. Status code: {response.status_code}, Error: {response.text}"
self.status = error_message
raise Exception(error_message)
```
## Example Usage
<Admonition type="info" title="Example Usage">
Here's an example of how to use the `NotionPageCreator` component in a Langflow flow:
<ZoomableImage
alt="NotionPageCreator Flow Example"
sources={{
light: "img/notion/NotionPageCreator_flow_example.png",
dark: "img/notion/NotionPageCreator_flow_example_dark.png",
}}
style={{ width: "100%", margin: "20px 0" }}
/>
</Admonition>
## Best Practices
When using the `NotionPageCreator` component, consider the following best practices:
- Ensure that you have a valid Notion integration token with the necessary permissions to create pages in the specified database.
- Properly format the `properties` input as a JSON string, matching the structure and field types of your Notion database.
- Handle any errors or exceptions that may occur during the page creation process and provide appropriate error messages.
- To avoid hand-writing JSON, consider using an LLM to generate the `properties` JSON input for you.
The `NotionPageCreator` component simplifies the process of creating pages in a Notion database directly from your Langflow workflows. By leveraging this component, you can seamlessly integrate Notion page creation functionality into your automated processes, saving time and effort. Feel free to explore the capabilities of the `NotionPageCreator` component and adapt it to suit your specific requirements.
## Troubleshooting
If you encounter any issues while using the `NotionPageCreator` component, consider the following:
- Double-check that the `database_id` and `notion_secret` inputs are correct and valid.
- Verify that the `properties` input is properly formatted as a JSON string and matches the structure of your Notion database.
- Check the Notion API documentation for any updates or changes that may affect the component's functionality.
import Admonition from "@theme/Admonition";
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
# Page Update
The `NotionPageUpdate` component updates the properties of a Notion page. It provides a convenient way to integrate updating Notion page properties into your Langflow workflows.
[Notion Reference](https://developers.notion.com/reference/patch-page)
## Component Usage
To use the `NotionPageUpdate` component in your Langflow flow:
1. Drag and drop the `NotionPageUpdate` component onto the canvas.
2. Double-click the component to open its configuration.
3. Provide the required parameters as defined in the component's `build_config` method.
4. Connect the component to other nodes in your flow as needed.
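Because the component rejects malformed JSON with a `ValueError`, it helps to validate the `properties` string before wiring it into a flow. A minimal sketch mirroring the component's parsing step:

```python
import json

def validate_properties(properties: str) -> dict:
    """Parse the properties JSON, raising ValueError on bad input (as the component does)."""
    try:
        return json.loads(properties)
    except json.JSONDecodeError as e:
        raise ValueError("Invalid JSON format for properties") from e

# Example payload; substitute the properties for your own database schema.
good = '{"Status": {"select": {"name": "Done"}}}'
print(validate_properties(good))
```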
## Component Python Code
```python
import json
import requests
from typing import Dict, Any
from langflow import CustomComponent
from langflow.schema import Record
class NotionPageUpdate(CustomComponent):
display_name = "Update Page Property [Notion]"
description = "Update the properties of a Notion page."
documentation: str = "https://docs.langflow.org/integrations/notion/page-update"
icon = "NotionDirectoryLoader"
def build_config(self):
return {
"page_id": {
"display_name": "Page ID",
"field_type": "str",
"info": "The ID of the Notion page to update.",
},
"properties": {
"display_name": "Properties",
"field_type": "str",
"info": "The properties to update on the page (as a JSON string).",
"multiline": True,
},
"notion_secret": {
"display_name": "Notion Secret",
"field_type": "str",
"info": "The Notion integration token.",
"password": True,
},
}
def build(
self,
page_id: str,
properties: str,
notion_secret: str,
) -> Record:
url = f"https://api.notion.com/v1/pages/{page_id}"
headers = {
"Authorization": f"Bearer {notion_secret}",
"Content-Type": "application/json",
"Notion-Version": "2022-06-28", # Use the latest supported version
}
try:
parsed_properties = json.loads(properties)
except json.JSONDecodeError as e:
raise ValueError("Invalid JSON format for properties") from e
data = {
"properties": parsed_properties
}
response = requests.patch(url, headers=headers, json=data)
response.raise_for_status()
updated_page = response.json()
output = "Updated page properties:\n"
for prop_name, prop_value in updated_page["properties"].items():
output += f"{prop_name}: {prop_value}\n"
self.status = output
return Record(data=updated_page)
```
Let's break down the key parts of this component:
- The `build_config` method defines the configuration fields for the component. It specifies the required parameters and their properties, such as display names, field types, and any additional information or validation.
- The `build` method contains the main logic of the component. It takes the configured parameters as input and performs the necessary operations to update the properties of a Notion page.
- The component interacts with the Notion API to update the page properties. It constructs the API URL, headers, and request data based on the provided parameters.
- The processed data is returned as a `Record` object, which can be connected to other components in the Langflow flow. The `Record` object contains the updated page data.
- The component also stores the updated page properties in the `status` attribute for logging and debugging purposes.
## Example Usage
<Admonition type="info" title="Example Usage">
Here's an example of how to use the `NotionPageUpdate` component in a Langflow flow:
<ZoomableImage
alt="NotionPageUpdate Flow Example"
sources={{
light: "img/notion/NotionPageUpdate_flow_example.png",
dark: "img/notion/NotionPageUpdate_flow_example_dark.png",
}}
style={{ width: "100%", margin: "20px 0" }}
/>
</Admonition>
## Best Practices
When using the `NotionPageUpdate` component, consider the following best practices:
- Ensure that you have a valid Notion integration token with the necessary permissions to update page properties.
- Handle edge cases and error scenarios gracefully, such as invalid JSON format for properties or API request failures.
- We recommend using an LLM to generate the inputs for this component, to allow flexibility.
By leveraging the `NotionPageUpdate` component in Langflow, you can easily integrate updating Notion page properties into your language model workflows and build powerful applications that extend Langflow's capabilities.
## Troubleshooting
If you encounter any issues while using the `NotionPageUpdate` component, consider the following:
- Double-check that you have correctly configured the component with the required parameters, including the page ID, properties JSON, and Notion integration token.
- Verify that your Notion integration token has the necessary permissions to update page properties.
- Check the Langflow logs for any error messages or exceptions related to the component, such as invalid JSON format or API request failures.
- Consult the [Notion API Documentation](https://developers.notion.com/reference/patch-page) for specific troubleshooting steps or common issues related to updating page properties.
import Admonition from "@theme/Admonition";
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
# Notion Search
The `NotionSearch` component is designed to search all pages and databases that have been shared with an integration in Notion. It provides a convenient way to integrate Notion search capabilities into your Langflow workflows.
[Notion Reference](https://developers.notion.com/reference/search)
<Admonition type="tip" title="Component Functionality">
The `NotionSearch` component enables you to:
- Search for pages and databases in Notion that have been shared with an integration
- Filter the search results based on object type (pages or databases)
- Sort the search results in ascending or descending order based on the last edited time
</Admonition>
## Component Usage
To use the `NotionSearch` component in a Langflow flow, follow these steps:
1. **Add the `NotionSearch` component to your flow.**
2. **Configure the component by providing the required parameters:**
- `notion_secret`: The Notion integration token for authentication.
- `query`: The text to search for in page and database titles.
- `filter_value`: The type of objects to include in the search results (pages or databases).
- `sort_direction`: The direction to sort the search results (ascending or descending).
3. **Connect the `NotionSearch` component to other components in your flow as needed.**
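Under the hood, the component combines the query, filter, and sort options into a single request body for Notion's `/v1/search` endpoint. The shape can be sketched as follows (the query text is just an example):

```python
def build_search_body(query: str, filter_value: str = "page",
                      sort_direction: str = "descending") -> dict:
    """Assemble the request body the Notion /v1/search endpoint expects."""
    return {
        "query": query,
        # Restrict results to pages or databases.
        "filter": {"value": filter_value, "property": "object"},
        # Notion search only supports sorting by last_edited_time.
        "sort": {"direction": sort_direction, "timestamp": "last_edited_time"},
    }

body = build_search_body("meeting notes", filter_value="database")
print(body)
```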
## Component Python Code
```python
import requests
from typing import Dict, Any, List
from langflow.custom import CustomComponent
from langflow.schema import Record
class NotionSearch(CustomComponent):
display_name = "Search Notion"
description = (
"Searches all pages and databases that have been shared with an integration."
)
documentation: str = "https://docs.langflow.org/integrations/notion/search"
icon = "NotionDirectoryLoader"
field_order = [
"notion_secret",
"query",
"filter_value",
"sort_direction",
]
def build_config(self):
return {
"notion_secret": {
"display_name": "Notion Secret",
"field_type": "str",
"info": "The Notion integration token.",
"password": True,
},
"query": {
"display_name": "Search Query",
"field_type": "str",
"info": "The text that the API compares page and database titles against.",
},
"filter_value": {
"display_name": "Filter Type",
"field_type": "str",
"info": "Limits the results to either only pages or only databases.",
"options": ["page", "database"],
"default_value": "page",
},
"sort_direction": {
"display_name": "Sort Direction",
"field_type": "str",
"info": "The direction to sort the results.",
"options": ["ascending", "descending"],
"default_value": "descending",
},
}
def build(
self,
notion_secret: str,
query: str = "",
filter_value: str = "page",
sort_direction: str = "descending",
) -> List[Record]:
try:
url = "https://api.notion.com/v1/search"
headers = {
"Authorization": f"Bearer {notion_secret}",
"Content-Type": "application/json",
"Notion-Version": "2022-06-28",
}
data = {
"query": query,
"filter": {
"value": filter_value,
"property": "object"
},
"sort":{
"direction": sort_direction,
"timestamp": "last_edited_time"
}
}
response = requests.post(url, headers=headers, json=data)
response.raise_for_status()
results = response.json()
records = []
combined_text = f"Results found: {len(results['results'])}\n\n"
for result in results['results']:
result_data = {
'id': result['id'],
'type': result['object'],
'last_edited_time': result['last_edited_time'],
}
if result['object'] == 'page':
result_data['title_or_url'] = result['url']
text = f"id: {result['id']}\ntitle_or_url: {result['url']}\n"
elif result['object'] == 'database':
if 'title' in result and isinstance(result['title'], list) and len(result['title']) > 0:
result_data['title_or_url'] = result['title'][0]['plain_text']
text = f"id: {result['id']}\ntitle_or_url: {result['title'][0]['plain_text']}\n"
else:
result_data['title_or_url'] = "N/A"
text = f"id: {result['id']}\ntitle_or_url: N/A\n"
text += f"type: {result['object']}\nlast_edited_time: {result['last_edited_time']}\n\n"
combined_text += text
records.append(Record(text=text, data=result_data))
self.status = combined_text
return records
except Exception as e:
self.status = f"An error occurred: {str(e)}"
            return [Record(text=self.status, data={})]
```
## Example Usage
<Admonition type="info" title="Example Usage">
Here's an example of how you can use the `NotionSearch` component in a Langflow flow:
<ZoomableImage
alt="NotionSearch Flow Example"
sources={{
light: "img/notion/NotionSearch_flow_example.png",
dark: "img/notion/NotionSearch_flow_example_dark.png",
}}
style={{ width: "100%", margin: "20px 0" }}
/>
In this example, the `NotionSearch` component is used to search for pages and databases in Notion based on the provided query and filter criteria. The retrieved data can then be processed further in the subsequent components of the flow.
</Admonition>
## Best Practices
When using the `NotionSearch` component, consider these best practices:
- Ensure you have a valid Notion integration token with the necessary permissions to search for pages and databases.
- Provide a meaningful search query to narrow down the results to the desired pages or databases.
- Choose the appropriate filter type (`page` or `database`) based on your search requirements.
- Consider the sorting direction (`ascending` or `descending`) to organize the search results effectively.
The `NotionSearch` component provides a powerful way to integrate Notion search capabilities into your Langflow workflows. By leveraging this component, you can easily search for pages and databases in Notion based on custom queries and filters, enabling you to build more dynamic and data-driven flows.
We encourage you to explore the capabilities of the `NotionSearch` component further and experiment with different search scenarios to unlock the full potential of integrating Notion search into your Langflow workflows.
## Troubleshooting
If you encounter any issues while using the `NotionSearch` component, consider the following:
- Double-check that the `notion_secret` is correct and valid.
- Verify that the Notion integration has the necessary permissions to access the desired pages and databases.
- Check the Notion API documentation for any updates or changes that may affect the component's functionality.
import Admonition from "@theme/Admonition";
# Setting up a Notion App
To use Notion components in Langflow, you first need to create a Notion integration and configure it with the necessary capabilities. This guide will walk you through the process of setting up a Notion integration and granting it access to your Notion databases.
## Prerequisites
- A Notion account with access to the workspace where you want to use the integration.
- Admin permissions in the Notion workspace to create and manage integrations.
## Step 1: Create a Notion Integration
1. Go to the [Notion Integrations](https://www.notion.com/my-integrations) page.
2. Click on the "New integration" button.
3. Give your integration a name and select the workspace where you want to use it.
4. Click "Submit" to create the integration.
<Admonition type="info" title="Integration Capabilities">
When creating the integration, make sure to enable the necessary capabilities based on your requirements. Refer to the [Notion Integration Capabilities](https://developers.notion.com/reference/capabilities) documentation for more information on each capability.
</Admonition>
## Step 2: Configure Integration Capabilities
After creating the integration, you need to configure its capabilities to define what actions it can perform and what data it can access.
1. In the integration settings page, go to the **Capabilities** tab.
2. Enable the required capabilities for your integration. For example:
- If your integration needs to read data from Notion, enable the "Read content" capability.
- If your integration needs to create new content in Notion, enable the "Insert content" capability.
- If your integration needs to update existing content in Notion, enable the "Update content" capability.
3. Configure the user information access level based on your integration's requirements.
4. Save the changes.
## Step 3: Obtain Integration Token
To authenticate your integration with Notion, you need to obtain an integration token.
1. In the integration settings page, go to the "Secrets" tab.
2. Copy the "Internal Integration Token" value. This token will be used to authenticate your integration with Notion.
<Admonition type="warning" title="Keep Your Token Secure">
Your integration token is a sensitive piece of information. Make sure to keep it secure and never share it publicly. Store it safely in your Langflow configuration or environment variables.
</Admonition>
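One common way to keep the token out of your flows and source files is to read it from an environment variable. A minimal sketch — the variable name `NOTION_SECRET` is an assumption for illustration, not a Langflow requirement:

```python
import os

# Read the integration token from the environment instead of hard-coding it.
notion_secret = os.environ.get("NOTION_SECRET")
if notion_secret is None:
    print("NOTION_SECRET is not set; export it before starting Langflow.")
else:
    # Avoid printing the token itself, even in logs.
    print("Token loaded (length):", len(notion_secret))
```

You can then paste the token into the component's **Notion Secret** field, or wire it up however your deployment manages secrets.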
## Step 4: Grant Integration Access to Notion Databases
For your integration to interact with Notion databases, you need to grant it access to the specific databases it will be working with.
1. Open the Notion database that you want your integration to access.
2. Click on the "Share" button in the top-right corner of the page.
3. In the "Invite" section, select your integration from the list.
4. Click "Invite" to grant the integration access to the database.
<Admonition type="info" title="Nested Databases">
If your database contains references to other databases, you need to grant the integration access to those referenced databases as well. Repeat step 4 for each referenced database to ensure your integration has the necessary access.
</Admonition>
## Using Notion Components in Langflow
Once you have set up your Notion integration and granted it access to the required databases, you can start using the Notion components in Langflow.
Langflow provides the following Notion components:
- **List Pages**: Retrieves a list of pages from a Notion database.
- **List Database Properties**: Retrieves the properties of a Notion database.
- **Add Page Content**: Adds content to a Notion page.
- **List Users**: Retrieves a list of users with access to a Notion workspace.
- **Update Property**: Updates the value of a property in a Notion page.
Refer to the individual component documentation for more details on how to use each component in your Langflow flows.
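For a sense of the data these components work with: pages come back from the Notion API with their title nested inside whichever property has type `title`. A minimal helper to pull out the plain text might look like the following (this helper is illustrative, not part of Langflow):

```python
def page_title(page: dict) -> str:
    """Extract the plain-text title from a Notion page object.

    Pages returned by the API keep their title under whichever property
    has type "title"; the property's display name varies per database.
    """
    for prop in page.get("properties", {}).values():
        if prop.get("type") == "title":
            return "".join(t["plain_text"] for t in prop["title"])
    return ""
```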
## Additional Resources
- [Notion API Documentation](https://developers.notion.com/docs/getting-started)
- [Notion Integration Capabilities](https://developers.notion.com/reference/capabilities)
If you encounter any issues or have questions, please reach out to our support team or consult the Langflow community forums.

View file

@ -4,7 +4,7 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
import Admonition from "@theme/Admonition";
# Basic prompting
# Basic Prompting
Prompts serve as the inputs to a large language model (LLM), acting as the interface between human instructions and computational tasks.
@ -14,36 +14,17 @@ This article demonstrates how to use Langflow's prompt tools to issue basic prom
## Prerequisites
1. Install Langflow.
```bash
python -m pip install langflow --pre
```
- [Langflow installed and running](../getting-started/install-langflow.mdx)
2. Start a local Langflow instance with the Langflow CLI:
```bash
langflow run
```
Or start Langflow with Python:
```bash
python -m langflow run
```
Result:
```
│ Welcome to ⛓ Langflow │
│ │
│ Access http://127.0.0.1:7860 │
│ Collaborate, and contribute at our GitHub Repo 🚀 │
```
- [OpenAI API key created](https://platform.openai.com)
<Admonition type="info">
Langflow v1.0 alpha is also available in [HuggingFace Spaces](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true). Try it out or follow the instructions [here](/getting-started/huggingface-spaces) to install it locally.
Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space
using this
link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true)
to create your own Langflow workspace in minutes.
</Admonition>
3. Create an [OpenAI API key](https://platform.openai.com).
## Create the basic prompting project
1. From the Langflow dashboard, click **New Project**.
@ -64,25 +45,21 @@ Examine the **Prompt** component. The **Template** field instructs the LLM to `A
This should be interesting...
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
## Run the basic prompting flow
1. Click the **Run** button.
The **Interaction Panel** opens, where you can converse with your bot.
The **Interaction Panel** opens, where you can converse with your bot.
2. Type a message and press Enter.
The bot responds in a markedly piratical manner!
The bot responds in a markedly piratical manner!
## Modify the prompt for a different result
1. To modify your prompt results, in the **Prompt** template, click the **Template** field.
The **Edit Prompt** window opens.
The **Edit Prompt** window opens.
2. Change `Answer the user as if you were a pirate` to a different character, perhaps `Answer the user as if you were Harold Abelson.`
3. Run the basic prompting flow again.
The response will be markedly different.
The response will be markedly different.

View file

@ -4,42 +4,23 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
import Admonition from "@theme/Admonition";
# Blog writer
# Blog Writer
Build a blog writer with OpenAI that uses URLs for reference content.
## Prerequisites
1. Install Langflow.
```bash
python -m pip install langflow --pre
```
- [Langflow installed and running](../getting-started/install-langflow.mdx)
2. Start a local Langflow instance with the Langflow CLI:
```bash
langflow run
```
Or start Langflow with Python:
```bash
python -m langflow run
```
Result:
```bash
│ Welcome to ⛓ Langflow │
│ │
│ Access http://127.0.0.1:7860 │
│ Collaborate, and contribute at our GitHub Repo 🚀 │
```
- [OpenAI API key created](https://platform.openai.com)
<Admonition type="info">
Langflow v1.0 alpha is also available in [HuggingFace Spaces](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true). Try it out or follow the instructions [here](/getting-started/huggingface-spaces) to install it locally.
Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space
using this
link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true)
to create your own Langflow workspace in minutes.
</Admonition>
3. Create an [OpenAI API key](https://platform.openai.com).
## Create the Blog Writer project
1. From the Langflow dashboard, click **New Project**.
@ -58,6 +39,7 @@ Result:
This flow creates a one-shot prompt flow with **Prompt**, **OpenAI**, and **Chat Output** components, and augments the flow with reference content and instructions from the **URL** and **Instructions** components.
The **Prompt** component's default **Template** field looks like this:
```bash
Reference 1:
@ -81,16 +63,16 @@ The `{instructions}` value is received from the **Value** field of the **Instruc
The `reference_1` and `reference_2` values are received from the **URL** fields of the **URL** components.
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
## Run the Blog Writer flow
1. Click the **Run** button.
The **Interaction Panel** opens, where you can run your one-shot flow.
The **Interaction Panel** opens, where you can run your one-shot flow.
2. Click the **Lighting Bolt** icon to run your flow.
3. The **OpenAI** component constructs a blog post with the **URL** items as context.
The default **URL** values are for web pages at `promptingguide.ai`, so your blog post will be about prompting LLMs.
The default **URL** values are for web pages at `promptingguide.ai`, so your blog post will be about prompting LLMs.
To write about something different, change the values in the **URL** components, and see what the LLM constructs.
To write about something different, change the values in the **URL** components, and see what the LLM constructs.

View file

@ -10,36 +10,17 @@ Build a question-and-answer chatbot with a document loaded from local memory.
## Prerequisites
1. Install Langflow.
```bash
python -m pip install langflow --pre
```
- [Langflow installed and running](../getting-started/install-langflow.mdx)
2. Start a local Langflow instance with the Langflow CLI:
```bash
langflow run
```
Or start Langflow with Python:
```bash
python -m langflow run
```
Result:
```
│ Welcome to ⛓ Langflow │
│ │
│ Access http://127.0.0.1:7860 │
│ Collaborate, and contribute at our GitHub Repo 🚀 │
```
- [OpenAI API key created](https://platform.openai.com)
<Admonition type="info">
Langflow v1.0 alpha is also available in [HuggingFace Spaces](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true). Try it out or follow the instructions [here](/getting-started/huggingface-spaces) to install it locally.
Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space
using this
link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true)
to create your own Langflow workspace in minutes.
</Admonition>
3. Create an [OpenAI API key](https://platform.openai.com).
## Create the Document QA project
1. From the Langflow dashboard, click **New Project**.
@ -61,24 +42,27 @@ The **Prompt** component is instructed to answer questions based on the contents
Including a file with the prompt gives the **OpenAI** component context it may not otherwise have access to.
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
5. To select a document to load, in the **Files** component, click within the **Path** field.
1. Select a local file, and then click **Open**.
2. The file name appears in the field.
<Admonition type="tip">
The file must be of an extension type listed [here](https://github.com/langflow-ai/langflow/blob/dev/src/backend/base/langflow/base/data/utils.py#L13).
</Admonition>
1. Select a local file, and then click **Open**.
2. The file name appears in the field.
<Admonition type="tip">
The file must be of an extension type listed
[here](https://github.com/langflow-ai/langflow/blob/dev/src/backend/base/langflow/base/data/utils.py#L13).
</Admonition>
## Run the Document QA flow
1. Click the **Run** button.
The **Interaction Panel** opens, where you can converse with your bot.
The **Interaction Panel** opens, where you can converse with your bot.
2. Type a message and press Enter.
For this example, we loaded an error log `.txt` file and asked, "What went wrong?"
The bot responded:
For this example, we loaded an error log `.txt` file and asked, "What went wrong?"
The bot responded:
```
The issue occurred during the execution of migrations in the application. Specifically, an error was raised by the Alembic library, indicating that new upgrade operations were detected that had not been accounted for in the existing migration scripts. The operation in question involved modifying the nullable property of a column (apikey, created_at) in the database, with details about the existing type (DATETIME()), existing server default, and other properties.
```

View file

@ -4,42 +4,23 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
import Admonition from "@theme/Admonition";
# Memory chatbot
# Memory Chatbot
This flow extends the [basic prompting flow](./basic-prompting.mdx) to include chat memory for unique SessionIDs.
## Prerequisites
1. Install Langflow.
```bash
python -m pip install langflow --pre
```
- [Langflow installed and running](../getting-started/install-langflow.mdx)
2. Start a local Langflow instance with the Langflow CLI:
```bash
langflow run
```
Or start Langflow with Python:
```bash
python -m langflow run
```
Result:
```
│ Welcome to ⛓ Langflow │
│ │
│ Access http://127.0.0.1:7860 │
│ Collaborate, and contribute at our GitHub Repo 🚀 │
```
- [OpenAI API key created](https://platform.openai.com)
<Admonition type="info">
Langflow v1.0 alpha is also available in [HuggingFace Spaces](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true). Try it out or follow the instructions [here](/getting-started/huggingface-spaces) to install it locally.
Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space
using this
link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true)
to create your own Langflow workspace in minutes.
</Admonition>
3. Create an [OpenAI API key](https://platform.openai.com).
## Create the memory chatbot project
1. From the Langflow dashboard, click **New Project**.
@ -65,16 +46,16 @@ This chatbot is augmented with the **Chat Memory** component, which stores messa
The **Chat History** component gives the **OpenAI** component a memory of previous questions.
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
## Run the memory chatbot flow
1. Click the **Run** button.
The **Interaction Panel** opens, where you can converse with your bot.
The **Interaction Panel** opens, where you can converse with your bot.
2. Type a message and press Enter.
The bot will respond according to the template in the **Prompt** component.
The bot will respond according to the template in the **Prompt** component.
3. Type more questions. In the **Outputs** log, your queries are logged in order. Up to 5 queries are stored by default. Try asking `What is the first subject I asked you about?` to see where the LLM's memory disappears.
## Modify the Session ID field to have multiple conversations
@ -87,11 +68,11 @@ You can demonstrate this by modifying the **Session ID** value to switch between
1. In the **Session ID** field of the **Chat Memory** and **Chat Input** components, change the **Session ID** value from `MySessionID` to `AnotherSessionID`.
2. Click the **Run** button to run your flow.
In the **Interaction Panel**, you will have a new conversation. (You may need to clear the cache with the **Eraser** button).
In the **Interaction Panel**, you will have a new conversation. (You may need to clear the cache with the **Eraser** button).
3. Type a few questions to your bot.
4. In the **Session ID** field of the **Chat Memory** and **Chat Input** components, change the **Session ID** value back to `MySessionID`.
5. Run your flow.
The **Outputs** log of the **Interaction Panel** displays the history from your initial chat with `MySessionID`.
The **Outputs** log of the **Interaction Panel** displays the history from your initial chat with `MySessionID`.
## Store Session ID as a Langflow variable
@ -101,4 +82,3 @@ To store **Session ID** as a Langflow variable, in the **Session ID** field, cli
2. In the **Value** field, enter a value like `1B5EBD79-6E9C-4533-B2C8-7E4FF29E983B`.
3. Click **Save Variable**.
4. Apply this variable to **Chat Input**.

View file

@ -4,7 +4,7 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
import Admonition from "@theme/Admonition";
# Vector store RAG
# Vector Store RAG
Retrieval Augmented Generation, or RAG, is a pattern for training LLMs on your data and querying it.
@ -17,16 +17,19 @@ We've chosen [Astra DB](https://astra.datastax.com/signup?utm_source=langflow-pr
## Prerequisites
<Admonition type="info">
Langflow v1.0 alpha is also available in [HuggingFace Spaces](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true). Try it out or follow the instructions [here](../getting-started/huggingface-spaces) to install it locally.
Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space
using this
link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true)
to create your own Langflow workspace in minutes.
</Admonition>
* [Langflow installed](../getting-started/install-langflow.mdx)
- [Langflow installed and running](../getting-started/install-langflow.mdx)
* [OpenAI API key](https://platform.openai.com)
- [OpenAI API key](https://platform.openai.com)
* [An Astra DB vector database created](https://docs.datastax.com/en/astra-db-serverless/get-started/quickstart.html) with:
* Application token (`AstraCS:WSnyFUhRxsrg…`)
* API endpoint (`https://ASTRA_DB_ID-ASTRA_DB_REGION.apps.astra.datastax.com`)
- [An Astra DB vector database created](https://docs.datastax.com/en/astra-db-serverless/get-started/quickstart.html) with:
- Application token (`AstraCS:WSnyFUhRxsrg…`)
- API endpoint (`https://ASTRA_DB_ID-ASTRA_DB_REGION.apps.astra.datastax.com`)
## Create the vector store RAG project
@ -49,38 +52,40 @@ The **ingestion** flow (bottom of the screen) populates the vector store with da
It ingests data from a file (**File**), splits it into chunks (**Recursive Character Text Splitter**), indexes it in Astra DB (**Astra DB**), and computes embeddings for the chunks (**OpenAI Embeddings**).
This forms a "brain" for the query flow.
The **query** flow (top of the screen) allows users to chat with the embedded vector store data. It's a little more complex:
The **query** flow (top of the screen) allows users to chat with the embedded vector store data. It's a little more complex:
* **Chat Input** component defines where to put the user input coming from the Playground.
* **OpenAI Embeddings** component generates embeddings from the user input.
* **Astra DB Search** component retrieves the most relevant Records from the Astra DB database.
* **Text Output** component turns the Records into Text by concatenating them and also displays it in the Playground.
* **Prompt** component takes in the user input and the retrieved Records as text and builds a prompt for the OpenAI model.
* **OpenAI** component generates a response to the prompt.
* **Chat Output** component displays the response in the Playground.
- **Chat Input** component defines where to put the user input coming from the Playground.
- **OpenAI Embeddings** component generates embeddings from the user input.
- **Astra DB Search** component retrieves the most relevant Records from the Astra DB database.
- **Text Output** component turns the Records into Text by concatenating them and also displays it in the Playground.
- **Prompt** component takes in the user input and the retrieved Records as text and builds a prompt for the OpenAI model.
- **OpenAI** component generates a response to the prompt.
- **Chat Output** component displays the response in the Playground.
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
4. To create environment variables for the **Astra DB** and **Astra DB Search** components:
1. In the **Token** field, click the **Globe** button, and then click **Add New Variable**.
2. In the **Variable Name** field, enter `astra_token`.
3. In the **Value** field, paste your Astra application token (`AstraCS:WSnyFUhRxsrg…`).
4. Click **Save Variable**.
5. Repeat the above steps for the **API Endpoint** field, pasting your Astra API Endpoint instead (`https://ASTRA_DB_ID-ASTRA_DB_REGION.apps.astra.datastax.com`).
6. Add the global variable to both the **Astra DB** and **Astra DB Search** components.
1. In the **Variable Name** field, enter `openai_api_key`.
2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
3. Click **Save Variable**.
5. To create environment variables for the **Astra DB** and **Astra DB Search** components:
1. In the **Token** field, click the **Globe** button, and then click **Add New Variable**.
2. In the **Variable Name** field, enter `astra_token`.
3. In the **Value** field, paste your Astra application token (`AstraCS:WSnyFUhRxsrg…`).
4. Click **Save Variable**.
5. Repeat the above steps for the **API Endpoint** field, pasting your Astra API Endpoint instead (`https://ASTRA_DB_ID-ASTRA_DB_REGION.apps.astra.datastax.com`).
6. Add the global variable to both the **Astra DB** and **Astra DB Search** components.
## Run the vector store RAG flow
1. Click the **Playground** button.
The **Playground** opens, where you can chat with your data.
The **Playground** opens, where you can chat with your data.
2. Type a message and press Enter. (Try something like "What topics do you know about?")
3. The bot will respond with a summary of the data you've embedded.
For example, we embedded a PDF of an engine maintenance manual and asked, "How do I change the oil?"
The bot responds:
```
To change the oil in the engine, follow these steps:
@ -102,7 +107,3 @@ You should use a 3/8 inch wrench to remove the oil drain cap.
```
This is the size the engine manual lists as well. This confirms our flow works, because the query returns the unique knowledge we embedded from the Astra vector store.

View file

@ -3,7 +3,7 @@ import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
# Building chatbots with System Message
# Building Chatbots with System Message
## Overview

View file

@ -3,7 +3,7 @@ import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
# Integrating documents with prompt variables
# Integrating Documents with Prompt Variables
## Overview

View file

@ -14,9 +14,7 @@ module.exports = {
"index",
"getting-started/install-langflow",
"getting-started/quickstart",
"getting-started/huggingface-spaces",
"getting-started/canvas",
"getting-started/flows-components-collections",
"migration/possible-installation-issues",
"getting-started/new-to-llms",
],
@ -131,5 +129,28 @@ module.exports = {
"contributing/contribute-component",
],
},
{
type: "category",
label: "Integrations",
collapsed: false,
items: [
{
type: "category",
label: "Notion",
items: [
"integrations/notion/intro",
"integrations/notion/setup",
"integrations/notion/search",
"integrations/notion/list-database-properties",
"integrations/notion/list-pages",
"integrations/notion/list-users",
"integrations/notion/page-create",
"integrations/notion/add-content-to-page",
"integrations/notion/page-update",
"integrations/notion/page-content-viewer",
],
},
],
},
],
};

View file

@ -532,10 +532,10 @@
"advanced": false,
"dynamic": false,
"info": "",
"load_from_db": false,
"load_from_db": true,
"title_case": false,
"input_types": ["Text"],
"value": ""
"value": "OPENAI_API_KEY"
},
"openai_api_type": {
"type": "str",

File diff suppressed because one or more lines are too long

146
poetry.lock generated
View file

@ -1310,63 +1310,63 @@ files = [
[[package]]
name = "coverage"
version = "7.5.2"
version = "7.5.3"
description = "Code coverage measurement for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "coverage-7.5.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:554c7327bf0fd688050348e22db7c8e163fb7219f3ecdd4732d7ed606b417263"},
{file = "coverage-7.5.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d0305e02e40c7cfea5d08d6368576537a74c0eea62b77633179748d3519d6705"},
{file = "coverage-7.5.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:829fb55ad437d757c70d5b1c51cfda9377f31506a0a3f3ac282bc6a387d6a5f1"},
{file = "coverage-7.5.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:894b1acded706f1407a662d08e026bfd0ff1e59e9bd32062fea9d862564cfb65"},
{file = "coverage-7.5.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fe76d6dee5e4febefa83998b17926df3a04e5089e3d2b1688c74a9157798d7a2"},
{file = "coverage-7.5.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:c7ebf2a37e4f5fea3c1a11e1f47cea7d75d0f2d8ef69635ddbd5c927083211fc"},
{file = "coverage-7.5.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:20e611fc36e1a0fc7bbf957ef9c635c8807d71fbe5643e51b2769b3cc0fb0b51"},
{file = "coverage-7.5.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:7c5c5b7ae2763533152880d5b5b451acbc1089ade2336b710a24b2b0f5239d20"},
{file = "coverage-7.5.2-cp310-cp310-win32.whl", hash = "sha256:1e4225990a87df898e40ca31c9e830c15c2c53b1d33df592bc8ef314d71f0281"},
{file = "coverage-7.5.2-cp310-cp310-win_amd64.whl", hash = "sha256:976cd92d9420e6e2aa6ce6a9d61f2b490e07cb468968adf371546b33b829284b"},
{file = "coverage-7.5.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5997d418c219dcd4dcba64e50671cca849aaf0dac3d7a2eeeb7d651a5bd735b8"},
{file = "coverage-7.5.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ec27e93bbf5976f0465e8936f02eb5add99bbe4e4e7b233607e4d7622912d68d"},
{file = "coverage-7.5.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1f11f98753800eb1ec872562a398081f6695f91cd01ce39819e36621003ec52a"},
{file = "coverage-7.5.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6e34680049eecb30b6498784c9637c1c74277dcb1db75649a152f8004fbd6646"},
{file = "coverage-7.5.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3e12536446ad4527ac8ed91d8a607813085683bcce27af69e3b31cd72b3c5960"},
{file = "coverage-7.5.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:3d3f7744b8a8079d69af69d512e5abed4fb473057625588ce126088e50d05493"},
{file = "coverage-7.5.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:431a3917e32223fcdb90b79fe60185864a9109631ebc05f6c5aa03781a00b513"},
{file = "coverage-7.5.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:a7c6574225f34ce45466f04751d957b5c5e6b69fca9351db017c9249786172ce"},
{file = "coverage-7.5.2-cp311-cp311-win32.whl", hash = "sha256:2b144d142ec9987276aeff1326edbc0df8ba4afbd7232f0ca10ad57a115e95b6"},
{file = "coverage-7.5.2-cp311-cp311-win_amd64.whl", hash = "sha256:900532713115ac58bc3491b9d2b52704a05ed408ba0918d57fd72c94bc47fba1"},
{file = "coverage-7.5.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:9a42970ce74c88bdf144df11c52c5cf4ad610d860de87c0883385a1c9d9fa4ab"},
{file = "coverage-7.5.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:26716a1118c6ce2188283b4b60a898c3be29b480acbd0a91446ced4fe4e780d8"},
{file = "coverage-7.5.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:60b66b0363c5a2a79fba3d1cd7430c25bbd92c923d031cae906bdcb6e054d9a2"},
{file = "coverage-7.5.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e5d22eba19273b2069e4efeff88c897a26bdc64633cbe0357a198f92dca94268"},
{file = "coverage-7.5.2-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3bb5b92a0ab3d22dfdbfe845e2fef92717b067bdf41a5b68c7e3e857c0cff1a4"},
{file = "coverage-7.5.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:1aef719b6559b521ae913ddeb38f5048c6d1a3d366865e8b320270b7bc4693c2"},
{file = "coverage-7.5.2-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:8809c0ea0e8454f756e3bd5c36d04dddf222989216788a25bfd6724bfcee342c"},
{file = "coverage-7.5.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:1acc2e2ef098a1d4bf535758085f508097316d738101a97c3f996bccba963ea5"},
{file = "coverage-7.5.2-cp312-cp312-win32.whl", hash = "sha256:97de509043d3f0f2b2cd171bdccf408f175c7f7a99d36d566b1ae4dd84107985"},
{file = "coverage-7.5.2-cp312-cp312-win_amd64.whl", hash = "sha256:8941e35a0e991a7a20a1fa3e3182f82abe357211f2c335a9e6007067c3392fcf"},
{file = "coverage-7.5.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5662bf0f6fb6757f5c2d6279c541a5af55a39772c2362ed0920b27e3ce0e21f7"},
{file = "coverage-7.5.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:3d9c62cff2ffb4c2a95328488fd7aa96a7a4b34873150650fe76b19c08c9c792"},
{file = "coverage-7.5.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:74eeaa13e8200ad72fca9c5f37395fb310915cec6f1682b21375e84fd9770e84"},
{file = "coverage-7.5.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1f29bf497d51a5077994b265e976d78b09d9d0dff6ca5763dbb4804534a5d380"},
{file = "coverage-7.5.2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1f96aa94739593ae0707eda9813ce363a0a0374a810ae0eced383340fc4a1f73"},
{file = "coverage-7.5.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:51b6cee539168a912b4b3b040e4042b9e2c9a7ad9c8546c09e4eaeff3eacba6b"},
{file = "coverage-7.5.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:59a75e6aa5c25b50b5a1499f9718f2edff54257f545718c4fb100f48d570ead4"},
{file = "coverage-7.5.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:29da75ce20cb0a26d60e22658dd3230713c6c05a3465dd8ad040ffc991aea318"},
{file = "coverage-7.5.2-cp38-cp38-win32.whl", hash = "sha256:23f2f16958b16152b43a39a5ecf4705757ddd284b3b17a77da3a62aef9c057ef"},
{file = "coverage-7.5.2-cp38-cp38-win_amd64.whl", hash = "sha256:9e41c94035e5cdb362beed681b58a707e8dc29ea446ea1713d92afeded9d1ddd"},
{file = "coverage-7.5.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:06d96b9b19bbe7f049c2be3c4f9e06737ec6d8ef8933c7c3a4c557ef07936e46"},
{file = "coverage-7.5.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:878243e1206828908a6b4a9ca7b1aa8bee9eb129bf7186fc381d2646f4524ce9"},
{file = "coverage-7.5.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:482df956b055d3009d10fce81af6ffab28215d7ed6ad4a15e5c8e67cb7c5251c"},
{file = "coverage-7.5.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a35c97af60a5492e9e89f8b7153fe24eadfd61cb3a2fb600df1a25b5dab34b7e"},
{file = "coverage-7.5.2-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:24bb4c7859a3f757a116521d4d3a8a82befad56ea1bdacd17d6aafd113b0071e"},
{file = "coverage-7.5.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:e1046aab24c48c694f0793f669ac49ea68acde6a0798ac5388abe0a5615b5ec8"},
{file = "coverage-7.5.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:448ec61ea9ea7916d5579939362509145caaecf03161f6f13e366aebb692a631"},
{file = "coverage-7.5.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:4a00bd5ba8f1a4114720bef283cf31583d6cb1c510ce890a6da6c4268f0070b7"},
{file = "coverage-7.5.2-cp39-cp39-win32.whl", hash = "sha256:9f805481d5eff2a96bac4da1570ef662bf970f9a16580dc2c169c8c3183fa02b"},
{file = "coverage-7.5.2-cp39-cp39-win_amd64.whl", hash = "sha256:2c79f058e7bec26b5295d53b8c39ecb623448c74ccc8378631f5cb5c16a7e02c"},
{file = "coverage-7.5.2-pp38.pp39.pp310-none-any.whl", hash = "sha256:40dbb8e7727560fe8ab65efcddfec1ae25f30ef02e2f2e5d78cfb52a66781ec5"},
{file = "coverage-7.5.2.tar.gz", hash = "sha256:13017a63b0e499c59b5ba94a8542fb62864ba3016127d1e4ef30d354fc2b00e9"},
{file = "coverage-7.5.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:a6519d917abb15e12380406d721e37613e2a67d166f9fb7e5a8ce0375744cd45"},
{file = "coverage-7.5.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:aea7da970f1feccf48be7335f8b2ca64baf9b589d79e05b9397a06696ce1a1ec"},
{file = "coverage-7.5.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:923b7b1c717bd0f0f92d862d1ff51d9b2b55dbbd133e05680204465f454bb286"},
{file = "coverage-7.5.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:62bda40da1e68898186f274f832ef3e759ce929da9a9fd9fcf265956de269dbc"},
{file = "coverage-7.5.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d8b7339180d00de83e930358223c617cc343dd08e1aa5ec7b06c3a121aec4e1d"},
{file = "coverage-7.5.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:25a5caf742c6195e08002d3b6c2dd6947e50efc5fc2c2205f61ecb47592d2d83"},
{file = "coverage-7.5.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:05ac5f60faa0c704c0f7e6a5cbfd6f02101ed05e0aee4d2822637a9e672c998d"},
{file = "coverage-7.5.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:239a4e75e09c2b12ea478d28815acf83334d32e722e7433471fbf641c606344c"},
{file = "coverage-7.5.3-cp310-cp310-win32.whl", hash = "sha256:a5812840d1d00eafae6585aba38021f90a705a25b8216ec7f66aebe5b619fb84"},
{file = "coverage-7.5.3-cp310-cp310-win_amd64.whl", hash = "sha256:33ca90a0eb29225f195e30684ba4a6db05dbef03c2ccd50b9077714c48153cac"},
{file = "coverage-7.5.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f81bc26d609bf0fbc622c7122ba6307993c83c795d2d6f6f6fd8c000a770d974"},
{file = "coverage-7.5.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7cec2af81f9e7569280822be68bd57e51b86d42e59ea30d10ebdbb22d2cb7232"},
{file = "coverage-7.5.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:55f689f846661e3f26efa535071775d0483388a1ccfab899df72924805e9e7cd"},
{file = "coverage-7.5.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:50084d3516aa263791198913a17354bd1dc627d3c1639209640b9cac3fef5807"},
{file = "coverage-7.5.3-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:341dd8f61c26337c37988345ca5c8ccabeff33093a26953a1ac72e7d0103c4fb"},
{file = "coverage-7.5.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ab0b028165eea880af12f66086694768f2c3139b2c31ad5e032c8edbafca6ffc"},
{file = "coverage-7.5.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:5bc5a8c87714b0c67cfeb4c7caa82b2d71e8864d1a46aa990b5588fa953673b8"},
{file = "coverage-7.5.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:38a3b98dae8a7c9057bd91fbf3415c05e700a5114c5f1b5b0ea5f8f429ba6614"},
{file = "coverage-7.5.3-cp311-cp311-win32.whl", hash = "sha256:fcf7d1d6f5da887ca04302db8e0e0cf56ce9a5e05f202720e49b3e8157ddb9a9"},
{file = "coverage-7.5.3-cp311-cp311-win_amd64.whl", hash = "sha256:8c836309931839cca658a78a888dab9676b5c988d0dd34ca247f5f3e679f4e7a"},
{file = "coverage-7.5.3-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:296a7d9bbc598e8744c00f7a6cecf1da9b30ae9ad51c566291ff1314e6cbbed8"},
{file = "coverage-7.5.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:34d6d21d8795a97b14d503dcaf74226ae51eb1f2bd41015d3ef332a24d0a17b3"},
{file = "coverage-7.5.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e317953bb4c074c06c798a11dbdd2cf9979dbcaa8ccc0fa4701d80042d4ebf1"},
{file = "coverage-7.5.3-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:705f3d7c2b098c40f5b81790a5fedb274113373d4d1a69e65f8b68b0cc26f6db"},
{file = "coverage-7.5.3-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b1196e13c45e327d6cd0b6e471530a1882f1017eb83c6229fc613cd1a11b53cd"},
{file = "coverage-7.5.3-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:015eddc5ccd5364dcb902eaecf9515636806fa1e0d5bef5769d06d0f31b54523"},
{file = "coverage-7.5.3-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:fd27d8b49e574e50caa65196d908f80e4dff64d7e592d0c59788b45aad7e8b35"},
{file = "coverage-7.5.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:33fc65740267222fc02975c061eb7167185fef4cc8f2770267ee8bf7d6a42f84"},
{file = "coverage-7.5.3-cp312-cp312-win32.whl", hash = "sha256:7b2a19e13dfb5c8e145c7a6ea959485ee8e2204699903c88c7d25283584bfc08"},
{file = "coverage-7.5.3-cp312-cp312-win_amd64.whl", hash = "sha256:0bbddc54bbacfc09b3edaec644d4ac90c08ee8ed4844b0f86227dcda2d428fcb"},
{file = "coverage-7.5.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f78300789a708ac1f17e134593f577407d52d0417305435b134805c4fb135adb"},
{file = "coverage-7.5.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b368e1aee1b9b75757942d44d7598dcd22a9dbb126affcbba82d15917f0cc155"},
{file = "coverage-7.5.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f836c174c3a7f639bded48ec913f348c4761cbf49de4a20a956d3431a7c9cb24"},
{file = "coverage-7.5.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:244f509f126dc71369393ce5fea17c0592c40ee44e607b6d855e9c4ac57aac98"},
{file = "coverage-7.5.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c4c2872b3c91f9baa836147ca33650dc5c172e9273c808c3c3199c75490e709d"},
{file = "coverage-7.5.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:dd4b3355b01273a56b20c219e74e7549e14370b31a4ffe42706a8cda91f19f6d"},
{file = "coverage-7.5.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:f542287b1489c7a860d43a7d8883e27ca62ab84ca53c965d11dac1d3a1fab7ce"},
{file = "coverage-7.5.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:75e3f4e86804023e991096b29e147e635f5e2568f77883a1e6eed74512659ab0"},
{file = "coverage-7.5.3-cp38-cp38-win32.whl", hash = "sha256:c59d2ad092dc0551d9f79d9d44d005c945ba95832a6798f98f9216ede3d5f485"},
{file = "coverage-7.5.3-cp38-cp38-win_amd64.whl", hash = "sha256:fa21a04112c59ad54f69d80e376f7f9d0f5f9123ab87ecd18fbb9ec3a2beed56"},
{file = "coverage-7.5.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f5102a92855d518b0996eb197772f5ac2a527c0ec617124ad5242a3af5e25f85"},
{file = "coverage-7.5.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:d1da0a2e3b37b745a2b2a678a4c796462cf753aebf94edcc87dcc6b8641eae31"},
{file = "coverage-7.5.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8383a6c8cefba1b7cecc0149415046b6fc38836295bc4c84e820872eb5478b3d"},
{file = "coverage-7.5.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9aad68c3f2566dfae84bf46295a79e79d904e1c21ccfc66de88cd446f8686341"},
{file = "coverage-7.5.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2e079c9ec772fedbade9d7ebc36202a1d9ef7291bc9b3a024ca395c4d52853d7"},
{file = "coverage-7.5.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:bde997cac85fcac227b27d4fb2c7608a2c5f6558469b0eb704c5726ae49e1c52"},
{file = "coverage-7.5.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:990fb20b32990b2ce2c5f974c3e738c9358b2735bc05075d50a6f36721b8f303"},
{file = "coverage-7.5.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:3d5a67f0da401e105753d474369ab034c7bae51a4c31c77d94030d59e41df5bd"},
{file = "coverage-7.5.3-cp39-cp39-win32.whl", hash = "sha256:e08c470c2eb01977d221fd87495b44867a56d4d594f43739a8028f8646a51e0d"},
{file = "coverage-7.5.3-cp39-cp39-win_amd64.whl", hash = "sha256:1d2a830ade66d3563bb61d1e3c77c8def97b30ed91e166c67d0632c018f380f0"},
{file = "coverage-7.5.3-pp38.pp39.pp310-none-any.whl", hash = "sha256:3538d8fb1ee9bdd2e2692b3b18c22bb1c19ffbefd06880f5ac496e42d7bb3884"},
{file = "coverage-7.5.3.tar.gz", hash = "sha256:04aefca5190d1dc7a53a4c1a5a7f8568811306d7a8ee231c42fb69215571944f"},
]
[package.dependencies]
@@ -1807,13 +1807,13 @@ gmpy2 = ["gmpy2"]
[[package]]
name = "elastic-transport"
version = "8.13.0"
version = "8.13.1"
description = "Transport classes and utilities shared among Python Elastic client libraries"
optional = false
python-versions = ">=3.7"
files = [
{file = "elastic-transport-8.13.0.tar.gz", hash = "sha256:2410ec1ff51221e8b3a01c0afa9f0d0498e1386a269283801f5c12f98e42dc45"},
{file = "elastic_transport-8.13.0-py3-none-any.whl", hash = "sha256:aec890afdddd057762b27ff3553b0be8fa4673ec1a4fd922dfbd00325874bb3d"},
{file = "elastic_transport-8.13.1-py3-none-any.whl", hash = "sha256:5d4bb6b8e9d74a9c16de274e91a5caf65a3a8d12876f1e99152975e15b2746fe"},
{file = "elastic_transport-8.13.1.tar.gz", hash = "sha256:16339d392b4bbe86ad00b4bdeecff10edf516d32bc6c16053846625f2c6ea250"},
]
[package.dependencies]
@@ -2539,13 +2539,13 @@ grpcio-gcp = ["grpcio-gcp (>=0.2.2,<1.0.dev0)"]
[[package]]
name = "google-api-python-client"
version = "2.130.0"
version = "2.131.0"
description = "Google API Client Library for Python"
optional = false
python-versions = ">=3.7"
files = [
{file = "google-api-python-client-2.130.0.tar.gz", hash = "sha256:2bba3122b82a649c677b8a694b8e2bbf2a5fbf3420265caf3343bb88e2e9f0ae"},
{file = "google_api_python_client-2.130.0-py2.py3-none-any.whl", hash = "sha256:7d45a28d738628715944a9c9d73e8696e7e03ac50b7de87f5e3035cefa94ed3a"},
{file = "google-api-python-client-2.131.0.tar.gz", hash = "sha256:1c03e24af62238a8817ecc24e9d4c32ddd4cb1f323b08413652d9a9a592fc00d"},
{file = "google_api_python_client-2.131.0-py2.py3-none-any.whl", hash = "sha256:e325409bdcef4604d505d9246ce7199960a010a0569ac503b9f319db8dbdc217"},
]
[package.dependencies]
@@ -4323,7 +4323,7 @@ types-requests = ">=2.31.0.2,<3.0.0.0"
[[package]]
name = "langflow-base"
version = "0.0.49"
version = "0.0.50"
description = "A Python package with a built-in web application"
optional = false
python-versions = ">=3.10,<3.13"
@@ -4419,13 +4419,13 @@ requests = ">=2,<3"
[[package]]
name = "litellm"
version = "1.38.10"
version = "1.38.11"
description = "Library to easily interface with LLM API providers"
optional = false
python-versions = "!=2.7.*,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,!=3.7.*,>=3.8"
files = [
{file = "litellm-1.38.10-py3-none-any.whl", hash = "sha256:4d33465eacde566832b9d7aa7677476e61aa7ba4ec26631fb1c8411c87219ed1"},
{file = "litellm-1.38.10.tar.gz", hash = "sha256:1a0b3088fe4b072f367343a7d7d25e4c5f9990975d9ee7dbf21f3b25ff046bb0"},
{file = "litellm-1.38.11-py3-none-any.whl", hash = "sha256:772e0f3b2a78184bbfe221004ad031939b7d76d5be0e5537effdf8afb82e8ac0"},
{file = "litellm-1.38.11.tar.gz", hash = "sha256:5c65b707e4779000913c54da144ad93e7ef4ca0f25215ae386fe2cadd41bf744"},
]
[package.dependencies]
@@ -6257,13 +6257,13 @@ twisted = ["twisted"]
[[package]]
name = "prompt-toolkit"
version = "3.0.43"
version = "3.0.45"
description = "Library for building powerful interactive command lines in Python"
optional = false
python-versions = ">=3.7.0"
files = [
{file = "prompt_toolkit-3.0.43-py3-none-any.whl", hash = "sha256:a11a29cb3bf0a28a387fe5122cdb649816a957cd9261dcedf8c9f1fef33eacf6"},
{file = "prompt_toolkit-3.0.43.tar.gz", hash = "sha256:3527b7af26106cbc65a040bcc84839a3566ec1b051bb0bfe953631e704b0ff7d"},
{file = "prompt_toolkit-3.0.45-py3-none-any.whl", hash = "sha256:a29b89160e494e3ea8622b09fa5897610b437884dcdcd054fdc1308883326c2a"},
{file = "prompt_toolkit-3.0.45.tar.gz", hash = "sha256:07c60ee4ab7b7e90824b61afa840c8f5aad2d46b3e2e10acc33d8ecc94a49089"},
]
[package.dependencies]
@@ -8343,30 +8343,30 @@ test = ["pylint", "pytest", "pytest-black", "pytest-cov", "pytest-pylint"]
[[package]]
name = "structlog"
version = "24.1.0"
version = "24.2.0"
description = "Structured Logging for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "structlog-24.1.0-py3-none-any.whl", hash = "sha256:3f6efe7d25fab6e86f277713c218044669906537bb717c1807a09d46bca0714d"},
{file = "structlog-24.1.0.tar.gz", hash = "sha256:41a09886e4d55df25bdcb9b5c9674bccfab723ff43e0a86a1b7b236be8e57b16"},
{file = "structlog-24.2.0-py3-none-any.whl", hash = "sha256:983bd49f70725c5e1e3867096c0c09665918936b3db27341b41d294283d7a48a"},
{file = "structlog-24.2.0.tar.gz", hash = "sha256:0e3fe74924a6d8857d3f612739efb94c72a7417d7c7c008d12276bca3b5bf13b"},
]
[package.extras]
dev = ["structlog[tests,typing]"]
dev = ["freezegun (>=0.2.8)", "mypy (>=1.4)", "pretend", "pytest (>=6.0)", "pytest-asyncio (>=0.17)", "rich", "simplejson", "twisted"]
docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-mermaid", "sphinxext-opengraph", "twisted"]
tests = ["freezegun (>=0.2.8)", "pretend", "pytest (>=6.0)", "pytest-asyncio (>=0.17)", "simplejson"]
typing = ["mypy (>=1.4)", "rich", "twisted"]
[[package]]
name = "supabase"
version = "2.4.6"
version = "2.5.0"
description = "Supabase client for Python."
optional = false
python-versions = "<4.0,>=3.8"
files = [
{file = "supabase-2.4.6-py3-none-any.whl", hash = "sha256:0bfd6bb33c0e6d6891b55caaf689140f47588b01436fecc336d1d75090c70e8b"},
{file = "supabase-2.4.6.tar.gz", hash = "sha256:442be0729f5fd9258326ba89859f60bfd8d9218283ed7fd8a62ae81e2f310474"},
{file = "supabase-2.5.0-py3-none-any.whl", hash = "sha256:13e5ed9e9377a1a69e70ad18ed7b82997cf13ffcd28173952f7503e4d5067771"},
{file = "supabase-2.5.0.tar.gz", hash = "sha256:133dc832dfdd617f2f90ac5b52664df96ac8a9302ac6656ee769dc3f545812f0"},
]
[package.dependencies]

View file

@@ -1,6 +1,6 @@
[tool.poetry]
name = "langflow"
version = "1.0.0a38"
version = "1.0.0a39"
description = "A Python package with a built-in web application"
authors = ["Langflow <contact@langflow.org>"]
maintainers = [

View file

@@ -77,7 +77,7 @@ def set_var_for_macos_issue():
def run(
host: str = typer.Option("127.0.0.1", help="Host to bind the server to.", envvar="LANGFLOW_HOST"),
workers: int = typer.Option(1, help="Number of worker processes.", envvar="LANGFLOW_WORKERS"),
timeout: int = typer.Option(300, help="Worker timeout in seconds."),
timeout: int = typer.Option(300, help="Worker timeout in seconds.", envvar="LANGFLOW_WORKER_TIMEOUT"),
port: int = typer.Option(7860, help="Port to listen on.", envvar="LANGFLOW_PORT"),
components_path: Optional[Path] = typer.Option(
Path(__file__).parent / "components",
@@ -145,6 +145,10 @@ def run(
if is_port_in_use(port, host):
port = get_free_port(port)
settings_service = get_settings_service()
settings_service.set("worker_timeout", timeout)
options = {
"bind": f"{host}:{port}",
"workers": get_number_of_workers(workers),

View file

@@ -1,5 +1,5 @@
from http import HTTPStatus
from typing import Annotated, List, Optional, Union
from typing import TYPE_CHECKING, Annotated, List, Optional, Union
from uuid import UUID
import sqlalchemy as sa
@@ -9,6 +9,7 @@ from sqlmodel import Session, select
from langflow.api.utils import update_frontend_node_with_template_values
from langflow.api.v1.schemas import (
ConfigResponse,
CustomComponentRequest,
InputValueRequest,
ProcessResponse,
@@ -31,7 +32,9 @@ from langflow.services.deps import get_session, get_session_service, get_setting
from langflow.services.session.service import SessionService
from langflow.services.task.service import TaskService
# build router
if TYPE_CHECKING:
from langflow.services.settings.manager import SettingsService
router = APIRouter(tags=["Base"])
@@ -440,3 +443,15 @@ async def custom_component_update(
except Exception as exc:
logger.exception(exc)
raise HTTPException(status_code=400, detail=str(exc)) from exc
@router.get("/config", response_model=ConfigResponse)
def get_config():
try:
from langflow.services.deps import get_settings_service
settings_service: "SettingsService" = get_settings_service()
return settings_service.settings.model_dump()
except Exception as exc:
logger.exception(exc)
raise HTTPException(status_code=500, detail=str(exc)) from exc
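For reference, a client of the new `/config` endpoint might read the returned payload defensively. The sketch below is illustrative only — the helper name and the fallback default are assumptions, not part of this change:

```python
# Hypothetical consumer of the /config payload added above.
# Assumes the endpoint returns a flat JSON object such as {"frontend_timeout": 300, ...}.

def read_frontend_timeout(config: dict, default: int = 300) -> int:
    """Return the frontend timeout from a /config payload, falling back on bad input."""
    value = config.get("frontend_timeout", default)
    if not isinstance(value, int) or value <= 0:
        return default
    return value

print(read_frontend_timeout({"frontend_timeout": 120}))  # 120
print(read_frontend_timeout({}))  # 300
```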

View file

@@ -246,7 +246,7 @@ class VerticesOrderResponse(BaseModel):
class Log(TypedDict):
message: str
message: Union[dict, str]
type: str
@@ -322,3 +322,7 @@ class FlowDataRequest(BaseModel):
nodes: List[dict]
edges: List[dict]
viewport: Optional[dict] = None
class ConfigResponse(BaseModel):
frontend_timeout: int

View file

@@ -0,0 +1,67 @@
from typing import List
from langflow.graph.schema import ResultData, RunOutputs
from langflow.schema.schema import Record
def build_records_from_run_outputs(run_outputs: RunOutputs) -> List[Record]:
"""
Build a list of records from the given RunOutputs.
Args:
run_outputs (RunOutputs): The RunOutputs object containing the output data.
Returns:
List[Record]: A list of records built from the RunOutputs.
"""
if not run_outputs:
return []
records = []
for result_data in run_outputs.outputs:
if result_data:
records.extend(build_records_from_result_data(result_data))
return records
def build_records_from_result_data(result_data: ResultData, get_final_results_only: bool = True) -> List[Record]:
"""
Build a list of records from the given ResultData.
Args:
result_data (ResultData): The ResultData object containing the result data.
get_final_results_only (bool, optional): Whether to include only final results. Defaults to True.
Returns:
List[Record]: A list of records built from the ResultData.
"""
messages = result_data.messages
if not messages:
return []
records = []
for message in messages:
message_dict = message if isinstance(message, dict) else message.model_dump()
if get_final_results_only:
result_data_dict = result_data.model_dump()
results = result_data_dict.get("results", {})
inner_result = results.get("result", {})
record = Record(data={"result": inner_result, "message": message_dict}, text_key="result")
records.append(record)
return records
def format_flow_output_records(records: List[Record]) -> str:
"""
Format the flow output records into a string.
Args:
records (List[Record]): The list of records to format.
Returns:
str: The formatted flow output records.
"""
result = "Flow run output:\n"
results = "\n".join([record.result for record in records if record.data["message"]])
return result + results
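The record-building and formatting helpers above can be exercised end-to-end with a minimal stand-in for `Record` (the dataclass below is illustrative only, not the real `langflow.schema.schema.Record`):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Record:
    """Minimal stand-in for langflow.schema.schema.Record, for illustration only."""
    data: dict = field(default_factory=dict)
    text_key: str = "result"

    @property
    def result(self):
        # Mimics text_key-based access on the real Record
        return self.data.get(self.text_key)

def format_records(records: List[Record]) -> str:
    # Mirrors format_flow_output_records above: a header line, then one line
    # per record that carries a "message" entry in its data.
    body = "\n".join(str(r.result) for r in records if r.data.get("message"))
    return "Flow run output:\n" + body

records = [Record(data={"result": "hello", "message": {"text": "done"}})]
print(format_records(records))  # prints "Flow run output:" then "hello"
```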

View file

@@ -53,19 +53,28 @@ class LCModelComponent(CustomComponent):
key in response_metadata["token_usage"] for key in inner_openai_keys
):
token_usage = response_metadata["token_usage"]
completion_tokens = token_usage["completion_tokens"]
prompt_tokens = token_usage["prompt_tokens"]
total_tokens = token_usage["total_tokens"]
finish_reason = response_metadata["finish_reason"]
status_message = f"Tokens:\nInput: {prompt_tokens}\nOutput: {completion_tokens}\nTotal Tokens: {total_tokens}\nStop Reason: {finish_reason}\nResponse: {content}"
status_message = {
"tokens": {
"input": token_usage["prompt_tokens"],
"output": token_usage["completion_tokens"],
"total": token_usage["total_tokens"],
"stop_reason": response_metadata["finish_reason"],
"response": content,
}
}
elif all(key in response_metadata for key in anthropic_keys) and all(
key in response_metadata["usage"] for key in inner_anthropic_keys
):
usage = response_metadata["usage"]
input_tokens = usage["input_tokens"]
output_tokens = usage["output_tokens"]
stop_reason = response_metadata["stop_reason"]
status_message = f"Tokens:\nInput: {input_tokens}\nOutput: {output_tokens}\nStop Reason: {stop_reason}\nResponse: {content}"
status_message = {
"tokens": {
"input": usage["input_tokens"],
"output": usage["output_tokens"],
"stop_reason": response_metadata["stop_reason"],
"response": content,
}
}
else:
status_message = f"Response: {content}"
else:

View file

@@ -0,0 +1,117 @@
from typing import Any, List, Optional, Type
from asyncer import syncify
from langchain.tools import BaseTool
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import ToolException
from pydantic.v1 import BaseModel
from langflow.base.flow_processing.utils import build_records_from_result_data, format_flow_output_records
from langflow.graph.graph.base import Graph
from langflow.graph.vertex.base import Vertex
from langflow.helpers.flow import build_schema_from_inputs, get_arg_names, get_flow_inputs, run_flow
class FlowTool(BaseTool):
name: str
description: str
graph: Optional[Graph] = None
flow_id: Optional[str] = None
user_id: Optional[str] = None
inputs: List["Vertex"] = []
get_final_results_only: bool = True
@property
def args(self) -> dict:
schema = self.get_input_schema()
return schema.schema()["properties"]
def get_input_schema(self, config: Optional[RunnableConfig] = None) -> Type[BaseModel]:
"""The tool's input schema."""
if self.args_schema is not None:
return self.args_schema
elif self.graph is not None:
return build_schema_from_inputs(self.name, get_flow_inputs(self.graph))
else:
raise ToolException("No input schema available.")
def _run(
self,
*args: Any,
**kwargs: Any,
) -> str:
"""Use the tool."""
args_names = get_arg_names(self.inputs)
if len(args_names) == len(args):
kwargs = {arg["arg_name"]: arg_value for arg, arg_value in zip(args_names, args)}
elif len(args_names) != len(args) and len(args) != 0:
raise ToolException(
"Number of arguments does not match the number of inputs. Pass keyword arguments instead."
)
tweaks = {arg["component_name"]: kwargs[arg["arg_name"]] for arg in args_names}
run_outputs = syncify(run_flow, raise_sync_error=False)(
tweaks={key: {"input_value": value} for key, value in tweaks.items()},
flow_id=self.flow_id,
user_id=self.user_id,
)
if not run_outputs:
return "No output"
run_output = run_outputs[0]
records = []
if run_output is not None:
for output in run_output.outputs:
if output:
records.extend(
build_records_from_result_data(output, get_final_results_only=self.get_final_results_only)
)
return format_flow_output_records(records)
def validate_inputs(self, args_names: List[dict[str, str]], args: Any, kwargs: Any):
"""Validate the inputs."""
if len(args) > 0 and len(args) != len(args_names):
raise ToolException(
"Number of positional arguments does not match the number of inputs. Pass keyword arguments instead."
)
if len(args) == len(args_names):
kwargs = {arg_name["arg_name"]: arg_value for arg_name, arg_value in zip(args_names, args)}
missing_args = [arg["arg_name"] for arg in args_names if arg["arg_name"] not in kwargs]
if missing_args:
raise ToolException(f"Missing required arguments: {', '.join(missing_args)}")
return kwargs
def build_tweaks_dict(self, args, kwargs):
args_names = get_arg_names(self.inputs)
kwargs = self.validate_inputs(args_names=args_names, args=args, kwargs=kwargs)
tweaks = {arg["component_name"]: kwargs[arg["arg_name"]] for arg in args_names}
return tweaks
async def _arun(
self,
*args: Any,
**kwargs: Any,
) -> str:
"""Use the tool asynchronously."""
tweaks = self.build_tweaks_dict(args, kwargs)
run_outputs = await run_flow(
tweaks={key: {"input_value": value} for key, value in tweaks.items()},
flow_id=self.flow_id,
user_id=self.user_id,
)
if not run_outputs:
return "No output"
run_output = run_outputs[0]
records = []
if run_output is not None:
for output in run_output.outputs:
if output:
records.extend(
build_records_from_result_data(output, get_final_results_only=self.get_final_results_only)
)
return format_flow_output_records(records)
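The positional-to-keyword mapping that `_run` and `validate_inputs` perform can be sketched in isolation (simplified: plain argument-name strings stand in for the `arg_name`/`component_name` dicts used above):

```python
from typing import Any, Dict, List

def map_args(arg_names: List[str], args: tuple, kwargs: Dict[str, Any]) -> Dict[str, Any]:
    """Simplified analogue of FlowTool.validate_inputs: map positional args onto named inputs."""
    if args and len(args) != len(arg_names):
        raise ValueError("Number of positional arguments does not match the number of inputs.")
    if args:
        # All inputs supplied positionally: pair them up in order
        kwargs = dict(zip(arg_names, args))
    missing = [name for name in arg_names if name not in kwargs]
    if missing:
        raise ValueError(f"Missing required arguments: {', '.join(missing)}")
    return kwargs

print(map_args(["query", "topic"], ("q1", "t1"), {}))  # {'query': 'q1', 'topic': 't1'}
```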

View file

@@ -1,14 +1,14 @@
from typing import Any, List, Optional
from asyncer import syncify
from langchain_core.tools import StructuredTool
from loguru import logger
from langflow.base.tools.flow_tool import FlowTool
from langflow.custom import CustomComponent
from langflow.field_typing import Tool
from langflow.graph.graph.base import Graph
from langflow.helpers.flow import build_function_and_schema
from langflow.helpers.flow import get_flow_inputs
from langflow.schema.dotdict import dotdict
from langflow.schema.schema import Record
from loguru import logger
class FlowToolComponent(CustomComponent):
@@ -68,18 +68,20 @@ }
}
async def build(self, flow_name: str, name: str, description: str, return_direct: bool = False) -> Tool:
FlowTool.update_forward_refs()
flow_record = self.get_flow(flow_name)
if not flow_record:
raise ValueError("Flow not found.")
graph = Graph.from_payload(flow_record.data["data"])
dynamic_flow_function, schema = build_function_and_schema(flow_record, graph)
tool = StructuredTool.from_function(
func=syncify(dynamic_flow_function, raise_sync_error=False), # type: ignore
coroutine=dynamic_flow_function,
inputs = get_flow_inputs(graph)
tool = FlowTool(
name=name,
description=description,
graph=graph,
return_direct=return_direct,
args_schema=schema,
inputs=inputs,
flow_id=str(flow_record.id),
user_id=str(self._user_id),
)
description_repr = repr(tool.description).strip("'")
args_str = "\n".join([f"- {arg_name}: {arg_data['description']}" for arg_name, arg_data in tool.args.items()])

View file

@@ -1,8 +1,9 @@
from typing import Any, List, Optional
from langflow.base.flow_processing.utils import build_records_from_run_outputs
from langflow.custom import CustomComponent
from langflow.field_typing import NestedDict, Text
from langflow.graph.schema import ResultData
from langflow.graph.schema import RunOutputs
from langflow.schema import Record, dotdict
@@ -39,28 +40,17 @@ },
},
}
def build_records_from_result_data(self, result_data: ResultData) -> List[Record]:
messages = result_data.messages
if not messages:
return []
records = []
for message in messages:
message_dict = message if isinstance(message, dict) else message.model_dump()
record = Record(text=message_dict.get("text", ""), data={"result": result_data})
records.append(record)
return records
async def build(self, input_value: Text, flow_name: str, tweaks: NestedDict) -> List[Record]:
results: List[Optional[ResultData]] = await self.run_flow(
results: List[Optional[RunOutputs]] = await self.run_flow(
inputs={"input_value": input_value}, flow_name=flow_name, tweaks=tweaks
)
if isinstance(results, list):
records = []
for result in results:
if result:
records.extend(self.build_records_from_result_data(result))
records.extend(build_records_from_run_outputs(result))
else:
records = self.build_records_from_result_data(results)
records = build_records_from_run_outputs(results)
self.status = records
return records

View file

@@ -2,9 +2,10 @@ from typing import Any, List, Optional
from loguru import logger
from langflow.base.flow_processing.utils import build_records_from_result_data
from langflow.custom import CustomComponent
from langflow.graph.graph.base import Graph
from langflow.graph.schema import ResultData, RunOutputs
from langflow.graph.schema import RunOutputs
from langflow.graph.vertex.base import Vertex
from langflow.helpers.flow import get_flow_inputs
from langflow.schema import Record
@@ -92,21 +93,6 @@ },
},
}
def build_records_from_result_data(self, result_data: ResultData, get_final_results_only: bool) -> List[Record]:
messages = result_data.messages
if not messages:
return []
records = []
for message in messages:
message_dict = message if isinstance(message, dict) else message.model_dump()
if get_final_results_only:
result_data_dict = result_data.model_dump()
results = result_data_dict.get("results", {})
inner_result = results.get("result", {})
record = Record(data={"result": inner_result, "message": message_dict}, text_key="result")
records.append(record)
return records
async def build(self, flow_name: str, get_final_results_only: bool = True, **kwargs) -> List[Record]:
tweaks = {key: {"input_value": value} for key, value in kwargs.items()}
run_outputs: List[Optional[RunOutputs]] = await self.run_flow(
@@ -121,7 +107,7 @@ if run_output is not None:
if run_output is not None:
for output in run_output.outputs:
if output:
records.extend(self.build_records_from_result_data(output, get_final_results_only))
records.extend(build_records_from_result_data(output, get_final_results_only))
self.status = records
logger.debug(records)

View file

@@ -1,16 +1,21 @@
from typing import Dict, List, Optional
# from langchain_community.chat_models import ChatOllama
from langchain_community.chat_models import ChatOllama
from typing import Any, Dict, List, Optional, Union
from langchain_community.chat_models.ollama import ChatOllama
from langflow.base.constants import STREAM_INFO_TEXT
from langflow.base.models.model import LCModelComponent
from langchain_core.caches import BaseCache
# from langchain.chat_models import ChatOllama
from langflow.field_typing import Text
# When a callback component is added to Langflow, uncomment the import below.
# from langchain.callbacks.manager import CallbackManager
import asyncio
import json
import httpx
class ChatOllamaComponent(LCModelComponent):
@@ -20,11 +25,19 @@ field_order = [
field_order = [
"base_url",
"headers",
"keep_alive_flag",
"keep_alive",
"metadata",
"model",
"temperature",
"cache",
"callback_manager",
"callbacks",
"format",
"metadata",
"mirostat",
@@ -54,12 +67,41 @@ "base_url": {
"base_url": {
"display_name": "Base URL",
"info": "Endpoint of the Ollama API. Defaults to 'http://localhost:11434' if not specified.",
},
"format": {
"display_name": "Format",
"info": "Specify the format of the output (e.g., json)",
"advanced": True,
},
"headers": {
"display_name": "Headers",
"advanced": True,
},
"keep_alive_flag": {
"display_name": "Unload interval",
"options": ["Keep", "Immediately", "Minute", "Hour", "Second"],
"real_time_refresh": True,
"refresh_button": True,
},
"keep_alive": {
"display_name": "Interval",
"info": "How long the model will stay loaded into memory.",
},
"model": {
"display_name": "Model Name",
"value": "llama2",
"options": [],
"info": "Refer to https://ollama.ai/library for more models.",
"real_time_refresh": True,
"refresh_button": True,
},
"temperature": {
"display_name": "Temperature",
@@ -67,25 +109,8 @@ "value": 0.8,
"value": 0.8,
"info": "Controls the creativity of model responses.",
},
"cache": {
"display_name": "Cache",
"field_type": "bool",
"info": "Enable or disable caching.",
"advanced": True,
"value": False,
},
### When a callback component is added to Langflow, the comment must be uncommented. ###
# "callback_manager": {
# "display_name": "Callback Manager",
# "info": "Optional callback manager for additional functionality.",
# "advanced": True,
# },
# "callbacks": {
# "display_name": "Callbacks",
# "info": "Callbacks to execute during model runtime.",
# "advanced": True,
# },
########################################################################################
"format": {
"display_name": "Format",
"field_type": "str",
@@ -101,20 +126,24 @@ "display_name": "Mirostat",
"display_name": "Mirostat",
"options": ["Disabled", "Mirostat", "Mirostat 2.0"],
"info": "Enable/disable Mirostat sampling for controlling perplexity.",
"value": "Disabled",
"advanced": True,
"advanced": False,
"real_time_refresh": True,
"refresh_button": True,
},
"mirostat_eta": {
"display_name": "Mirostat Eta",
"field_type": "float",
"info": "Learning rate for Mirostat algorithm. (Default: 0.1)",
"advanced": True,
"real_time_refresh": True,
},
"mirostat_tau": {
"display_name": "Mirostat Tau",
"field_type": "float",
"info": "Controls the balance between coherence and diversity of the output. (Default: 5.0)",
"advanced": True,
"real_time_refresh": True,
},
"num_ctx": {
"display_name": "Context Window Size",
@@ -211,21 +240,74 @@ },
},
}
def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None):
if field_name == "mirostat":
if field_value == "Disabled":
build_config["mirostat_eta"]["advanced"] = True
build_config["mirostat_tau"]["advanced"] = True
build_config["mirostat_eta"]["value"] = None
build_config["mirostat_tau"]["value"] = None
else:
build_config["mirostat_eta"]["advanced"] = False
build_config["mirostat_tau"]["advanced"] = False
if field_value == "Mirostat 2.0":
build_config["mirostat_eta"]["value"] = 0.2
build_config["mirostat_tau"]["value"] = 10
else:
build_config["mirostat_eta"]["value"] = 0.1
build_config["mirostat_tau"]["value"] = 5
if field_name == "model":
    base_url = build_config.get("base_url", {}).get("value", "http://localhost:11434")
    build_config["model"]["options"] = self.get_model(base_url + "/api/tags")
if field_name == "keep_alive_flag":
if field_value == "Keep":
build_config["keep_alive"]["value"] = "-1"
build_config["keep_alive"]["advanced"] = True
elif field_value == "Immediately":
build_config["keep_alive"]["value"] = "0"
build_config["keep_alive"]["advanced"] = True
else:
build_config["keep_alive"]["advanced"] = False
return build_config
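The mirostat branch above toggles field visibility and default values together. Its behavior can be exercised as a standalone helper; the field names mirror the component's build config, and the nested-dict shape is an assumption for illustration:

```python
def apply_mirostat_choice(build_config: dict, choice: str) -> dict:
    """Show/hide the eta and tau fields and pick their defaults."""
    disabled = choice == "Disabled"
    for key in ("mirostat_eta", "mirostat_tau"):
        # "advanced" fields are tucked behind the advanced toggle in the UI
        build_config[key]["advanced"] = disabled
    if disabled:
        build_config["mirostat_eta"]["value"] = None
        build_config["mirostat_tau"]["value"] = None
    else:
        eta, tau = (0.2, 10) if choice == "Mirostat 2.0" else (0.1, 5)
        build_config["mirostat_eta"]["value"] = eta
        build_config["mirostat_tau"]["value"] = tau
    return build_config

config = {"mirostat_eta": {}, "mirostat_tau": {}}
print(apply_mirostat_choice(config, "Mirostat 2.0")["mirostat_eta"]["value"])  # 0.2
```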
def get_model(self, url: str) -> List[str]:
try:
with httpx.Client() as client:
response = client.get(url)
response.raise_for_status()
data = response.json()
model_names = [model['name']
for model in data.get("models", [])]
return model_names
except Exception as e:
raise ValueError("Could not retrieve models") from e
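The model list comes from Ollama's `/api/tags` endpoint; the parsing step can be checked on its own. The sample payload below is illustrative, shaped like Ollama's documented response:

```python
def parse_tags_response(data: dict) -> list:
    """Extract model names from an Ollama /api/tags JSON payload."""
    return [model["name"] for model in data.get("models", [])]

sample = {"models": [{"name": "llama3:latest"}, {"name": "mistral:7b"}]}
print(parse_tags_response(sample))  # ['llama3:latest', 'mistral:7b']
```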
def build(
self,
base_url: Optional[str],
model: str,
input_value: Text,
mirostat: Optional[str],
mirostat: Optional[str],
mirostat_eta: Optional[float] = None,
mirostat_tau: Optional[float] = None,
        ### When a callback component is added to Langflow, uncomment the lines below. ###
# callback_manager: Optional[CallbackManager] = None,
# callbacks: Optional[List[Callbacks]] = None,
#######################################################################################
repeat_last_n: Optional[int] = None,
verbose: Optional[bool] = None,
cache: Optional[bool] = None,
keep_alive: Optional[int] = None,
keep_alive_flag: Optional[str] = None,
num_ctx: Optional[int] = None,
num_gpu: Optional[int] = None,
format: Optional[str] = None,
@ -244,33 +326,39 @@ class ChatOllamaComponent(LCModelComponent):
stream: bool = False,
system_message: Optional[str] = None,
) -> Text:
if not base_url:
base_url = "http://localhost:11434"
# Mapping mirostat settings to their corresponding values
mirostat_options = {"Mirostat": 1, "Mirostat 2.0": 2}
# Default to 0 for 'Disabled'
mirostat_value = mirostat_options.get(mirostat, 0) # type: ignore
# Set mirostat_eta and mirostat_tau to None if mirostat is disabled
if mirostat_value == 0:
mirostat_eta = None
mirostat_tau = None
if keep_alive_flag == "Minute":
keep_alive_instance = f"{keep_alive}m"
elif keep_alive_flag == "Hour":
keep_alive_instance = f"{keep_alive}h"
elif keep_alive_flag == "sec":
keep_alive_instance = f"{keep_alive}s"
elif keep_alive_flag == "Keep":
keep_alive_instance = "-1"
elif keep_alive_flag == "Immediately":
keep_alive_instance = "0"
        else:
            raise ValueError(f"Unknown keep_alive option: {keep_alive_flag}")
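The keep-alive branches map a UI choice plus a number onto Ollama's duration syntax; the same mapping in a table-driven sketch (option names taken verbatim from the code above):

```python
def keep_alive_value(flag: str, amount) -> str:
    """Translate the UI keep-alive option into Ollama's duration string."""
    units = {"Minute": "m", "Hour": "h", "sec": "s"}
    if flag in units:
        return f"{amount}{units[flag]}"
    fixed = {"Keep": "-1", "Immediately": "0"}
    if flag in fixed:
        return fixed[flag]
    raise ValueError(f"Unknown keep_alive option: {flag}")

print(keep_alive_value("Minute", 10))  # 10m
print(keep_alive_value("Keep", None))  # -1
```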
        # Map the UI option to Ollama's integer mirostat setting (0 = disabled)
        mirostat_instance = {"Mirostat": 1, "Mirostat 2.0": 2}.get(mirostat or "", 0)
# Mapping system settings to their corresponding values
llm_params = {
"base_url": base_url,
"cache": cache,
"model": model,
"mirostat": mirostat_value,
"mirostat": mirostat_instance,
"keep_alive": keep_alive_instance,
"format": format,
"metadata": metadata,
"tags": tags,
            ## When a callback component is added to Langflow, uncomment the lines below. ##
# "callback_manager": callback_manager,
# "callbacks": callbacks,
#####################################################################################
"mirostat_eta": mirostat_eta,
"mirostat_tau": mirostat_tau,
"num_ctx": num_ctx,

@ -1,5 +1,6 @@
from typing import List, Optional
import chromadb
from chromadb.config import Settings
from langchain_chroma import Chroma
@ -91,7 +92,7 @@ class ChromaSearchComponent(LCVectorStoreComponent):
# Chroma settings
chroma_settings = None
client = None
if chroma_server_host is not None:
chroma_settings = Settings(
chroma_server_cors_allow_origins=chroma_server_cors_allow_origins or [],
@ -100,13 +101,14 @@ class ChromaSearchComponent(LCVectorStoreComponent):
chroma_server_grpc_port=chroma_server_grpc_port or None,
chroma_server_ssl_enabled=chroma_server_ssl_enabled,
)
client = chromadb.HttpClient(settings=chroma_settings)
if index_directory:
index_directory = self.resolve_path(index_directory)
vector_store = Chroma(
embedding_function=embedding,
collection_name=collection_name,
persist_directory=index_directory,
client_settings=chroma_settings,
client=client,
)
return self.search_with_vector_store(input_value, search_type, vector_store, k=number_of_results)

@ -1,5 +1,6 @@
from typing import List, Optional, Union
import chromadb
from chromadb.config import Settings
from langchain_chroma import Chroma
from langchain_core.embeddings import Embeddings
@ -81,7 +82,7 @@ class ChromaComponent(CustomComponent):
# Chroma settings
chroma_settings = None
client = None
if chroma_server_host is not None:
chroma_settings = Settings(
chroma_server_cors_allow_origins=chroma_server_cors_allow_origins or [],
@ -90,6 +91,7 @@ class ChromaComponent(CustomComponent):
chroma_server_grpc_port=chroma_server_grpc_port or None,
chroma_server_ssl_enabled=chroma_server_ssl_enabled,
)
client = chromadb.HttpClient(settings=chroma_settings)
# If documents, then we need to create a Chroma instance using .from_documents
@ -111,12 +113,12 @@ class ChromaComponent(CustomComponent):
persist_directory=index_directory,
collection_name=collection_name,
embedding=embedding,
client_settings=chroma_settings,
client=client,
)
else:
chroma = Chroma(
persist_directory=index_directory,
client_settings=chroma_settings,
client=client,
embedding_function=embedding,
)
return chroma
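The client selection in both Chroma components follows one rule: a remote `HttpClient` when a server host is configured, otherwise local persistence. That decision, isolated from the `chromadb` objects themselves, reduces to:

```python
def choose_chroma_target(server_host, index_directory):
    """Pick a connection mode: a remote HTTP server wins over a local dir."""
    if server_host is not None:
        return ("http", server_host)
    if index_directory:
        return ("local", index_directory)
    return ("ephemeral", None)

print(choose_chroma_target(None, "./chroma"))  # ('local', './chroma')
```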

@ -734,7 +734,7 @@ class Graph:
await vertex.build(user_id=user_id, inputs=inputs_dict,files=files, fallback_to_env_vars=fallback_to_env_vars)
if vertex.result is not None:
params = vertex._built_object_repr()
params = vertex.artifacts_raw
log_type = vertex.artifacts_type
valid = True
result_dict = vertex.result

@ -19,6 +19,7 @@ class UnbuiltResult:
class ArtifactType(str, Enum):
TEXT = "text"
RECORD = "record"
OBJECT = "object"
UNKNOWN = "unknown"
@ -60,16 +61,16 @@ def serialize_field(value):
return value
def get_artifact_type(build_result: Any) -> str:
result = None
match build_result:
def get_artifact_type(value: Any) -> str:
result = ArtifactType.UNKNOWN
match value:
case Record():
result = ArtifactType.RECORD
case str():
result = ArtifactType.TEXT
case _:
result = ArtifactType.UNKNOWN
case dict():
result = ArtifactType.OBJECT
return result.value

@ -63,6 +63,7 @@ class Vertex:
self._built_result = None
self._built = False
self.artifacts: Dict[str, Any] = {}
self.artifacts_raw: Any = None
self.artifacts_type: Optional[str] = None
self.steps: List[Callable] = [self._build]
self.steps_ran: List[Callable] = []
@ -626,6 +627,7 @@ class Vertex:
self._built_object, self.artifacts = result
elif len(result) == 3:
self._custom_component, self._built_object, self.artifacts = result
self.artifacts_raw = self.artifacts.get("raw")
self.artifacts_type = self.artifacts.get("type") or ArtifactType.UNKNOWN.value
else:

@ -1,10 +1,13 @@
from typing import TYPE_CHECKING, Any, Awaitable, Callable, List, Optional, Tuple, Type, Union, cast
from uuid import UUID
from pydantic.v1 import BaseModel, Field, create_model
from sqlmodel import select
from langflow.graph.schema import RunOutputs
from langflow.schema.schema import INPUT_FIELD_NAME, Record
from langflow.services.database.models.flow.model import Flow
from langflow.services.deps import session_scope
from pydantic.v1 import BaseModel, Field, create_model
from sqlmodel import select
if TYPE_CHECKING:
from langflow.graph.graph.base import Graph
@ -51,7 +54,7 @@ async def load_flow(
raise ValueError(f"Flow {flow_id} not found")
if tweaks:
graph_data = process_tweaks(graph_data=graph_data, tweaks=tweaks)
graph = Graph.from_payload(graph_data, flow_id=flow_id)
graph = Graph.from_payload(graph_data, flow_id=flow_id, user_id=user_id)
return graph
@ -67,25 +70,29 @@ async def run_flow(
flow_id: Optional[str] = None,
flow_name: Optional[str] = None,
user_id: Optional[str] = None,
) -> Any:
) -> List[RunOutputs]:
if user_id is None:
raise ValueError("Session is invalid")
graph = await load_flow(user_id, flow_id, flow_name, tweaks)
if inputs is None:
inputs = []
if isinstance(inputs, dict):
inputs = [inputs]
inputs_list = []
inputs_components = []
types = []
for input_dict in inputs:
inputs_list.append({INPUT_FIELD_NAME: cast(str, input_dict.get("input_value"))})
inputs_components.append(input_dict.get("components", []))
types.append(input_dict.get("type", []))
types.append(input_dict.get("type", "chat"))
return await graph.arun(inputs_list, inputs_components=inputs_components, types=types)
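The input normalization in `run_flow` (a bare dict promoted to a one-element list, a missing type defaulting to `"chat"`) can be isolated from the graph machinery; `INPUT_FIELD_NAME` is assumed here to be the literal `"input_value"` key:

```python
INPUT_FIELD_NAME = "input_value"  # assumed value of the Langflow constant

def normalize_inputs(inputs):
    """Return parallel lists of input dicts, component filters, and types."""
    if inputs is None:
        inputs = []
    if isinstance(inputs, dict):
        inputs = [inputs]  # a single input is promoted to a list
    inputs_list, inputs_components, types = [], [], []
    for input_dict in inputs:
        inputs_list.append({INPUT_FIELD_NAME: input_dict.get("input_value")})
        inputs_components.append(input_dict.get("components", []))
        types.append(input_dict.get("type", "chat"))
    return inputs_list, inputs_components, types

print(normalize_inputs({"input_value": "hi"}))
# ([{'input_value': 'hi'}], [[]], ['chat'])
```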
def generate_function_for_flow(inputs: List["Vertex"], flow_id: str) -> Callable[..., Awaitable[Any]]:
def generate_function_for_flow(
inputs: List["Vertex"], flow_id: str, user_id: str | UUID | None
) -> Callable[..., Awaitable[Any]]:
"""
Generate a dynamic flow function based on the given inputs and flow ID.
@ -129,11 +136,23 @@ async def flow_function({func_args}):
tweaks = {{ {arg_mappings} }}
from langflow.helpers.flow import run_flow
from langchain_core.tools import ToolException
from langflow.base.flow_processing.utils import build_records_from_result_data, format_flow_output_records
try:
return await run_flow(
run_outputs = await run_flow(
tweaks={{key: {{'input_value': value}} for key, value in tweaks.items()}},
flow_id="{flow_id}",
user_id="{user_id}"
)
if not run_outputs:
return []
run_output = run_outputs[0]
records = []
if run_output is not None:
for output in run_output.outputs:
if output:
records.extend(build_records_from_result_data(output, get_final_results_only=True))
return format_flow_output_records(records)
    except Exception as e:
        raise ToolException("Error running flow: " + str(e))
"""
@ -145,7 +164,7 @@ async def flow_function({func_args}):
def build_function_and_schema(
flow_record: Record, graph: "Graph"
flow_record: Record, graph: "Graph", user_id: str | UUID | None
) -> Tuple[Callable[..., Awaitable[Any]], Type[BaseModel]]:
"""
Builds a dynamic function and schema for a given flow.
@ -159,7 +178,7 @@ def build_function_and_schema(
"""
flow_id = flow_record.id
inputs = get_flow_inputs(graph)
dynamic_flow_function = generate_function_for_flow(inputs, flow_id)
dynamic_flow_function = generate_function_for_flow(inputs, flow_id, user_id=user_id)
schema = build_schema_from_inputs(flow_record.name, inputs)
return dynamic_flow_function, schema
@ -200,3 +219,19 @@ def build_schema_from_inputs(name: str, inputs: List["Vertex"]) -> Type[BaseMode
description = input_.description
fields[field_name] = (str, Field(default="", description=description))
return create_model(name, **fields) # type: ignore
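The same dynamic-schema idea can be sketched with the stdlib's `make_dataclass`; the component itself uses pydantic's `create_model`, but the field-name derivation (lowercase, spaces to underscores, empty-string default) mirrors the code above:

```python
from dataclasses import field, make_dataclass

def build_schema(name: str, display_names: list) -> type:
    """Build a class with one string field per input, defaulting to ""."""
    fields = [
        (dn.lower().replace(" ", "_"), str, field(default=""))
        for dn in display_names
    ]
    return make_dataclass(name, fields)

Schema = build_schema("FlowInputs", ["Chat Input", "File Loader"])
s = Schema(chat_input="hello")
print(s.chat_input, repr(s.file_loader))  # hello ''
```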
def get_arg_names(inputs: List["Vertex"]) -> List[dict[str, str]]:
"""
Returns a list of dictionaries containing the component name and its corresponding argument name.
Args:
inputs (List[Vertex]): A list of Vertex objects representing the inputs.
Returns:
List[dict[str, str]]: A list of dictionaries, where each dictionary contains the component name and its argument name.
"""
return [
{"component_name": input_.display_name, "arg_name": input_.display_name.lower().replace(" ", "_")}
for input_ in inputs
]
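`get_arg_names` only needs a `display_name` attribute on each input, so a `SimpleNamespace` stands in for `Vertex` in a quick check:

```python
from types import SimpleNamespace

def get_arg_names(inputs):
    return [
        {"component_name": i.display_name,
         "arg_name": i.display_name.lower().replace(" ", "_")}
        for i in inputs
    ]

vertices = [SimpleNamespace(display_name="Chat Input")]
print(get_arg_names(vertices))
# [{'component_name': 'Chat Input', 'arg_name': 'chat_input'}]
```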

@ -126,5 +126,13 @@ async def instantiate_custom_component(params, user_id, vertex, fallback_to_env_
custom_repr = build_result
if not isinstance(custom_repr, str):
custom_repr = str(custom_repr)
artifact = {"repr": custom_repr, "type": get_artifact_type(build_result)}
raw = custom_component.repr_value
if hasattr(raw, "data"):
raw = raw.data
elif hasattr(raw, "model_dump"):
raw = raw.model_dump()
artifact = {"repr": custom_repr, "raw": raw, "type": get_artifact_type(custom_component.repr_value)}
return custom_component, build_result, artifact

@ -104,6 +104,10 @@ class Settings(BaseSettings):
"""Whether to store environment variables as Global Variables in the database."""
variables_to_get_from_environment: list[str] = VARIABLES_TO_GET_FROM_ENVIRONMENT
"""List of environment variables to get from the environment and store in the database."""
worker_timeout: int = 300
    """Timeout for worker API calls in seconds."""
frontend_timeout: int = 0
"""Timeout for the frontend API calls in seconds."""
@field_validator("config_dir", mode="before")
def set_langflow_dir(cls, value):

@ -42,3 +42,7 @@ class SettingsService(Service):
CONFIG_DIR=settings.config_dir,
)
return cls(settings, auth_settings)
def set(self, key, value):
setattr(self.settings, key, value)
return self.settings
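The new `set` method is a thin `setattr` wrapper that returns the mutated settings object; its contract in isolation, with a bare namespace standing in for the real settings:

```python
from types import SimpleNamespace

class SettingsService:
    def __init__(self, settings):
        self.settings = settings

    def set(self, key, value):
        # Mutate one settings attribute and hand back the whole object
        setattr(self.settings, key, value)
        return self.settings

svc = SettingsService(SimpleNamespace(worker_timeout=300))
updated = svc.set("worker_timeout", 600)
print(updated.worker_timeout)  # 600
```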

@ -1,6 +1,6 @@
[tool.poetry]
name = "langflow-base"
version = "0.0.49"
version = "0.0.50"
description = "A Python package with a built-in web application"
authors = ["Langflow <contact@langflow.org>"]
maintainers = [

@ -1,3 +1,4 @@
import axios from "axios";
import { useContext, useEffect, useState } from "react";
import { ErrorBoundary } from "react-error-boundary";
import { useNavigate } from "react-router-dom";
@ -15,6 +16,7 @@ import {
} from "./constants/constants";
import { AuthContext } from "./contexts/authContext";
import { autoLogin, getGlobalVariables, getHealth } from "./controllers/API";
import { setupAxiosDefaults } from "./controllers/API/utils";
import useTrackLastVisitedPath from "./hooks/use-track-last-visited-path";
import Router from "./routes";
import useAlertStore from "./stores/alertStore";
@ -114,9 +116,11 @@ export default function App() {
return new Promise<void>(async (resolve, reject) => {
if (isAuthenticated) {
try {
await setupAxiosDefaults();
await getFoldersApi();
await getTypes();
await refreshFlows();
const res = await getGlobalVariables();
setGlobalVariables(res);
checkHasStore();

@ -0,0 +1,32 @@
import axios from "axios";
import { BASE_URL_API } from "../../constants/constants";
/**
* Fetches the configuration data from the API.
* @returns {Promise<any>} A promise that resolves to the configuration data.
* @throws {Error} If there was an error fetching the configuration data.
*/
export async function fetchConfig() {
try {
const response = await axios.get(`${BASE_URL_API}config`);
return response.data;
} catch (error) {
console.error("Failed to fetch configuration:", error);
throw error;
}
}
/**
* Sets up default configurations for Axios.
* Fetches the timeout configuration and sets it as the default timeout for Axios requests.
*/
export async function setupAxiosDefaults() {
const config = await fetchConfig();
  // Apply the fetched timeout (in seconds) as the Axios default, falling back to 30s
const timeoutInMilliseconds = config.frontend_timeout
? config.frontend_timeout * 1000
: 30000;
axios.defaults.baseURL = "";
axios.defaults.timeout = timeoutInMilliseconds;
}
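The only arithmetic in `setupAxiosDefaults` is the timeout fallback: `frontend_timeout` seconds converted to milliseconds, with 0 or unset meaning "use 30 s". The same rule, sketched in Python to match the document's other examples:

```python
def timeout_in_ms(frontend_timeout) -> int:
    """Convert a seconds setting to ms, treating 0/None as 'use 30 s'."""
    return frontend_timeout * 1000 if frontend_timeout else 30000

print(timeout_in_ms(0), timeout_in_ms(5))  # 30000 5000
```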

@ -12,47 +12,47 @@ export default function getPythonApiCode(
): string {
const tweaksObject = tweaksBuildedObject[0];
return `import requests
from typing import Optional
from typing import Optional
BASE_API_URL = "${window.location.protocol}//${window.location.host}/api/v1/run"
FLOW_ID = "${flowId}"
# You can tweak the flow by adding a tweaks dictionary
# e.g {"OpenAI-XXXXX": {"model_name": "gpt-4"}}
TWEAKS = ${JSON.stringify(tweaksObject, null, 2)}
BASE_API_URL = "${window.location.protocol}//${window.location.host}/api/v1/run"
FLOW_ID = "${flowId}"
# You can tweak the flow by adding a tweaks dictionary
# e.g {"OpenAI-XXXXX": {"model_name": "gpt-4"}}
TWEAKS = ${JSON.stringify(tweaksObject, null, 2)}
def run_flow(message: str,
flow_id: str,
output_type: str = "chat",
input_type: str = "chat",
tweaks: Optional[dict] = None,
api_key: Optional[str] = None) -> dict:
"""
Run a flow with a given message and optional tweaks.
def run_flow(message: str,
flow_id: str,
output_type: str = "chat",
input_type: str = "chat",
tweaks: Optional[dict] = None,
api_key: Optional[str] = None) -> dict:
"""
Run a flow with a given message and optional tweaks.
:param message: The message to send to the flow
:param flow_id: The ID of the flow to run
:param tweaks: Optional tweaks to customize the flow
:return: The JSON response from the flow
"""
api_url = f"{BASE_API_URL}/{flow_id}"
:param message: The message to send to the flow
:param flow_id: The ID of the flow to run
:param tweaks: Optional tweaks to customize the flow
:return: The JSON response from the flow
"""
api_url = f"{BASE_API_URL}/{flow_id}"
payload = {
"input_value": message,
"output_type": output_type,
"input_type": input_type,
}
headers = None
if tweaks:
payload["tweaks"] = tweaks
if api_key:
headers = {"x-api-key": api_key}
response = requests.post(api_url, json=payload, headers=headers)
return response.json()
payload = {
"input_value": message,
"output_type": output_type,
"input_type": input_type,
}
headers = None
if tweaks:
payload["tweaks"] = tweaks
if api_key:
headers = {"x-api-key": api_key}
response = requests.post(api_url, json=payload, headers=headers)
return response.json()
# Setup any tweaks you want to apply to the flow
message = "message"
${!isAuth ? `api_key = "<your api key>"` : ""}
print(run_flow(message=message, flow_id=FLOW_ID, tweaks=TWEAKS${
# Setup any tweaks you want to apply to the flow
message = "message"
${!isAuth ? `api_key = "<your api key>"` : ""}
print(run_flow(message=message, flow_id=FLOW_ID, tweaks=TWEAKS${
!isAuth ? `, api_key=api_key` : ""
}))`;
}

@ -11,10 +11,10 @@ export default function getPythonCode(
const tweaksObject = tweaksBuildedObject[0];
return `from langflow.load import run_flow_from_json
TWEAKS = ${JSON.stringify(tweaksObject, null, 2)}
TWEAKS = ${JSON.stringify(tweaksObject, null, 2)}
result = run_flow_from_json(flow="${flowName}.json",
input_value="message",
fallback_to_env_vars=True, # False by default
tweaks=TWEAKS)`;
result = run_flow_from_json(flow="${flowName}.json",
input_value="message",
fallback_to_env_vars=True, # False by default
tweaks=TWEAKS)`;
}

@ -271,7 +271,7 @@ export default function NodeToolbarComponent({
selected &&
(hasApiKey || hasStore) &&
(event.ctrlKey || event.metaKey) &&
event.key === "u"
event.key.toUpperCase() === "U"
) {
event.preventDefault();
handleSelectChange("update");
@ -280,7 +280,7 @@ export default function NodeToolbarComponent({
selected &&
isGroup &&
(event.ctrlKey || event.metaKey) &&
event.key === "g"
event.key.toUpperCase() === "G"
) {
event.preventDefault();
handleSelectChange("ungroup");
@ -290,7 +290,7 @@ export default function NodeToolbarComponent({
(hasApiKey || hasStore) &&
(event.ctrlKey || event.metaKey) &&
event.shiftKey &&
event.key === "S"
event.key.toUpperCase() === "S"
) {
event.preventDefault();
setShowconfirmShare((state) => !state);
@ -300,7 +300,7 @@ export default function NodeToolbarComponent({
selected &&
(event.ctrlKey || event.metaKey) &&
event.shiftKey &&
event.key === "Q"
event.key.toUpperCase() === "Q"
) {
event.preventDefault();
if (isMinimal) {
@ -317,7 +317,7 @@ export default function NodeToolbarComponent({
selected &&
(event.ctrlKey || event.metaKey) &&
event.shiftKey &&
event.key === "U"
event.key.toUpperCase() === "U"
) {
event.preventDefault();
if (hasCode) return setOpenModal((state) => !state);
@ -327,12 +327,16 @@ export default function NodeToolbarComponent({
selected &&
(event.ctrlKey || event.metaKey) &&
event.shiftKey &&
event.key === "A"
event.key.toUpperCase() === "A"
) {
event.preventDefault();
setShowModalAdvanced((state) => !state);
}
if (selected && (event.ctrlKey || event.metaKey) && event.key === "s") {
if (
selected &&
(event.ctrlKey || event.metaKey) &&
event.key.toUpperCase() === "S"
) {
if (isSaved) {
event.preventDefault();
return setShowOverrideModal((state) => !state);
@ -347,7 +351,7 @@ export default function NodeToolbarComponent({
selected &&
(event.ctrlKey || event.metaKey) &&
event.shiftKey &&
event.key === "D"
event.key.toUpperCase() === "D"
) {
event.preventDefault();
if (data.node?.documentation) {
@ -357,7 +361,11 @@ export default function NodeToolbarComponent({
title: `${data.id} docs is not available at the moment.`,
});
}
if (selected && (event.ctrlKey || event.metaKey) && event.key === "j") {
if (
selected &&
(event.ctrlKey || event.metaKey) &&
event.key.toUpperCase() === "J"
) {
event.preventDefault();
downloadNode(flowComponent!);
}